Data center fire protection elements include smoke detectors, which are usually installed to provide early warning of a developing fire. Early warning allows technicians to investigate and, if necessary, use fire extinguishers before the fire grows. Another component is a fire sprinkler system, which is often provided to control a full-scale fire if one develops. In addition, gaseous fire suppression systems are sometimes installed to extinguish or suppress a fire earlier than a sprinkler system would. Passive fire protection elements include fire walls built around the data center, so that a fire can be confined to one segment of the facility for a period of time if the active systems fail. For critical facilities, these fire walls alone are often inadequate to protect electronic equipment, cables, coolant lines, and air ducts.
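The layered escalation described above (detect early, investigate, suppress, then contain) can be illustrated with a small sketch. This is a toy model only, not any vendor's actual fire-panel logic, and all names and thresholds are invented for illustration.

```python
# Toy model of layered data center fire protection logic (illustrative only).

def respond(smoke_detected: bool, heat_rising: bool, fire_confirmed: bool) -> list[str]:
    """Return the escalating actions a layered protection design implies."""
    actions = []
    if smoke_detected:
        # Smoke detectors give early warning so technicians can investigate
        # and use handheld extinguishers before the fire grows.
        actions.append("alert technicians to investigate")
    if smoke_detected and heat_rising:
        # Gaseous suppression is designed to act earlier than sprinklers.
        actions.append("release gaseous suppression in the affected zone")
    if fire_confirmed:
        # Sprinklers control a full-scale fire; fire walls passively contain it.
        actions.append("activate sprinkler zone and rely on fire-wall containment")
    return actions

if __name__ == "__main__":
    print(respond(smoke_detected=True, heat_rising=False, fire_confirmed=False))
```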
Source: https://cyrusone.com/resources/tools/data-center-fire-protection/
International Business Machines (IBM.N) announced on Monday that it has developed a new quantum computing chip that, according to the company’s management, will allow quantum systems to begin outperforming classical computers at some tasks within the next two years. The “Eagle” computing device, according to IBM, features 127 “qubits,” which represent information in quantum form. Unlike traditional computers, which use “bits” that must be either a 1 or a 0, qubits can be both a 1 and a 0 at the same time. This could eventually make quantum computers far faster than their classical equivalents, but qubits are extremely difficult to create and require massive cryogenic refrigerators to function properly. While Apple Inc.’s (AAPL.O) latest M1 Max chip has 57 billion transistors (roughly equivalent to bits), IBM claims that its new Eagle chip has more than 100 qubits. IBM says that new techniques learned while manufacturing the device, which is built at its facilities in New York state, combined with other advances in the quantum computer’s cooling and control systems, will eventually yield additional qubits. At that point, the company says it will be close to what is called “quantum advantage,” the point at which quantum computers can beat classical computers. Darío Gil, a senior vice president at IBM and head of its research division, said that does not mean quantum computers will overtake traditional ones all at once. Instead, what IBM envisions is a world where some parts of a computing application run on traditional chips and some parts run on quantum chips, depending on what works best for each task. “We believe that we will be able to reach a demonstration of quantum advantage – something that can have practical value – within the next couple of years. That is our quest,” Gil said.
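The "both 1 and 0 at the same time" idea can be made concrete with a few lines of linear algebra. The sketch below (assuming NumPy) represents a single qubit as a pair of amplitudes and shows why simulating a 127-qubit device like Eagle classically is infeasible; it is an illustration, not IBM's hardware model.

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a normalized pair of complex amplitudes.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Equal superposition: "both 0 and 1 at the same time" until measured.
plus = (zero + one) / np.sqrt(2)
probabilities = np.abs(plus) ** 2            # -> [0.5, 0.5]

# Measuring collapses the state to 0 or 1 with those probabilities.
outcome = np.random.choice([0, 1], p=probabilities.real)

# Simulating n qubits classically needs 2**n amplitudes, which is why a
# 127-qubit chip is hard to emulate on conventional hardware.
print("measured:", outcome)
print("amplitudes needed for 127 qubits:", format(2 ** 127, ".3e"))
```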
Source: https://enterpriseviewpoint.com/ibm-says-quantum-chip-could-beat-standard-chips-in-two-years/
What Is an API Key?
An application programming interface (API) key is a code used to identify and authenticate an application or user. API keys are available through platforms, such as a white-labeled internal marketplace. They also act as a unique identifier and provide a secret token for authentication purposes.
APIs are interfaces that help build software and define how pieces of software interact with each other. They control requests made between programs, how those requests are made, and the data formats used. They are commonly used in Internet-of-Things (IoT) applications and on websites to gather and process data or enable users to input information. For example, users can get a Google API key or YouTube API keys, which are accessible through an API key generator.
An API key is passed by an application, which then calls the API to identify the user, developer, or program attempting to access a website. It can help break down development silos and will typically be accompanied by a set of access rights that belong to the API the key is associated with.

Why Use API Keys
API keys are commonly used to control the utilization of the API's interface and track how it is being used, often as a precaution against abuse or malicious use. Common reasons to use API keys include the following.

API Keys for Project Authorization
API keys are used in projects, providing authentication to identify users and the project itself. An API key can identify a specific project or the application making the call to the API. While API keys are not as secure as the tokens that provide authentication, they help identify the project or application that makes the call. This means they can also be used to attribute usage information to a specific project and to reject unauthorized access requests. API keys are commonly used to check that the application making the call to the API has access to do so. Authorization will also check that the API being used in the project is enabled.

API Keys for Authentication of Users
Authentication schemes are used to identify the caller requesting API access. Endpoints or devices can check the authentication token to confirm the user has permission to make the call, while the API server can use authentication token information to decide whether to authorize a request. API keys can support two related purposes: authentication, which verifies that the person making the call is who they claim to be by checking the identity of the user, and authorization, which checks whether the user making the call has permission to make the kind of request they have issued.

Security of API Keys
API security is increasingly important, especially given the rapid rise in IoT usage. APIs transmit sensitive user data between the applications and systems they access and interact with. Therefore, an insecure API can be a high-value and easy target for attackers seeking to obtain critical data and gain unauthorized access to computers and networks. APIs are often the subject of broken access control, distributed denial-of-service (DDoS), injection, and man-in-the-middle (MITM) attacks, which means they need to be extremely secure. A common method for securing an API is representational state transfer (REST), which controls the data that an API can access as it operates through a Hypertext Transfer Protocol (HTTP) Uniform Resource Identifier (URI). This helps prevent attackers from introducing malicious data to an API.

When To Use API Keys
There are several common uses for API keys.

Block Anonymous Traffic
Anonymous traffic can be an indicator of potentially malicious activity or traffic. API keys can identify application traffic, which can be used to debug issues or analyze application usage.

Control the Number of Calls Made to Your API
Controlling the number of calls made to an API helps to govern API consumption, limit traffic and usage, and ensure only legitimate traffic accesses the API.

Identify Usage Patterns in Your API's Traffic
Identifying usage patterns is crucial to spotting malicious activity or issues within the API. Activity on the API server can be logged as a series of events, which can be filtered by the specific API key.

What API Keys Cannot Be Used For
API keys cannot be used for the following purposes.

Secure Authorization
API keys are not as secure as authentication tokens, so they should not be relied on for secure authorization. Instead, they identify the application or project that calls an API.

Identifying the Creators of a Project
API keys are generated by the project making a call but cannot be used to identify who created the project.

Identifying Individual Users
API keys identify projects, not the individual users that access a project.

How Fortinet Can Help
Fortinet provides API security that protects organizations against data breaches and security attacks that target APIs. It provides Security Assertion Markup Language (SAML) capabilities that enable organizations to authenticate and authorize users across all their systems and APIs, which is crucial to mitigating cyberattacks. Fortinet also provides network access control (NAC), a zero-trust network solution that enhances visibility into IoT devices accessing corporate networks.
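As a concrete illustration of the "unique identifier plus secret token" idea above, here is a minimal sketch of issuing an API key: the plaintext key is shown to the caller once, while the server stores only a hash together with the project and its access rights. The store, project names, and rights strings are hypothetical, not any particular platform's scheme.

```python
import hashlib
import secrets

KEY_STORE: dict[str, dict] = {}   # hashed key -> metadata (project, access rights)

def issue_key(project_id: str, rights: list[str]) -> str:
    plaintext = secrets.token_urlsafe(32)            # unique, unguessable token
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    KEY_STORE[digest] = {"project": project_id, "rights": rights}
    return plaintext                                 # shown to the caller once

def lookup(plaintext: str) -> dict | None:
    """Identify the project and rights associated with a presented key."""
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    return KEY_STORE.get(digest)

key = issue_key("weather-dashboard", rights=["read:forecasts"])
print(lookup(key))
```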
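The three uses listed above (blocking anonymous traffic, capping call volume, and tracking usage patterns) can be sketched in a few lines. This is an illustrative toy handler, not Fortinet's or any gateway's actual implementation; the key store, limit, and endpoint names are made up.

```python
from collections import defaultdict

VALID_KEYS = {"k-123": "weather-dashboard"}          # key -> project (example data)
CALL_LIMIT = 100                                     # hypothetical per-key quota
call_counts: dict[str, int] = defaultdict(int)
usage_log: list[tuple[str, str]] = []

def handle_request(api_key: str | None, endpoint: str) -> int:
    if api_key not in VALID_KEYS:
        return 401                                   # block anonymous/unknown traffic
    if call_counts[api_key] >= CALL_LIMIT:
        return 429                                   # too many calls for this key
    call_counts[api_key] += 1
    usage_log.append((VALID_KEYS[api_key], endpoint))  # usage pattern per project
    return 200

print(handle_request("k-123", "/v1/forecast"))       # 200
print(handle_request(None, "/v1/forecast"))          # 401
```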
Source: https://www.fortinet.com/de/resources/cyberglossary/api-key
Certificates, signed emails, symmetric and asymmetric encryption, S/MIME, TLS and PGP – for many who do not regularly deal with email encryption, these terms are quite foreign. However, with the EU's General Data Protection Regulation (GDPR, known in Germany as the DSGVO), these terms have been pushed to the top of the to-do lists of many SMBs. Yet many companies lack the necessary knowledge to implement the new requirements regarding the encryption of their email communication. In this article, Hornetsecurity aims to explain some of the basic terms and technologies around email encryption.

Symmetric email encryption uses the same key to encrypt and decrypt the email. This means that the sender and recipient of an email share the same key. The procedure is very simple, but its security is essentially tied to the secrecy of the key – if the key falls into the hands of a third party, that person can decrypt the entire communication.

Asymmetric email encryption uses a total of four keys: one key pair per communication partner, consisting of a public and a private key. The public key is accessible to everyone who wants to communicate and is transferred during the certificate exchange. It is used to encrypt the data, in our case emails. To decrypt the encrypted data again, the private key belonging to that public key is required. Although the two keys in a pair are mathematically related, it is practically impossible to derive the private key from the public key.

PGP and S/MIME are asymmetric encryption methods. Both procedures have a decisive advantage and disadvantage. The advantage is that the email providers of the sender and recipient have no insight into the email. The disadvantage is that only the message body is encrypted; the sender, recipient and subject can still be read. The main difference between email encryption with S/MIME and PGP is the issue of certificates. While PGP (also known as OpenPGP) is an open-source solution in which everyone can create their own certificates, certification for S/MIME takes place via official certification authorities, the so-called Certificate Authorities (CAs).

TLS differs fundamentally from email encryption with S/MIME or PGP. Here it is not the email itself that is encrypted, but only the connection between the two communicating servers. This means that the email cannot be accessed during transport, but it is not encrypted on the respective mail servers.

With on-premises solutions, the emails are encrypted directly on site, i.e. at the companies themselves. The email encryption software can be purchased, rented or operated completely independently of an external provider. Although this approach offers the company a high degree of transparency and freedom of decision, it involves an administrative effort that should not be underestimated, and the costs for maintenance and operation are also significant. Today, on-premises solutions are increasingly considered a thing of the past and are being replaced by modern cloud-based computing.

With the cloud-based alternative, also known as a "Software as a Service" (SaaS) solution, the security provider relieves the company of expenses such as administrative and operational costs. All of the company's email traffic is then handled by the security provider's servers; Hornetsecurity's email encryption service works this way. The route between the customer's mail server and the service provider is protected by TLS. This solution is characterized by the elimination of administrative work for the company.
However, to fully ensure secure email communication, TLS and S/MIME can and should be used simultaneously. This is the only way to encrypt the email itself and its transport route.
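To make the symmetric/asymmetric distinction above tangible, here is a short sketch using the third-party Python cryptography package (an assumption; it is not mentioned in the article and must be installed separately). The symmetric part shares one secret; the asymmetric part encrypts with a public key and decrypts with the matching private key, mirroring how S/MIME and PGP work conceptually.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric: sender and recipient share the same secret key.
shared_key = Fernet.generate_key()
token = Fernet(shared_key).encrypt(b"Quarterly numbers attached.")
print(Fernet(shared_key).decrypt(token))

# Asymmetric: anyone can encrypt with the public key; only the private
# key holder can decrypt. Deriving the private key from the public key
# is computationally infeasible.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"Only the private key holder reads this.", oaep)
print(private_key.decrypt(ciphertext, oaep))
```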
Source: https://www.hornetsecurity.com/en/services-en/email-encryption-guide/
Quantum Computer Based on Shuttling Ions Is Built by Honeywell (Honeywell) A quantum charged coupled device – a type of trapped-ion quantum computer first proposed 20 years ago – has finally been fully realized by researchers at Honeywell in the US. Other researchers in the field believe the design, which offers notable advantages over other quantum computing platforms, could potentially enable quantum computers to scale to huge numbers of quantum bits (qubits) and fully realize their potential. Trapped-ion qubits were used to implement the first quantum logic gates in 1995, and the proposal for a quantum charged coupled device (QCCD) – a type of quantum computer with actions controlled by shuffling the ions around – was first made in 2002 by researchers led by David Wineland of the US National Institute of Standards and Technology, who went on to win the 2012 Nobel Prize for Physics for his work. Some large companies have recently shown interest in the trapped ion platform, among them the multinational technology conglomerate Honeywell, which formed Honeywell Quantum Systems in 2020 to focus solely on the technology. The firm’s latest result is the first demonstration of a fully functional QCCD. The device uses ytterbium-171 ions as qubits, which are chilled to their quantum ground states by barium-138 ions using a process called sympathetic cooling. The researchers demonstrate a sufficient set of gates to perform universal quantum logic. In addition, they created a teleported CNOT gate, which allows for non-destructive mid-circuit measurement – a crucial component for quantum error correction. Chris Monroe of University of Maryland, College Park, a co-author of Wineland’s on the original 2002 paper, who now runs the spin-off company IonQ, agrees: “In this field, every single little piece has been demonstrated separately. One of the important features of this work is that it integrated lots of them in one system. I love the QCCD idea: I actually coined that phrase myself.”
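The teleported CNOT gate mentioned above is, mathematically, just a two-qubit unitary. The sketch below (assuming NumPy) shows the linear algebra a CNOT implements, applied to a control qubit in superposition to produce an entangled pair; real trapped-ion gates are physical laser operations on shuttled ions, so this is only the abstract gate, not Honeywell's hardware.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                  # flips target iff control is |1>

ket00 = np.zeros(4)
ket00[0] = 1                                     # two qubits, both in |0>

state = CNOT @ np.kron(H, np.eye(2)) @ ket00     # H on control, then CNOT
print(np.round(state, 3))                        # ~0.707|00> + 0.707|11>: entangled
```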
Source: https://www.insidequantumtechnology.com/news-archive/quantum-computer-based-on-shuttling-ions-is-built-by-honeywell/
According to Gartner’s 2019 CIO Survey, 48% of healthcare provider CIOs selected data and analytics and AI/machine learning as the top game-changing technologies. Big data and analytics are revolutionizing many industries, and healthcare is one of the most prominent areas where big data is transforming processes. New sources of patient data can generate deeper insights to help healthcare providers improve the quality of care and streamline their operations.

Benefits of Big Data in Healthcare
One of the biggest challenges of big data in healthcare is breaking down data silos to combine medical data from multiple sources and get a comprehensive view of your business from one source instead of various disparate sources. A truly data-driven healthcare analytics solution should connect all your data sources and be able to analyze structured, unstructured, and real-time data. Without the ability to incorporate all your patient data, including diagnostic information, doctors' observations, and real-time data from medical equipment, your insights are not as effective as they could be. However, once you are able to consolidate your data sources, big data offers many benefits in healthcare:

Improve outcomes – Big data analytics can improve the quality of patient care and safety levels by providing access to real-time patient information, reducing the possibility of human error and wrong diagnosis.

Increase operational efficiencies – Combining your data from multiple sources improves visibility into resource utilization, inventory levels, and procurement, enabling you to improve cost efficiencies, minimize waste, reduce risks, and streamline operations.

Streamline finance and accounting – Big data analytics in healthcare generates insights that help you quickly identify and rectify financial inaccuracies, helping you align your KPIs to meet your financial goals.

Top Use Cases for Big Data in Healthcare

1. Analyze Electronic Health Records (EHRs)
Electronic health records are one of the most common use cases for big data in healthcare. EHRs track and record your patients' health data, such as pre-existing conditions and allergies, reducing the need for unnecessary tests and the associated costs. Sharing patient data between healthcare providers as they treat patients can reduce duplicate tests and improve patient care. Medical data is usually siloed for security reasons, but big data and analytics for EHR data can improve the quality of care while reducing costs.

2. Deploy Evidence-Based Medicine
When a patient is admitted to a hospital, doctors usually run a battery of tests to identify the symptoms and the underlying disease. Evidence-based medicine enables healthcare providers to gather evidence of a patient's health and compare the symptoms against a much larger patient database, enabling faster, more accurate, and more effective diagnosis and treatment. Big data helps consolidate and analyze information from this large patient database generated from multiple, disparate sources.

3. Reduce Hospital Readmissions
Hospital costs increase when patients are readmitted within a month of release. Using big data, healthcare providers can identify at-risk patients based on patient trends, medical history, diagnostic information, and real-time data from medical equipment. Hospitals can then target these patients with follow-up care to lower readmission rates, allowing patients to focus on their treatment rather than additional healthcare expenses.

4. Detect and Prevent Fraud
In the US alone, the National Healthcare Anti-Fraud Association estimates the loss to healthcare fraud to be about $80 billion annually, accounting for around 3 to 10 percent of total annual healthcare spending. Fraud in healthcare can range from genuine billing errors to false claims that result in wrongful payments. Hospitals have to store and navigate through massive amounts of claims, billings, and other information. Due to the volume, velocity, and variety of this data, claim verification and processing can take weeks or months. Detecting fraud and collecting evidence for legal action also takes a long time and can result in huge financial losses for the organization. Big data analytics can help detect anomalies much faster and notify you instantly, significantly reducing the potential for healthcare fraud.

5. Provide Real-Time Information
Physicians need access to real-time information about their patients to improve patient care – including their visits to an emergency room, length of hospital stay, new diagnoses, progress in treatment, and so on. These real-time insights are derived from data collected using technologies like IoT sensors, which can optimize the hospital's clinical, business, and administrative workflows. Using big data and advanced analytics, you can analyze real-time information to enable proactive patient care and ensure data-driven decision-making, improving the quality of care and lowering costs.

6. Optimize Supply Chain Processes
Hospitals spend almost one-third of their overall operating expenses on managing their supply chains. Big data plays a major role across the healthcare supply chain, from placing the order to order fulfillment and invoicing. Real-time visibility into supply chain operations can help hospitals avoid supply round-tripping and wastage, which are both expensive and affect care delivery. You can also combine supply chain data with procedural data to improve your forecasting capabilities, ensuring that products are available at the right time, at the right place, and at the right cost. Analyzing supply chain data can also help automate routine procurement tasks to free up staff to focus on strategic initiatives.
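As a toy illustration of the fraud-detection use case above, the sketch below flags claims whose billed amount is far above the typical amount for the same procedure code. The data, procedure names, and threshold are invented, and real systems combine many more signals than a single amount check.

```python
from statistics import median

claims = [
    {"id": 1, "procedure": "MRI", "amount": 1200},
    {"id": 2, "procedure": "MRI", "amount": 1150},
    {"id": 3, "procedure": "MRI", "amount": 1300},
    {"id": 4, "procedure": "MRI", "amount": 1250},
    {"id": 5, "procedure": "MRI", "amount": 9800},   # suspicious outlier
]

def flag_suspicious(claims: list[dict], factor: float = 3.0) -> list[dict]:
    """Flag claims billed at more than `factor` times the typical amount."""
    typical = median(c["amount"] for c in claims)
    return [c for c in claims if c["amount"] > factor * typical]

print(flag_suspicious(claims))   # -> claim 5, routed for human review
```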
Source: https://www.itconvergence.com/blog/6-use-cases-for-big-data-and-analytics-in-healthcare/
You already know that SCADA (Supervisory Control and Data Acquisition) can save you a lot of money and increase profitability, but your SCADA implementation can be a sinkhole of cost overruns, delays and limited capabilities. This guide shows how you can use SCADA effectively and profitably - including concrete applications and examples. I'll give you an example of the world's simplest SCADA system. Imagine a fabrication machine in a factory that produces widgets. Every time the machine finishes a widget, it activates a switch that turns on a light. That light tells a human-machine operator that a widget has been made. Obviously a real SCADA system does a lot more than that but the idea here is the same. A full-scale SCADA system just monitors more stuff over much greater distances. Now let's talk about the two elements involved in these supervisory systems. All you need are the system or machine you want to monitor and control, and a data collection system made up of sensors and control outputs used to monitor and control the first system. A SCADA system performs four functions: Networked data communication These functions are performed by four kinds of components: Sensors (either digital or analog) and control relays Remote telemetry units (RTUs) or programmable logic controllers (PLC SCADA) Master unit (HMI SCADA) The communications network that connects the Master unit to the RTUs in the field Collection of articles discussing various elements of SCADA. A SCADA network consists of one of more Master Terminal Units (MTUs), which are utilized by operators to monitor and control a large number of Remote Terminal Units (RTUs). The MTU is often a computing platform, like a PC, which runs SCADA software. The RTUs are generally small dedicated SCADA devices that are hardened for outdoor use and industrial environments. A SCADA system usually includes signal hardware (input and output), controllers, networks, user interface (HMI), communications equipment and software. Altogether, the term SCADA refers to the entire central system. The central system usually monitors and process data from various sensors that are either in close proximity or off-site (sometimes miles away). SCADA systems are being implemented with greater regularity in today's ultra-competitive manufacturing environments. While SCADA systems are used to perform real-time data collection and control at the supervisory level, HMI's are typically seen as local user interfaces that allow operators to manipulate the machine or process locally, and perform SCADA programming work to customize the system. Did you think carefully when you picked your master or did you just pick the first one you found? Maybe you "inherited" a manager when you changed jobs. Whatever the reason, you're probably not positioned as well as you should be when it comes to a master station. Think for a second about how much easier it would be with your master working constantly to help you. DNP3 communications take place in the Distributed Network Protocol format. The DNP3 protocol is a protocol that is widely used in the water and electric utility industries. DNP3 communications are a key part of how process automation systems and devices on networks in these industries work together. DNP3 communications are commonly used within SCADA systems. The various components of these systems communicate using the DNP3 protocol. These devices include the master (or HMI), the RTU's (Remote Terminal Units), and IED's (Intelligent Electronic Devices). 
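The component roles just described (sensors feeding a remote unit that a master polls) can be modelled in a few lines. This is an illustrative data model only, not a real RTU protocol; the site and point names are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:
    name: str
    analog: bool
    value: float = 0.0          # e.g. tank level (analog) or a 0/1 switch state

@dataclass
class RTU:
    site: str
    sensors: list[Sensor] = field(default_factory=list)

    def poll(self) -> dict:
        """What the master station (HMI) receives when it polls this remote site."""
        return {self.site: {s.name: s.value for s in self.sensors}}

widget_switch = Sensor("widget_done_switch", analog=False, value=1)
tank_level = Sensor("tank_level_ft", analog=True, value=412.0)
rtu = RTU("factory-east", [widget_switch, tank_level])

print(rtu.poll())   # the master displays this to the human operator
```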
Maintaining high-quality water at De Anza Moon Valley is not a job LaVay takes lightly. "We're on a well, a 600-foot deep well -and I'm in charge of all that," said LaVay, the facility manager. He spends a lot of his time physically monitoring his water-treatment equipment, which primarily consists of analog sensor data. "We do have an existing alarm system, but frankly, it doesn't hold a candle to DPS equipment. I mean, it's not even close..." Your SCADA system will most likely monitor thousands of individual sensors throughout your network. If two different sensors sent major alarms through the monitoring system, your master would be alerted. If, for example, one of these alarms indicated a power failure at a site, and another alarm indicated a battery failure at the same location, the RTU would receive these inputs, translate them, and forward them to the master. The monitoring system master would then react to this critical alarm combination with a user-specified control relay, such as activating the back-up generator at the related site, preventing a network failure. Effective monitoring saves significant expenditures on repairs and lost revenue due to network downtime. These benefits make it crucial for operators to employ a system to thoroughly monitor their network. A Modbus Human Machine Interface (HMI) is the interface of a Modbus system that allows the operator to interact with the system equipment. This interface is a type of software that presents the Modbus messages in a human-readable form. It is a critical part of the Modbus master. A Modbus HMI is necessary for an operator to be able to interpret alarm polls and status reports from their Modbus system. As Modbus communications take the form of packets of word bits, it would be extremely difficult and time-consuming for an operator to manually interpret even a single Modbus message. With thousands of alarms and response messages coming in every day, it would be impossible for an operator to monitor their network without a Modbus HMI. Typically, a Modbus HMI is a type of browser screen. In this screen, network operators can view their Modbus alarms and other messages in their English form. The Modbus HMI simply uses the codes programmed into the system to retrieve the information from the packets of bits. SCADA security is a topic of increasing concern for network operators. As these systems can be utilized to transmit sensitive data, it is important to develop the system in a way that prevents this information from being released onto the Internet. SCADA control functions are what enable a system to automatically respond to certain situations with a programmed response. Sensors cannot generate or interpret protocol communications. RTU devices along the network interpret the information from these sensors and translate it into a language the master can understand. The master can utilize the information it receives from various inputs to enact control relays at the RTU level. This means that whenever a user-specified combination of alarms occur, the RTU will automatically respond with a control relay that has been programmed into the system, securing the network by responding to the Change-of-State (COS) event indicated by the alarm. Telecom SCADA integration is the process of integrating SCADA capabilities into a telecom system, such as a network alarm monitoring system. The integration process can only occur when you have a master that supports both telecom and SCADA protocols. 
To allow for integration within your network, you must find a master that can support SCADA in addition to telecom protocols. Advanced monitoring platforms support many different communication protocols through a single master, allowing for complete integration. Integrating alarms from these devices of varying protocols allows you to easily view all of your alarms on a single screen.
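Picking up the alarm-combination example described earlier (a power failure plus a battery failure at one site triggering the back-up generator relay), here is a toy sketch of that derived-control idea. The rule and point names are invented for illustration; a real RTU or master would express this through its own configuration, not Python.

```python
def evaluate_site(alarms: set[str]) -> list[str]:
    """Map a set of change-of-state alarms from one site to control relays."""
    controls = []
    if {"power_failure", "battery_failure"} <= alarms:
        controls.append("activate_backup_generator_relay")   # prevent a site outage
    if "high_temperature" in alarms:
        controls.append("start_exhaust_fan_relay")
    return controls

incoming = {"power_failure", "battery_failure"}   # events reported by the RTU
print(evaluate_site(incoming))
```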
Source: https://www.dpstele.com/scada/top-16-resources.php
A new simulation by Sandia for the U.S. Department of Energy’s National Nuclear Security Administration has found that trying to use too many cores for multicore supercomputing processing just leads to slower, not faster, computations. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin company. Using algorithms for deriving knowledge from large data sets, Sandia simulations found a significant increase in speed going from two to four multicores, but an insignificant increase from four to eight multicores. Exceeding eight multicores caused a decrease in speed. Sixteen multicores performed barely as well as two, and after that, a steep decline is registered as more cores are added, Sandia reported. The problem is the lack of memory bandwidth as well as contention between processors over the memory bus available to each processor. “The difficulty is contention among modules,” noted James Peery, director of Sandia’s Computations, Computers, Information and Mathematics Center. “The cores are all asking for memory through the same pipe. It’s like having one, two, four, or eight people all talking to you at the same time, saying, ‘I want this information.’ Then they have to wait until the answer to their request comes back. This causes delays,” he explained. ‘Not Rocket Science’ “Processors pull data in from external memory, manipulate the data, and write it out. That’s what they do. As CPUs get faster, they need more memory bandwidth, and if they don’t get it, that lack of bandwidth constrains performance. This is not rocket science,” Nathan Brookwood, principal analyst for Insight 64, told TechNewsWorld. “If your state department of transportation issued a press release warning ‘average freeway speed will decrease as traffic density increases,’ would anybody pay attention?” he said, noting that this is essentially what Sandia has done. “It’s easier to add cores to CPUs than to add memory bandwidth to systems. On-chip caches mitigate this a bit; programs that spend a lot of time iterating on small data sets that fit in on-chip caches can avoid these problems, but programs that use big data sets that don’t fit into on-chip caches run smack into them, and scale poorly. Multicore approaches just make this worse, since they provide a straightforward approach for increasing demands on memory bandwidth, but little help on adding that bandwidth,” Brookwood explained. “Again, it’s not a new problem. System architects and chip designers have been wrestling with it for years, and there are no easy solutions. On the other hand, it’s not a show stopper, just another problem those software guys will have to tackle if they want to increase performance using multicore chips,” he added. ‘The Problem Is Often Ignored’ Sandia did acknowledge Brookwood’s point in the Sandia report. “To some extent, it is pointing out the obvious — many of our applications have been memory-bandwidth-limited even on a single core,” noted Arun Rodrigues, who was on the Sandia team that ran the simulations. “However, it is not an issue to which industry has a known solution, and the problem is often ignored,” Rodrigues added.
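The bandwidth-and-contention argument can be captured in a back-of-the-envelope model. The sketch below is not Sandia's simulator; the numbers are invented so that the curve qualitatively reproduces the reported trend (gains up to about four cores, a plateau, then decline as contention grows).

```python
def runtime(cores: int, compute_s: float = 64.0, memory_s: float = 12.0,
            contention: float = 0.1) -> float:
    """Toy model: compute parallelizes, the shared memory bus does not."""
    parallel_compute = compute_s / cores              # scales with core count
    shared_memory = memory_s                          # one pipe to memory
    bus_contention = contention * (cores - 1) * memory_s
    return max(parallel_compute, shared_memory) + bus_contention

base = runtime(1)
for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:>2} cores -> speedup {base / runtime(n):.2f}")
```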
Source: https://www.crmbuyer.com/story/scientists-find-too-many-cooks-er-cores-spoils-the-cpu-65855.html
With every passing day, it appears there's yet another headline-grabbing story about artificial intelligence. But whether it's beating the best human players at 'Go', picking up racist language from social media or simply destroying humanity, it's perhaps not easy to see how AI will have a direct impact on our lives in the future. So, with the merit of AI's real-world applications still being debated, when are we likely to see its benefits? The answer is sooner than you think, but perhaps not in the way that you would expect.

Following Wimbledon 2017, IBM Watson generated highlight reels of what it thought were the best shots of the tournament. It's hard to argue with the choices, and anyone who viewed these reels would be none the wiser that there was no human input. Normally this would take a large team of editors a lengthy period of time to go through hundreds of hours of match footage. By the time you factor in multiple camera angles from 18 courts to get the best possible edit, you realise how big an undertaking this actually is. However, Watson could do all of this automatically. Using cognitive algorithms to analyse audio and video from the footage, it was able to identify shots and points that were highlight worthy. For example, if the crowd roared, the system could identify that the last point was important in the context of the match. These highlights were then bundled together in any number of combinations – whether the best shots of the day, a particular court or an individual player.

A tennis tournament is an excellent test of AI's capabilities. At any given time there could be up to 18 matches in play – too much for any person to follow. For tennis fans, watching a tournament on TV has meant being at the whim of the directors and editors who decide what we should be watching. With more live footage available than ever before, the viewer still makes the choice of which tie to watch, and unless they are constantly changing channel, they may not witness the best of the action available. Meanwhile, the match of the century could be unfolding on an outside court and we have to make do with a replay later in the evening.

What if it were possible for an AI system to learn from what we liked? Our favourite players, the style of play that excited us, and whether there was a break point in the final set. This could then be delivered as a personalised broadcast, tailor-made to our preferences. No filler, just the bits you want to see.

The truth is that we're really not that far away from this. The BBC has been at the forefront of some of broadcasting's most important innovations. Right now, BBC R&D is working on something it calls object-based media. The BBC describes this as a programme's content being automatically edited to suit the needs of the individual viewer – how much time they have, what device they are viewing on or what they are most interested in. The 'objects' are audio and video content broken down into smaller chunks, which could be a single scene, a soundbite or even a still image. Algorithms are then used to piece these objects together into packages that are delivered to viewers based on their preferences. In an age when video on demand is challenging the dominance of scheduled broadcasting, this is the next stage in its evolution. Not just the content when you want it, but in the format that works best for you. Imagine condensing soap operas to focus in on a single storyline, or news reports giving greater levels of analysis on the stories that directly affect you.

Teaching an old dog new tricks
Traditional broadcasters are losing viewers to newcomers to the entertainment space like Netflix and Amazon. The seemingly limitless budgets of streamers consistently produce appealing, high-quality content, unhampered by the overheads of keeping a TV station running. Broadcasters would appear to be at a significant disadvantage, but perhaps hold an ace up their sleeve in the form of their archives. AI could be used to breathe new life into old footage – re-cutting TV shows to appeal to modern audiences, and analysing viewing preferences to bring classic productions from yesteryear to new viewers. There seems to be another must-watch show on Netflix every week, and it would be unreasonable to expect British broadcasters to match this level of output, especially considering how much is invested in news and other public interest broadcasting. Instead, the largely untapped resource of archive shows could help win back eye-time from viewers and also be monetised around the world.

It's true that we're only now discovering the potential for AI in the entertainment space. But one question that is sure to be debated for some time is the creativity conundrum. Yes, a computer can learn how to edit together footage, but can it ever match a human in terms of creative or artistic output? There are many examples of AI being used for artistic endeavours. However, for the time being at least, any reference points that an algorithm could perceive as 'artistic' or 'creative' would be based on human output. Therefore, any creative work that a computer system could produce is in some shape or form mimicking what has gone before it. AI is already being used to write stories, so it's not hard to imagine complete animated features being created without any human input. Whether these are of high quality or not remains to be seen, and there will of course be a need for a human to review them before they are released.

Human oversight is likely to be the dominant theme in relation to how AI is used in the media. For now at least, it's likely that smart systems will be used to cut out routine tasks like editing, with production teams putting them to work to free up their time for other tasks. Whichever way you look at it, in one shape or form, AI will be here to stay. It offers too many possibilities for content producers to ignore. As our viewing habits change, it's the only feasible way to deliver the optimised experience and level of choice that we will demand. The technology could even open the doors to a new breed of content creators: user-generated content could be given the polish of professional editing or post-production via AI systems, lowering the bar for entry to the media industry and giving viewers even more choice of things to watch. Now, if only there were a system that could help me choose what to watch.

Daniel Sacchelli, Events Director of BVE
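The crowd-roar cue from the Wimbledon example above can be illustrated with a very simple heuristic: scan per-second audio loudness and keep the moments that stand out from the baseline. This is only a stand-in for the multi-signal models a system like Watson actually uses, and the loudness values are invented.

```python
def pick_highlights(loudness: list[float], threshold_ratio: float = 1.8) -> list[int]:
    """Return the seconds whose loudness far exceeds the average level."""
    baseline = sum(loudness) / len(loudness)
    return [second for second, level in enumerate(loudness)
            if level > threshold_ratio * baseline]

# Hypothetical per-second loudness for a short stretch of one court's feed.
loudness = [0.2, 0.25, 0.22, 0.9, 0.3, 0.21, 0.24, 1.1, 0.26, 0.23]
print(pick_highlights(loudness))   # -> [3, 7]: candidate clip boundaries
```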
Source: https://www.itproportal.com/features/how-ai-will-change-the-broadcasting-and-entertainment-landscape/
One of the biggest possible nightmares in participating in online activities is being harassed by online stalkers. People who know what’s best for them should stay clear of these people. In the online world however, it is very difficult to determine if someone is actually doing the stalking since there is seldom any physical intimidation similar to the real world where the target would see the stalker’s presence everywhere he or she goes. Forms of Online Stalking Online or cyber stalking can take many forms but they all boil down to the great discomfort and anxiety of the target. Online stalkers can be someone known to the target or a complete stranger. If one thinks about it, a known perpetrator tend to be more dangerous because they already have access to certain personal information which a stranger may find difficult to obtain. Some of the more common forms of cyber stalking include monitoring of online activities of the target and gathering personal information from social media accounts of relatives and friends of the target. It can also be in the form of making false accusations , encouraging other people to join in the harassment activities, posing as the target specifically in online purchases and transactions, and sending a virus to the target’s computer. Whatever form the stalking takes, the end result is distress to the target. How to Avoid Online Stalkers Protection is always the first step in avoiding something potentially dangerous. In the online world, it makes sense to protect the personal information of users. Posting very personal information in social media platforms makes possible targets very vulnerable to online stalking attacks. It is best to keep private things private and share only those that can be safely seen by anyone. Online users must always be cautious in their dealings since the cyber-world has made it possible for stalkers to hide behind a made up name and profile image. If any form of online communication makes a person uncomfortable in some way, it would be best not to react but rather report the questionable behavior or communication to the people who can handle it better such as the online community moderator and the like. Know the Current Risks Online users who know the current risks will always be one step ahead of potential stalkers. Awareness allows people to create their own safety measures applicable to their situation. By observing the usual decorum in human communication even in the online world, users can help themselves stay clear of online stalkers.
Source: https://www.it-security-blog.com/tag/protection-from-online-stalkers/
March 25, 2019 Active Directory Backup Best Practices Active Directory is a widely-known service for centralized management and user authentication in Windows-based environments. Administrators can manage computers added to the domain centrally, which is convenient and time saving for large and distributed infrastructure. MS SQL and MS Exchange usually require Active Directory. If the Active Directory Domain Controller (AD DC) becomes unavailable, then related users cannot log in and systems cannot function properly, which can cause troubles in your environment. That’s why backing up your Active Directory is important. Today’s blog post explains Active Directory backup best practices including effective methods and tools. NAKIVO Backup & Replication is an all-in-one backup and site recovery solution that helps you protect your data and follow Active Directory backup best practices. With the help of Windows Server Backup, you can perform seamless backup of your Active Directory data, ensuring that your application data remains transactionally consistent under any circumstances. Active Directory Working Principle Active Directory is a management system that consists of a database where the individual objects and transaction logs are stored. The database is divided into several sections that contain different types of information – a schema partition (which determines the AD database design including object classes and their attributes), configuration partition (information about AD structure) and domain names context (users, groups, printer objects). The Active Directory database has a hierarchical tree-like structure. The Ntds.dit file is used to store the AD database. Active Directory uses LDAP and Kerberos protocols for its function over the network. LDAP (Lightweight Directory Access Protocol) is an open cross-platform protocol used for accessing directories (such as Active Directory) which also has access to directory services authentication by using user name and password. Kerberos is a secure authentication and single sign-on protocol that uses secret key cryptography. Usernames and passwords checked by Kerberos authentication server are stored in the LDAP directory (in case of using Active Directory). Active Directory is tightly integrated with the DNS Server, Windows protected system files, System Registry of a domain controller, as well as the Sysvol directory, COM+ Class Registration Database, and cluster service information. Such integration has direct influence on the Active Directory backup strategy. What Data Must Be Backed Up? According to the previous section, you need to make a copy of not only Ntds.dit, but all components integrated with Active Directory. The list of all components which are integral parts of the Domain Controller system is as follows: - Active Directory Domain Services - Domain Controller System Registry - Sysvol directory - COM+ class registration database - DNS zone information integrated with Active Directory - System files and boot files - Cluster service information - Certificate services database (if your Domain Controller is a certificate service server) - IIS meta folders (if Microsoft Internet Information Services are installed on your Domain Controller) General AD Backup Recommendations Let’s take a look at some general recommendations for Active Directory backup. At least one domain controller in a domain must be backed up It is obvious that if you have just one domain controller in your infrastructure, you should back up this DC. 
If you have more than one domain controller, you should back up at least one of them. You should back up the domain controller that has FSMO (Flexible Single Master Operation) roles installed. If you have lost all domain controllers, you can recover a primary domain controller (containing FSMO roles), and deploy a new secondary domain controller, replicating changes from the primary DC to the secondary DC. Include your Active Directory backup within your disaster recovery plan Compose your disaster recovery (DR) plan with multiple scenarios for recovering your infrastructure as you prepare for hypothetical disasters. The best practice is to create a thorough DR plan before disaster occurs. Pay close attention to the recovery sequence. Keep in mind that a domain controller must be recovered before you can recover other machines with services related to the Active Directory as they may become useless without the AD DC. Creating a workable disaster recovery plan that takes into account dependencies of different services running on different machines guarantees you a successful recovery. You can back up your domain controller to a local site, remote site, or cloud. Among the best practices of Active Directory backup is to have more than one copy of your domain controller according to the 3-2-1 backup rule. Back up Active Directory on a regular basis You should back up your Active Directory regularly with an interval that doesn’t exceed 60 days. AD services presume that the age of the Active Directory backup cannot be more than the lifetime of AD tombstone objects, which by default is 60 days. This is because the Active Directory uses the tombstone objects when objects need to be deleted. When an AD object is deleted (the majority of said object’s attributes are deleted), it is marked as the tombstone object and is not deleted physically until the tombstone lifetime period expires. If there are multiple domain controllers in your infrastructure and the Active Directory replication is enabled, the tombstone object is copied to each domain controller until the tombstone lifespan expires. If you restore one of your domain controllers from a backup whose age is more than the tombstone’s lifespan, you will encounter inconsistent information between Active Directory domain controllers. The recovered domain controller would have the information about objects that don’t exist anymore in this case. This can cause errors accordingly. If you installed any drivers or applications on your domain controller after making a backup, they will not be functional after recovering from said backup as the system state (including registry) will be recovered to a previous state. This is just one more reason to back up Active Directory more frequently than once per 60 days. We strongly recommend that you back up the Active Directory Domain Controller every night. Use software that ensures data consistency As with any other database, the Active Directory database must be backed up in a way that ensures database consistency is preserved. The consistency can best be preserved if you back up the AD DC data when the server is powered off or when Microsoft Volume Shadow Copy Service (VSS) is used on a running machine. Backing up the Active Directory server in a powered-off state may not be a good idea if the server is operating in 24/7 mode. Active Directory backup best practices recommend that you use VSS-compatible backup applications to back up a server running Active Directory. 
VSS writers create a snapshot which freezes the system state until the backup is complete, preventing active files used by Active Directory from being modified during the backup process.

Use backup solutions that provide granular recovery
When it comes to recovering an Active Directory, you may recover the entire server with Active Directory and all its objects. Running a full recovery may consume a significant amount of time, especially if your AD database is of considerable size. If some Active Directory objects accidentally get deleted, you may want to recover only those objects and nothing else. Active Directory backup best practices recommend that you use backup methods and applications that can perform granular recovery, i.e. recover only particular Active Directory objects from a backup. This allows you to limit the amount of time spent on recovery.

Native Active Directory Backup Methods
Microsoft has developed a series of native tools for backing up Windows Servers, including servers running Active Directory domain controllers.

Windows Server Backup
Windows Server Backup is a utility provided by Microsoft with Windows Server 2008 and later Windows Server versions, which replaced the NTBackup utility built into Windows Server 2003. To access it, you just need to enable Windows Server Backup in the Add Roles and Features menu. Windows Server Backup features a new GUI (graphical user interface) and lets you create incremental backups by using VSS. The backed-up data is saved into a VHD file – the same file format used for Microsoft Hyper-V. You can mount such VHD disks to a virtual machine or to a physical machine and access the backed-up data. Note that, unlike a VHD created by MVMC (Microsoft Virtual Machine Converter), this VHD image is not bootable. You can back up the entire volume or the system state only by using the wbadmin start systemstatebackup command. For example: wbadmin start systemstatebackup -backupTarget:E: You should select a backup target that differs from the volume you are backing up, and one that is not a remote shared folder. When it's time to recover, you should boot the domain controller into Directory Services Restore Mode (DSRM) by pressing F8 to open advanced boot options (just as you would when entering Safe Mode). Then you should use the wbadmin get versions -backupTarget:path_to_backup -machine:name_of_server command to select the appropriate backup, and begin restoring the needed data. You can also use NTDSutil to manage particular Active Directory objects from the command line during recovery. The advantages of using Windows Server Backup for Active Directory backup are affordability, VSS capability, and the ability to back up the whole system or Active Directory components only. Disadvantages include the need to possess the appropriate skills and knowledge to configure a backup and recovery process.

System Center Data Protection Manager
Microsoft recommends that you use the System Center Data Protection Manager (SC DPM) for backing up data, including Active Directory, in Windows-based infrastructure. SC DPM is a centralized enterprise-grade backup and recovery solution that is part of the System Center Suite and can be used to protect Windows Server, including services such as Active Directory. Unlike the free built-in Windows Server Backup, SC DPM is paid software that must be deployed separately as a complex solution. Installation may seem somewhat challenging when compared with Windows Server Backup. Indeed, a backup agent must be installed to ensure your machine is fully protected. The main characteristics of the System Center Data Protection Manager related to Active Directory backup are:
- VSS support
- Incremental backup
- Backup to Microsoft Azure cloud
- No granular object recovery for Active Directory
Using SC DPM is most practical when you need to protect a high number of Windows machines, including MS Exchange and MS SQL servers.

Backing Up the Virtual Domain Controller
The listed native Active Directory backup methods can be used for backing up Active Directory servers deployed on both physical servers and virtual machines. Running domain controllers on virtual machines offers a set of VM-specific advantages, such as host-level backup and the ability to be recovered as VMs running on different physical servers. Active Directory backup best practices recommend using host-level backup solutions when backing up Active Directory domain controllers running on virtual machines, i.e. at the hypervisor level. Active Directory is classified as one of the most business-critical applications, and its disruption can cause downtime for users and services. Today's blog post explained Active Directory backup best practices to help you protect your infrastructure against AD failure. Selecting the right backup solution is the important takeaway point in this case.
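The nightly-backup and 60-day tombstone-lifetime recommendations above can be automated with a small wrapper script. This is a hedged sketch, not NAKIVO's or Microsoft's tooling: it assumes a Windows Server domain controller with the Windows Server Backup feature installed, and the target volume and paths are examples.

```python
import subprocess
from datetime import datetime, timedelta
from pathlib import Path

TOMBSTONE_LIFETIME = timedelta(days=60)   # default AD tombstone lifetime
BACKUP_TARGET = "E:"                      # dedicated volume, not the system volume

def run_system_state_backup() -> None:
    """Kick off a system state backup with Windows Server Backup (wbadmin)."""
    subprocess.run(
        ["wbadmin", "start", "systemstatebackup",
         f"-backupTarget:{BACKUP_TARGET}", "-quiet"],
        check=True,
    )

def newest_backup_age(backup_dir: str) -> timedelta:
    """Age of the most recent backup image found under the target directory."""
    images = list(Path(backup_dir).rglob("*.vhd*"))   # WSB writes VHD/VHDX images
    if not images:
        return timedelta.max
    newest = max(f.stat().st_mtime for f in images)
    return datetime.now() - datetime.fromtimestamp(newest)

if __name__ == "__main__":
    run_system_state_backup()
    if newest_backup_age(r"E:\WindowsImageBackup") > TOMBSTONE_LIFETIME:
        print("WARNING: newest backup is older than the 60-day tombstone lifetime")
```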
Source: https://www.nakivo.com/blog/active-directory-backup-best-practices/
|Getting Started||Data Definition| This chapter describes: The facilities described in the following sections are accessed from the Main menu (see the chapter Getting Started for the diagram of the menu tree). Pressing F1=help from the Main menu displays a help screen explaining the facilities available from the Main menu. To clear the help screen, press F1=help again or the Space bar. Pressing F2=data-definition from the Main menu invokes the Data Definition menu, which you use to specify data fields (see the chapter Data Definition for further details). Pressing F3=panels from the Main menu invokes the Panels menu, which you use to create and maintain panels (see the chapter Panels for further details). Pressing F4=global-dialog from the Main menu invokes the Dialog Definition menu, which you use to specify global dialog. Global dialog contains the procedures and functions that are active for the entire screenset (see the chapter Dialog for further details). Pressing F5=generate from the Main menu invokes the Generate menu, which you use to generate the COBOL copyfile. The copyfile contains the Data Block, and field numbers of all the fields used in the Data Block, to be used by the calling program or a COBOL program. This copyfile, which is simply a file referenced in a COBOL COPY statement, enables the inclusion of COBOL source code from files other than the source code file currently being compiled (see the chapter Generating the Copyfile for further details). Pressing F6=run from the Main menu runs the current screenset (see the chapter Running the Screenset for further details). Pressing F7=print from the Main menu provides a list of options that you can use to print the current screenset to a text file (see the chapter Printing for further details). Pressing F8=tutorial from the Main menu invokes an on-screen tutorial providing a brief overview of the features included in the Dialog System. You can exit from the tutorial at any time by pressing the Escape key. This action results in the following prompt: Exit from Tutorial. Are you Sure? Y/N Press Y to exit from the tutorial and return to the Main menu or N to continue with the tutorial. Pressing Alt from the Main menu invokes the Main Alt menu, the functions of which are described in the following section (see the chapter Getting Started for the diagram of this menu in the menu tree). Pressing Esc from the Main menu results in the following prompt: Exit. Are you Sure? Y/N Press Y to exit from the definition software to the operating system command line or N to continue your current session on Dialog System. Dialog System works on a single screenset at one time, so to save any data, panels, or dialog that you have defined, you must save them as a screenset. You can subsequently load that screenset to continue working with it. The facilities to do this, and the following other facilities, are accessed from the Main Alt menu: Pressing F1=help from the Main Alt menu provides a help screen explaining the facilities available from the Main Alt menu. Pressing F2=initialize from the Main Alt menu clears the current screenset, which contains definitions that you have created but no longer require. The system is then ready to start a new screenset. When you press F2 from the current screenset, the following prompt appears: Are you Sure? Y/N Press Y to clear the screenset, or N to cancel the initialization procedure. 
Pressing F3=load-screenset from the Main Alt menu invokes the Load Screenset menu to load a screenset file that you have previously created and saved. If you have worked on a screenset but not saved it, the following prompt appears when you select this option: Load without saving? Y/N Press N to return to the Main menu without loading another screenset. Press Y to invoke the Load Screenset menu (see Figure 3-1). Figure 3-1: Load Screenset Menu At the prompt, type the name of the screenset to be loaded and press Enter. Alternatively, you can select the directory option, which enables you to select the screenset name from your current directory. The F2=directory option displays the directory path, and extension of the listed files at the top of the screen. Directly beneath this line there is a list of existing screensets with file extension, date and time of creation, and size in bytes, together with the submenu shown in Figure 3-2. Figure 3-2: Directory Menu This Directory menu is available from several Dialog System menus. If no drive or path is shown, the current default directory is listed. If no extension is specified, the directory lists only the files that can be used at the point from which you entered the directory. For example, if you were loading a source file into the editor, the directory lists source files only. Figure 3-2 shows a Windows-format path on the status line. On UNIX, the drive designation, F:, does not appear and backslashes (\) are replaced by slashes (/). Pressing Enter from the Directory menu enables you to select a file from the directory listing for loading. Use the <up-arrow> and <down-arrow> keys to highlight the required file, then press Enter. The filename is displayed at the prompt on the Load Screenset menu. Press Enter again to load the screenset. Pressing F2=list-files from the Directory menu redisplays the directory, or, if the highlighted file is a library, it lists the files contained in the library with an .lbr extension. Pressing F3=list-dirs from the Directory menu lists the subdirectories in the current directory. Pressing F4=delete-file from the Directory menu displays the following prompt: Delete filename? Y/N/Esc ||is the name of the selected file.| Press Y to delete the file, and N or Escape to cancel the delete operation. Pressing F5=sort-name from the Directory menu sorts the filenames of the files in the directory list into alphabetical order. Pressing F6=sort-time from the Directory menu sorts the files in the directory list into chronological order. Pressing F7=unsort from the Directory menu shows the files in the directory list in the order in which they were created. Pressing F8=list-asc/desc from the Directory menu shows the files in the directory in ascending or descending order, according to the sorting option chosen. Pressing F9=drive from the Directory menu invokes a prompt for a drive letter, then lists files for the current directory in the drive you specify. Ensure that there is a disk or tape in the drive you specify before you use this option, otherwise there is a flashing error message. If you get this error message, insert a disk or tape and select the retry option to continue. Pressing Ctrl from the Directory menu invokes the Directory Control menu as shown in Figure 3-3. This menu offers the sorting options and browse-file facility explained as follows: Figure 3-3: Directory Control Menu Pressing F5=sort-size from the Directory Control menu sorts the files in the directory in size order. 
Pressing F6=sort-ext from the Directory Control menu sorts the files in the directory in alphabetic order of their file extension. Files with the same extension are further sorted in order of filename. Pressing F8=browse-file from the Directory Control menu displays the contents of the highlighted file (the system assumes that the highlighted files are in displayable text format). Pressing Esc from the Directory menu exits back to the file prompt on the Load Screenset menu. Pressing F4=save-screenset from the Main Alt menu invokes the Save screenset menu (see Figure 3-4) to save a screenset file that you create. This menu contains the standard help option and also the Directory menu described earlier in the section Load Screenset (F3). Figure 3-4: Save Screenset Menu You can use this menu to copy a screenset. Load the screenset and save it with a different name. This does not affect your original screenset. Pressing F5=screenset-switches from the Main Alt menu enables you to alter the current screenset settings, such as the priority of global or local dialog. The submenu shown in Figure 3-5 is displayed when you select this option. Figure 3-5: Screenset Switches Menu Pressing F2=normal-MF-panels from the Screenset Switches menu determines whether Dialog System uses its own internal coding (normal) or Micro Focus Panels to manipulate the panels in a screenset at run time. This is an attribute of the screenset and is saved when the screenset is saved. You can change this option at any time during the definition process. Whether or not you use a Panels screenset depends on your application. The advantage of using a Panels screenset is that these panels can be moved at run time using dialog (using the MOVEPNL function described in the chapter Dialog). The disadvantages are that these panels occupy more memory at run time than normal panels, and that you are limited to 50 panels per screenset compared with the 500 allowed for normal panels screensets. The size of the screenset itself is not affected. Pressing F3=global/local-dialog-1st from the Screenset Switches menu determines the default order in which the dialog for the screenset is obeyed at run time. If the order is set to local first, the dialog for a panel is processed at run time by searching the local dialog first, obeying any keystrokes or procedures defined there and ignoring any definition of those keystrokes or procedures in the global dialog. If a keystroke or procedure is not defined in the local dialog, its definition in the global dialog is obeyed. If the order is set to global first, the reverse is true. See the chapters Panels and Dialog for details of the local and global dialog facilities. Local dialog first. Pressing F4=key-translation-off/on from the Screenset Switches menu specifies whether Dialog System's run-time component looks for a module that performs key translation (Dsusrtrn) when this screenset is run (see the section User Control of Every Keystroke in the chapter Programming for more details on this module). Pressing F6=normal/top/bottom coords from the Main Alt menu determines the type of coordinates that are displayed on the status line. Pressing F6 toggles between the following settings: |Normal||When you are in a panel, the displayed coordinates have as their origin the top left-hand corner of the panel. 
When you are not in a panel, the displayed coordinates have as their origin the top left-hand corner of the screen.| |Top||The displayed coordinates always have as their origin the top left-hand corner of the screen.| |Bottom||The displayed coordinates always have as their origin the bottom left-hand corner of the screen.| Pressing F7=set-dets from the Main Alt menu displays a panel, similar to that shown in Figure 3-6, which contains the current screenset's details. This is a display feature only. Figure 3-6: Set Details Screen Pressing F8=colorize from the Main Alt menu invokes the Colorize menu (see Figure 3-7), where you can specify details of panel colors. Figure 3-7: Colorize Menu You can use this feature to create applications whose screens' colors are independent of the hardware on which they are run. This menu enables you to map attributes used in the screenset onto the COBOL system's attributes. The COBOL system has 16 system attributes available to map onto the attributes used in the screenset. If colorization is on, Dialog System's run-time component uses the system attributes in place of the screenset attributes. The system attributes for your application can, if required, be explicitly specified in a side file named application-name.cfg . The Colorize menu is also used for setting the default panel palette (see the chapter Panel Painting for details). When a new panel is defined, SYS-2 through SYS-6 form the attribute roll list, and SYS-16 is the panel's default attribute. The default colorize palette can be specified in the configuration file dsdef.cfg (see the chapter Setting Up the Configuration File for more details). Pressing the F2=on/off toggle from the Colorize menu specifies whether or not Dialog System's run-time component colorizes the screenset. Pressing one of the keys F3 to F9 from the Colorize menu maps the screenset colors to the COBOL system colors SYS-1 to SYS-7. Colorization is only active if the F2 toggle is set to on. On UNIX, to assign an attribute from the palette of 255 attributes to one of these function keys, use the <right-arrow> and <left-arrow> keys to move the "X" located on the palette to the desired attribute and press the required function key. As you scroll the "X" through the palette, the two numbers in parentheses above the palette line change. The first number is the attribute's palette number (1-255) and the second number is its hexadecimal value. When the "X" reaches the far right of the screen, the palette wraps, replacing the current row of attributes with the next selection. Pressing F10=more from the Colorize menu invokes the second colorize menu (see Figure 3-8), which you can use to map the system colors SYS-8 to SYS-16. Figure 3-8: Second Colorize Menu Pressing F9=import from the Main Alt menu invokes the Import Files menu, shown in Figure 3-9 , which enables you to import screenset text files. Figure 3-9: Import Files Menu Dialog System can transfer the components of a screenset, for example the Data Block, panel text and panel attributes, between two different screensets using the import and export utilities. The definition of the screenset components is held in a text file. The import utility extracts the information from such a text file to insert new components into a screenset (see the chapter Syntax of Import/Export Files for details on the structure of the text file). 
It is possible to create your screens in this format (using a method other than export, for example where you are converting from some other application to Dialog System) and then import them into your screenset. At the prompt, type the name of the screenset to be imported and press Enter. You can alternatively use the Directory menu described earlier in this chapter. After you select a file, the import of the file begins. The import process consists of two stages, described in the following sections: The import file is checked for valid syntax (see the chapter Syntax of Import/Export Files for further details). If any invalid syntax is found, the process is aborted with an error message similar to the following: FATAL ERROR:LINE 0085 After Token: END Esc: To ABORT. The error message gives the line number and the last token of syntax that was successfully parsed in the import file. To end the import process when invalid syntax has been found, press the Escape key. After syntax checking, the imported details are inserted into the current screenset and the imported information is checked for conditions such as conflicts with existing components in the screenset, or reaching screenset limits. If any errors are found, either recoverable or fatal error messages, as appropriate, are displayed. Recoverable error messages provide the option to continue with the process. For example, if a field length is too large, the following recoverable error message appears: ERROR Maximum field-length is 9999 Continue Y/N? If you select Y to continue, corrective action is automatically taken. In this example, a field length of 9999 would be used. For a list of recoverable and fatal error messages that could occur when you import an external file, see the chapter Error Messages. Pressing F10=export from the Main Alt menu invokes the Export Files menu, shown in Figure 3-10, which enables you to export screenset text files. Figure 3-10: Export Files Menu Dialog System can transfer the components of a screenset, for example the Data Block, panel text, and panel attributes, between two different screensets using the import and export utilities. The export utility creates a text file containing the definition of these screenset components (see the chapter Syntax of Import/Export Files for details on the structure of the text file). It is possible to create your screens in this format. At the prompt, type the name of the screenset to be exported and press Enter. You can use the Directory menu described previously in this chapter. If you do not specify a name, the name work.txt is used. After you select a filename, a popup component list and the Export Component Selection menu appear (see Figure 3-11). Figure 3-11: Export Component Selection Menu The popup list contains the following components: |Screenset Details||Screenset switches and colorize palette| |Data Block||Data Block master fields & groups| |Error Messages||Error messages and error message filename| |Validations||Validations per master field| |Panels||Text, attributes, fields, groups, dialog| Use the functions in the Export Component Selection menu to select which components you want to export then create the export file or invoke the Export Panel Selection menu. Pressing F2=panel-components displays the panels component list. This function is available only when the selection bar is positioned on the Panels component and the Panels component is selected. Select the panels components you want to export in the same way as the main components. 
The panels component list contains the following components:

Text and Attributes
Fields and Groups

Pressing F4=select-all-entries from the Export Component Selection menu selects every component in the component list for output to the export file. All selected components are marked with the greater-than (>) and less-than (<) symbols.

Pressing F5=unselect-all-entries from the Export Component Selection menu deselects all the components in the component list.

Pressing Space=(un)select-entry from the Export Component Selection menu enables you to select the components that you wish to export by selecting or deselecting individual components. Use the <up-arrow> and <down-arrow> keys to move up and down the list to the desired component name, and the Space bar to toggle the component between selected and deselected. Selected components are marked with the > and < symbols.

Pressing Enter from the Export Component Selection menu either creates the export file or displays a further menu. Use this function after you have selected all the components you want to export. If you have not selected the Panels component, the export file is created. If you have selected the Panels component, the popup panel list and the Export Panel Selection menu, shown in Figure 3-12, appear. You use the Export Panel Selection menu to select the panels that you want to export and create the export file.

Figure 3-12: Export Panel Selection Menu

Pressing F4=select-all-panels from the Export Panel Selection menu selects every panel in the panel list for output to the export file. All selected panels are marked with the > and < symbols.

Pressing F5=unselect-all-panels from the Export Panel Selection menu deselects all the panels in the panel list. This is useful if you want to export only the Data Block.

Pressing Space=(un)select-panel from the Export Panel Selection menu enables you to restrict what is exported by selecting or deselecting panels. Use the <up-arrow> and <down-arrow> keys to move up and down the list to the desired panel name, and the Space bar to toggle the panel between selected and deselected. Selected panels are marked with the > and < symbols.

Pressing Enter from the Export Panel Selection menu creates the export file. Use this function after you have selected all the panels you want to export.

Import is not an effective way to update a screenset, because if you import components into a screenset that already contains identically named components, those components are not updated. For example, you cannot change values such as the length, usage, or properties of a master field by importing an identically named field with new values. The best way to update a screenset is to use the following steps:

The following table lists the limitations in this version of Dialog System:

|First Panel||n/a||Yes||exists by default and thus can only be updated|
|Screenset Type||n/a||Yes||exists by default and thus can only be updated|
|Key Translation Flag||n/a||Yes||exists by default and thus can only be updated|
|Key Entry||Yes||Yes||see note below|
|Attribute Palette||n/a||Yes||inserted with the panel|
|Border||n/a||Yes||inserted with the panel|
|Field||Yes||No||no insert to existing group|
|Select Bar||n/a||No||bar is inserted with Group|

Note: The following keys cause problems with the import parser, and cannot currently be imported:

[ (left square bracket)
] (right square bracket)
" (double quote)

Although you cannot import these keys, you can use them in a dialog. You must remove these keys from a dialog before you export it, and add them manually to the dialog after you have imported it.
Copyright © 1998 Micro Focus Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
<urn:uuid:dd7ea6e9-5f94-4ea2-9477-7b33f11b5542>
CC-MAIN-2022-40
https://www.microfocus.com/documentation/net-express/nx30books/dcusng.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00666.warc.gz
en
0.818888
4,905
3.078125
3
Accessibility to healthcare varies considerably between urban and rural areas. There is a shortage not only of sub-specialty care in rural areas but also of primary care and healthcare providers. In this scenario, technology can play a huge role in making healthcare more accessible through telemedicine.

Telemedicine refers to the delivery of healthcare services from a distance by leveraging telecommunication and information technology. Telemedicine is being increasingly used to diagnose, treat, and evaluate patients without requiring them to visit a healthcare centre. This is usually done with the help of videoconferencing, high-precision cameras, monitors, sensors, and high-definition screens. By leveraging these technologies, telemedicine can help overcome the distance barrier and make healthcare more accessible to people living in remote areas.

By using telecommunication and information technology, doctors or healthcare providers can evaluate the condition of a patient and prescribe treatment accordingly. They can also examine the X-rays and MRI reports of their patients to diagnose the condition. If the doctor is not present for videoconferencing, the patient information or data can be stored using photos, videos, dictation, etc. Later on, the physician can access this information from the server in order to diagnose and treat the patient. Apart from treatment and diagnosis, telemedicine can be effectively used for follow-up visits, for managing chronic conditions, and for consulting specialists.

There are basically three types of telemedicine: remote patient monitoring, store-and-forward, and interactive telemedicine. Remote patient monitoring, or telemonitoring, is used for patients with chronic diseases. With the help of mobile medical devices that can collect data about vital signs like blood sugar and blood pressure, the patient can be monitored in his or her home. Store-and-forward refers to storing and sharing patient information, such as laboratory test results or diagnostic images and videos, with a physician located in another place. Here, the communication between the patient and the doctor does not take place in real time. In interactive telemedicine, on the other hand, patients and physicians can communicate in real time.

The main benefits of telemedicine are that it can increase accessibility to healthcare, reduce the need for outpatient visits, improve health outcomes, and reduce the overall cost of healthcare. It is especially suitable for people with limited mobility and those living in remote and difficult terrain. Moreover, it can reduce the risk of transmission of infectious diseases, as patients can consult doctors without having to step out of their homes. However, the inability to begin treatment immediately, the cost of data management, and data protection, i.e., the protection of patient information, are some of the main drawbacks associated with telemedicine.
<urn:uuid:3bb9ab2a-982f-47b8-a71a-3c09ab322ab9>
CC-MAIN-2022-40
https://www.alltheresearch.com/blog/telemedicine-making-healthcare-accessible
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00666.warc.gz
en
0.931898
569
3.6875
4
During this presidential election, we have seen a dramatic increase in the use of social media by public officials. From state senates to the White House, politicians are embracing these platforms to express their viewpoints, accomplishments, criticisms, and personal opinions. Open source intelligence tools can help determine which words come from political staffers and which come straight from the politicians themselves.

Arguably, the most discussed example of this trend comes from the Twitter account of presidential nominee Donald Trump. Trump tweets frequently about a variety of topics that are considered controversial. For better or worse, his Twitter activity helps make him the most talked about person in the world today.

In this webinar, we discussed how BrightPlanet harvested 3,200 tweets from Trump's Twitter account for personality profiling. The tweets were passed along to John Kreindler and Sean Farrell of Receptiviti, a psychology-based data collection company, in order to uncover which tweets were created by a Trump staffer and which tweets are authentically Trump's own words. Through analysis of the language, Receptiviti was able to identify the emotion, tone, and decision-making processes behind each published tweet. Here's what they found:

Trump's Tweets Come from Two Devices

One of the major discoveries was that Trump's tweets came from two devices: an Android device and an iPhone. This is significant because there are major differences between the tweets that come from each device. With their NLP software uncovering people's psychology through their tone and speaking patterns, Receptiviti notes that the Android tweets are full of exaggerated statements and hyperbole in comparison with the iPhone tweets, which tend to be more neutral.

The Words Make the Person

Our words make us who we are. David Robinson, a data scientist at Stack Overflow, notes that the Android tweets were published much earlier in the day. He also notes that the Android tweets have a tendency to contain fewer links and hashtags, simpler words, and an angrier tone. Based on comparing tweets to past transcripts, we've found that our model can classify Android/iPhone personalities with 94% accuracy. Overall, the Trump press conference transcripts were very similar to the Android tweets, with 74% of votes indicating similarity.

We Could Determine the iPhone Staffer

After identifying Trump as the likely author of the Android tweets, the question that remained was: who was the author of the iPhone tweets? We examined nine of Trump's top staffers who are active on Twitter. As active users, they all have their own distinct personalities and individual characteristics when it comes to their speech. The current RF model is able to categorize staffers correctly over 89% of the time. Each staffer's voice had its own speech patterns, tone, and style of critical thinking. We found a staffer who was a match, with 53% similarity to the iPhone tweets. The profile of Gavin Smith, the South Carolina Field Director for Trump's campaign, was similar to the iPhone Trump tweets.

Open source intelligence, as demonstrated by BrightPlanet and Receptiviti, can open up a whole new world of information. Download the free webinar to learn more.
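The webinar does not publish Receptiviti's actual model, but a minimal sketch of the general approach, training a classifier on labeled tweet text and then attributing new tweets to a device, might look like the following. The file name, column names, and model settings here are illustrative assumptions, not details from the study.

```python
# Minimal sketch: attribute tweets to a device (Android vs. iPhone) from their text.
# Assumes a hypothetical CSV with "text" and "device" columns; not Receptiviti's real pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

tweets = pd.read_csv("trump_tweets.csv")  # hypothetical input file
X_train, X_test, y_train, y_test = train_test_split(
    tweets["text"], tweets["device"], test_size=0.2, random_state=42
)

# TF-IDF word features feed a random forest, a rough analogue of the "RF model" mentioned above.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    RandomForestClassifier(n_estimators=300, random_state=42),
)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
print(model.predict(["We are going to win so big, believe me!"]))  # attribute a new tweet
```

In practice, Receptiviti's analysis relies on psycholinguistic features (tone, emotion, decision-making style) rather than raw word counts, so this sketch is only a simplified analogue of that kind of pipeline.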
<urn:uuid:1e4bc1c2-6a1c-403c-aec0-091dbe968264>
CC-MAIN-2022-40
http://brightplanet.com/2016/11/01/opensourceintelligencetoolstrump/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00666.warc.gz
en
0.957962
661
2.5625
3
When it comes to the facet of technology, it is the young students who have become the masters and teach what they know. Creekside Middle School in Patterson, CA rolled out over 1,000 new Chromebooks for the students which meant that the teachers needed to get many professional development sessions in order to answer any questions the students had regarding the new technology. After seeing the effects that the training had on their (seemingly) ancient teachers, students began asking if they, too, could attend these training sessions. Kerry McWilliams, former principal at the school, said yes but only if the students taught their own peers. Thus began Tech Boost, a biannual conference in which the student experts teach other students how to code, create fun video games, design awesome web pages, and even apps for their smartphones. Current Creekside Principal, Cathy Aumoeualogo, knows that the potential is there, in the students, they just have to unlock it. “It’s about tapping into the student talent that is already there. The teachers and administrators just had to plan how to fit it into the semester.” When Children Teach, They Gain Valuable Future Skills Students who help teach, or “student geniuses” as they call them, gain real-world learning experience while earning credit for an academic course. It’s a win-win: the student is able to gain real-world experience and for their hard work, they gain valuable credit to help them in their academic careers. With their acquired real-world tech skills and the confidence to wield them, these students graduate school as empowered technology leaders, confident in their own voice. It is important to encourage both boys AND girls to use their leadership skills and teach their students. Computer science is a heavily male-dominated field which many girls find hard to get into.
<urn:uuid:20afcb60-3faa-4e10-a5fa-cb4ee0af4d35>
CC-MAIN-2022-40
https://ddsecurity.com/2017/04/13/student-run-tech-initiatives-empowers-prepares-future/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00666.warc.gz
en
0.977432
386
3.3125
3
This article was first published in February 2020. Since then, cybersecurity attacks and virtual scams have been on the rise due to the uncertainty caused by the COVID-19 pandemic. Let’s examine the most common scams we’ve seen. Scammers have been around since long before the advent of computing. From the infamous Gregor MacGregor, who invented a fictitious Central American country (of which he was, of course, Crown Prince) to the masquerading Pretorian Guard of the Roman empire, who successfully conned (then assassinated) the unfortunate Emperor Julianus, fraud is nothing new. Whether for financial gain, power, fame, or information, scammers are always on the look-out for a new mark. Interestingly, their methods have changed little, which is why phishing remains one of the most common methods cybercriminals use to gain information. Here are four of the most common real-life phishing scams netting victims today. Most Common Real-Life Phishing Scams Scams Related to COVID-19 Nearly 50 million Americans have filed COVID-19-related complaints with the Federal Trade Commission, and the number of daily complaints to the FBI’s Internet Crime Complaint Center has more than tripled over the past four months, according to FBI Deputy Assistant Director Tonya Ugoretz. As it related to cybersecurity, beware contact tracing scams. If a person tests positive for COVID-19, authorities may use a process called contact tracing to identify others who may have been exposed. It’s a critical step to prevent the disease from spreading. But it’s also an opportunity for scammers to use a phishing technique known as smishing. In a phishing scam, fraudsters pose as a trustworthy source to steal your personal information. Smishing is phishing that occurs by text message. Indeed, if you have been exposed to the virus, contact tracers will text you, then follow up with a phone call. Smishers take this one step further by including a URL within the text. This link may then lead you to enter personal information or may download malware on to your device. If you receive a text with a link, The Federal Trade Commission (FTC) says to ignore and delete scam messages. CEO Fraud / Email Account Compromise / The Gift Card Scam According to a recent FBI report, U.S. victims of business email compromise (BEC) scamming have lost a collective $2.9 billion. In this incredibly common scam, cybercriminals spoof an executive account – typically the CEO or CFO – and request an employee complete a wire transfer or purchase gift cards for a new “incentive” program or as a personal favor. Check out the phishing examples below: These phishing scams play on employee’s sense of duty and urgency. The scammers write “I have an urgent request” and “confirm if you can handle this” – and it works. In the FBI report, they report that between 2016 and 2018 there was a 136% increase in identified losses tied to BEC phishing. Although this common real-life phishing scam has targeted every industry, schools, local governments, and real estate have been particularly impacted by this attack. Look-Alike Login Pages As users have become more conscious of these threats, cyber criminals have adjusted and refined their practices. Take a look at the website screenshot below: Although this phishing email is poorly composed (note the originating account and erroneous capitalization), it works. Why? 
A rushed employee skimming email might not even read the email contents, instead recognizing the LinkedIn logo at the top of the email and the brand-colored button in the email. Clicking on the ‘Verify Your LinkedIn Account Now’ button would bring you to a page that looks like a legitimate LinkedIn login page. The user enters their account information, receives an ‘Account Confirmed’ notification, and then continues on with their business – failing to realize they were just phished. This phishing scam is particularly dangerous because users may leak access credentials without realizing it, enabling cyber criminals to have essentially unlimited access to the account. Tech Support Scams It’s not uncommon for the average user to express a lack of confidence in technical matters, and cyber criminals take advantage of that in this next common phishing scam. In a tech support scam, cyber criminals utilize repetitive pop-up messages, full screen functionality, and dialog box loop cycles to perpetuate the deception that something is wrong with the users PC. In addition, scammers take on the persona of behemoth tech giants – like Microsoft, Apple, and Google – to add legitimacy to the scam. In the example below, documented thoroughly on the Microsoft blog, cyber criminals convince users to call a toll-free number and then talk them into purchasing anti-virus software they don’t need. Whether it’s a Nigerian prince offering an incredible investment opportunity or a work-from-home job stuffing envelopes that promises thousands a week, skepticism is a vital tool in the anti-phishing toolbox. If an opportunity seems too good to be true, it probably is. Security Services company ADT reported that in 2018 Americans lost over $700,000 to the ‘Nigerian Prince’ email scam and millions more in more sophisticated get-rich-quick schemes. In these cases, cyber criminals are taking advantage of user’s desires – for financial gain, free vacations, lucky happenstance, or a change of pace. Pyramid schemes and ponzi schemes both fall under this category. Social media has only exacerbated the issue. In one recent example, WhatsApp users have fallen victim to the ‘Loom Circle’ scam. The scam targets younger victims and promises a return of £1280 from a £160 investment. How To Protect Your Users From The Most Common Phishing Scams Humans are notoriously the weakest element of security. People are gullible, trusting, and curious – and it is precisely these qualities criminals rely on to make millions. There are ways for businesses to protect their investments and put controls in place to prevent the penetration of the network – even if a phishing scam is successful in targeting an employee. Some first steps to hardening your security posture include: - Security Awareness Training - Simulated Phishing Campaigns - Multi-Factor Authentication - Email Filtering - Perimeter Controls (Firewalls/DNS Filtering) - Endpoint Protection - Enhanced Access Management To better protect your users and your business, talk with our security experts today. Mindsight’s team of pentesters and ethical hackers understand how cyber criminals think and act, and they can help you harden your security posture to protect your business, its data, and the resources you rely on. Like what you read? Contact us today to discuss the most common real-life phishing scams and what you can do to protect yourself. Mindsight, a Chicago IT services provider, is an extension of your team. 
Our culture is built on transparency and trust, and our team is made up of extraordinary people – the kinds of people you would hire. We have one of the largest expert-level engineering teams delivering the full spectrum of IT services and solutions, from cloud to infrastructure, collaboration to contact center. Our customers rely on our thought leadership, responsiveness, and dedication to solving their toughest technology challenges. About The Authors Mishaal Khan, Mindsight’s Security Solutions Architect, has been breaking and – thankfully – rebuilding computers for as long as he can remember. As a Certified Ethical Hacker (CEH), CCIE R&S, Security Practitioner, and Certified Social Engineer Pentester, Khan offers insight into the often murky world of cybersecurity. Khan brings a multinational perspective to the business security posture, and he has consulted with SMBs, schools, government institutions, and global enterprises, seeking to spread awareness in security, privacy, and open source intelligence. Siobhan Climer, Science and Technology Writer for Mindsight, writes about technology trends in education, healthcare, and business. She writes extensively about cybersecurity, disaster recovery, cloud services, backups, data storage, network infrastructure, and the contact center. When she’s not writing tech, she’s reading and writing fantasy, gardening, and exploring the world with her twin daughters. Find her on twitter @techtalksio.
<urn:uuid:fa15a3db-41d6-469e-823b-04d03d703497>
CC-MAIN-2022-40
https://gomindsight.com/insights/blog/something-smells-phishy-scams-related-to-covid-19-an-updated-cybersecurity-report/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00066.warc.gz
en
0.928536
1,775
3
3
Forward vs Reverse DNS Lookup Zones: Do You Need Both? An understanding of network services is required for CompTIA Network+ certification. Some important network functions include DNS services, including Forward and Reverse DNS Lookup Zones. You can use a Forward Lookup Zone to map a domain with its IP address. On the other hand, a Reverse Lookup Zone will map an IP address to its domain records. These may seem simple but are powerful tools to secure your network and to identify where visitors are coming from. Let's explore the differences between these DNS Lookup Zones and how and when they are used. Understanding DNS Zones If you can think of the Domain Name System (DNS) as a library — with indexes, bookshelves, and dictionaries — then DNS Zones are like separate, but connected, wings of a library. What are DNS Records, and How are They Stored? DNS Records are simply mappings from a name to either an IP address or a service. There are many types of records, but the most common ones are "A Records" and "CNAMEs." A Records map a name directly to an IP address. CNAME records are an alias record and map one name to another. Some common examples of DNS records are: A Records are the most basic type of DNS record. A Records are used to point a domain or subdomain to an IP address. CNAME records are used to point a domain or subdomain to another hostname. Mail Exchanger (MX) records are used to help route email. MX records differ from A Records and CNAMEs in that they also require a "priority" value as a part of their entry. A TXT record is used to store text-based information (e.g., to hold SPF data and verify domain ownership). How they are stored depends entirely on the DNS software. Microsoft DNS Server is a popular DNS software, particularly for topologies that already have Active Directory. MS DNS servers can store these files as plain text as do most software such as BIND (most popular name server software). Alternatively, DNS Servers, as part of an AD topology, can store the records in AD. Other software such as PowerDNS has the capability to store these records in SQL. You'll need a sound understanding of DNS for the new CMMC certification. DNS Zones and Subdomains DNS Zones encompass all records for a domain. For example, a zone for xzy.com would contain records such as www.xzy.com, an MX record for xzy.com, and possibly other records such as mail.xzy.com. Subdomains are usually used to break out autonomous children domains to allow administrative control over that subdomain to another managing entity. For example, you may have a hq.xzy.com, where all the headquarter DNS records are under. An example might be server01.hq.xzy.com or server02.hq.xzy.com. Subdomains allow segmentation of DNS zones, usually by administrative function. DNS Zone Files A DNS Zone File is a text file that maps domain names and IP addresses. One example of a Zone File is a DNS master file that accurately describes a zone. Text DNS Zone Files are defined by RFCs 1034 and 1035. They are human-readable and editable. There is some variability in them in that some admins prefer to put the entire full records in each line, while some prefer to use shorthand. These types of differences are also between management software such as the Microsoft DNS GUI and others like Webmin. As long as the zone file works and is easily readable, it is usually best to keep the same formatting as already exists. This will become more apparent further down in the examples. 
When are DNS Zones Used

A DNS zone is given administrative responsibility for the domain name space in the DNS. DNS Zones are used any time a domain name wishes to have DNS records. It is not uncommon for organizations to have internal zones that are not publicly accessible and only hosted on internal DNS servers. Active Directory is a prime example of an internal zone that isn't publicly accessible. There are two types of DNS zones: a forward lookup zone and a reverse lookup zone.

What is a Forward DNS Lookup Zone?

A forward lookup zone typically converts a name to an IP address or another name at some point. The important part, though, is that you start with a name. Eventually, that name gets resolved to an IP address in most cases. This zone contains all the records of domain names to their IP addresses.

When to Use Forward DNS Lookup Zones

You will use a forward lookup zone anytime you have a name that you want to use instead of an IP address. You can create a record for how the name maps to the IP address in a forward lookup zone. A simple forward zone file for the xzy.local domain used in this article might look something like the following (the host addresses mirror the reverse zone example further down):

$ORIGIN xzy.local.
$TTL 86400 ; 24 hours
@ IN SOA ns1.xzy.local. hostmaster.xzy.local. (
2020081001 ; serial number
900 ; refresh
600 ; retry
86400 ; expire
3600 ) ; default TTL
@ NS ns1.xzy.local.
@ NS ns2.xzy.local.
ns1 A 192.168.0.2
ns2 A 192.168.0.3
mail A 192.168.0.10

In the above zone file, we have quite a few lines. The Start of Authority (SOA) defines a few things about the zone. This is metadata, such as who has authority (the primary name server and the email of the admin). It also defines records related to its serial number and how long to cache the records. Further down, we have NS records that define the name servers hosting this domain. At the very top, we defined an $ORIGIN, which means all records not fully terminated with a trailing period get the $ORIGIN appended to them, such as the "mail" A Record.

What is a Reverse DNS Lookup Zone?

A Reverse Lookup Zone contains all the records of IP addresses to their domain names. It would be too easy to define a reverse lookup as the opposite of forward, but it is true. A reverse lookup zone is used any time you want to convert an IP address to a name.

When to Use Reverse DNS Lookup Zones

Reverse lookup zones should be implemented whenever possible. Implementation may be difficult if the IP addresses are public, as you would need either the owner of the IP space to provide reverse lookup services for you or to delegate the subnet to you if you have a large enough address space. Many times, having the reverse lookup zones can be helpful to troubleshoot or investigate issues. Spam filters, many of which are in the cloud, may use reverse lookups to help detect business IP addresses versus home user connections. They do this in one of two ways. First, if there is no reverse lookup, they may block it. Second, if the reverse lookup results in specific keywords, they may block it. Keywords that may trigger blocking include words in the name that do not appear to be a business address.

Reverse DNS Lookup Zone Example

The following reverse zone is for 192.168.0.0/24 or 192.168.0.X. It is important to note that the zone name is actually 0.168.192.in-addr.arpa. The IP is reversed in the lookup so that it is easy to put the last octet for the IP address.
You can put the last octet as the numerical value or put the entire record name with a trailing period. Forward and Reverse Records: Related But Not Synchronized One important fact to understand about forward and reverse lookup records is that they are separate zone files. Combined with that fact, the relationship of forward to reverse is a many to one, while the relationship between reverse to forward is a one to one. Basically, you can map many different names to a single IP address, and they will resolve correctly. On the other hand, you can only effectively map any given IP address to a singular name. In DNS records, when there are keys with multiple values, DNS uses a functionality called round-robin to randomly return the value. For example, if you have two reverse records that map 192.168.0.1 to test1.xzy.local and test2.xzy.local, half of the time you do a reverse lookup on 192.168.0.1, it will return test1, and the other half, it will return test2. All this to say that sometimes the reverse records do not match up perfectly with the forward records, and this is one scenario that describes why. Forward vs. Reverse DNS Lookup Zones: Do You Need Both? Typically, when you need DNS services, your first and only thought is converting names to IP addresses via a forward lookup. The reverse lookup is usually an afterthought or something that is not well maintained. It is not required for forward lookups to work, but as mentioned above, some services may rely on it like mail services that query reverse lookup records to determine if the source is a spammer. In other cases, it can be helpful on things like traceroutes to see the name associated with the IP address.
<urn:uuid:11c2b69c-51b0-48c8-a98f-c7d9ea9fc444>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/technology/networking/forward-vs-reverse-dns-lookup-zones-do-you-need-both
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00066.warc.gz
en
0.922094
2,037
3.59375
4
This is part one of a three-part series on Preparing for and Mitigating Potential Cyber Threats.

People are not perfect, and they are the biggest threat to a company's assets and the likeliest cause of a data breach. Human error is almost a certainty when it comes to a bad actor walking through an open door in a network. As more and more organizations face security threats, they are taking precautionary steps to ensure their safety. These steps include educating employees and having standard operating procedures that spell out what to do should some of the situations outlined below become a reality.

Increase Education and Vigilance

Organizations need to cultivate a culture of cybersecurity awareness. Since the human factor in the security of the network is a vulnerability, organizations need to make sure that adequate training and tools are available so that employees are prepared should a bad actor target them as a possible entry point for a breach. The most recent Verizon Data Breach Investigations Report (DBIR) found that 85% of cyberattacks are due to human mistakes, such as clicking on malicious links, sharing passwords, or accidentally deleting files or data.

Do Not Take Chances With Passwords

As cyber threats continue to evolve, the importance of password security has become increasingly clear. In recent years, several high-profile data breaches have been linked to weak passwords, demonstrating just how vulnerable we are to attack. If you want to keep your data and resources safe from attackers, there are a few things you need to do. First, never give out your login information or personally identifiable information (PII) to anyone. Second, be careful of phishing emails and infected attachments. If you think something might be suspicious, don't open it. Finally, keep sensitive information like credit card numbers and IP addresses in a secure place. By following these simple steps, you can help protect yourself from becoming a victim of identity theft or other cybercrimes.

Employ Multi-Factor Authentication Practices

Sometimes having a strong password is not enough to prevent a cyber-attack. As a result, it is essential that organizations and individuals have all the proper tools necessary to protect themselves against cyber threats. This includes using strong passwords and multi-factor authentication (MFA). Using MFA will help prevent hackers from gaining access to your accounts even if they obtain your username and password. It can also help reduce the likelihood that your account will be compromised in the first place. MFA can include:

- Something you know – such as a password, a PIN (personal identification number), or the answer to a security question
- Something you have – such as a device like a mobile phone or a wearable device like Google Glass or an Apple Watch
- Something you are – such as a biometric identifier, for example a fingerprint or a face scan

It is important for organizations to implement MFA practices on all accounts that have access to sensitive information, such as customer records. The most common method for doing so is by using SMS text messages with a one-time code.

Be Vigilant With Email

"Hey Jackie, here is a spreadsheet with the latest forecasting numbers we tallied from the last board meeting. Thanks, Jim"

…Jim is on vacation and your team has agreed to use the central CRM to share data rather than spreadsheets. Also, Jim's email is [email protected] and not [email protected].
Keeping an eye out for suspicious emails that may have been sent from a source you do not know is one of the best ways to avoid falling into that sandtrap. Bad actors are becoming cleverer all of the time. Opening attachments is an easy way for them to run malware to infect a computer and potentially the company network. If a suspicious email comes through, do not open any attachments —following an organization’s standard operating procedures, whether that is to flag the email as a phish or just delete it together. Avoid downloading files from unknown senders and unrecognized sources, as they may contain viruses. In addition to attachments, links within an email that have come from an unknown source is another way a bad actor can gain access to a computer and install malware. If you’re ever unsure about whether a link is safe or not, there are a few things you can look for. First, check the URL to see if it looks suspicious. If it’s a long, nonsensical string of characters, it’s probably best to avoid clicking on it. If you’re concerned about whether a website is safe to visit, there are a few things you can do to check. One is to use Google Safe Browsing, which will tell you if the site has hosted malware in the past 90 days. To use it, just go to the URL: http://google.com/safebrowsing/diagnostic?site= and type in the address of the site you want to check – for example, google. Do Not Leave Accounts Open Close background applications when you are not using them, and don’t leave accounts open for long periods of time. Additionally, make sure to keep your operating system up-to-date. In The End… In the end, there will always be people… The potential for human error to lead to infiltrations and breaches is always a concern for businesses. However, by taking steps to educate employees and strengthen passwords and other security measures, the risk of a breach can be greatly reduced. Organizations can reduce the risk of breaches by doing routine vulnerability scans and having next-generation network protection. By having a mature cyber security posture, organizations can further reduce the likelihood of breaches. Part 2 in this series: Is Your Team Prepared?
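As a concrete illustration of the "Be Vigilant With Email" advice above, a short script can automate two of the checks described in this section: spotting a sender domain that merely looks like your company's real domain, and spotting a link whose visible text does not match its actual destination. The company domain, thresholds, and sample values below are illustrative assumptions, not part of any particular product.

```python
# Minimal sketch: flag lookalike sender domains and mismatched link text.
# "example.com" stands in for your real company domain; the 0.8 threshold is arbitrary.
from difflib import SequenceMatcher
from urllib.parse import urlparse

COMPANY_DOMAIN = "example.com"

def is_lookalike(sender_domain: str, trusted: str = COMPANY_DOMAIN) -> bool:
    """Suspicious if the domain is very close to the trusted one but not an exact match."""
    if sender_domain == trusted:
        return False
    similarity = SequenceMatcher(None, sender_domain, trusted).ratio()
    return similarity > 0.8

def link_mismatch(display_text: str, href: str) -> bool:
    """Suspicious if the text shown for a link names one site but the URL points elsewhere."""
    shown = urlparse(display_text if "://" in display_text else "http://" + display_text)
    real = urlparse(href)
    return shown.hostname is not None and shown.hostname != real.hostname

print(is_lookalike("examp1e.com"))   # True  -> treat the sender as suspicious
print(is_lookalike("example.com"))   # False -> exact match with the trusted domain
print(link_mismatch("linkedin.com", "http://login-verify.example.net/x"))  # True
```

Heuristics like these are no substitute for email filtering and user training, but they show how simple the underlying red flags are once you compare what an email claims against what it actually contains.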
<urn:uuid:34ffd07d-a969-41ed-b4b0-2d4f40b947ed>
CC-MAIN-2022-40
https://www.cybermaxx.com/resources/preparing-for-and-mitigating-potential-cyber-threats-part-1-people-are-the-biggest-threat/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00066.warc.gz
en
0.940765
1,199
2.78125
3
How Chatbots Are Helping Us in Different Ways? With the introduction of technology like Artificial Intelligence, the whole world has become almost robotic. Yes, artificial intelligence has introduced us to the automation and robotic environment and Chatbots agents are one such example of this robotic agent. As chatbots are helping in different ways, they have lessened the burden of human beings by taking over their work. Today, different industries are making use of these AI-based agents. And in this blog, I am going to introduce the fact how these chatbots are helping in different ways. But before that let’s have a quick look at some of the developer stats related to that. According to a survey, A global October 2019 survey of e-commerce decision makers revealed that 24 percent of e-commerce companies worldwide planned on implementing AI chatbots on their sites in 2020. Additionally, 9 percent of respondents reported that their company had already implemented them, and 39 percent of companies did not have plans to implement AI chatbots in 2020. Proceeding with my blog, I am going to get a deep dive into the informative sea of the role of chatbots following an authentic definition of chatbots. What are Chatbots? Chatbots are machine agents that provide access to data and services through natural language interaction. Though the term chatbot is relatively recent, computer sys- tems interacting with users in natural language have been developed and researched since the 1960’ies. The current surge of interest in chatbots is in part due to recent advances in AI and machine learning. Chatbots are software agents that interact with users through natural language conversation. As such, chatbots are seen as a promising technology for customer service. Chatbots represent a potential means for automating customer service. In particular because customer service is increasingly provided through online chat. How Chatbots are Helping in Multiple Ways? Chatbots may serve a number of purposes, such as customer service, social and emotional support, information, entertainment, and ties the user to other people or machines. Let’s follow some of the pointers which would help us to know the role of and the blessing of chatbots in this digital world: #1. Automation in Customer Service Customer service has always been key to service companies. With the uptake of the internet, customer service has gradually transformed from being personal and dialog based towards being automated and self-service oriented. However, automation and online self-service solutions do not fully meet users’ needs for help and assistance and service providers’ costs associated with manual customer service are still increasing. In an effort to provide more efficient customer service, while meeting customers in their preferred channels, service providers offer customer service through a range of online channels, such as company webpages, social media, email, and chat. Customer service through chat is increasingly prioritized. Chat represents a relatively resource effective channel for the service provider, compared to support by email and telephone, as customer service personnel may handle multiple requests in parallel. The chat also provides the user with a written summary of the interaction which may be helpful in terms of instruction details or links to useful online resources. Given the increasing uptake of chat as a prioritized channel for customer service, chatbots are seen as ever more relevant as a complement to customer service. 
#2. Chatbots for Productivity The vast majority of participants (68%) reported productivity to be the main reason for using chatbots. These participants highlighted the ease, speed, and convenience of using chatbots. Also, they noted that chatbots provide assistance and access to information. #3. Chatbots for Social and Relational Purposes One of the most important benefits of using chatbots is the potential social and relational benefits they can provide. This category of motivation was reported by 12% of participants. It is noteworthy that, while chatbots can enhance interactions between humans, most of the participants addressing social and relational motivations commented on the social experience of interacting with the chatbot. For example, the chatbot is perceived as a way to avoid loneliness or fulfill a desire for socialization. #4. Chatbots in Education Chatbots in education promise to have a significant positive impact on learning success and student satisfaction. When an educational institution uses a chatbot to communicate with students, the error rate at which the application works is initially very high. The most effective way to apply it is if it is initially implemented in some of its predefined topics. A good example of the successful application of chatbots as educational assistants is a chatbot, Jill, developed by Ashok Goel and applied at Georgia Tech. More than 400 students attend the online courses of Ashok Goel every semester. The chatbot was trained on specific questions (40,000 items) that came from a variety of sources in the early years. When answers showed 97% accuracy, the test mode was cancelled and the bot went online. Classification of chatbots in education according to tasks Depending on the functions carried out by chatbots in education, we can classify them on the basis of the following tasks: Administrative and management tasks to foster personal productivity: Chatbots provide personal assistance to students, aiding onboarding (Farkash, 2018) and personal productivity. Tasks include schedule or email management and task, submission deadline or assessment reminders. This uninterrupted personalization involves giving each student a rapid and personalized service, which takes pressure off academic services administration. Taking care of FAQs: Chatbots provide a response to student FAQs regarding administration or learning concepts and contents. Unlike the first, they do not include personalization elements but student services in the form of FAQs. Tasks include information about admissions and enrolment, financial services, technical problems (email, virtual campus, etc) or frequent queries relating to study content. they allow student mentoring during the learning process. They are able to respond emotionally (they include non-verbal communication gestures and expressions), they monitor the student’s understanding (cognitive control) and they can provide support and make suggestions to the student when needed. One of the main tasks is the provision and adaptation of contents. In this case, they are chatbots that enable educational programme contents to be generated and adapted, which are then sent straight to the user, taking their preferences into account. So this is how chatbots which are AI agents are helping us in multiple ways. Different industries are getting benefit of these chatbots by adopting their intelligent assistance. I have discussed and touched some of the areas where chatbots are leaving their huge impact. 
If you are also in one of those industries, or planning to get started in one, do not waste your time: contact the best chatbot companies in India, where you can get the best services at the best cost.
<urn:uuid:6f04306e-4e67-4964-a6c4-dea6f9ecc91f>
CC-MAIN-2022-40
https://resources.experfy.com/ai-ml/how-chatbots-created-storm-tech-world/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00066.warc.gz
en
0.947947
1,409
2.953125
3
We've written on this blog about keeping your network safe by using strong authentication, advanced firewall technology, and modern antivirus software. Today, we'll discuss another strategy for securing your networks: the honeypot. This technique allows you to detect intrusions on your network by setting up a decoy and being alerted that something is going wrong: someone trying to make a connection that they shouldn't to your honeypot; someone trying to copy files that they shouldn't copy; even stopping an insider or employee from looking around in parts of the network they shouldn't be in.
Honeypot technique and devices
The honeypot technique is one in which you offer something to an attacker that seems to be of value, but is really just a decoy that can report tampering to you. The decoy can look like a file server, a particular port on a particular machine, a workstation, or any other thing that you'd like to dress it up as. There are a few companies out there who will sell you a piece of hardware that attaches to your network and looks like one of your existing devices, but has reporting or logging features to let you know when someone has tampered with it. Those devices are just tiny workstations themselves: devices that run an operating system, logging and reporting software, and nothing else. Your best bet may be to set up your own virtualized workstation or server, which does nothing besides sit there and wait to be tampered with… then report when it has been tampered with. If you have unused resources on a machine that's already running, you can set that system up to run multiple honeypots, reporting login attempts, connection attempts, and so on. If someone attempts to connect with it, or copy data out of it, you can be alerted that malicious behavior is taking place and put a stop to it.
When do you need a honeypot?
Honeypot detection techniques can be used on your network to notify you of a breach of the highest severity. Since they detect malicious behavior that is already on your network, they are a reactive security measure that lets you know that you've already been compromised (as opposed to a proactive measure that would try to prevent the breach in the first place). If someone tampers with a honeypot that you've set up, you know that it's time to take dramatic action to tighten up your network. If an attacker (from outside of your organization) has made it that far, they've evaded the rest of your defenses, are able to connect to your network, and are likely surveilling the network and looking for targets. While a modern EDR or firewall system will be able to find, for example, a spike in network activity that indicates your data is being exfiltrated, a honeypot can alert you that someone was looking for data in places they shouldn't, before that point. For instance, effective honeypot alerts can mitigate a ransomware breach by sending an alert before the attacker can even copy or encrypt your data, simply because they've stumbled across a decoy file share while canvassing your network. Honeypots are a great tool to implement on your network as a high-priority alert about what's happening on it. Once they're implemented, they (ideally) require no attention until something goes wrong; but once they do, it signals that there is an urgent matter to attend to.
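To make the idea concrete, here is a minimal sketch of a TCP honeypot written in Python using only the standard library. The port, log file name, and log format are illustrative assumptions, not a reference to any particular commercial product; a real deployment would also forward the alert to a SIEM or notification channel rather than only writing to a local file.

import logging
import socket

# Illustrative values - pick a port that mimics a service you do not
# actually run on this host (e.g., 2222 pretending to be SSH).
LISTEN_ADDR = ("0.0.0.0", 2222)
LOG_FILE = "honeypot.log"

logging.basicConfig(filename=LOG_FILE, level=logging.INFO,
                    format="%(asctime)s %(message)s")

def run_honeypot() -> None:
    """Accept connections, log who knocked, and close immediately.

    Any hit on this socket is suspicious by definition, because no
    legitimate service lives here; the log line is the alert.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN_ADDR)
        srv.listen()
        while True:
            conn, (ip, port) = srv.accept()
            logging.info("connection attempt from %s:%s", ip, port)
            # In a real setup you would also raise an alert here
            # (email, SIEM event, chat notification), not just log.
            conn.close()

if __name__ == "__main__":
    run_honeypot()

The design choice that matters is that nothing legitimate should ever touch this listener, so every log line it produces is, by definition, worth investigating.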
As part of your overall security strategy, they can be a very valuable resource in protecting your data and stopping malicious behavior. - Written by Derek Jeppsen on behalf of Sean Goss and Crown Computers Team
<urn:uuid:0c798c52-cc9e-499d-8c46-19fdd3b47118>
CC-MAIN-2022-40
https://www.crowncomputers.com/item/99-reactive-network-defense-with-honeypots/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00066.warc.gz
en
0.956741
773
2.671875
3
How Mobile Technology Can Improve K-12 Education Processes
Given the widespread use of mobile devices in personal and work life, it doesn't seem farfetched to imagine greater use of this technology within K-12 classrooms. In fact, many classrooms are now investing in mobile technology for teaching and administrative purposes. COVID-19 has sped up the adoption of the technology by schools and school districts. Many companies, like Docutrend or Promethean, have been hard at work developing their offerings to help teachers provide the best for their students.
Mobile technology provides convenience, speeds up tasks
One huge benefit of mobile technology is that it is portable. Tablet computers and smartphones, in particular, can be easily carried or placed in a pocket, enabling them to be used in virtually any location. This provides convenience to educators, helping explain why they are more likely to agree than disagree that mobile technology improves K-12 education processes. Furthermore, immediate access to a mobile device speeds up the completion of tasks.
Mobile technology opens up learning possibilities
When students have access to mobile devices and associated mobile apps, they have access to a wide range of content they might not otherwise encounter. For example, they can view information that's updated in real time, available in a variety of languages, and sourced from all corners of the world. While this information could theoretically be viewed on a desktop PC as well, there are typically not many of these in a classroom. A classroom is more likely to have access to multiple mobile devices due to their size, portability, and cost.
Mobile technology gives students technical skills
The skills required for student success are shifting as technology evolves. Once, schools prioritized typing on a traditional keyboard, but this need is largely being replaced by typing (or swiping) on a mobile device touchscreen or voice-controlled input.
Additional trends include interactive displays and applications
Companies like Promethean are bringing mobile learning to another level. With their interactive displays, teachers can prepare lessons for their students and place them in a virtual location where students can easily access them. Practice and assessments are provided and turned in virtually upon completion. When possible, a variety of lessons are provided to meet the differentiated needs of the students. This could include videos, readings, online collaborative work, virtual discussions, project-based work, and more. Students are given a certain amount of time to complete the lessons. Teachers then provide feedback to the students on the "assignments" using a comment feature and/or a grade. In most cases, teachers schedule "office hours" when they are available via online video "chats," such as Microsoft Teams, Google Hangouts, FaceTime, or Skype, to answer questions or to check in "face-to-face" with students.
In order for students to excel with new technologies, they need to be using them. They will gain advantages over students who lack access to these devices (and associated software), making them better prepared for college and ultimately a career. Mobile technology in the form of devices and software can truly improve processes from a convenience, speed, and learning perspective. Teachers can complete tasks more efficiently, while students have the opportunity to acquire a vast array of content and technical skills that can serve them throughout their personal and work lives.
<urn:uuid:cb706f2d-59d5-4246-a3cf-5f0d31d52d21>
CC-MAIN-2022-40
https://www.docutrend.com/blog/educators-will-see-a-greater-use-of-technology-in-the-classroom/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00066.warc.gz
en
0.946352
687
3.46875
3
A Stub Citation is typically a partial sentence that is used as a precursor to a fuller Citation. Stub Citations do not contain Mandates. They don't convey any particular ideas of what the reader should do. They simply serve as a mechanism to present the information that follows. In the image above, the highlighted texts are all Stub Citations.
No Mandates (Informational)
No Mandate Citations, or Informational Citations, are Citations that contain no auditable action items and instead just provide information, e.g. "For the purposes of this document, an organization and a person acting as a data manager both fall under the same jurisdiction."
<urn:uuid:c1140436-3fda-48f8-8af0-8f50186e08b8>
CC-MAIN-2022-40
https://support.commoncontrolshub.com/hc/en-us/articles/115001488523-Is-this-a-stub-or-are-there-no-mandates-
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00066.warc.gz
en
0.889926
152
2.515625
3
Modern businesses generate, store and analyze vast amounts of data. Considering new technologies like the Internet of Things (IoT), the cloud, 5G, and artificial intelligence (AI), it shouldn't be a surprise that this information is being used for various purposes. Business applications, software, hardware, and other information technology (IT)-related assets are becoming increasingly important in today's fast-paced digital environment. Data integration is a standard process used to manage the growing amount of information.
What Is Data Integration?
In simple terms, data integration is combining information that comes from several different sources to create more unified datasets. These datasets can then be used for analytical, operational, or other related purposes. Integration is a core element of data management. Without it, information gathered by companies would have little use and would not meet employees' needs. Most companies gather data from various sources, both internal and external. Business applications and employees need to access this information to accomplish tasks or complete transactions. It would be challenging for them to combine data from different sources manually. Data integration pulls all this information together to make it easier for users to complete relevant tasks. It's often placed in a data warehouse, a central repository that users can easily access. For example, a loan officer may need to verify financial information before approving a home mortgage loan, such as records, property values, and credit history. Each task would be time-consuming and tedious without properly integrated data. Data integration allows companies to access a complete picture of key performance indicators (KPIs), supply chain management, regulatory compliance measures, cybersecurity, financial risks, and other essential information regarding business operations and processes.
Cybersecurity Risks Associated With Data Integration
Any professional working with data knows cybersecurity is a top concern. It's efficient for applications and programs to share data, but the downside is that it typically increases an organization's security vulnerabilities. Here are three examples of cybersecurity risks associated with data integration.
Risk #1: Data Silos
Access controls are an essential part of data governance, but they can present issues for security teams. Many organizations have a large collection of data silos that work independently. This can create uniform tracking, protection, and preservation challenges. Each data silo has its own set of operational and access control methods, which makes it difficult for IT teams to secure information at every stage of its life cycle. An increased number of data silos gives threat actors more opportunities to exploit vulnerabilities and execute attacks.
Risk #2: Burden on Developers
Security teams and developers are forced to take on the extra burden of securing information spread across multiple silos at every layer. Additionally, the burden is only exacerbated by new data privacy laws, and the stakes for protection are much higher. Development and security teams are often disconnected, making data security even more challenging to manage.
Risk #3: Insider Threats
Suppose enterprises focus all cybersecurity efforts on external threats and endpoints. In that case, they fail to acknowledge the potentially damaging insider risks within the organization.
General network security is critical, but organizations must also consider unknown and unmanaged insider threats. Some of the worst data breaches occur within the company, meaning data security must exist internally and externally.
Tips to Improve Data Integration Security
These risks can damage an organization, so companies must take active measures to manage their data integration practices. Security must be ingrained in data integration for the best protection. Here are some tips to improve data integration security in an organization.
Conduct Risk Assessments and Threat Modeling
Risk assessments and threat modeling are two essential components of cybersecurity and can help improve data integration security. Threat modeling analyzes the security of an application so problems can be mitigated and future attacks can be prevented. Risk assessments are similar to threat modeling because they analyze and assess privacy dangers when working with information. Consider implementing these strategies to secure data integration.
Build Audit Trails
A clear audit trail can help an organization manage any integrity issues. A data audit will profile information across several repositories and assess its quality and integrity. Audit trails assist with data integration security, but they can also help organizations adhere to regulatory requirements.
Leverage Data Integration Solutions
Many vendors offer data integration solutions to meet an organization's unique needs, given the high demand for these tools. Popular data integration tools include Hevo, Jitterbit, Talend, Informatica PowerCenter, and Oracle Data Integrator. It's critical to find solutions with security and compliance features that offer the best protection for integration.
Prioritize Security in Data Integration
Data integration is beneficial for businesses, but it does require enhanced security at each layer. It should not impede sharing or access to information, but it must protect sensitive data and keep it out of the hands of threat actors. Organizations that leverage integration should prioritize security as cyberattacks become more frequent and sophisticated.
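As a small, hedged illustration of what "integration" means in practice (not an example from any of the tools named above), the following Python sketch merges two made-up source extracts into one unified view, assuming the pandas library is installed; the column names and values are placeholders.

import pandas as pd

# Illustrative source extracts - in practice these would come from a CRM,
# a billing system, a credit bureau feed, etc.
crm = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "name": ["A. Smith", "B. Jones", "C. Lee"],
})
billing = pd.DataFrame({
    "customer_id": [101, 102, 104],
    "outstanding_balance": [0.0, 250.5, 75.0],
})

# "Integration" here is just a keyed join into one unified view that an
# analyst or loan officer could query instead of touching each silo.
unified = crm.merge(billing, on="customer_id", how="outer", indicator=True)

# The indicator column doubles as a basic data-quality check: rows present
# in only one source are exactly the gaps an audit trail should surface
# before the data lands in the warehouse.
print(unified)

The point of the sketch is the design choice, not the library: a single governed, auditable view is easier to secure consistently than many silos with separate access rules.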
<urn:uuid:90acbd0e-91d5-4a28-a284-efd17d51ee83>
CC-MAIN-2022-40
https://cyberexperts.com/data-integration-security-risks-and-tips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00267.warc.gz
en
0.914643
1,021
3.078125
3
Jun 06 2019
As we stated in our last blog, the tech trends of 2019 all point to data management and digital transformation. The second trend is Artificial Intelligence (AI), which refers to using machines to mimic the cognitive functions of human learning and reasoning. To be most effective and accurate, AI requires massive amounts of data, since the models adapt and adjust through training and added data. Of interest, the World Meteorological Organization just published a report indicating AI is contributing to efficiency gains in handling data for weather and climate predictions. Complex dynamic processes such as hurricanes, fire propagation, and vegetation dynamics can be better described with the help of AI. Perhaps this will balance out the interference with data collection created by 5G?
While the public seems wary of AI in general but accepting of experts such as NASA using AI for complex data analysis, how quickly will the public embrace new AI consumer products? Duplex is a Google Assistant tool that calls restaurants and books reservations. The AI-powered voice caused controversy because it is so lifelike. Nearly 73% of people polled say they are somewhat or very unlikely to trust that or similar products.
Douglas Merrill, former CIO and VP of Engineering at Google, advises companies to get ahead of the AI trend. He recommended four principles that companies need to adhere to for AI to be successfully integrated into their organization:
1) Don't be intimidated by AI. Understand the basics and use AI to enhance your company's current mission.
2) Keep it simple. The best AI projects are easily understandable by anyone affected.
3) You don't necessarily have to compile additional data. In fact, you probably already have a wealth of data that may be utilized with the right analytics.
4) AI is an ongoing expense, not a one-time investment.
Note his third point. You may already have all the data you need, but are you making the most of the data you have? Can you pull the relevant information from that data and utilize it? About 61% of companies with an innovation strategy are using AI to identify opportunities in data that they would have otherwise missed. Our computational storage approach and in-situ processing sort data faster and with a smaller infrastructure footprint for real-time data analytics. Data collection and data movement are not the answer. The key is making sense of the data you have. Find out more about how NGD Systems and computational storage can help you with your AI implementation.
<urn:uuid:a6b9ec4e-83e3-4d78-a50d-fb085c89169c>
CC-MAIN-2022-40
https://ngdsystems.com/tech-trend-2-artificial-intelligence-or-making-a-reservation-in-your-best-human-like-voice/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00267.warc.gz
en
0.942799
498
2.546875
3
A big part of maintaining a healthy Ubuntu machine is ensuring it has the right amount of system resources available. System resources should sufficiently accommodate any workload that may run on the machine, as well as meet the needs of the OS itself. Fortunately, there are three simple Ubuntu commands you can use to monitor hardware use.
Monitoring Disk Space: The DF Command
The first command to know is the DF command. DF stands for "Disk Free." As the name implies, the DF command is used to find out how much free disk space is available. If you come from a Windows background, you will find that the DF command works a little bit differently than the Get-ChildItem cmdlet or DIR command. Those commands display the current volume's contents and the amount of remaining disk space. By contrast, when you enter the DF command, Ubuntu will display a list of file systems present on the system, as well as the total size and amount of free space available within each file system. As a best practice, you should append the -H switch when using the DF command. The -H switch tells Ubuntu to put the results into a human-readable format. The output will be shown in megabytes and gigabytes as opposed to bytes. You can see what this command looks like in Figure 1.
Figure 1. The DF command shows the disk's contents.
Monitoring Processes: The Top Command
Windows systems use numerous background processes to perform low-level tasks within the operating system. There are also processes associated with any applications that you might choose to run. You can view these processes through the Windows Task Manager or by using the Get-Process cmdlet within PowerShell. The concept of processes is not unique to Windows. Ubuntu and other Linux systems also make use of processes. As in the case of Windows systems, some of these processes make extremely light use of the available hardware resources, while others tend to be far more demanding. You can see the processes that are running on an Ubuntu machine by entering the Top command. After you enter this command, Linux will present a summary of the total number of tasks that are running on the system. Linux will also give you a breakdown of the individual processes. This not only includes the amount of CPU and memory resources used by each process, but also the user who launched the process, the process ID, and the command that is tied to the process.
Figure 2. The Top command provides information about the processes that are running on the system.
If you try the command on your own system, the Top data will be continuously displayed until you press Ctrl+C. Incidentally, the screen capture shown in Figure 2 was taken from a command line-only Ubuntu shell running on the Windows Subsystem for Linux. The reason why most Linux deployments are command line only is that the GUI consumes a significant amount of system resources. To see just how much of a difference this makes, see Figure 3, which shows the results of running the Top command on an Ubuntu machine that has the Linux desktop installed.
Figure 3. The GUI consumes significant system resources.
Monitoring Memory Use: The Free Command
Finally, just as the Top command will show the processes running on your system, the Free command will show how the system's memory is used. Entering Free at the command prompt causes Ubuntu to display the system's total memory, how much memory is in use, and how much memory is free. You can also get information about swap memory by using the Free command. Figure 4.
The Free command shows how the system is using the available memory.
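If you want to collect the same disk and memory figures from a script instead of reading them off the terminal, the following Python sketch uses only the standard library. It is a rough, illustrative equivalent of the DF and Free commands rather than a replacement for them, and the /proc/meminfo parsing assumes a Linux system such as Ubuntu.

import shutil

def disk_report(path: str = "/") -> str:
    """Rough equivalent of the DF command for a single filesystem."""
    usage = shutil.disk_usage(path)
    gib = 1024 ** 3
    return (f"{path}: total {usage.total / gib:.1f} GiB, "
            f"used {usage.used / gib:.1f} GiB, free {usage.free / gib:.1f} GiB")

def memory_report() -> str:
    """Rough equivalent of the Free command, read from /proc/meminfo (Linux only)."""
    info = {}
    with open("/proc/meminfo") as fh:
        for line in fh:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    total = info["MemTotal"] / 1024
    available = info["MemAvailable"] / 1024
    return f"memory: total {total:.0f} MiB, available {available:.0f} MiB"

if __name__ == "__main__":
    print(disk_report())
    print(memory_report())

A sketch like this is handy when you want to log resource use over time or feed it into a monitoring dashboard rather than checking it interactively.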
<urn:uuid:fd1028f4-2d3f-42a8-aede-b5b1bc6e88f2>
CC-MAIN-2022-40
https://www.itprotoday.com/linux/3-ubuntu-commands-monitoring-system-resource-use
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00267.warc.gz
en
0.913957
746
3.125
3
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.
Consider the animal in the following image. If you recognize it, a quick series of neuron activations in your brain will link its image to its name and other information you know about it (habitat, size, diet, lifespan, etc…). But if, like me, you've never seen this animal before, your mind is now racing through your repertoire of animal species, comparing tails, ears, paws, noses, snouts, and everything else to determine which bucket this odd creature belongs to. Your biological neural network is reprocessing your past experience to deal with a novel situation.
Our brains, honed through millions of years of evolution, are very efficient processing machines, sorting out the ton of information we receive through our sensory inputs, associating known items with their respective categories.
That picture, by the way, is an Indian civet, an endangered species that has nothing to do with cats, dogs, and rodents. It should be placed in its own separate category (viverrids). There you go. You now have a new bucket to place civets in, which includes this variant that was sighted recently in India.
While we still have much to learn about how the mind works, we are in the midst (or maybe still at the beginning) of an era of creating our own version of the human brain. After decades of research and development, researchers have managed to create deep neural networks that sometimes match or surpass human performance in specific tasks.
But one of the recurring themes in discussions about artificial intelligence is whether artificial neural networks used in deep learning work similarly to the biological neural networks of our brains. Many scientists agree that artificial neural networks are a very rough imitation of the brain's structure, and some believe that ANNs are statistical inference engines that do not mirror the many functions of the brain. The brain, they believe, contains many wonders that go beyond the mere connection of biological neurons.
A paper recently published in the peer-reviewed journal Neuron challenges the conventional view of the functions of the human brain. Titled "Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks," the paper argues that, contrary to the beliefs of many scientists, the human brain is a brute-force big data processor that fits its parameters to the many examples that it experiences. That's the kind of description usually given to deep neural networks.
Authored by researchers at Princeton University, the thought-provoking paper provides a different perspective on neural networks, analogies between ANNs and their biological counterparts, and future directions for creating more capable artificial intelligence systems.
AI's interpretability challenge
Neuroscientists generally believe that the complex functionalities of the brain can be broken down into simple, interpretable models. For instance, I can explain the complex mental process of my analysis of the civet picture (before I knew its name, of course), as such: "It's definitely not a bird because it doesn't have feathers and wings. And it certainly isn't a fish. It's probably a mammal, given the furry coat. It could be a cat, given the pointy ears, but the neck is a bit too long and the body shape a bit weird."
The snout is a bit rodent-like, but the legs are longer than most rodents…” and finally I would come to the conclusion that it’s probably an esoteric species of cat. (In my defense, it is a very distant relative of felines if you insist.) Artificial neural networks, however, are often dismissed as uninterpretable black boxes. They do not provide rich explanations of their decision process. This is especially true when it comes to the complex deep neural networks that are composed of hundreds (or thousands of layers) and millions (or billions) or parameters. During their training phase, deep neural networks review millions of images and their associated labels, and then they mindlessly tune their millions of parameters to the patterns they extract from those images. These tuned parameters then allow them to determine which class a new image belongs to. They don’t understand the higher-level concepts that I just mentioned (neck, ear, nose, legs, etc.) and only look for consistency between the pixels of an image. The authors of “Direct Fit to Nature” acknowledge that neural networks—both biological and artificial—can differ considerably in their circuit architecture, learning rules, and objective functions. “All networks, however, use an iterative optimization process to pursue an objective, given their input or environment—a process we refer to as ‘direct fit,’” the researchers write. The term “direct fit” is inspired from the blind fitting process observed in evolution, an elegant but mindless optimization process where different organisms adapt to their environment through a series of random genetic transformations carried out over a very long period. “This framework undercuts the assumptions of traditional experimental approaches and makes unexpected contact with long-standing debates in developmental and ecological psychology,” the authors write. Another problem that the artificial intelligence community faces is the tradeoff between interpretability and generalization. Scientists and researchers are constantly searching for new techniques and structures that can generalize AI capabilities across vaster domains. And experience has shown that, when it comes to artificial neural networks, scale improves generalization. Advances in processing hardware and the availability of large compute resources have enabled researchers to create and train very large neural networks in reasonable timeframes. And these networks have proven to be remarkably better at performing complex tasks such as computer vision and natural language processing. The problem with artificial neural networks, however, is that the larger they get, the more opaque they become. With their logic spread across millions of parameters, they become much harder to interpret than a simple regression model that assigns a single coefficient to each feature. Simplifying the structure of artificial neural networks (e.g., reducing the number of layers or variables) will make it easier to interpret how they map different input features to their outcomes. But simpler models are also less capable in dealing with the complex and messy data found in nature. “We argue that neural computation is grounded in brute-force direct fitting, which relies on over-parameterized optimization algorithms to increase predictive power (generalization) without explicitly modeling the underlying generative structure of the world,” the authors of “Direct Fit to Nature” write. 
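As a toy illustration of the interpolation-versus-extrapolation point that underlies this "direct fit" argument (my own sketch, not code from the paper), the following Python snippet fits an over-parameterized polynomial to densely sampled data and then queries it inside and outside the sampled range. NumPy is assumed to be installed, and the polynomial degree and sample counts are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

# Densely sample a nonlinear "world" on [-1, 1] and fit a high-degree
# polynomial - a stand-in for an over-parameterized, direct-fit model.
x_train = np.linspace(-1.0, 1.0, 200)
y_train = np.sin(3 * x_train) + 0.05 * rng.standard_normal(x_train.shape)
coeffs = np.polyfit(x_train, y_train, deg=15)

def predict(x):
    return np.polyval(coeffs, x)

# Interpolation: a point inside the densely sampled zone.
x_in = 0.37
# Extrapolation: a point well outside everything the model has seen.
x_out = 2.5

print("inside training range :", predict(x_in), "vs true", np.sin(3 * x_in))
print("outside training range:", predict(x_out), "vs true", np.sin(3 * x_out))
# Typically the first pair agrees closely while the second diverges wildly,
# which is the interpolation/extrapolation asymmetry the argument leans on.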
AI's generalization problem
Say you want to create an AI system that detects chairs in images and videos. Ideally, you would provide the algorithm with a few images of chairs, and it would be able to detect all types of normal as well as wacky and funky ones. This is one of the long-sought goals of artificial intelligence: creating models that can "extrapolate" well. This means that, given a few examples of a problem domain, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn't seen before.
When dealing with simple (mostly artificial) problem domains, it might be possible to reach extrapolation level by tuning a deep neural network to a small set of training data. For instance, such levels of generalization might be achievable in domains with limited features such as sales forecasting and inventory management. (But as we've seen in these pages, even these simple AI models might fall apart when a fundamental change comes to their environment.)
But when it comes to messy and unstructured data such as images and text, small data approaches tend to fail. In images, every pixel effectively becomes a variable, so analyzing a set of 100×100 pixel images becomes a problem with 10,000 dimensions, each having thousands or millions of possibilities.
"In cases in which there are complex nonlinearities and interactions among variables at different parts of the parameter space, extrapolation from such limited data is bound to fail," the Princeton researchers write.
The human brain, many cognitive scientists believe, can rely on implicit generative rules without being exposed to rich data from the environment. Artificial neural networks, on the other hand, do not have such capabilities, or so the popular belief goes. This is the belief that the authors of "Direct Fit to Nature" challenge.
Direct fitting neural networks to the problem domain
"Dense sampling of the problem space can flip the problem of prediction on its head, turning an extrapolation-based problem into an interpolation-based problem," the researchers note. In essence, with enough samples, you will be able to capture a large enough area of the problem domain. This makes it possible to interpolate between samples with simple computations without the need to extract abstract rules to predict the outcome of situations that fall outside the domain of the training examples.
"When the data structure is complex and multidimensional, a 'mindless' direct-fit model, capable of interpolation-based prediction within a real-world parameter space, is preferable to a traditional ideal-fit explicit model that fails to explain much variance in the data," the authors of "Direct Fit to Nature" write.
In tandem with advances in computing hardware, the availability of very large data sets has enabled the creation of direct-fit artificial neural networks in the past decade. The internet is rich with all sorts of data from various domains. Scientists create vast deep learning data sets from Wikipedia, social media networks, image repositories, and more. The advent of the internet of things (IoT) has also enabled rich sampling from physical environments (roads, buildings, weather, bodies, etc.). In many types of applications (i.e., supervised learning algorithms), the gathered data still requires a lot of manual labor to associate each sample with its outcome.
But nonetheless, the availability of big data has made it possible to apply the direct-fit approach to complex domains that can’t be represented with few samples and general rules. One argument against this approach is the “long tail” problem, often described as “edge cases.” For instance, in image classifications, one of the outstanding problems is that popular training data sets such as ImageNet provides millions of pictures of different types of objects. But since most of the pictures were taken under ideal lighting conditions and from conventional angles, deep neural networks trained on these datasets fail to recognize those objects in rare positions. “The long tail does not pertain to new examples per se, but to low-frequency or odd examples (e.g. a strange view of a chair, or a chair shaped like an unrelated object) or riding in a new context (like driving in a blizzard or with a flat tire),” co-authors of the paper Uri Hasson, Professor at Department of Psychology and Princeton Neuroscience Institute, and Sam Nastase, Postdoctoral researcher at Princeton Neuroscience Institute, told TechTalks in written comments. “Note that biological organisms, including people, like ANNs, are bad at extrapolating to contexts they never experienced; e.g. many people fail spectacularly when driving in snow for the first time.” Many developers try to make their deep learning models more robust by blindly adding more samples to the training data set, hoping to cover all possible situations. This usually doesn’t solve the problem, because the sampling techniques don’t widen the distribution of the data set, and edge cases remain uncovered by the easily collected data samples. The solution, Hasson and Nastase argue, is to expand the interpolation zone by providing a more ecological, embodied sampling regime for artificial neural networks that currently perform poorly in the tail of the distribution. “For example, many of the oddities in classical human visual psychophysics are trivially resolved by allowing the observer to simply move and actively sample the environment (something essentially all biological organisms do),” they say. “That is, the long-tail phenomenon is in part a sampling deficiency. However, the solution isn’t necessarily just more samples (which will in large part come from the body of the distribution), but will instead require more sophisticated sampling observed in biological organisms (e.g. novelty seeking).” This observation is in line with recent research that shows employing a more diverse sampling methodology can in fact improve the performance of computer vision systems. In fact, the need for sampling from the long tail also applies to the human brain. For instance, consider one of the oft-mentioned criticisms against self-driving cars which posits that their abilities are limited to the environments they’ve been trained in. “Even the most experienced drivers can find themselves in a new context where they are not sure how to act. The point is to not train a foolproof car, but a self-driving car that can drive, like humans, in 99 percent of the contexts. Given the diversity of driving contexts, this is not easy, but perhaps doable,” Hasson and Nastase say. “We often overestimate the generalization capacity of biological neural networks, including humans. 
But most biological neural networks are fairly brittle; consider for example that raising ocean temperatures 2 degrees will wreak havoc on entire ecosystems."
Challenging old beliefs
Many scientists criticize AI systems that rely on very large neural networks, arguing that the human brain is very resource-efficient. The brain is a three-pound mass of matter that uses little over 10 watts of electricity. Deep neural networks, however, often require very large servers that can consume megawatts of power. But hardware aside, comparing the components of the brain to artificial neural networks paints a different picture. The largest deep neural networks are composed of a few billion parameters. The human brain, in contrast, is constituted of approximately 1,000 trillion synapses, the biological equivalent of ANN parameters. Moreover, the brain is a highly parallel system, which makes it very hard to compare its functionality to that of ANNs.
"Although the brain is certainly subject to wiring and metabolic constraints, we should not commit to an argument for scarcity of computational resources as long as we poorly understand the computational machinery in question," the Princeton researchers write in their paper.
Another argument is that, in contrast to ANNs, the biological neural network of the human brain has very poor input mechanisms and doesn't have the capacity to ingest and process very large amounts of data. This makes it inevitable for human brains to learn new tasks without learning the underlying rules. To be fair, calculating the input entering the brain is complicated. But we often underestimate the huge amount of data that we process.
"For example, we may be exposed to thousands of visual exemplars of many daily categories a year, and each category may be sampled at thousands of views in each encounter, resulting in a rich training set for the visual system. Similarly, with regard to language, studies estimate that a child is exposed to several million words per year," the authors of the paper write.
Beyond System 1 neural networks
One thing that can't be denied, however, is that humans do in fact extract rules from their environment and develop abstract thoughts and concepts that they use to process and analyze new information. This complex symbol manipulation enables humans to compare and draw analogies between different tasks and perform efficient transfer learning. Understanding and applying causality remain among the unique features of the human brain.
"It is certainly the case that humans can learn abstract rules and extrapolate to new contexts in a way that exceeds modern ANNs. Calculus is perhaps the best example of learning to apply rules across different contexts. Discovering natural laws in physics is another example, where you learn a very general rule from a set of limited observations," Hasson and Nastase say.
These are the kind of capabilities that emerge not from the activations and interactions of a single neural network but are the result of the accumulated knowledge across many minds and generations. This is one area where direct-fit models fall short, Hasson and Nastase acknowledge.
Scientifically, this is called System 1 and System 2 thinking. System 1 refers to the kind of tasks that can be learned by rote, such as recognizing faces, walking, running, and driving. You can perform most of these capabilities subconsciously, while also performing some other task (e.g., walking and talking to someone else at the same time, driving and listening to the radio).
System 2, however, requires concentration and conscious thinking (can you solve a differential equation while jogging?). “In the paper, we distinguish fast and automatic System 1 capacities from the slow and deliberate cognitive functions,” Hasson and Nastase say. “While direct fit allows the brain to be competent while being blind to the solution it learned (similar to all evolved functional solutions in biology), and while it explains the ability of System 1 to learn to perceive and act across many contexts, it still doesn’t fully explain a subset of human functions attributed to System 2 which seems to gain some explicit understanding of the underlying structure of the world.” So what do we need to develop AI algorithms that have System 2 capabilities? This is one area where there’s much debate in the research community. Some scientists, including deep learning pioneer Yoshua Bengio, believe that pure neural network-based systems will eventually lead to System 2 level AI. New research in the field shows that advanced neural network structures manifest the kind of symbol manipulation capabilities that were previously thought to be off-limits for deep learning. In “Direct Fit to Nature,” the authors support the pure neural network–based approach. In their paper, they write: “Although the human mind inspires us to touch the stars, it is grounded in the mindless billions of direct-fit parameters of System 1. Therefore, direct-fit interpolation is not the end goal but rather the starting point for understanding the architecture of higher-order cognition. There is no other substrate from which System 2 could arise.” An alternative view is the creation of hybrid systems that incorporate classic symbolic AI with neural networks. The area has drawn much attention in the past year, and there are several projects that show that rule-based AI and neural networks can complement each other to create systems that are stronger than the sum of their parts. “Although non-neural symbolic computing—in the vein of von Neumann’s model of a control unit and arithmetic logic units—is useful in its own right and may be relevant at some level of description, the human System 2 is a product of biological evolution and emerges from neural networks,” Hasson and Nastase wrote in their comments to TechTalks. In their paper, Hasson and Nastase expand on some of the possible components that might develop higher capabilities for neural networks. One interesting suggestion is providing a physical body for neural networks to experience and explore the world like other living beings. “Integrating a network into a body that allows it to interact with objects in the world is necessary for facilitating learning in new environments,” Hasson and Nastase said. “Asking a language model to learn the meaning of words from the adjacent words in text corpora exposes the network to a highly restrictive and narrow context. If the network has a body and can interact with objects and people in a way that relates to the words, it is likely to get a better sense of the meaning of words in context. Counterintuitively, imposing these sorts of ‘limitations’ (e.g. a body) on a neural network can force the neural network to learn more useful representations.”
<urn:uuid:c2d42d82-4f6a-41f3-8492-b44c12b612b2>
CC-MAIN-2022-40
https://bdtechtalks.com/2020/06/22/direct-fit-artificial-neural-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00267.warc.gz
en
0.938827
4,115
3.390625
3
After facing two years of disrupted education, many teenagers must now decide what step to take next in their life… continue in education or enter the workplace?
At the end of last year, the Office of National Statistics announced that the number of job vacancies in the UK reached 1.1 million between July and September 2021 – a record high. Good news, you would expect, for those young people choosing to seek work? Unfortunately, it's not that simple. Disadvantage gaps still exist, and brilliant young people from inner city and poorer areas are facing barrier after barrier.
Apprenticeships have often been heralded as a simple way to improve social mobility, as they offer opportunities to develop within the workplace in a wide range of roles. However, since the introduction of the Apprenticeship Levy in 2017, many larger employers are using it to help existing employees to take degree-level apprenticeship courses. They are developing from within, which does nothing to bring new, brilliant minds from a diverse range of backgrounds into the workforce. This is confirmed by a report by the Social Mobility Commission, which found that "workplace learners from more deprived backgrounds are less likely to get selected for an apprenticeship than their more privileged peers." To improve this, employers must see apprenticeships in a new way and focus on improving two crucial areas: recruitment and outreach.
Don't let your recruitment become a barrier
Recruitment needs to be inclusive and accessible to people from lower socio-economic backgrounds. For example, stipulating minimum requirements for GCSEs in English and maths on the advert may be a barrier to those young people who haven't got the grades. However, an apprenticeship would give them the opportunity to gain their English and maths qualifications as part of the programme, so it shouldn't prevent them from applying. The minimum requirement is unnecessary and serves only to make the recruitment process a blocker.
The UK Government wants to "unleash the power of the private sector to unlock jobs and opportunity for all" with its Levelling Up agenda. As well as giving more people across the country greater opportunities, it will also be crucial for the UK to recover from the pandemic and ensure that we can fill the skills gaps that we are already facing in STEM, digital and creative areas.
On a global scale, the government is also committed to supporting the UN's 17 Sustainable Development Goals, which set out a shared blueprint for peace and prosperity for people and the planet. Enabling social mobility will play a key part in achieving six of these goals:
- SDG 1 - End poverty in all its forms everywhere
- SDG 4 - Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all
- SDG 5 - Achieve gender equality and empower all women and girls
- SDG 8 - Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all
- SDG 10 - Reduce inequality within and among countries
- SDG 11 - Make cities and human settlements inclusive, safe, resilient and sustainable.
In July last year the Department for Education (DfE) published its Outcome Delivery Plan, which includes how its work will contribute to the delivery of the SDGs.
The report has a strong focus on driving growth through apprenticeships so "more employers and individuals can benefit." However, the UK will only be able to unlock opportunity for all, reduce inequality and achieve these SDGs when all people are aware of the opportunities on offer.
Don't expect a diverse range of candidates to come to you
For apprenticeship programmes to improve social mobility and 'level up' the UK, we need to see stronger connections between parents and teachers as the first step, then better outreach from employers and schools to promote apprenticeship opportunities. HR, recruitment and talent teams need to be involved: attending careers fairs, holding employability days, promoting opportunities on social media and actively targeting young people in areas where understanding of apprenticeships is low.
It's easy for larger employers, with well-known brand names, to rely on their traditional recruitment processes and expect candidates to come to them. This is not true for those who want to use apprenticeship programmes to benefit candidates from a wide range of backgrounds.
Take the Financial Services Customer Advisor apprenticeship programme for example. This is a role that serves a diverse range of customers and clients, so it makes sense for employers to look for candidates who will bring a mix of ideas, approaches and life experience to the role. To find these candidates, employers must work in partnership with schools, colleges and training providers to take a proactive and dynamic approach to their recruitment and outreach.
All young people deserve the chance to work in fulfilling roles, wherever they live, and whatever their background. Improving apprenticeship recruitment and outreach will give more people that chance.
At Capita, we work in partnership with our clients to develop apprenticeship programmes that help them to improve social mobility and build a strong, diverse and inclusive workforce. To find out more visit Capita apprenticeships.
<urn:uuid:1168c9d2-3edd-4fc7-ada1-b216c934ec84>
CC-MAIN-2022-40
https://www.capita.com/our-thinking/recruitment-and-outreach-must-improve-apprenticeships-enable-social-mobility?utm_source=Website&utm_medium=LinkedIn
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00267.warc.gz
en
0.958655
1,044
2.65625
3
Planning to hit the road for spring break? Getting ready to travel with your laptop for leisure or business? Whatever your spring break plans are, you have to be careful of potential cyberattacks that could compromise your data and security. As important as it is to implement safe cyber practices at home, it's even more important to be aware of cybersecurity risks while traveling. Whenever you travel, there's always the possibility of losing your laptop or smartphone or connecting to a public Wi-Fi network, leaving your data vulnerable to being stolen or lost.
Why a Public Wi-Fi Network Is a Hacker's Goldmine
Approximately 24.7% of Wi-Fi hotspots around the world don't use any encryption at all, according to a report from Kaspersky Lab. This leaves your data vulnerable to hackers. A Wi-Fi hotspot is considered to be "secured" if it requires you to enter a password that conforms to the WPA2 or WPA standards for security codes. Wi-Fi hotspots that are unsecured due to the absence of encrypted data connections and strong password validation procedures are just the first of the dangers lurking on your devices. Hackers have several ways to target you on public Wi-Fi, such as:
Planting malware
Hackers can easily plant malware on your devices, especially if you allow file-sharing over an unsecured Wi-Fi network.
Executing a man-in-the-middle attack
This type of cyber attack allows a malicious actor to insert him/herself into a conversation between two persons, impersonate both of them, and gain access to information that they were trying to send each other.
Launching rogue Wi-Fi hotspots
Hackers usually set up rogue hotspots that look like legitimate open hotspots. When you connect to a rogue hotspot, hackers can inject malware on your device or intercept your data.
Steps to Protect Yourself on Public Wi-Fi
Due to the security risks of free public Wi-Fi, you should use it with caution. Here are a few ways to keep yourself safe when using an unsecured Wi-Fi network:
Use a Virtual Private Network (VPN)
Whether you're connecting to a hotel hotspot or using airport Wi-Fi, a VPN will not only keep your data secure but also allow you to keep up with your favorite television shows from home by bypassing geographic restrictions. A VPN protects your connection by encrypting all your online activities, making it safe from intruders, no matter what Wi-Fi hotspot you connect to.
Turn off the Automatic Wi-Fi Connection Feature
While connecting to public Wi-Fi is convenient, it is notoriously unsecured, especially in hotel and airport lounges. To keep your data safe, you should disable the automatic Wi-Fi connection feature on your laptop or smartphone and only connect to networks you have verified to be legitimate. Unless you're currently using them, your device's Bluetooth or file-sharing services, such as AirDrop, should always be disabled as well.
Enable Multi-Factor Authentication
Multi-factor authentication requires you to provide a unique code when logging into your accounts, in addition to your password and username. The code itself is typically delivered via email or text message, making it much more difficult for cybercriminals to impersonate you and gain access to your account.
Patch and Protect
Before traveling, you should update all of your devices with the most recent software and operating systems. Also, make sure to install the latest security patches released by the software company. The more up-to-date your devices are, the less chance they have of getting hacked.
Don't Give Away Too Much Information
Be very careful about signing up for a public Wi-Fi network if you are asked for a lot of personal information, such as your phone number or email address. If you really need to connect to public networks like this, stick to places you trust and consider using an alternative email address that is not your primary one.
While you may be looking for relaxation and rest, cybercriminals can turn your vacation into a nightmare if you are not careful. If you would like to learn more about protecting your data while you are traveling, contact one of our experts today.
<urn:uuid:70c35d7a-1a08-4d06-8089-9dee9b694d4a>
CC-MAIN-2022-40
https://www.dwdtechgroup.com/network/keep-your-data-safe-while-on-spring-break/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00267.warc.gz
en
0.924927
893
2.546875
3
Dangers of Credential Stuffing to Businesses
What is Credential Stuffing?
Credential stuffing is when criminals steal usernames and passwords from one website and use them to try to log into different websites or applications. We are not talking about hacking into one account. We are talking about millions of stolen account credentials being fed by botnets into other websites to try to gain unauthorized access. Botnets try millions of username-password combinations until one of them allows access to a system. Credential stuffing succeeds when people re-use their passwords on multiple sites — especially their business accounts.
Account Takeover (ATO)
Much has been written about the dangers of having personal accounts hacked, including identity theft and emptied bank accounts. What is often overlooked is the potential link to business accounts. Leaked credentials increase the risk of cyber attacks on corporate networks, ransomware attacks on corporate computers, and data theft. Leaked employee credentials leave the door open for hackers to take over the employee's account and penetrate the corporate network. Enterprise account takeover (ATO) attacks most often begin with a data breach in which email addresses and passwords are stolen. Then, cybercriminals can gain access to and control a victim's account. Once control is established, the attacks can range from theft of data to ransomware. A recent Forrester report states ATO has caused between $6.5 billion and $7 billion in annual losses across financial services, insurance, ecommerce, and other verticals.
How big is the problem?
According to the latest Verizon Data Breach Investigations Report, the number of credential stuffing attempts organizations are experiencing per year ranged from thousands to billions. The median number of attempts enterprises experienced per year was 922,331. This situation was highlighted by the U.S. Securities and Exchange Commission, which warned of an increase in credential stuffing in the financial industry.
How are businesses fighting back?
Organizations are attempting to secure their employees' credential leaks with an increasing sense of urgency. The first step many businesses are taking is to adopt guidelines for password security. This begins with defining password requirements, such as the number of characters, the use of uppercase, lowercase, and special characters, change frequency, and so on. Next, many enterprises are strictly forbidding the use of corporate passwords for third-party applications. While these security measures are good practices, they leave enterprises one step behind the hackers. To be efficient and effective, you need to act before the credentials are sold. You must exhaustively scan for database leaks to take ATO prevention to the next level.
Preventing Account Takeovers
The final step in preventing credential weaponization from credential stuffing or other brute force attacks is credential monitoring. CybelAngel is the only digital risk protection platform that detects and manages leaked credentials before they are compromised. Our customers have the benefit of CybelAngel's data lake, which contains 10 billion exposed email addresses and passwords. This cache of exposed email addresses and passwords is continuously updated with scans of over 335,000 Deep & Dark Web posts and 3,000 newly exposed databases each and every day of the year. Now, add CybelAngel's Credentials Watchlist to your arsenal of cybersecurity weapons.
Our Credentials Watchlist is a real-time feed of compromised staff credentials coupled with exposure monitoring features. The Watchlist is integrated into the CybelAngel Digital Risk Protection Platform, where the feed is exposed across multiple dashboards. Our customers can access contextualized credential leaks and unique reports, all of which empower their teams to mitigate credential leaks and avoid the threat of VIP impersonations. CybelAngel can protect your business from the dangers of an Account Takeover. Click here to talk to one of our experts about a free trial.
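Alongside commercial monitoring, many teams also watch their own authentication logs for the tell-tale stuffing pattern described earlier: one source trying many different usernames. The sketch below is a simplified, hypothetical illustration in Python (the log records, threshold, and IP addresses are invented), not a description of how CybelAngel works.

from collections import defaultdict

# Each record is (source_ip, username, login_succeeded) - in practice these
# would be parsed from your authentication logs or SIEM.
events = [
    ("203.0.113.7", "alice", False),
    ("203.0.113.7", "bob", False),
    ("203.0.113.7", "carol", False),
    ("198.51.100.2", "dave", True),
]

USERNAMES_PER_IP_THRESHOLD = 3  # illustrative; tune to your own traffic

def flag_credential_stuffing(records):
    """Flag IPs that fail logins across many distinct usernames.

    One IP cycling through many accounts (rather than one account many
    times) is the classic credential-stuffing signature.
    """
    failed_users = defaultdict(set)
    for ip, user, ok in records:
        if not ok:
            failed_users[ip].add(user)
    return [ip for ip, users in failed_users.items()
            if len(users) >= USERNAMES_PER_IP_THRESHOLD]

print(flag_credential_stuffing(events))  # ['203.0.113.7']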
<urn:uuid:df8352e6-cb71-47e2-ae35-3ac0700de4ae>
CC-MAIN-2022-40
https://cybelangel.com/dangers-of-credential-stuffing-to-businesses/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00467.warc.gz
en
0.935418
780
2.515625
3
The Data Protection Act 2013, Data Privacy in Lesotho Lesotho’s Data Protection Act, 2013, also known as the DPA for short, is a data protection law that was passed in Lesotho in 2013. The DPA was passed to provide Lesotho citizens with the fundamental right to data protection and privacy, as this right is not explicitly given under The Constitution of the Kingdom of Lesotho. As such, legislation was needed to guarantee data subjects within Lesotho the right to data privacy. To this end, the DPA provides the principles for the regulation of the collection, processing, and disclosure of personal data in Lesotho, as well as the punishments that can be imposed as a result of failing to comply with the law. What is the scope and jurisdiction of the DPA? As it pertains to the personal scope of the law, the DPA applies to “a public or private body or any other person which or who, alone or together with others, determines the purpose of and means for processing personal information, regardless of whether or not such data is processed by the party or by a data processor on its behalf”. Moreover, the DPA defines data processing broadly to include the “collection, receipt, recording, organization, collation, storage, updating or modification, retrieval, alteration, consultation or use, dissemination by means of transmission, distribution or making available in any other form, merging, linking, as well as blocking, degradation, erasure, or destruction, of personal information”. Conversely, the territorial scope of the DPA states that the law applies to any person who processes personal data, whether they are: - An established or ordinary resident of Lesotho who processes data while in the country. - A non-established or non-ordinary resident in Lesotho who uses automated or non-automated means to process personal data in Lesotho, or where such means are used to forward personal data to other individuals or parties. What are the requirements of data controllers under the DPA? Under the Data Protection Act, 2013, data controllers within Lesotho must adhere to the following data protection principles: - Purpose specification and further processing limitation – The DPA mandates that the collection of personal data is limited to specific, explicit, and legitimate purposes, and forbids personal data to be further processed in a manner that is incompatible with these purposes. - Minimality – The DPA mandates that the processing of personal data is relevant, adequate, and not excessive. - Data retention – The DPA mandates that records detailing personal data that has been collected are kept for no longer than is necessary. - Information security – The DPA mandates that controllers take measures to secure the integrity of all personal data collected against loss, damage, unlawful access, and unauthorized destruction. - Quality of information – The DPA mandates that all personal data that is collected must be complete, not misleading, and kept up to date, whenever necessary. - Automated processing control – The DPA prohibits the processing of personal data solely on the basis of automated processing, subject to certain exceptions. What are the rights of data subjects under the DPA? The Data Protection Act, 2013 provides data subjects within Lesotho with various rights as it relates to the collection, processing, and dissemination of their personal data.
These rights include the right to rectification, with a charge to the data subject, as well as the right to access any personal data that a particular data controller may hold concerning them. What’s more, the DPA also provides citizens with the right to object to or opt-out of the processing of their personal data, as well as the right not to be subject to data processing decisions made solely on the basis of automated processing. Alternatively, the DPA does not provide data subjects with the right to be informed, or the right to data portability. In terms of penalties that can be imposed against data controllers who fail to comply with the law, the DPA is enforced by the Data Protection Commission or the Commission for short. As such the Commission is authorized to levy the following monetary penalty of up to LSL 50 million ($3,383), as well as a term of imprisonment of up to five years for the following offenses: - Violating any of the provisions or regulations of the DPA. - Obstructing, hindering, or otherwise unlawfully influencing the Commission, or any person acting on behalf of the Commission with respect to enforcement of provisions of the DPA. - Violating the rules of confidentiality as it applies to personal data. - Unlawfully and intentionally obstructing an individual in the execution of a warrant issued in accordance with the DPA. - Failing to assist an individual in the execution of a warrant issued in accordance with the DPA, in instances where such assistance is reasonably required. Through the passing of the Data Protection Act, 2013, data subjects within Lesotho were provided the explicit right to privacy through legislation for the first time. While the DPA may not offer the same level of protection as the South African POPIA law, the Data Protection Act, of 2013 was nevertheless a turning point in the quest to achieve guaranteed data privacy rights for citizens of the country. As such, Lesotho has joined the ranks of the many African countries to guarantee the data protection and in turn privacy rights of their citizens through the means of legislation in the last decade.
<urn:uuid:8599803c-d815-4245-b73b-6cf351bb2cc9>
CC-MAIN-2022-40
https://caseguard.com/articles/the-data-protection-act-2013-data-privacy-in-lesotho/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00467.warc.gz
en
0.92953
1,123
2.8125
3
As an educator, I’m always trying new things to engage my students and to prepare them for their future. A few years ago, I made the commitment to incorporate technology into my instructional model. Technology, after all, is second nature to our students. It only seemed appropriate to build on the skills these “digital natives” are developing outside of the school setting and make them part of our classroom setting. Since introducing interactive technology to my classroom, I have noticed a striking increase in student engagement. My students love it! By making an effort to incorporate interactive activities into my daily lesson plans, I’ve found an opportunity to better promote collaboration in the classroom. I try to incorporate lessons that include group work, discussion, debate, higher-order thinking skills and a variety of viewpoints, as well as lessons that allow me to differentiate instruction. Now our mobile devices have become a key tool in this process. Some teachers might assume that in order to use mobile devices effectively in the classroom, each student should have their own. But that isn’t always true. Instead, we occasionally use mobile apps to encourage group collaboration. For example, I can present a difficult word problem and push it to their mobile devices. They then work together in groups of 3-4 to come to an answer they all agree on. I can quickly push their group’s answer up on the digital whiteboard to easily present the solution to the entire class. When I do this, my students are excited, they are working together and they are learning from their peers. Adopting classroom technology has also broadened my ability to perform assessments. I can quickly check a student’s comprehension when presenting new material using multiple choice, true/false, short answer and essay questions. I used to have to ask the class if everyone understood a concept – an admittedly imperfect measurement. The problem was that many students were reluctant to indicate their lack of understanding to their peers. With the mobile app and the anonymous “clicker” capability, everyone participates and I obtain an instantaneous and true encapsulation of my classroom’s overall understanding. I can also tell which students are getting the concepts, as well as provide more one-on-one attention when I see that other students haven’t quite gotten it yet. Because mobile apps allow for quick and easy feedback, I now sometimes even ask students to show me what they understand about a lesson before I teach it. I now spend more time teaching concepts that I know kids are struggling with and less time on the areas they seem to already understand. This has helped guide my lesson plans and I’m seeing results. The mobile application even allows me to experiment with lessons that include quizzes, polls or contests to augment assessments with collaboration, further enhancing my students’ learning. It’s been an illuminating – and sometimes even surprising – experience to teach using technology… and I will never look back. Another aspect that has transformed the culture of my classroom is student-driven instruction. Every student with a mobile device can now interact with the lesson displayed on my interactive whiteboard. Mobile tools such as the MimioMobile mobile app allow my students to actively participate in the lesson from anywhere in the classroom.
So instead of waiting for me to call on students one at a time, each child can test their own understanding and interact with the lesson directly from their seat. And most importantly, I can catch more students progressing or in some instances, struggling with the flexibility to roam around the room. It is fun to watch their excitement about using highly responsive technology as part of their everyday routine. It has become second nature to them! I’ve found that as I continue to use various features, it has become natural for me to integrate software and hardware into my lesson plans. I’ve been able to bring resources that I’ve already prepared into the software, and I have started to take advantage of discussions through online teacher communities, as well as new lesson plans created by teachers, for teachers. It has allowed me to give inventive lessons to my students every single day. I have found that lectures and outdated textbooks do not produce the same level of engagement as a lesson integrating interactive tools. We owe it to our students to take advantage of the tools they are familiar with and keep them actively participating in their own education. For me, technology has been an excellent lesson resource, an easy, intuitive tool, and has made me a better, more effective teacher!
<urn:uuid:39abed36-6cb5-4bb1-84b6-6c08c206271b>
CC-MAIN-2022-40
https://mytechdecisions.com/compliance/im-never-going-back-how-technology-is-transforming-my-teaching/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00467.warc.gz
en
0.968581
937
3
3
I’ve been re-reading the Mandiant report on the notorious APT1 group, and it occurred to me that the tools and techniques used by this relatively unsophisticated (but very successful) group are similar to those used by penetration testers. That isn’t to say that penetration testers, or pen testers as they are colloquially known, are unsophisticated – the objective for a pen test is to simulate a computer attack. This simulation does not usually include evading detection – in fact, it’s usually quite the opposite. Due to limitations of time, a tester is easy to detect because they are trying to discover (and perhaps exploit) as many vulnerabilities as possible within a short window of opportunity. The same methodology used by pen testers is often used by some so-called APT groups and script kiddies with little thought given to the trail they are leaving. By understanding how a pen test works you will be able to better detect malicious actors attempting to attack your computer systems. A pen test is traditionally divided into these broad stages: - Footprinting and Reconnaissance - Scanning - Exploitation - Privilege Escalation - Lateral Movement These serve as a guide (or high-level methodology in our vernacular) – pen testers and malicious actors need to be pragmatic to achieve their goals. They might skip a stage or return to an earlier stage depending on the scenario. The stages described are part of a loose framework and not a rigid step-by-step set of instructions. It’s important to keep in mind that these phases can both occur outside of the network OR inside if an attacker has obtained a foothold via a Remote Access Toolkit (RAT) as part of a client-side campaign. Lurking in the Shadows The first stage of a pen test is Footprinting and Reconnaissance. This stage involves accumulating and examining a lot of data about the target network from publicly available sources, such as DNS, WHOIS services, company websites, and social networking sites such as Facebook, Twitter and LinkedIn. The key here is that the information obtained is free and authorised. One technique used to obtain information about an organisation is performing a DNS zone transfer in order to obtain a copy of a DNS server’s database for a particular domain. This database can give an attacker a wealth of information, including other name servers, host names, MX (Mail eXchange) records, and so on. Although it’s uncommon now for DNS servers to allow DNS zone transfers, that doesn’t stop pen testers, and malicious actors, from trying. A request for a DNS zone transfer might be an indicator that someone is in the early stages of performing reconnaissance against your network and/or organisation. DNS zone transfers are easy to detect using the following Snort rules: alert tcp $EXTERNAL_NET any -> $HOME_NET 53 (msg:"DNS zone transfer TCP"; flow:to_server,established; content:"|00 00 FC|"; offset:15; classtype:attempted-recon; sid:9000000; rev:0;) alert udp $EXTERNAL_NET any -> $HOME_NET 53 (msg:"DNS zone transfer UDP"; flow:to_server; content:"|00 00 FC|"; offset:14; classtype:attempted-recon; sid:9000001; rev:1;) Social networking sites are a treasure trove for a pen tester targeting your organisation. Pen testers will profile your employees in order to determine how best to craft a phishing email. Phishing emails attempt to acquire usernames, passwords and financial details by masquerading as a trusted entity. Phishing emails may also contain links to websites infected with malware or malicious attachments.
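Sticking with the zone-transfer reconnaissance described above for a moment, defenders can easily test their own authoritative servers for the same weakness. The sketch below assumes the third-party dnspython library and uses placeholder domain and name-server values; it simply attempts an AXFR and reports whether the server allowed it, which properly configured servers should refuse.

# Requires the third-party dnspython package (pip install dnspython).
# The domain and name server below are placeholders, not real targets.
import dns.query
import dns.zone

DOMAIN = "example.com"
NAME_SERVER = "192.0.2.53"   # IP address of the authoritative server being tested

try:
    # Attempt a full zone transfer (AXFR) from the name server
    zone = dns.zone.from_xfr(dns.query.xfr(NAME_SERVER, DOMAIN))
    print("Zone transfer ALLOWED - %d names exposed" % len(zone.nodes))
except Exception as exc:
    print("Zone transfer refused or failed:", exc)

Run this against your own infrastructure only; attempting transfers against third-party servers is exactly the reconnaissance the Snort rules above are designed to flag.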
The look and feel of these fake emails are identical to those from the legitimate source. For example, if an employee is known to have just returned from a conference in Hawaii (because they posted a Facebook update saying what a great time they’d had), then an attacker might craft a phishing email requesting that they open an attachment that contains information regarding next year’s conference. Of course, the attachment is malicious and upon opening it the employee’s system is exploited. Phishing has proven very popular with APTs because it provides a very high success rate. Verizon’s 2013 Data Breach Investigations Report refers to the delivery of malware using phishing emails as the Assured Penetration Technique (recycling the APT acronym) since success is almost guaranteed. The best way to avoid your employees being exploited in this way is to educate them. Warn them of the dangers of over-sharing on social networking sites and educate them about not clicking on links or opening attachments in unsolicited messages. The next stage is traditionally Scanning. This phase involves taking information obtained in the Footprinting and Reconnaissance stage and determining which systems are listening for network connections. A popular tool for scanning is Nmap. Nmap can perform ping sweeps, port scans and even basic vulnerability scans. In its default configuration Nmap will send a specially crafted ICMP echo request to determine whether a system is live. This packet differs from a normal ICMP echo request in that it contains no payload. Usually an ICMP echo request will contain a timestamp, a sequence number and a series of alphabetic characters. The following is a normal ICMP echo request captured using tcpdump: 10:39:16.777274 IP 172.16.150.134 > 172.16.150.2: ICMP echo request, id 30216, seq 1, length 64 0x0000: 4500 0054 0000 4000 4001 b5ff ac10 9686 E..T..@.@....... 0x0010: ac10 9602 0800 2578 7608 0001 944d af51 ......%xv....M.Q 0x0020: 22dc 0b00 0809 0a0b 0c0d 0e0f 1011 1213 "............... 0x0030: 1415 1617 1819 1a1b 1c1d 1e1f 2021 2223 .............!"# 0x0040: 2425 2627 2829 2a2b 2c2d 2e2f 3031 3233 $%&'()*+,-./0123 0x0050: 3435 3637 4567 And the following is an ICMP echo request sent using Nmap, captured using tcpdump: 10:39:11.831271 IP 172.16.150.134 > 172.16.150.2: ICMP echo request, id 50491, seq 0, length 8 0x0000: 4500 001c 3444 0000 2c01 d5f3 ac10 9686 E...4D..,....... 0x0010: ac10 9602 0800 32c4 c53b 0000 ......2..;.. An ICMP echo request with an empty payload is usually indicative of an Nmap scan and should trigger alarms when detected in your network. The following Snort rule will detect ICMP echo requests sent using Nmap: alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP ping nmap"; dsize:0; itype:8; classtype:attempted-recon; sid:9000002; rev:1;) Nmap will also send an ICMP timestamp request to determine whether a given host is live. If you don’t expect ICMP timestamp requests on your network then their presence might also be an indicator of an Nmap scan. The following simple Snort rule will detect ICMP timestamp requests: alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP Timestamp Request"; itype: 13; sid:9000003; rev:1;) Many organisations block ICMP at their network perimeter in an attempt to avoid revealing any information about systems on their network. To deal with this scenario, Nmap will also send TCP SYN packets to ports 80 and 443. If a TCP port is open then a host will respond with a TCP SYN/ACK packet (or in some rare cases with a TCP SYN packet).
If a TCP port is closed then a host should respond with a TCP RST packet. In either case a response is sent indicating that the host is live. This type of host discovery is stealthy since it never actually completes the TCP three-way handshake. (In the event that TCP ports 80 or 443 are open on the scanned host and it responds with a SYN/ACK packet, the scanner will respond with a RST packet to tear down the connection.) One possible method to detect these kinds of scans is to write IDS signatures to look for TCP SYN packets destined for hosts that you know do not have TCP ports 80 or 443 open. For example, a TCP SYN packet destined for your mail server might be considered unusual behaviour. The following simple Snort rule will detect a TCP SYN packet destined for ports 80 or 443 on your mail servers: alert tcp $EXTERNAL_NET any -> $SMTP_SERVERS 80,443 (msg:"Potential nmap SYN scan"; flags:S; sid:9000004; rev:1;) To really ensure that you are thwarting attempts to scan your network, ensure that your firewall only allows legitimate traffic required to conduct your business, including your internal firewalls. Networks are intended to be porous – not hollow. If an attacker is able to breach your perimeter then having your internal firewalls properly configured can thwart their attempts to gain further access. I Feel Used The third stage of a pen test is Exploitation. In this stage any vulnerabilities or weaknesses found during Scanning are actively exploited. Different exploitation techniques are used depending on the vulnerabilities identified. Because the information security landscape is constantly evolving as vulnerabilities are discovered and patched it is difficult to identify one exploit that is indicative of a pen test. A tool often used by pen testers is Metasploit, which contains over 1000 exploits. The goal of exploitation is to abuse the system to achieve some nefarious goal, which is achieved using a payload. There are over 200 payloads supported by Metasploit, many of which can be trivially detected when not obfuscated to avoid detection. Like pen testers, malicious actors have common toolkits that they use for exploitation. They might amend them to suit their purposes, but they broadly remain the same. For example, variants of the Auriga malware are mentioned in both the Verizon and Mandiant reports. The appendices to the Mandiant APT1 report contains details of many others and how they can be detected. Once a host inside your network has been compromised a pen tester will try to escalate their privileges and move laterally to another host until her goal has been reached, which is usually to obtain system administrator access or access to a particular system. A pen tester will usually perform these steps manually, but a malicious actor would probably automate these steps by utilising backdoor and command and control features of the existing malware to download additional malware. A malicious actor will also try to evade detection, which a pen tester will not tend to do. Privilege escalation involves elevating access to the host’s resources that are normally protected from the user resulting the user being allowed to perform unauthorised actions. This might be accomplished by logging a user’s keystrokes, capturing stored credentials, or even dumping password hashes from the domain controller. A pen tester will not usually look to evade detection, but this step is key to malicious actors to ensure their continued success. 
This is usually achieved by disabling security controls in place to detect malicious behaviour, such as Anti-Virus (AV) software or Host Intrusion Prevention Systems (HIPS), or unlinking the malicious process from the process list. Since a regular user is not normally authorised to perform these actions, privilege escalation is required to complete this step. A malicious actor will also attempt to hide their presence on the network. Dell SecureWorks discovered the use of a known (and fairly old) program called HTran during their investigation of the RSA breach in 2011. The purpose of this program is to disguise the source or destination of Internet traffic. However, they also found a debugging message generated by the program when an error occurred that betrayed the location of the hidden host. The following Snort rules were released by SecureWorks to detect these debugging messages, which could indicate the presence of HTran on your network: alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"HTran Connection Redirect Failure Message"; flow:established,from_server; dsize:<80; content:"|5b|SERVER|5d|connection|20|to|20|"; depth:22; reference:url,www.secureworks.com/research/threats/htran/; sid:9000005; rev:1;) alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"HTran Connection Redirect Failure Message (Unicode)"; flow:established,from_server; dsize:<160; content:"|5b00|S|00|E|00|R|00|V|00|E|00|R|005d00|c|00|o|00|n|00|n|00|e|00|c|00|t|00|i|00|o|00|n|002000|t|00|o|002000|"; depth:44; reference:url,www.secureworks.com/research/threats/htran/; sid:9000006; rev:1;) Pen testers and malicious actors will spread across the network looking to further their access in order to achieve their goals. Hosts are exploited, privileges escalated and detection evaded, all the while searching for and capturing the desired data and sending it out of the network. One way to spread throughout a network is to gain unauthorised access to systems by leveraging someone’s authorised access. The Verizon report showed that authentication-based attacks (i.e. guessing, cracking, or reusing valid credentials) factored into about four out of every five breaches involving hacking. One of the first things an attacker will try in order to further their access is to re-use credentials they have already obtained. After all, this is much easier than guessing or cracking passwords, and it’s surprising how often it succeeds. The guidance here is to avoid re-using passwords wherever possible – although this is sometimes easier said than done when managing an enterprise with 1000s of workstations. This blog post is just the tip of the iceberg. There are many tools and techniques shared by pen testers and malicious actors. Next time you receive a pen test report, request guidance on how exploitation of an identified vulnerability can be detected, so you can ring the alarm bell when a real-world attacker does something similar. Also, analyse your logs after the pen test to see what a real attack might look like there. Finally, a packet capture during the test will provide a wealth of information that can be used to discover indicators of an attack and create IDS signatures. Identifying malicious actors in your network is just the beginning – read our blog post on dealing with targeted attacks for a suggested incident response process. Published date: 20 June 2013 Written by: Will Alexander
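As a companion to the two HTran Snort rules above, the same debugging message can also be hunted for retrospectively in saved traffic or proxy logs. The sketch below, with an assumed input file name, searches a byte stream for the "[SERVER]connection to " marker in both its ASCII and UTF-16LE forms, mirroring the two signatures.

# Search a capture or log file for the HTran failure message in both encodings.
# The file name is an assumption for illustration.
ASCII_MARKER = b"[SERVER]connection to "
UTF16_MARKER = "[SERVER]connection to ".encode("utf-16-le")

def find_htran_markers(path):
    hits = []
    with open(path, "rb") as handle:
        data = handle.read()
    for label, marker in (("ascii", ASCII_MARKER), ("utf-16le", UTF16_MARKER)):
        offset = data.find(marker)
        while offset != -1:
            hits.append((label, offset))
            offset = data.find(marker, offset + 1)
    return hits

for encoding, offset in find_htran_markers("suspect_traffic.bin"):
    print("HTran debug message (%s) at byte offset %d" % (encoding, offset))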
<urn:uuid:3becaf24-e02a-4d46-98b9-1f3aebc8293b>
CC-MAIN-2022-40
https://research.nccgroup.com/2013/06/20/how-to-spot-a-penetration-tester-in-your-network-and-catch-the-real-bad-guys-at-the-same-time/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00467.warc.gz
en
0.920507
3,260
2.640625
3
These days, there is an increasing focus on sustainability efforts as part of “going green.” A number of organizations across different sectors are working to reduce the amount of electricity they are consuming every day. These initiatives not only ensure that the group is lessening its environmental impact; the resulting cost savings can also provide additional financial resources for mission-critical projects. While this is a common practice in many businesses, the trend has recently spread to include educational institutions as well. Although it can be difficult to lower energy usage with so many students and educators moving about the school, there are a few steps that individuals can take to curtail their electricity consumption. Educate students on the importance of energy usage One of the first steps educators should take is to explain to students the reason for the energy consumption project. Teachers can discuss the impact that electricity utilization has on the surrounding area and the environment as a whole. Administrators should explain that even taking small steps, such as some of those discussed below, can do a lot to contribute to these efforts when each individual takes part. Furthermore, to add to students’ motivation for participating in the energy reduction project, decision-makers can choose to put some of the money saved toward a small reward for pupils, such as a pizza party. Turn classroom computers off when not in use Much of the electricity consumed in schools goes toward technology. These days, the majority of institutions have a computer lab or classroom computers available for student usage. Teachers should encourage their classes to turn these devices off when not in use to conserve energy. Administrators and educators can also leverage a classroom management system to remotely establish settings that direct the user to turn the classroom computer off before leaving for the day. “Regardless of the teaching style you’re using for your class in this century, politely ask students to switch off their devices such as iPads and laptops when they’re not being used,” stated TeachThought. “Computers when left powered on can consume annual electricity of 1,000 kilowatts.” Leverage a PC power management system To further maximize energy savings on technology, administrators can deploy PC power management software to ensure that resources are automatically conserved. Such technology, like that provided by Faronics, can save an average of $50 annually per computer. For a school that has 100 computers, that equals up to $5,000 in savings, which can be put toward a class trip, supplies or other areas. Doors, windows and thermostats Teachers should share tips about energy savings that students can leverage at home as well. This can include opening the windows when weather allows instead of using the air conditioning and closing doors to ensure cool air or heat is not wasted. Raising the thermostat by a few degrees in the warmer months and lowering the temperature a small amount during cooler months can also help reduce energy usage. Classroom project: Tracking energy savings Another way to motivate students and help spread the energy conservation message is to have students track and analyze the amount of electricity saved by their efforts. For example, teachers can split the class into small groups and put each in charge of recording a different aspect of energy savings.
One school in California recently took this approach and discovered that they were able to reduce their electricity bills by 33 percent. While students should play a large role in these efforts, it is important that everyone in the school participates. “Saving energy will always require a dual effort from students as well as teachers,” TeachThought stated. “By teaching students of energy saving through the tips mentioned above, you’ll be able to reduce energy consumption of your institute and your students will be able to reduce their energy bills.”
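For the tracking project described above, even a rough model helps students see how the numbers add up. The figures in the sketch below (wattage, hours, electricity price) are assumptions for illustration only; a class would substitute its own measurements.

# Rough classroom model of electricity saved by powering down idle computers.
# All figures below are assumed example values, not measured data.
computers = 100            # number of classroom and lab PCs
idle_watts = 80            # assumed draw of an idle PC, in watts
hours_off_per_day = 14     # assumed overnight and after-school hours now powered off
school_days = 180          # assumed days in the school year
price_per_kwh = 0.12       # assumed electricity price in dollars per kilowatt-hour

kwh_saved = computers * idle_watts * hours_off_per_day * school_days / 1000.0
dollars_saved = kwh_saved * price_per_kwh

print("Estimated energy saved: %.0f kWh per year" % kwh_saved)
print("Estimated cost saved: $%.2f per year" % dollars_saved)

Student groups could each own one input, measure it, and compare the model’s estimate with the school’s actual electricity bills.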
<urn:uuid:2eec9cdf-2d06-45f2-a99d-1451a3e8a91f>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/energy-conservation-tips-for-saving-electricity-in-the-classroom
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00467.warc.gz
en
0.964545
767
3.203125
3
May 5, 2022 — The analysis of the millions of artworks that are part of the cultural and artistic heritage is a job that seems impossible for human beings, but not so for supercomputers. The European project Saint George on a Bike, coordinated by the Barcelona Supercomputing Center (BSC) in collaboration with the Europeana Foundation, started in 2019 with the aim of using the MareNostrum 4 supercomputer to train Artificial Intelligence models that help disseminate among citizens the wealth of European cultural heritage, recognition of its value, and understanding and awareness of its conservation and promotion. The project aims to generate automatic descriptions of hundreds of thousands of images from various cultural heritage repositories using natural language processing and deep learning algorithms. In a second phase of the project, the researchers launched a crowdsourcing campaign in Zooniverse, a citizen science portal based on open peer-to-peer collaboration, to collect thousands of manual annotations that help better train these Artificial Intelligence models. The campaign is completely open and anyone can participate by accessing this link. “Our project will allow quick access to enriched cultural information, which can serve equally well for cultural and social ends, education, tourism, and possibly for historians or anthropologists. Indirectly the citizens can benefit from better public services, when these are based on the insight that the richer metadata we produce offers – such as web accessibility for the visually impaired or narratives that can expose social injustice or integration and gender issues through cultural heritage corpora and help create a more tolerant European identity,” says Maria-Cristina Marinescu, BSC researcher and Saint George on a Bike project coordinator. To date, no AI system has been built and trained to help in the description of cultural heritage images with the maximum coverage of topics, objects and iconographic relations while factoring in the time-period and scene composition rules for sacred iconography from the 14th to the 18th centuries. “This ambitious project interprets images according to their context for the first time, and thus seeks to give machines a certain common sense, which is one of the great barriers to Artificial Intelligence today,” says Joaquim Moré, BSC researcher and the project’s expert in computational linguistics. “For instance, when it first identifies a motorbike in a 15th century painting of St George, it corrects itself and identifies the most plausible object for the period, which is a horse. This adaptation will also be made according to the cultural context. For example, in the Japanese cultural context, what in Europe we would call a knight would be a samurai,” he concludes. The project has also launched an inspiring video that highlights the use of Artificial Intelligence to detect images and compositions never seen before, the extraction of relations between thousands of images, and the opportunity to organize virtual exhibitions with related paintings from all over the world. Further information about the project: https://saintgeorgeonabike.eu.
<urn:uuid:17090233-4cbe-4097-8bcd-3331645be1a1>
CC-MAIN-2022-40
https://www.hpcwire.com/off-the-wire/bscs-saint-george-on-a-bike-ai-art-project-launches-citizen-science-campaign/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00467.warc.gz
en
0.920088
612
3.078125
3
As a business owner, data loss is more than just a nuisance; it can harm your business’s finances and credibility. Some studies have even found that nearly half of all small businesses are forced to close their doors after experiencing a data loss event. With disk imaging, however, you can protect your business from data loss. Disk imaging offers a simple solution for backing up data. What Is Disk Imaging? Disk imaging is the process of backing up an entire disk. Hard disk drives (HDDs), solid-state drives (SSDs) and even Universal Serial Bus (USB) drives support disk imaging. With disk imaging, the contents of the respective drive are copied and saved in a separate file. Disk imaging has been around for many decades. It emerged in the 1960s as an alternative to manual backups. The technology has changed since then, but disk imaging continues to offer a simple and effective way to prevent data loss. How Disk Imaging Works You can use disk imaging to back up any storage drive on your computer. It involves copying the entire contents of the drive. There are different ways to perform disk imaging. Most operating systems (OSs) have a built-in disk imaging tool. Alternatively, you can download and use a third-party disk imaging tool. Using either your OS or a separate tool, you can copy the storage drive. The backup file consisting of the copied data is known as a disk image. Best Practices for Disk Imaging If you’re going to leverage disk imaging to protect against data loss, there are a few things you should know. It can be time-consuming, especially for large storage drives. Whether you use your computer’s OS or a separate tool, it may take several hours to complete the disk imaging process. Some tools will automatically compress disk images. After a tool has copied the contents of your storage drive, for instance, it will compress the newly created file. Compressed files, of course, are smaller than uncompressed files. They won’t take up as much space. As a result, they are easier to transfer and easier to store. Encryption is another feature of many disk imaging tools. In addition to copying the contents of storage drives, they can encrypt the copied data. The disk images that they create will be encrypted. You don’t have to worry about these disk images falling into the wrong hands. Since they are encrypted, nefarious individuals won’t be able to access them. Encrypted files such as this require an encryption key to decipher.
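As a rough illustration of what an imaging tool does under the hood, the sketch below reads a source drive block by block and writes a compressed disk image. It is a simplified example rather than a replacement for your operating system’s imaging tool: the device path is an assumption, the drive should be unmounted, and the script needs administrator privileges to read the raw device.

# Minimal disk-imaging sketch: copy a raw device into a gzip-compressed image file.
# SOURCE_DEVICE is an assumed example path; point it only at a drive you intend to back up.
import gzip
import shutil

SOURCE_DEVICE = "/dev/sdb"            # assumed source drive (unmounted)
IMAGE_PATH = "backup-disk.img.gz"     # destination disk image, compressed
CHUNK_SIZE = 4 * 1024 * 1024          # copy in 4 MB chunks

with open(SOURCE_DEVICE, "rb") as source, gzip.open(IMAGE_PATH, "wb") as image:
    shutil.copyfileobj(source, image, CHUNK_SIZE)

print("Disk image written to", IMAGE_PATH)

Restoring is the reverse operation: decompress the image and write it back to a drive of equal or larger size. The encryption mentioned above would typically be layered on top with a dedicated tool or library.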
<urn:uuid:c61d7935-b3b8-4120-b02b-4acde9af19c1>
CC-MAIN-2022-40
https://logixconsulting.com/2022/08/09/disk-imaging-a-simple-solution-to-prevent-data-loss/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00467.warc.gz
en
0.943349
521
2.90625
3
Now, virtual server environments can abstract the physical characteristics of a server into software and so provide increased scale and utilisation of hardware resources. But, storage must still be provided to virtual server and virtual desktop machines, with the hypervisor taking on an important role as the virtualisation layer, abstracting physical storage resources to virtual devices. So, what has happened to the LUN? That depends on the virtualisation environment you’re using. Physical and virtual drives and LUNs Regardless of hypervisor type, the persistent retention of data needs some form of storage device, either a traditional hard drive or a solid-state disk (SSD). For block storage, VMware’s vSphere suite, including ESXi and Microsoft’s Hyper-V use fundamentally different approaches to presenting physical storage. VSphere systems take LUNs configured on a storage array and format them with the VMware File System (VMFS). This is a proprietary file format used for storing virtual machine files that takes advantage of on-disk structures to support highly granular levels of object and block locking. The reason this is necessary is that most vSphere deployments use a small number of very large LUNs, with each LUN holding many virtual machines. An efficient locking method is needed to ensure performance doesn’t suffer as virtual environments scale up. A single virtual machine is comprised of many separate files, including the VMDK or Virtual machine Disk. A VMDK is analogous to a physical server hard drive, with a virtual guest on vSphere potentially having many VMDK files, depending on the number of logical drives supported, the number of snapshots in use and the type of VMDK. For example, for thin provisioned VMDKs, where storage is allocated on demand, a guest hard drive will consist of a master VMDK file and many VMDK data files, representing the allocation units of each increment of space as the virtual machine writes more data to disk. By contrast, Microsoft has chosen to incorporate all components of the virtual machine disk into a single file known as a VHD (virtual hard disk). VHD files are deployed onto existing Microsoft formatted file systems, either using NTFS or CIFS/SMB. There is no separate LUN format for Hyper-V. VHD files allocated as thin volumes (known as dynamic hard disks) expand by increasing the size of the file and consuming more space on disk. Inside the VHD, Microsoft stores metadata information in the footer of fixed size VHDs and in the header and footer of dynamic VHDs. VHDs have advantages over VMDKs and VMFS in block-based environments in that the underlying storage of the data is NTFS, Microsoft’s standard file system for storage on Windows servers. This means VHD files can easily be copied between volumes or systems by the administrator without any special tools (assuming the virtual machine isn’t running of course). It also makes it easy to clone a virtual machine, simply by taking a copy of the VHD and using it as the source of a new virtual machine. This is particularly beneficial using the new deduplication features of Windows 2012 which can significantly reduce the amount of space consumed by virtual machines that have been cloned from a master VHD. Designing for performance and capacity The aggregation of servers and desktops into virtual environments means that the I/O profile of data is very different to that of a traditional physical server. 
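As a brief aside on the VHD format just described, its footer metadata is simple enough to inspect directly. The sketch below reads the 512-byte footer of a .vhd file and reports the disk type and virtual size, following the field offsets in Microsoft’s published VHD specification; the file name is a placeholder, and the snippet applies to the older .vhd format rather than the newer VHDX.

# Read the 512-byte footer at the end of a .vhd file (per the published VHD specification).
# The path is a placeholder used for illustration.
import struct

DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def read_vhd_footer(path):
    with open(path, "rb") as handle:
        handle.seek(-512, 2)          # the footer is the last 512 bytes of the file
        footer = handle.read(512)
    if footer[0:8] != b"conectix":    # the VHD footer cookie
        raise ValueError("not a recognisable VHD footer")
    current_size = struct.unpack(">Q", footer[48:56])[0]   # virtual size in bytes
    disk_type = struct.unpack(">I", footer[60:64])[0]
    return DISK_TYPES.get(disk_type, "unknown"), current_size

kind, size = read_vhd_footer("guest-disk.vhd")
print("VHD type: %s, virtual size: %.1f GB" % (kind, size / (1024 ** 3)))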
I/O workload becomes unpredictable as the individual I/O demands from virtualised servers can appear in any order and so are effectively random in nature. This is referred to as the “I/O blender” effect and the result is that storage provisioned for virtual environments must be capable of handling large volumes of I/O and for virtual desktops, to cope with “boot storms”, which result from high I/O demand as users start their virtual PCs in the morning and close down at the end of the working day. To guarantee performance, typical storage deployments will use a number of options: - all-flash arrays are becoming increasingly popular for virtual environments. For virtual servers, they provide consistent, predictable performance; for virtual desktops they handle the boot-storm issue with lots of I/O bandwidth. - Hybrid flash arrays use a mix of traditional spinning media and solid state, targeting active I/O at the solid-state storage using dynamic tiering technology. This provides a more attractive price point than all-flash arrays, as many deployments have large amounts of inactive VM data. - Advanced features – For vSphere these include VAAI (vStorage APIs for Array Integration) and for Hyper-V ODX (Offloaded Data Transfer). Both of these features offload repetitive tasks from the hypervisor and reduce the amount of data transferred over the storage network, when performing common tasks such as replicating virtual machines or initialising file systems. Ultimately, provisioning storage for virtual environments is all about getting the right IOPS density for the capacity of storage being deployed. This may seem difficult to estimate but can be taken from existing physical servers as part of a migration programme, or by pre-building some virtual servers and measuring IOPS demand. For virtual desktops, a good estimate is around 5-10 IOPS per desktop, scaled up across the whole VDI farm. This would require additional capacity to be built in for boot storm events. LUN performance and presentation For block devices, LUNs can be presented using Fibre Channel, Fibre Channel over Ethernet (FCoE) or iSCSI. Fibre Channel and FCoE have the benefit of using dedicated host bus adaptors (HBAs) or CNAs (Converged Network Adaptors) that make it easier to separate host IP traffic from storage network traffic. However, there are still some important design considerations even where a dedicated storage network is available. Firstly, there’s the option to present LUNs across multiple Fibre Channel interfaces for both resiliency and performance. We’ll take resiliency as a given, as that would be standard practice for storage administrators, but for performance, multiple HBAs (or dual-port HBAs) allow physical segmentation of vSphere and Hyper-V LUNs by tier for performance purposes. This may not seem like the most logical approach, but bear in mind LUNs presented to vSphere and Hyper-V are typically large, and so queue depth to individual LUNs can become an issue, especially with workloads of different priorities. This can be especially important where high performance all-flash devices have been deployed. For iSCSI connections, dedicated NICs should be used and multipathed for redundancy. Both Microsoft and VMware have deployment guides to show how to enable iSCSI multipathing. While on the subject, it’s worth discussing LUN sizes. vSphere (and less so Hyper-V) are limited in the number of LUNs that can be presented to a single hypervisor. 
Typically, storage for these environments is presented using large LUNs (up to 2TB) to maximise the presentable capacity. As a result, the users of that LUN, which could represent many hosts, all receive the same level of performance. Creating many LUNs of 2TB in size is quite expensive in storage terms. So, Thin Provisioning on the storage array presents a useful way to enable LUNs to potentially expand to their full 2TB capacity, while enabling multiple LUNs to be presented to a host to ensure I/O is distributed across as many LUNs as possible. Limits of the LUN and the future The grouping of storage for hypervisor guests at the LUN level represents a physical restriction on delivering quality of service to an individual virtual machine; all guests on a LUN receive the same level of performance. Microsoft recommends using a single LUN per VM, which may be restrictive in larger systems (and certainly represents a significant management overhead), but is still possible to achieve. VMware has stated its intention is to implement vVOLs – virtual volumes – to abstract the physical characteristics of the virtual machine storage from the storage array to the hypervisor. This would enable better granularity in terms of prioritisation of virtual machines and their I/O workload, even when they exist on the same physical array. But while some companies focus on removing the storage array completely, it’s clear there are benefits in retaining an intelligent storage array, one that understands and can communicate with the hypervisor.
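To make the sizing guidance above concrete, here is a small worked example of estimating IOPS demand for a virtual desktop deployment. The desktop count, per-desktop IOPS and boot-storm multiplier are assumptions for illustration; real designs should be based on measured workloads from existing systems.

# Rough VDI storage sizing sketch; every input is an example assumption.
desktops = 1000
steady_iops_per_desktop = 8    # within the 5-10 IOPS per desktop guideline quoted above
boot_storm_multiplier = 3      # assumed peak factor when many desktops start together

steady_iops = desktops * steady_iops_per_desktop
peak_iops = steady_iops * boot_storm_multiplier

print("Steady-state demand: %d IOPS" % steady_iops)
print("Boot-storm headroom: plan for roughly %d IOPS" % peak_iops)

The result, 8,000 IOPS steady state with headroom towards 24,000 IOPS at peak, is the kind of figure that drives the choice between the all-flash, hybrid or spinning-disk tiers discussed earlier.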
<urn:uuid:bbba3bc3-4f9f-4d02-a96a-2eba068e193e>
CC-MAIN-2022-40
https://www.computerweekly.com/feature/LUN-storage-management-for-vSphere-and-Hyper-V
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00467.warc.gz
en
0.915306
1,785
2.890625
3
5 Digital Health Care Advances Like many other umbrella technological concepts, digital health has no clear definition, but its promises are vast. Emerging as a buzzword in the mid-2000s, digital health was a broader and more-advanced-seeming successor to prior terms such as mHealth and wireless health. The extent to which the health care and health-tracking landscape has evolved in the past 15 years is difficult to gauge, but, broadly speaking, it’s perhaps fair to say the health care ecosystem hasn’t yet seen a revolutionary level of change. Still, it would be a shame to overlook the wave of innovative developments that highlight the potential of technologies like IoT to drive breakthroughs in the field — both in terms of patient care and beyond. Here, we highlight several such advances, including digital technologies to help facilitate quick emergency response times. An Autism Wristband That Predicts Aggressive Meltdowns Researchers at Northeastern University developed a wristband that probes physiological data to predict autistic episodes 60 seconds in advance with 84% accuracy. The scientists were able to achieve that outcome by measuring variables such as heart rate, skin surface temperature, perspiration and movement via a wristband. Rather than create a bespoke wristband, the researchers used an off-the-shelf E4 device from a company known as Empatica in a trial involving 20 youth. The E4 includes a photoplethysmography sensor, a 3-axis accelerometer, electrodermal activity sensor, infrared thermopile and an internal clock that can measure time increments as small as 0.2 seconds. While the prediction of an outburst a minute in advance may seem like a relatively small amount of time, it could be enough to enable a caregiver to help calm the patient. “If we could give caregivers advance notice, it would prevent them from getting caught off guard and potentially allow them to relax the individual and make sure everyone in the environment is safe,” said Matthew Goodwin, a Northeastern professor, in a statement. The group experimented with a variety of wearable technologies in the research, but “ultimately selected the E4 because it includes multiple autonomic [variables],” Goodwin said via email. In addition, the E4 hardware also had the benefit of logging physical activity via accelerometry and made raw data accessible. It was “not pre-processed with a black box algorithm as is the case with most consumer wearables,” Goodwin said. In addition, the system enabled streaming data in real time, which supports cloud-based analytics and IoT integration. The E4 has other advantages as well, according to Goodwin. The system “has been validated in scientific publications, has a relatively discrete form factor, and is packaged in a waterproof and shockproof housing that makes it highly durable. Additionally, the sensor builds off a predicate device that I was involved in developing.” For disclosure, Goodwin is a scientific advisor to Empatica. Using Drones for Medical Samples (and Emergencies) The idea of using consumer drones to respond to medical emergencies is not new. In 2014, a Dutch engineer created a drone carrying a defibrillator payload. According to researchers at Delft University of Technology, the technology could boost the chances of survival for a patient suffering from cardiac arrest from 8% to 80%. Perhaps unsurprisingly, researchers have since demonstrated that drones can respond to simulated emergencies substantially faster than traditional ambulances.
In a recent experiment, researchers using the DJI Phantom 3 Professional drone competed against ambulances in the Iraqi city of Erbil. A drone equipped with first-aid supplies reached patients in an average of 90 seconds — a full 120 seconds faster than the ambulance. While a variety of medical drone-related projects remain in development, a few are being commercialized. Earlier this year, for instance, UPS and urban delivery firm Matternet teamed up to use drones to deliver medical samples at WakeMed, a North Carolina health care system. The samples in question include blood and tissue collected during procedures at WakeMed. The FAA sanctioned the project. A Wearable Sweat Tracker Those two words have become commonplace in the United States in recent decades, leading the late comedian George Carlin to ask in the mid-1990s: “When did we get so thirsty in America? Is everybody so dehydrated they have to have their own portable supply of fluids with them at all times?” Yet the risks of dehydration are real. Side effects from drinking insufficient water include increased risk of injury to athletes, stress to the cardiovascular system, temporary brain shrinkage including diminished attention and memory, as well as other problems. But determining how much water to drink has traditionally been difficult. Researchers from the University of California, Berkeley and Northwestern University seek to improve the situation with the invention of a patch that measures sodium in sweat from the skin. By measuring sweat through the device, athletes could determine how much to drink to rehydrate. The research efforts at Berkeley and Northwestern are not related. The Northwestern effort, backed by Gatorade, enables patients to determine if they should drink more water and replenish electrolytes. The technology could also spot a biomarker for cystic fibrosis. Its diagnostic capabilities could evolve to include other diseases over time. The Berkeley research aims to improve scientists’ understanding of sweat metrics and understand differences in sweat-to-blood ratios in healthy and diabetic patients. The Berkeley scientists also concluded in an article published in Science Magazine that sweat monitoring is a “convenient tool for tracking hydration status,” while also determining more research is needed to understand how factors such as age, body mass and diet influence sweat composition. In the long run, sweat-monitoring sensors could be integrated directly into workout clothing. “For example Lycra or yoga pants would be perfect because they will be permanently in contact with your skin for longer periods of time,” Mallika Bariya from the University of California at Berkeley told NPR. The underpinnings of the modern insurance industry stretch back to circa 1686, when Edward Lloyd opened a new coffee shop near London’s docks. Patrons came to gather to drink tea and coffee, of course, but they also came to gossip and bet. As BBC explains, some visitors of the establishment bet Royal Navy officer Admiral John Byng would be executed “for his incompetence in a naval battle with the French.” That bet turned out to be right. Out of that culture of gambling, insurance opportunities began to dawn.
But once a seafarer set sail, it was often difficult to determine what had happened to a given vessel until it returned to the London port. The traditional insurance industry is similarly reactive. In the case of life insurance, for instance, an applicant needs to fill out scores of documents and undergo medical exams while trying to make sense of commission-driven sales staff. Companies like Ethos Life Insurance are aiming to change that equation. Armed with predictive analytics, Ethos says applying for life insurance can take minutes for the majority of applicants. The majority of applicants also are not required to undergo a medical exam. Other companies such as Ladder, Fabric and Quotacy have similar business models. From more of an IoT perspective, John Hancock announced it would cease offering traditional life insurance, replacing it with interactive policies that draw data from wearable devices and smartphones. The company launched an interactive life insurance option in 2015. The company’s policy rewards policyholders for achieving wearable-based fitness goals and for entering workouts and healthy meals into an app. Policyholders score premium discounts for hitting exercise targets tracked on wearable devices such as a Fitbit or Apple Watch and get gift cards for retail stores and other perks by logging their workouts and healthy food purchases in an app. Smart Speaker Skills for the Elderly While advocacy groups such as Consumer Watchdog, the Electronic Frontier Foundation and others accuse Google and Amazon of spying on users, the devices continue to be popular. More than 133 million smart speakers are in use around the world. The microphones embedded in such devices — and the cameras included in some home hubs — can provide a convenient means of communication with the elderly and their loved ones. On that note, State Farm and Amazon have teamed up to create a “skill” for owners of the Amazon Echo Show to help create “a virtual circle of support, coordination and communication at any time of the day while delivering a personalized experience to the senior,” according to a statement shared with CNBC. Owners of the Echo Show device can share alerts and allow visitors and caregivers to check in on them. Separately, Amazon is working on developing mechanisms for its Alexa devices to help manage health information and provide health-related reminders.
<urn:uuid:87d6f82a-8ab4-4fa8-936d-53713541c6c7>
CC-MAIN-2022-40
https://www.iotworldtoday.com/2019/09/23/5-digital-health-care-advances/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00667.warc.gz
en
0.952878
1,879
2.96875
3
DNS Load Balancing Definition Domain name system (DNS) load balancing is distributing client requests to a domain across a group of server machines by configuring a domain in the Domain Name System (DNS) to correspond to a mail system, a website, a print server, or another online service. What is DNS Based Load Balancing? Load balancing improves availability and performance by distributing traffic across multiple servers. Organizations speed up both private networks and websites using various types of load balancing, and most websites and internet applications would not function correctly or route traffic effectively without it. DNS, sometimes called the phone book for the internet, translates website domains such as avinetworks.com into IP addresses in a process called DNS resolution. Just as a phone book connects names and phone numbers, DNS turns domain names into long, numerical IP addresses so that connected devices can identify the right web servers. DNS resolution saves humans from having to memorize long, difficult number sequences to access applications and websites. In DNS resolution, user browsers make DNS queries, also called DNS requests: they request the correct list of IP addresses of destination websites from a DNS server. DNS-based load balancing improves availability and performance by distributing traffic across multiple servers, but it does so by providing different IP addresses through the DNS in response to DNS queries. DNS load balancers may respond to a DNS query using various rules or methods for choosing which IP address to share. Round-robin DNS is among the DNS load balancing techniques used most often. Advantages of DNS Load Balancing? The advantages of DNS load balancing include: Ease of configuration. Simply direct multiple DNS records for one hostname toward the various IPs serving web service requests. Traffic is routed at the DNS level so there are no additional server configuration changes to make and no software to install. Health checks. DNS load balancing health checks detect unhealthy and failed servers and remove them from client query responses almost instantly without affecting users. Scalability. All servers sit behind a single external hostname, so it’s possible to scale out and add servers dynamically without updating DNS name services. Improved performance. The traditional round-robin DNS approach accounts for neither server health nor server loading; DNS load balancing that factors in load and performance improves on this for high-volume sites. Drawbacks of DNS Load Balancing? Unfortunately, because DNS load balancing is a simple implementation, it also has inherent problems that limit its efficiency and reliability. Most notably, DNS always returns the same set of IP addresses for a domain because it does not check for network or server errors or outages, so at times it may direct traffic toward servers that are inaccessible or down. Another potential problem is that both clients and intermediate DNS servers or resolvers cache resolved addresses, both to reduce the level of DNS traffic on the network and to improve performance. The system assigns each resolved address a validity lifetime or time-to-live (TTL). Short lifetimes improve accuracy but increase the DNS traffic and processing that caching is meant to reduce. Meanwhile, long lifetimes may prevent clients from learning of server changes quickly. Standard DNS load balancing and failover solutions work well enough in many network environments.
However, for certain classes of network infrastructures, the standard DNS load balancing failover mechanism does not function well:
- Any Internet Service Provider (ISP) network for which DNS server failure would create performance issues for users at an unacceptable level, because DNS is part of core services.
- Large service providers with high volume network infrastructures such as cloud service providers, network carriers, and high transaction data center environments.
- Service providers and businesses whose infrastructures must maintain high end-user performance requirements, such as online retailers, stock traders, etc.
- Global Server Load Balancing (GSLB) implementations.
DNS Load Balancing vs Hardware Load Balancing Three major load balancing methods exist: DNS load balancing, hardware-based load balancing, and software-based load balancing. Here is the difference between DNS load balancing and hardware load balancing: Equipment. Hardware load balancing supplements network servers with actual hardware to distribute and balance traffic based on the specifications of the hardware itself. DNS load balancing distributes client requests across many servers in different data centers using a domain name configuration under the Domain Name System (DNS). Cost. DNS load balancing is typically lower cost, and may be a subscription. Hardware balancing has a higher cost up front for the equipment itself but does not usually involve additional costs until it is time to replace the hardware. Maintenance. Typically physical hardware demands maintenance of its own, while DNS server load balancing solutions include maintenance. Scalability. It is generally less expensive and easier to scale DNS load balancing, as users can utilize more servers by merely changing their subscription. Some providers offer global DNS load balancing services. Particularly on a global scale, it is more costly to scale and expand hardware balancing. How to Configure DNS Load Balancing To configure DNS load balancing for an API endpoint, a website, or another web service, point the A records for the website hostname to all IPs of the target machines. For example, five different machines serve requests for the website.com hostname, and each has a unique IP address. Configure DNS load balancing here with five separate A records for website.com, each pointing to a different target machine's IP address. Each new end user will be routed to a different IP address once the DNS changes have propagated. Cloud-based DNS load balancing can also handle mail server traffic. The most common approach to implementing DNS load balancing for a mail server is to assign all MX records for a given domain the same priority (usually a priority of 10). Most SMTP servers target the first record in a response, and every time a request is made, the SMTP server resolving the domain will get the MX records in a different order. DNS Round Robin vs Network Load Balancing Network load balancing is a broad term that describes the management of network traffic without detailed protocols for routing. DNS round-robin load balancing is a particular DNS server mechanism. DNS round-robin load balancing distributes traffic to improve site reliability and performance, just like other kinds of DNS-based load balancing. However, rather than using a hardware-based or software-based load balancer, DNS round-robin uses an authoritative nameserver, a type of DNS server, to perform load balancing. Authoritative nameservers contain A records or AAAA records. 
These DNS records contain the matching domain name and IP address for websites. The goal of a client DNS query is to find the single A (or AAAA) record of a domain. A DNS query will always return the same IP address in a basic setup, because each A record is tied to a single IP address. In contrast, domains have multiple A records in round-robin DNS, each tied to a different IP address. As DNS queries come in, they are spread across associated servers because IP addresses rotate in a round-robin fashion. If a round-robin DNS load balancer is using four IP addresses, every fourth request would return any one IP address. This makes it less likely that any one server will get overloaded. The round-robin approach is popular, but there are other traffic routing methods. Some DNS-based load balancing configurations use a weighted algorithm to assign traffic proportionately based on capacity in response to DNS queries. Examples of this type of load balancing algorithm include weighted least connection and weighted round-robin. Many approaches to DNS load balancing are dynamic, in that the DNS load balancers consider server response times and health when assigning requests. Dynamic algorithms all follow different rules and offer different advantages, but they do the same broader thing: optimize how traffic is assigned and monitor server health. Least connection is one type of dynamic load balancing algorithm. In this configuration, traffic is assigned to the server with the fewest open connections at the time, based on server monitoring. Another common dynamic algorithm is geo-location, where the load balancer assigns all regional requests to a defined server. For example, all requests originating in the US might go to server USA. Finally, a proximity-based algorithm instructs the load balancer to assign traffic dynamically to the user’s closest server. Does Avi Offer a DNS Load Balancing Solution? Yes. The Avi DNS virtual service is a generic DNS infrastructure that can implement DNS Load Balancing, Hosting Manual or Static DNS Entries, Virtual Service IP Address DNS Hosting, and Hosting GSLB Service DNS Entries. For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.
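To make the selection methods above concrete, here is a small Python sketch. It is purely illustrative and not part of any DNS server; the addresses, weights, and open-connection counts below are invented for the example. It shows how a DNS load balancer might pick an address for each query using round-robin, weighted, and least-connection rules:

```python
import itertools
import random

# Invented server pool for illustration only.
SERVERS = [
    {"ip": "203.0.113.10", "weight": 5, "open_connections": 12},
    {"ip": "203.0.113.11", "weight": 3, "open_connections": 4},
    {"ip": "203.0.113.12", "weight": 1, "open_connections": 9},
]

_rotation = itertools.cycle(SERVERS)

def round_robin_pick():
    """Plain round-robin: each answer simply moves to the next address in the list."""
    return next(_rotation)["ip"]

def weighted_pick():
    """Weighted selection: a server with weight 5 is returned ~5x as often as weight 1."""
    total = sum(s["weight"] for s in SERVERS)
    roll = random.uniform(0, total)
    for server in SERVERS:
        roll -= server["weight"]
        if roll <= 0:
            return server["ip"]
    return SERVERS[-1]["ip"]

def least_connection_pick():
    """Dynamic selection: return the server currently holding the fewest open connections."""
    return min(SERVERS, key=lambda s: s["open_connections"])["ip"]

if __name__ == "__main__":
    print([round_robin_pick() for _ in range(4)])  # cycles through the pool
    print(weighted_pick())
    print(least_connection_pick())
```

In a real deployment the chosen address would be returned in the DNS response, and resolver-side TTL caching (discussed under the drawbacks above) determines how quickly clients see a change.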
<urn:uuid:caaef1c6-a263-464d-94cc-d24cc4c1099a>
CC-MAIN-2022-40
https://avinetworks.com/glossary/dns-load-balancing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00667.warc.gz
en
0.897019
1,799
3.78125
4
The push for self-driving cars—at least here in the US—is happening mostly in the name of increasing road safety. More than 33,000 people die on US roads each year, and the National Highway Traffic Safety Administration says its data shows that "in an estimated 94 percent of crashes, the critical cause is a human factor." Advanced driver-assistance systems (think Tesla's autopilot or the semiautonomous mode on Audi's A4) are already a boon to drivers, reducing fatigue and keeping an ever-vigilant watch out for hazards, but the RAND Corporation has just published a study that suggests we may never be able to prove the safety of a self-driving car. "Under even aggressive testing assumptions," the authors write, "existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use. These results demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability." At issue is how much confidence one has in the statistics. Fatal crashes are rare events—in 2014, there were 1.08 deaths per 100 million vehicle miles. Proving an autonomous car is at least that safe, according to RAND, would take quite a while. "To demonstrate that fully autonomous vehicles have a fatality rate of 1.09 fatalities per 100 million miles (R=99.9999989%) with a C=95% confidence level, the vehicles would have to be driven 275 million failure-free miles," RAND says. "With a fleet of 100 autonomous vehicles being test-driven 24 hours a day, 365 days a year at an average speed of 25 miles per hour, this would take about 12.5 years." (RAND uses a figure of 1.09 deaths per 100 million miles.) RAND says that even simulations and virtual testing may not be enough to prove autonomous cars safe, posing a challenge for policy makers, insurers, and the car industry. At a recent discussion on regulating autonomous cars, state and federal regulators both expressed their concern over getting things wrong, fearing the inevitable pillorying that will accompany the first fatal autonomous car crash. Not everyone in the car industry is on board with autonomous driving. On Sunday, we reported that some automakers—ones that don't appear to have a heavy research portfolio in autonomous cars—have called for the NHTSA to slow down. But this seems unlikely. Regulators in the US want to harmonize regulation at the state level so that we don't end up with a patchwork of rules where cars happily drive themselves up until reaching a state line where control has to be handed over to the human on board. And in Germany, Chancellor Merkel has also promised to revise laws to allow for testing autonomous vehicles. Fully autonomous cars—what the NHTSA classifies as level 4, capable of driving from point A to point B with no human intervention—are being tested in a handful of locations in the US and elsewhere. But most industry experts Ars has spoken to think we're still more than a decade away from cars that can cope with Manhattan or Bangkok during rush hour.
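RAND's headline figures can be sanity-checked with a short calculation. Assuming fatal crashes follow a Poisson process (an assumption on my part, though it matches the standard reliability-demonstration approach), observing M failure-free miles rules out a fatality rate above r with confidence 1 - exp(-r*M); solving for M reproduces both numbers quoted above:

```python
import math

rate = 1.09e-8        # 1.09 fatalities per 100 million miles, expressed per mile
confidence = 0.95

# Failure-free miles needed so that exp(-rate * miles) <= 1 - confidence
miles_needed = math.log(1 / (1 - confidence)) / rate
print(f"failure-free miles needed: {miles_needed / 1e6:.0f} million")   # ~275 million

# Fleet of 100 vehicles driven 24 hours/day, 365 days/year at 25 mph
fleet_miles_per_year = 100 * 24 * 365 * 25
print(f"years of test driving: {miles_needed / fleet_miles_per_year:.1f}")  # ~12.5
```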
<urn:uuid:1aed6617-1631-4baa-ace2-89c9fb90f213>
CC-MAIN-2022-40
https://arstechnica.com/cars/2016/04/car-makers-cant-drive-their-way-to-safety-with-self-driving-cars/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00667.warc.gz
en
0.958533
667
2.859375
3
Dynamic routing protocols are used by Layer 3 network devices to automatically share routing information. Various routing protocols have been developed over the years, but most fall into one of three categories: - Interior Gateway Protocol – Distance vector, such as EIGRP or RIP - Interior Gateway Protocol – Link state, such as OSPF or IS-IS - Exterior gateway protocol – Path vector, such as BGP In this article, we’ll be comparing the characteristics of the two types of Interior Gateway Protocols (IGPs), specifically we’ll discuss Link State vs Distance Vector dynamic routing protocols. Distance Vector Routing Protocols Distance vector routing protocols are characterized by the fact that they determine the best path to a particular destination based on the distance to that destination. Distance is measured in several ways. For example, RIP uses a hop count as the distance, simply the number of routers you must traverse to get to the destination. Other distance vector protocols such as EIGRP use additional parameters to perform more efficient routing. For example, EIGRP can be configured to take into account network latency, link bandwidth, as well as traffic load, and reliability when making routing decisions. Routers participating in a distance vector routing protocol periodically exchange routing information with neighboring routers. Typically, it is the routing table itself as well as hop counts and other network traffic-related information that is shared among routers. Routers rely solely on the information from neighboring routers and do not assess the whole network topology when making routing decisions. The name “distance vector” comes from the fact that such a protocol uses vectors (also called arrays in mathematical language or direction of the route) and distances to other nodes on the network. It analyzes those distances to determine the best path. Distance vector routing protocols use the Bellman-Ford algorithm to calculate the best route. Routers using distance vector routing protocols do not maintain information about the whole network topology, but only of the routers to which they are directly connected. Each router advertises the distance to the networks it has learned about and receives information about networks its neighbors have learned about. This process continues until the routing tables of all the participating routers have stabilized. This is called convergence. Examples of distance vector routing protocols include: Routing Information Protocol (RIP) – uses only hop count as the measure of distance Interior Gateway Routing Protocol (IGRP) – uses multiple metrics for each route including bandwidth, delay, load, and reliability. It is currently considered obsolete and should not be implemented in production networks. Enhanced Interior Gateway Routing Protocol (EIGRP) – often called an “advanced distance vector routing protocol,” which sends only incremental updates which reduces the workload on the router and the amount of data to be transmitted. Basic Example of Distance Vector Topology The following topology shows a basic example of a simple Distance Vector routing algorithm (such as RIP for example): Each Node above represents a router device in a network. 
After the above network topology is stabilized (converged), here are the final distances stored at each Node in the network: Link State Routing Protocols Link state routing protocols are characterized by the fact that every node maintains a complete map of the network topology in the form of a graph, or a database. Within this database, each individual router knows which routers are connected to which other routers. Based on this complete map of the network, each router will independently calculate the best path to every possible destination in the network. This calculated best path is then installed within the routing table. The information shared between link state routers is connectivity related and is contained within what is known as a link state advertisement (LSA). These LSAs are shared in such a way that each participating router will have received an LSA from every node on the network. With the complete set of LSAs, a router produces a network map. The routing protocol is said to have converged when all routers in the topology have constructed the same network topology within their databases. This network map is then analyzed and a shortest path tree is created. This is a data structure that simply determines the best path to each destination based on the network topology and the link cost information that has been shared via the LSAs. From this tree, the routing table is then constructed. Examples of link state routing protocols include: Open Shortest Path First (OSPF) – among the most popular IGPs in production networks today Intermediate System to Intermediate System (ISIS) – most often used by ISPs for their internal networks Basic Example of Link State Topology The following topology shows a basic example of a simple Link State network topology. This algorithm uses accumulated costs along each path, from source to destination, to determine the total cost of a route. The cost of each path is determined by the routers using various factors such as speed of the link etc. Comparison of Distance Vector vs Link State Protocols EIGRP and OSPF are often considered flagship protocols of each type. Both are highly functional, and scalable, and have been extensively deployed worldwide. The most striking differences between them help to highlight the differences between distance vector and link state routing protocols. These differences include: - EIGRP has a flat structure and is highly scalable, while OSPF has a hierarchical structure to accommodate larger networks. This hierarchical structure results in an OSPF topology being separated into distinct areas. - EIGRP’s metric or distance is determined using a complex formula that takes various parameters into account while OSPF determines the metric based on the cumulative cost along the path to a particular destination. Comparison Table of Distance Vector and Link State The following table contains a more detailed look at the differences between these two routing protocol types. 
| | Distance Vector | Link State |
|---|---|---|
| Algorithm characteristics | Slower but more versatile | Faster but less versatile |
| Routers share | Routing tables | Link State Advertisements |
| Network | Knows only information from directly connected routers | Maintains a complete map of the network topology |
| Best path calculated | Based on shortest distance | Based on cost (link state) |
| Convergence time | Fast | Very fast |
| Complexity of deployment | Relatively simple | Somewhat more involved |
| Examples | RIP, IGRP, EIGRP | OSPF, IS-IS |

Both link state and distance vector routing protocols have been around for almost half a century, so the technology on which they are based is extremely mature. Each type delivers more than sufficient dynamic routing capabilities for most implementations. However, knowing the differences between them can help you make the decision of which to use in your situation.
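To make the algorithmic contrast concrete, here is a small Python sketch (illustrative code, not actual router software) that computes routes over the same invented topology in both styles: a Bellman-Ford-style distance-vector exchange, where each node only learns from its neighbours' tables, and a Dijkstra shortest-path-first calculation over the full map, as a link-state router would perform. The nodes and link costs are made up for the example:

```python
import heapq

# Toy topology: link costs between directly connected routers (invented values).
LINKS = {
    ("A", "B"): 1, ("B", "C"): 2, ("A", "D"): 5, ("D", "C"): 1, ("B", "D"): 2,
}

def neighbours(node):
    for (u, v), cost in LINKS.items():
        if u == node:
            yield v, cost
        elif v == node:
            yield u, cost

NODES = {n for link in LINKS for n in link}

def distance_vector():
    """Bellman-Ford style: nodes repeatedly merge their neighbours' distance tables."""
    table = {n: {n: 0} for n in NODES}   # distances each node currently knows
    changed = True
    while changed:                       # iterate until the tables converge
        changed = False
        for node in NODES:
            for nbr, link_cost in neighbours(node):
                for dest, dist in table[nbr].items():
                    new = link_cost + dist
                    if new < table[node].get(dest, float("inf")):
                        table[node][dest] = new
                        changed = True
    return table

def link_state(source):
    """Dijkstra's SPF over the complete map, as a link-state router computes it."""
    dist = {source: 0}
    heap = [(0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        for nbr, cost in neighbours(node):
            if d + cost < dist.get(nbr, float("inf")):
                dist[nbr] = d + cost
                heapq.heappush(heap, (dist[nbr], nbr))
    return dist

if __name__ == "__main__":
    print("Distance-vector table at A:", distance_vector()["A"])
    print("Link-state (Dijkstra) from A:", link_state("A"))
```

Both approaches arrive at the same shortest paths for this topology; the difference lies in what information each router needs and how much work it does to get there.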
<urn:uuid:54366a33-8ac8-47ec-8b76-5156ed335af4>
CC-MAIN-2022-40
https://www.networkstraining.com/distance-vector-vs-link-state-protocols/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00667.warc.gz
en
0.914448
1,459
3.609375
4
Flavors of OWL In OWL 101, we introduced OWL. Confusingly, there isn’t just one OWL standard. Instead, there are different “flavors” of OWL—each a distinct subset of the full OWL standard—that are simpler and, in some cases, more computationally efficient for reasoning than the full standard. Fortunately, regardless of which flavor you work with, they are all OWL (i.e., any ontology written using any subset of OWL features is still valid OWL and should be consumable by most OWL tools). By the way, they are all RDF too! That is to say, an OWL ontology is a collection of triples. This lesson briefly looks at those flavors (or more formally, “profiles”) and discusses when you might want to use one or the other. Note that this is a fairly technical lesson. - Why there are different flavors of OWL and when you should care. - OWL 2 / Full - OWL 2 / EL - OWL 2 / QL - OWL 2 / RL - Defunct OWL versions OWL might be used in two very different ways: - As a powerful data modeling language (ontologies of use). - As a way to inject automatic reasoning abilities (ontologies of meaning). The different flavors of OWL will only really matter to you if you’re using OWL to do automated reasoning. The reason for this (pun intended), is these different flavors of OWL exist purely to allow you to trade-off more expressive modeling power against the more time-consuming computational requirements of automated reasoning. Different flavors of OWL trade-off expressive modeling power for computational efficiency when performing reasoning. If you’re not using OWL for automated reasoning at all, then there is no trade off to consider; you can simply use the most expressive profile, OWL 2 / Full, to say whatever you like without regard to the worst case reasoning performance. OWL Profile Philosophy Before we dive into the official OWL profiles, it’s worth calling out a general guideline of how to think about them. In essence, each OWL profile corresponds to a specific usage model. In most cases, it should be clear which flavor is the most appropriate. The profile known as OWL 2 / Full allows you to use every construct available within OWL 2. OWL 2 / Full is by far the most expressive of the profiles. And if you are unconcerned with reasoning performance (perhaps because you aren’t using an automated reasoner at all), then this is what you want. If you are using an automated reasoner, however, then you’d best be careful. OWL 2 / Full defines a series of inference rules that are so complex that, in the best case, might run really slowly on today’s computers. In the worst case, it is possible to describe inferences using OWL 2 / Full that cannot be completed at all using any computer. Said another way, reasoning over OWL 2 / Full is undecidable. Thus if you’re using an automated reasoner, and you don’t want it to run forever and/or give incomplete results, then you should restrict yourself to a more limited profile. Aside from OWL 2 / Full, OWL 2 / EL (existential logic) is the most expressive profile we will consider. It is useful in cases where you have a fairly large number of classes and properties that are linked together through somewhat complicated relationships, and you want to use an automated reasoner to draw out (i.e., make explicit) further relationships. If you restrict yourself to OWL 2 / EL, then all these relationships between classes can be inferred fairly quickly, as can questions about which instances are members of which classes. 
(Here, “quickly” means polynomial time, for details see Computational Properties). For more details on OWL 2 / EL, see http://www.w3.org/TR/2009/REC-owl2-profiles-20091027/#OWL_2_EL, and http://www.w3.org/TR/2009/REC-owl2-primer-20091027/#OWL_2_EL. OWL 2 / QL Whereas OWL 2 / EL is geared toward a large number of intricately related classes and properties, OWL 2 / QL (query logic) is geared toward efficiently processing a large amount of instance data. For example, suppose that you have a database full of instances, and you also have an ontology written in OWL 2 / QL. Although OWL 2 / QL is still quite powerful, it is sufficiently limited that a query written in OWL 2 / QL can be fully rewritten as a SQL query. Because of this, OWL 2 / QL will often be the flavor to use for cases where you have large amounts of instance data that sit in a more or less traditional relational database. Note: we’ll even show you how to expose data in a relational database via a SPARQL endpoint in an upcoming, hands-on tutorial. For more information on OWL 2 / QL, see http://www.w3.org/TR/2009/REC-owl2-profiles-20091027/#OWL_2_QL, and http://www.w3.org/TR/2009/REC-owl2-primer-20091027/#OWL_2_QL Just as OWL 2 / QL is geared toward running efficiently on top of a relational database, OWL 2 / RL (rules logic) is geared toward running efficiently on traditional business rules engines. If your application depends on a rules engine, this is the best flavor of OWL to use. Specifically, it works best for data that has already been massaged into RDF and plays well with any rules that might be used to implement arbitrary business logic. For more information on OWL 2 / RL, see http://www.w3.org/TR/2009/REC-owl2-profiles-20091027/#OWL_2_RL and http://www.w3.org/TR/2009/REC-owl2-primer-20091027/#OWL_2_RL The above flavors of OWL are defined as part of the spec for OWL 2. The previous version of OWL back from 2004 defined a slightly different group of profiles: OWL/Lite, OWL/DL, and OWL/Full. Since these older profiles have now been superseded by OWL2 / EL, OWL2 / QL, and OWL2 / RL, we will not say much about them. The older subsets of OWL 1 are merely mentioned here since you are likely to see references to them on outdated Web pages. The choice of OWL flavor comes to down to your application: - If you need to reason efficiently over intricately related classes and properties, consider using OWL 2 / EL. - If you have a large amount of instance data and you primarily just want to query it efficiently using some moderately complex class relationships, consider using OWL 2 / QL. - If you have more complicated class relationships and are building on a typical business rules engine, consider using OWL 2 / RL. In practice, most vendors may only partially implement a given profile and may include just a few features from other profiles or even provide their own custom reasoning to boot. It pays to read the manual carefully. However, if you’re not using a reasoner at all and are only concerned with modeling your data, simply consider using OWL 2 / Full.
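As a concrete (and deliberately simplified) illustration of the rules-oriented profile: the open-source Python package owlrl implements OWL 2 RL-style materialization over an rdflib graph. The namespace and classes below are invented for the sketch, it assumes a recent rdflib and owlrl are installed, and a production rules engine would of course look quite different:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS
from owlrl import DeductiveClosure, OWLRL_Semantics

EX = Namespace("http://example.org/")   # invented namespace for the sketch
g = Graph()

# A tiny class hierarchy plus one instance assertion.
g.add((EX.Mother, RDFS.subClassOf, EX.Parent))
g.add((EX.Parent, RDFS.subClassOf, EX.Person))
g.add((EX.Alice, RDF.type, EX.Mother))

# Materialize the OWL 2 RL consequences (rule-based forward chaining) into the graph.
DeductiveClosure(OWLRL_Semantics).expand(g)

# The inferred triple is now explicitly present.
print((EX.Alice, RDF.type, EX.Person) in g)   # True
```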
<urn:uuid:60280f40-7244-4fa0-aa30-b1da16fef2da>
CC-MAIN-2022-40
https://cambridgesemantics.com/blog/semantic-university/learn-owl-rdfs/flavors-of-owl/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00667.warc.gz
en
0.901911
1,656
2.765625
3
RDFS vs. OWL We introduced RDFS and OWL as data modeling languages for describing RDF data. So which should you use? Simply put, both are used extensively in practice, but OWL is the primary modeling language in Semantic Web applications. We recommend using OWL as it builds on RDFS and provides more expressivity. This article provides a more thorough comparison of the two modeling languages. RDFS allows you to express the relationships between things by standardizing on a flexible, triple-based format and then providing a comparatively smaller vocabulary (such as rdf:type or rdfs:subClassOf) which can be used to say things about concepts in your area(s) of interest. OWL is similar, but bigger, better, and badder. OWL lets you say much more about your data model; it shows you how to work efficiently with database queries and automatic reasoners; and it provides useful annotations for bringing your data models into the real world. Said another way: if RDFS is the stuff that passes for coffee in the American Northeast, OWL is a hot cup of Italian espresso. First Difference: Vocabulary Of the differences between RDFS and OWL, arguably the most important is just that OWL provides a much larger vocabulary. For example, OWL includes all your old friends from RDFS such as rdf:type, rdfs:domain, and rdfs:subPropertyOf. However, OWL also gives you new and better friends! For example, OWL lets you describe your data in terms of set operations:
Example:Mother owl:unionOf (Example:Parent, Example:Woman)
It lets you define equivalences across databases:
AcmeCompany:JohnSmith owl:sameAs PersonalDatabase:JohnQSmith
It lets you restrict property values:
Example:MyState owl:allValuesFrom (State:NewYork, State:California, …)
We won't cover any of these predicates right now because, in fact, OWL provides so much new, sophisticated vocabulary to use in data modeling and reasoning that it gets its own lesson! Second Difference: Logical Consistency Another major difference is that, in contrast to RDFS, OWL tells you how you can and cannot use certain vocabulary. In other words, whereas RDFS provides no real constraint mechanisms, OWL does. For example, in RDFS, anything you desire can be an instance of rdfs:Class. You might decide to say that Beagle is an rdfs:Class and then say that Fido is an instance of Beagle:
Example:Beagle rdf:type rdfs:Class
Example:Fido rdf:type Example:Beagle
Next, you might decide that you would like to say things about beagles; perhaps you want to say that Beagle is an instance of dogs bred in England:
Example:Beagle rdf:type Example:BreedsBredInEngland
Example:BreedsBredInEngland rdf:type rdfs:Class
The interesting thing in this example is that Example:Beagle is being used as both a class and an instance. Beagle is a class of which Fido is a member; but Beagle is itself a member of another class: Things Bred in England. In RDFS, all this is perfectly legal because RDFS doesn't really constrain which statements you can and cannot insert. In OWL, by contrast, or at least in some flavors of OWL, the above statements are actually not legal (i.e., they are logically inconsistent). While one can model this in OWL, a simple consistency check will reveal the inconsistency. That is, it is logically inconsistent to say that something can be both a class and an instance. This is then a second major difference between RDFS and OWL. RDFS enables a free-for-all, anything-goes kind of world full of the Wild West, Speak-Easies, and Salvador Dali. The world of OWL enables more logical rigor. 
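For readers who want to see the Beagle example as actual triples: the sketch below writes it out with the Python rdflib library, together with the owl:sameAs equivalence from the vocabulary section. The EX and ACME namespaces are invented for the illustration, it assumes a recent rdflib, and the snippet only builds the graph; it does not run the OWL consistency check discussed above.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/")           # invented namespace
ACME = Namespace("http://acme.example.org/")    # invented namespace

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# The class/instance mixing from the Beagle example: fine in RDFS,
# rejected by stricter flavors of OWL (see the discussion above).
g.add((EX.Beagle, RDF.type, RDFS.Class))
g.add((EX.Fido, RDF.type, EX.Beagle))
g.add((EX.BreedsBredInEngland, RDF.type, RDFS.Class))
g.add((EX.Beagle, RDF.type, EX.BreedsBredInEngland))

# An OWL equivalence across data sources.
g.add((ACME.JohnSmith, OWL.sameAs, EX.JohnQSmith))

print(g.serialize(format="turtle"))
```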
Constraints and Computability Why does OWL concern itself with logic and constraints? From a technical point of view, the reason is related to the computing power required to implement the kinds of inferences which all this new vocabulary enables. For example, if I know that Example:Frank is of rdf:type Example:Human, and Example:Human is rdfs:subClassOf Example:Animal, then I can now infer that Example:Frank is also of type Example:Animal. It turns out that some kinds of inferences can be computed quickly. Others can take a REALLY long time to run even on today's fastest computers. Other kinds of inferences will never be solvable by ANY computer. Unlike RDFS, OWL lets you decide how expressive you want to be, given the computational realities involved. In fact, OWL allows you to restrict your data modeling options to those that enable faster search queries; those that enable conceptual reasoning; or those that can be easily implemented with rules engines. For more details, see the lesson on Flavors of OWL. From a knowledge representation perspective, OWL seeks to model data in an unambiguous, machine-understandable manner to promote data interoperability and greater automation. Providing mechanisms for modelers to validate their ontologies for logical consistency promotes model durability and deeper interoperability. To be sure, one can still "say anything about anything," but OWL provides a means to check for logical consistency and a means to apply constraints. Third Difference: Annotations, the meta-meta-data Suppose that you've spent the last hour building an ontology that describes your radio manufacturing business. During lunch, your task is to build an ontology for your clock manufacturing business. This afternoon, after a nice coffee, your boss now tells you that you'll have to build an ontology for your highly profitable clock-radio business. Is there a way to easily reuse the morning's work? OWL makes doing things like this exceedingly easy. owl:imports is what you would use in the clock-radio situation, but OWL also gives you a rich variety of annotations such as owl:versionInfo, owl:backwardCompatibleWith, and owl:DeprecatedProperty, which can easily be used to link data models together into a mutually coherent network of ontologies. Unlike RDFS, OWL is sure to satisfy all of your meta-meta-data-modeling needs. OWL gives you a more expressive vocabulary to use, which empowers you to develop expressive and rigorous data models. OWL allows you to tailor what you say based on the computational realities and application requirements, such as queries, rules, policy enforcement, etc. Further, OWL allows you to easily express relationships between different ontologies using a standard annotation framework. All these are advantages as compared to RDFS, and are typically worth the extra effort it takes to familiarize yourself with them. You'll still see RDFS used – often for older data models or smaller ontologies – so it's useful to have a familiarity with it. For yourself, stick to OWL. Next lesson: Flavors of OWL
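As a rough sketch of the annotation vocabulary mentioned above (the URIs are invented, and owl:imports is only recorded as a statement here; rdflib does not fetch or merge the imported ontologies for you):

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import OWL, RDF

g = Graph()

clock_radio = URIRef("http://example.org/clock-radio-ontology")   # invented URIs
clocks = URIRef("http://example.org/clock-ontology")
radios = URIRef("http://example.org/radio-ontology")

# Declare the combined ontology, reuse the morning's work, and annotate the result.
g.add((clock_radio, RDF.type, OWL.Ontology))
g.add((clock_radio, OWL.imports, clocks))
g.add((clock_radio, OWL.imports, radios))
g.add((clock_radio, OWL.versionInfo, Literal("0.1 - combined clock-radio model")))

print(g.serialize(format="turtle"))
```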
<urn:uuid:758a056f-d128-42c0-98cb-b9be2cbe5980>
CC-MAIN-2022-40
https://cambridgesemantics.com/blog/semantic-university/learn-owl-rdfs/rdfs-vs-owl/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00667.warc.gz
en
0.902254
1,546
2.90625
3
To recover OES file system data in the event of a disaster, perform a full system recovery of the Linux file system to rebuild the operating system. Before You Begin The full system restore operation for a Linux system includes the following general steps. Detailed instructions are provided in the following section. Install a default install on the system that you want to restore. Install the Linux File System iDataAgent on the default install. Create and mount a root file system on the system that you want to restore. If any additional file systems were lost, create and mount them as well. Use the Linux File System iDataAgent to restore your data. This recovery procedure is for basic systems without MetaDisks or Logical Volume Management software. For this procedure, you must use a default installation of Linux. To perform a full system restore: Note: Try to avoid the unconditional overwrite of the root directory on a live file system. This is a mechanism that allows an unconditional overwrite of open files in the root directory (/) on a live file system. Performing such a restore can result in an inconsistent system that may also fail to boot. Use this option AT YOUR OWN RISK. When you perform a full system restore, the client computer must have a default install partition with the Linux File System iDataAgent installed on the default install. Do not install the default install on the same disk partition that will contain the restored root file system. Install the default install with the networking option enabled. The TCP/IP, hostname, and domain name settings of the default install must match those of the system that you are restoring. Note: If you install the default install on an external drive, it can be used for other systems. However, you will have to remove and re-install the Linux File System iDataAgent software for each client. In addition, you will have to reconfigure TCP/IP, hostname, and domain name settings for each system. Load a default install (minimal install) Linux installation on a bootable partition of your system. Once the software is installed, boot that partition using a boot disk or LILO (Linux Loader). Install the Linux File System iDataAgent software. When the Linux File System iDataAgent is installed, continue with the next step. For more information, see UNIX/Linux File System Agent Deployment. Note: Use restored or saved data to obtain the information needed to repartition and label the new disk. Create partitions/slices on the disk by entering the following command: fdisk [-l] [-b SSZ] [-u] device Create the root file system by entering the following command: mkfs [-V] [-t fstype] [fs-options] /dev/<hda1> [size] where <hda1> is the Drive Identifier of the partition where you want to create the root file system. Mount the new root file system at /mnt by entering the following command: mount /dev/<hda1> /mnt where <hda1> is the Drive Identifier of the partition containing the root file system. If any other file systems existed on the root disk before the system crash, you must recreate them as well. For each file system, enter the following command: mkfs [-V] [-t fstype] [fs-options] /dev/<hda2> [size] where <hda2> is the Drive Identifier of the partition containing the file system that was lost. Create an empty directory called "proc" on /mnt as follows: mkdir /mnt/proc If you have recreated any file systems other than root, you must mount these as well. 
For each file system, enter the following commands: mount /dev/<hda2> /mnt/<file_system_name> where <file_system_name> is the name of the file system and <hda2> is the Drive Identifier of the partition containing the file system. From the CommCell Console, right-click the backup set that contains the backup data of the root file system, click All Tasks, click Restore, type "/" as the path to restore from, type or accept "/mnt" as the restore destination, and use the Advanced tab to exclude from the content of the restore any file systems that were not affected by the system crash. Note: Do not select Unconditional Overwrite from the Restore Options dialog box. When restoring encrypted data, refer to Data Encryption. Click OK to start the restore. Verify that the restore operation has completed successfully. If required, install either the LILO or grub boot loader (per the boot loader that was used in your environment) to the restored disk. Be sure to refer to the documentation for your distribution for the appropriate syntax and usage for your boot loader. For example, the following indicates use of the grub boot loader: grub-install /dev/<hda> where <hda> is the Drive Identifier of the disk containing the root file system. Exit and reboot the computer. The system boots to the newly restored root. The procedure is now complete. Note: If you have installed to a new server where the mount points may be different, be sure to edit the fstab and mtab files in the restored /etc directory to match the new server; also, be sure to edit the /boot/grub/menu.lst file to match the new configuration. Also, if there are hardware changes, be sure to reboot to single user first so that you can add new drivers to the operating system if needed.
<urn:uuid:548eb19c-d534-43af-bac4-19a56a68ca4e>
CC-MAIN-2022-40
https://documentation.commvault.com/v11/essential/59160_restore_data_linux_file_system_full_system_restore.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00667.warc.gz
en
0.859525
1,227
2.625
3
MIT Researchers Clear the Way Toward Robust Quantum Computing with Two-Qubit Operation or “Gate” (Mit.edu) MIT researchers have made a significant advance on the road toward the full realization of quantum computation, demonstrating a technique that eliminates common errors in the most essential operation of quantum algorithms, the two-qubit operation or “gate.” “Despite tremendous progress toward being able to perform computations with low error rates with superconducting quantum bits (qubits), errors in two-qubit gates, one of the building blocks of quantum computation, persist,” says Youngkyu Sung, an MIT graduate student in electrical engineering and computer science who is the lead author of a paper on this topic. “We have demonstrated a way to sharply reduce those errors.” “We have now taken the tunable coupler concept further and demonstrated near 99.9 percent fidelity for the two major types of two-qubit gates, known as Controlled-Z gates and iSWAP gates,” says William D. Oliver, an associate professor of electrical engineering and computer science, MIT Lincoln Laboratory fellow, director of the Center for Quantum Engineering, and associate director of the Research Laboratory of Electronics, home of the Engineering Quantum Systems group. “Higher-fidelity gates increase the number of operations one can perform, and more operations translates to implementing more sophisticated algorithms at larger scales.” The next generation of quantum computers will be error-corrected, meaning that additional qubits will be added to improve the robustness of quantum computation. “Qubit errors can be actively addressed by adding redundancy,” says Oliver, pointing out, however, that such a process only works if the gates are sufficiently good — above a certain fidelity threshold that depends on the error correction protocol. Up to this point, only small molecules have been simulated on quantum computers, simulations that can easily be performed on classical computers. “In this sense, our new approach to reduce the two-qubit gate errors is timely in the field of quantum computation and helps address one of the most critical quantum hardware issues today,” Sung says.
<urn:uuid:5b36d86f-b09d-4134-ae3a-3761fd77f087>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/mit-researchers-clear-the-way-toward-robust-quantum-computing-with-two-qubit-operation-or-gate/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00667.warc.gz
en
0.923607
447
2.796875
3
Each domain should have a number of DNS records that define, among other things, how website and email traffic are routed. This article looks at the most common DNS records. Name Server (NS) records point to where DNS records for a domain are hosted. You cannot change your domain's name servers via your hosting control panel. Instead, name servers are defined in the control panel for your domain registration. If we are the registrar for your domain then you can change the name servers via your billing account: Image: the Nameservers page in our billing account. In the above example, the name servers for the domain example.net are pointing to ns5.catalyst2.net and ns6.catalyst2.net (which are the name servers for our shared Linux servers). When a resolver tries to find the IP address for example.net it will obtain these name servers, and the resolver then knows that it can find more DNS records for the domain from those servers. Note: Like all DNS changes, it can take up to 24 hours for a change of name servers to fully propagate. The DNS records we will discuss next can all be viewed and edited via your hosting control panel. In cPanel, you can manage the DNS records for your domain as follows: Image: cPanel's Zone Editor The first record shown in the above image is the main A record for the domain example.net (in this article we use 192.0.2.10 as an example IP address):
example.net. A 192.0.2.10
A and AAAA records map a domain name to an IP address. The above record defines the host name (example.net.), the type of record (A) and a value (192.0.2.10). In other words, it tells resolvers that website traffic for the domain example.net should be routed to the IP address 192.0.2.10. A records are used for IPv4 addresses. For IPv6 addresses there is an AAAA record. It is likely that your website only has an IPv4 address. In that case there isn't an AAAA record in your DNS zone. As an eagle-eyed reader you may have noticed that there is a full stop at the end of the domain name ("example.net."). The trailing dot indicates that the name is a "fully qualified domain name". This is the opposite of a "relative" domain name, which we will look at now. If you look at the DNS zone for your domain you are likely to see more than one A record. For instance, there may also be A records for cpanel and webmail:
example.net. A 192.0.2.10
cpanel A 192.0.2.10
webmail A 192.0.2.10
Notice that the entries for cpanel and webmail don't have a trailing dot. Both are relative names and the host name ("example.net.") is implied. In other words, the host name "example.net." is appended automatically if there is no trailing dot. It is usually easier to enter the full domain. The following records are in effect identical to the records in the previous example:
example.net. A 192.0.2.10
cpanel.example.net. A 192.0.2.10
webmail.example.net. A 192.0.2.10
Canonical name (CNAME) records define an alias (they point a domain to another domain). This is useful if you got lots of A records pointing to the same IP address. In the last example we had three A records that all pointed to the IP address 192.0.2.10. We can make our DNS zone a little tidier by converting A records for subdomains to CNAME records:
example.net. A 192.0.2.10
cpanel.example.net. CNAME example.net.
webmail.example.net. CNAME example.net.
mail.example.net. CNAME example.net.
www.example.net. CNAME example.net.
Here, all four CNAME records resolve to 192.0.2.10. This works because the CNAME records point to the main A record, which in turn points to 192.0.2.10. 
CNAME records can also point to an external domain. This is commonly used by third-party services. The below example shows three CNAME records used by Office 365 (note: this is purely an example – the actual Office 365 records you need to add may be different):
autodiscover.example.net. CNAME autodiscover.outlook.com.
lyncdiscover.example.net. CNAME webdir.online.lync.com.
sip.example.net. CNAME sipdir.online.lync.com.
Mail exchange (MX) records define where mail for your domain should be routed to. In the following example the MX record is much like the CNAME records we created for our subdomains:
example.net. A 192.0.2.10
example.net. MX example.net. (0)
The MX record for example.net simply points to example.net, which means that both the website and email are hosted at 192.0.2.10. MX records have a priority field. In the above example the priority of our mail server is "0" (zero). As we have defined only one MX record the priority of the record doesn't really matter, but the priority is important if you have more than one MX record. You don't need more than one MX record but it is useful to have multiple records. It is commonly used for load balancing and as a fall-back solution. If you got a Linux hosting package with us you probably have two MX records:
example.net. A 192.0.2.10
example.net. MX example.net. (0)
example.net. MX backuk-dmx01.active-ns.com. (100)
The second MX record in the above example points to one of our fall-back servers. The MX record has a higher priority number which, confusingly, means that it has a lower priority. In other words, email for example.net is delivered to 192.0.2.10. If the email can't be delivered to the primary mail server then it is routed to the fall-back server instead. The fall-back server then queues the email and tries to deliver it to the primary mail server at a later time. Your website and email don't have to be hosted on the same server. For instance, the fall-back server we defined in the previous example points to an external server. If you are using our advanced spam filter then your MX records also point to an external server:
example.net. MX st2.mx.email-filter.net. (10)
example.net. MX st3.mx.email-filter.net. (10)
Incoming emails for example.net are now routed to the spam filter (which in turn delivers emails that are not marked as spam to your inbox). In the above example both MX records have the same priority (10). This is an example of load balancing: because the priority values are equal, incoming emails may be routed to either mail server, which spreads the workload. Similarly, if you use G Suite for your email your MX records should be as follows:
example.net. MX aspmx.l.google.com. (1)
example.net. MX alt1.aspmx.l.google.com. (5)
example.net. MX alt2.aspmx.l.google.com. (5)
example.net. MX alt3.aspmx.l.google.com. (10)
example.net. MX alt4.aspmx.l.google.com. (10)
If your email is hosted elsewhere you may need to check if your domain's email router is set up correctly. You find this option in cPanel by selecting Email Routing. Image: cPanel's Email Routing page. By default, the router is set to Automatically Detect Configuration. This is usually fine, but it can cause issues. In particular, if both the DNS and mail for your domain are managed elsewhere then it is important that the email router is set to "remote". Of course, we are happy to troubleshoot any mail delivery issues you may encounter. TXT records provide arbitrary text values for your domain name. 
If you got one of our hosting plans then you should see at least two TXT records: an SPF and DKIM record. Both are used to prevent email spoofing and make it less likely that your emails are marked as spam. You can learn more about these records on the SPF Records and DKIM records pages. TXT records are also commonly used by third party services that need to verify if you control a domain name. Such services may ask you to add records like these: example.net. TXT verification_code=234u39fd989098d983900d9d933d0d0d verifycode.example.net. TXT 234u39fd989098d983900d9d933d0d0d In the first example we added a TXT record for example.net and in the second we added a record for verifycode.example.net. It is worth noting that the ‘verifycode’ subdomain doesn’t have to exist – the value of the TXT record can still be retrieved via a DNS query.
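If you want to check records like these from a script rather than from the control panel, the third-party dnspython package can issue the same queries that dig does. The snippet below is a generic illustration: example.com is used as a stand-in domain, dnspython has to be installed separately, and dns.resolver.resolve() is the dnspython 2.x name (older releases call it query()).

```python
import dns.resolver  # third-party package: dnspython

DOMAIN = "example.com"  # stand-in domain for illustration

for record_type in ("NS", "A", "MX", "TXT"):
    try:
        answers = dns.resolver.resolve(DOMAIN, record_type)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{record_type}: no records found")
        continue
    for rdata in answers:
        print(f"{record_type}: {rdata.to_text()}")
```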
<urn:uuid:bfb51099-312f-46fb-a95a-8c35972d8cb3>
CC-MAIN-2022-40
https://www.catalyst2.com/knowledgebase/dns/common-dns-records/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00067.warc.gz
en
0.859264
2,066
2.921875
3
HOW TO AVOID DATA BREACHES What is a data breach? A data breach is an event resulting in the exposure of sensitive or confidential data, outside of a trusted environment. Every week, new data breaches are reported across the globe, many of which have far-reaching consequences for companies and their customers. Big multinational companies are often targeted, and hackers have repeatedly gained access to personal details including passwords, email addresses, credit card details and home addresses. This can have a devastating effect on consumer confidence. There’s no mistake about it: data breaches are extremely bad for business. It’s no surprise that cyberattacks are so common For criminals, hacking into sensitive or confidential data can be an easy way to gain large financial reward – and their job is often made easier if companies haven’t taken adequate measures to protect their data. Despite the regular occurrence of data breaches, there are steps you can take to mitigate the risk. Although you may never be able to guarantee your security 100%, companies do have the power to protect themselves and reduce their vulnerability to attack. In this article we’ll explore how data breaches happen alongside the methods cyber-attackers have used to gain access to sensitive data, providing actionable insights and tips on how to avoid a data breach in the future. Did you know that data breaches have multiple causes? Intentional data breaches In the case of an intentional data breach by hackers, the stolen data is often used for illegal profit. It may be associated with identity theft, allowing criminals to assume false identities to carry out illegal activities, or used by competing companies to gain insights into your business activity. Some hackers even hold companies to ransom, using ransomware to gain access and lock organizations out, in order to demand a payment in exchange for restored access to the IT environment. Hackers may be acting within the company, or outside of it. In most cases this is a result of an APT (Advanced Persistent Threat) that succeeds in penetrating the corporate infrastructure. Accidental data breaches Accidental data breaches may not be intended, but the end result may be just as harmful for business. This type of data breach may be caused by technical issues (if software is vulnerable to attack) or human error (if systems or security are incorrectly configured) on the part of the system administrator or DBA. A common example is the deployment of a cloud-based resource without either being aware of or following best practices. Inadvertent data breaches Inadvertent data breaches differ in that they stem from mistakes or oversights that the users of data are responsible for as opposed to the admins. Sometimes employees may unwittingly allow access to data to other employees that should not have access to perform their job but lost or stolen mobile devices, portable storage devices, errant e-mails, and hard copies are also equally common examples. This attack can also be due to depraved indifference where users simply don’t care or assume “someone else will take care of that – not my job.” How does a data breach happen? Methods used to breach data: - Spyware and viruses, where malicious software is installed in order to gain access to sensitive data. This can often be prevented using up-to-date anti-virus software, but cyber-attackers are always working to detect vulnerabilities and exploit them. - Phishing and smishing. 
Phishing is a technique used to fool unsuspecting users into giving up personal information such as usernames and passwords. It’s commonly done using email, but smishing uses SMS text messages to achieve the same goal. - Unsecured access points such as weak passwords or system structure failure. However diligent website administrators are, they are only human and may forget to secure back-end data. If someone really wants to hack into the IT environment, they will try hard to identify any of these vulnerabilities. - System glitches. A bug in your code, or other problems within your IT system can leave it vulnerable to a data breach. Hackers are always developing new methods Because hackers are always developing new methods of attack, it can be easy to fall victim to a new scam. For example, Business Email Compromise (BEC) is a form of phishing that is now gaining traction among cyber-thieves. This involves the impersonation of a person of importance within an organization, by constructing a false email to lure an employee to give up user access credentials, or to click on a link which may deliver malware, a Trojan, or other malicious payload. Sometimes, data breaches happen when companies are slow to respond to common vulnerability exposures (CVEs) or install the latest security patch. These breaches, known as CVE and patch exploitation, allow hackers to gain access through these security gaps in order to attack a company. Should your business be concerned? Common data breaches In 2020, phishing and malware are the two most common attacks attempted against businesses of all sizes, but other cyberattacks are close behind. You can use the analogy of holes in a boat – as one attack method is shown to work, it becomes commonly used – like water rushing to a hole. Hackers and cybersecurity bad actors typically take the easiest route in attempting to gain access or data from a target. Right now, phishing is a low work strategy with potentially high rewards: it is very easy to find millions of email addresses in the dark web, construct an email, and attempt to phish out information (such as access credentials) to unsuspecting targets. The biggest data breaches - even larger companies are at risk The biggest data breaches to date have affected giant multinational corporations including Microsoft, FedEx and British Airways. In 2018, Marriott Hotels revealed that personal data, including credit card details, belonging to up to 500 million guests had been accessed by hackers. Meanwhile, internet giant Yahoo! was breached on two separate occasions, suffering attacks that affected every one of its 3 billion users. Data breaches aren’t always related to the security of your data right now. In September 2019 it was discovered that the phone numbers of 20% of Facebook users (419 million people) were freely available in a database online, having been gathered when developers had access to these details. This permission was revoked in 2018, but it shows how easily historical vulnerabilities can be exploited. Data breaches are a serious threat. Every organization on the planet needs to have some sort of data security program in place. Whether they outsource the data security management to a MSSP (Managed Security Service Provider) or opt to host data security management in-house, it needs to be done. There are now too many ways in which sensitive or confidential data can be exposed when it should not be. Therefore, companies need to do more in their effort to protect the data. 
How comforte is helping to combat data breaches Unlike most other solutions, comforte protects the most sensitive and valuable asset held by any company – their data. A data breach happens when sensitive data is found and exfiltrated (on purpose or by accident). Removing the word 'sensitive' and replacing it with 'random' produces a "data breach of random data". Imagine the letters used in the board game 'Scrabble' dumped all over the floor, exposed for the world to see. This wouldn't be considered a data breach by data privacy regulations such as GDPR, nor by privacy professionals. comforte's data-centric solution does just that – it anonymizes and protects sensitive data, wherever and whenever it is used. Our universal solution transforms sensitive data – including names, social security or tax ids, credit card numbers, user ids and email addresses – into random characters which have no meaning. The value companies receive is two-fold: The rest of your company's security layers immediately receive a reduction in risk. Anti-virus and spyware software solutions become stronger; anti-phishing and smishing solutions become stronger; firewall and perimeter defenses become stronger – because the small amount of risk that these solutions still carry (if and when an attacker is able to get past them) is then covered by comforte's second layer of sensitive data anonymization. Think of comforte as additional insurance for when your existing security layers fail. This extra layer of protection enables companies to take a step towards meeting the compliance and regulatory requirements of data privacy laws and standards. At a minimum, data privacy laws and standards state that organizations must have 'reasonable data security' in place. Anonymizing sensitive data is more than reasonable – it is highly effective! Protection from spyware and viruses The first-line defense against spyware and viruses is to install the best anti-virus protection software and the best intrusion detection system available. That being said, simply installing such software is not enough, as both of these products may fail – for example, if a new virus bypasses the anti-virus detection, or if very sophisticated spyware manages to get past the intrusion detection software. If the data protected by such software is in clear text form, the attacker may be able to gain access to it. To prevent this, comforte AG offers a solution which does not leave text in clear text form. Therefore, if (or when) a spyware element or a virus gets past these security layers, sensitive data is still not exploitable by an attacker. Addressing phishing, smishing, and unsecured access points There are common-sense actions that companies can take to stop successful phishing and smishing attempts. These include using anti-phishing and anti-smishing products, as well as offering training classes to educate employees to be less susceptible to attacks. However, it's vital to address what might happen if a phishing or smishing attempt does succeed. In the same way that comforte AG protects against spyware and viruses, we deliver a data-centric solution that protects sensitive data rather than leaving it in clear text form. Even if a hacker gains access to your data, it can't be exploited. End-point-protection software solutions look to prevent malware or bad-intended code being executed on end-points, which can access sensitive data. 
As discussed above, comforte AG provides an extra layer of security beyond this, by ensuring that sensitive data is not stored in clear text form. If an attacker was able to take over an end-point and then request sensitive data held by a company, any data they received could not be exploited. A data-centric approach like no other comforte AG does not put in additional controls at each layer of the cybersecurity defense landscape. Instead, our solution places protections on the data itself – adding an extra level of protection to ensure that sensitive data can’t be read or exploited, even if a hacker manages to bypass anti-virus software and other protective measures. Data security across the globe The United States is still the number one country in terms of targets, due to the amount of data requested and collected by many companies. However, no country is immune to attack. In September 2019, a data breach resulted in stolen data from citizens in Ecuador, and similar incidents have occurred in both Panama and Australia earlier in the year. No country is safe from hackers and bad actors in today’s cyber-world. While the technical aspects of data security are the same throughout the world, regulations differ from nation to nation. According to the United Nations Conference on Trade and Development, there is some form of data privacy legislation active in 107 countries worldwide, so it’s vital for companies to ensure compliance wherever they operate. Typically, regulations will stipulate that companies must inform individuals if their data has been breached. They may also be liable for a fine. - Internationally, companies receiving payments must comply with PCI DSS, the Payment Card Industry Data Security Standard. - In Europe (including the UK), companies need to be aware of and comply with GDPR, the General Data Protection Regulation. - In the USA, there is no single piece of data protection legislation; companies must comply with multiple laws that are often sector-specific. Data breaches in different industries For a long time, the financial industry was the sector most targeted by cyber-attackers, due to the ease with which hackers and bad actors could gain access to credit card data. These days, it’s harder for cyber-attackers to achieve their goals, due to the rise of secure solutions for merchants and retailers, open banking applications and financial institutions. Now, it seems that the healthcare industry is becoming the most targeted sector, due to the amount of personal data stored. This is attractive to hackers, who stand to gain access to names, addresses, birthdates, social security numbers, insurance numbers, payment information, data relating to family and relatives, and more. This information can be used for identity theft, as well as other forms of cyberattack. How to avoid a data breach – actionable tips In today’s world, technology is changing so fast that it may be impossible to guarantee that any company can avoid a data breach. The goal for companies must be to take steps to reduce the likelihood of this happening, by protecting their data as far as they can. For companies holding and processing large volumes of data, a big data security strategy is essential. Data security best practices include: - Putting in place a company-wide security policy, and offering staff training to ensure compliance. This should educate staff on issues such as phishing and make provision for remote working, mobile devices and the Internet of Things. 
- Restricting access to sensitive data. - Using a firewall and data encryption. - Installing the latest anti-malware software and keeping this updated. - Regular data back-ups. - Implementing biometric security measures where possible, in addition to password protection. This multi-factor authentication approach will make it harder for unauthorized individuals to infiltrate your data. Recently there has been a shift towards a Data-Centric Approach – the top focus for comforte AG – allowing organizations to protect the sensitive and confidential data itself, rather than simply putting security on the environments around the data. This involves tokenizing or encrypting sensitive or confidential data, so that even if a hacker bypasses a firewall and gains access to cloud-based systems, the actual data is still protected. This reduces the threat of a data breach incident because the data doesn’t get exposed to the outside world. How to identify if your company’s data has been breached Your company may not be the first to know that its sensitive or confidential data has been accessed. Many past data breaches have been discovered by sources outside of their company – cybersecurity professionals, government agencies with a focus on cyber-protection, and sometimes white-hat hackers. Discovery of a data breach may not happen immediately, and could be several years after the event. However, there are steps that companies can and should take to detect cyber-incidents or data breaches when they happen. Monitoring and reporting with SIEM and SOC Technologies that can identify a data breach usually have a Security Information Event Management (SIEM) component installed on as many end-points in their IT environment as possible. SIEM components report suspected security events to a central server, where a security team is monitoring. Companies may also have a SOC (Security Operations Center) which monitors security events, investigates possible threats to the company, and may even defend the company against cyberattacks. If a data breach is detected, it’s important that you act quickly to inform the individuals affected and to address the cause of the breach. Passwords and other access credentials should be changed, and you may need to freeze access to sensitive data such as financial information. Act now - Get expert help to protect your data! Has your company been hacked in the past, or would a future data breach have a devastating impact on your business? As we’ve seen, it is imperative for companies to act urgently to protect their data. As well as the threat of a data breach and its consequences for business, companies must also comply with relevant data security and privacy laws. This may not be possible with your existing infrastructure, but you must take action to protect the company’s data or you could face potential legal action. At comforte, our data security experts have been working to combat cyberattacks for almost a decade, and our customers include the two of the largest credit card processors in the world. We can help you protect your data! Contact us for secure solutions, peace of mind and expert advice on how to avoid a data breach. Simply fill out the online form below and hit send or call one of our international offices directly today.
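To make the data-centric approach described above concrete, here is a minimal Python sketch of tokenization: sensitive values are swapped for random tokens, and the originals live only in a separate, tightly controlled vault. This is an illustrative toy, not comforte's product or any production-grade scheme; the class, field names, and sample values are invented for the example.

import secrets

class TokenVault:
    """Toy token vault: maps random tokens back to the original sensitive values."""
    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        # The token is random, so it has no mathematical relationship to the value.
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only systems allowed to reach the vault can recover the original value.
        return self._store[token]

vault = TokenVault()
record = {"name": "Jane Doe", "ssn": "123-45-6789", "email": "jane@example.com"}

# Downstream systems store and process only the tokenized copy.
protected = {field: vault.tokenize(value) for field, value in record.items()}
print(protected)

If an attacker exfiltrates the protected record, they obtain only random strings; the breach exposes nothing usable unless the vault itself is also compromised, which is why a real deployment isolates and strictly access-controls the tokenization service and its keys.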
<urn:uuid:20adf24b-2f08-4cc2-b606-bf5ef1707909>
CC-MAIN-2022-40
https://www.comforte.com/data-security/how-to-avoid-a-data-breach
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00067.warc.gz
en
0.942557
3,464
2.8125
3
This guide will give a deeper understanding of bit rates and their overall impact on the performance of surveillance devices. What Does Bit Rate Mean? A bit rate in video is essentially the amount of data that is being used and sent for that particular stream. Another simple way to think of it: bit rate is the quality of the video. More data means better quality video, but not without limitations. While there's a much more technical definition available online, the sentences above should sum it up into something easily digestible for most. Bit rates are, as of this writing, represented as a kilobits-per-second number, or "kbps" - for example, 256, 512, 1024, 2048, 4096 and up. Those specific values do not have to be used, however, and can often be adjusted higher or lower as needed. Bit Rate Location: The majority of surveillance products on the market allow adjustments to be made to many settings within the camera or recorder. Bit rate is no exception and is often found under the device configuration page, labeled Video/Audio or something similar. Conversely, some devices on the market do not allow adjustments to the bit rate. Bit Rate Type: There are two main bit rate types that can be selected, VBR and CBR. VBR stands for Variable Bit Rate, which means the bit rate will fluctuate in value depending on what's happening in the scene, with more activity/movement leading to a higher value and less activity leading to a lower, more stable value. When adjusting the bit rate value with VBR selected, you are usually setting the highest value (Max. bit rate) it will go to while it dynamically adjusts. CBR stands for Constant Bit Rate, which means the bit rate value (Max. bit rate) will stay the same regardless of what's happening in the scene. The value will not fluctuate higher or lower than the value currently set on the device. This can be positive for scenarios where consistent quality is an absolute requirement, but it can also reduce the total storage/days of video available due to the extra data constantly being used, and in some cases it consumes extra bandwidth as well. Resolution/Bit Rate Synchronization: When most people hear the word "quality" or "high definition" they think of the resolution of the video (720p, 1080p, 4K, etc.), and they aren't wrong; however, even with high resolution video, an incorrect bit rate value can make the image pixelated and ultimately useless as a form of evidence. There are numerous standard definition (SD) and high definition (HD) resolutions available to choose from on most surveillance cameras, and each resolution has a corresponding bit rate "sweet spot" that isn't a really high or really low value. Matching the resolution and bit rate values ultimately provides good performance and good quality video while avoiding potential problems. Keep reading for our recommended bit rates for most resolutions currently available. Bit Rate (Mis)configurations: Users often configure bit rate values too high or too low. This can have a detrimental impact on surveillance systems as well as on the quality of the image. Each scenario is different, but most of the negative results are due to exceeding the bandwidth limitations of the recorder and using unnecessary bandwidth (too high) or major pixelization of the video (too low). Bit Rate Too High: The "No Resource" error displayed on a Hikvision NVR, which is often caused by exceeding the bandwidth limitations due to an abnormally high bit rate set on one or more cameras. 
Bit Rate Too Low: A pixelated 4K IP camera with a bit rate value set at 256 kbps. Most cameras are configured with a default bit rate, which is often a good balance between performance and quality. For most scenarios the bit rate does not need to be adjusted, as the default values provide adequate video quality right out of the box. In addition, by adjusting the default values you increase the likelihood of causing problems such as the ones shown above, or others. Hopefully this guide has provided a better understanding of what bit rate is and the impact it can have. If you think venturing away from the default value could be beneficial in your scenario but still aren't exactly sure what value to use, you can find our recommended camera resolutions and corresponding bit rate values on this page. Here's a quick recap of everything: 1. Bit rates are essentially the quality of the video and are represented as a number value. 2. Most surveillance systems allow adjustments to be made to that value, usually in the device's configuration. 3. The bit rate value is often accompanied by either a Variable Bit Rate (VBR) or Constant Bit Rate (CBR) setting. 4. The resolution of the video should always correspond with the bit rate value. 5. Setting the bit rate value too high or too low can have a negative impact on the video quality, bandwidth, and operation of the recorder. 6. The default values usually don't need to be changed because they provide a good middle ground between overall performance and quality of video. 7. Our recommended bit rates for the majority of currently available resolutions can be found here.
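One practical way to reason about bit rate is to translate it into storage consumed per camera per day. The Python sketch below is a rough estimate that assumes a constant bit rate; with VBR the real figure will vary with scene activity, so treat the output as a ballpark only.

def storage_per_day_gb(bitrate_kbps: float, hours: float = 24.0) -> float:
    """Approximate storage used by one camera stream at a constant bit rate."""
    bits = bitrate_kbps * 1000 * hours * 3600   # kilobits/s -> total bits over the period
    return bits / 8 / 1_000_000_000             # bits -> bytes -> gigabytes (decimal)

for kbps in (256, 1024, 2048, 4096, 8192):
    print(f"{kbps} kbps ~ {storage_per_day_gb(kbps):.1f} GB per day")

Multiply the result by the number of cameras and the retention period to size a recorder's disks, and compare the summed bit rates of all cameras against the recorder's incoming-bandwidth limit to avoid the kind of "No Resource" condition shown above.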
<urn:uuid:9e0fd48d-ce00-43e4-855f-ca283952e37d>
CC-MAIN-2022-40
https://support.nellyssecurity.com/hc/en-us/articles/360045364014-Bit-Rate-Explained
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00067.warc.gz
en
0.935613
1,062
2.8125
3
Basics for Business Architecture: #2 – Business Processes & Business Rules
Professionals should always focus on business solutions first, then and only then on designing systems. Not just lip service, I mean applying the power techniques of true business architecture. The first of these techniques is structured business strategy. See: http://www.brsolutions.com/2015/05/31/basics-for-business-architecture-1-structured-business-strategy/. The second technique is business processes and business rules. Effective business solutions require architecting both the following:
- What is done to create value-add (business processes).
- What ensures value-add is created correctly (business rules).
Many professionals are unclear about the respective roles of business processes vs. business rules. At the risk of stating the obvious, let me make the following points.
- Business processes and business rules are different. They serve very different purposes: A business process is about doing the right things; business rules are about doing things right.
- There is no conflict whatsoever between business rules and business processes. In fact, they are highly complementary. Each makes the other better. If they don't fit hand-in-glove, somebody is simply doing something wrong.
- You need both. Neither can substitute for the other. Period.
Tags: business architecture, business processes vs. business rules, business rules vs business processes
Ronald G. Ross
Ron Ross, Principal and Co-Founder of Business Rules Solutions, LLC, is internationally acknowledged as the "father of business rules." Recognizing early on the importance of independently managed business rules for business operations and architecture, he has pioneered innovative techniques and standards since the mid-1980s. He wrote the industry's first book on business rules in 1994.
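A small code sketch can make the distinction concrete. In the Python below, the process is the sequence of value-adding steps (taking an order), while the business rules are separate, independently managed checks that ensure those steps are done correctly. The rules and field names are invented purely for illustration and are not drawn from any particular methodology or rules engine.

# Business rules: declarative checks, managed independently of the process.
BUSINESS_RULES = [
    ("Order total must be positive", lambda order: order["total"] > 0),
    ("Credit orders require an approved credit limit",
     lambda order: order["payment"] != "credit" or order["credit_approved"]),
]

def take_order(order: dict) -> str:
    """The business process: what is done to create value-add."""
    violations = [name for name, rule in BUSINESS_RULES if not rule(order)]
    if violations:
        # The rules ensure value-add is created correctly.
        return "Rejected: " + "; ".join(violations)
    # ...reserve stock, schedule delivery, invoice the customer...
    return "Accepted"

print(take_order({"total": 120.0, "payment": "credit", "credit_approved": True}))

Because the rules live outside the process logic, they can be reviewed, changed, and audited on their own, which is the hand-in-glove complementarity the post describes.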
<urn:uuid:5c7b71a2-0d75-47e9-a2ba-16df7a2877a2>
CC-MAIN-2022-40
https://www.brsolutions.com/basics-for-business-architecture-2-business-processes-business-rules/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00067.warc.gz
en
0.934394
394
2.515625
3
The impact of AI and machine learning on specific industries is undeniable. As was the case with mobility, different industries will adopt the new technology and leverage it to increase revenue, optimize operations, and increase productivity. Through AI and machine learning, organizations can become faster and leaner, and drive growth. The transportation industry relies and thrives on data. Companies such as Freightwaves are building entire platforms around data. Small amounts of insight can give companies indicators of how markets are moving (pricing, capacity, demand, etc.). The transportation industry has been ahead of the curve here as well. Companies have previously used satellite technology to track cargo across the world. Now, companies can use IoT technology to capture millions of data points and then use machine learning to turn these data points into actionable insights to improve operational efficiency and ultimately deliver better customer experiences. As we continue to gather more data around transportation, companies will be able to make informed decisions around buying behavior and begin incorporating predictive analytics into their business processes. For example, an organization may know when customers will be ready to order new freight capacity. This information is important for sales brokers, but it's also important for capacity planning. An AI platform in transportation will be able to automate information for logistics, supply chain, and planning. AI and machine learning will also play a key role in truck maintenance. By compiling information across the fleet, maintenance teams will be able to determine when and where trucks will need to undergo repairs and eliminate mid-shipment breakdowns. Avoiding such breakdowns is crucial to on-time delivery and helps save on costs. One of the major cost centers for transportation companies is delays in cargo arrival. If a shipper has committed to a certain service level agreement, it's crucial that they achieve it. In much the same way that services like Waze have revolutionized personal travel, AI will be able to automate route management for drivers to avoid traffic along their route. As AI systems continue to grow in knowledge, traffic management platforms will be able to predict traffic patterns across the entire fleet for maximum time (and cost) savings. Truck wear and tear is certainly something that can create havoc for transportation companies. By applying AI and machine learning technology to fleet maintenance, repairs can be done at the opportune time, striking the balance between avoiding breakdowns and keeping trucks on the road. This enables companies to avoid losing capacity by doing preventive maintenance at the appropriate time for maximum gains. These are key considerations that all transportation companies think about daily. An AI platform can take that information, apply machine learning, and produce actionable optimizations that a company can apply to the business - improving warehouse locations, supporting hiring decisions, creating meaningful cost savings, and more. Companies that combine AI and machine learning with lean operations will be well prepared for the future. They'll have actionable insights they can use to drive down costs while improving operations and increasing customer satisfaction. In transportation, data is a currency, and the earlier you have it, the faster it will grow. 
For more information about the challenges (and solutions) facing the transportation & logistics industry today, check this blog post where we explore how T&L organizations are tasked with cutting transportation costs, improving inventory management, and offering segmented services to their customers.
<urn:uuid:a9a084d2-ca73-48ee-99f0-6b92bef9ebf5>
CC-MAIN-2022-40
https://www.extremenetworks.com/extreme-networks-blog/how-cloud-managed-networking-with-ai-ml-equals-operational-excellence-for-transportation-logistics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00067.warc.gz
en
0.947869
666
2.703125
3
SQL vs. NoSQL? Which database architecture should you use for your next project? Which one is the “best”? Some argue that one is always better than the other. But they are very different technologies that solve different problems. Let’s take a look at them and see where they differ and where they overlap. SQL databases support Structured Query Language (SQL), a language for working with data in relational databases. Broadly speaking, “SQL database” and “relational database” refer to the same technology. A relational database stores data in tables. These tables have columns and rows. The columns define the attributes that each entry in a table can have. Each column has a name and a datatype. The rows are the records in the table. For example, a table that holds customers might have columns that define the first name, last name, street address, city, state, postal code, and a unique identification code (ID). You could define the first six columns as strings. Or, the postal code could be an integer if all the clients are in the United States. The ID could be a string or an integer. The relationships between the tables give SQL its power. Suppose you want to track your customer’s vehicles. Add a second table with vehicle ID, brand, model, and type. Then, create a third table that stores two columns: vehicle ID and customer ID. When you add a new vehicle, store its ID with the customer that owns it in this third table. Now, you can query the database for vehicles, for customers, for customers that own certain vehicles, and vehicles owned by customers. You can also easily have more than one vehicle per customer or more than one customer per vehicle. Three common examples of SQL databases are SQLite, Oracle, and MySQL. NoSQL database means many things. They’re databases that, well, don’t support SQL. Or they support a special dialect of SQL. Here’s a non-exhaustive list of the more popular NoSQL databases. Key-Value (KV) databases store data in dictionaries. They can store huge amounts of data for fast insertion and retrieval. In a KV database, all keys are unique. While the keys are often defined as strings, the values can be any datatype. They can even be different types in the same database. Common examples of values are JSON strings and Binary Large Objects (BLOBs). Two popular examples of KV databases are Redis and Memcached. A document store operates like a KV database but contains extra capabilities for manipulating values as documents rather than opaque types. The structures of the documents in a store are independent of each other. In other words, there is no schema. But, document stores support operations that allow you to query based on the contents. MongoDB and Couchbase are common examples of document stores. Relational databases use rows to store their data in tables. What sets column-oriented databases apart from them is — as the name suggests — storing their information in columns. These databases support an SQL-like query language, but they store records and relations in columns of the same datatype. This makes for a scalable architecture. Column-oriented databases have very fast insertion and query times. They are suited for huge datasets. Apache Cassandra and Hadoop HBase are column-oriented databases. Graph databases work on the relationships between values. The values are free form, like the values in a document database. But you can connect them with user-defined links. This creates a graph of nodes and sets of nodes. Queries operate on the graph. 
You can query on the keys, the values, or the relationships between the nodes. Neo4j and FlockDB are popular graph databases. SQL vs. NoSQL Databases: Which One? So, when you compare SQL and NoSQL databases, you’re comparing one database technology with several others. Deciding which one is better depends on your data and how you need to access it. Your Data Should Guide Your Decision Is there a perfect fit for every data set? Probably not. But if you look at your data and how you use it, the best database becomes apparent. Relational Data Problems Can you break your data down into entities with logical relationships? A relational database is what you need, especially when you need to perform operations with the relationships. Relational databases are best when you need data integrity. Properly designed, the constraints that relational databases place on datatypes and relations help guarantee integrity. NoSQL databases tend to be designed without explicit support for constraints, placing the onus on you. Caching Data Problems Caching is storing data for repeated access. You usually identify cached data with a single key. NoSQL databases excel at solving caching problems, while relational databases tend to be overkill. Key-Value stores are an obvious choice for caching problems. Many websites use Redis and Memcached for data and session information. But a document store that saves documents for historical purposes or reuse is an example of a caching solution, too. Graph Data Problems If a graph database stores data with relationships between data, why isn’t it a relational database? It’s because in a graph database relationships are just as important as the data. The relations have fields, names, and directions. Graph queries may include relationships and their names, types, or fields. Relation queries also use wildcards, which account for indirect relationships. Suppose a database represents rooms in several hosting facilities. It stores buildings, rooms, racks, computers, and networking equipment. This is a relational problem since you have entities with specific relationships. There could be a table for each entity in a relational database and then join tables representing the relationships between them. But now imagine a query for all the networking equipment in a given building. It has to look in the buildings, find the rooms, look in the rooms for racks, and finally collect all the equipment. In a graph database, you could create a relation called “contains.” It would be a one-way relation reflecting that one node contains another. Each item in each facility is a node contained by another, except for the buildings. When you query the database for networking gear, a wildcard could combine relationships between the buildings, room, and racks. This query models real life, since you say “Give me all of the gear in building X.” Scalability: SQL vs. NoSQL Which technology scales better? NoSQL may have a slight edge here. Relational databases scale vertically. In other words, data can’t extend across different servers. So, for large datasets, you need a bigger server. As your data increases in size, you need more drive space and more memory. You can share the load across clusters, but not data. Column-oriented databases were created to solve this problem. They provide horizontal scalability with a relational model. Key-Value, document, and graph databases also scale horizontally since it’s easier to distribute their datasets across a cluster of servers. SQL vs. NoSQL: Which One? 
SQL and NoSQL are effective technologies. SQL has been around for decades and has proven its worth in countless applications. NoSQL is a set of technologies that solve a variety of different problems. Each of them has its own advantages and tradeoffs. The question is, which one is best suited for your application? Take the first step by carefully modeling your data and defining use-cases to learn how you need to store and retrieve it. Then, pick the right technology for your application. Author – Eric Goebelbecker Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and financial information exchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!).
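To ground the customers-and-vehicles example from earlier in the article, here is a small Python sketch that builds the three relational tables with the standard-library sqlite3 module and contrasts them with a plain dictionary standing in for a key-value store. The schema and sample rows are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT);
    CREATE TABLE vehicles  (id INTEGER PRIMARY KEY, brand TEXT, model TEXT);
    CREATE TABLE ownership (vehicle_id INTEGER REFERENCES vehicles(id),
                            customer_id INTEGER REFERENCES customers(id));
""")
conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'Lovelace')")
conn.execute("INSERT INTO vehicles  VALUES (10, 'Acme', 'Roadster')")
conn.execute("INSERT INTO ownership VALUES (10, 1)")

# The join table lets us ask relational questions: which vehicles does a customer own?
rows = conn.execute("""
    SELECT v.brand, v.model
    FROM vehicles v
    JOIN ownership o ON o.vehicle_id = v.id
    JOIN customers c ON c.id = o.customer_id
    WHERE c.last_name = 'Lovelace'
""").fetchall()
print(rows)

# A key-value store, by contrast, is just fast lookup by a unique key:
kv_cache = {"session:42": '{"user": "ada", "cart": ["roadster"]}'}
print(kv_cache["session:42"])

The relational version enforces structure and answers questions about relationships; the key-value version trades that away for simple, very fast lookups by a unique key, which is exactly the caching use case described above.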
<urn:uuid:cc69e4f9-7848-4f15-a335-fb5b6ed7d4e5>
CC-MAIN-2022-40
https://www.dataopszone.com/sql-versus-nosql-what-is-the-difference/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00267.warc.gz
en
0.91865
1,687
3.1875
3
The Significance and Role of Firewall Logs A firewall is a network security device that monitors incoming and outgoing network traffic and decides whether to allow or block specific traffic based on a defined set of security rules. In addition to this policy, firewall log information is needed to audit the security efficacy of the firewall. Firewall logging records when and how access attempts are made, including source and destination IP addresses, protocols, and port numbers. A SIEM data lake uses this data to help investigate historic attacks and find evidence of probing. In this article: - Firewall logging - Linux firewall logs - Windows firewall logs - How to analyze firewall logs - Log analysis and alerting with Exabeam Firewall logging A firewall, in its most basic form, is created to stop connections from networks or protocols not explicitly allowed by rule. The firewall inspects the source address, destination address, and the destination port and protocol of all connections to determine if the traffic matches any of its rules. A data lake can aggregate information on the source address and port and the destination address and port for simplicity. As tracked by the firewall, we can view this information as the identifying attributes of any attempt to connect. These attributes are the basis on which firewall rules are created, and it's these rules that determine which connections are permitted and which must be denied. If the connection attributes (source, destination, port, protocol) match an existing rule, the firewall may permit access and allow the network traffic. Firewall policies typically suffer from two significant problems for firewall managers. - Firewall policies grow in size with each new rule or access request. Often rules are added, but few rules are ever removed. - Firewall admins write rules to allow the most access possible (not a security best practice). These overly permissive rules have unintended consequences that can enable risky network connections. Therefore, the success of any firewall depends on the management of these policies. Firewall managers must know when and why a firewall allows and denies access, and from where. A firewall ruleset must be augmented with a robust logging feature to be most effective. The logging documents how the firewall deals with each network connection. These logs offer insights into, for example, time, day, source and destination IP addresses, protocols, port numbers, or applications. When and why firewall logging is useful: - Conduct firewall rule-usage analysis to optimize rules and complete faster ruleset audits - Find and eliminate redundant, shadowed, or overly permissive rules - Discover potentially malicious activity occurring within your network - If you identify repeated unsuccessful attempts to access your firewall from a single IP address (or from a group of IP addresses), you may wish to investigate the origins of the network traffic - Outgoing connections from internal servers (for example, web servers) may show that someone is using your system as a launchpad, launching attacks against computers on other networks from your system - Improve your network performance and overall security efficacy by optimizing or removing old, irrelevant, or shadowed rule sets Linux Firewall Logs The Linux kernel has a packet filtering framework called Netfilter. 
Netfilter offers various functions and operations for packet filtering, network address translation, and port translation, which provide the functionality required to direct packets through a network and prohibit packets from reaching sensitive locations within a network. This framework lets you permit, drop, and modify the traffic in and out of a system. A tool called iptables builds on this functionality as a firewall, which you can configure using rules. Additional programs, like Fail2ban, also rely on iptables to block attackers. How do iptables work? Iptables is a command-line interface to the packet filtering capabilities in Netfilter. However, we won't distinguish between iptables and Netfilter in this article; to keep things clear, we will refer to the entire concept as iptables. The packet filtering function offered by iptables is structured around tables, chains, and targets. Put simply, a table lets you process packets in a certain way. The filter table is the default table. Chains are connected to these tables. You can monitor traffic at different points using these chains - for example, as it arrives on the network interface or just before it is handed over to a process. You can also add rules to the chains to match certain packets, such as TCP packets going to port 70, and connect them to a target. A target determines if the packet should be permitted or blocked. When a packet enters or exits (according to the chain), iptables compares it against the rules in these chains one at a time. When it identifies a match, it applies the target and carries out the required action. If it doesn't find a match with any of the rules, it carries out the action specified by the chain's default policy. The default policy also acts as a target. By default, all chains have a policy of permitting packets. Working with and interpreting iptables firewall logs: To create firewall logs, logging must be enabled in the kernel's firewall (for example, by adding rules with the LOG target). By default, matched packets are logged as kern.warn (priority 4) messages. You can change the log priority with the --log-level option of -j LOG. Most of the IP packet header fields are recorded when a packet matches a rule with the LOG target. By default, firewall log messages are written to /var/log/messages. Windows Firewall Logs Windows Defender Firewall (in Windows 8, Windows 7, Windows Vista, Windows Server 2012, Windows Server 2008, and Windows Server 2008 R2) is a stateful host firewall that helps secure the device by allowing you to create rules that determine which network traffic is permitted to enter the device from the network and which network traffic the device is allowed to send to the network. The firewall does not log any traffic by default. However, you can choose to configure the firewall to log permitted connections and traffic that is dropped. If you enable Windows firewall logging, it creates "pfirewall.log" files in its directory hierarchy. You can view the Windows firewall log files with Notepad. How to enable Windows 10 firewall logs Go to Windows Firewall with Advanced Security. Right-click on Windows Firewall with Advanced Security and click on Properties. The Windows Firewall with Advanced Security Properties box should appear. You can move between the Domain, Private, and Public firewall profiles. Generally, you should configure the Domain or Private profile. Let's see how to enable Windows Firewall logging on the Private profile; the steps below work for the Public and Domain profiles as well. 
Click Private Profile > Logging > Customize. Go to "Log Dropped Packets" and switch it to Yes. Generally, we turn on logging for "Log Dropped Packets" only; we won't log successful connections, as successful connections generally are not helpful when resolving problems. Copy the default path for the log file (%systemroot%\system32\LogFiles\Firewall\pfirewall.log), and then press OK. Open File Explorer and go to where the Windows Firewall log is kept (%systemroot%\system32\LogFiles\Firewall). You will see, in the Firewall folder, a pfirewall.log file. Copy the pfirewall.log to your desktop. This will let you open the file without firewall warnings. Interpreting the Windows firewall logs: Your Windows Firewall log is a plain-text file in which each line is a space-separated record; a "#Fields:" header near the top of the file lists the column names. Here is an analysis of the critical aspects of each log entry (a minimal parsing sketch based on these fields appears at the end of this article): - The time and date of the connection. - What became of the connection: "ALLOW" means the firewall permitted the connection, while "DROP" means it prevented it. - The kind of connection, TCP or UDP. - The IP of the source of the connection (your PC), the IP of the destination (your desired recipient, e.g., a webpage), and the port used on your computer. You can use this to identify any ports that need opening for software to work. You should also look out for any suspicious connections, as they may indicate malware. - The direction: whether this connection was your computer receiving a packet of data or sending one. How to Analyze Firewall Logs Firewall logging, especially of permitted events, can help discover potential network security threats. An organization generally places strict protection on assets that should not be freely accessible. These may include internal corporate networks and the workstations of employees. Typically, no unmediated inbound connection to these systems is allowed. What to look for in firewall log analysis: Once you have gathered and started analyzing the logs, you can decide what to look for. You should refrain from only looking for "harmful" events. Your firewall logs not only help you isolate compromises and incidents, they can also help you establish the normal operations of the firewall. One way to see whether the behavior logged is suspicious is to learn what normal operations look like and then note the anomalies. Some events should always cause suspicion and prompt further investigation. They are as follows: - Traffic dropped - Firewall stop/start/restart - Firewall configuration modifications - Administrator access granted - Authentication failed - Administrator session ceased - Rule add/update - Firewall log reset/delete - Rule usage analysis Log Analysis and Alerting with Exabeam: Log Management and Next-Generation SIEMs. Log management is challenging, and it is becoming increasingly so with the rapid growth of network devices, microservices and cloud services, and endpoints, and the vast increase in data and traffic volumes. Next-generation Security Information and Event Management (SIEM) solutions like Exabeam Fusion SIEM can assist you with the management of security-related log events and help you learn about events relevant to a security incident. - Security data lake, which can retain and search against unlimited volumes of historical logs - User and entity behavior analytics technology for improved threat detection via behavior analysis - Automated incident response capabilities that provide service integrations via playbooks for automated investigation, containment, and mitigation of incidents 
- Advanced data exploration features can assist security analysts with their threat hunting activity.
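As a companion to the log fields described above, here is a minimal Python sketch that parses a pfirewall.log file. It assumes the usual layout in which a "#Fields:" header names the space-separated columns; reading that header instead of hardcoding positions keeps the sketch tolerant of layout differences. The file path and the summary logic are illustrative only.

from collections import Counter

def parse_pfirewall(path: str):
    """Yield one dict per log entry, keyed by the column names in the #Fields: header."""
    fields = []
    with open(path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            line = line.strip()
            if line.startswith("#Fields:"):
                fields = line.split()[1:]          # column names follow the marker
            elif line and not line.startswith("#") and fields:
                values = line.split()
                yield dict(zip(fields, values))

if __name__ == "__main__":
    dropped_sources = Counter()
    for entry in parse_pfirewall(r"C:\Windows\System32\LogFiles\Firewall\pfirewall.log"):
        if entry.get("action") == "DROP":
            dropped_sources[entry.get("src-ip", "unknown")] += 1
    # Repeated drops from a single source are exactly the kind of event worth investigating.
    print(dropped_sources.most_common(5))

Repeated drops from a single source address are exactly the kind of repeated, unsuccessful access attempts the article suggests investigating.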
<urn:uuid:8e03ebb9-38b4-4773-823e-d7dfcead9681>
CC-MAIN-2022-40
https://www.exabeam.com/siem/the-significance-and-role-of-firewall-logs/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00267.warc.gz
en
0.893427
2,372
3.34375
3
What is packet sniffing? Packet sniffing is a technique whereby packet data flowing across the network is detected and observed. Network administrators use packet sniffing tools to monitor and validate network traffic, while hackers may use similar tools for nefarious purposes. What are packet sniffers? Packet sniffers are applications or utilities that read data packets traversing the network within the Transmission Control Protocol/Internet Protocol (TCP/IP) layer. When in the hands of network administrators, these tools “sniff” internet traffic in real-time, monitoring the data, which can then be interpreted to evaluate and diagnose performance problems within servers, networks, hubs and applications. When packet sniffing is used by hackers to conduct unauthorized monitoring of internet activity, network administrators can use one of several methods for detecting sniffers on the network. Armed with this early warning, they can take steps to protect data from illicit sniffers. NETSCOUT's Omnis Security platform utilizes packet-based analysis for advanced threat analytics and response. What is the difference between the term “sniffer” and “Sniffer?” When spelled with a lowercase “s,” the term “sniffer” indicates the use of a packet sniffing tool for either good or nefarious purposes. In the hands of authorized network administrators, a sniffer is employed to maintain the unimpeded flow of traffic through a network. Conversely, in the hands of a hacker, a sniffer may be used for unauthorized monitoring of the network. When spelled with an upper case “S,” the term “Sniffer” refers to trademarked technology from NETSCOUT. This branded sniffer enables network administrators to monitor bandwidth and ensure that no single user is using too much available capacity. Is the original Sniffer still available today? Network General Corporation (now known as Network Associates Inc.) introduced the Sniffer Network Analyzer in 1988. Since then, the Sniffer has passed through several hands, including McAfee. In 2007, NETSCOUT acquired Network General, along with Sniffer. The first generation of Sniffer read the message headers of data packets on the network. This monitoring tool provided administrators with a centralized global view of all network activity, offering details such as the addresses of senders and receivers, file sizes and other packet-related information. How do hackers use packet sniffing? Hackers will typically use one of two different methods of sniffing to surreptitiously monitor a company’s network. In the case of organizations with infrastructure configured using hubs that connect multiple devices together on a single network, hackers can utilize a sniffer to passively “spy” on all the traffic flowing within the system. Passive sniffing, such as this, is extremely difficult to uncover. When a much larger network is involved, utilizing numerous connected computers and network switches to direct traffic only to specific devices, passive monitoring simply won’t provide access to all network traffic. In such a case, sniffing won’t be helpful for either legitimate or illegitimate purposes. Hackers will be forced to bypass the constraints created by the network switches. This requires active sniffing, which adds further traffic to the network, and in turn makes it detectable to network security tools. How to protect networks from illicit sniffers There are several steps organizations can take to protect their networks from illicit sniffing activities. 
The following defenses can reduce the risk of exposure to hackers: - Do not use public Wi-Fi networks: Wi-Fi networks found in public spaces typically lack security protocols to fully protect users. Hackers can easily sniff the entire network, gaining access to sensitive data. Avoiding such networks is a wise security choice unless the user is accessing an encrypted VPN. - Rely on a trusted VPN connection: When accessing the internet remotely, always use a trusted Virtual Private Network that encrypts the connection and masks all data from sniffers. Any sniffer attempting to monitor traffic over a VPN will only see data that has been scrambled, making it useless to the hacker. - Always deploy robust antivirus software: By installing effective antivirus software, organizations can prevent malware from infiltrating the network and system. Robust antivirus tools will also uncover sniffers present in the system and offer to delete them. - Look for secure HTTPS protocols before surfing the web: Before surfing the internet, look for the “HTTPS” in the address bar of a website. Some sites only indicate “HTTP.” The additional “S” at the end is an indication that the site adheres to more robust security protocols that encrypt communications and will prevent sniffers used by hackers from seeing the data. - Don’t fall prey to social engineering tricks and traps: Hackers and cyberattackers will often employ phishing emails and spoofed website to trick people into unwittingly downloading sniffers. Being aware and cautious when browsing can prevent users from falling prey to nefarious tactics.
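For readers who want to see what a packet sniffer actually does, the short Python sketch below uses the third-party Scapy library to capture a handful of packets and print a one-line summary of each. It assumes Scapy is installed, that the script runs with administrator or root privileges, and that it is used only on a network you are authorized to monitor; the capture filter and packet count are placeholders to adjust for your environment.

from scapy.all import sniff   # requires: pip install scapy, plus root/admin rights

def show(packet):
    # summary() prints a compact line with source, destination, and protocol details.
    print(packet.summary())

# Capture 10 TCP packets, summarize each one, then stop.
sniff(filter="tcp", prn=show, count=10, store=False)

This is the passive style of sniffing described above: on a switched network it sees only traffic to and from the capturing host (plus broadcasts), which is why attackers fall back on the noisier active techniques that network security tools can detect.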
<urn:uuid:b87310bc-ef62-468e-8590-d94293602a4d>
CC-MAIN-2022-40
https://www.netscout.com/what-is/sniffer
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00267.warc.gz
en
0.904541
1,036
3.328125
3
March 14, 2022 Is complete immersion possible? In theory, yes, if scientists and technologists figure out how to accurately simulate all the human senses in virtual reality. This would undoubtedly increase the realism of virtual experiences and while I’m sure multisensory VR games would be AWESOME, I’m more interested in job training, professional design, marketing (brand experiences), and other enterprise applications. Ericsson believes there will be widespread use of virtual environments engaging all five senses by 2030. Others are less optimistic: The problem, as Louis Rosenberg writes for Business Today, is your body. Rosenberg explains that without elaborate hardware like external cameras, your brain maintains two versions of reality, one of you in virtual reality and one of you sitting or standing where you are (in physical space). This creates the “feeling of being cut off from the world.” It’s not that today’s VR games and apps aren’t fun or useful, or capable of changing behavior. Virtual reality doesn’t need to stimulate all the senses to be effective, but the technology to do so would make it possible to spend longer stretches of time in the virtual world. Why so difficult? Rosenberg remarks that high-fidelity visuals are much easier to create than a unified, sensory virtual model of the world. Let’s focus on touch: Touch is essential to how we understand and interact with the world, and third on most hierarchies of the senses (after sight and hearing). Replicating touch in VR would allow you to feel a car’s interior, the parts of a machine locking into place, and the heat of fire. It would also allow for more natural computer interfaces. So, why is touch so difficult to emulate? Among other reasons, touch is challenging because it involves the whole body. On a basic level, specialized receptors all over your skin (3,000 in each fingertip alone) sense things like temperature, pressure, and texture. This information turns into electrical signals that travel to the brain, where they’re interpreted as sensations. Add the feeling of weight, resistance, force, etc. and you’re attempting to simulate both human physiology and the laws of physics in the digital realm. In a few cases, VR training is already changing industry certification requirements. Virtual reality may very well replace all in-person training one day, but for this to happen, haptics and movement in VR will need to significantly advance. Technavio predicted that the haptics market will grow by nearly $16 billion between 2021 and 2025. Haptics, or the use of touch in human-computer interfaces, includes haptic technology and haptic feedback. Traditional haptic technology relies largely on vibrating motors to simulate tactile sensations. The most common haptic devices are graspable (joystick) and wearable (glove), and may use a combination of electric actuators, pneumatics, hydraulics, and other technologies to create tactile and kinesthetic sensations. Haptic tech can also be touchable (skin patch) or contactless, making use of ultrasound, lasers, and other technologies to create tactile sensations in mid-air (ex. Ultraleap). Haptic VR accessories Used mainly by gamers to intensify the immersive experience and interact with virtual objects, products like VR gloves and vests are slowly finding their way into industrial applications: In 2019, for instance, Nissan tested haptic gloves for designing vehicles in VR, and NASA has been testing VR and haptics for remote operation of robotic arms and space vehicles. 
Here are the major categories of haptic accessories for VR: Hand controllers typically feature multiple sensors for tracking motion, gesture, and/or position. The HTC Vive controller, for example, has 24 sensors and haptic feedback to stimulate the user’s hands and fingers. There are also foot controllers such as the 3dRudder motion controller allowing you to move forward, speed up, turn, etc. and SprintR, “a wireless footpad that lets you easily walk, run, and jump in VR hands free.” Foot controllers aren’t necessarily haptic devices, but they do provide greater, more natural freedom of movement. Suits and Vests Full and partial VR suits provide sensations in different parts of the body. bHaptics’s Tactsuit X40, a wireless haptic vest, has 40 haptic points and can be combined with additional products like Tactal (a haptic face cover) and Tactosy for Feet, Arms or Hands. Marketed as a training solution for complex tasks and environments, the full-body TeslaSuit features haptic feedback, climate control, motion capture, and biometric sensors that track vitals and stress. Holosuit is another full-body motion-capture suit composed of gloves, pants, and a jacket. Actronika’s haptic vest Skinetic uses “vibrotactile haptics,” with 20 embedded voice-coil motors combining touch and sound for “true-to-life sensations.” Skinetic appeared at CES 2022 in January, along with Owo’s wireless haptic vest, which looks a lot like a running shirt. With precision hand and finger tracking, VR gloves allow users to see their hands and interact with objects in the virtual world. In addition to syncing real and virtual hand motions, gloves allow you to feel haptics all over your hands so you can, for instance, feel the size, shape, or roughness of a virtual object. Most have something akin to internal “tendons” that tense and relax to create haptic feedback. With 133 points of tactile feedback and up to 40 pounds of resistive force per hand, HaptX Gloves DK2 allow you to interact with heavy objects in VR. According to HaptX’s website, Fortune 500 companies are using its gloves for workforce training and industrial design. SenseGlove Nova, a “force-feedback glove” for VR training and research, replicates the feeling of using tools and dashboards via “vibrotactile feedback.” Volkswagen, Honda, and P&G Health are some of SenseGlove’s customers. Other haptic glove companies include VRgluv, BeBop Sensors, and Manus VR. Meta is working on its own haptic gloves, and bHaptics offers the TactGlove. VR shoes allow you to walk, run, jump, and even detect different surfaces in VR. They also help solve VR’s “infinite walking problem” created by the fact that the virtual world is endless but the room in which you’re playing is not. EKTO One Simulator Boots use VIVE trackers and motorized wheels to pull your leg back as you walk (not run) forward, giving you the sensation of walking while keeping you in one spot. Cybershoes allow you to walk and run in VR but are used while seated. Both the EKTO boots and Cybershoes strap over the user’s shoe, but there is at least one pair of haptic shoes that look like regular sneakers. These are DropLabs EP 01 shoes intended for gaming and music. Bonus: Furniture, Masks, and More Lastly, we have VR furniture and some experimental masks. Like VR shoes, VR chairs and treadmills keep users from getting hurt while exploring expansive VR worlds. 
Gaming solutions like the Yaw VR Motion Simulator and Roto Motorized VR Chair allow greater freedom of movement in a seated position, so you can, for instance, drive a virtual car. Positron’s Voyager VR chair is designed for cinematic VR, while the Holotron, more of a full-body exoskeleton suspended in place, provides “lifelike control of humanoid avatars." “Optimized for the home,” the disc-shaped Virtuix Omni One treadmill gives you 360-degrees of movement in VR and comes with a standalone VR headset. Users wear a vest attached to an articulated arm and special shoes. Other VR treadmill companies include Birdly, Cyberith, Kat VR, and Infinadeck. Finally, we come to masks and other experimental devices that aim to bring smell and taste into VR. Examples include the FeelReal Multisensory Mask, which simulates hundreds of smells and aromas; OhRoma by CamSoda, a kind of gas mask with fragrance cartridges; and OVR Technology’s olfactory headset for wellness, with preloaded “scentware” to promote relaxation. On the taste front, researchers are experimenting with face masks and even drinking glasses with electrodes to emulate the taste and feel of food and beverages on your tongue. Watch this space! Image Source: Road to VR
<urn:uuid:640b0262-1553-4f43-a253-01282687fa6d>
CC-MAIN-2022-40
https://www.brainxchange.com/blog/immersion-beyond-sight-the-xr-accessories-of-today-and-tomorrow
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00267.warc.gz
en
0.918721
1,860
2.734375
3
Reducing Risks from IoT Devices in an Increasingly Connected World With a staggering majority of devices – expected to reach more than 75 billion by 2025 – connected to vast networks and the internet, reducing cyber risk becomes a critical focal point for the age of IoT. In this eBook, we discuss: - The new risks posed by consumer-grade IoT devices - Potential exploits that could be carried out within three different IoT subsystems - How network monitoring can identify vulnerabilities in IoT devices and help mitigate the consequences of potential attacks - Examples of a network security monitoring tool detecting attacks on multiple IoT systems
<urn:uuid:dc0875d0-0220-4e75-b6ef-0aed4f4a9656>
CC-MAIN-2022-40
https://www.forescout.com/ebook-reducing-risks-from-iot-devices-in-an-increasingly-connected-world/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00267.warc.gz
en
0.913821
123
2.515625
3
Google's self-driving car is one of the most pioneering moonshots the search giant is testing, but in the public eye the car is still seen as a dangerous and destructive idea: a car that runs by itself with no human controls. One argument for the self-driving car is safety, but new reports show Google's cars aren't without some flaws when it comes to driving on the road, with four of the 50 self-driving cars logging crashes since September 2014. Google reported that two out of the four crashes happened while a human was in charge, and all of them were minor. It does not detail how long the human had been driving the car before the crash, or what led to the crash in the first place. In addition, three of the crashes involved Lexus SUVs that Google had outfitted with sensors. Both questions remain unanswered, as Google has declined to speak to the press about difficulties in its self-driving program. The pilot program is being tested in California, and both the UK and Germany want to get involved this year. Google has been testing the cars in urban environments, trying to get them to recognise everyday things on the road like children crossing the street, animals underneath cars and other common features on the road. It wants to build a database capable of recognising all sorts of incoming objects before they are near, allowing the computer to make quick decisions on whether to stop, slow down or keep moving. Google's end goal is to offer a self-driving car without a steering wheel or pedals, although we expect the first model will have both as part of regulatory agreements. This new information on the number of crashes might push Google further back from achieving its end goal, although new moves by Tesla Motors, Uber, German car manufacturers and Apple might speed up the process considerably.
<urn:uuid:81a6dce1-ac9f-4fa0-8be1-b978baabe947>
CC-MAIN-2022-40
https://www.itproportal.com/2015/05/11/four-googles-self-driving-cars-dinged-california/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00267.warc.gz
en
0.973695
379
2.734375
3
What is it? It hasn't been long since the massive WannaCry ransomware hit the cyber-world, and starting from yesterday, Tuesday June 27th, a massive new attack by a ransomware variant has been identified, affecting organizations around the globe (wired article). This ransomware is a new version of Petya, also known as NotPetya (as it is a variant that borrows code from Petya but is different), and it goes by other names as well, including Petrwrap, Nyetya, SortaPetya and Petna. The attack first took place in Ukraine and started spreading quickly throughout the world. It is reported by Kaspersky that more than 2000 organizations globally have been infected, including Maersk, Rosneft, and many others. The result of the attack is a large-scale shutdown of sites and services around the globe, with a major effect on daily business and logistics. Why is it so dangerous? This ransomware is more aggressive than WannaCry, as it encrypts the MFT (Master File Table) for NTFS partitions and overwrites the MBR (Master Boot Record) with a custom bootloader that shows a ransom note and prevents victims from booting their computer. It also forcefully reboots systems and prevents them from working altogether. This ransomware also contains advanced propagation techniques that make it more dangerous than any ransomware before it. Why can the attack spread so fast? After infecting a single machine the ransomware propagates using several techniques. One of them, in a similar fashion to WannaCry, is exploitation of EternalBlue. Another propagation technique involves stealing credentials, exploiting users' permissions, and using legitimate methods of connecting to other hosts. This is what makes the current ransomware more dangerous than WannaCry, which uses only the EternalBlue exploit, as credential harvesting results in very fast lateral movement. It spreads like an oil spill, growing exponentially: if only one computer in the network is infected and this computer stores the login credentials of a network administrator, the entire network is compromised. Technically speaking, the ransomware uses PsExec, WMI and SMB connections via ADMIN$ in order to propagate in the LAN. It then tries to forcefully reboot the system and also creates a scheduled task to reboot the infected system one hour after the initial infection. The full encryption happens when the system reboots. It is not 100% clear whether some encryption already happens prior to the reboot. Where does it come from? At this point there are no clear indicators of where the attack comes from. With the information currently available, it is suggested that Petya was deployed onto potentially several million computers by hacking Ukrainian accounting software called "MeDoc". It then used the software's automatic update feature to download the malware onto all computers using it. Although MeDoc being the initial infection vector is unconfirmed (and even denied by the company itself), current evidence points to them (source1, source2, source3). Who is or can be affected? It appears to be a Windows-only variant so far, so Windows users are at risk. Multiple large enterprises have been hit: Maersk, TNT, and several Ukrainian entities are amongst them. Any other Windows-based organization can be hit as well. How can we protect our company? 
We in Comsec recommend the following prevention and mitigation measures:
- Make all employees aware of this event to make sure suspicious emails are not opened and any suspicious email or activity is reported to the relevant IT personnel.
- Obtain and patch systems to the latest version using the manufacturer's security update, as it is likely that Petya is actively exploiting known vulnerabilities inside networks. (TechNet Article)
- For unsupported or unpatched systems, it is recommended to isolate them from the network and to consider shutting them down if possible. Alternatively, Microsoft released a security update for the SMB vulnerability also for Windows platforms that are in custom support only, including Windows XP, Windows 8, and Windows Server 2003 (TechNet Article). The system update is available here: KB4012598
- Disable SMBv1 on all unpatched machines, and on all machines where it does not impact their business purpose (a short verification sketch is included at the end of this post).
- Isolate communication on UDP ports 137 and 138 as well as TCP ports 139 and 445 in networks to avoid spreading or infection.
- Update all AntiVirus and AntiMalware product signatures.
- Make sure all the organization's critical data is backed up in both online and offline backup storage.
- If possible, block the ADMIN$ share in the network. The worm uses this share with WMI to spread itself, so disallowing access prevents the possible spread.
- If you are infected, do not pay the $300 ransom fee. The e-mail address referred to is no longer in service, and the decryption key will not be received when you pay the fee.
- The malware tries to reboot immediately and then again after 30-60 minutes. If infection is identified (it pretends to be a Windows CheckDisk scan), shut down the infected machine immediately.
- In recent hours it was found that a vaccine is available to prevent infection of a host. Create 3 read-only files named perfc, perfc.dll and perfc.dat in C:\Windows. This can be done by using the following script file (rename it to vaccine.bat and execute it across the entire domain using a GPO). Note that the vaccine works only if the executed/injected DLL matches the exact name 'perfc'.

    echo Administrative permissions required. Detecting permissions...
    net session >nul 2>&1
    if %errorLevel% == 0 (
        if exist C:\Windows\perfc (
            echo Computer already vaccinated.
        ) else (
            echo Vaccination file. > C:\Windows\perfc
            echo Vaccination file. > C:\Windows\perfc.dll
            echo Vaccination file. > C:\Windows\perfc.dat
            attrib +R C:\Windows\perfc
            attrib +R C:\Windows\perfc.dll
            attrib +R C:\Windows\perfc.dat
            echo Computer vaccinated.
        )
    ) else (
        echo Failure: You must run this batch file as Administrator.
    )

We can help you, just ask! Comsec is constantly tracking the recent developments in the world and we update our blog accordingly. We are ready to assist with any questions or requests that you may have.
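As a complement to the SMBv1 recommendation above, the following is a minimal Python sketch (standard library only, run on the Windows host being checked) that reads the registry switch commonly used to disable the SMBv1 server. The registry path and value name are the ones documented for older Windows versions; treat them as assumptions and verify against Microsoft's guidance for your specific OS build.

    # Sketch: report the state of the server-side SMBv1 registry switch.
    # Assumes the LanmanServer "SMB1" value documented for older Windows
    # versions; newer builds manage SMBv1 as an optional feature instead.
    import winreg

    def smb1_registry_state() -> str:
        key_path = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
                value, _ = winreg.QueryValueEx(key, "SMB1")
        except FileNotFoundError:
            # No explicit value: legacy systems may still have SMBv1 enabled by default.
            return "not set (assume enabled on legacy systems)"
        return "disabled" if value == 0 else "enabled"

    if __name__ == "__main__":
        print("SMBv1 server setting:", smb1_registry_state())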
<urn:uuid:f628ed0f-cab0-4aef-9cc0-1acbd5613df6>
CC-MAIN-2022-40
http://blog.comsecglobal.com/2017/06/petya-nyetya-new-ransomware-attack-hit.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00467.warc.gz
en
0.918073
1,383
2.546875
3
Tanya Valdez is a Technical Writer at Constellix. She makes the information-transfer material digestible through her own transfer of information to our customers and readers. A major player in DNS is the IP address. It is one of the cornerstones of the internet that keeps devices connected. We've all heard the term, but what is an IP address? Houses need addresses to be located and verified in order to receive mail and utility services. Websites and internet-connected devices such as computers and routers also have unique numbers that help them communicate with other online devices. The IP part of the identifier stands for Internet Protocol. This is the method by which data is sent between online devices. IP addresses provide the identification required for devices to send data over the internet or a local network. IP addresses do not move around with your gadgets. They are linked to the network you are connected to and are based on location. Most local devices, such as your printer and modem, use dynamic IP addresses by default. Disconnecting and reconnecting the modem can change your current dynamic IP address. However, businesses typically purchase static IP addresses from their internet service provider (ISP) to avoid an address change (more on IP address types will be discussed later in this resource). The same goes for mobile devices. You may be on a business trip or down the street working from a coffee shop. You are using a different IP address based on your current location. The establishment's ISP assigns you a new IP address to use on their connection. Devices come preset with software that contains rules for them to connect to the internet and swap information and data back and forth. This is all part of the internet of things, or IoT. From what we've discussed thus far, you are probably assuming there are different types of IP addresses, and that is a safe assumption. There are different versions, types, and categories of IP addresses. It may sound confusing, but don't worry. I'll break it down for you. There are four main IP address categories: public, private, dynamic, and static. The public IP address is the primary address that your entire network is associated with and is provided by your ISP. While there is one primary address, each machine that connects to your home internet network is assigned a private IP address. These gadgets include smartphones, tablets, computers, and Bluetooth devices. With the ever-growing IoT with smart devices, such as speakers, TVs, thermostats, and lightbulbs, IP addresses are needed to properly identify each. Tech Byte: In 2020, the average number of connected devices per household was 10 and an estimated 35 billion will be installed around the world by 2021. Dynamic IP addresses are public network identifications that are active for a specific time and then expire once that time is up. They are temporary, always changing, and cost-effective for businesses and the ISP since they do not have to run special protocols to keep a network's IP address the same if, for example, they move. It is also beneficial for the user because new devices can be connected much more easily since most routers will assign IP addresses automatically using Dynamic Host Configuration Protocol (DHCP).
Static IP addresses are public identifiers that do not change. Once the ISP assigns a static IP address to a network, it will remain consistent. These are typically used by individuals and businesses that have devices that are tied to websites or email addresses, such as network printers. Static IP addresses make it easier to work from home using a virtual private network (VPN) to remotely access your organization's files. Having a static identifier allows other devices to find them on the web. There are two types of website IP addresses: shared and dedicated. Shared IP addresses are not unique and are shared with other websites on a particular webserver. This is commonly used for smaller websites that do not have many files or pages. The downside of a shared IP address is that the actions of one site owner can tarnish an IP's reputation. If an IP address was involved in sending SPAM emails and was blacklisted as a result, the other websites' emails will suffer as well. Dedicated IP addresses are unique and assigned to only one website. These are commonly used by e-commerce sites or large websites in order to maintain control over their IP's reputation. Since e-commerce sites have to use SSL (secure sockets layer), a dedicated IP address is typically used in conjunction. IPv4 and IPv6 addresses are two versions of the Internet Protocol. IPv4 is the original 32-bit IP address scheme that is running out of addresses. To support the growing number of connected devices, IPv6 was developed. IPv6 is a 128-bit address that comes with many benefits. It eliminates the need for NAT (Network Address Translation), enables more efficient routing, supports auto-configuration, and allows for easier administration. See our What is IPv6 resource for more information on IPv4 and IPv6 addresses. There are four different types of IP address classifications: unicast, broadcast, multicast, and anycast. The way the address will be used determines its addressing method. Unicast is the most common classification, as it refers to a single sender and receiver. It can be used for both IPv4 and IPv6 addresses. Broadcast addressing is strictly for IPv4 addresses and allows data to be sent to all destinations on a network with a single transmission operation. IPv6 does not support broadcast addressing but instead uses multicast addressing to handle this function. This addressing method is available for both IPv4 and IPv6 and allows a host to send a network packet (unit of data) to a group of hosts within the IP network using a special IP multicast group address. In this one-to-many communication, only the hosts that require the data will process the packet. The others will discard it. In anycast addressing, a single-destination IP address is shared by multiple machines. The host that will receive the requested information is based on location. The router will send it to the closest receiver on the network. Anycast addressing is available for both IPv4 and IPv6. An IP address consists of a string of numbers separated by periods. Much like house addresses, they are not random. The Internet Assigned Numbers Authority (IANA) produces and allots IP addresses. IPv4 addresses are expressed as a set of four numbers, with each ranging from 0 to 255. Here are a few IP address examples: 192.168.1.1, 172.16.254.1, and 8.8.8.8. There is more than one way to locate your IP address. The quickest way is to do a Google search for "what is my IP address." The search engine will provide you with your public IP address. There are a couple of ways that you can obtain your local IP address for your router.
For Windows users, you can access it in your settings under Network & Internet. Then, select whether you are connected wirelessly or via an ethernet cable to obtain the proper information. Optionally, you can run a command and enter ipconfig at the command prompt or in the Run box. The generated window will include your IP address as part of the returned information. Mac users can locate the IP address by accessing the network connection in System Preferences from the Apple menu. You can also find it by opening the macOS Terminal and entering the command ipconfig getifaddr en1 for a wired Ethernet connection or ipconfig getifaddr en0 to get the IP address of a wireless connection (a short script example also appears at the end of this article). You might be wondering why you would need to know your IP address. This information might be necessary when troubleshooting internet issues. You also need it to utilize remote services to control machines on your network. The setup of some devices requires you to know it as well. More information can be found on our What Is My IP Address resource page. The IoT continues to grow, and to keep up, IPv6 was developed. There are different versions, types, and categories of IP addresses depending on how they will need to connect. They work much like home addresses with house numbers, streets, and zip codes to properly send and receive information. IP addresses are the way devices are identified in order to connect to the internet. If there's a topic you'd like to know more about, reach out and let me know. I'll do my best to bring you the content you're looking for!
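As a programmatic complement to the lookup steps above, here is a small, hypothetical Python sketch using only the standard library. It prints the machine's local IP address and classifies a couple of addresses as IPv4/IPv6 and private/public; the 8.8.8.8 target in the socket trick is just a routable placeholder, and no traffic is actually sent.

    # Sketch: find the local IP address and classify addresses with the standard library.
    import ipaddress
    import socket

    def local_ip() -> str:
        # Connecting a UDP socket toward a routable address reveals which local
        # interface/IP the OS would use; no packets are actually transmitted.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("8.8.8.8", 80))  # placeholder target
            return s.getsockname()[0]

    def describe(address: str) -> str:
        ip = ipaddress.ip_address(address)  # accepts both IPv4 and IPv6 strings
        scope = "private" if ip.is_private else "public"
        return f"{address} is an IPv{ip.version} {scope} address"

    if __name__ == "__main__":
        print("Local IP:", local_ip())
        print(describe("192.168.1.1"))
        print(describe("8.8.8.8"))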
<urn:uuid:7ef37441-6098-4c17-a5f8-8c02287f44e5>
CC-MAIN-2022-40
https://constellix.com/news/what-is-an-ip-address
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00467.warc.gz
en
0.941564
1,856
3.234375
3
With the ongoing threat of network attacks existing for almost all companies that are connected to the Internet, there is often a need to set up some type of intrusion detection system (IDS) or intrusion prevention system (IPS). These systems' main purpose is to detect attacks as they are being initiated; this detection is done by comparing the streams of incoming traffic against a database of known attacks. The main difference between an IDS and an IPS is in what happens when the device detects an attack. An IDS will detect the attack and alert the network administrators/engineers; an IPS has the ability to directly block the attack traffic once it has been detected. This can proactively prevent a good amount of damage to the internal network. Cisco's Adaptive Security Appliance (ASA) line adds this ability with an additional piece of hardware or software, depending on the base ASA model. This article takes a look at this additional capability, what it offers, and how it can be configured to monitor traffic through an ASA.
ASA IPS Module Details
The exact details of the IPS functionalities of an ASA depend on the specific model of ASA that is being used. The ASA 5505, 5510, 5520, 5540, 5580, and 5585-X all use an additional hardware module that is inserted into the ASA chassis. The ASA 5512-X, 5515-X, 5525-X, 5545-X, and 5555-X all use an additional software module that is uploaded to the ASA. The connection used to manage the ASA IPS module also differs by the model of the ASA used: - ASA 5505: The ASA 5505 IPS module does not have an external management interface and is managed using a management VLAN within the ASA. By default, the VLAN that is used is 1, and the default IPS management IP address is 192.168.1.2. - ASA 5510, ASA 5580, ASA 5585-X: These devices have an external management interface that is used to configure the device and the IPS module; the ASA 5585-X actually has several external management interfaces. With these devices, the ASA and the ASA IPS module are typically assigned IP addresses that are on the same subnet (default: ASA – 192.168.1.1, ASA IPS – 192.168.1.2). It is also possible to configure the ASA to be managed via an inside interface while the ASA IPS module is solely managed via the external management interface. - ASA 5512-X, ASA 5555-X: These devices work similarly to the previous models with an external management interface and with the same default IP addresses.
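Before configuring the module, it can be useful to confirm that the management addresses described above actually answer on the network. The following is a minimal, hypothetical Python sketch that simply tests TCP reachability of the factory-default addresses on the HTTPS management port; both the addresses and the port are assumptions taken from the defaults above and should be adjusted to the real deployment.

    # Sketch: check TCP reachability of the default ASA and IPS module
    # management addresses. The addresses (192.168.1.1 / 192.168.1.2) and the
    # HTTPS port are assumptions based on the defaults described above.
    import socket

    def tcp_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for name, addr in (("ASA", "192.168.1.1"), ("IPS module", "192.168.1.2")):
            state = "reachable" if tcp_reachable(addr) else "not reachable"
            print(f"{name} management address {addr}: {state}")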
<urn:uuid:643ef409-1179-4b69-8084-a17afb039628>
CC-MAIN-2022-40
https://www.ciscopress.com/articles/article.asp?p=2140100&amp;seqNum=5
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00467.warc.gz
en
0.917387
549
2.671875
3
The Foundational Role of Policies in GRC Strategies
Policies are critical to the organization as they establish boundaries of behavior for individuals, processes, relationships, and transactions. Starting at the policy of all policies – the code of conduct – they filter down to govern the enterprise, divisions/regions, business units, and processes. GRC, by definition (www.OCEG.org), is "a capability to reliably achieve objectives [governance] while addressing uncertainty [risk management] and act with integrity [compliance]." Policies are a critical foundation of GRC. When properly managed, communicated, and enforced, policies:
- Provide a framework of governance. Policy paints a picture of behavior, values, and ethics that define the culture and expected behavior of the organization; without policy there are no consistent rules, and the organization goes in every direction.
- Identify and treat risk. The existence of a policy means a risk has been identified and is of enough significance to have a formal policy written which details controls to manage the risk.
- Define compliance. Policies document compliance in how the organization meets requirements and obligations from regulators, contracts, and voluntary commitments.
Unfortunately, most organizations do not connect the idea of policy to the establishment of corporate culture. Without policy, there is no written standard for acceptable and unacceptable conduct — an organization can quickly become something it never intended. Policy also attaches a legal duty of care to the organization and cannot be approached haphazardly. Mismanagement of policy can introduce liability and exposure, and noncompliant policies can and will be used against the organization in legal (both criminal and civil) and regulatory proceedings. Regulators, prosecuting and plaintiff attorneys, and others use policy violation and noncompliance to place culpability. An organization must establish policy it is willing to enforce — but it also must clearly train and communicate the policy to make sure that individuals understand what is expected of them. An organization can have a corrupt and convoluted culture with good policy in place, though it cannot achieve a strong and established culture without good policy and training on policy.
Hordes of Policies Scattered Across the Organization
Policy and training matter. However, when you look at the typical organization you would think policies are irrelevant and a nuisance. The typical organization has:
- Policies managed in documents and fileshares. Policies are haphazardly managed as document files and dispersed on a number of fileshares, websites, local hard drives, and mobile devices. The organization has not fully embraced centralized online publishing and universal access to policies and procedures. There is no single place where an individual can see all the policies in the organization and those that apply to specific roles.
- Reactive and inefficient training programs. Organizations often lack any coordinated policy training and communication program. Instead, different departments go about developing and communicating their training without thought for the bigger picture and alignment with other areas.
- Policies that do not adhere to a consistent style. The typical organization has policy that does not conform to a corporate style guide and standard template that would require policies to be presented clearly (e.g., active voice, concise language, eighth-grade reading level).
- Rogue policies. 
Anyone can create a document and call it a policy. As policies establish a legal duty of care, organizations face misaligned policies, exposure and liability, and other rogue policies that were never authorized. - Out of date policies. In most cases, published policy is not reviewed and maintained on a regular basis. In fact, most organizations have policies that have not been reviewed in years for applicability, appropriateness, and effectiveness. The typical organization has policies and procedures without a defined owner to make sure they are managed and current. - Policies without lifecycle management. Many organizations maintain an ad hoc approach to writing, approving, and maintaining policy. They have no system for managing policy workflow, tasks, versions, approvals, and maintenance. - Policies that do not map to exceptions or incidents. Often organizations are missing an established system to document and manage policy exceptions, incidents, issues, and investigations to policy. The organization has no information about where policy is breaking down, and how it can be addressed. - Policies that fail to cross-reference standards, rules, or regulations. The typical organization has no historical or auditable record of policies that address legal, regulatory, or contractual requirements. Validating compliance to auditors, regulators, or other stakeholders becomes a time-consuming, labor-intensive, and error-prone process. Inevitable Failure of Policy & Training Management Organizations often lack a coordinated enterprise strategy for policy development, maintenance, communication, attestation, and training. An ad hoc approach to policy management exposes the organization to significant liability. This liability is intensified by the fact that today’s compliance programs affect every person involved with supporting the business, including internal employees and third parties. To defend itself, the organization must be able to show a detailed history of what policy was in effect, how it was communicated, who read it, who was trained on it, who attested to it, what exceptions were granted, and how policy violation and resolution was monitored and managed. If policies and training programs don’t conform to an orderly style and structure, use more than one set of vocabulary, are located in different places, and do not offer a mechanism to gain clarity and support (e.g., a policy helpline), organizations are not positioned to drive desired behaviors in corporate culture or enforce accountability. With today’s complex business operations, global expansion, and the ever changing legal, regulatory, and compliance environments, a well-defined policy management program is vital to enable an organization to effectively develop and maintain the wide gamut of policies it needs to govern with integrity. The bottom line: The haphazard department and document centric approaches for policy and training management of the past compound the problem and do not solve it. It is time for organizations to step back and define a cross-functional and coordinated team to define and govern policy and training management. Organizations need to wipe the slate clean and approach policy and training management by design with a strategy and architecture to manage the ecosystem of policies and training programs throughout the organization with real-time information about policy conformance and how it impacts the organization. 
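To make the lifecycle gaps described above concrete, here is a small, hypothetical Python sketch of the metadata a policy register might track for each policy (owner, version, last review date, mapped obligations). The field names are illustrative assumptions, not a schema from any particular GRC platform.

    # Sketch: a minimal policy-register record illustrating lifecycle metadata.
    # All field names are illustrative, not tied to any specific GRC product.
    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class PolicyRecord:
        title: str
        owner: str                       # accountable owner; empty signals an orphaned/rogue policy
        version: str
        last_reviewed: date
        mapped_obligations: list[str] = field(default_factory=list)  # regulations, contracts, commitments
        review_cycle_days: int = 365

        def needs_review(self, today: date | None = None) -> bool:
            today = today or date.today()
            return today - self.last_reviewed > timedelta(days=self.review_cycle_days)

    if __name__ == "__main__":
        policy = PolicyRecord(
            title="Acceptable Use Policy",
            owner="CISO office",
            version="2.1",
            last_reviewed=date(2015, 3, 1),
            mapped_obligations=["Code of conduct", "Contractual confidentiality clauses"],
        )
        if policy.needs_review():
            print(f"'{policy.title}' is overdue for review (last reviewed {policy.last_reviewed}).")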
This post is an excerpt from GRC 20/20’s latest Strategy Perspective research: Policy Management by Design: a Blueprint for Enterprise Policy & Training Management Have a question about Policy & Training Management Solutions and Strategy? GRC 20/20 offers complimentary inquiry to organizations looking to improve their policy management strategy and identify the right solutions they should be evaluating. Ask us your question . . . Engage GRC 20/20 to facilitate and teach the Policy Management by Design Workshop in your organization. Looking for Policy Management Solutions? GRC 20/20 has mapped the players in the market and understands their differentiation, strengths, weaknesses, and which ones best fit specific needs. This is supported by GRC 20/20’s RFP support project that includes access to an RFP template with over 400 requirements for policy management solutions. GRC 20/20’s Policy & Training Management Research includes: Register for the upcoming Research Briefing presentation: Access the on-demand Research Briefing presentation: Strategy Perspectives (written best practice research papers): - Policy Management by Design: A Blueprint for Enterprise Policy & Training Management - Regulatory Change Management: Effectively Managing Regulatory Change in Financial Services - Benchmarking Your Policy Management Program - Policies, The Last Mile of Risk Management: The Relationship Between Risk and Policies Solution Perspectives (written evaluations of solutions in the market): - RegEd CODE™: Enabling an Integrated Compliance Lifecycle - NAVEX Global’s Agile Code of Conduct - MetaCompliance: Effectively Managing & Communicating Policies - HITEC’S PolicyHub: Streamlining Policy Management Case Studies (written evaluations of specific strategies and implementations within organizations):
<urn:uuid:e227453c-3107-447a-a6dd-a5a12460b107>
CC-MAIN-2022-40
https://grc2020.com/2017/02/22/policy-training-management-demands-attention/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00467.warc.gz
en
0.932416
1,639
2.515625
3
The early 2000s introduced us to countless technological breakthroughs. Since then, technology appears as if it has become a necessity. Almost everyone in the world is into tech. It has even been used as a status symbol. Especially in the higher class, the latest technology in gadgets alone defines your level of "coolness." However, technology doesn't only dictate one's socioeconomic standing, and it shouldn't be the only reason why you should use the latest technology. People also rely on technology because of the countless benefits that it can give.
How Technology Impacts our Lives
These days, many people rave over the latest tech trends because they use them as a status symbol. But technology can do so much more than that. Apart from being used as a status marker, people can also use it to make their lives easier and safer. Take a closer look at how it can benefit us more.
The management of money is safer
One of the best things that technology does is help in managing our money. Using technology allows you to automate tasks, set up reminders, gather receipts, track investments, compare prices, and more. With technology, you won't have to waste your time doing simple financial tasks. With just a few clicks, you can instantly pay your bills. You can also set up reminders that would ensure that you won't miss any due date for your bills no matter where you are in the world. In addition to that, you can keep track of your transactions. With the help of mobile applications, you don't have to worry about losing your receipts as everything is recorded digitally. You can even get to track your investments. There are apps that will help you track your shares and keep you posted on new investment opportunities. The best part of it is that you can even get to avoid financial mistakes. With all kinds of scams in the online world, it's easy to have second thoughts in trusting technology with your money. But technology isn't all that bad. There are good tools that can help protect you from scams. With this, you're sure that your money is always safe.
Smart Home Automation
Of course, technology can secure you too. Experts say that burglary happens every 25.7 seconds. And even if almost all of us are quarantined in our homes due to the pandemic, the police still see a spike in burglary incidents in many cities, according to Hawaii Tribune Herald. No one deserves to not feel safe even in their own home. Luckily, technology can also help make your home feel a lot safer. These days, there are countless devices that can keep an eye on both the outside and inside of your home. One of these is CCTV cameras, which you can place outside your home to see who's lurking on your property without your permission. There are also surveillance cameras, which you can place indoors to keep track of your loved ones, especially when you're away. There are alarms that would sound off if someone breaks into your house too. Smart locks are also available that feature built-in cameras, intercom systems, and even an emergency siren. Apart from the security, technology can also make everything convenient. A home's high-tech features can do even the most mundane task for you. With this, you won't have to exert much effort. This gives you more time to relax and enjoy the comfort of your home. With the help of technology, you won't need a remote to control the lights. Some smart lighting systems can now be controlled by your own voice or even just by making sounds, like clapping.
You won't even need to lift a finger or stand up to open or close the blinds. Like the smart lighting systems, you can also control your windows or blinds via voice command or artificial intelligence.
Capturing and Reviewing Information is Easier
Technology also makes doing business easier. Back in the days when we had to do things manually, doing business was harder, as you had to capture data and other documents by hand. And when you needed to review information, you'd still have to go over a pile of physical files. This used to eat up so much time and effort. But thanks to the advancements in our technology, you won't have to keep or duplicate important documents manually. You also won't need to store multiple physical files. Today, we have pieces of equipment that we can use to scan documents much more easily and quickly. We also have various software and applications that can help us store important documents. As a result, you won't need to search through piles of papers or folders if you need to review some information. You can now easily look for the file you need with just a few clicks. You'll be more productive as you'll have more time to spend on other tasks that you need to do.
Fast and Easy Data Retrieval
Speaking of data, you won't need to worry about losing your data too. Before, losing documents was a huge hassle. If you accidentally destroyed or lost them, there was a small chance of getting them back. But because of the latest innovations introduced over the years, documents and other data are in safer hands. True enough, there's still a chance that you can lose your files in spite of the advancements in technology. It's easy to erase them. Plus, computers and other devices can be infected with viruses that could destroy the files. But the good news is that you can easily create a backup for your files. Thus, when you accidentally erase them, you'll still have a backup. On top of that, in case your files were destroyed by a virus, you can retrieve them with the help of computer experts.
Access to Information is Trouble-Free
Probably one of the best things that technology can do is provide easy access. Before, delivering work from other places seemed impossible, as it wasn't easy to take piles of physical files with you. Access to information was also limited, as you had to go to the library to search for a book or file that you needed. But thanks to technology, it's now easier to work, study, or do almost any task wherever you are as long as you have an internet connection. With just a few clicks, you can get access to loads of information. Because of this, we have more freedom and we can deliver our work from almost any part of the world. Technology makes sharing files, studying, and more much easier. You won't have to be tied to a physical location as you can have access to almost anything wherever, whenever.
Easy and Fast Communication
Technology helps us connect. Back in the 90s, it was hard to communicate. You'd have to write a letter and wait for days or even months to receive a reply. Important documents also took time to be delivered. But today, technology has made it easier for us. Thanks to innovations, you won't need to wait for weeks to send a message to someone. You can just send your message via messaging apps and wait a few seconds for it to be delivered. The best part of it is that technology doesn't make us miss a loved one too much. Because today, you can even call a loved one via video chat.
Since you can get to see each other through video calls, you'll feel as if you're just a few steps apart. Sending important files is much easier because of technology. People in the office can communicate better and send important data within minutes.
Finding Lost Items in No Time
Finding small yet important things such as car keys and mobile phones can be frustrating. They aren't just hard to find, but your tasks can also be affected if you can't find them. Without your car keys, you won't have access to your car. Without your phone, you won't get to send important messages, play games, etc. But thanks to technology, you won't need to spend effort and time looking for these important items. There are apps available that will help you find your car keys and phone. Some cars don't even have keys. To access them, you'll only need a card that you can easily keep in your wallet. Others have in-car biometric technology, which keeps your car safe from thieves. Of course, technology has made the lives of people with special needs much easier as well. Sure, people with special needs can get personal assistants to help them with their activities. However, getting things done by themselves, with the help of technology, is different. Technology helps them do their activities more easily and it gives them freedom. As a result, they are more empowered, confident, and hopeful. Technology can do so much for many people. It's not just about being "cool." Using the latest technology can also make lives easier.
<urn:uuid:3e73b9f3-87f6-4720-86a9-bcbb4be674fa>
CC-MAIN-2022-40
https://itchronicles.com/technology/how-technology-makes-life-easier-and-safer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00467.warc.gz
en
0.962525
1,885
2.5625
3
- Cybersecurity fingerprinting refers to a set of information that can be used to identify network protocols, operating systems, hardware devices, and software, among other things.
- Hackers use fingerprinting as the first step of their attack to gather maximum information about targets. Fingerprinting, also known as footprinting, can be deployed as a security measure to authenticate users.
- However, attackers use this to identify vulnerabilities in the target systems that they can exploit.
- Fingerprinting can provide attackers with valuable information such as OS type, OS version, SNMP information, domain names, network blocks, VPN points, and more.
- To gather details about the target's network, the attackers usually launch custom packets.
- When these packets receive a response from the target network in the form of a digital signature, the OS, software, and protocols can be deduced by the attackers.
- This allows them to customize the attack to cause maximum damage to the target systems.
Types of fingerprinting
Fingerprinting techniques rely on detecting patterns and observing differences in the network packets generated. There are two types of fingerprinting — active and passive.
- Active fingerprinting involves sending TCP or ICMP packets to a system and analyzing the response from the target. The packet headers contain various flags that cause different operating systems and versions to respond differently.
- However, active fingerprinting brings with it the risk of easy detection.
- Passive fingerprinting techniques are stealthy in nature as they do not involve sending any packets to the target system. They rely on network sniffers to detect patterns in the usual network traffic (a small illustrative sketch appears at the end of this section).
- Different operating systems have different TCP/IP implementations. Passive fingerprinting uses this to determine the possible OS used by the target.
- After a fair amount of data is gathered, it can be used to analyze the target system. This technique is considered less accurate than active fingerprinting.
Organizations must regularly implement active and passive fingerprinting techniques on their networks to understand what an attacker would be able to access. This information can assist in enhancing OS and network security. Apart from this, there are a few other measures organizations can implement.
- Ensure that web servers, firewalls, intrusion prevention systems, and intrusion detection systems are properly configured and monitored to restrict active fingerprinting by attackers.
- Network interface cards must not be enabled to work in promiscuous mode unless absolutely necessary. In such cases, they must be strictly monitored to prevent passive fingerprinting attacks.
- Regularly monitor the log files for any sign of unusual activity.
- System administrators must patch security vulnerabilities as soon as possible.
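To illustrate the passive technique described above, here is a small, hypothetical Python sketch using Scapy (a third-party library, assumed to be installed; sniffing requires administrator/root privileges). It applies a crude, well-known heuristic, guessing the OS family from the observed TTL (roughly 64 for Linux/Unix, roughly 128 for Windows); real passive fingerprinting tools combine many more signals such as TCP window size and option ordering.

    # Sketch: passive OS guessing from sniffed traffic using Scapy.
    # The TTL heuristic is deliberately crude: ~64 suggests Linux/Unix,
    # ~128 suggests Windows. Real tools combine many more TCP/IP signals.
    from scapy.all import IP, TCP, sniff

    def guess_os(ttl: int) -> str:
        if ttl <= 64:
            return "likely Linux/Unix (default TTL 64)"
        if ttl <= 128:
            return "likely Windows (default TTL 128)"
        return "unknown (unusual TTL)"

    def inspect(packet) -> None:
        if packet.haslayer(IP) and packet.haslayer(TCP):
            src = packet[IP].src
            ttl = packet[IP].ttl
            window = packet[TCP].window
            print(f"{src}: ttl={ttl}, tcp_window={window} -> {guess_os(ttl)}")

    if __name__ == "__main__":
        # Observe 20 TCP packets; nothing is sent onto the network.
        sniff(filter="tcp", prn=inspect, count=20, store=False)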
<urn:uuid:826db0e2-d1b6-464a-bca9-25bf09fa31c2>
CC-MAIN-2022-40
https://www.infosec4tc.com/2019/10/14/what-is-cybersecurity-fingerprinting/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00467.warc.gz
en
0.909358
550
3.421875
3
Hewlett-Packard researchers announced Tuesday that they have developed technology that could significantly boost the performance capabilities of an array of computer chips, while cutting back on power consumption at the same time. If the research proves sound and makes it to market, its impact could be felt across a broad spectrum of technologies from the automotive and airline industries to consumer electronics to the military. The revolutionary chip design, featured in the Jan. 24 issue of Nanotechnology, increases the transistor density of adaptable chips — known as "field programmable gate arrays" (FPGAs) — by a factor of eight. The trick, according to HP researchers Stan Williams and Greg Snider, is to eliminate the wiring running between the chip's transistors and instead place a layer of switches connected with nanowires, measuring only a few atoms in thickness, above them. "As conventional chip electronics continue to shrink, Moore's law is on a collision course with the laws of physics," said Williams, an HP senior fellow and director of Quantum Science Research at HP Labs. "What we've been able to do is combine conventional CMOS technology with nanoscale switching devices in a hybrid circuit to increase effective transistor density, reduce power dissipation, and dramatically improve tolerance of defective devices." The research could be the answer to a problem that has bedeviled the chip industry for more than a decade: how to continue to reduce the size of computer chips without increasing their cost. Throughout its history, the silicon chip manufacturing industry has relied on a reduction in the size of the transistor to make smaller, faster and, most importantly, cheaper chips. Until now, progress has been in sync with Moore's Law, a theory that predicts the number of transistors on a chip will double every two years. In recent years, however, chip manufacturers have found it increasingly difficult to shrink costs even as chip components have continued to decrease in size. To keep prices low, chip makers have been forced to sacrifice energy efficiency and performance in favor of reducing manufacturing costs in some cases. Layering the wiring system will result in a smaller chip in which the transistors are more tightly packed without necessitating a reduction in size for the transistor, according to HP. That means manufacturers would not need to make any modifications to their current production facilities. "The expense of fabricating chip is increasing dramatically with the demands of increasing manufacturing tolerances," said Snider, who is a senior architect in quantum science research at HP Labs. "We believe this approach could increase the usable device density of FPGAs by a factor of eight using tolerances that are no greater than those required of today's devices."
Brave New World
Though the technology exists only as a simulation, the company expects to have a prototype by the end of the year. If all goes well, said Rob Enderle, principal analyst at the Enderle Group, this could "move the market up three generations in the time it typically would take to do one." HP's discovery is very significant, he told TechNewsWorld. The microprocessor industry is having difficulty with power efficiency and is also running into a number of problems as it continues to shrink chips to ever smaller sizes.
“This technology could help the industry move much more quickly, and if [the HP chip] hits targets, could allow them to actually accelerate significantly the next generations of microprocessors,” he said. Enderle predicts that the new design will have a bigger impact on processors in the near term, but will impact a number of closely related chip designs, including memory chips, in the long run. Consumers and businesses can expect the new chips — which will mostly likely not make an appearance in electronics before 2011 — to power smaller, more power-efficient and higher-performance devices at a lower price. “Think thinner iPods, more intelligent automobiles and homes, and increased power efficiency in appliances,” Enderle explained. “In addition, [the new chips] should also make things less expensive to build, which should translate into lower prices.” As with any technology breakthrough, it is difficult to predict the full spectrum of changes the HP design will bring. However, according to Enderle, it is not hyperbole to say the sky may be the limit. “This is one of those things where it is difficult to imagine the full impact before the event,” Enderle said. “It will touch so many technologies — from consumer to military, from automotive to airline, from entertainment to law enforcement — that the total change, if it makes it to market as expected, could dwarf our imaginations.”
<urn:uuid:fe83a999-954c-4900-93e9-fa7939ac6379>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/hp-researchers-give-chips-a-nano-spin-55194.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00467.warc.gz
en
0.953784
983
3.4375
3
Sound advice from our friends at The National Cybersecurity Alliance: Personal Information is Like Money. Value it. Protect It. Your devices make it easy to connect to the world around you, but they can also pack a lot of info about you and your friends and family, such as your contacts, photos, videos, location and health and financial data. Follow these tips to manage your privacy in an always-on world. - Secure your devices: Use strong passphrases, passcodes or touch ID features to lock your devices. These security measures can help protect your information if your devices are lost or stolen and keep prying eyes out. - Think before you app: Information about you, such as the games you like to play, your contacts list, where you shop and your location, has value – just like money. Be thoughtful about who gets that information and how it’s collected through apps. - Now you see me, now you don’t: Some stores and other locations look for devices with WiFi or Bluetooth turned on to track your movements while you are within range. Disable WiFi and Bluetooth when not in use. - Get savvy about WiFi hotspots: Public wireless networks and hotspots are not secure, which means that anyone could potentially see what you are doing on your mobile device while you are connected. Limit what you do on public WiFi, and avoid logging in to key accounts like email and financial services on these networks. Consider using a virtual private network (VPN) or a personal/mobile hotspot if you need a more secure connection on the go. Keep A Clean Machine - Keep your mobile phone and apps up to date: Your mobile devices are just as vulnerable as your PC or laptop. Having the most up-to-date security software, web browser, operating system and apps is the best defense against viruses, malware and other online threats. - Delete when done: Many of us download apps for specific purposes, such as planning a vacation, and no longer need them afterwards, or we may have previously downloaded apps that are no longer useful or interesting to us. It’s a good security practice to delete all apps you no longer use.
<urn:uuid:92345ea8-8845-4046-bc7c-c98a6634b4cb>
CC-MAIN-2022-40
https://decyphertech.com/always-on-privacy-basics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00667.warc.gz
en
0.941961
447
2.6875
3
What is Ransomware? An Easy-to-Understand Guide
Ransomware, or ransom malware, is a type of malware used to prevent access to computer systems by infecting them with a virus and threatening not to remove it until demands are met. In most cases, the ransom is monetary, with payment demanded in cryptocurrency (e.g., Bitcoin). Depending on the type of ransomware, denial of access could be permanent if the ransom is not paid. Ransomware has wreaked havoc on organizations and individuals for more than 30 years. In 1989, Joseph L. Popp, a Harvard-educated biologist, introduced a Trojan by sending 20,000 compromised diskettes named "AIDS Information—Introductory Diskettes" to attendees of a World Health Organization AIDS conference. The Trojan encrypted file names on the computers and hid directories for the systems where users inserted the diskette. Then, a message popped up, telling users to pay $189 to PC Cyborg Corp. (by mail to a PO box in Panama) to have their systems decrypted. Ransomware criminals are as varied as their approaches and targets. However, what remains consistent about ransomware is that small and medium businesses are the target of the bulk of attacks, because they lack the depth of security infrastructure that larger organizations have, making them easier targets. Ransomware is used by individuals, small groups, and crime syndicates. Its accessibility and ease of use mean that would-be criminals can buy ransomware on the dark web or use ransomware-as-a-service. The primary roles in a ransomware operation are:
- Ransomware procurement (create it or buy it) and hosting
- Campaign development and execution
- Payment collection and distribution
The more committed cybercriminals band together in networks to leverage reach, resources, and skills. The way these ransomware networks organize differs, but the three most common structures are:
1. Consolidated ownership and operations
One organization or individual controls all three operational functions and keeps 100% of the profits.
2. Channel-styled operations
The lead organization handles ransomware procurement and hosting as well as payment collection and distribution. Campaign development and execution, or the spread of ransomware, is managed by a third-party organization or individual who generally receives 50-75% of the profits. This model is popularly known as ransomware-as-a-service.
3. Ransomware infrastructure-in-a-box
A third-party service provider packages and sells the bundle of products and services needed to launch ransomware attacks and collect payment. Then the attacker procures the ransomware, buys the bundled solution, and keeps 100% of the profits.
How Ransomware Works
There are three main types of ransomware, ranging from annoying to potentially devastating:
- 1. Scareware Ransomware
Scareware malware tricks users into believing that their system is infected and they need to purchase a product to repair it. Users are inundated with pop-up notifications intended to bully them into buying the fake solution. While the pop-ups are annoying, there is no underlying threat to the systems until users click on their malicious links.
- 2. Screen Lockers or Locker Ransomware
Screen locker ransomware is a form of malware that freezes users out of their systems. It blocks them from logging in or accessing files. Payment is demanded to regain access.
A common tactic for locker ransomware is to put an official-looking seal on the page (e.g., FBI or US Department of Justice) with a note stating that illegal activity has been detected on the computer and the user must pay a fine. Screen locker ransomware uses non-encrypting malware to lock the infected computer.
- 3. Encrypting Ransomware or Crypto Ransomware
Encryption ransomware is a form of malware that uses complex algorithms to lock all data on the targeted system. The danger of crypto ransomware is that it usually cannot be decrypted without a key. Two signs a system may have been infected by ransomware:
- 1. The screen is locked and shows a message about how to pay to unlock the system, and/or the file directories contain a "ransom note."
- 2. Files have a new extension appended to the file names, such as .ecc, .ezz, .exx, .zzz, .xyz, .aaa, .abc, .ccc, .vvv, .xxx, .ttt, .micro, .encrypted, .locked, .crypto, _crypt, .crinf, .r5a, .XRNT, .XTBL, .crypt, .R16M01D05, .pzdc, .good, .LOL!, .OMG!, .RDM, .RRK, .encryptedRSA, .crjoker, .EnCiPhErEd, .LeChiffre, .keybtc@inbox_com, .0x0, .bleep, .1999, .vault, .HA3, .toxcrypt, .magic, .SUPERCRYPT, .CTBL, .CTB2, .locky, or a 6-7 character extension consisting of random characters (a simple scanning sketch appears a few paragraphs below).
Ransomware Entry Points
Ransomware depends not on the complexity of its code, but on the vulnerabilities of its targets. At its core, ransomware is a worm looking for a hole. Preparation for a near-inevitable ransomware attack helps to prevent the malware from breaching systems by closing those holes. Many organizations have porous security perimeters, especially considering the spike in remote workers. However, ransomware usually finds easier access, entering from a download delivered via a phishing email, because that point of entry requires the least effort on the part of the attacker. The ransomware appears as a link or attachment, often from a known source, with an enticement to click it. The attachment or link is an executable file that unleashes the ransomware. Inadvertent downloads of malware from an infected website—sometimes executed by clicking, others by simply landing on the site—are also popular attack points for ransomware. (This includes chat and social media messaging.) This stealthy ransomware enters systems through vulnerabilities in various browser plugins, with the delivery mechanism being merely visiting a website. This ransomware, known as drive-by ransomware, is delivered in the background, often without the user being aware of it. Other entry points include good old-fashioned social engineering and malware carried on USB drives. More sophisticated ransomware attacks take advantage of backdoors or vulnerabilities in systems and networks. Attackers probe targets to find weaknesses in security systems, such as lapsed patches and updates, gaps in the configuration of security tools, and non-secure remote users.
Ransomware Attack Profiles
Ransomware attacks do not necessarily begin at the time of entry. Often, ransomware works quietly, without users noticing it. It lurks in the background while it prepares for its attack on the point-of-entry system or spreads across the network to other systems before activating and making its presence known. Sometimes ransomware lies dormant after download or downloads in segments to avoid detection. Regardless of its download timeline, once file lockdown begins, ransomware acts quickly—taking between 18 seconds and 16 minutes to encrypt 1,000 files.
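As a simple illustration of the "new extension" warning sign listed above, here is a hypothetical Python sketch that walks a directory tree and flags files carrying a small sample of those extensions. It is illustrative only; real endpoint tools combine extension checks with behavioral and many other indicators.

    # Sketch: flag files whose extensions match a small sample of the
    # ransomware extensions listed above. Illustrative only.
    from pathlib import Path

    SUSPECT_EXTENSIONS = {
        ".ecc", ".ezz", ".exx", ".zzz", ".micro", ".encrypted",
        ".locked", ".crypto", ".crinf", ".xtbl", ".locky", ".vault",
    }

    def scan(root: str) -> list:
        hits = []
        for path in Path(root).rglob("*"):
            if path.is_file() and path.suffix.lower() in SUSPECT_EXTENSIONS:
                hits.append(path)
        return hits

    if __name__ == "__main__":
        for suspicious in scan("."):  # scan the current directory tree
            print("Possible ransomware artifact:", suspicious)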
Ransomware has two approaches to encryption: Simpler versions use the encryption functions on Windows and Unix, including macOS and Linux, while more sophisticated ransomware uses custom encryption implementations to bypass security software. “Off-the-shelf” open-source projects offer packaged ransomware. No matter the type of ransomware attack, once files are encrypted, no one can decrypt them without the attacker’s decryption key. After files are locked down, the ransomware presents a message (i.e., a ransom note) that tells users: - What has occurred - The amount and currency of the ransom - Where to send the payment - What will happen if the ransom is not received Ransom notes usually reveal the type of ransomware used for the attack. Who Does Ransomware Target? Attackers can be loosely classified into two groups, based on their typical ransomware targets: - 1. Big-game hunters They target organizations with high-value data or assets, especially those sensitive to downtime, as they are more likely to pay a ransom. - 2. Spray and pray attackers This approach directs attacks at an acquired list of emails or compromised websites. These smaller, generic ransomware attacks cause significant harm and disruption because of their scale. Five types of organizations that are prime ransomware targets are: - 1. Professional services Service-oriented businesses, such as real estate, accounting, law firms, and other small-to-medium-sized businesses have been ransomware targets. - 2. Healthcare Sensitive information makes hospitals and clinics ransomware targets. Both must have electronic medical records accessible to administer and monitor patient care. - 3. Education Public school districts, trade schools, colleges, and university systems have all been ransomware targets. All are susceptible because of disruption to classes and the sensitive student data that they store. - 4. Manufacturing Manufacturers have been targeted for ransomware attacks because many require operations to run factory production lines around the clock. Disruption would create impacts across the supply chain. - 5. Infrastructure Industrial control systems (ICS) are ransomware targets because of their wide-ranging dependencies. Holding critical infrastructure hostage could put access to energy, water, and other utilities at risk. How to Prevent Ransomware The most effective way to prevent ransomware attacks is with a combination of technology and user training. While technology is an excellent way to prevent a ransomware attack, people can undermine even the most sophisticated cybersecurity tools. - Engage users Ongoing training coupled with continual education and awareness messages help users to not only understand ransomware threats, but learn how to avoid and prevent potential attacks. - Take advantage of technology In addition to solutions for general data protection with detection, monitoring, and response capabilities, consider multi-layered ransomware protection solutions. - Perform back-ups Schedule regular back-ups and, if possible, encrypt and isolate back-ups to protect them from network breaches. - Segment networks Protect IT systems by controlling the flow of traffic between networks and subnetworks to prevent unauthorized lateral movement. - Install all patches Keep computers, networks, mobile devices, and other systems safe from known vulnerabilities by installing patches when they become available. Ransomware removal approaches vary depending on the type of attack. 
Following is an overview of tactics for ransomware removal. How to Remove Screen Locker Ransomware Most screen locker ransomware can be taken care of with removal tools. There are a number of free removal tools made available by vendors who support the fight to put a stop to the profitability of ransomware. How to Remove Encrypting or Crypto Ransomware First, neutralize the malware with antivirus or other programs. In the case where backups exist, programs can be executed to scan systems and delete the ransomware malware. Once the ransomware has been removed, files can be restored from backups. If there is not a backup for the infected systems, ransomware recovery is more complicated, and success is not guaranteed. Use a tool to scan the system and identify the specific strain of ransomware. In some cases, there are remedies to remove the malware. If not, a ransomware decryptor tool can be used. This searches for and applies decrypting keys, which are available for free for certain types of ransomware. If a decryption key is not found, there are two options: - Put your data “on hold” and wait for a solution for that specific ransomware type. - Evaluate the need to pay the attacker’s ransom. - The number of ransomware attacks increased by more than 150% over the past year, and these attacks are projected to cost businesses $11.5 billion, in addition to the cost of loss of customer and partner trust. - North America was the most targeted geographic region, with 66% of one research company’s ransomware alerts coming from organizations in North America. - In professional services, more than 70% of ransomware incidents involved companies with fewer than 1,000 employees, and 60% had revenues of less than $50 million. - More than 18 million patient records were impacted by ransomware attacks on healthcare organizations, a 470% increase with an estimated cost of almost $21 billion. - The education sector reported 31 ransomware incidents in Q3 2020, a 388% increase between the second and third quarters of 2020. - The number of reported ransomware attacks on manufacturing entities more than tripled in 2020 compared to the previous year. - Ransomware attacks against industrial entities jumped more than 500 percent over the last two years (as of 2020). How to Respond to Ransomware Ransomware is a frightening prospect, and time does matter in terms of a response. However, it is important to consider how to respond to ransomware before taking action. If ransomware is detected, a few pre-remediation steps can help with overall recovery. - 1. Immediately disconnect the infected device. - 2. Create a system backup - 3. Disable any cleanup or system optimization software. - 4. Identify the type of ransomware. - 5. Record evidence of the ransomware attack. Should Organizations Pay the Ransom? Generally, security experts and law enforcement agencies do not support paying ransom in response to a ransomware attack. The primary reason is that there is no guarantee that the files will be released and the extortion will stop. According to the FBI, “It does not guarantee you or your organization will get any data back. It also encourages perpetrators to target more victims and offers an incentive for others to get involved in this type of illegal activity.” And, organizations that pay ransom are frequently subjected to future attacks, simply because they demonstrate a willingness to negotiate financially with the attackers. There are a number of industry and government organizations fighting ransomware. 
An example of a joint effort is “No More Ransom,” an initiative by the National High Tech Crime Unit of the Netherlands’ police, Europol’s European Cybercrime Centre, Kaspersky, and McAfee. The group’s goal is to stop the payment of ransom by helping victims retrieve their encrypted data without having to pay a ransom after an attack. Prominent ransomware families and their characteristics include the following. DarkSide - Discovered in August 2020 - Caused massive gasoline shortages at U.S. east coast gas stations in a 2021 attack and stole a large amount of data from a chemical distribution company the following month - Ransom of nearly $5 million paid, with the majority later recovered by the Justice Department REvil (a.k.a. Sodin and Sodinokibi) - Discovered in April 2019 - Encrypts victims’ files very quickly - Doubles ransom demand if not paid in time - Operates via ransomware-as-a-service - Launched an auction site to sell stolen data - Discovered in January 2020 - Targets industrial control systems (ICS), rendering installed automated devices non-operational by stopping operations and processes - Includes a static “kill list” that stops many anti-virus solutions - Evades detection by modifying file extensions with a hexadecimal, five-random character string rather than following a uniform extension - Believed to be the first for-profit ransomware designed to shut down specific processes used in ICS NetWalker (a.k.a. Mailto) - Discovered in September 2019 - Spreads through a VBS script that is attached to phishing emails and executable files that spread through networks - Appends files with a random character string extension - Operates as ransomware-as-a-service - Discovered in April 2019 - Spawned at least eight variations - Provides a payment portal where victims can see the amount of ransom, the countdown, and the Bitcoin wallet address - Shares most of its code with the BitPaymer ransomware - Launched a site to shame victims who do not pay a ransom and to publish their data Maze (previously known as ChaCha) - Discovered in May 2019 - Launches attacks by using exploit tools called Fallout and Spelvo - Targets Windows systems in large organizations - Encrypts and exfiltrates data with a threat to publish the information if the ransom is not paid - Considered one of the most notorious strains of ransomware - Discovered in December 2019 - Targets remote management software (RMM), software commonly used by managed service providers, to prevent attacks from being detected and stopped - Takes victims’ files before encrypting them and threatens to publish the files if the ransom is not paid - Distributes ransomware payloads via virtual machines - Utilized Facebook ads to pressure a victim into paying a ransom - Discovered in February 2019 - Adds the “.clop” extension to every encrypted file - Publishes the data on a leak site called ‘CL0P ^ _- LEAKS’ if the ransom is not paid - Deactivates local security systems such as Windows Defender and Microsoft Security Essentials to expand the scale of the attack - Distributed via fake software updates, trojans, cracks, and unofficial download sources - Discovered in March 2020 - Threatens victims with publishing sensitive data if they do not pay the ransom - Encrypts victims’ data by using the vulnerability of a remote desktop network and VPN - Discovered in August 2018 - Considered one of the largest and most active ransomware-as-a-service operators - Targets large enterprises and government agencies - Compromises systems using TrickBot, a malware Trojan - Discovered in December 2019 - Targets the education and software industries - Deployed
in a Trojanized version of Java Runtime Environment and compiled in ImageJ - Attacks Windows and Linux using the Java image format as part of the attack process - Denies access to the administrator after it infects the system by accessing file servers and the domain controller - Discovered in May 2020 - Described as human-operated ransomware, with attackers researching targets - Accumulates network access and maintains persistence on target networks - Stays dormant until the best time to execute for the most financial gain is determined - Bypasses event logging using a deployed remote manipulator system Zeppelin (previously Vega or VegaLocker) - Discovered in November 2019 - Designed to stop running on machines that are based in Russia - Operates as ransomware-as-a-service - Targets technology and healthcare companies in Europe and the US - Believed to have conducted attacks through managed security service providers (MSSPs) - Discovered in July 2019 - Infects organizations through unprotected or poorly secured RDP ports - Disables Volume Shadow Copy Service (VSS) to make data recovery difficult - Ignores critical system files and objects stored in the Sample Music folder - Targets corporate environments - Discovered in January 2019 - Affects enterprise networks previously compromised by Qakbot and Emotet Trojans - Targets businesses located in the US, Canada, the Netherlands, and France - Leverages automated and manual components in its attacks - Uses a signed executable as part of the payload - Offers security consulting services - Discovered in March 2020 - Follows the big-game-hunting approach, targeting large companies and government networks with substantial uptime requirements - Deployed on networks previously infected with the Qakbot trojan - Utilizes the CVE-2019-0859 Windows vulnerability to gain administrator-level access on infected hosts - Known for seeking high ransoms, as high as $3 million - Discovered in January 2020 - Configured to overwrite the master boot record (MBR), a more destructive approach to ransomware than typical approaches - Offered as ransomware-as-a-service with a private ransomware builder that can be used to generate new Thanos ransomware clients based on forty-three configuration options - Steals and encrypts files, and changes file extensions to .crypted - Discovered in March 2016 - Targets Microsoft Windows-based systems, infecting the master boot record to encrypt the hard drive’s file system table and prevent Windows from booting - Infected millions of people during the first year of its release - Considered the first ransomware-as-a-service - Morphed into NotPetya, which was released in June 2017 Understand Ransomware to Fight It Ransomware has the attention not just of IT, but of executive teams. It ranks among the top priorities for business and IT leaders. To fight ransomware, organizations must understand: - Perpetrators and how they operate - How ransomware works - Prevention tactics - Remediation and best practices for an effective response - Ongoing monitoring and maintenance considerations A proactive approach to ransomware prevention can significantly reduce the risk of attack. However, in the event of an attack, planning is the best front-line defense. Effective response procedures expedite containment of the incident, prevent data loss, and streamline the recovery process. When assessing security as it relates to ransomware, remember content protection and governance. 
Machines can be replaced, but content may be difficult or impossible to recover. Securing content and providing granular access if a rollback is required enables business continuity after a ransomware attack. Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 16,000 customers, serving millions of users worldwide. Last Updated: 5th August, 2021
<urn:uuid:4f1da1fa-33c1-43e3-aba4-d4b6757982da>
CC-MAIN-2022-40
https://www.egnyte.com/guides/governance/ransomware
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00667.warc.gz
en
0.92124
4,610
3.484375
3
How many robots does it take to make a cup of coffee? Think about it. Before your name is incorrectly scribbled onto a cup, consider the processes, warehousing, transport and logistics needed to get those green beans to your barista. Supply chains can be complex, but robotics help streamline these otherwise complicated networks. That being said, as demand for fast deliveries continues to grow and a need to react quickly to trends becomes more prevalent, how can robotics help suppliers keep up? Robots have a long history of keeping the supply chain moving. In fact, one of the world’s first industrial robots was created for the sole purpose of transferring objects from one place to another. Today, most tasks that are vital to the supply chain, like the movement of products through a warehouse, rely on robots as standard. Consider Automated Guided Vehicles (AGVs) as an example. These portable robots use markers, magnets and vision systems to navigate a warehouse floor. The machines can move faster than a human worker, transporting goods from one place to another without the need for any intervention. What’s more, they are not restricted to the weight limits that a human worker could be capable of lifting. Transporting goods from one place to another might be a simple task, but ultimately, it is better automated. The AGV market reflects this notion. According to a Global Automated Guided Vehicles Market report, the market is expected to reach $24.61 billion by 2020, which represents a growth of 12 percent in the five preceding years. Robotics have proven to be a vital part of supply chain automation. But, looking to the future, this technology will find a broader range of supply chain applications — beyond the basic transfer of objects. Next generation robotics According to a study published by the Information Services Group (ISG), 72 percent of enterprises will increase their investments in robotic process automation by 2019. The surge in investment is likely to be prompted by the success of other companies that have invested heavily in the technology. For example, Amazon has made no attempt to hide its ambitious goals. Since its major investment in robots in fulfilment centers, the company boasts a 50 percent increase in capacity, compared to a facility without robotics. Amazon’s robots were supplied by a company that Amazon acquired, Kiva Systems. The robots automate picking and packing processes in large warehouses. Supply chains and robots form a natural relationship, automating essential, but often monotonous, tasks. However, new generations of robots are changing the role of this technology in the supply chain. No longer are robots limited to basic applications in warehousing and fulfilment, but for more advanced supply chain operations. Robots may have always transported coffee beans in the fulfilment center. In the future, prepare to see your coffee ground, packaged, delivered and brewed by robots too. Maybe this time, they will spell your name right. Jonathan Wilkins is marketing director at EU Automation.
<urn:uuid:2caa7526-132d-451b-8dcb-1aeee45c17f9>
CC-MAIN-2022-40
https://www.mbtmag.com/home/blog/13247145/robotics-in-the-supply-chain
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00667.warc.gz
en
0.950343
607
3.015625
3
Cybersecurity has become the buzzword in the medical arena with the increasing use of wireless, internet and network-connected devices. Due to the surge in the use of connected medical devices, both in hospitals and at home, it has become more important than ever before to ensure that such devices are sufficiently safeguarded from cyberattacks, which have the potential to not only render the devices inoperable but also disrupt the delivery of patient care across healthcare facilities. With a view to addressing this threat, various health regulatory agencies around the world are recommending that the industry manufacture medical devices that are adequately resilient to cybersecurity threats and risks and that, once so assured, can be considered trustworthy devices. But what exactly are trustworthy devices? And what are these Cybersecurity Threats and Risks which are being mentioned time and again? In this blog, we break down the definition of trustworthy devices and establish that it is the software that these devices rely upon for safe operation that eventually makes or breaks the device. What is a trustworthy device? In 2018, the U.S. Food and Drug Administration (FDA) issued its draft guidelines titled Content of Premarket Submissions for Management of Cybersecurity in Medical Devices with the intent to, inter alia, assist the medical industry to promote the design and development of medical devices by safeguarding them against cybersecurity threats and risks. In these guidelines, we first come across the definition of trustworthy devices: “Trustworthy Device – a medical device containing hardware, software, and/or programmable logic that: (1) is reasonably secure from cybersecurity intrusion and misuse; (2) provides a reasonable level of availability, reliability, and correct operation; (3) is reasonably suited to performing its intended functions; and (4) adheres to generally accepted security procedures.” If a medical device meets the above criteria, it will be considered a trustworthy device. However, a device is only as safe as the software within it. Trustworthy software is needed to achieve stable and successful solutions in any industrial space, including the medical devices industry. Therefore, it is important that we identify the trustworthiness of the software in order to establish the trustworthiness of the medical device. The trustworthiness of software is generally based on five characteristics: safety, security, privacy, reliability, and resilience. These characteristics, directly and in combination, provide protection against hazards and threats related to environmental disturbances, human errors, system faults and possible attacks. In addition, trustworthiness concerns in any software are addressed by analyzing and undertaking proactive steps to address architecture, design, code quality and implementation procedures which would safeguard the software from safety hazards, security attacks and theft. Why is this important? What does it mean? The value of a medical device which relies on software for safe operations will either increase or diminish depending on how secure, reliable and resilient the software it harbors is. Therefore, it is crucial to integrate safe, process-oriented activities throughout the software lifecycle. Methods for proving and controlling the provenance of various software components, configurations and their pedigree further improve trust in the software. 
Trust and confidence in medical devices are also bound to increase if the software design and operation are made transparent, evidential and auditable to the end-user. Documentation demonstrating the trustworthiness of the software also helps both regulators and end-users quickly and efficiently assess the device’s safety and effectiveness vis-à-vis cybersecurity. Thus, supporting and safeguarding the software against cybersecurity threats and risks will ensure the safety of the medical device, which relies on the software for safe operations. It is when the software is foolproof that the medical device will remain safe and effective throughout its lifecycle. Interested in learning more? Click here to get in touch with Irdeto’s Connected Health team!
<urn:uuid:e6c5e5c3-a28e-440d-b40e-7a855b76c410>
CC-MAIN-2022-40
https://blog.irdeto.com/healthcare/what-is-a-trustworthy-device-how-to-ensure-they-are-trustworthy-and-why-its-important/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00667.warc.gz
en
0.948007
799
3.28125
3
The just-launched federal website provides a range of information around extreme weather and natural disasters. As wildfires, drought and flooding continue to batter swaths of the U.S., the Biden administration launched a new tool on Thursday to help communities prepare for extreme weather. The Climate Mapping for Resilience and Adaptation portal is an online dashboard that provides real-time and location-specific information about extreme weather threats. It features an interactive map that offers hazard-specific information, such as how many personnel are responding to a particular wildfire or what kinds of flood alerts have been issued in a certain community. The website, created in partnership with the National Oceanic and Atmospheric Administration and the Interior Department, also shares projections for future weather threats. With the portal’s assessment tool, users can hone in on specific counties, tribal lands or census tracts and learn what weather hazards might look like in the coming decades up to 2099. Information about climate-resilient building codes is also available. Besides climate data, the portal identifies disadvantaged communities that could be eligible for programs like Biden’s Justice40 Initiative, which aims to direct federal resources to areas disproportionately harmed by climate change and other environmental problems. “We wanted to build a portal that is science-based, pulling together the best data on impacts that communities have historically faced, what is happening right now, and perhaps most importantly, what the future may hold,” said David Hayes, special assistant to the president for climate policy, during a press conference on Thursday. From heat waves in California to flooding in Kentucky to droughts in New England, extreme weather has devastated communities this summer. And as the global temperature continues to rise, those threats are expected to become more severe and frequent, experts have said. NOAA has tracked climate and weather disasters since 1980 that result in over $1 billion in damages or costs. Since then, the number of these events has quadrupled. In the 1980s, there were on average three disasters over $1 billion each year, costing about $20 billion annually, explained NOAA administrator Rick Spinrad. In the 2010s, that jumped to 13 disasters for an average cost of $92 billion each year. The new portal, also called CMRA (pronounced “camera”), goes beyond data to share resources about federal funding opportunities, case studies of how communities are navigating climate threats, and information about other federal policies. CMRA was developed by the software company Esri. Funding for the project came through the infrastructure law Biden signed last year, according to the company. At Thursday’s press conference, Phoenix Mayor Kate Gallego described some of the weather her city has experienced since she was elected in 2019, including extreme heat and flash flooding. “It is so valuable for us to be able to have a tool like CMRA where we have all of the best scientific data out there,” Gallego said. While CMRA provides an overview of multiple hazards, the federal government has other online tools, such as Heat.gov and Drought.gov, that provide more in-depth information about specific threats, Hayes said. Molly Bolan is the assistant editor for Route Fifty.
<urn:uuid:9ce48517-c0e8-44c3-8e83-38d33adc4119>
CC-MAIN-2022-40
https://fcw.com/digital-government/2022/09/white-house-launches-tech-tool-help-communities-confront-climate-risks/376923/?oref=fcw-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00667.warc.gz
en
0.949189
662
3.34375
3
Prefabricated modular data centers have serviced a wide range of sectors throughout the years. This includes automotive, defense, healthcare, and education. Despite having demonstrated their relevance and importance in today’s society, they are not the most innovative technology around. Their rise can be attributed to hyperscalers like Microsoft, who have begun to venture into the prefab space. In 2020, Microsoft introduced Azure Space, which it has branded as a portable data center in a box made specifically for tough environments. As seen through industry experience, it is evident that prefab data centers are a proven and tested technology that has been utilized in various sectors for over a decade. Earlier versions of data center infrastructure management had some functionality lapses when first launched, which is one of the reasons why prefab data centers have been attracting attention. Mobility and scalability are the factors that drive the market. Clients are able to relocate their data centers instead of destroying them and rebuilding them at the new site. Many organizations consider scalability one of their biggest challenges. For prefab data centers, the modules can be replaced with ease when there is a need for updated technology or if they become obsolete. Prefab Data Centers for Military Use Prefab or modular data centers rapidly gained popularity among organizations. They are now being used in various industries and organizations such as educational establishments, healthcare sectors, life sciences, and government and defense. The unmatched flexibility of prefab data centers makes them suitable for environments like military camps. As defense departments grow more hi-tech, it is essential for armed forces around the world to have reliable data storage and computing power as near the front line as possible. Prefab data centers can be ruggedized for harsh military environments. Due to this, the military has been using prefab data centers for many years now. Various steps can also be taken to make sure that they are secure and shock-proof. Prefab data centers need to be sensitive compartmented information facility (SCIF) compliant on the frontline so they can prevent electronic surveillance and inhibit sensitive military and security information from leaving. Meanwhile, some manufacturers claim that their units have the ability to withstand small-arms fire. Prefabricated Data Centers for Healthcare Centers Healthcare is one of the industries that has been using prefab data centers for a long time now, and this became even more evident during the Covid-19 pandemic, as pharmaceutical companies raced to formulate, manufacture and distribute Covid-19 vaccines. Time was limited, so vaccine developers could not wait years for a data center; thus, existing prefab data centers were utilized. Compared to traditional facilities, which take 12 to 24 months of construction, prefab data centers take less than eight months. This quality is essential for situations that call for a fast scale-up. Hyperscalers also use prefab data centers; even with their own data warehouses, they colocate in facilities with power, cooling, and IT modules. Security concerns that surround these colocation sites must also be considered, and the partnership between hyperscaler and data center must always remain confidential. Construction Revolution Through Prefabricated Modular Data Centers One representative of a prefabricated modular data center is the containerized data center. 
Deployed in 20-foot or 40-foot containers that are easily maneuvered, this type of prefab data center offers fast deployment and is expandable, changeable, and movable. With few environmental limitations, it can be deployed at military sites, on oceangoing research vessels, at ore-exploration sites, and for disaster recovery. Containerized data centers are considered more powerful than legacy data centers, and their construction costs are less than half. Modular data centers evolved from containerized data centers for indoor use, even eliminating the container or shelter while retaining the advantages of the containerized design. Containerized data centers are the most common type of prefabricated modular data center, and customized data center infrastructure can also be considered a form of prefabricated modular data center. Typical selling points include:
- All in one, no zoning
- Prefabricated, in-house assembly-line production
- Quick and standardized installation
- Low cost and high power density
- Full administrator control
Wireless Monitoring For Pre-Fabricated Data Centers
The AKCP wireless monitoring solution enables tracking, alerting, and remote control of your prefab data center infrastructure and energy usage. Thus, you save on operational and staff costs, and you are permanently connected to the systems operating within the IT container, no matter where it is located. Data center monitoring with thermal map sensors helps identify and eliminate hotspots in your cabinets by identifying areas where the temperature differential between front and rear is too high. Thermal maps consist of a string of 6 temperature sensors and an optional 2 humidity sensors. Pre-wired to be easily installed in your cabinet, they are placed at the top, middle, and bottom – front and rear of the cabinet. This configuration of sensors monitors the air intake and exhaust temperatures of your cabinet, as well as the temperature differential from the front to the rear. A high front-to-rear temperature differential can indicate:
- Obstructions within the cabinet: Cabling or other obstructions can impede the flow of air, causing high temperature differentials between the inlet and outlet temperatures. The cabinet analysis sensor with pressure differential can also help analyze airflow issues.
- Server and cooling fan failures: As fans age, or fail, the airflow over the IT equipment will lessen. This leads to higher temperature differentials between the front and rear.
- Insufficient pressure differential to pull air through the cabinet: When there is an insufficient pressure differential between the front and rear of the cabinet, airflow will be less. The less cold air flowing through the cabinet, the higher the temperature differential front to rear will become.
When this data is combined with the power consumption from the in-line power meter, you can safely make adjustments in the data center cooling systems, without compromising your equipment, while instantly seeing the changes in your PUE numbers.
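The monitoring logic described above can be sketched in a few lines. This is purely illustrative and not AKCP product code: the sensor readings, alert threshold, and power figures are assumptions used to show how a front-to-rear temperature differential check and a PUE calculation fit together.

# Illustrative only: evaluate front-to-rear temperature differential per
# cabinet position and compute PUE from metered power. Threshold values
# and readings are assumptions, not actual sensor or product behavior.
CABINET_READINGS = {
    # position: (front_intake_C, rear_exhaust_C) at top, middle, bottom
    "top":    (22.5, 36.0),
    "middle": (22.0, 33.5),
    "bottom": (21.5, 31.0),
}

MAX_DELTA_T = 12.0  # assumed alert threshold in degrees Celsius

def check_cabinet(readings, max_delta=MAX_DELTA_T):
    """Flag positions where exhaust minus intake exceeds the threshold."""
    alerts = []
    for position, (front, rear) in readings.items():
        delta = rear - front
        if delta > max_delta:
            alerts.append(
                f"{position}: dT={delta:.1f}C - check for obstructions, "
                f"failing fans, or insufficient pressure differential"
            )
    return alerts

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_kw / it_equipment_kw

if __name__ == "__main__":
    for alert in check_cabinet(CABINET_READINGS):
        print("ALERT:", alert)
    print(f"PUE: {pue(total_facility_kw=180.0, it_equipment_kw=120.0):.2f}")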
<urn:uuid:955c9811-2c12-45f5-abf8-852b4392c950>
CC-MAIN-2022-40
https://www.akcp.com/articles/rise-of-prefabricated-data-centers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00667.warc.gz
en
0.936088
1,229
2.546875
3
What is a MAN-IN-THE-MIDDLE (MITM) ATTACK? In a man-in-the-middle (MITM) attack, the attacker eavesdrops on the communications between two targets, and then secretly relays and possibly alters the messages between the two parties who believe they are directly communicating with each other. In this scenario, the attacker successfully masquerades as another entity. The attacker also knows the content of the communication and can potentially tamper the message. It is quite akin to a mail carrier, who reads the content of the letter, or even replaces the content with their version. Potential outcomes of a man-in-the-middle attack include theft of a user ID and password for an online account, a local FTP ID and password, or a secure shell (SSH) or telnet session. An MITM attack targets an application, exploiting an application vulnerability such as a faulty secure sockets layer (SSL) configuration. In the man-in-the-middle MITM attack, weaknesses and vulnerabilities in application code and configuration are leveraged to compromise the security of the application. Differences Between Man-in-the-Middle and Meet-in-the-Middle Attacks The man-in-the-middle attack is sometimes confused with a meet-in-the-middle attack. But they are completely different. Man-in-the-middle attacks are an active attack on a cryptographic protocol. In this case, attackers can intercept, relay, and even alter messages. A meet-in-the-middle attack involves a time-space trade-off to drastically reduce the effort to perform a brute-force attack. For example, if one can devise a mechanism to reduce an operation with a 64-bit key that would need 2^64 brute-force operations to 2^32 operations, a brute-force attack becomes feasible. Such a cryptanalytic attack is called the meet-in-the-middle attack. Because of the meet-in-the-middle attack, a brute-force attack, which was considered earlier as impossible, can become feasible. Different Types of Man-in-the-Middle Attacks Man-in-the-middle attacks exploit a number of different vulnerabilities, including: 1. Address Resolution Protocol (ARP) Cache Poisoning ARP Cache Poisoning and Man-in-the-Middle Attacks Address Resolution Protocol (ARP) is used to dynamically map between internet host addresses and 10 Mb/s Ethernet addresses. ARP caches internet-Ethernet address mappings. In the normal ARP communication, the host computer sends a packet that has the source and destination IP address inside the packet. The host will broadcast it to all the devices connected to the network. The device, which has the target IP address, will only respond with the ARP reply with its MAC address in it. Then, communications take place between source and destination nodes. ARP is not a secure protocol, and it does not provide any authentication. As a result, the ARP reply packet can be easily spoofed. The spoofed ARP packet can be sent to the machine that sent the ARP request without knowing that this is not the actual machine but rather an attack. This results in data breaches. The ARP cache table is updated by the attacker, which routes all network traffic to the attacker. In ARP cache poisoning, the attacker controls the network router, monitors the network traffic, and spoofs the ARP packets between the host and the destination. This enables the attacker to perform a man-in-the-middle attack. ARP poisoning is a hacking technique that sends a forged ARP request or ARP reply. ARP protocol is a stateless protocol. The protocol processes ARP replies without assigning or considering authentication. 
As a result, the ARP cache can be infected with records that contain wrong mappings of IP-MAC addresses. A hacker can exploit ARP cache poisoning to capture network traffic between two nodes. In a local network, the attacker uses ARP poisoning to make a node think the attacker’s machine is a router. Then, the hacker turns on an operating system feature called IP forwarding. This feature enables the hacker's system to forward any network traffic it receives. The hacker forwards the traffic to the actual router, but the attacker captures all network traffic between the victim's node and router. DNS Spoofing and Man-in-the-Middle Attacks When someone tries to access a website in a browser, a Domain Name System (DNS) request is sent to the DNS server, and as a response, a DNS reply is obtained. This request-response exchange is mapped to a unique identification number. If the attacker can access the unique identification number, the attacker distributes a corrupt packet containing the identification number to the victim, thereby enabling the attack to be launched. DNS spoofing is a kind of man-in-the-middle attack, where the hacker intercepts a DNS request. The attacker returns a DNS record that leads to the attacker’s server instead of the actual intended server. As DNS protocol uses unencrypted request/reply, a secure version of the protocol called DNSSEC should be used to mitigate DNS spoofing. To avoid users needing to provide their credentials all the time, sessions are used in client-server settings. When the user authenticates to the application, the server remembers the user for a set amount of time. Sessions make the application convenient to use. But sessions allow attackers to bypass the authentication scheme, and there are multiple attack methods designed specifically to steal user sessions. When cookies are used as a session identifier, secure handling of the cookies needs to be ensured. Attackers will try to steal cookies, or more specifically, the session token information stored in those cookies. This attack is called session hijacking because it relies on stealing the token to access the victim’s authenticated session. Mitigation Techniques for Session Hijacking Some of the techniques used to mitigate session hijacking include: - Anonymizing the session ID prevents attackers from enumerating the session ID and guessing the next session ID. There are well-documented patterns to implement this in the web application back end developed in technologies such as .NET, Ruby, Node.js, Python, and Java. - Set a time-to-live session so that it gets invalidated after the elapse of this time. Session timeout defines the action window time for the user of the web application. If the session timeout is set to the minimal value in the context of the application, the time duration for the attacker to steal sensitive data in case of a security breach is limited to this time window. - Implement a two-tier session system with two states—one for a short period immediately after a user logs in, and another that keeps users logged in but with a lower access level. Users can access less-secure areas for a longer period of time, but when trying to access more privileged areas, the user must authenticate again. A new session is created with privileged access for a short period of time and then degrades to a low-level session again. - Secure the cookies. Secure cookies are a type of HTTP cookie that has secure attribute set. This limits the scope of the cookie to “secure” channels. 
For example, web browsers include secure cookies only when a request is made over an HTTPS connection and not over an HTTP connection. - Use HTTPS over the entire site and not just for login and registration pages. When HTTPS is set up, the cookie should be set as secure. This will stop the cookie from being sent unless it is part of an HTTPS request. It prevents situations where insecure content from the same domain is sent over HTTP along with the session ID. In addition to the above, extra measures need to be taken to avoid session hijacking attacks by binding the session to various user information such as the IP address, location information, or user agent. Since the information typically does not change in mid-use, these tactics can be used as a way to check if the session or account has been hijacked. Secure Sockets Layer (SSL) Hijacking and Man-in-the-Middle Attacks For proper security, cryptography needs authentication. In the context of secure sockets layer/transport layer security (SSL/TLS), certificates are used for authentication. When attempting to connect to a particular hostname in the browser, the expectation is that the server will present a certificate that proves it has the right to handle traffic for that hostname. If the user receives an invalid certificate, the right thing to do is to abandon the connection attempt as part of the SSL/TLS handshake. Unfortunately, browsers do not do so. Because the web is full of invalid certificates, it is almost guaranteed that none of the invalid certificates a user encounters will be the result of an attack. Thus, web browsers do not enforce SSL security as part of the SSL handshake. Instead, browsers issue a certificate warning. If the user ignores the certificate warning, it can potentially result in a man-in-the-middle attack. In mobile applications, a man-in-the-middle attack can be prevented with certificate pinning. This will prevent a data breach caused by a man-in-the-middle attack due to SSL/TLS hijacking. At the beginning of the SSL/TLS session, the client sends a “ClientHello” message. This indicates to the server that the client intends to begin an SSL/TLS communication. The client indicates which cryptographic and compression algorithms it can use and which SSL/TLS version it supports. The server responds with a “ServerHello” message that contains equivalent information to that present in the client’s hello. Thus, both the client and server establish the version and algorithms to be used. The server sends its certificate to the client, and if the server wants the client to authenticate itself, then it will send a request for the client’s certificate. The identity of the certificate is established using the certificate authority (CA) model. Unfortunately, a CA can be compromised, and it is difficult to determine whether a remote system is operated by the correct party, even when an authenticated certificate is presented. In applications, developers control both the client and the server software involved in the construction of the SSL/TLS channel. Therefore, the exact certificate the application expects to see can be set up. CA-signed certificates can be eliminated, and identities that the developer notarizes can be distributed. This is called pinning the certificate. Other SSL Man-in-the-Middle Attack Methods Other man-in-the-middle attack methods related to SSL/TLS include: - Exploitation of validation vulnerabilities. 
This man-in-the-middle attack uses a bug in the client when validating the credentials presented during the SSL/TLS handshake. If the validation is not implemented correctly, a bad actor can use a special invalid certificate or a certificate chain that cannot be distinguished from a valid one. - Rogue certificates. Rogue certificates are fraudulent CA certificates that are accepted by clients as genuine. They are difficult to obtain, but they are still a possibility. - Self-signed certificates. This attack presents the victim with a self-signed certificate that has most fields copied from the real one. Such a certificate is bound to generate a warning, but users are generally known to click through such warnings. How to Prevent Man-in-the-Middle Attacks Traditionally, static and dynamic vulnerability scanners have been used to detect vulnerabilities that are exploited during a man-in-the-middle attack. However, static and dynamic scanners simply cannot scale to support the requirements of modern software, whether in development or production. They lack the requisite accuracy, instead generating a huge number of false positives and false negatives. The reports they produce require significant time to triage, diagnose, and remediate. Finally, specialized skills and staff are required to manage the scans and review the reports. Security professionals with these skill sets are often hard to recruit and retain. A different approach is required to uncover SSL/TLS vulnerabilities in applications that lead to man-in-the-middle attacks: one that embeds security instrumentation within the software. Interactive application security testing (IAST) enables security teams to shift left for security testing and to automate detection of vulnerabilities. Real-time analysis of results and virtual elimination of false positives and false negatives make IAST more suitable for detecting and mitigating vulnerabilities. IAST can be made part of the continuous integration/continuous deployment (CI/CD) pipeline and can be automated. Also, the process can be integrated with an issue-tracking system like Jira and other development and QA tools.
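To make the certificate pinning idea described above concrete, here is a minimal sketch using only the Python standard library: the client retrieves the certificate served during the TLS handshake and compares its SHA-256 fingerprint against a value pinned at build time. The host name and fingerprint below are placeholders, not real values, and production apps would normally pin via their HTTP or mobile framework rather than raw sockets.

# Sketch of certificate pinning: fetch the server certificate during the TLS
# handshake and compare its SHA-256 fingerprint with a pinned value.
import hashlib
import socket
import ssl

PINNED_HOST = "api.example.com"   # placeholder
PINNED_SHA256 = "d4c9d9027326271a89ce51fcaf328ed673f17be33469ff979e8ab8dd501e664f"  # placeholder

def server_cert_fingerprint(host, port=443):
    """Return the SHA-256 fingerprint of the certificate the server presents."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest()

def connection_is_pinned(host):
    """Accept the connection only if the served certificate matches the pin."""
    return server_cert_fingerprint(host) == PINNED_SHA256

if __name__ == "__main__":
    if connection_is_pinned(PINNED_HOST):
        print("Certificate matches the pin; proceed with the session.")
    else:
        print("Pin mismatch: possible man-in-the-middle, abort the connection.")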
<urn:uuid:4184777c-9d7e-46dd-8658-9f069afd7eee>
CC-MAIN-2022-40
https://www.contrastsecurity.com/glossary/man-in-the-middle-attack?hsLang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00667.warc.gz
en
0.904462
2,635
3.765625
4
NASA is continuing efforts to recover the InSight lander's heat probe, which has made little progress since it began digging into the Martian surface in February. The agency said Thursday it is analyzing various ways of maneuvering InSight's robotic arm to assist the heat probe, which is known as "the mole" and is intended to dig up to 16 feet below Mars' surface to measure the amount of heat the planet emanates. During the operation, the team discovered up to 4 inches of duricrust, or thick cemented soil, beneath the planet's surface. The team is working on soil-scraping techniques as part of the recovery effort and plans to make the images captured by the lander available to the public. "This might increase friction enough to keep it moving forward when mole hammering resumes," said Sue Smrekar, deputy principal investigator for the InSight mission at NASA's Jet Propulsion Laboratory in Pasadena, Calif. JPL manages the InSight program, while the German Aerospace Center built the heat probe as part of the Heat Flow and Physical Properties Package instrument suite.
<urn:uuid:18a802d7-c549-4fef-a784-a5ef8168ec7a>
CC-MAIN-2022-40
https://executivegov.com/2019/10/nasa-continues-insight-lander-recovery-operations-in-mars/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00667.warc.gz
en
0.900915
237
2.9375
3
Mobile apps are a relatively new phenomenon, and yet in a short span, this ecosystem has gone through several overhauls already. The advances in app functionality and user experience are there to see for everyone, but equally important are the radical shifts in the security landscape. So let's take a walk down the memory lane and see what and how much has changed over the years. If anything, such a comparison will make us appreciate the intricacies involved in mobile application security today. It might even make us more determined and vigilant! The rise of independent researchers Earlier there were only two sources of discovering security holes: the app development end, and the app users. Of course, the users consisted of regular users and attackers both but finding a potential bug used to be largely a matter of chance. But these days, as the adoption of mobile devices has exploded, independent or University-backed researchers have risen. They perform academically advanced and sophisticated attacks on the mobile platforms, demonstrating their weakness. Consider the Drammer attack, which was the result of several researchers collaborating, and revealed a fundamental flaw in Android hardware. While on one hand, this means our mobile apps will be more secure in future, in the short term it generates food for the attackers and headaches for the likes of Google. Google funds studies aimed at Android bugs In the earlier days, bugs on mobile apps were considered normal. After all, the world was used to an era of massive enterprise platforms where it wasn't uncommon to find critical bugs even after a few years into production. So, the standard response by Google in the early days was to issue a security patch, and that was that. But the extent of damage revealed how sinister these vulnerabilities were if left unattended. As a result, Google has woken up to the scale of the problem, and begun to fund studies aimed at revealing fatal flaws in the Android hardware and software. It's a fabulous and brave move, and it makes for an interesting trend of modern mobile application security! Finding and fixing critical bugs is so important that companies are not leaving anything to chance. Or rather, they're using the power of chance as well. Bug bounties are a unique trend of modern mobile application security scene, where anyone who discovers a critical bug gets rewarded in hard cash. Google already has the Android "security rewards" program in place, but the money being awarded is more important. As per Google's blog, USD 550,000 was awarded in 2015, and in 2016 the company increased the spend by 33%! Certainly, for those who are very talented, determined, and lucky, becoming a bounty hunter of bugs is not a bad career choice! Anti-virus software on mobile phones When Android was launched, some users breathed a sigh of relief: "Oh, finally! A mobile platform built on the Linux core. What could be more secure?!" Except that, it didn't turn out to be anything closer. While the Linux kernel does its job well, a mobile phone is a highly personal device that needs a lot of permissions and works many times on your behalf. In other words, it's much more open to attacking and takeover. As a result, users who thought they were free of the Windows-era tyranny of having to install an anti-virus now have to tolerate one on their Android phone! 
Rise of automated mobile application security testing As one vulnerability after another piled up and the security checklist numbering started to run into three digits or more, the ecosystem responded with automated security testing. This includes static analyzers for code-level security, dynamic analyzers to gauge app behavior, and much more. While this doesn't make your app bullet-proof, it takes care of most of the important cases that you're likely to miss. These sure are interesting times to be living as someone concerned about mobile application security. We'd like to motivate you to not see this extreme volatility as a hassle, but a zig-zag path to the promised land of hacker-proof apps! Hack proof apps are difficult to come across. Target, Walmart, and eBay have all been hacked. How does your app fare?
<urn:uuid:cf51f511-2600-44b4-a0b1-b9efce5adcf0>
CC-MAIN-2022-40
https://www.appknox.com/blog/how-mobile-application-security-has-evolved
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00667.warc.gz
en
0.965202
844
2.53125
3
Mostly, a Linux system administrator installs rpm packages on Linux by using the yum command, but you can use the rpm command in Linux to install rpm packages on CentOS, Fedora, RHEL, etc., when the packages do not exist in the repository. At the end of this article, you will learn how to install an rpm on CentOS. The yum command downloads packages from the official CentOS repository and installs them on your system. The repository contains thousands of RPM packages. But if you want to install an rpm package which does not exist in the repository, the rpm command in Linux will be helpful for installing rpm packages on CentOS and other rpm-based Linux distributions. RPM Package Manager (RPM) is a free and open-source package management system for installing, uninstalling and managing software packages in Linux. Prerequisites to run rpm command in Linux Before you begin, you need to know about the prerequisites, so that later you will not face problems running the rpm command in Linux. Keep in mind the following points. - A user account with sudo privileges or the root user. - You must have access to a terminal window/command line. - RPM, DNF, & YUM package managers (all included by default). - The RPM package you want to install is built for your system architecture and your CentOS version. Check system architecture and version If you have a 32-bit operating system installed on your system, you must download a 32-bit rpm package; a 64-bit package doesn't work on a 32-bit system. So, you must know the system architecture. You can use the following command to get information about the system architecture.
[root@localhost ~]# uname -r
4.18.0-147.5.1.el8_1.x86_64
[root@localhost ~]#
In the above example, you can see I am running 64-bit CentOS, so I can install 64-bit rpm packages. Download the rpm package for CentOS Basically, users prefer to use a web browser to locate and download a .rpm file, but you could also use command-line tools like wget or curl. In this example, I am going to install one of the most famous software packages, TeamViewer. First, I will use the wget command to download TeamViewer. You must know the download address of the package. Alternatively, you can use a web browser to download the TeamViewer package.
[root@localhost ~]# wget https://download.teamviewer.com/download/linux/teamviewer.x86_64.rpm
--2020-05-03 13:20:16-- https://download.teamviewer.com/download/linux/teamviewer.x86_64.rpm
Resolving download.teamviewer.com (download.teamviewer.com)... 220.127.116.11, 18.104.22.168, 2606:4700::6810:3f10, ...
Connecting to download.teamviewer.com (download.teamviewer.com)|22.214.171.124|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://dl.teamviewer.com/download/linux/version_15x/teamviewer_15.5.3.x86_64.rpm [following]
--2020-05-03 13:20:17-- https://dl.teamviewer.com/download/linux/version_15x/teamviewer_15.5.3.x86_64.rpm
Resolving dl.teamviewer.com (dl.teamviewer.com)... 126.96.36.199, 188.8.131.52, 2606:4700::6810:3e10, ...
Connecting to dl.teamviewer.com (dl.teamviewer.com)|184.108.40.206|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14565386 (14M) [application/x-redhat-package-manager]
Saving to: ‘teamviewer.x86_64.rpm’
teamviewer.x86_64.r 100%[===================>] 13.89M 354KB/s in 2m 58s
2020-05-03 13:23:17 (79.9 KB/s) - ‘teamviewer.x86_64.rpm’ saved [14565386/14565386]
[root@localhost ~]#
Alternate method to download the package: To download TeamViewer, go to the following link: Click here. Choose a suitable file as per your architecture and OS version. 
Install rpm on CentOS by rpm command Finally, we are ready to install. Before running the rpm command, please check the status of the downloaded file. You can use the ls command to verify the downloaded file. Now, run the rpm command followed by the -i option and the rpm package name. See the syntax below:
#rpm -i package_name.rpm
[root@localhost ~]# rpm -i teamviewer.x86_64.rpm
warning: teamviewer.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 0c1289c0: NOKEY
error: Failed dependencies:
libQt5DBus.so.5()(64bit) >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
libQt5Gui.so.5()(64bit) >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
libQt5Qml.so.5()(64bit) >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
libQt5Quick.so.5()(64bit) >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
libQt5WebKit.so.5()(64bit) >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
libQt5WebKitWidgets.so.5()(64bit) >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
libQt5Widgets.so.5()(64bit) >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
libQt5X11Extras.so.5()(64bit) >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
qt5-qtdeclarative >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
qt5-qtquickcontrols >= 5.5 is needed by teamviewer-15.5.3-0.x86_64
[root@localhost ~]#
In the above example, TeamViewer didn't install; it is showing the error "Failed dependencies". TeamViewer needs some other packages to work properly. Check RPM dependencies Installation failed due to missing dependencies. You can use the rpm command followed by the -qpR or -qR option and the package name to check for missing dependencies. The command syntax is as follows:
#rpm -qpR package_name.rpm
#rpm -qR package_name.rpm
I am trying to install TeamViewer, so my syntax will be as follows:
[root@localhost ~]# rpm -qpR teamviewer.x86_64.rpm
warning: teamviewer.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 0c1289c0: NOKEY
/bin/bash
/bin/bash
/bin/bash
/bin/bash
libQt5DBus.so.5()(64bit) >= 5.5
libQt5Gui.so.5()(64bit) >= 5.5
libQt5Qml.so.5()(64bit) >= 5.5
libQt5Quick.so.5()(64bit) >= 5.5
libQt5WebKit.so.5()(64bit) >= 5.5
libQt5WebKitWidgets.so.5()(64bit) >= 5.5
libQt5Widgets.so.5()(64bit) >= 5.5
libQt5X11Extras.so.5()(64bit) >= 5.5
libc.so.6(GLIBC_2.17)(64bit)
libdbus-1.so.3()(64bit)
qt5-qtdeclarative >= 5.5
qt5-qtquickcontrols >= 5.5
rpmlib(BuiltinLuaScripts) <= 4.2.2-1
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(PayloadIsXz) <= 5.2-1
[root@localhost ~]#
In the above example, you can see lots of missing dependencies on the screen. You would need to install every one of these packages before installing TeamViewer. Installing RPM packages with yum command As you have seen above, you can use the rpm command only for installing an independent package. If you want to install a dependent package on CentOS or other rpm-based Linux distros like RHEL or Fedora, you will have to use the handy tool called yum, with which you don't have to worry about dependencies. The yum package manager can pull in all of the required dependencies and set them up for us. You can use the yum command to install our downloaded package with the following command:
#yum localinstall package_name.rpm
My syntax is as below:
[root@localhost ~]# yum localinstall teamviewer.x86_64.rpm
Last metadata expiration check: 0:10:02 ago on Sun 03 May 2020 01:14:59 PM IST.
Dependencies resolved. 
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
teamviewer x86_64 15.5.3-0 @commandline 14 M
Installing dependencies:
qt5-qtbase x86_64 5.11.1-7.el8 AppStream 3.3 M
qt5-qtbase-common noarch 5.11.1-7.el8 AppStream 39 k
qt5-qtbase-gui x86_64 5.11.1-7.el8 AppStream 6.0 M
qt5-qtdeclarative x86_64 5.11.1-3.el8 AppStream 3.4 M
qt5-qtlocation x86_64 5.11.1-2.el8 AppStream 3.0 M
qt5-qtquickcontrols x86_64 5.11.1-2.el8 AppStream 1.0 M
qt5-qtsensors x86_64 5.11.1-2.el8 AppStream 220 k
qt5-qtwebchannel x86_64 5.11.1-2.el8 AppStream 92 k
qt5-qtx11extras x86_64 5.11.1-2.el8 AppStream 34 k
qt5-qtxmlpatterns x86_64 5.11.1-2.el8 AppStream 1.1 M
xcb-util-image x86_64 0.4.0-9.el8 AppStream 21 k
xcb-util-keysyms x86_64 0.4.0-7.el8 AppStream 16 k
xcb-util-renderutil x86_64 0.3.9-10.el8 AppStream 19 k
xcb-util-wm x86_64 0.4.1-12.el8 AppStream 32 k
pcre2-utf16 x86_64 10.32-1.el8 BaseOS 228 k
qt5-qtwebkit x86_64 5.212.0-0.36.alpha2.el8 epel 13 M
Transaction Summary
================================================================================
Install 17 Packages
Total size: 46 M
Total download size: 32 M
Installed size: 191 M
Is this ok [y/N]:
After typing "Y", it will install our package along with its dependencies. Remove RPM Package by rpm command The rpm command works well for removing packages from the system. You can use the rpm command with the -e option and the package name to remove (or uninstall) a software package. Enter the following command syntax into a terminal window:
#rpm -e package_name.rpm
The -e option instructs RPM to erase the software. I've tried my best to cover most of the basic uses of the rpm command in Linux to install rpm packages on CentOS; you can use the yum command to do the same. For more detailed information, you can check the manual page. To display the manual page, use the man command from the terminal. If I've missed any important command, please do share it with me via the comment section.
<urn:uuid:8a344d54-04f1-4305-b955-7283db2303f7>
CC-MAIN-2022-40
https://www.cyberpratibha.com/how-to-install-rpm-on-centos-by-rpm-command-in-linux/?amp=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00667.warc.gz
en
0.752089
2,868
2.65625
3
Johns Hopkins University Applied Physics Laboratory will help NASA study electric currents across Earth’s atmosphere as part of a potential $53.3M space exploration mission the agency plans to launch by June 2024. The Electrojet Zeeman Imaging Explorer initiative will deploy three small satellites intended to help researchers explore the electrical charge that flows up to 90 miles above the polar regions and connects an aurora to the magnetosphere, APL said Tuesday. Peg Luce, deputy director of the heliophysics division at NASA’s Washington headquarters, said the EZIE team will use an instrument that has supported CubeSat-based Earth science programs. “With these new missions, we’re expanding how we study the Sun, space and Earth as an interconnected system,” Luce added. Jeng-Hwa Yee, a principal professional staff member at APL, will be the principal investigator for the mission that will be funded under the space agency’s Heliophysics Explorers Program. The lab also collaborates with Princeton University to develop the Interstellar Mapping and Acceleration Probe for a NASA mission slated to launch in early 2025.
<urn:uuid:d86082c2-11e5-4f94-8e55-10be6942a90f>
CC-MAIN-2022-40
https://www.govconwire.com/2020/12/nasa-taps-johns-hopkins-apl-for-atmospheric-electric-current-research-mission/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00067.warc.gz
en
0.876471
240
2.671875
3
Over the past few years, healthcare organizations have been rapidly moving towards connected devices and cloud, driven by digital transformation projects and accelerated by the pandemic. However, to allow these organizations to operate with complete trust, cybersecurity solutions need to keep pace with the newly adopted telehealth/telemedicine technologies, which, due to their criticality for patient health, must be "always on". Above all, network infrastructure needs to be reliable, connected medical devices have to be controlled in terms of what infrastructure they are allowed to access, and huge amounts of sensitive patient data must be protected while at the same time being kept easily accessible for relevant persons. DNS can play a big part in this, thanks to its unique ability to control application access and detect data theft attempts in a timely manner. Why healthcare organizations are targeted From phishing to man-in-the-middle attacks, abuse of network vulnerabilities and ransomware, more and more attacks are being carried out on medical infrastructure, putting patient health in danger. A major difficulty lies in the fact that there are multiple networks and hundreds of digital components within any hospital or clinic which can be targeted by cybercriminals: electronic health records (EHR), Internet of Things (IoT) devices, e-prescribing and decision support systems, intelligent heating, ventilation, and air conditioning (HVAC), etc. Protecting patient privacy has become extremely challenging for healthcare providers, who in addition are obliged to comply with GDPR, HIPAA and other regulations. All this makes it harder to implement security measures, so bad actors rush to take advantage, as proven by the August 2022 hack on the software system used by NHS trusts which prevented medics from accessing patients' records for several weeks. DNS has been proven to be a favorite target The high number of connected (IoT) devices used in healthcare establishments to monitor heart rates, dispense drugs or perform tests all provide an entry point for external threat actors, with DNS frequently being used as a vector for the attack. The IDC 2022 Global DNS Threat Report confirmed that the frequency of DNS attacks on each healthcare facility has risen considerably from 6.71 attacks in 2021 to 7.7 in 2022, with each attack costing on average $906k versus $862k in 2021, and some attacks resulting in damages of over $5M. Aside from the financial aspect, the impacts of DNS attacks are proving very disruptive to critical medical services, with 43% of attacks causing Cloud service downtime, 41% App downtime, and over 1-in-4 (27%) of breaches leading to theft of sensitive data, bringing the obvious risk of hefty regulatory fines. According to the report, the main attack types deployed include phishing (53%), DNS-based malware (38%), DDoS/amplification (29%), and cloud instance misconfiguration abuse (28%). The average time to mitigate each attack is quite high, calculated at almost six hours (5.95 hours), but more worrying are the countermeasures being taken to mitigate attacks – all of which leave hospital or clinic employees with no access to vital medical apps and services. When faced with a DNS attack, 36% of organizations shut down a DNS server or service, 32% disabled the affected apps and 29% shut down part of network infrastructure (which is the highest percentage out of all industries surveyed). How can DNS security help? 
While DNS is a main target for cybercriminals, it can also be utilized as a key component of the network security ecosystem. In the DNS Threat Report, 74% of healthcare Security/IT professionals stated they consider DNS security as being critical for securing their overall network, and 57% view DNS security as their top method for protecting against malware and ransomware. However, it is acknowledged that basic DNS protection solutions are not adequate, so recommendations to protect against ransomware include: 1) Investing in response policy zones (RPZs), threat intelligence, and log analysis 2) Using a high-performance dedicated DNS Zero Trust strategies are also becoming increasingly adopted within hospitals and clinics, with 73% having already implemented or planning them. DNS filtering and threat intelligence have a strong role to play in this, helping control access to apps and services. Unique value brought by EfficientIP DNS Security solutions Offering purpose-built DNS security, SOLIDserver complements firewalls, IPS and authentication solutions for strengthening security of healthcare IT systems. DNS Guardian’s unique DNS Transaction Inspection functionality overcomes limitations of firewalls with regards to detection of data exfiltration, thus reducing theft of sensitive patient data. Having the best performance of any DNS on the market, SOLIDserver DNS can be used as a foundational tool for anti-ransomware programs. And for combatting abuse of connected devices causing lateral spread of threats across networks, the DNS Client Query Filtering (CQF) feature takes application access control to the next level. 66% of healthcare companies already use DNS for controlling which users can access which apps. CQF enhances this by offering unique microsegmentation capability of filtering down to individual client level, via allow lists and deny lists. For example, by using allow lists, access for every IoT device can be limited to only authorized infrastructure, apps or services, thus overcoming issues around privilege abuse and providing good defense against IoT botnets. According to the Threat Report, 85% of healthcare IT/security professionals deem allow and deny lists to be valuable in zero trust frameworks. EfficientIP‘s market-leading 360° DNS security solutions bring much-needed value to healthcare IT systems, in particular for safeguarding data, preventing threats from connected devices, and meeting regulatory compliance. To check how secure your network is against DNS attacks, and receive recommendations on improving your security posture, why not try our free DNS Risk Assessment ? Maximize Your Network Security & Efficiency Identify Vulnerabilities: Expert Assessment of Your DNS Traffic with a FREE DNS Risk AssessmentTRY NOW
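To make the allow-list idea concrete, here is a minimal sketch of per-client query filtering. It is not EfficientIP's implementation; the client addresses, domain names and function names are invented for illustration, and a real resolver would enforce such policy inside the DNS service itself.

```python
# Hypothetical sketch of per-client DNS allow-list filtering ("microsegmentation").
# Each connected device may only resolve the domains its role requires;
# everything else is refused, which limits lateral movement and IoT botnet abuse.

ALLOW_LISTS = {
    # client IP -> domains the device is allowed to resolve (assumed values)
    "10.20.0.11": {"fw-updates.vendor.example", "telemetry.vendor.example"},  # infusion pump
    "10.20.0.42": {"pacs.hospital.example"},                                  # imaging workstation
}

def filter_query(client_ip: str, qname: str) -> str:
    """Return 'ALLOW' or 'REFUSED' for a DNS query from a given client."""
    allowed = ALLOW_LISTS.get(client_ip)
    if allowed is None:
        return "REFUSED"          # unknown device: default-deny posture
    qname = qname.rstrip(".").lower()
    return "ALLOW" if qname in allowed else "REFUSED"

if __name__ == "__main__":
    print(filter_query("10.20.0.11", "fw-updates.vendor.example."))  # ALLOW
    print(filter_query("10.20.0.11", "evil-c2.example."))            # REFUSED
```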
<urn:uuid:3436f9d2-c8da-4f97-9d57-c7f0d3724f0a>
CC-MAIN-2022-40
https://www.efficientip.com/dns-security-for-healthcare-2022/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00067.warc.gz
en
0.934813
1,215
2.53125
3
Single sign-on authentication or SSO allows users to log in once to access multiple applications, services and accounts, and across different domains. With SSO, a user only has to log in once with their log-in credentials (username and password etc.) to access their SaaS applications. Using SSO means that a user does not have to authenticate for every app they log into. For example, if you log into a Google service such as Gmail, then you are automatically authenticated to other Google apps such Youtube, AdSense, Google Analytics, etc. It should be noted that there is a significant difference between single sign-on and same sign-on. Single sign-on authentication (SSO) refers to systems where a single authentication provides access to multiple applications by passing the authentication token seamlessly to configured applications. Same sign-on, also known as Directory Server Authentication, refers to systems requiring authentication for each application but using the same credentials from a directory server. Single sign-on is known to be a framework and normally referred to as a solution. Generally, when people say an SSO solution, they are also talking about a software as the words can be used interchangeably. SSO works based upon a trusting relationship set up between an application, (service provider), and an identity provider, (such as, Active Directory Federation Services). This trust relationship is often represented by a certificate that is exchanged between the identity provider and the service provider. This certificate is used as the key to verify identity information that is being sent from the identity provider to the service provider. This tells the service provider that the identity is coming from a trusted source. With Single Sign-On (SSO), this identity data takes the form of tokens that contain identifying bits of information about the user like a user’s email address or username. Here is how the SSO process works: An SSO token is a collection of data or information that is passed from one system to another during the Single sign-on process. The data can be as simple as a user’s email address and information about which system is sending the token. Tokens must be digitally signed for the token receiver to verify that the SSO token is coming from a trusted source. The certificate that is used for this digital signature is exchanged during the initial configuration process. SSO is just one aspect of managing a user’s access. SSO must be combined with access control, activity logs, permission controls, and other measures for tracking and controlling user behavior within an organization’s internal systems. If the SSO system doesn’t know who a user is, then there is no way for it to allow or restrict a user’s access. SAML-based SSO (Security Assertion Markup Language): is an online service provider that can contact a separate online identity provider to authenticate users who are trying to access secure content. SAML allows secure web domains to exchange user authentication and authorize data. Smart card-based SSO: will ask an end-user to use a card holding the sign-in credentials for the first login. Once used, a user will not have to re-enter usernames or passwords. Kerberos-based setup: is a system where once the user credentials are provided, a ticket-granting ticket (TGT) is issued. The TGT that is issued, gathers service tickets for other applications that the user wants to access without asking the user to re-enter their credentials multiple times.
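As a rough illustration of why the digital signature matters, the sketch below issues and verifies a toy HMAC-signed token. Real SSO deployments use SAML assertions or JWTs with certificate-based signatures; the field names and shared secret here are assumptions made only for the example.

```python
# Minimal sketch of issuing and verifying a signed SSO-style token.
# Real systems use SAML or JWT with certificate/asymmetric keys; this HMAC
# example only shows why a service provider can trust identity data it did
# not create itself.
import base64
import hashlib
import hmac
import json

SHARED_SECRET = b"exchanged-during-initial-configuration"  # stands in for the certificate trust

def issue_token(payload):
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token):
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: token not from the trusted identity provider
    return json.loads(base64.urlsafe_b64decode(body))

if __name__ == "__main__":
    token = issue_token({"email": "user@example.com", "idp": "idp.example.com"})
    print(verify_token(token))                                   # payload accepted
    print(verify_token(token.split(".")[0] + "." + "0" * 64))    # tampered signature -> None
```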
<urn:uuid:e0a50164-7860-40da-bcfe-0dddf63d4d53>
CC-MAIN-2022-40
https://www.logintc.com/types-of-authentication/single-sign-on-sso/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00067.warc.gz
en
0.931524
730
2.875
3
Introduction to head replacement process The head replacement process refers to the process of replacing defective HDD heads with the heads from an identical, functional hard disk drive. This process must be performed in order to recover data from disks that have suffered a head crash failure. Replacing damaged HDD heads with functional ones is a pretty complex task, especially if you consider the risk of damaging the HDD platters, which may cause permanent data loss. Various methods and techniques were used to perform the head replacement process, with different percentages of success and high chances that something will go wrong. However, all of these methods demanded a lot of experience and technical skill. Still, the process remained liable to mistakes as a consequence of the lack of a standardized procedure, even if performed by experts. The idea behind HddSurgery head replacement tools was to make the head replacement process safe by introducing a strictly defined procedure and providing quality equipment to support it. Figure 1: From idea to solution – head replacement tool development in HddSurgery HddSurgery tools background The idea behind HddSurgery head replacement tools was born in the HelpDisc data recovery company, with the goal of making the head replacement process easier and safer. HddSurgery head replacement tools started with a solution for the Maxtor DM9 and DM10 series. As the response from the data recovery community was highly positive, work and development continued. Head replacement solutions for a great number of modern HDDs followed, so today HddSurgery tools support the head replacement process on the following HDD brands – Maxtor, Seagate, Western Digital, Samsung and Hitachi. During the years of development, the capacity of hard drives has grown exponentially. This constant growth, imposed by the needs of the market, was followed by various solutions from leading HDD manufacturers. The basic principle of HDD functioning hasn’t changed; however, its mechanics, especially the design of its heads, magnets and motor, have undergone a lot of modifications. The effort of manufacturers to increase capacity and performance in comparison to the competition has resulted in a situation where we now have a great number of different HDD mechanics. In accordance with the facts above, the number of HddSurgery head replacement tools has grown, so a nomenclature was created for all future tools, so that it will be easier to distinguish them. – HddSurgery brand mark – 3 letter mark for HDD Brand (Sea – Seagate, WDC – Western Digital, HGST – Hitachi…) – Families of hard drives that this tool supports (7200.10, 7200.9, 7200.8, ES) – Marks the number of platters this tool will work on (p2-3 stands for drives with 2 or 3 platters) From the aspect which is the most important for head replacement (and consequently for HddSurgery tools design), modern hard drives are divided into two groups: 1. HDDs which park their heads on “landing zones” 2. HDDs which park their heads on a ramp Figure 2: Parking principles of HDD heads; Left – “landing zones”, Right – ramp parking While turned off, a hard disk drive has to stop and park its heads in a safe location in its interior, so that data stored on the platters won’t be threatened and its highly-sensitive heads can’t be damaged. “Landing zones” are areas on the HDD platters, closest to the axis of their rotation, on which data can’t be stored and which are intended only for parking the heads while the hard drive is not working. 
The surface of these “landing zones” is rougher than the rest of the platters, so there is no danger that the heads will affix to them as a result of adhesion. Safe head replacement on this type of hard drive means that the heads have to be lifted while lying on the safe “landing zone” and transferred above the surface of the platters (where any contact can lead to permanent data loss) to the area beyond them, where the entire head assembly can be safely dismounted from the HDD casing. Mounting of the donor heads in a patient drive is done in the same way, however with the order of steps reversed. Ramp parking of HDD heads was introduced to replace the “landing zones” parking principle. This principle implies the existence of a separate part – a ramp, located outside the HDD platters, on which the heads are parked after the hard drive is turned off. Almost all hard drives made in the last few years operate on this principle. Head replacement on these drives includes preventing mutual contact between the heads after they slide off the ramp and during their dismounting from the HDD casing. The heads remain separated by the tool until they are mounted on the patient drive’s ramp. Our next articles will describe the head replacement process on these two groups of hard drives, using HddSurgery head replacement tools.
<urn:uuid:0563c85f-babb-46f4-9d46-fdf735cbd00e>
CC-MAIN-2022-40
https://www.forensicfocus.com/news/head-replacement-tools-from-hddsurgery/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00067.warc.gz
en
0.951963
1,014
2.578125
3
ReclaiMe Data Recovery Company announces that they have completed the research devoted to determining the reliability and accuracy of various file metadata like creation, modification, and access timestamps in modern filesystems using copy-on-write (CoW).Old filesystems like NTFS, FAT, and EXT, which do not use copy-on-write, have one single instance of the file metadata. All file metadata is not necessarily stored in one place, but there is no more than one copy of each metadata piece on a disk. When deleting a file, a filesystem does not actually erase the metadata. Instead, it just marks the space previously allocated to the file and its metadata as free for reuse. Previous metadata thus remains intact until the next write, which happens to use the same space. During normal operation on these filesystems (FAT/NTFS/EXT), there is only one place where metadata of a particular file is stored. There are few exceptions when more than one copy of metadata can exist: • Journal on NTFS or EXT. Metadata copy gets into a journal for a short time resulting in the existence of two copies of metadata for a that (short) period of time. • Defragmentation may create the copies of metadata. When the defragmenter moves metadata to a more optimal location, old copies are not zeroed and therefore can exist long enough to be discovered, until their space is used for a new write operation. Generally, non-CoW filesystems store a single copy of file metadata; even journaling and defragmentation cannot produce many copies. As for filesystems using copy-on-write, the situation with file metadata was not that clear because copy-on-write implies that every file modification generates a new copy of file metadata. In the current research, a file was created and modified on BTRFS, modern filesystem using in NETGEAR NASes. Once all the changes were written to the disk, search for the metadata revealed that there are at least four old, not yet reused, metadata blocks, and two groups of new blocks, which give different timestamps. From all this variety of metadata blocks, the filesystem driver finds a metadata block with correct timestamp by traversing filesystem trees. If a filesystem fails or the file is deleted, some metadata blocks can be destroyed or become unreachable because pointers to these blocks are damaged. Data recovery tools have to sort all the different metadata and generally, timestamps cannot be determined reliably. Taking the latest timestamps is a good assumption but in general case it is not possible to prove that a block with the latest timestamp was not destroyed during the data loss event. The research formed the basis of their latest data recovery lesson available at www.data.recovery.training. This also includes the detailed description of the experiment and the corresponding disk image files, using which you can repeat the experiment.
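The selection problem described above can be sketched in a few lines. The block layout below is a simplified stand-in, not the real BTRFS on-disk format, and the "take the newest timestamp" rule is exactly the heuristic whose limits the research points out.

```python
# Sketch of the heuristic described above: given several recovered metadata
# candidates for the same file (stale CoW copies plus current ones), a recovery
# tool can only guess, and "take the newest timestamp" is the usual assumption.
from dataclasses import dataclass

@dataclass
class MetadataCandidate:          # simplified stand-in, not the real BTRFS item layout
    block_addr: int
    ctime: int                    # creation time (UNIX seconds)
    mtime: int                    # modification time
    atime: int                    # access time

def pick_timestamps(candidates):
    """Return the candidate with the newest mtime; None if nothing was recovered."""
    if not candidates:
        return None
    return max(candidates, key=lambda c: c.mtime)

if __name__ == "__main__":
    found = [
        MetadataCandidate(0x1000, 1600000000, 1600000000, 1600000000),  # stale CoW copy
        MetadataCandidate(0x2000, 1600000000, 1600000500, 1600000500),  # stale CoW copy
        MetadataCandidate(0x3000, 1600000000, 1600000900, 1600000950),  # newest copy
    ]
    best = pick_timestamps(found)
    print(hex(best.block_addr), best.mtime)  # cannot be *proven* correct, as noted above
```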
<urn:uuid:257a278b-7eca-48ea-8fd3-9e0caf14cb09>
CC-MAIN-2022-40
https://www.forensicfocus.com/news/reclaimes-research-determining-timestamps-on-btrfs/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00067.warc.gz
en
0.922086
595
2.921875
3
Rogue Automation: Vulnerable and Malicious Code in Industrial Programming | SPONSORED In this White Paper, previously unknown design flaws that malicious actors could exploit to hide malicious functionalities in industrial programming for robots and other automated manufacturing machines are revealed. Since these flaws are difficult to fix, enterprises that deploy vulnerable machines could face serious consequences. An attacker could exploit them to become persistent within a smart factory, silently alter the quality of products, halt a manufacturing line, or perform some other malicious activity. Industrial Programming Vulnerability The research was set in motion a few years ago, when we stumbled upon something we had never seen before: a store that distributed software for heavy industrial machines in the form of apps. We downloaded some of these apps and reverse-engineered them to understand how they worked. What we were looking at was something quite different from any software or programming language we were familiar with. The code was written in one of the many proprietary programming languages used to automate industrial machines, the types of robots typically used to assemble cars, process food, and produce pharmaceutical items, among other industrial purposes. The most notable part of our investigation is that we found a vulnerability in one of these apps. The vulnerable app was a full-fledged web server, running on the bare-metal computer of the controller of the industrial robot on which it was installed. It was written in a custom, proprietary programming language. Although designed many decades ago, languages such as this are still in use today to run critical automation tasks. And although these custom languages are expected to have some form of networking functionalities, we were surprised to see that they had enough features to create a working web server. While the IT software development industry has been dealing with the consequences of unsecure programming for many decades, the industrial automation world might be unprepared to detect and prevent the exploitation of the issues that were found in this research. It is believed that, given the pace of IT/OT convergence, the automation engineering industry should start embracing and establishing secure coding practices. It is highly likely to face in 10 years the same challenges that the IT software development industry is facing today.
<urn:uuid:51e91548-d0fc-432c-a0b2-c92346b45b38>
CC-MAIN-2022-40
https://www.iiot-world.com/ics-security/rogue-automation-vulnerable-and-malicious-code-in-industrial-programming/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00067.warc.gz
en
0.966859
431
2.671875
3
When I first saw it on the shelf at our local Savers I had no idea what it did but I was intrigued. There was just something about it that looked familiar. So here’s some history: The Addiator, it turns out, is a mechanical calculator that ‘performs’ addition and subtraction. Once made by Addiator Gesellschaft of Berlin, variants of this design were manufactured from 1920 until 1982, until they were made obsolete by handheld electronic calculators. Even more amazing is this mechanical calculator design was first introduced by Troncet, a Frenchman, way back in 1889. Are you as freaked out as I am? No? Then let’s go over a few things to illustrate all the strange, unexpected analogues this thing has to our modern-day smartphones. Creepiness #1: First of all, see that thing clipped to the right side? That’s right. This thing comes with a stylus (19th century models called it a “peg”). The user sticks the stylus into the space beside a number and pushes up or down to ‘enter’ the number. Right away that’s a little spooky. Who in his right mind would think to create a device that required a stylus to interact with a set of numbers except for maybe some dude from the future who travelled back in time and tried to model a calculator after an iPad but lacked the technical know-how and the materials to build one in 1920’s Berlin! Addiator: basic math tool or evidence for the existence of time travel? Creepiness #2: There’s a reset button. Guess where it is? It’s that metal bar at the top. But you don’t press it. Instead, you grasp it and then pull down the Addiator. Just like you do on your smartphone when you want to refresh the data!! More evidence this thing is the workings of a time traveller. Creepiness #3: Take a look at that logo on the bottom of the thing. Quick. What does it totally remind you of? Right. A QR Code! Only thing is the QR code wasn’t invented until 1994. Look closer and you’ll see it’s actually not a QR Code at all but rather a picture of 3 figures holding up the number 010. Which is also weird. Because when it comes to basic addition or subtraction, who’s thinking about zeroes and ones? Nobody! You don’t need a calculator to add zeroes and ones. But you know what does come in zeroes and ones? That’s right. The binary code used by our computers and smartphones! So why would it say “010” if it hadn’t been created by time travelers or aliens? You tell me. I’m too creeped out to continue!
<urn:uuid:d0180fc6-4074-4cf5-bbad-59def6bebfff>
CC-MAIN-2022-40
https://anexinet.com/blog/3-reasons-this-addiator-thing-i-found-at-a-thrift-store-is-totally-blowing-my-mind/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00067.warc.gz
en
0.917903
1,078
2.671875
3
Argument injection vulnerability in Opera before 7.50 does not properly filter - characters that begin a hostname in a telnet URI, which allows remote attackers to insert options to the resulting command line and overwrite arbitrary files via (1) the -f option on Windows XP or (2) the -n option on Linux. The software constructs a string for a command to executed by a separate component in another control sphere, but it does not properly delimit the intended arguments, options, or switches within that command string. When creating commands using interpolation into a string, developers may assume that only the arguments/options that they specify will be processed. This assumption may be even stronger when the programmer has encoded the command in a way that prevents separate commands from being provided maliciously, e.g. in the case of shell metacharacters. When constructing the command, the developer may use whitespace or other delimiters that are required to separate arguments when the command. However, if an attacker can provide an untrusted input that contains argument-separating delimiters, then the resulting command will have more arguments than intended by the developer. The attacker may then be able to change the behavior of the command. Depending on the functionality supported by the extraneous arguments, this may have security-relevant consequences. - Assume all input is malicious. Use an “accept known good” input validation strategy, i.e., use a list of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does. - When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, “boat” may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as “red” or “blue.” - Do not rely exclusively on looking for malicious or malformed inputs. This is likely to miss at least one undesirable input, especially if the code’s environment changes. This can give attackers enough room to bypass the intended validation. However, denylists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright. - Inputs should be decoded and canonicalized to the application’s current internal representation before being validated (CWE-180, CWE-181). Make sure that your application does not inadvertently decode the same input twice (CWE-174). Such errors could be used to bypass allowlist schemes by introducing dangerous inputs after they have been checked. Use libraries such as the OWASP ESAPI Canonicalization control. - Consider performing repeated canonicalization until your input does not change any more. This will avoid double-decoding and similar scenarios, but it might inadvertently modify inputs that are allowed to contain properly-encoded dangerous content.
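A minimal sketch of the "accept known good" strategy applied to this case: validate the hostname against a strict pattern and pass it as a discrete argument rather than interpolating it into a command string. The regular expression and command layout are illustrative assumptions, not a complete defense.

```python
# Sketch: "accept known good" validation of a hostname before it becomes a
# command argument, so values like "-fC:\boot.ini" or "-n..." are rejected
# instead of being parsed as options by the launched program.
import re

HOSTNAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9.-]{0,251}[A-Za-z0-9])?$")

def build_telnet_command(hostname):
    """Return an argument vector for telnet, refusing anything option-like."""
    if not HOSTNAME_RE.fullmatch(hostname):
        raise ValueError(f"rejected hostname: {hostname!r}")
    # Passing a list (not a shell string) plus strict validation keeps the
    # value from ever being interpreted as a switch such as -f or -n.
    return ["telnet", hostname]

if __name__ == "__main__":
    for candidate in ["host.example.com", "-n/tmp/evil", "-f C:\\boot.ini"]:
        try:
            print(build_telnet_command(candidate))
        except ValueError as err:
            print(err)
```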
<urn:uuid:9511855a-0ce4-4557-b112-0430708d74e8>
CC-MAIN-2022-40
https://avd.aquasec.com/nvd/2004/cve-2004-0473/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00067.warc.gz
en
0.853995
693
2.703125
3
MTU and MSS are two important terms you should be familiar with when you jump into the networking world, and especially if you are working with GRE tunnels and IPSEC. Maximum Transmission Unit (MTU) MTU is the largest packet or frame size, specified in octets (eight-bit bytes) that can be sent in a packet- or frame-based network. The internet’s transmission control protocol (TCP) uses the MTU to determine the maximum size of each packet in any transmission. MTU is usually associated with the Ethernet protocol, where a 1500-byte packet is the largest allowed. What is Fragmentation? One of the most common problems related to MTU is that sometimes higher-level protocols may create packets larger than a particular link supports. To get around this issue, IPv4 allows fragmentation, which divides the datagram (the basic information unit transferred in a packet-switched network) into pieces. Each piece is small enough to pass over the link it is fragmented for, using the MTU parameter configured for that interface. The fragmentation process takes place at the IP layer (OSI layer 3), which marks packets as fragmented. This ensures the IP layer of the destination host knows it should reassemble the packets into the original datagram. Fragmentation is not supported by some applications, and so should be avoided. The best way to avoid fragmentation is to adjust the TCP Maximum Segment Size (MSS), explained below. MTU Example: Anatomy of a Datagram The following diagram illustrates what MTU looks like in a typical network data transmission. The common value of MTU on the internet is 1500 bytes. The MTU is built of: - A payload, with 1460 bytes - The TCP and IP headers, with 20 bytes each Consider that you want to implement the generic routing encapsulation (GRE) protocol, a tunneling protocol that lets you encapsulate network-layer protocol in a virtual IP link. The following image shows the same datagram with GRE encapsulation, which adds 24 bytes for the GRE header. The total size of this kind of packet is 1524 bytes, exceeding the 1500 bytes MTU value. In order to keep to an MTU of 1500, you can decrease the “data” size of the packet. The mechanism that makes this possible is MSS. What Is TCP MSS? TCP MSS is a parameter in the options field of the TCP header, which defines the maximum segment size. It specifies the largest amount of data, specified in bytes, that a computer or communications device can receive in a single TCP segment. MSS does not include the TCP header or the IP header. Rather, it dictates the maximum size of the “data” part of the packet. Using the GRE tunneling example in the previous section, because the size of total headers is 64, the TCP MSS value should be set to 1436 or lower, to ensure that fragmentation is not needed. What Is an MSS Announcement? During the three-way TCP handshake, the receiving party sends an “MSS announcement”. This announcement declares what is the maximum size of the TCP segment the receiving party can accept. MSS can be used independently in each direction of data flow. Since the end device will not always know about high level protocols that will be added to this packet along the way, it often won’t adjust the TCP MSS value. To compensate for this, network devices have the option of rewriting the value of TCP MSS packets that are processed through them. For example, in a Cisco Router the command ip tcp mss-adjust 1436 at the interface level will rewrite the value of the TCP MSS of any SYN packet that passes through this interface. 
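The arithmetic behind these numbers can be written down in a few lines. The header sizes are the common values used in this article, and a sketch like this only helps reason about configurations; it is not a substitute for testing the real path.

```python
# Sketch of the MSS arithmetic used above: start from the link MTU, subtract
# every header the packet will carry, and what remains is the largest safe
# TCP payload (the value you would advertise or clamp as MSS).
IP_HEADER  = 20   # bytes, IPv4 without options
TCP_HEADER = 20   # bytes, without TCP options
GRE_HEADER = 24   # bytes, as in the GRE example above

def max_segment_size(mtu: int, extra_overhead: int = 0) -> int:
    """Largest TCP payload that fits in one packet without fragmentation."""
    return mtu - IP_HEADER - TCP_HEADER - extra_overhead

if __name__ == "__main__":
    print(max_segment_size(1500))                # 1460: plain Ethernet
    print(max_segment_size(1500, GRE_HEADER))    # 1436: inside a GRE tunnel
```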
GRE Tunnelling and TCP MSS in Web Application Firewalls (WAF) WAFs commonly use GRE tunnels. To address the possibility of fragmentation, you will need to adjust the TCP MSS value. The following diagram illustrates a WAF topology using Imperva WAF. The customer server sends the packet with an MSS value of 1460, but in the router’s interface, MSS is adjusted to 1420. This allows the GRE packets to pass through with no segmentation. The Imperva WAF is asymmetric – it intercepts inbound traffic, but outbound traffic is allowed to pass directly via the ISP. This means that you only need to set the MSS value on the router handling inbound traffic. There is no need to adjust MSS on the organization’s tunnel interface. The diagram above shows how the SYN packets in the three-way handshake travel. After the three-way handshake is completed and the connection established, the end user will send packets whose data won’t exceed the 1420 bytes size. In addition the customer’s server will send packets whose data won’t exceed the default 1460 bytes. Imperva Web Application Firewall Imperva provides the market-leading Web Application Firewall, which prevents attacks with world-class analysis of web traffic to your applications. In addition, Imperva’s application security offering includes several other layers of protection: Runtime Application Self-Protection (RASP) – Real-time attack detection and prevention from your application runtime environment goes wherever your applications go. Stop external attacks and injections and reduce your vulnerability backlog. API Security – Automated API protection ensures your API endpoints are protected as they are published, shielding your applications from exploitation. Advanced Bot Protection – Prevent business logic attacks from all access points – websites, mobile apps and APIs. Gain seamless visibility and control over bot traffic to stop online fraud through account takeover or competitive price scraping. DDoS Protection – Block attack traffic at the edge to ensure business continuity with guaranteed uptime and no performance impact. Secure your on premises or cloud-based assets – whether you’re hosted in AWS, Microsoft Azure, or Google Public Cloud. Attack Analytics – Ensures complete visibility with machine learning and domain expertise across the application security stack to reveal patterns in the noise and detect application attacks, enabling you to isolate and prevent attack campaigns.
<urn:uuid:62da6618-4bcc-4886-9add-ba429d8593c0>
CC-MAIN-2022-40
https://www.imperva.com/learn/application-security/what-is-mtu-mss/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00067.warc.gz
en
0.870326
1,327
3.21875
3
Topics covered in this Blog: Smartphones have been an essential part of life now for everyone. Mostly each and every person holds one. And mobile phones can be connected to any activity of life through different applications from health to entertainment and big calculations to marketing. Then how can Access Control Solution be different? Using mobile phones in Access Control Systems is not the new buzz in discussing readers and credentials. Electronic access control companies are promoting the different ways that mobile technology, soft or virtual credentials can be used to replace touch-based biometric or cards. It is not surprising that all the companies are trying to get in the queue of different credentials through mobile phones such as Face Recognition, Bluetooth, QR code, or PIN. Today, we are going to discuss the Bluetooth base Access Control System in depth. Profiling and Authentication Method using Bluetooth For using Bluetooth technology in access control, developers use Bluetooth Low Energy (BLE) for performance and high energy efficiency. The operation of access control via Bluetooth technology requires direct communication between the Bluetooth-enabled mobile phone and the door controller using a mobile application or software. As Bluetooth-enabled smartphones continue to multiply as standard means of connection and communication between devices, a natural progression will be the use of mobile phones as electronic keys. The ubiquity and personalized nature of smartphones has made them the ideal and touchless Bluetooth credential for physical access control. Every smartphone has its unique ID which can be used as a PIN/key for authentication. This process is called Profiling which is done through software. The enrollment process is the one-time process that takes place during the installation of the mobile application, from then on, the web-based application controls access to the Bluetooth device. Benefits of Bluetooth based Access Control System - You just need to gesture with your smartphone in the range of the Bluetooth-enabled door controllers to get access. Ultimately it becomes easy and time-saving for the employees. - No constraint in the number of users in comparison to biometric access control solutions. - Works where biometric devices are difficult to use. - Requires one-time Bluetooth pairing. - Secured communication between mobile and device. - Works on mobile’s unique IMEI number. - Access control using Bluetooth technology using the smartphone will reduce the cost of hardware like readers. Only software profiling is enough to use your mobile phone to operate/authenticate access control through door controllers. Research shows that the use of Bluetooth-based phones is continuing to get increased in the future. Currently, also those not having them are already the exceptions. They are unquestionably going to be a major component in physical access control. You can contact us regarding any query or comment in the comment box given below. Share your thoughts related to new access control technologies and we will love to hear from you.
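A conceptual sketch of the profiling/authorization step is shown below. The enrollment records and identifier format are invented, and a production system would add cryptographic pairing or challenge-response on top, since a bare hardware ID on its own could be replayed.

```python
# Conceptual sketch of the "profiling" step described above: a device identifier
# captured once at enrollment is later checked when the phone comes into BLE
# range of a door controller. Real products add cryptographic challenge-response;
# a bare hardware ID alone would not be sufficient.
ENROLLED = {
    # device_id -> (user, doors the user may open)   (hypothetical records)
    "356938035643809": ("alice", {"main-entrance", "lab-2"}),
    "490154203237518": ("bob",   {"main-entrance"}),
}

def authorize(device_id: str, door: str) -> bool:
    record = ENROLLED.get(device_id)
    if record is None:
        return False                      # phone was never profiled/enrolled
    _user, doors = record
    return door in doors

if __name__ == "__main__":
    print(authorize("356938035643809", "lab-2"))          # True
    print(authorize("490154203237518", "lab-2"))          # False: door not granted
    print(authorize("000000000000000", "main-entrance"))  # False: unknown device
```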
<urn:uuid:ce3fe010-174a-44c6-aa64-f1481f86285b>
CC-MAIN-2022-40
https://www.matrixaccesscontrol.com/blog/how-bluetooth-based-access-control-works-and-how-is-it-beneficial/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00067.warc.gz
en
0.914536
578
2.5625
3
Google’s Coral Dev Board Thanks to the Raspberry Pi, tech enthusiasts quickly got accustomed to single-board computers (SBCs). As the name suggests, they’re complete computers built on single circuit boards. Due to their small size but full functionality, SBCs make tech development accessible to more people who want to learn about it in a hands-on way. Google recently announced its new SBC, known as the Coral Dev Board, which people can buy for $149. However, instead of targeting people who are new to developing, it’s geared toward people with more advanced skills. What’s Exciting About This Board? Google says this product is ideal for working on prototypes of projects that perform on-device Machine Learning, such as Internet of Things (IoT) gadgets. It has an Edge TPU co-processor, which is the brand’s microchip designed for running IoT devices that rely on edge computing to work. One of the advantages of that component is that although it’s a piece of hardware, the chip runs artificial intelligence (AI) models and algorithms. As such, people developing machine learning algorithms could run trained models via edge computing. It’s possible to put the models directly on a smart device instead of keeping that information in the cloud. Then, devices could use machine learning without needing network connections. Some of the Specs The co-processor is part of a removable system-on-module mounted on the main board. The module also offers Wi-Fi and Bluetooth, along with onboard memory and graphics processing. The base section of the board is similarly equipped with capabilities to help people with their projects. There’s a microSD slot, USB ports, an audio jack, plus video and camera interfaces. A third-party blog about the Coral Dev Board notes people need to create their machine learning models in TensorFlow Lite, not TensorFlow. The next step is to compile them using a web compiler so they work with the Edge TPU co-processor. There are some pre-compiled modules available, too. An Extra Tool to Speed up the Process The Coral Dev Board runs on the Linux operating system, and people who want to run their machine learning models faster than usual can buy an accessory called a USB Accelerator. It brings hardware-accelerated inference (the step in which a trained model draws conclusions from new data) to existing systems. People can use the USB Accelerator on Linux systems, such as the Coral Dev Board or Raspberry Pi. One of the handiest characteristics of the USB Accelerator is that it plugs into a USB-C port. Users can also use the provided USB-C to USB-A converter cable if they don’t have access to the C-style ports, which feature on some of the most recent computers, such as the new Macs. Those interested in ordering the USB Accelerator can buy the product for $74.99. Users set up the Coral Dev Board with their serial console program of choice. After they download it or launch that application, Google gives step-by-step instructions for getting everything up and running. Other Things to Keep in Mind Before Purchasing One potential downside of the board — if people intend to use it for other things besides their machine learning projects — is that the documentation from Google about how to set up the product warns connecting a monitor and a keyboard to the board could compromise the overall performance. People can use some SBCs as desktop computers, especially when they have dedicated operating systems. 
However, this system is more specific and not built for that purpose. But, besides using it to work with machine learning projects, Google says the Dev Board could suit manufacturers that want to evaluate their products while using in-house custom hardware. Also, although Google discusses these products on in-house pages, clicking the link to purchase either the Coral Dev Board or the USB Accelerator takes people to a partner site called Mouser Electronics. The Google site says the Coral Dev Board ships within a week, but the page on Mouser Electronics indicates it’s sold out and there’s a seven-week lead time from the factory. Products That Are Part of a Larger Vision It’s worth clarifying that Coral is Google’s AI-specific branch, and it seems the tech giant has big things in mind for the future. More specifically, Coral is a platform for local AI, and Google promises it’s flexible for startups, large businesses and everything in between. There are already several other Coral products available to explore besides the two mentioned here. It’ll be especially interesting to see what kind of progress the Coral Dev Board makes possible once more people start using it. Even at this earlier stage, though, it’s still a powerful, yet compact, product of sufficient interest to people working on machine learning IoT projects. By Kayla Matthews Kayla Matthews is a technology writer dedicated to exploring issues related to the Cloud, Cybersecurity, IoT and the use of tech in daily life. Her work can be seen on such sites as The Huffington Post, MakeUseOf, and VMBlog. You can read more from Kayla on her personal website.
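As a hedged sketch of the TensorFlow Lite step mentioned above: the conversion API shown is standard TensorFlow, while the Edge TPU additionally expects full integer quantization and a separate compilation pass with Google's Edge TPU compiler, which are only indicated in comments here.

```python
# Sketch of preparing a model for the Coral board: build/train in TensorFlow,
# convert it to TensorFlow Lite, then (separately) run Google's Edge TPU
# compiler on the .tflite file before deploying it to the device.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Note: the Edge TPU expects full integer quantization, which also needs a
# representative dataset; that step is omitted here for brevity.
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
# Next step happens outside Python: compile model.tflite with the Edge TPU
# compiler so the resulting model can run on the Coral Dev Board.
```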
<urn:uuid:02125eec-5cfa-4017-ba0a-4dc21e4e540d>
CC-MAIN-2022-40
https://cloudtweaks.com/2019/03/googles-coral-dev-board/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00067.warc.gz
en
0.94066
1,093
2.84375
3
An issue was discovered in Xen through 4.12.x allowing x86 PV guest OS users to cause a denial of service via a VCPUOP_initialise hypercall. hypercall_create_continuation() is a variadic function which uses a printf-like format string to interpret its parameters. Error handling for a bad format character was done using BUG(), which crashes Xen. One path, via the VCPUOP_initialise hypercall, has a bad format character. The BUG() can be hit if VCPUOP_initialise executes for a sufficiently long period of time for a continuation to be created. Malicious guests may cause a hypervisor crash, resulting in a Denial of Service (DoS). Xen versions 4.6 and newer are vulnerable. Xen versions 4.5 and earlier are not vulnerable. Only x86 PV guests can exploit the vulnerability. HVM and PVH guests, and guests on ARM systems, cannot exploit the vulnerability.
<urn:uuid:fca9deca-97b4-4d41-ac7c-907101c49aa2>
CC-MAIN-2022-40
https://attackerkb.com/topics/CrjeTHGKnl/cve-2019-18420
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00067.warc.gz
en
0.799673
273
2.90625
3
Data Science in every drop charity: water: Ensuring clean, reliable water sources around the world. 785 million people, or 1 in 10 in the world, are without clean water, according to the US Centers for Disease Control and Prevention (CDC). When a water pump fails, many people in rural communities must walk several miles to a neighbouring water system just to access clean drinking water. What’s more, repairing these systems can be complicated and inefficient, as there is no visibility when the pumps break down. Since 2006, the non-profit charity: water has provided clean, safe drinking water to more than 11 million people in 29 countries. The organization builds sustainable, community-owned water projects. charity: water developed a remote cloud-connected sensor device to monitor the performance of clean water projects located in developing regions. Specifically, 3,000 first-generation Internet of Things (IoT) sensors were retrofitted on water points to track the operational functionality of systems installed in northern Ethiopia. The sensors transmit hourly real-time water flow data to the cloud-based tracking system. Over four years, the organization captured more than 32 million data points, but didn’t have the tools to analyze them and struggled to filter out the “noise” in the data – and knew that harnessing this data could improve the scale and reliability of its services. charity: water partnered with Accenture Labs’ Tech4Good program, which applies cutting-edge applied research to help address critical challenges facing society. Its aim is to help build a more sustainable and inclusive world. Building a two-part anomaly-detection system using data science, machine learning and advanced probabilistic models – the team applied the system to charity: water’s supply network data to help provide deeper, more accurate insights from cloud-connected pump sensors throughout northern Ethiopia. First, the system models normal water usage behavior and consumption patterns at a specific water pump. This helps the team learn how behavior changes throughout the week and months. For example, charity: water can understand precisely when the least amount of water is consumed in some areas, including the wettest months of the year, when communities may have different water sources. Second, the system analyzes the data and “scores” it to flag anomalies. This helps detect malfunctioning water flow sensors. And if a pump breaks down, charity: water can subsequently alert network operators who dispatch mechanics to repair it as quickly as possible. As a next, Accenture is now working on delivering a predictive maintenance solution to help charity: water notify an operator or technician before a pump breaks. The team hopes to one day deploy this solution for thousands of cloud-connected water systems. Through in-depth collaboration and working sessions, the team first clearly identified the data challenges that charity: water faced. After dissecting the massive sets of data and developing a new water consumption model, charity: water now has a better understanding of the user behavior and water needs of the communities it serves. And its anomaly-detection system has set the stage for a predictive maintenance solution in which charity: water could prevent failures in its water systems. With improved technology like this, charity: water can proactively obtain better visibility into its water systems and malfunctions, instead of reactively deploying limited resources. 
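The two-part idea (model normal usage, then score new readings) can be illustrated with a toy example. This is not Accenture's model: the numbers are synthetic, and a simple z-score stands in for the richer probabilistic models described above.

```python
# Toy sketch of the two-part anomaly-detection idea: (1) model "normal" hourly
# flow for a pump, (2) score new readings against that baseline and flag
# anomalies. The real system uses richer probabilistic models; this is only
# a z-score illustration with made-up numbers.
import statistics

def build_baseline(history):
    """history: list of (hour_of_week, litres). Returns per-hour (mean, stdev)."""
    by_hour = {}
    for hour, litres in history:
        by_hour.setdefault(hour, []).append(litres)
    return {h: (statistics.mean(v), statistics.pstdev(v) or 1.0) for h, v in by_hour.items()}

def anomaly_score(baseline, hour, litres):
    mean, stdev = baseline.get(hour, (0.0, 1.0))
    return abs(litres - mean) / stdev          # higher = more unusual

if __name__ == "__main__":
    history = [(h % 168, 500 + (h % 24) * 10) for h in range(168 * 4)]   # 4 synthetic weeks
    baseline = build_baseline(history)
    print(anomaly_score(baseline, 10, 600))   # matches the usual flow for that hour
    print(anomaly_score(baseline, 10, 0))     # pump suddenly delivering nothing -> large score
```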
Communities benefit from less pump downtime and charity: water can concentrate on using its resources to help more communities. In addition, the team published their work in a peer-reviewed journal, Sustainability – doing so shares the team’s methodology and findings with others in the field who are trying to solve the water crisis. This can free up resources for other teams and organizations to focus on raising money and installing more water projects worldwide. As charity: water’s network expands, maintaining it will become a bigger challenge. The team will continue to innovate to help decrease the maintenance costs and downtime of water systems, resulting in more communities having efficient, reliable access to clean, safe drinking water at scale. And with its commitment to transparency and accountability, charity: water is reinventing charity on its way to ending the water crisis.
<urn:uuid:5e1dc8d5-7c9b-4a27-b2e4-719ddd6ffb09>
CC-MAIN-2022-40
https://www.accenture.com/fi-en/case-studies/technology/charity-water
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00067.warc.gz
en
0.941732
852
2.875
3
This blog is intended to be a warning bell and to draw attention to a potential security risk involved in running sensitive applications in the WSL (“Windows Subsystem Linux”) Windows utility. As reported by Microsoft [here], WSL usage is growing fast (“more than 3.5 million monthly active devices today”). With the enhancements included in the upcoming WSL 2 and the planned improvements in WSL’s roadmap, it is reasonable to expect the usage of WSL to continue growing even faster. The security risk that I will highlight is not caused by any software bug (in Windows or Linux), nor is it exploited by any new and complicated attack technique. Instead, it is based on simple usage of the WSL utility as it was designed. If you are using or intend to use WSL, or if you are responsible for the security of endpoint users using WSL, this research is a must-read. As for the rest of you, you may continue reading at your own risk (😉 ) (remember, there are no exploitable bugs or new technical discoveries here). - WSL is a Windows utility that allows users to run Linux applications under Windows. - Any standard (non-admin) Windows process has full access rights to all the files that make up the WSL machine. - If a malicious program runs as this standard process, it can steal sensitive static data (e.g., SSH keys) by simply copying them from the WSL file system. - By modifying the programs in the WSL file system, our malicious program can also capture sensitive dynamic data (e.g., usernames, passwords, passphrases). - The WSL design allows the activation of Windows processes by programs running inside the Linux machine. Therefore, a standard (non-root) Linux program can completely take over the Linux machine. - WSL 2, designed as a “lightweight Utility VM”, has markedly diminished the attack surfaces of WSL, but is still vulnerable to the security weakness described here. - Bottom line: Running sensitive applications inside WSL is significantly less secure than running the equivalent applications in a standalone Windows or Linux Desktop system. WSL 2 is now formally released (as part of Win-10 version 2004), and has generated a lot of buzz. Microsoft has been stating regularly that it “loves” Linux [here], and has lately declared that it will support running GUI Linux applications in WSL [here]. These changes in Microsoft’s attitude and intensive development plans for WSL are an indication of a marked change in the expected usage of WSL. Initially, it was intended to support developers (and possibly security researchers) that want to develop or test Linux applications inside Windows, but now it appears to be a generic Windows utility that allows all user types to run Linux applications in Windows Desktop systems. WSL 2 is a more secure system than WSL 1 (or the legacy “bash” feature), but it does not eliminate the risks involved in running sensitive applications inside it. To be clear: It is a well-known fact that the “host” system always has full control over the “guest” system. The real issue here is that the intended usage of the WSL utility will expose many users to a security risk they might not be aware of. This is probably already the case as it appears that quite a lot of WSL users are currently using it to run an SSH client or server or both (programs that process sensitive credentials data). Searching Google for “Running OpenSSH in WSL” finds some 270,000 results. 
Of course, not all of these references are relevant, but looking in the first few pages, one can see that most of them do discuss how to install and run OpenSSH Client or Server in WSL, or how to interface the Linux SSH programs with the Windows SSH keys and programs. Figure 1: Two examples of online references to SSH usage in WSL The History of WSL “bash” – First, there was the “bash” feature. A pico-process named bash, launched by a bash-launcher program bash.exe. The bash pico-process emulates a Linux terminal and executed Linux programs – converting Linux kernel APIs to the equivalent Windows kernel APIs. The internal architecture and working mechanisms of Windows’ pico-processes are not relevant to our story. I am not sure when it first became an integral part of Windows, and I doubt that anyone (except me, of course) is still using the “bash” command, but it is included here for “completeness” sake. If for some reason you are still using it to run sensitive Linux applications such as OpenSSH – beware as it is easy to exploit, as you shall see. “WSL 1” – For Windows 10, the “wsl” command (wsl.exe initially released in August 2016) replaced the “bash” command. A support utility (wslhost.exe) was also activated, in addition to the bash pico-process. Since WSL 1 file system infrastructure is very similar to the “bash” application, sensitive applications running in this environment are exposed to security risks similar to applications running under “bash.” “WSL 2” – WSL 2 was introduced in Win-10 version 2004. The architecture of WSL 2 is drastically different from that of WSL 1. The Linux distribution package is run as a “Lightweight Utility VM” under HyperV, replacing the pico-process technique. The features of the “Lightweight Utility VM” WSL 2, which are relevant for our case, are discussed later on in this blog, but if you are wondering what exactly a “Lightweight Utility VM” is, an excellent explanation can be found here. When you run the “wsl” command to activate WSL 2, vmwp.exe (HyperV process) is run instead of the bash pico-process. A side note: The architectural changes in WSL 2 may aid some attack methods. Connor Morley from F-Secure has published a white paper [here] which analyzes the possibility of weaponizing WSL 2 to achieve a persistent and stealthy attack on the hosting Windows system. Here we examine a much more modest target of “stealing” sensitive information from applications running inside a WSL that has been “legally” installed. Our attack is a “single-shot” event [from the Windows side] that modifies programs/files inside the WSL file system, and no persistence in the Windows side is required. File System Implementations “bash” file system The Linux file system for bash was implemented as an integral part of the Windows file system. Typically, the root folder for the Linux file system is located for each user at: The Windows’ security settings for all the files in this folder and all its’ subfolders allow any program that is running in this user’s session full access. Figure 2: A standard Windows process has full access to a sensitive Linux file (sshd-OpenSSH server) “WSL 1” file system The Linux file system for WSL 1 is similarly an integral part of the Windows file system. Typically, the path to the root folder of the Linux file system for each user is something like: Like the “bash” case, the Windows’ security settings for all the files in this folder and all its’ subfolders allow any program that is running in this user’s session full access. 
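A small, read-only sketch makes this point from the Windows side. The package path pattern is an example (it varies per distribution), and the script only reports access rights rather than changing anything.

```python
# Sketch (Windows side): show that an ordinary, non-elevated process can open
# files under the WSL 1 / legacy-bash rootfs that are root-only *inside* Linux.
# The package name below is an example; the exact path differs per distribution.
import glob
import os

pattern = os.path.expandvars(
    r"%LOCALAPPDATA%\Packages\*Ubuntu*\LocalState\rootfs\etc\ssh\sshd_config"
)

for path in glob.glob(pattern):
    readable = os.access(path, os.R_OK)
    writable = os.access(path, os.W_OK)   # full access despite Linux-side root protection
    print(path, "readable:", readable, "writable:", writable)
```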
Note: The user can control the location where the Linux DISTRO is installed. In some of the POC cases, you will see the rootfs folder at C:\Users\<weak user>\Documents\Ubuntu\rootfs. “WSL 2” file system Since WSL 2 is a VM run by HyperV, its’ entire file system is implemented in a single vhdx file such as: When the VM is inactive, any program that runs in the session of a user for which WSL 2 has been installed has full access rights to this file. Modifying data in this file is possible, although more complex than modifying “bash” or WSL 1 files. In the next session, I will show a simple method to bypass this complexity. “Seamless” integration with the Windows file system is achieved via a COM object that implements the “Plan 9 File System” protocol (using the new vp9fs.dll). Programs running inside the VM can access Windows files using the prefix “/mnt/c/” in the file name (as was done in “bash” and “WSL 1”). Windows processes can access Linux files “legally” by adding the prefix \\wsl$\\<distribution-name>\ to the full file name. It is instructive to see that in this kind of access, the Windows’ process does not have any “root” privileges even if it is elevated. Figure 3: Elevated Windows Process cannot modify a sensitive Linux file “legally” Attacking Sensitive Applications Running Inside WSL For the first 3 cases we assume the following preconditions: – A malicious program is running in a standard process in the Windows session of a user. – This user uses WSL to run sensitive applications (not necessarily right now). I discuss attacks on SSH Client and SSH Server programs, but this attack mechanism applies to any sensitive application. The “bash” POC was done in a Windows-10 machine, version 1909. The WSL POCs were run in a VM machine (registered for the Windows Insiders program) under VMWare (Windows 10, version 2004, Build 19624.rs_prerelease.200502-1339). Attack-A: Attacking sensitive applications running inside ”bash”. This case is straightforward as the attacker always has full Read/Write access to the entire Linux file system . Potential attack actions could include: a) Stealing Credentials from SSH Client The ssh command accepts the credentials of the user and passes them on to the server who will authenticate the user. The SSH Client program (ssh) can be modified so that every time it is activated the following data is “leaked” out to the attacker: – All command-line arguments. – An identity file (private SSH-key identified by the “-i” command-line argument). – A Password that is requested and manually entered. – A “passphrase” that is requested and manually entered. In this way, the attacker can accumulate (over time) all the information required to access all the SSH servers to which this user connects. POC-A: Sensitive program modification (“bash” case) Figure 4: Inside the Linux environment, the “ssh” program can only be modified by “root” Figure 5: A standard Windows program modifies ssh (“bash” case) b) Stealing Credentials from SSH Server If the user happened to set up an SSH-server in WSL, a significant amount of sensitive information could potentially be leaked out. One use case for such a server is to enable the user to work from home. He may also use this server to connect to other servers in the organization’s intranet (via “tunneling”). Note that inside the Linux environment, only “root” Linux users can access the Private SSH keys, but from the Windows side, they are entirely exposed. 
Figure 6: Private SSH keys in Linux are “root” protected, standard Windows processes have full access All the credential information required to connect to this server can be accumulated by: – Access to the sshd_config file. – Access to files and folders referenced by the config file (e.g., AuthorizedKeysFile). – Modifications to the sshd program itself to capture credential information passed dynamically. By modifying the configuration file (sshd_config), any access limitations defined by the user can be removed (and, of course, any remote connection can be authorized). c) Exploiting SSH tunnels The user may set up an SSH Server in WSL and define Port-Forwarding into other servers (again – to support work from home is one possible use case). In this case, the attacker can gain permanent remote access to these servers using the information collected in step “b” above. Attack-B: Attacking sensitive applications running inside ”WSL 1”. Since the implementation of the Linux file system is identical to the “bash” case, all the attacks described in Attack-A above are applicable here too. POC-B: Sensitive program modification (WSL 1 case) Figure 7: A standard Windows program modifies ssh (WSL 1 case) Attack-C: Attacking sensitive applications running inside ”WSL 2”. As mentioned before, the entire file system for a WSL 2 machine is implemented in a single vhdx file. Two issues make it harder to perform the attacks described above on this system: a) Modifications to this file system are more complicated to implement than modifications to the file system of “bash” and “WSL 1” since you cannot simply open and update the files’ contents. b) When the machine is active, vmwp.exe has exclusive Write Access to the file. One simple way to bypass both of the above limitations is to use the WSL version conversion feature: – Convert the WSL 2 file system into a WSL 1 file system. – Perform the required modifications to the Linux files. – Be kind (😉) and convert the WSL 1 file system back to WSL 2 format [optional]. This course of action is achieved by executing a batch with the following console commands: 1: wsl –set-version <distro name> 1 2: <attack program> 3: wsl –set-version <distro name> 2 Luckily [for the attacker], if the WSL 2 VM is currently active, the first command will silently terminate the WSL 2 session (even if user programs such as OpenSSH Server are currently running). POC-C: Sensitive program modification (WSL 2 case – conversion to WSL 1 and back) Figure 8: Original WSL 2 state: passwd is a SUID program (Auto-elevating to “root”) – When the conversion to WSL 1 is initiated, the WSL 2 session terminates “silently.” Note that there are no “exit” or “logout” messages in the screenshot above. – After the conversion is complete, a Windows program (with no “admin” privileges) modifies the contents of the passwd program (changing the string “unchanged” to “1337 1337”). Figure 9: conversion to WSL 1 / modifying passwd / Conversion back to WSL 2 – After the conversion back to WSL 2, passwd is still a SUID program, but it gives a different error message when the password is not modified (“password 1337 1337”). Attack-D: The Loopback attack: Attack by a non-root program running inside WSL 2. Being a “Lightweight Utility VM,” programs running inside WSL 2 can activate Windows programs (that will run under the Windows kernel). This feature can be used by a standard program running inside WSL to take over the virtual Linux machine. 
For this attack, we assume: – A malicious program is running in a standard process in a currently active WSL VM. The attack consists of the following steps: a) A malicious non-root program running inside the WSL 2 machine: - Creates a <malicious A>.exe and a <malicious B>.exe files in a Windows folder. - Activates <malicious A>.exe (e.g. by “execve” command). b) <malicious A>.exe program creates a detached process running <malicious B>.exe and terminates. c) <malicious B>.exe executes Attack-C described above. POC-D: A non-root program inside WSL 2 can modify any file in the WSL file system by going outside Figure 10: In WSL 2- Show original sshd_config data/run a malicious non-root attack program Figure 11:The batch command (attacking WSL) that we want to execute In Windows We need to dissociate between the program activated from inside WSL 2 and the Windows attack since the attempt to convert into WSL 1 will immediately terminate the WSL 2 session (including the original “attack” Windows process). I achieved this through basic KB automation commands, which put the full path to the batch file in the Windows “Run-window” and then activated it. Figure 12: Convert to WSL 1 / Modify the OpenSSH server configuration file / Convert to WSL 2 Note that the Windows program ReplaceStringInFile replaces the string “#PermitRootLogin” by the string “PermitRootLogin ”. Figure 13: Back in WSL 2, the “protected” configuration file was modified as expected WSL Insecurity Compared with Windows or Linux Standalone Implementation By now, it should be clear why running sensitive applications inside WSL is significantly less secure than running the equivalent applications in a standalone Windows or Linux Desktop system, but let me spell it out anyway. The Protection codes in the following table have the following meanings: Ѵ: Access requires “admin” or “root” privileges. –: Protection is not required X: Access does not require “admin” or “root” privileges. Each Entry is formatted like this: R: < Protection code> W: < Protection code> (where ‘R’ stands for “Read Access” and ‘W’ stands for “Write Access”) |OS files||Sensitive programs||Sensitive data files (e.g., SSH keys)||Sensitive configuration files| |Windows||R: – W: Ѵ||R: – W: Ѵ||R: Ѵ W: Ѵ||R: – W: Ѵ| |Linux||R: – W: Ѵ||R: – W: Ѵ||R: Ѵ W: Ѵ||R: – W: Ѵ| |WSL (access from Windows, accessing files in WSL File System)||R: – W: X||R: – W: X||R: X W: X||R: – W: X| An additional consideration is the security programs that may be active in the machine. To the best of my knowledge and at the time this blog was published, there is currently no known Windows security program that protects Linux files (inside the WSL file system). Any security programs that might be implemented inside the WSL VM (e.g., maybe protecting SSH keys) will not be active while the attack is carried out. The main security risk identified here is credentials theft or theft of other sensitive data processed by Linux applications running inside WSL. One could, quite reasonably, argue that there is no real security vulnerability here since the user should know that data inside a VM is always exposed to standard programs running in the “hosting” session. But in this case the risk is more severe than the “normal” risk involved in running a VM under Windows because of the intended usage of WSL as a standard utility of Windows. One can infer this intended usage from Microsoft’s definition of WSL 2 as a “Lightweight Utility VM”. 
As discussed in the introductory section, the average WSL user will become less technically savvy and usage of WSL will increase as WSL will be used for more and more modern use-cases. There is, however, a more critical and distinct security difference between running a full VM under Windows and running a WSL 2 VM. The loopback attack, for the WSL 2 VM, is a Local Privilege Escalation vulnerability that is explicitly enabled by the “Lightweight Utility VM” features of WSL 2. This vulnerability does not exist if an equivalent Linux system is run in a full VM under Windows. Which brings me back to the title of this blog: (SAFE + SAFE) < SAFE. WSL combines two secure (“safe”) systems in such a way that a less secure overall system is created. Note: I would like to thank Gilad Reti for drawing my attention to the possibility of running a Windows program from inside the WSL VM and to this Microsoft presentation.
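As a modest mitigation for the risks described above, an administrator could at least detect tampering by keeping an integrity baseline of sensitive Linux files, read through the legitimate \\wsl$ share. The Python sketch below is illustrative only: the distribution name and file list are assumptions, the share is only available while the distribution is running, and an attacker who can rewrite the rootfs could in principle also tamper with the baseline, so in practice the baseline should be stored and checked from a separate, protected system.

```python
import hashlib
import json
import pathlib

DISTRO = "Ubuntu"  # assumed distribution name -- adjust to the installed distro
WATCHED = ["/usr/bin/ssh", "/usr/sbin/sshd", "/etc/ssh/sshd_config", "/usr/bin/passwd"]
BASELINE = pathlib.Path("wsl_integrity_baseline.json")

def windows_view(linux_path: str) -> pathlib.Path:
    # Windows processes can read Linux files through the \\wsl$\<distro> share.
    share = "\\\\wsl$\\" + DISTRO
    return pathlib.Path(share + linux_path.replace("/", "\\"))

def snapshot() -> dict:
    digests = {}
    for linux_path in WATCHED:
        p = windows_view(linux_path)
        if p.exists():
            digests[linux_path] = hashlib.sha256(p.read_bytes()).hexdigest()
    return digests

current = snapshot()
if BASELINE.exists():
    previous = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if path in previous and previous[path] != digest:
            print(f"TAMPERING SUSPECTED: {path} has changed since the last baseline")
BASELINE.write_text(json.dumps(current, indent=2))
```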
<urn:uuid:6554ac5a-b04b-47b4-8c7b-2a9f4599d445>
CC-MAIN-2022-40
https://www.cyberark.com/resources/threat-research-blog/running-sensitive-apps-in-wsl-safe-safe-safe
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00268.warc.gz
en
0.896765
4,625
2.78125
3
Artificial intelligence (AI) has long captured the public imagination – embodied in cutting-edge robots and the highly automated world portrayed in science fiction movies. Despite its futuristic connotation, AI is already here – and has been for years. Consumer tools – including virtual assistants like Siri and Alexa, and machine learning-driven algorithms – are widely being utilized to make our daily lives easier. Yet, 63 percent of the public don’t realize they are actually using AI technologies in their day-to-day lives. Whether we realize it or not – the truth is, many of us are exposed to and utilizing AI every day. Within the corporate world, AI is ushering in a significant new era of doing business. Entire industries are poised to be transformed, or are already being impacted, by AI applications – from manufacturing and health care to public safety and transportation. AI is currently being used to combat epidemics, manage and warn of disasters, and fight crime. Its application is bringing about a number of benefits including increased efficiencies, new products and fewer repetitive tasks. Computer-aided medical image interpretation is being used to scan digital images and highlight conspicuous sections that may be a disease. Google’s “smart reply” function is helping users manage their inbox, deciphering incoming messages and automatically suggesting different responses. Banking apps are using machine learning to decipher and convert handwriting to text, enabling customers to deposit checks via their mobile phone. As soon as 2020, the market for AI is expected to reach $70 billion. By 2035, AI technologies are projected to boost corporate profitability by an average of 38 percent. It is clear that AI will have an immense effect on consumer, enterprise and government markets around the world. While there are some obstacles to overcome, AI has the potential to solve many of today’s problems and enable us to work smarter and more efficiently. However, this change is only possible if we embrace AI with the mindful focus on applications dedicated to improving lives, products and experiences for everyone. Looking forward, an open mind will be a critical asset as companies experiment with how to incorporate AI into our personal lives, professional lives, and society at large.
<urn:uuid:964413b5-c02a-4168-a94f-29794a76520c>
CC-MAIN-2022-40
https://blog.motorolasolutions.com/en_us/artificial-intelligence-within-reach-its-already-here/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00268.warc.gz
en
0.952741
445
2.84375
3
Mass notification and emergency communication (MNEC) technologies play a crucial role in disaster response. Equally as important as the equipment itself is having a plan for deploying that technology and establishing safety protocols. In the event of an emergency, it’s all hands (and all staff members) on deck. AV/IT professionals are key players in disaster response. They maintain critical information systems and keep communication lines open. Technology staff act as first responders, providing triage to A/V and IT systems as needed. “The technology team’s biggest responsibility is to make sure everything’s working,” says Steven D. Clagg, chief information officer at Aurora Public Schools. “One of things we do on a monthly basis with the technology team is that we test all the gear even if there’s no work order out.” This monthly test is just one of many safety protocols the Aurora, Colorado school district has in place. The district’s superintendent John L. Barry comes from a military background, having served in the Air Force and having helped lead the independent inquiry of the space shuttle Columbia disaster, he is well versed in emergency response. As a result, school district leaders (including the CIO) undergo extensive incident response training that covers everything from how to recognize a student headed for trouble to what to do in the event of a lockdown. The district also has a detailed incident response plan that includes mass notification messaging, the convening of school leadership and a partnership with local authorities. “[The plan] goes through several levels so there’s a technology layer, but first and foremost is safety. That safety response plan takes into consideration all kinds of emergencies, not just a dangerous person in the school. The plan is sort of all inclusive,” says Clagg. When trouble arises, be it a power outage, an impending storm or something more serious, the first step is to classify the incident. Situations are either red, orange or yellow depending on their severity, with red being the highest alert. Blackboard Connect, a mass messaging tool that is tied into the district’s learning management system, is used to send email, text or voice dial alerts to staff, students and parents. The type of incident and where it originated determines who triggers the alert. Internal incidents are handled by the district’s security team. Outbound messaging is handled by the Communications department. The district then convenes its Incident Response Team (IRT). “Depending on the severity of the incident either the team will meet virtually or the team will meet in person. Red lockdown is our heaviest so that means there’s an intruder in the building and that would require all of us to immediately convene at the emergency operations center,” says Clagg. The IRT consists of district leaders and representatives from every school department including the public information officer, principals, Transportation Services, IT, Nutrition Services, Aurora police and fire departments, security, etc. Anyone who is responsible for any kind of aid in a crisis situation is involved. Each IRT member has a checklist they need to go through. IT, for example, must make sure all the district’s critical systems are up and running. “For us, the number one critical system is Infinite Campus or our student information system. On the network side, is our network working? Do we have wireless access in the room or in the emergency site? 
Next we go through a checkup of services we’re consuming. Email is number one. [Then] access to our cable system to get the media up on our screens right away because sometimes the media has information we don’t,” says Clagg. Infinite Campus is considered a critical system because it’s where student data like grades, attendance and health information is stored. If there was an incident in a particular classroom, school administrators could use Infinite Classroom to identify students and to locate siblings or family members who may also attend school in the district. As soon as IRT is convened, Clagg sends an email to the email systems administrator. That person then has to physically check that everything is working as it should be and send back a check-all email. The technology staff is critical to keeping communication lines open. If the network goes down, the district has no access to staff or student information, no email and the 1,600 security cameras district-wide will go offline, essentially rendered useless. The Aurora schools would become isolated from the outside world. IT certainly plays an essential role in emergency response, although not an immediately obvious one for some. In Clagg’s experience, getting IT to embrace that role sometimes takes a little prodding. “They need to understand that they have to drop what they’re doing and be ready to react to a crisis, and that’s a little hard for our team sometimes because they’re so customer focused,” says Clagg. “We’ve had to work that into our culture, that we’re first responders too.” Once in the emergency operations center, the IRT has a number of technologies at its disposal. The room has a Promethean whiteboard that can be used to display maps of the school as well as staff or student information. A large screen monitor is used to display Microsoft OneNote where a scribe keeps a running incident log. There are also Web cameras that can be used with Adobe Connect Pro to broadcast live. A Polycom conference phone allows for conference calls between district leaders and local authorities and a geographical information system provides maps of the district and the affected areas. If cell phone service is down, the district has a number of handsets that can be passed out. Establishing Safety Protocols In addition to installing MNEC technology, organizations should also create and adhere to basic safety protocols. Locking doors and handing out visitors badges may seem simple, but it can be lifesaving. Referring to the 2008, Virginia Tech shooting, Clagg explains the gunman bypassed locked classrooms even when he could see people inside. That one barrier was enough to deter the shooter. This is one reason the Aurora schools have a locked door policy. “All exterior doors at all school levels are locked. You cannot get into a building without someone buzzing you in. No employee is allowed to enter another door without first signing in at the front door,” says Clagg. “If I catch any of my staff going in through the side door, that’s part of their performance rating.” All internal classroom doors are locked as well so in the event of a lockdown no one is scrambling to shut and lock the door. Teachers can simply close it and gather students together in the farthest corner of the room. Training plays a large role in disaster preparedness as well. The district does two annual mass trainings involving the school principals and the IRT. All training is provided and funded by the district whose security director is a former police officer. 
In addition to lockdown training and incident response, Clagg has also attended sessions on shooter profiles and when to call for an assessment for a student in trouble. “Here’s our vision: provide environments that are physically and emotionally safe for students, for peer work and for learning,” says Clagg. “Everything, really, is behind that. Everything we do is to make sure that happens.”
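The monthly "test all the gear" routine and the IRT checklist described above lend themselves to simple automation. The sketch below is a hypothetical illustration only – the host names and ports are placeholders, not the district's real systems – showing how a scheduled script could confirm that critical services are reachable before anyone files a work order.

```python
import socket
from datetime import datetime

# Hypothetical inventory of critical services; hostnames and ports are placeholders.
CRITICAL_SERVICES = {
    "student information system": ("sis.example-district.org", 443),
    "email gateway": ("mail.example-district.org", 443),
    "mass notification API": ("notify.example-vendor.com", 443),
}

def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_monthly_test() -> None:
    print(f"Connectivity test {datetime.now():%Y-%m-%d %H:%M}")
    for name, (host, port) in CRITICAL_SERVICES.items():
        status = "OK" if reachable(host, port) else "FAILED - open a work order"
        print(f"  {name:<28} {status}")

if __name__ == "__main__":
    run_monthly_test()
```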
<urn:uuid:ec21e93e-bf77-434b-a1b3-6bd92d976850>
CC-MAIN-2022-40
https://mytechdecisions.com/physical-security/mass-notification-technology-is-only-as-good-as-your-plan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00268.warc.gz
en
0.956783
1,542
2.84375
3
Scientists have decided that redesigning streets to make them more user-friendly for drunks could help reduce conflict and violence. After using computer simulations based on the Welsh... ...to mimic the movements of people staggering home after a good night out, researchers came to the staggering realisation that drunk people trip over things. Scientists went on to the streets of Cardiff to get information about drunken behaviour they could feed into their computer model, breathalysing locals and studying their behaviour. A quarter of the individuals encountered were found to be so drunk they were staggering. — Larry, Attack Monkey, Light Reading
<urn:uuid:d6ed99ae-0681-473c-82db-164b2a58d804>
CC-MAIN-2022-40
https://www.lightreading.com/welsh-proofing-streets/a/d-id/659531?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00268.warc.gz
en
0.965901
126
2.84375
3
When we talk about IT setups and IT infrastructure, we come across terms whose differences are not always obvious. Today we look at two terms that appear in most IT conversations: the data center and the disaster recovery center. A data center is a centralized physical space that hosts computer systems and other infrastructure components such as network equipment, telecommunication links and storage systems. Data centers usually comprise multiple power sources (regular supply from the energy company plus DG sets for power redundancy), multiple data communication links, and environmental controls such as air conditioning/cooling equipment, fire safety equipment, security devices and access control equipment. Servers and network components are usually organized in 'racks'. Businesses that need round-the-clock operations and high-speed connectivity can rely on data centers for these services and thereby reduce the running costs of their IT operations. Data Center – Types Data center design comprises networking equipment such as routers, switches and firewalls, along with storage equipment and servers. Critical data and applications are hosted in the data center to support business-critical services, so security is central to data center design. Data centers are classified by their ownership, the technologies used for computing and storage, energy efficiency, and so on. Enterprise data centers are built and owned by organizations and optimized for their end users. They are usually in-house, within company premises. Managed services data centers are managed by third-party service providers. Equipment is leased rather than procured. Colocation data centers are those where organizations rent space within data centers owned by other organizations. The facility owner provides the building, cooling, bandwidth and physical/logical security, while the tenant organization provides components such as servers, network equipment and storage. Cloud data center – Data and applications are hosted on cloud infrastructure such as IBM Cloud, Microsoft Azure, or any other public cloud service provider. Disaster Recovery Center A disaster can strike at any time and anywhere. It may be anything from a network switch failure to a physical calamity – flood, fire, etc. – in the geographical area where the data center is located. In most cases disasters cannot be predicted, but with meticulous planning we can minimize damage to the business and recover critical business data. Disaster recovery centers are set up to address organizations' requirements for uninterrupted business continuity. A disaster recovery center is a specialized data center where replication of data and computer processing is set up at an off-premises location to ensure services are not impacted when disaster strikes. Several factors are taken into consideration when designing a disaster recovery center; for example, the primary site and the DR site should not be in the same seismic zone. In today's competitive world, organizations are required to provide on-demand services to their customers, ensure availability of services round the clock and minimize downtime. Gone are the days of traditional on-premises data centers; they have been replaced by virtualized systems on a massive scale. 
Use of virtualization technology makes it easy to protect critical data and applications by creating VM-based backups and replicas that can be stored off-site or at a remote location. The production load of the primary data center can then quickly be moved onto a disaster recovery site/center to resume business operations. Setting up a disaster recovery center is a subset of 'business continuity'. Disaster Recovery Center – RTO and RPO RTO (Recovery Time Objective) and RPO (Recovery Point Objective) are two commonly used terms in business continuity and disaster recovery. RTO defines the timeframe within which systems need to be restored to their original state. RPO defines how much data the organization can afford to lose before the loss starts impacting the business. (A short worked example of checking these targets follows at the end of this section.)

Comparison Table: Data Center vs Disaster Recovery Center

| | DATA CENTER | DISASTER RECOVERY CENTER |
|---|---|---|
| Definition | A physical space that hosts computer systems and network equipment to support day-to-day operations | An alternate facility used to recover and restore IT operations when the primary data center is not available; it holds a ready, up-to-date copy of the critical applications and databases required to run the business |
| Location | On premises, co-located outside, or on cloud | Off premises, co-located outside, or on cloud |
| Purpose | To facilitate daily work/operations | Backup site to support restoration of operations in the event of a crash or disaster |
| Standards/Types | Tier 1 – basic capacity with UPS; Tier 2 – redundant capacity with cooling and power; Tier 3 – concurrently maintainable, any component can be taken out without affecting production; Tier 4 – fault tolerant, insulated from any type of failure | Hot – real-time replication from the primary site; Warm – backup facility with network connectivity and pre-installed hardware; Cold – backup facility with office space, power, cooling, air conditioning and communication equipment |
| Nature of establishment | A permanent physical or virtual (cloud-hosted) location supporting the organization's IT operations | A physical location that is temporary in nature, or a virtual location on cloud, to support normal business operations during a disaster |

Download the comparison table here. Disaster recovery center planning involves careful assessment of key points related to:
- Identification of the critical business processes needed to keep vital operations running
- Identification of the data that requires protection, and development of a backup schedule
- Data recovery techniques suitable for the business
- Location of the data – will it be stored on site or off site
- Training of staff to recover quickly from a disaster, and guidance on how to work during one
- Testing and updating of the recovery plans
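To make the RTO and RPO targets defined above concrete, here is the promised worked example in Python. The numbers are invented placeholders standing in for the values a real business impact analysis and DR drill would produce.

```python
from datetime import timedelta

# Illustrative targets only -- real values come from the business impact analysis.
rto = timedelta(hours=4)       # systems must be restored within 4 hours
rpo = timedelta(minutes=30)    # at most 30 minutes of data may be lost

backup_interval = timedelta(hours=1)          # how often replicas/backups are taken
measured_failover = timedelta(hours=6)        # restoration time measured in the last DR drill

# Worst-case data loss equals the time elapsed since the last completed backup.
if backup_interval > rpo:
    print(f"RPO violated: up to {backup_interval} of data could be lost "
          f"(target {rpo}). Shorten the backup/replication interval.")

if measured_failover > rto:
    print(f"RTO violated: failover took {measured_failover} "
          f"(target {rto}). Consider a hot or warm DR site.")
```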
<urn:uuid:1f2f2d0c-c49d-46b0-9a8c-7df789bb306b>
CC-MAIN-2022-40
https://networkinterview.com/data-center-vs-disaster-recovery-center/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00268.warc.gz
en
0.927293
1,216
2.921875
3
Since March 2019, hackers have been targeting the United Nations (UN) and a number of affiliated humanitarian aid organizations such as UNICEF, the UN Development Programme, and the UN World Food Programme with a sophisticated, mobile-centric phishing campaign. The details of the phishing scam were reported by cyber security firm Lookout, which has reported the situation to the targeted organizations and to law enforcement. This phishing campaign is yet more evidence that hackers are becoming increasingly sophisticated in how they carry out phishing attacks. The current speculation is that the unknown hackers might use all user credentials gathered during these UN attacks for a later business email compromise (BEC) attack involving UN aid organizations. Details of the UN phishing campaign Phishing emails sent from legitimate UN email addresses instruct recipients to go to a fake Microsoft Office 365 login page, which is used to harvest login credentials from users. These login credentials are then sent back to servers controlled by the hackers, giving them an opportunity to carry out even more of these attacks. The fake Microsoft Office 365 login page resembled a real login page, so even if users might have had hesitation about logging in originally, the hackers did such a good job replicating a real page that any user would be lulled into a false sense of security. When viewing the login on the phishing site, they might never guess that it hosted malware There’s a lot to unpack here, primarily because the cyber criminals took particular pains to make this phishing scam look as legitimate as possible. For example, all phishing emails came from legitimate UN email addresses. And the malware used to carry out the attack was capable of detecting what type of a device a user was accessing, so as to deliver either a mobile- or desktop-based experience. For example, the malware detects if the page is being viewed on mobile, and then delivers mobile-specific content. According to the Lookout researchers, the clear preference was for users deploying mobile devices, because many mobile web browsers will truncate long URLs to fit on a tiny screen – this makes it much easier for hackers to use phony URLs that resemble real URLs. In the past, says Lookout, phishing attacks targeting the United Nations used the same URLs used in this attack, so obfuscating phishing URLs by truncating them was one way to evade a potential IP network block. In addition, says Lookout, the Google Safe Browsing database did not have any record of these URLs, so users would not be shown any type of warning or alert that they were browsing on an unsafe page. In addition, this phishing attack utilized an advanced keylogging tool, such that users did not actually have to hit the “login” button for the keystrokes to be recorded. If users completed only part of the login process (such as entering only a password but not a username), it still was able to log keystrokes and return the information in the password field to the hackers’ servers. Moreover, the keylogging tool could also detect if a user entered one password and then replaced it with another, as might be the case if a user forgot the original password. And, finally, the phishing campaign hackers went the extra step of including SSL certificates for the phishing login pages. The SLL certificates had a range of validities, with one validity range of May 5, 2019 to August 3, 2019 and another validity range of June 5, 2019 to September 3, 2019. 
There is another range of validities that is set to expire by the end of the year (but that is still valid). Thus, users who might check to see if a security certificate were present on the page as a tip-off that the page was either real or fake would also be lulled into a false sense of security. Kevin Bocek, vice president of security strategy and threat intelligence at Venafi, comments on the cyber attacks: “These latest attacks targeting United Nations and global charity websites use TLS certificates to make malicious domains appear legitimate, and they take advantage of the implicit trust users have in the green padlock created by TLS certificates. Internet users have been trained to look for a green padlock when they visit websites, and bad actors are using SSL/TLS certificates to impersonate all kinds of organizations. This may appear sophisticated, but these kinds of phishing attacks are very common. For example, in 2017, security researchers uncovered over 15,000 certificates containing the word ‘PayPal’ that were being used in attacks. And in June, the FBI issued a warning stating that the green padlock on websites doesn’t mean the domain is trustworthy and safe from cyber criminals.” Why the phishing campaign took place A natural question to ask might be: Why would cyber attacks specifically search out humanitarian aid organizations, charities and the United Nations? It might make sense to target a huge multinational corporation, but going after an organization such as the U.S. Institute of Peace, the Heritage Foundation or the International Federation of the Red Cross and Red Crescent Societies would seem to make no sense at all, right? Even Lookout admits that the reason for carrying out the phishing campaign does not make a lot of sense on the surface. However, one explanation for the phishing campaign might be that hackers were looking to hijack payments or carry out sophisticated BEC scams, in which emails from a legitimate email account are used to scam a victim into wiring funds into a third-party bank account controlled by the hackers. Since the UN and the other targeted organizations carry out massive, million-dollar relief programs, this might actually make a lot of sense. Another, more insidious explanation for the phishing campaign, which has been live since March 2019, is that the hackers were working at the behest of a rogue nation-state. This nation-state, in turn, might be looking for details about pending UN investigations, or even worse, looking for names of whistleblowers that they can then track and harass in their home country. There might also be efforts to embarrass or harass top UN officials and their deputies as a result of the phishing attack. Alexander García-Tobar, CEO and co-founder of Valimail, notes the growing sophistication of the hackers: “The latest phishing campaign targeting officials from the United Nations, UNICEF, Red Cross and other humanitarian aid organizations demonstrates how sophisticated and highly convincing phishing attacks have become. By using deviously coded phishing sites, hackers are attempting to steal login credentials and ultimately seek monetary gain or insider information.” One major takeaway from this phishing campaign targeting organizations linked to the UN is that hackers are adopting a mobile-first mentality. This is a new twist on an old approach. As the Lookout researchers highlighted, this cyber attack had all the markings of a mobile-aware phishing campaign. 
We live in a “post-perimeter world,” says Lookout, in which the lines are blurring between personal networks and corporate networks, as well between the devices we use at home and the devices we use at work. Global cyber criminals are paying attention, and are fine-tuning their attacks to take this into account.
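One practical takeaway from the mobile-first tactics described above is to judge a link by its full host name rather than by whatever fits in a truncated mobile address bar. The short Python sketch below illustrates the idea with fabricated URLs (they do not refer to the actual phishing infrastructure); a production check would use the Public Suffix List rather than the naive "last two labels" shortcut used here.

```python
from urllib.parse import urlparse

# Hypothetical examples -- both URLs are fabricated for illustration.
urls = [
    "https://login.un.org/owa/auth",                                      # legitimate-looking
    "https://login.un.org.session-auth.example-attacker.xyz/owa/auth",    # deceptive
]

for url in urls:
    host = urlparse(url).hostname or ""
    # The organization the browser actually connects to is determined by the
    # right-most labels of the hostname -- exactly the part a truncated mobile
    # address bar tends to hide.
    effective_domain = ".".join(host.split(".")[-2:])
    print(f"{url}\n  full host: {host}\n  effective domain: {effective_domain}\n")
```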
<urn:uuid:ce3944f3-e0a3-45a9-866e-a67dfa3155a2>
CC-MAIN-2022-40
https://www.cpomagazine.com/cyber-security/hackers-target-un-and-humanitarian-organizations-with-phishing-campaign/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00268.warc.gz
en
0.952317
1,471
2.828125
3
Cloud security refers to the measures that organizations take as they integrate the public cloud more into their business operations. At a rudimentary level, cloud security is concerned with securing data both in transit across networks and while stored in data centers, then, further securing access controls to that data. Main risks and threats in the cloud No one solution that can wholly protect systems against all cyber threats, and as organizational networks become more virtualized and dependent on the public Internet, so too will the threat surface increase. Companies that deploy to the cloud face three main risks: Data exposure is heightened in the cloud today, as companies are relying more on the public Internet to access services and communicate between distributed systems. Data exposures stem from breaches of confidentiality, integrity, or availability, resulting in unauthorized or accidental access, alteration, or loss of sensitive data. These forms of exposure may result from malicious actors, however, misconfigurations can do as much damage by leaving open doors to the system, or incorrectly serving unauthorized user’s information from wrongly linked databases. Unauthorized or Over-authorized Users Unauthorized or over-authorized users can stem from many issues, including poorly configured systems, and malicious actors. Cybersecurity measures must be taken against malicious actors, but adhering to the Least Privilege cloud security principle is one way to stave off poor security designs. The Least Privilege principle states that users should be given the minimum permission necessary to do their jobs. Configure permissions accordingly. Malicious actors are the boogie men of cloud security. While it is never pleasant to have an unauthorized guest plunder company data stores, to be targeted by hackers means they value the data inside enough to put their resources toward getting it. This is in contrast to suffering from weak configurations that happen to open up the system to passersbys Types of attacks on cloud Hackers sometimes deploy sophisticated cybersecurity attacks to gain access to high-value targets that can be worth millions. DoS or DDoS Attack — Characteristically a brute force assault, Denial of Service (DoS) attacks aim at reducing or crippling the servers and services under attack, while Distributed Denial of Service (DDoS) attack uses multiple computers at the same time to each launch DoS attacks against a target or set of targets for a multiplier effect. These attacks are perpetrated for several reasons, the least of which is to cause annoyance. More resolute hackers have found that DoS attacks can open up opportunities to penetrate the target in other ways, gaining access to sensitive, valued data. When they’re large enough, these kinds of attacks can have dramatic financial and political ramifications. DoS attacks are brute force strategies requiring resources to sustain until the system under attack collapses. Man-in-the-Middle (MitM) Attack — Man-in-the-Middle attacks are sly methods by hackers to step in-between client and server communications and obtain controls reserved for trusted entities. Hackers can use session hijacking to obtain a client’s IP address after it has initiated a session with a server (the hacker had previously infiltrated the client via malware or other methods), and then quickly assume the client IP and controls, and then simply discarding the client, ultimately gaining access to the server. 
IP spoofing also attempts to gain access—instead of infiltrating a client, the hacker simply sends an access request with a fake IP address in hopes that it will be granted. Another similar method is replay attacks, a hacker will attempt to impersonate a trusted system by sending older, intercepted messages. MitM attacks are sophisticated, and require the use of multiple technologies to triangulate vulnerabilities. Phishing Attack — Phishing attacks are attempts on users rather than systems by impersonating trusted sources to coax sensitive information out of a person. A common method is email impersonation—a hacker will send a legitimate-looking email asking to follow a link to a seemingly reputable website, and in the process capture sensitive information or load malware. Hackers won’t stop there if the data to be captured is valuable enough. Even more, targeted is spear phishing, where hackers perform due diligence on their mark to create a stronger illusion of credibility. The target may receive emails with doctored headers that look real and be presented with fully functional websites that represent reputable brands. Phishing attacks rely on social psychology and technology, making them very sophisticated and difficult to automate. Malware Attack — Malware, or malicious software, is software, non-consentingly installed on computer systems, that perform malicious tasks by stealing data, replicating, propagating, hiding, lurking, or destroying files. Malware encompasses many feared agents: Trojans, Worms, Macro Viruses, File Infectors, Boot Record Infectors, Polymorphic Viruses, Stealth Viruses, Logic Bombs, Droppers, and Ransomware. In combination with other attacks, malware is used to steal passwords, data, and wreak havoc on systems, some of the most notorious malwares have been very small in size belying their impact, like the Sasser worm, only 15.8 KB, causing an estimated $18.1 billion in damages. Shared responsibility of cloud security In the cloud, providers and consumers act more like partners rather than vendors and buyers, in this way, they share responsibility for security. Because it is fair that a CSP should give their best effort to secure their client’s data, they are responsible for data inside their domain and potentially how it is encrypted leaving its domain. But that effort does have a limit, which typically begins where the client’s systems start. This is the premise of shared responsibility. Shared responsibility encompasses both cloud management and security. For each cloud service, a certain level of responsibility falls on the vendor, and a certain amount falls on the consumer. The Center for Internet Security models such a shared responsibility agreement. Cloud security architectures are the designs and blueprints of how an organization will implement and manage its cloud security. Data Security — Data security addresses security measures that protect data traversing a network, and when that data comes to rest in storage. Several controls can be deployed in security data, including encryption, public key infrastructure, deployment of encryption and tunneling protocols, use of block and streaming ciphers, and using granular storage resource controls. Network Security — While data can be encrypted before transit, network security is concerned with controls on the pathways between systems. 
Companies can deploy several network/security controls to further protect their systems, including network segmentation, firewalls, DDoS protection, packet capture, intrusion prevention/detection systems (IPS/IDS), packet brokers, network access controls (NAC), and APIs. Endpoint Protection — Endpoints provide logical places for security measures, like bouncers at clubs. These measures include host-based firewalls screen received data (standard firewalls are usually Internet, perimeter defenses), antivirus/anti-malware software, endpoint detection and response (EDR) systems provide real-time awareness, use of data loss and prevention (DLP) systems to enforce data flows, harden systems, blacklist and whitelist applications. Access Control — Access controls ensure those privileges are granted only to those who need them. Poor access control management can lead to difficulty to prevent threat opportunities. Consider these measures: identification, authentication, and authorization systems; multi-factor authentication, or single sign-on (SSO). Why is cloud security important? Cloud security is paramount for organizations dealing in sensitive data. When data is breached, compliance and client protection can devastate a company’s prospects faster than many technical blunders. Besides the routine benefits of security, the following are several additional benefits. Centralized Security — Cloud security can centralize protection, bringing a holistic sense of the entirety, protecting the company from malicious actors, uncovering shadow IT, and optimizing performance. Reduced Costs — Securing data in the cloud can reduce capital costs, and turn them into operational costs, effectively reducing them to a line item. Reduced Administration — CSP reduces company administration load the same way they reduce costs, by offloading those tasks from clients. Now, staff can be utilized more effectively. Reliability — Cloud service providers offer their consumers virtually unlimited resources, with guaranteed uptimes. Cloud security challenges The landscape that cloud deployments operate in is accelerating in complexity as innovative technologies vie for resources. Technologies like the Internet of Things are adding millions of new devices, and more companies are moving their IT support to the cloud. Given this backdrop, companies are challenged to manage this complexity and gain greater visibility and insights into their cloud networks. Manage Intensifying Cloud Complexity Cloud deployments cannot be secured the same way that on-premises infrastructures are. Creating cohesive security must grapple with the fact that in the cloud data exposure is the norm, and that it should be assumed that someone is always listening. Coupled with a thorough security risk assessment, one that highlights how company data will flow across systems, cloud complexity can be mitigated. Gain Greater Cloud Network Visibility Network visibility is more challenging today because of virtualization, and distributed system complexities. Data centers can be in different geographic locations, and data can traverse multiple fabrics, all complicating how networking data is retrieved and analyzed. Today, sophisticated network monitoring platforms can replace the myriad of networking tools that companies use to produce visual analytics of their networks. 
Cloud security best practices Companies securing their part of cloud operations need to consider four areas of concern: how the cloud security approach is designed, how security will be implemented and governed, how property and data will be protected, and how to respond when attacks are successful. Cloud Security Engineering — Cloud security engineering attempts to design and develop systems that protect the reliability, integrity, usability, and safety of cloud data, and protect users legitimately accessing those systems. In this pursuit, engineers deploy layered security, protection against availability attacks (e.g. DDoS, ping of death, etc.), least-privilege security principles, separation of duties, and security automation. Security Governance — Technology alone is not enough to prevent attacks or secure data, which is why effective security governance must become part of company culture. Practices to consider are: developing company-wide security policies, documenting security procedures, performing routine assessments and audits, developing account management policies, leveraging industry standards, using platform-specific security standards, assigning roles and responsibilities, keeping software tools up to date, and classifying data. Vulnerability Management — More than ever, vulnerability testing and management are necessary. The cloud has stretched the threat surface, so extensive testing methods need to be explored, including black-box, gray-box, and white-box testing. Continuous vulnerability scanning must be diligently maintained, as it reveals weaknesses in configurations or application design. Many of these tasks can be automated. Incident Response — Incident response covers what happens when a cybersecurity incident occurs. The event happens, the damage is done, and now the company must mitigate the damage, respond, and fix the issue. Contrary to the name, incident response is best prepared beforehand through contingencies and self-healing systems. These contingencies need to respond to different incident types, internal vs. external, whether it is a data breach, criminal act, denial of service, or malware attempt.
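As one concrete illustration of the data-security controls discussed earlier – encrypting sensitive records before they come to rest in cloud storage – the following minimal Python sketch uses the third-party cryptography package. It is an example of the general technique, not a recommendation of a specific product, and in a real deployment the key would live in a key-management service with strict access controls rather than being generated inline.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key comes from a key-management service with separation of
# duties and access control -- never from source code or local disk.
key = Fernet.generate_key()
f = Fernet(key)

record = b"customer_id=1842;card_last4=0331"   # illustrative sensitive data
token = f.encrypt(record)                      # ciphertext safe to store at rest
print(token)

restored = f.decrypt(token)                    # only holders of the key can read it
assert restored == record
```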
<urn:uuid:ebbd9db2-34d0-411f-863e-b58ba9a99c4e>
CC-MAIN-2022-40
https://www.hitachivantara.com/en-anz/insights/faq/what-is-cloud-security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00468.warc.gz
en
0.926151
2,715
3.15625
3
How variable frequency drives and motors create energy-efficient cooling for data centres Data Centers use enormous amounts of energy, especially in cooling the IT equipment housed within their walls. However, investing in variable frequency drives (VFDs) and energy efficient motors in cooling systems can make a massive contribution to keeping energy consumption under control. We asked ABB Group's Global Product Marketing Manager, Maria Fedorovicheva, to explain how. Why look for efficiency gains in data center cooling processes? As an integral element of modern computing infrastructures, data centers already use around one percent of the world's total energy production. Cooling systems, which play a critical role in ensuring the reliability and availability of the facility around the clock, typically consume up to 40% of all the energy used by the data center. The other large user is the IT equipment itself. Based on these figures, it is clear that the fans, pumps, and compressors – which form the heart of data center cooling systems – should be one of the first places to look for efficiency gains. Are there any regulatory parameters to control energy usage in data centers? One of the most common metrics for energy efficiency has been devised by the industry consortium, Green Grid. This is the power usage effectiveness (PUE) - the ratio defined as the total power entering the data center divided by the power used by its IT equipment. In an ideal world, a data center would have a PUE of 1. According to a study performed by the Uptime Institute, the PUE levels of data centers have been decreasing over the years, from around 2.6 in 2006 to 1.7 in 2019, although the recent trend, since 2013, is flat. To help drive PUE down further, it is necessary to take actions to increase the efficiency of data centers during their operating life as well as implementing cutting edge technologies in new projects. One solution that can help in decreasing PUE substantially is to adopt VFDs and energy-efficient motors for the cooling systems. Why VFDs specifically? VFDs have proved to be a highly effective energy-saving solution for cooling. Drives enable the speed of electric motors used in cooling applications to be controlled precisely, so that they produce the required flow at any time, resulting in energy savings of up to 35%. This is in contrast to running the motor at full speed and controlling the output by throttling and damping. The relationship between motor speed and energy consumption means that even just a moderate reduction in speed can result in a very significant improvement in energy efficiency. While data center cooling systems are sized to handle peak loads under the most adverse conditions – from summer heat to component failures – they seldom, if ever, operate at their design loads. Instead, they operate mostly in a lightly loaded state. VFDs provide the flexibility to enable the cooling system to match the varying load profile so that high system efficiency can be maintained even at partial loads. Does motor technology matter as well? Absolutely. Different motor technologies show a different performance depending on the load. At 25% load the efficiency difference between motor technologies can easily be over 10%. It therefore makes sense to choose a motor based on its performance in the range where it will be operating most of the time. And in most cases, that range is not the nominal load, but well below it. Are there any other considerations regarding cooling process efficiency? 
In fact, the whole system efficiency matters – we can install highly efficient motors to run applications like pumps, fans or compressor with the minimum possible losses and use drives to match the motor speed to demand so that we save energy. But if, say, the design of a fan causes it to create massive aerodynamic losses, the whole system efficiency or wire-to-air efficiency may suffer. That means when considering energy efficiency in data projects, it makes sense to go beyond looking at each element on a component by component basis to evaluate the efficiency of the entire cooling system. As the server density of data centers continues to increase then so will their heat loads. Future-proofing data center cooling systems means that they must be specified with the scalability to meet future needs. By scalability, we mean that they can be adjusted to suit changing loads, such as when a facility is expanded in size. Again, VFDs developed specifically for heating, ventilation and air conditioning (HVAC) applications as well as motors with high-efficiency characteristics- not only at nominal speed but also at part loads - offer an important benefit, as they are designed with flexibility and scalability built-in.
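The PUE definition and the speed/power relationship mentioned above can be illustrated with a short calculation. The figures are invented for illustration, and the cube-law relationship used here is the commonly cited affinity-law approximation for centrifugal fans and pumps, not a number taken from the interview.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total power entering the data center
    divided by the power used by the IT equipment."""
    return total_facility_kw / it_load_kw

it_load_kw = 1000.0      # IT equipment load (illustrative)
cooling_full_kw = 400.0  # cooling power with fans/pumps at full speed (illustrative)
other_kw = 100.0         # lighting, UPS and distribution losses, etc. (illustrative)

print(f"PUE with full-speed cooling: {pue(it_load_kw + cooling_full_kw + other_kw, it_load_kw):.2f}")

# Affinity-law approximation: shaft power scales roughly with the cube of speed,
# so trimming fan speed to 80% of nominal needs only about half the power.
speed = 0.80
cooling_reduced_kw = cooling_full_kw * speed ** 3
print(f"Cooling power at {speed:.0%} speed: {cooling_reduced_kw:.0f} kW")
print(f"PUE with VFD-trimmed cooling: {pue(it_load_kw + cooling_reduced_kw + other_kw, it_load_kw):.2f}")
```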
<urn:uuid:5f6841e1-2564-412b-9b63-a61a24ac1c42>
CC-MAIN-2022-40
https://datacenternews.asia/story/how-variable-frequency-drives-and-motors-create-energy-efficient-cooling-for-data-centres
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00468.warc.gz
en
0.938668
937
2.75
3
The Department of Energy (DoE) has unveiled an online portal that will help users view resources that may assist in COVID-19 response efforts. The Lab Partnering Service COVID-19 portal enables users to communicate with researchers, explore potential virus response facilities and access patents for licensing, DOE said Thursday. The online hub was initially deployed in 2018 to assist researchers and various organizations to validate, locate and attain information from the department's national laboratories. "We are grateful to all of DOE’s 17 National Labs, who have stepped up to facilitate access to their researchers, intellectual property, and facilities during this trying time,” said Dan Brouillette, DOE secretary. DOE has also launched the COVID-19 Technical Assistance Program, an initiative that seeks to help non-DOE organizations fight COVID-19 through the provision of targeted funding.
<urn:uuid:a55807fb-b804-4790-8197-2ccc4df08743>
CC-MAIN-2022-40
https://executivegov.com/2020/06/doe-launches-covid-19-online-resource-hub-dan-brouillette-quoted/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00468.warc.gz
en
0.944761
178
2.5625
3
Viable quantum computers are predicted to be in use in the next few years. Quantum computers can easily break most public-key algorithms, which will create a need for more robust algorithms and crypto-agile systems that can switch their entire public key infrastructure (PKI) to a new algorithm and standard when an attack is detected. While all crypto algorithms are breakable on paper, the computing power required to do so is incredibly high – this level of technological advancement does not currently exist. However, there have been demonstrations of computers that possess adequate power to accelerate computational speeds enough to break algorithms in a fraction of the time it takes to do so by today’s standards. Therein lies the actual threat of quantum computing – the massive boost to computing speed would render the prime-factorization-based Rivest–Shamir–Adleman or RSA algorithm and Elliptic-Curve Cryptography or ECC easily breakable. And if the invulnerable RSA and ECC can be cracked, every other algorithm and hash function can eventually be broken. This means the advent of commercial quantum computing (predictably, by 2025) will expose every system in the world to the risk of compromise – quantum cryptography systems and quantum key distribution systems will need to be sufficiently agile to adapt to the evolving threat landscape. Crypto-agility, simply put, is impossible without automation. Today, we don’t have a quantum computer capable enough to run either of those algorithms, but the impact will be felt almost everywhere when that happens. Without quantum-resistant encryption, everything that has been transmitted, or will ever be transmitted over a network, is vulnerable to eavesdropping and public disclosure. Some of the significant consequences will include: - Reputational damage - Legal costs - Service disruption - Financial losses - Compliance risks - Intellectual Property (IP) theft, including theft of data - Theft or loss of personal identity information or PII - Inability to execute and control sovereignty According to Gartner, long-lived PKIs and related systems (10-plus years) must start planning for post-quantum cryptography now and establish crypto-agility strategies for deployments. This means having well-established lifecycle handling in place for both digital certificates and the provisioning of the certificate authority or CA certificates. Quantum computers will break PKI instantly, but the transition period between old and new algorithms will be extended. It never hurts to be proactive. As a critical first step, identify the starting point. Start with understanding and assessing your risks. Some of the key questions worth considering include: - Where is your critical data? - How is it encrypted? - Is it vulnerable to harvesting? Also, have a complete understanding of the types of keys present in your infrastructure and their locations. What might help is assessing where you are on the crypto agility maturity model. Determine which algorithms will or will not be suitable for your use cases while analyzing performance characteristics focusing on user experience. 
You might look at implementing some of the below-mentioned steps to strengthen data encryption: - Maximize the entropy in your network: Quantum random number generators deliver full entropy at a high speed - Use only provable Quantum-Safe encryption algorithms (Symmetric Advanced Encryption Standard or AES) - Detect and fight eavesdropping, be Quantum-Safe - Reconsider key length and the frequency of key exchange - Consider deploying quantum key distribution or QKD on particularly sensitive links - Stay “crypto-agile” by improving standard AES or migrating to more specific algorithms - Use the latest encryption key management, with a clear separation of duties - Use automation to minimize human intervention and focus on a robust key generation and distribution mechanism - Monitor progress with evaluation and standardization of post quantum or PQ algorithms - Consider deploying only when you have sufficient confidence in their effectiveness or if there are compliance or regulatory requirements. The PKI ecosystem can experience turbulence due to a myriad of reasons, and the effects are just as varied. Businesses need to be prepared to face these possible pitfalls while minimizing or eliminating their losses and to do that, they need a PKI that is fully managed, monitored, and accounted for. Poorly managed PKI often marks the entry point for a host of vulnerabilities that could affect productivity, efficiency, and revenue. Ensure that you stay proactive with holistic visibility into your infrastructure.
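As a rough illustration of what crypto-agility can look like in code, the sketch below (assuming the Python cryptography package) hides AEAD ciphers behind a small registry so that the algorithm is a named configuration choice rather than something hard-coded at every call site. AES-256-GCM appears because, as noted above, symmetric AES with long keys is considered quantum-safe; the registry names, the wire framing and the empty post-quantum slot are assumptions made for illustration only.

```python
# Sketch: a tiny algorithm registry so callers never hard-code a cipher.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

CIPHERS = {
    "aes-256-gcm": AESGCM,
    "chacha20-poly1305": ChaCha20Poly1305,
    # "pq-hybrid": ...  # placeholder slot for a future post-quantum scheme
}


def encrypt(alg: str, key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)  # 96-bit nonce, valid for both AEADs above
    return nonce + CIPHERS[alg](key).encrypt(nonce, plaintext, None)


def decrypt(alg: str, key: bytes, blob: bytes) -> bytes:
    return CIPHERS[alg](key).decrypt(blob[:12], blob[12:], None)


key = AESGCM.generate_key(bit_length=256)
blob = encrypt("aes-256-gcm", key, b"sensitive record")
assert decrypt("aes-256-gcm", key, blob) == b"sensitive record"
```

The point of the registry is that migrating to a stronger or post-quantum scheme later becomes a configuration change rather than a rewrite of every caller, which is the kind of agility the maturity model discussed above is driving at. A production design would also have to version the algorithm identifier on the wire and manage keys outside the application.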
<urn:uuid:592d5d3a-8487-4604-8cb6-cd64cbdc6abc>
CC-MAIN-2022-40
https://www.appviewx.com/blogs/building-security-for-a-quantum-world/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00468.warc.gz
en
0.905427
908
2.6875
3
Alternative medicine is meant to offer a possible choice alongside conventional medicine in treating diabetes. According to studies, diabetes affects more than 285 million people worldwide. In the USA, more than 29 million adults have diabetes and another 86 million have prediabetes.
What causes diabetes? People suffering from the disease have high sugar levels in the blood. In the body, insulin unlocks cells to allow sugar to enter and be used for energy. When insulin production decreases, or cells no longer respond to insulin, blood sugar rises. Fat cells in diabetic patients have also been found to produce a hormone called resistin, which impairs insulin action and causes glucose intolerance. One study ranked the top 10 states by the increase in the rate of diabetes among their residents; West Virginia holds the top position, with a larger share of people affected by diabetes than in other states.
Top alternative medicine treatments for diabetes
Alternative medicine is categorized into different forms, such as herbal therapies and plant therapies, since natural products can contain useful anti-bacterial and anti-diabetic compounds. Worldwide, there are around 1,200 flowering plants as well as 700 recipes and compounds with anti-diabetic properties that have been evaluated scientifically. Medicinal herbs related to treating diabetes are listed below.
Herbs to regulate diabetes
Dioscorea, a climbing plant, is used in traditional Chinese medicine. Several studies claim that the rhizome extract of this plant reduces insulin resistance and improves glycemic control, helping to regulate blood sugar levels in people with diabetes.
Blueberries are high in antioxidants; they help decrease blood pressure and lipid oxidation and improve insulin resistance. They are also associated with protection against cancer, cardiovascular disease and neurodegenerative disease. Taking moderate amounts of blueberries, which contain vitamin C, phenolics and anthocyanins, improves diabetic health. One study claims that people with type 2 diabetes who consumed 22 g of blueberry twice a day for six weeks saw improved health.
Compounds from cinnamon extract act as potential anti-diabetic agents. Cinnamon grows in tropical areas across Southeast Asia, South America and the Caribbean. Cassia bark cinnamon, one type of cinnamon, is the variety that mainly helps diabetic patients. Research has suggested that cinnamon can help improve blood glucose levels and increase insulin sensitivity. A daily intake of just 1, 3, or 6 grams was shown to reduce serum glucose, triglycerides, LDL ("bad") cholesterol and total cholesterol after 40 days among 60 middle-aged diabetics.
Fenugreek, one of the oldest herbs used in Ayurveda and also in Chinese herbal therapies, improves glucose tolerance. It contains the anti-diabetic compounds diosgenin, GII, galactomannan, trigoneoside, and 4-hydroxyisoleucine. The plant is widely grown in South Asia, North Africa and parts of the Mediterranean, and its leaves are sold as a vegetable or herb. The seeds lower blood sugar by slowing down the digestion and absorption of carbohydrates, which suggests they may be effective in treating people with all types of diabetes. An intake of 2.5 grams of fenugreek twice a day for three months has been reported to reduce fasting glucose levels, lower bad cholesterol and lower blood sugar in people with mild type 2 diabetes.
Fig leaf contains chemicals that might help people with type 1 diabetes use insulin more efficiently. It is considered a diabetic remedy in southwestern Europe, and diabetics have been reported to need less insulin when treated with fig leaf extract. An additional remedy is to boil four fig leaves in freshly filtered or bottled mineral water and drink this as a tea. The fig fruit itself is also a good source of potassium, a nutrient that helps control blood pressure.
Gymnema sylvestre helps support healthy pancreas function. By working directly at the level of the blood, it helps balance blood sugar. When consumed, Gymnema sylvestre fills the taste bud receptors and prevents glucose from binding to those same receptors. Gymnemic acid, which is structurally similar to glucose, locks into the glucose receptors in the intestines; this limits the absorption of sugar molecules, leading to more balanced blood sugar levels.
Bitter melon is also commonly known as bitter gourd or bitter squash. A chemical found in bitter melon known as charantin reduces insulin resistance and acts as a substitute for insulin as glucose enters cells. Bitter melon also contains lectin, a substance that reduces blood glucose concentrations by acting on peripheral tissues and suppressing appetite, similar to the effects of insulin in the brain. Indeed, the hypoglycemic effect that develops after eating bitter melon is attributed to lectin.
Ocimum sanctum, commonly known as holy basil, showed a positive effect on post-lunch and fasting glucose in one study. It also improves the functioning of beta cells and eases the insulin secretion process. Basil extracts are used in medication for type 2 diabetes, and basil leaves contain antioxidants that reduce oxidative stress in diabetics.
Fish oil contains numerous vitamins and supports good eyesight, but the compounds in fish oil also reduce the risk of diabetes. Its omega-3 fatty acids can help correct the functioning of the pancreas, supporting insulin production and thereby helping to control diabetes. Fish oil also contains high amounts of vitamin D, which improves immunity and the control of glucose and lipid metabolism.
An intake of bilberry leaves over a six-day period was observed to improve glucose levels by 15 to 20 percent. Bilberry contains anthocyanidins, substances that help prevent diabetes, manage insulin in the body and keep blood sugar under control. Bilberry syrup is often recommended for people diagnosed with diabetes, as it boosts the body's immune system to fight free radicals, bacteria and viruses.
Relying on these therapies can improve a diabetic patient's health to some extent. Even so, more people still prefer conventional medicine over alternative medicine.
<urn:uuid:4202c558-3ef8-4fda-b766-9f127d7f4d58>
CC-MAIN-2022-40
https://areflect.com/2017/11/16/top-10-alternative-medicines-for-diabetes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00468.warc.gz
en
0.912126
1,269
2.90625
3
According to the New York Times, four leading technology companies and seven American universities have agreed on principles for making software developed in collaborative projects freely available. It’s a story that dates back to the Bayh-Dole Act of 1980, which allowed universities to hold the patents on federally funded research and to license that intellectual property to industry. Since then, specialists say, the legal wrangling over intellectual property rights in research projects involving universities and companies can take more than a year. Although experts don't think this legal maneuvering slows the pace of innovation, it does prompt some companies to seek university research partners in other countries. The companies involved in the agreement, which will be announced today, are I.B.M., Hewlett-Packard, Intel and Cisco. The educational partners are the Rensselaer Polytechnic Institute, the Georgia Institute of Technology, Stanford, Carnegie Mellon, and the universities of California at Berkeley, Illinois and Texas. Peter A. Freeman, assistant director for computer and information science and engineering at the National Science Foundation, came up with today’s best quote: “’It’s the science, stupid.’ It’s not the intellectual property.”
<urn:uuid:47bf93d0-3275-443f-bb78-d5ea2d2bed3f>
CC-MAIN-2022-40
https://www.cio.com/article/255042/it-strategy-it-s-the-science-stupid-it-s-not-the-intellectual-property.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00668.warc.gz
en
0.89963
260
2.5625
3
The Internet would be a better place if it treated all packets equally, but because ISPs discriminate against certain protocols, the need for protocol obfuscation exists. Erik Hjelmvik discusses how to identify and build better obfuscated protocols.
By Erik Hjelmvik
In an ideal Internet all packets would be treated as equal by the Internet Service Providers (ISPs) and backbone operators who transport them across cyberspace. Unfortunately, this is not always the case, since many ISPs restrict or completely block Internet access to some services by discriminating against certain network protocols. Several telecommunication companies that also offer Internet access have, for example, been known to block the Voice-over-IP (VoIP) application Skype in their networks. The underlying reason for this discrimination has in most cases been that the telecommunication providers see Skype as a competitor to their own telephony services. Peer-to-peer (P2P) file sharing applications are also often blocked or bandwidth-limited by ISPs. The principle of network neutrality (also known as “internet openness”) advocates that users should be able to send and receive data across the Internet without having their traffic discriminated against based on content, application, protocol, source or destination. An ISP that limits the bandwidth of one or several P2P protocols is thereby violating the network neutrality principle. The legal requirements for ISPs to comply with the network neutrality principle vary between countries. However, from an ethical point of view it is pretty obvious that it should be the users, not the ISPs, who decide which protocols and applications can be used on the Internet. The network neutrality principle also protects the concept of an open Internet that allows for innovation.
Blocking of P2P File Sharing
P2P file sharing is a technology for efficient sharing of data between peers across the Internet. Just as with any other technology for transferring files, P2P file sharing can be used for sharing lawful as well as unlawful content. There is a great deal of lawful content, such as open-licensed software and digital media, that can be downloaded through P2P file sharing. Unfortunately, the amount of unlawful content available on P2P file sharing networks is significantly greater. Copyright violation, however, is not usually a concern for ISPs. The reason many ISPs block P2P traffic is that more than half of the traffic on the Internet is P2P traffic (according to the Ipoque Internet Study 2008/2009), and a small group of active P2P users can typically use up the majority of an ISP's available bandwidth. A common method for actively controlling the bandwidth of network traffic is to apply “traffic shaping,” a rate-limiting technique that delays packet transmissions when the bandwidth exceeds a predetermined threshold. ISPs can assign differentiated threshold values depending on the application layer protocol in use and thereby effectively throttle the bandwidth for P2P traffic, or whatever traffic class they want to suppress. But first they need to classify the traffic of the sessions in their networks to determine which protocols or applications are being used. The simplest form of traffic classification uses the server-side TCP and UDP port numbers; HTTP, for example, typically uses TCP port 80 while DNS relies on UDP port 53. Port number classification is obviously easily dodged by P2P applications using port numbers that are user-supplied or randomized.
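To see how little the port-based approach has to work with, here is a minimal sketch of a port-number classifier. The port-to-protocol table is an illustrative assumption rather than a complete registry, and the BitTorrent entry only covers that protocol's historical default port.

```python
# Sketch: naive traffic classification from (transport protocol, server port).
TCP, UDP = 6, 17  # IP protocol numbers

WELL_KNOWN_PORTS = {
    (TCP, 80): "HTTP",
    (TCP, 443): "HTTPS",
    (UDP, 53): "DNS",
    (TCP, 22): "SSH",
    (TCP, 6881): "BitTorrent (historical default)",
}


def classify_by_port(ip_proto: int, server_port: int) -> str:
    """Guess the application protocol from the server-side port alone."""
    return WELL_KNOWN_PORTS.get((ip_proto, server_port), "unknown")


print(classify_by_port(TCP, 80))     # -> HTTP
print(classify_by_port(TCP, 49152))  # a P2P client on a random high port -> unknown
```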
Several port-independent methods for classifying traffic have therefore evolved; many use Deep Packet Inspection (DPI) to match payload data in the observed traffic against signatures of known protocols.
Enter Protocol Obfuscation
Modern P2P file sharing applications such as Vuze, uTorrent and eMule have introduced protocol obfuscation techniques to avoid being fingerprinted by the port-independent traffic classification methods. The popular VoIP application Skype applies obfuscation to all of its traffic, which makes the application difficult to identify through network monitoring. The concept of protocol obfuscation implies that measurable properties of the network traffic, such as deterministic packet sizes and byte sequences, are concealed or clouded so that they appear random. The obfuscation of payload data is typically achieved by employing encryption, and flow properties are obfuscated by adding random-sized padding to the payload. These obfuscation techniques do not always provide sufficient protection against traffic shaping. In the technical report titled “Breaking and Improving Protocol Obfuscation,” Wolfgang John and I show how even P2P applications that employ protocol obfuscation are identifiable with statistical measurements. The obfuscated protocols used by BitTorrent and eDonkey P2P file sharing applications can, for example, be identified by measuring the packet sizes and directions of the first packets in a TCP session.
Identifying Obfuscated Protocols
There are many vendors who provide proprietary solutions that claim to support identification of even obfuscated protocols, but none reveal what methods they rely on when performing such protocol identification. Open-source solutions for traffic classification and protocol identification have not yet offered any support for obfuscated protocols. The open-source plug-in “OpenDPI” from ipoque has purposely been stripped of its ability to identify encrypted or obfuscated protocols, and the popular L7-filter classifier cannot provide accurate detection of any obfuscated protocol. However, an open-source tool has recently become available that can identify practically any protocol, including obfuscated protocols. This tool is the Statistical Protocol Identification (SPID) proof of concept, which I have made publicly available on SourceForge. The SPID proof-of-concept application is not intended to be a traffic classification tool used in production environments, but rather a demonstration of how well statistical methods can be used to identify most protocols. The SPID application can also be used by designers of obfuscated protocols in order to verify the obfuscation strength of a protocol.
How to Improve Obfuscation
As long as a protocol is identifiable to a third party monitoring the network traffic, it runs the risk of being subjected to discrimination in the form of traffic shaping or even being completely blocked. To guarantee network neutrality, protocols need to implement proper obfuscation of both payload and flow properties. The payload obfuscation can easily be achieved by applying encryption. Even a lightweight crypto such as RC4 would be sufficient, since even basic cipher breaking would require more computing resources than an ISP can be expected to throw at large volumes of network traffic. The encryption can alternatively be applied by tunneling the data inside some already existing protocol that employs encryption, such as SSH, SSL or IPSec NAT-T.
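As a hedged sketch of those two ideas, the code below encrypts the payload so its byte sequences look random and pads it so its size does too. It is not the scheme used by any particular P2P client: ChaCha20-Poly1305 from the Python cryptography package stands in for the lightweight cipher (the RC4 mentioned above is considered obsolete today), and the framing and padding bounds are assumptions chosen for illustration.

```python
# Sketch: obfuscate payload bytes (encryption) and packet sizes (random padding).
import os
import random
import struct
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305


def obfuscate(key: bytes, payload: bytes, max_pad: int = 64) -> bytes:
    pad = os.urandom(random.randint(0, max_pad))          # random-length padding
    framed = struct.pack("!HI", len(pad), len(payload)) + pad + payload
    nonce = os.urandom(12)
    return nonce + ChaCha20Poly1305(key).encrypt(nonce, framed, None)


def deobfuscate(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    framed = ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None)
    pad_len, msg_len = struct.unpack("!HI", framed[:6])
    return framed[6 + pad_len : 6 + pad_len + msg_len]


key = ChaCha20Poly1305.generate_key()
assert deobfuscate(key, obfuscate(key, b"hello peer")) == b"hello peer"
```

Note that this only randomizes payload bytes and message sizes; as the report cited above shows, packet timing and direction patterns can still give a protocol away, so a serious design has to obfuscate those flow properties as well.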
When doing so, it is important that the tunneling protocol implementation does not differ too much from its normal operation. The anonymity network service TOR, which uses a custom TLS implementation to encrypt connections between Onion Routers, has, for example, recognized the need to modify TOR’s TLS handshake to mimic that of Firefox+Apache in order to prevent the traffic from being fingerprinted as TOR. Further information on how to build better obfuscated protocols can be found in the “Breaking and Improving Protocol Obfuscation” report. As noted initially, the Internet would be a better place had it treated all packets equally, but as long as ISPs want to play hardball by discriminating against certain protocols, the need for protocol obfuscation will remain. Unfortunately, such obfuscation of measurable protocol properties inhibits researchers' ability to measure trends and usage of various protocols and applications on the Internet. There are, however, situations in which it could be argued that ISPs should be allowed to perform traffic shaping. One such situation is the case where different classes of traffic require different types of network performance. VoIP traffic, for example, requires low-latency transmissions with minimal jitter but does not require very much bandwidth. Transfers of large files across the Internet, on the other hand, require high bandwidth but are generally very resilient against both jitter and latency. An ISP with knowledge of which protocols are being used in each session could use that information to apply Quality of Service (QoS) to cater to the different needs of the various protocols and applications. In reality, however, such QoS assignments would typically result in the VoIP traffic receiving a higher priority than the file transfer. This would imply that it is beneficial for a VoIP protocol to be identifiable, but not for a file transfer protocol. As a result, it’s likely that designers of protocols for large file transfers might attempt to mimic protocols with better QoS prioritization in order to fool ISPs’ traffic classification attempts. Hence, don’t be surprised if applications that would gain from mimicking other protocols or hiding through obfuscation actually start applying these techniques. This is one of the reasons I believe that using protocol identification in order to discriminate against certain protocols is futile.
Erik Hjelmvik is an independent network security researcher and open source developer. He also works as a software development consultant, specializing in embedded systems. In the past, Erik served as an R&D engineer at one of Europe's largest electric utility companies, where he worked with IT security for SCADA and process control systems.
<urn:uuid:09801861-da28-4a62-ab43-5f06e74073d7>
CC-MAIN-2022-40
https://www.cio.com/article/282695/internet-network-neutrality-and-protocol-discrimination.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00668.warc.gz
en
0.931046
1,923
3.09375
3
When you think of the government, chances are you don’t think of the word “efficiency.” But these days, money is tight for everyone. As a result, Americans are finally forcing their government to be somewhat more financially responsible than it’s been in the past—albeit somewhat ineffectively. While the government, for whatever reason, doesn’t seem to be able to shed money spent on education, entitlements or our military, the evolution of technology has created an environment where costs can be significantly reduced by deploying the latest solutions on the market. And nowhere is this sentiment truer than in the data center realm. Between 1998 and 2010, the federal government quadrupled the number of data centers it operated. But according to the White House, these data centers only used 27 percent of their computing power. Seems awfully wasteful, right? In order to reduce some of the country’s operating expenses, President Barack Obama announced plans to shut down a significant number of these data centers. According to the White House, the goal is to shut down more than 800 data centers by 2015, saving taxpayers over $3 billion. At the end of the day, if you can implement new technology that would easily reduce expenses, what excuse do you have for saying no? As we as a society rely more and more on technology — particularly computing resources that are delivered through the cloud — the need to house all of the infrastructure that powers the country’s collective IT becomes that much more pronounced. A large data center can consume as much electricity as an entire town. That being the case, it’s imperative that data centers operate as efficiently as possible. That way, while their electric costs might be high, the energy is at least being used efficiently. A data center running at full capacity, for example, is much more efficient than two data centers running at half capacity. So how can we store more computing resources in smaller spaces? While the question might sound somewhat ironic, there are several factors enabling data center consolidation. With that in mind, let’s take a look at three of them: First and foremost, Moore’s law tells us that computing power doubles every two years. The concept traces its roots back to the 1970s, so you can imagine how much change we’re talking about every two years these days. Simply put, you’ll need fewer computing resources to complete tasks when those resources are considerably more powerful than they were just a short while ago. Back to that stat about 27 percent of computing power being used in federal data centers. That number seems low, doesn’t it? That’s where server virtualization comes into the picture — something that’s integral to the federal government’s ability to reduce the number of data centers it oversees. Rather than having to dedicate a separate server to each operating system you use, for example, virtualization lets you layer multiple virtual servers on top of one another. This enables you to ensure that you’re using 100 percent of the computing capacity of a specific machine. In other words, virtualization would easily allow the federal government to utilize 100 percent of the computing capacity of each server. That 27 percent figure means that simply by virtualizing its servers, the government could roughly consolidate every four servers to one physical machine.
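As a rough sanity check on that claim, the back-of-the-envelope calculation below uses the 27 percent utilization figure cited above; the 80 percent utilization ceiling is an added assumption to show how leaving headroom changes the ratio.

```python
# Sketch: estimate how many underused servers fit on one virtualized host.
avg_utilization = 0.27      # figure cited for federal data centers

ideal_ratio = 1.0 / avg_utilization          # if a host could run at 100%
practical_ratio = 0.80 / avg_utilization     # with 20% headroom (assumption)

print(f"ideal consolidation:     ~{ideal_ratio:.1f} servers per host")      # ~3.7
print(f"practical consolidation: ~{practical_ratio:.1f} servers per host")  # ~3.0
```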
Additionally, in the past, in order to provision a new machine, you’d have to order the equipment, wait for it to arrive, load applications, configure it and the rest of the routine. However, thanks to virtualization, you’re able to quickly provision virtual instances. This accelerates innovation and ensures optimal data center operation, on top of consolidating physical space and reducing hardware, storage and heating and cooling expenses. Because of the wealth of benefits, it comes as no surprise that the virtualization market continues to thrive. Thanks to advances in construction, the government is also able to minimize the impact it has on the environment while having to use less electricity to keep all of the physical machines inside the data center cool. By using green construction tactics, data centers are assuredly going to be more efficient as energy is distributed more effectively. While the decision to build a green data center is certainly one you wouldn’t make lightly — it can be pricey up front — businesses are certain to save considerably over the long term. When you think about it, because data centers consume so much electricity, it’s almost a no-brainer to build with sustainability in mind. Taxpayers will certainly appreciate paying lower energy bills over the useful life of the data center. Okay, maybe that subheading is an exaggeration. While it appears the government is moving in the right direction by consolidating its data centers, what the heck was the guy in charge thinking when quadrupling data centers from 1998 to 2010? That decision certainly doesn’t exude the confidence of a decision maker with any semblance of foresight. But the government could afford to take it a step further, taking a page from Facebook’s playbook. The social networking juggernaut has an Open Compute Project, which seeks to construct the best data center possible in an open source spirit. The end result? Over $1 billion in savings and a 24 percent reduction in operating costs. One can reasonably expect that Facebook’s project will result in data center designs that are increasingly more efficient. The question is, if the goal is data center efficiency, will the government participate?
<urn:uuid:35eda2e5-6387-4ed6-8a97-fbc778c7ef25>
CC-MAIN-2022-40
https://www.colocationamerica.com/blog/data-center-drive-for-efficiency
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00668.warc.gz
en
0.948188
1,156
3.203125
3
Studies have shown that almost 90% of data breaches are caused by human error, and you might be surprised to know that cybercrime is projected to cost the world $10.5 trillion annually by 2025. People are the weakest link in any organization’s digital security system. Thus, the proverb ‘prevention is better than cure’ fits the cybersecurity industry exactly. If you can avoid human mistakes, you are protected against 90% of cyber security threats; the remaining 10% of cyber attacks are handled by the cyber security firewall. A basic understanding of cybersecurity among all employees improves the overall security of the organization. One cannot act against cyber attacks unless one recognizes them. Phishing is the process of breaching the organization’s cyber security defenses by pretending to be a trusted entity. Phishing scams use emails to gain access to systems and breach the cyber security firewall. Cyber security training includes educating employees about suspicious links, attachments, and untrustworthy sources. This results in improved security for the organization overall. Cyber attacks can cause huge financial losses and data theft. There have been many incidents in the past where cyber attacks resulted in physical damage to company infrastructure. The right cyber security training ensures the safety of the company network and protects its assets. Fewer cyber security risks mean fewer financial losses. Thus, allocating funds for cyber security awareness training for employees is a form of return on investment. According to the Department for Digital, Culture, Media & Sport’s recent Cyber Security Skills report, only 1 in 9 businesses (11%) will provide cyber security training or a security awareness program to non-cyber employees in 2020. You can provide cyber security training to your employees to create a secure workplace. A certified cyber-secure workplace helps you gain more clients, as they have the assurance that your company is protected against cyber threats. The safety of clients’ information is ensured by adopting the best cybersecurity practices across the organization and performing information security audits.
Ensure compliance and advanced protection
Fifty-four percent of companies say their IT departments are not sophisticated enough to handle advanced cyberattacks. An advanced level of cyber security training empowers your IT organization to ensure compliance and provide protection against advanced cyberattacks. You can partner with a professional cyber security services company to educate employees and prepare them to handle advanced security attacks in the future. It also gives you the confidence to adopt advanced technologies, knowing your organization is cyber-secured.
Support Others' Security
We are living in a digitally interconnected world. A breach at one organization can give hackers access to the networks of other organizations. By investing in cyber security training, you support others’ cyber security as well. Cyber security training takes you beyond the regulatory requirements and secures your network against cyber security threats.
How to train your employees on cyber security?
You can partner with a professional cyber security services company to train your employees on cyber security.
InfoSec4TC is a leading cyber security company that helps organizations comply with information/cyber security standards, compliance frameworks, regulations, and legal requirements, and educates employees to protect the organization from cyber security threats. InfoSec4TC helps your business in the following areas:
● Provide the needed tools to employees to learn and lead
● Develop and empower employees
● Train employees to develop their in-demand skills
● Use cyber security learning as a tool to achieve critical business outcomes
● Invest in employees’ growth and development
● Choose a learning solution that grows with the business
Get in touch with InfoSec4TC’s cybersecurity experts to ensure your organization’s cybersecurity.
<urn:uuid:e0fcb7ba-2966-4f6c-ab8f-8d84375320f6>
CC-MAIN-2022-40
https://www.infosec4tc.com/2022/09/01/what-makes-cyber-security-training-crucial-for-employees-heres-all-you-need-to-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00668.warc.gz
en
0.94407
754
2.921875
3