Kubernetes pods are the smallest deployable units in the Kubernetes platform. Each pod represents a single instance of a running process and runs on a node, a worker machine within the cluster that may be either virtual or physical.
Occasionally, Kubernetes pod disruptions may occur within a system, from either voluntary or involuntary causes. Pod disruptions are a particular concern for highly available applications and for cluster administrators who perform automated cluster actions.
Essentially, pods remain in Kubernetes unless a user or controller removes them or a system process fails. System administrators can apply Kubernetes pod disruption budgets (PDBs) to keep systems running smoothly by setting an allowance, or buffer, for how many pods can be disrupted simultaneously.
- What is Kubernetes Pod Disruption?
- What is Pod Disruption Budget (PDB)?
- What’s the Difference Between a Voluntary and Involuntary Disruption?
- How to Specify a Disruption Budget
- Assessing Pod Disruption Status
- How to Avoid Outages Using Pod Disruption Budget (PDB)
- Other Useful Details in Kubernetes PDB
What is Kubernetes Pod Disruption?
Every pod in the Kubernetes system follows a defined life cycle across phases such as Pending, Running, Succeeded, or Failed. Within the Kubernetes API, each pod has a specification and a current status determined by a set of conditions. A pod is scheduled to a node only once, and it runs on that node until it stops or is terminated.
In some scenarios, Kubernetes nodes may run low on RAM or disk space, which forces the system to evict pods (interrupting their life cycles) to keep the node running. Cluster administrators or controllers may also deliberately disrupt pods (voluntary disruption), or disruption may result from a software or hardware error (involuntary disruption).
What is Pod Disruption Budget (PDB)?
A PDB is Kubernetes' answer to pod disruption and works across various controllers such as ReplicaSet, StatefulSet, ReplicationController, and Deployment. PDBs help prevent downtime and outages by limiting how many pods can be shut down at any given time.
In practical terms, a PDB maintains the minimum number of pods required to support an SLA (service-level agreement) without incurring losses. A PDB can also be described as a Kubernetes object that defines the minimum number of available replicas required to keep the cluster functioning stably during a voluntary eviction.
PDBs are used by the Cluster Autoscaler to determine how to drain a node during scale-down operations, and they control the pace of pod eviction during node upgrades. For example, for a service with four pods and a minAvailable setting of three, the drain process will evict one pod and wait for the ReplicaSet controller to replace it with a new one before evicting another pod.
To set a pod disruption budget for a service running NGINX, use the following command:
kubectl create poddisruptionbudget my-pdb --selector=app=nginx --min-available=80%
In the example above, the PDB sets the requirement that 80% of nginx pods must stay healthy at all times. When a pod eviction is requested, the cluster allows the graceful eviction to proceed only if it satisfies the PDB requirement.
Before starting with a PDB, users should review a few considerations.
Firstly, users should establish the type of application the PDB will protect. Next, they should examine how that application responds to pod disruptions. Users will then need to create YAML files containing the PDB definitions and create the PDB objects from those files.
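As an illustration, a minimal YAML definition equivalent to the kubectl command shown earlier might look like the following (a sketch assuming pods labeled app=nginx; the object name my-pdb is arbitrary):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: "80%"   # at least 80% of the matched pods must stay available
  selector:
    matchLabels:
      app: nginx

The object is then created from the file with kubectl apply -f my-pdb.yaml.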
However, users must note that PDBs only apply to voluntary disruptions triggered by deliberate admin or user actions. PDBs therefore offer no protection against involuntary disruptions of applications or pods.
If users attempt to voluntarily disrupt more pods than the budget allows, the eviction API responds with a 429 (Too Many Requests) error and the eviction is blocked because it would violate the PDB.
What’s the Difference Between a Voluntary and Involuntary Disruption?
There are mainly two types of Kubernetes pod disruptions: voluntary disruptions, caused by the deliberate actions of controllers and users, and unavoidable involuntary disruptions, resulting from hardware or software faults.
Some common examples of involuntary disruptions include the hardware failure of physical machines, nodes disappearing due to node network partitions, and kernel panics. Examples of voluntary pod disruptions include cluster administrator actions such as draining nodes to scale clusters or removing a pod from a node in line with system updates and maintenance.
It is important to remember that PDBs only apply to voluntary pod disruptions/evictions, where users and administrators temporarily evict pods for specific cluster actions. Users may apply other solutions for involuntary pod disruptions, such as replicating applications and spreading them across zones.
Pod disruptions may also occur in the form of node-pressure eviction, where the kubelet proactively terminates pods to reclaim resources on a node and keep the system from being starved. In such cases, the kubelet ignores your PDB. Alternatively, an API-initiated eviction respects a user's preconfigured PDB and the pod's terminationGracePeriodSeconds (i.e., the time allowed for the graceful deletion of a pod).
The graceful shutdown of pods, which has a default time frame of 30 seconds, is essential for Kubernetes cluster management, preventing potential workload disruptions and facilitating proper clean-up procedures. From a business/organizational perspective, a graceful termination of pods enables faster system recovery with minimal impact on the end-user experience.
Therefore, PDB is not a foolproof solution for all instances of unavailability but rather an object specifically for API-initiated evictions.
How to Specify a Disruption Budget
PDBs comprise three fields: .spec.selector, .spec.minAvailable, and .spec.maxUnavailable. Essentially, .spec.selector is a label selector that identifies the set of pods the PDB applies to.
With a PDB in place, users and admins can control pod disruption through the .spec.minAvailable and .spec.maxUnavailable fields. .spec.minAvailable determines the number of pods that must remain available at all times, while .spec.maxUnavailable states the maximum number of pods that may be unavailable at once.
Cluster administrators/controllers may specify only one of .spec.minAvailable and .spec.maxUnavailable for each PDB. Setting .spec.maxUnavailable to 0 (or .spec.minAvailable to 100%) forbids voluntary pod evictions entirely.
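For illustration, a hypothetical PDB spec using the maxUnavailable form might read as follows (a fragment only; the selector again assumes pods labeled app=nginx):

spec:
  maxUnavailable: 1   # at most one matched pod may be unavailable at a time
  selector:
    matchLabels:
      app: nginx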
Additionally, there are some factors to consider before specifying a Kubernetes PDB. Users and administrators should be running Kubernetes v1.21 or later; if not, it is necessary to upgrade the cluster to meet the required compatibility.
Additionally, users who apply PDBs should be the owners of applications running on Kubernetes clusters that require high availability, such as quorum-based applications. It is also essential to confirm that the service provider or cluster owner agrees to the use of disruption budgets before beginning, if permission is required.
Understand Application Reactions
Various application types display different responses to the pod disruption process. Therefore, users should always consider PDB implementation based on the type of Kubernetes application they handle. By assessing application reactions, users can optimize PDB functions and avoid extra processes in some scenarios.
For example, in restartable jobs that need to run to completion, the respective Job controller will create replacement pods, so no PDB is needed. Similarly, for single-instance stateful applications, users may either tolerate downtime without applying a PDB, or set a PDB with maxUnavailable=0, plan the downtime/update around it, delete the PDB for the duration, and recreate it afterward if necessary.
Users may express the required value of their PDBs with integers or in percentage form. Specifically, eight for minAvailable states that there should be a minimum of eight active pods at all times, while 50% minAvailable means that at least half of the total pods should always remain active.
Kubernetes rounds up pod values. For example, in the cluster scenario with a total of nine pods and 50% minAvailable, the PDB will ensure that at least five pods stay online at all times.
Assessing Pod Disruption Status
Kubernetes users should regularly check the PDB status for a better understanding of system performance and to keep systems online. Some important factors include the current number of healthy pods, the minimum number of desired healthy pods (i.e., the desired healthy count derived from the PDB's .spec settings), and the acceptable reasons for disruption (e.g., SufficientPods, meaning the cluster has enough healthy pods to proceed with the disruption).
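A quick way to inspect this information is with kubectl (using the my-pdb name from the earlier example):

kubectl get poddisruptionbudgets
kubectl get pdb my-pdb -o yaml

The first command lists each budget with its minimum available (or maximum unavailable) setting and the number of disruptions currently allowed; the second shows the full status block, including fields such as currentHealthy, desiredHealthy, disruptionsAllowed, and expectedPods.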
How to Avoid Outages Using Pod Disruption Budget (PDB)
The first step in creating a PDB involves creating a PodDisruptionBudget resource that matches the targeted pods. These resources help the Kubernetes system time pod drain requests so that evictions remain nondisruptive.
With a PDB in place at the start of a draining process, users can determine selectors and the state of all associated pods. By doing so, users can effectively drain nodes (i.e., during system updates) while maintaining the minimum number of active pods to avoid a negative impact. As such, PDBs can reduce or eliminate system outages to maintain cluster performance.
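For example, a typical drain invocation that honors PDBs looks like this on recent kubectl versions (the node name is a placeholder):

kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

The drain cordons the node and evicts its pods through the eviction API; whenever an eviction would violate a PDB, the drain pauses and retries rather than taking down more pods than the budget allows.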
Other Useful Details in Kubernetes PDB
Kubernetes 1.21 brings a score of updates and changes to the platform, including the PDB APIs. Notably, an empty selector once matched zero pods by default, but with the recent patch, it matches every pod within a given namespace.
At times, users may experience various PDB configuration complications during a node upgrade or cluster action. Therefore, it is crucial to identify some common scenarios to facilitate a quick response and minimal downtime.
Here are some potential PDB issues:
Caution When Using Horizontal Pod Autoscalers
Horizontal pod autoscalers enable users to scale Kubernetes resources according to system load, based on a chosen metric. However, a poorly configured PDB may lead to a mismatch of values, because the budget is calculated against the pods that currently exist without considering the shifting replica counts driven by the autoscaler.
For best practices using a pod scaler with PDB, users should define the PDB for applications or system pods that may block a scale-down. Alternatively, users may use pause pods that provide systems with the boost required to handle additional requests during a spike in server activity.
Additionally, some users may not realize that their clusters run PDBs (since they may come packaged in Kubernetes software extensions such as Operator). Therefore, users must pay close attention to PDB configurations and the possible complications and issues that may stem from platform actions such as node upgrades.
PDB With a Single Replica
Applying a PDB to a deployment with a single replica can cause kubectl drain to hang. In such scenarios, users need to manage pod drains and updates manually. Hence, users should only apply PDBs to deployments with more than one replica, which is in any case necessary for a highly available Kubernetes system.
Indefinite Deadlocks With Multiple PDBs
Multiple PDBs may result in confusion (i.e., overlapping selectors) within the cluster, causing draining processes to hang indefinitely. Therefore, as a best practice, users should apply meaningful selector names linked to each set of pods, along with a matchLabel that fits the controller's selector.
Kubernetes remains one of the most widely used workload management platforms worldwide due to its highly intuitive functions, such as PDBs. PDBs give users greater control over their API-eviction processes, minimizing the risks of workload disruption and outages. However, users need to note that PDBs have their share of limitations and should only apply them according to specific Kubernetes scenarios.
PDBs are suitable for:
- Voluntary pod disruptions (i.e., cluster administrator actions such as running routine maintenance).
- Highly available deployments.
PDBs are unsuitable for:
- Involuntary pod disruptions (i.e., large-scale hardware or software errors).
- Node-pressure evictions.
- Deployments involving a single replica.
By creating a PDB for each application, users can maintain highly available applications despite frequent instances of voluntary pod disruptions (e.g., software extensions).
While the Kubernetes scheduler can help users allocate pods to nodes based on available resources, complexities may arise when there is a need to drain or remove excess nodes during rescheduling while some pods continue to run (leading to potential downtime). With PDB resources in place, users can keep Kubernetes applications available to accept incoming requests with minimal delay.
Antivirus software is likely the best-known type of security software, installed on approximately 76 percent of computers globally. It protects businesses and consumers alike from malware, ransomware, and similar threats. Thanks to this wide usage, there are hundreds of antivirus software options available, which can make choosing one difficult. This guide covers everything you need to know about antivirus software and how to choose the right one for your business.
- What is Antivirus Software?
- How Does Antivirus Software Work?
- Best Antivirus Software
- Business Antivirus Software is the Minimum
What is Antivirus Software?
Antivirus software, sometimes called anti-malware, is a type of security tool that both businesses and consumers use to protect their devices from malware. It regularly scans devices, looking for anything that seems out of place or that matches a known threat signature. Once it discovers malware, the system automatically removes it from the device. Advanced virus protection can also block malicious websites and provide firewall protection.
Free vs. Paid Antivirus Software
Most consumers use free antivirus software to protect their devices, but businesses need to pay for their antivirus protection. For one, free antivirus software typically isn’t updated as often as the paid versions, meaning that businesses, which are more at risk for attack, don’t get the level of protection they need. Additionally, many antivirus software vendors don’t license their free versions for business use.
How Does Antivirus Software Work?
Antivirus software works in the background of a device, scanning files and applications for known malware signatures and suspicious file structures. Once it identifies something that it recognizes as malware, the platform will quarantine it until it can delete it from the system.
Some antivirus software scans files as they enter your device, but others will scan programs already on your device. Both options work well, but if you’ve had a device for a while, you should look for software that scans files that you already have on your device since malware could already be present.
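As a drastically simplified illustration of the signature idea, a scanner can be thought of as comparing a file's fingerprint against a list of known-bad fingerprints. The sketch below is hypothetical (the file names are placeholders), and real products use far richer signatures, heuristics, and behavioral analysis:

# Flag a file if its SHA-256 hash appears in a list of known-malware hashes
# (one hash per line in known_bad_hashes.txt).
hash=$(sha256sum suspect_file.bin | awk '{print $1}')
if grep -qxF "$hash" known_bad_hashes.txt; then
  echo "Match found: quarantine suspect_file.bin"
fi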
Best Antivirus Software
Businesses that need antivirus protection should look at the following platforms, picked for their high user reviews, solid security ratings, and included features.
Avast Business Endpoint Protection
Avast Business Endpoint Protection combines next-generation antivirus (NGAV) software with patch management to identify and remove malware while also fortifying vulnerabilities. The Business Hub gives IT real-time visibility into all of the devices on the network, showing them potential threats and giving them access to comprehensive reports. Artificial intelligence (AI) and machine learning use behavioral clues to identify both known and unknown threats. Additionally, with basic remote control, IT can remotely access a user’s device to troubleshoot technical issues quickly.
There are three pricing tiers for small businesses that include between 11 and 100 users, but there are also solutions for smaller and larger businesses, as well as managed services providers (MSPs).
- Patch management
- 24/5 email, chat, and phone support
- Server protection
- Seven layers of malware protection
- Cloud threat lab analysis
- Central management console
- The system notifies IT any time it detects a threat
- Good threat detection capabilities
- Easy to manage
- Installation can be slow and complicated
- Can sometimes be resource-intensive
Bitdefender Gravityzone Business Security
Bitdefender Gravityzone Business Security offers resource-efficient virus protection software with machine learning. IT gets a single management console where they can track security events and handle any manual work associated with them. Ransomware mitigation technology provides real-time backups of files to keep organizations from having to pay the ransom to get access to their data. Plus, network-based security blocks brute force attack attempts and lateral movements within the network.
Gravityzone pricing depends on the number of devices organizations want to protect and the length of the contract they want to sign. Longer contracts will offer steeper discounts, and there is custom pricing available for businesses that want to protect more than 100 devices.
- Remote installation
- Local and cloud machine learning
- Email security
- SIEM integration
- Risk analytics
- Full-disk encryption
- Easy to use with a user-friendly interface
- Ranks highly in most third-party security tests
- Helps determine weaknesses in a company’s infrastructure
- The cloud dashboard can sometimes be slow to update
- Some users complained about limited third-party integrations
Trend Micro’s OfficeScan
Trend Micro’s OfficeScan offers endpoint protection with NGAV and machine learning to close security gaps in a business’s network. It protects physical endpoints, including Windows PCs and servers, Mac computers, point-of-sale (POS) systems, and ATMs, and there’s an add-on available for virtualized endpoint protection. The threat protection includes behavioral analysis and sandboxing to identify unknown threats as well as web reputation checks to block malicious websites.
Pricing is not available on the Trend Micro website, so interested organizations will have to contact them for more information.
- Machine learning and behavioral analysis
- Data loss prevention (DLP)
- Exploit prevention
- On-premises or cloud deployment
- Whitelist checking
- Easy to deploy and manage
- Behavioral analysis quickly terminates suspicious processes
- Doesn’t require a ton of CPU resources
- Computers can sometimes run slowly during the scan
- The interface is not very user-friendly
Panda Endpoint Protection Plus
Panda Endpoint Protection Plus, now a part of WatchGuard, offers virus protection for both known and unknown threats, including phishing and ransomware. It covers Windows systems, Linux, macOS, Android, and virtual environments and provides automatic analysis of these systems. With web filtering, the software protects users from malicious websites and bot attacks.
Pricing for Panda Endpoint Protection Plus depends on the number of users businesses want to protect and the length of the license. Licenses are available for one- and three-year periods, and there are add-on products like encryption and patch management that organizations can add as needed.
- Behavioral analysis
- Phishing protection
- Web traffic filtering
- Windows, Linux, macOS, and Android support
- Centralized device control
- File quarantine
- Good detection accuracy
- Easy to use and deploy
- User-friendly interface
- Can take time to configure properly for each organization
- Is sometimes resource-intensive and slows devices down
AVG Antivirus Business Edition
AVG Antivirus Business Edition offers protection against malware, ransomware, and malicious web pages. With protective AI and real-time detection, businesses get protection from both known and unknown attacks. The software also provides a firewall and identity protection. The file shredder securely deletes files, while the File Server Security tool keeps Windows files safe and private.
Pricing depends on the number of devices organizations want to protect and the length of the license. Licenses are available in one-year, two-year, and three-year terms, and longer contracts will have lower price points. Additionally, there are certain price breaks for protecting more devices.
- Free email and phone support
- Web protection
- Identity protection
- Remote access
- Automatic updates
- Ransomware protection
- Isn’t very resource-intensive
- Easy to install and use
- Priced competitively compared to similar tools, especially for smaller businesses
- May not be cost-effective for businesses with hundreds of devices
- Only supports Windows devices
Norton Small Business
Norton Small Business offers antivirus software for PC, Android, and Mac devices, including smartphones and tablets. Users can mix and match licenses, so not all employees have to have the same types of devices to get protection. 24/7 support makes it easy to lock lost or stolen devices remotely at any time and address issues before they lead to downtime. The system is cloud-based and simple to use.
Pricing is based on the number of devices, and the license is good for a full year. Higher discounts will be given to businesses that cover more devices, but it’s probably not cost-effective for enterprises.
- 24/7 support
- Desktop, tablet, laptop, and smartphone protection
- Centralized management console
- Email enrollment
- Cloud backups
- Website filtering
- Effective and easy to use
- Simple installation and deployment
- Scheduled device scans
- Can be difficult to manage multiple device subscriptions
- Sometimes causes devices to run slowly
Kaspersky Endpoint Security Cloud
Kaspersky Endpoint Security Cloud offers cloud-based antivirus and firewall tools with the option to upgrade for patch management and endpoint detection and response (EDR). It includes two mobile licenses per user, allowing coverage for a laptop, smartphone, and tablet or a work phone and a personal phone. With vulnerability scanning, users can identify any patches that they need to install to keep their devices secure. For remote access and control, businesses will have to upgrade their licenses.
There are two pricing tiers that businesses can choose from, and pricing depends on the number of users and the length of the contract. There is also custom pricing available for businesses looking to cover more than 150 users.
- File, web, and email protection
- Vulnerability scanner
- Patch management available
- Ransomware rollback
- Remote intrusion detection
- Solid malware detection
- User-based pricing makes it more cost-effective
- Good ransomware protection
- May automatically erase files that it thinks are contaminated
- May not offer real-time visibility
ESET Endpoint Security
ESET Endpoint Security offers multilayered defense for Windows, Mac, Linux, and Android devices. In addition to virus and ransomware protection, the platform also includes mobile device management (MDM) for remote access to employee smartphones and tablets. Browser protection keeps users safe from malicious websites, while network attack protection identifies network vulnerabilities before an attacker exploits them.
There are four pricing tiers that businesses can choose from, depending on the features they need. There are also add-on solutions available, including cloud security, EDR, and email protection.
- Centralized management console
- Fileless attack protection
- Behavioral analysis and machine learning
- Ransomware protection
- Advanced memory scanner
- Not very resource-intensive
- Effective at detecting and blocking viruses
- Helpful and responsive support team
- Glitches can sometimes take a while to fix if they don’t directly impact security
- There may be a learning curve
Business Antivirus Software is the Minimum
Antivirus software is a necessity for businesses, and it’s the bare minimum security tool you should have in place. You’ll also need firewall protection, email security, and identity and access management (IAM) software. Depending on the size of your organization, you may also need EDR, security information and event management (SIEM), and other security suites. In order to protect your business from the rising number of cybersecurity threats, start with antivirus software and add more protection as you’re able.
XFILTER – BEYOND FILTERING
- Traditional Filtering is Not Enough
- Updated Legislation and Standards means Tougher Controls
- Active Monitoring and Behaviour Tracking required
- Increase Security and Enable Compliance with XFilter solutions
FILTERING IS NOT ENOUGH
Traditional web filters perform point-in-time comparisons against a categorised list of websites. Whilst this prevents users from seeing inappropriate material, it ignores the bigger threats of internal hacking or theft of personal information.
The Prevent Duty and Online Safety
The Counter-Terrorism and Security Act, passed in 2015, contains a duty (known as the Prevent duty) which means that schools, childcare providers and further education establishments, along with prisons, local authorities and NHS trusts, are under a legal obligation to “have due regard to the need to prevent people from being drawn into terrorism”, with teachers and staff responsible for identifying signs that children might be vulnerable to radicalisation.
Going Beyond Blocking & Filtering
Schools and other education establishments have been predominantly focused on filtering website content and blocking website categories in an attempt to satisfy duty of care requirements around online safety and cyber bullying. However the Keeping Children Safe in Education guideline (KCSIE) which was updated in September 2016, actually warns of the risk of over-blocking leading to “unreasonable restrictions as to what children can be taught with regards to online teaching and safeguarding.”
With the enhanced auditing requirements needed to meet KCSIE and the Prevent Duty, schools now have to look much deeper into internet and social media traffic to identify children potentially at risk.
XFilter solutions by Infosec Partners allow you to move beyond simple blocking and filtering to provide increased security, enable compliance and enhance your Safeguarding abilities.
With industries revolutionizing every aspect of production, manufacturing, marketing and distribution at great speed, there is a lot of data involved in all of these processes. The traditional way of storing data, implementing a local server and maintaining all the data locally, eventually proved to be cost-consuming rather than cost-effective. Server implementation also comes with additional responsibilities, like maintaining the server, constantly updating it and providing security. Gradually, companies felt the need for an alternative storage solution. This paved the way for companies to adopt cloud storage solutions for their businesses. From the late 1990s onward, cloud storage was widely adopted by many organizations, educational institutions and various business enterprises. But cloud storage solutions are not confined to the business environment alone.
One of the major reasons for adopting cloud solutions is that they are highly flexible, portable and reliable. Cloud-based solutions offer various services such as SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service). All that is required to access data is proper internet connectivity and the right login credentials. There are many types of cloud storage solutions based on the requirements – personal, public and private cloud solutions.
Personal cloud storage solutions
Personal cloud storage, as the name suggests, is for storing personal data such as images, videos, documents, music and other files. Personal cloud storage solutions eliminate the need to buy devices with high storage capacities and give the user the advantage of high-capacity cloud-based storage without losing control over their data. Data stored in the cloud can also be accessed from various devices, such as smartphones, computers and tablets. The personal cloud also enables sharing files without using a public cloud service.
Public cloud storage solutions
Public cloud services, also known as utility storage or online storage, enables individuals as well as organizations to store, edit and manipulate data over the cloud. This type of storage is usually available in a remote cloud server and can be accessed over the internet on a subscription basis. Public cloud storage facilities are provided by the service provider that hosts and manages the infrastructure that is used by multiple users.
Private cloud storage solutions
Private cloud storage solutions are more secure than public and personal cloud-based storage solutions. This method of storage solutions is widely adopted by business enterprises as they deal with mission-critical data. Similar to public cloud storage, private cloud storage also works on a subscription basis. Since these solutions are offered individually to enterprises, it can be tailored to meet their particular needs.
With plenty of cloud storage providers in the market, it is crucial to pick the right provider that can provide sufficient storage, bandwidth and most importantly, security. Some of the well-known cloud storage service providers in the market are listed below:
1) Pcloud
Pcloud, a reasonably priced cloud storage service, enables storing files of any size and also enables Pcloud transfer to send files up to 5GB for free. Pcloud is cross-platform compatible and offers pricing plans on a monthly, yearly or lifetime basis.
Advantages: appropriate pricing, easy to use, user-friendly UX/UI.
Drawbacks: Has no collaboration tools.
2) OneDrive
OneDrive, a Microsoft platform, works efficiently alongside Microsoft office and Microsoft 365 suite. For Windows (Windows 10) users, OneDrive has been directly integrated into the file explorer, making the online backup process easier. OneDrive provides amazing features like multipage scanning which can be used to scan multiple pages and save them into a single document.
Advantages: Built-in into Windows OS, easy file synchronization and restoration.
Drawbacks: offers limited storage space for the free-version i.e., 5GB.
3) iCloud
iCloud, a product of Apple, is easily accessible across all Apple platforms. Offers free as well as subscription methods. If you are a Windows user with an iPhone, you can still sync the files to iCloud Drive using the official client or the iCloud website to access the iWork applications.
Advantages: Cross-platform compatibility, works efficiently within Apple platforms.
Disadvantages: Similar to OneDrive, iCloud also offers limited storage for free versions i.e., 5GB.
4) Mega
Mega enables users to store data through encrypted connections and the user can have control over the encryption key. This prevents the data from being scanned by others or the provider itself. Mega works well with mobile devices and for desktop users, it provides an open-source sync client that is open to vulnerability checks, to improve security.
Advantages: 50GB storage space for free-versions, user-friendly UI, provides an open-source sync client.
5) Tresorit
Tresorit, a cloud storage provider with the main focus on increasing security and strong data encryption for both business and personal use. Users can choose the people who can access their data. Moreover, Tresorit offers two-factor authentication to add an additional layer of security. The various packages that Tresorit offers, makes it easier for you to select the right plan that is best suited for your needs.
Advantages: Tresorit offers a 14-day free trial, strong end-to-end encryption.
6) Google Drive
Google Drive is well suited to both personal and professional use. It is one of the most commonly used go-to cloud storage applications. Google's office suite allows users to access and store documents, spreadsheets and presentations, and high-quality mobile images can be stored via Google Photos.
Advantages: 15GB storage for free versions, cross-platform compatibility, full access to G Suite.
7) MediaFire
MediaFire, a free cloud storage platform, initially offers 10GB of free storage which can then be increased up to 50GB by referring people on social media. MediaFire supports files up to 4GB large with unlimited downloads. The application has few noticeable features like automatic image and video sync and streaming services.
Advantages: 50GB storage space, cross-platform compatibility, impeccable UX/UI and sharing options.
Drawbacks: Ad-filled free account.
8) Dropbox Business
One of the oldest, with almost 11 years of experience and reliable platforms for business when it comes to cloud storage. This application is highly user-friendly and is compatible with almost every platform. Dropbox enables drag and drop functionality for desktop applications. Keeping a check on your team’s progress becomes easier with Dropbox Business as the administrator can gain insights into the status of each team member. This platform also allows users to create a personal account to maintain all the files in one place.
Advantages: User-friendly and offers 30-day free trial options which can be canceled anytime.
Drawbacks: Dropbox Business contains no online editing tools.
As mentioned earlier, selecting the right cloud storage service is highly important for both customers and businesses in this digital transformation era. The increased usage of cloud services is pushing even traditional businesses to move toward a digitized, cloud-based business model, given the number of advantages they offer to vendors as well as customers. As this trend in cloud technology moves forward, there will be less focus on local storage and infrastructure.
When we think about sustainability, we usually think about reducing carbon footprints and being more energy-efficient to combat global warming. Many wide-reaching global initiatives are at the forefront of the news and stand as a reminder of the importance of keeping a balance between earth’s resources and human consumption.
Why is sustainability important?
Sustainability in its simplest form means meeting current needs without compromising future needs – ensuring that you are not making decisions today that endanger future generations. This concept can take many forms, both global and grand in scale as well as local and impactful.
Sustainability and Brivo
Brivo is committed to sustainability and protecting the environment. Our suite of products helps companies improve their sustainability goals. For instance, access control data can be integrated into energy management systems to better manage energy consumption. Environmental thermostats and controls in multifamily properties can help save energy by optimizing temperatures at the individual unit level.
We create products that help our customers operate their systems more efficiently and reduce power consumption. We’ve been RoHS compliant for years to help reduce hazardous waste. We use recyclable materials in our packaging as much as possible.
We also offer a company environment where we encourage employees to volunteer time by providing time off to help in their community. We believe in corporate responsibility, sustainability, and reducing energy consumption. Our offices are located in urban environments to support walking, biking, or public transit to the office, they are equipped with energy-efficient lighting and automatic controls, and we recycle.
But what more can we do?
This Earth Day, we are committing to focus more on sustainability and to continue to find ways to improve and to raise awareness, encourage participation, and train employees. As a small step forward, in honor of Earth Day, we have adopted two 30-inch garden rows at the Charles Koiner Conservancy for Urban Farming, a local land trust close to our headquarters in Maryland. The Koiner Urban Farm supports land stewardship, farm management, and urban farming, and the sponsorship helps cover the costs associated with planting, caring for, and harvesting the crops in these rows. We are also exploring larger commitments to additional earth-friendly causes.
We plan to initiate an employee resource group to focus on sustainability and review our current policies to make recommendations on what else we can do to improve. Similar to our Diversity, Equity, and Inclusion commitments, we look to foster stewardship within our company to make a difference. Today is perhaps the beginning of our path to change, but the journey of 1000 miles starts with the first steps.
If you haven't heard about wireless mesh network architectures, they are groups of radio-based devices that automatically form an interconnected web, as opposed to the ring and star topologies of traditional wired networks. A mesh network can move data from a server, through the web, to any point within, or on the periphery of, the web that's within transmission range of a node.
Meshes are generally designed to be self-discovering (each node finds its nearby colleagues) and the networks self-healing (when one node goes out, the network simply routes around it).
Most mesh networking products were built around the Wi-Fi standard for wireless LANs, where access points (APs) were programmed to intercommunicate with each other, as well as with the Wi-Fi client devices of end users.
But back in 2003, when the mesh concept first escaped the gravitational field of the military, where the technology was developed (you can easily imagine why such an architecture would be useful in battlefield conditions), one company, California-based Firetide, took a slightly different approach.
According to Manish Chandra, Firetide’s director of product management, instead of making access points do double duty as backhaul and client communication devices the company built a separate infrastructure device to handle backhaul duties.
Firetide’s infrastructure nodes were designed to function as a Layer 2 Ethernet-compliant routing and switching device that would work with any IP network. Each was equipped with a radio (or two) and wired Ethernet ports. You just plug one of them into your wired network, and it just becomes a wireless extension of the network. The nodes were compatible with APs from a number of Wi-Fi equipment vendors.
As Chandra explained to VoIPplanet.com, “If you want to extend that voice over IP network to an outdoor environment, and you cannot lay down cable because it is cost-prohibitive, since we are IP Layer 2 compliant, that network just gets plugged into our infrastructure and we extend it where you cannot lay down cable.”
That led, initially, to adoption in a number of industrial settings, largely for video surveillance, and in healthcare, where a primary application was voice. And the edge Firetide had in these early deployments was bandwidth, which is still a hurdle that Wi-Fi is striving to clear.
Firetide infrastructure nodes are available with a variety of radios, and when dual radio HotPoints (or HotPorts, as they are now called) use their radios in tandem (“bonded,” to use the accepted tech term), they support some 70Mbps of bandwidth, enough to carry multiples of the number of voice calls supported by the then-universal 802.11b Wi-Fi networks.
Last spring, Firetide put into place the last piece of its strategic plan: its own line of Wi-Fi access points (taking the HotPoint name originally attached to the infrastructure nodes).
The combination of infra nodes and APs “gives us end-to-end network coverage,” Chandra told VoIPplanet. It also armed Firetide to do battle in a new arena, municipal wireless networks, which has been trending hot since mid-2005, and which was made feasible only by the appearance of wireless mesh networks.
The company has had considerable success in the muni arena, landing major deployment deals since the fall of ’06. All of which brings us, finally, to Vo-Fi.
Two recent deployments underscore how Firetide is putting its technology to work bringing VoIP to communities small and large.
Late last year, Nyherji, one of Iceland’s leading service providers, deployed a Firetide wireless mesh network and Avaya Communications Manager IP telephony solution to connect the remote communities surrounding the town of Isafjordur to the Internet and to deliver VoIP service.
“The service provider wanted to connect these communities into a single network. But if they wanted to lay down cable, it becomes cost prohibitive,” Chandra said. ” So we went ahead and we integrated our product with Avaya voice over IP products—the call managers, the media servers, the media gateways—and both their wired and wireless voice over IP phones.”
The resulting integrated network has completed its first Arctic winter and is running smoothly. The joint venture with Avaya has also resulted in Firetide’s recenty announced status as “Avaya compliant,” a certification that the two companies’ products interoperate reliably.
The other showcase deployment is building out mesh-based Wi-Fi network connectivity for roughly half of the Infocomm Development Authority of Singapore's Wireless@SG initiative, which will provide free Wi-Fi Internet connectivity (initially) and VoIP (in a second phase) to the entire island nation of Singapore.
According to Chandra, aside from the issue of scale, the deal—which was struck in October and will be rolled out over the course of roughly a year—is typical, of municipal deployments nowadays, in that VoIP is a required component.
Not only do municipalities anticipate that their wireless LANs will be used for voice services, but they are looking for seamless connectivity between an indoor (often private) and an outdoor (public) network environment. That is, you might initiate a Vo-Fi call on your company’s WLAN, leave the building, and have that call seamlessly switched over to the public, outdoor Wi-Fi network.
Chandra cites Firetide’s “end-to-end network coverage,” and its indoor/outdoor infrastructure (with its generous bandwidth capability) as making it uniquely adapted to such Vo-Fi initiatives.
Home Automation (Domotics)
The 1900s saw the advent of washing machines, dishwashers and clothes dryers, which were primarily labour-saving machines. This was home (or rather, work) automation for the Xennials and the Millennials.
Networks have brought in a lot of options as well as the expectation of Home Automation.
“Home automation is a step toward what is referred to as the “Internet of Things,” in which everything has an assigned IP address, and can be monitored and accessed remotely.”
Home Automation can be categorized as below
1. Electric and Heating/Cooling Systems e.g. ZigBee automation
2. AI Controlling home devices as well as user preferences e.g. Google Home
3. Robots who would interact with humans e.g Milagrow
Home automation is part of "The Internet of Things," also known as IoT. It refers to the way devices and appliances can be networked together to provide seamless control over all aspects of your home and more. Home automation has been around for many decades in terms of lighting and simple appliance control. Recently, technology caught up with the idea of the interconnected world at the touch of your fingertips or a simple voice command to Alexa, Google Assistant, Siri or Cortana. The dream of making your home smart is now a reality. "Smart home" and "home automation" are largely interchangeable terms; in fact, if you research what a smart home is, most of the same results will appear.
Home automation has today progressed from large commercial buildings or expensive homes to every home, with DIY options for every kind of automation.
The Automation Frenzy!
Goodnight Google, Lights Out Alexa….
"Alexa, wake me up at 5:00 am." Alexa and Google Home open up a world of endless possibilities for home automation technologies and techniques. Set up your lights to slowly turn on as the sun rises, or when you are due to wake up. If you're feeling fancy, add in a weather forecast.
Anything that can be brought on the network can be automated and remotely controlled. Even your wearable device which monitors your sleep provides your sleep analysis with step counter is Home Automation.
Home automation commonly connects a lot of simple binary devices, which are essentially "on" and "off" devices such as lights, power outlets and electronic locks, as well as simple monitors such as pet and baby movement monitors.
The intelligent automation is an integration of the binary automation with AI, which enables users to have hardware and software remember choices and provide a personalized output for the users.
With the Alexas and the Siris, there is real excitement around home automation, with approximately 1.5 million home automation systems installed in the US in 2016 (the initial phase) versus an estimated 27.7 million by 2023 in the energy management segment alone, one of the growing smart home sectors.
Criticism and Controversies
The closed ecosystem!
In the present scenario, home automation lacks technical standards: the variety of automation devices, in terms of both hardware and the software running on them, makes it difficult to integrate them easily or to develop common applications for the available technology ecosystem. Proprietary software like Alexa, Siri and Google Home also makes it inconvenient for users to customize and interconnect devices, forcing users to stay within a single ecosystem. Vendor support for older devices, and patches for older software versions, is also becoming an issue for users.
The technology is also expensive and can quickly become legacy due to the product/technology maturity curve.
Privacy risk of your conversation being heard by Alexa and Google Home is a big concern amongst users.
1. Alexa recording private conversations
2. Alexa’s advice to kill foster parents
3. Amazon workers listening to your conversations
4. Alexa calls cops on man allegedly beating girlfriend
5. Amazon Echo device recorded a private conversation between her and her husband and sent the recording to an employee of the husband
According to a recent study, after buying one smart product, 70% of those consumers buy another smart product.
This, combined with decreasing prices will help bring even more innovation to the home automation space. Companies will have more capabilities to experiment and improve their product offerings, especially in regards to integration with other smart systems and networks.
Automotive Acoustics: How to Address Complexity
The push to solve modern cars’ complexity challenges is increasingly focused on software, and this includes the intricacies built into acoustics design and performance. How can automakers address these challenges?
This is a critical question since automotive acoustics systems are subject to ever-increasing sophistication and performance requirements. In addition, audio is constrained by the laws of physics. It requires precise and consistent results with low latencies, and automakers must meet these demands with hardware that can operate reliably and within cost constraints.
Vehicle Acoustics Challenges
When you think about the sound coming from the exterior of a car, the first thing that might come to mind is the pounding bass from a vehicle next to you at a stoplight. But there’s more to exterior sound than noisy neighboring cars in traffic.
There is a multitude of components complicating the design of automotive audio systems, creating a new class of obstacles. However, new answers to these challenges are emerging, as well.
This article explores how automakers are finding ways to satisfy myriad audio feature demands — often with competing requirements — while leveraging their existing hardware and software configurations.
Car Exterior Acoustics Factors
There are at least five different types of sounds making their way into the cabin of a vehicle, all competing for the driver’s attention. (Ironically, this list does not include music or other entertainment programming intentionally chosen by the car occupants — the one source that most people associate with automotive acoustics). Automakers are increasingly seeking to manage these various audio sources by incorporating the following types of solutions into their automotive designs.
1. Engine Noise Reduction
While hearing the engine might be appropriate for a muscle car, luxury brands often want the car interior to be as quiet as possible. One important means of accomplishing this goal is with active noise cancellation, which works much like noise-canceling headphones but on a larger and more complex scale. This uses the engine’s known sound profile and pumps sound with an opposite phase into the cabin to remove or reduce engine noise for the comfort of the driver and passengers.
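In idealized terms, the principle is destructive interference: if the engine noise reaching a listener is a signal n(t), the cabin speakers are driven to emit an anti-noise signal that approximates its inverse, so that at the listener's position

n(t) + \hat{n}(t) \approx 0, \quad \text{where } \hat{n}(t) \approx -n(t)

This equation is only a simplified statement of the goal; in practice, perfect inversion is impossible, and production systems approximate it with adaptive filtering keyed to the engine's known harmonics.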
2. Engine Sound Augmentation
While electric vehicles (EVs) and hybrids may be producing audio to mimic an engine sound to alert pedestrians, the same can be done for subtly adding the external sound of an engine inside the car. In the pursuit of increasingly efficient gasoline engines, cars are being built using fewer cylinders (or none, in the case of EVs). Some designs feature cylinder deactivation, where some temporarily stop firing, eco-smart on/off engines, and turbochargers. These can periodically change the sound of an engine (or eliminate it), which can be disconcerting to the driver. Engine sound augmentation adds some of the missing audio back in to make a gas-sipping engine sound more like its less-efficient counterparts, giving the driver familiar audio cues.
3. Road Noise Reduction
Similar to engine noise reduction, some automakers may wish to further silence the interiors of their cars by removing road noise created by tires on pavement or environmental noise, like construction zones or even the pounding bass from our booming next-lane occupants. These typically require a microphone to sample the sound and then, acting similarly to engine noise reduction, they create canceling audio waveforms for the internal audio system to playback.
4. Pedestrian Awareness
Purely electric cars — and hybrids when running on electric power — make very little noise. This quiet is nice for the environment and interior occupants but can reduce pedestrian and bicyclist safety. Distracted pedestrians may not be aware of a vehicle operating so quietly until it’s dangerously close, and then they may be startled and move unpredictably. This is why EVs and hybrids often emit an engine-like sound, alerting nearby pedestrians, bicyclists, and visually disadvantaged people to their presence.
5. Reverse Alerts
What sound is ubiquitous on construction sites? The infamous “beep-beep” backup sound. Something similar is coming to many new cars since it alerts those around the vehicle that the driver may not see them. These alerts don’t always have to be loud and intrusive. Gentler-sounding or even “white-noise” alerts are becoming popular options for passenger vehicles.
Car Interior Acoustics Factors
The modern car interior has a few sound sources of its own. These may be more familiar than the audio sounds of the exterior but are equally complex to manage and achieve precise specifications.
1. Head Unit
Also known as the infotainment system or center stack, the head unit is often the heart of what most people think of for car audio. It contains the stereo — the first “in-vehicle” sound source — that for some customers, is still the most important. It also includes several forms of media playback (streaming, USB, Bluetooth®, and on-device media), Android Auto™, and Apple CarPlay®. Still other in-car audio comes from navigation system prompts, infotainment system UX sounds, alerts, turn signals, and more.
2. Hands-Free Calling
A good hands-free system for the car requires excellent echo-cancellation and remote-end noise reduction; a premium one also uses a microphone array and adaptive beamforming to focus in on the person talking, and to eliminate as much of the vehicle’s ambient noise from the call as possible.
3. Speech Control
Speech control has similar requirements to hands-free, although with wake-up words it must be running continuously. It also must be routed to different places — an on-board speech engine, off-board recognizer, a phone-based voice assistant, or even a combination of these.
4. Alerts and Warnings
Standard alerts — like door or trunk ajar, engine still running, or passenger seat belt disconnected — also need to be handled by the audio system, often requiring input from several different modules. In addition to these alerts, a wide variety of sounds related to ADAS (Advanced Driver Assistance Systems) are being added, including forward collision warning, lane departure chimes, or blind spot detection alerts. Increasingly, sounds for events like turn signals are also created through the audio system, instead of from a dedicated “clicker” relay.
5. In-Car Communication
This feature is most noticeable when someone in the front of the car is trying to talk to someone in the back seat, especially in louder or larger vehicles. Audio systems with this feature isolate and emphasize the speech between front and rear seat passengers. This eliminates the need for drivers to raise their voice or, worse, turn their head around to talk to passengers, taking their eyes off the road. Essentially, the phrase "What did you say?" gets eliminated.
Combined Sound Sources in Automobiles
The fundamental challenge of merging these sound sources arises from too many systems in play, all of which have demands on the audio system. In current car designs, most of these audio sources are individualized, and completely independent modules. That means the audio waveforms being played back through the speakers, and being sampled by microphones, are not controlled by a single entity — in effect, these multiple audio components are battling it out in the acoustical space of the car. Domain controller and zonal architectures consolidate ECUs, combining some of the systems to run on the same silicon, yet they are still unaware of each other.
An uncoordinated audio system will deliver a poor audio experience. Unsynchronized audio inputs and outputs with competing needs can introduce a host of issues, like saturated mic levels, far-end echoes, audio glitches, distorted speech, induced noise, howling feedback loops, stuttering audio, inaudible alerts, and poorly performing speech recognition. All of these things can also potentially hamper driver perception and performance, and ultimately, affect safety.
Addressing Vehicle Acoustics Complexity
Methods of solving the vehicle acoustics complexity problem can be simply stated but are difficult to achieve. An increasing number of automakers find that the solution lies in centralized control that integrates and orchestrates the vehicle's audio systems as a whole.
Why is this so challenging? Part of the problem is that there are typically multiple vendors providing different components of the system. There are multiple SoCs (systems-on-a-chip), DSPs (digital signal processors), and operating systems that even a centralized audio controller must run on. It must be aware of a wide variety of audio uses, and manage the audio resources intelligently, while also prioritizing them for sound quality — but not at the expense of safety. And it must have consistently low latencies for the most demanding use cases, while being able to accommodate playback streams from slow and non-deterministic audio applications.
Taken as a whole, that’s a tall order for an overall audio system, and very few technology platforms can measure up. In our next post, we will examine some specific audio technology solutions that can help with these challenges.
About BlackBerry QNX Acoustics Management
The QNX® Acoustics Management Platform (AMP) 3.0 from BlackBerry is a breakthrough in automotive software. For the first time, automakers can design and manage the total sonic experience in their cars with a pure software solution designed to run on general-purpose application processor cores — saving bill-of-material costs and cutting time to production, while delivering new features and uncompromising sound quality.
https://blogs.blackberry.com/en/2022/07/automotive-acoustics-how-to-address-complexity
Ransomware attacks are never far from the headlines and that’s likely to remain the status quo for the foreseeable future. Indeed, Verizon’s 2016 Data Breach Investigations Report states that attacks have grown 16% globally year on year, a worrying trend for security professionals everywhere. But what’s behind the explosive growth of this relatively new form of cyber attack? To answer that, we must first look at how ransomware has evolved to date.
What is ransomware?
Ransomware is a distinct type of cyber attack, in that it extorts payment from the victim in exchange for allowing access to something that was encrypted during the attack.
Early ransomware disguised itself as spyware removal or PC cleanup applications. These did not rely on encryption, but instead they damaged the PC and offered to fix it upon payment for the application. After a couple more years, these scams gave way to attacks using fake antivirus applications. Whilst similar to earlier ransomware attempts, they went one step further and also tried to trick users into paying for multiple years of support.
Encryption-based ransomware first came into prominence in 2011, in the form of malware that prevented access to the computer system. As defenses and recovery methods improved, ransomware evolved into the crypto ransomware that is so prominent now. There are three variants that currently dominate the crypto ransomware landscape:
- CryptoWall: The oldest of the three, it also has the greatest share of worldwide ransomware infections, at 83.45%.
- Locky: The most recent of the top three, it is also the fastest growing and the most advanced ransomware found in the wild. It captured 16.47% of all ransomware attacks between February 17 and March 2, 2016.
- TeslaCrypt: This malware was spread primarily through hijacked WordPress and Joomla sites, and represents 0.08% of all infections. However, recent news that the master decryption key for TeslaCrypt has been released to the public by its developer spells the end of it for good.
What’s behind its growing popularity?
There are several reasons why ransomware attacks have been spreading so quickly over the last few years. One is the technical side. Developing effective ransomware has become easier, even to the point where you can buy “Ransomware-as-a-service”. However, other, more sinister factors are also at play. With the digital transformation of crime, we’re now seeing ‘professional’ cybercriminals whose sole focus is to collect ransoms and launder money. The development of international payment systems like bitcoin has made it even easier to transfer money anonymously, making it less complex for criminals to extort money without being traced.
As a result we are seeing a trend where it’s now easier for technically skilled people to become successful criminals, and professional criminals are using digital methods very effectively. Ransomware attacks have also been added to most exploit kits, which attack PCs through drive-by downloads, without any human intervention at all.
How does it catch users out?
While using cleverly-worded emails has been the tool-of-choice for would-be attackers for some time, there are other ways to infect users that are equally effective.
Nearly all strategies rely on user behaviour. Either a phishing email convinces the user to open an attached file or directs them to a seemingly legitimate site, or the user is surfing the web for news or a subject of interest and clicks on the wrong thing. Advanced Threat Detection software can help to protect against some of these attack vectors, but it won’t help you when the infection lives on the internet.
When it comes to email, attackers are getting smarter, and instead of asking you to open an attachment that is too easily blocked or interrogated, they instead send users to a fake website where the infection is delivered. Email security programs go to great lengths to authenticate websites, ensuring the URL “matches” the domain of the sender, comparing the site against known spurious websites, checking for valid certificates, and so on. But sites can contain redirects, and in most cases, the problem isn’t the security software, it’s the user. The reason to open is compelling, and they click on the link.
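To make the idea concrete, here is a minimal sketch, not from the original article, of the kind of check an email gateway might run on a link; the hostnames, blocklist entries, and function names are invented for illustration.

// Hypothetical example: flag a link whose host is blocklisted or does not
// "match" the sender's domain (i.e., is not the domain or a subdomain of it).
const knownBadHosts = new Set(["phish-example.test", "malware-example.test"]);

function linkLooksSuspicious(linkHref, senderDomain) {
  let host;
  try {
    host = new URL(linkHref).hostname.toLowerCase();
  } catch {
    return true; // an unparseable link is treated as suspicious
  }
  const onBlocklist = knownBadHosts.has(host);
  const matchesSender =
    host === senderDomain || host.endsWith("." + senderDomain);
  return onBlocklist || !matchesSender;
}

// A link to the sender's own portal passes; a look-alike host does not.
console.log(linkLooksSuspicious("https://portal.bank-example.test/login", "bank-example.test")); // false
console.log(linkLooksSuspicious("https://bank-example.phish-example.test/login", "bank-example.test")); // true

Real gateways layer many more signals (certificate checks, redirect following, reputation feeds) on top of a simple match like this, which is exactly why redirects and convincing pretexts remain effective against users.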
What can users do to protect themselves?
Ransomware attacks are expected to expand to other platforms such as Macs, smartphones, and IoT endpoints, and the most successful iterations of ransomware will evolve to stay ahead of defenses. Users should deploy multiple layers of protection to give themselves the best chance of keeping their networks secure. These include the so-called secure trinity of next-generation firewalls, email security, and backup, providing:
- Advanced Threat Detection: that executes suspicious or unknown files in a sandbox environment prior to being forwarded to the user.
- Web filtering: to prevent drive-by downloads and “phone home” activity with a web security gateway or other secure web filtering solution.
- Email protection on premise and in the Cloud (e.g. O365): to identify and stop email messages that carry ransomware and other attacks before they get to the inbox.
- Security policies: disable Office macros and other potential means of attack.
- Data backups: keeping good backups of all data, and having a disaster recovery plan in place to recover from ransomware.
Cybercriminals don’t care who they target with ransomware, as long as the victim is willing to pay. All sizes of organisations have been targeted, with the health care and public sectors taking an especially heavy hit recently. However, while ransomware continues to evolve, it doesn’t mean users can’t protect themselves effectively. A combination of a layered security approach and educating users/employees offers the best approach to remaining ransomware free.
https://informationsecuritybuzz.com/articles/protect-rsing-threat-ransomware/
AI Ethics: Building Trust by Following Ethical Practices
As machine learning and artificial intelligence (AI) usher in the Fourth Industrial Revolution, it seems like everyone wants to get in on the action. And who can blame them? AI promises improved accuracy, speed, scalability, personalization, consistency, and clarity in every area of business. With all those benefits, why are some businesses hesitating to move forward?
On the one hand, businesses know that they need to embrace AI innovation to remain competitive. On the other hand, they know that AI can be challenging. Most everyone has heard news stories of high profile companies making mistakes with AI, and they are worried that it may happen to them too, damaging their reputation. In regulated industries, there’s the question of how to explain AI decisions to regulators and customers. Then there’s the challenge of how to engage with staff so that they can embrace organizational change.
How do you manage AI to ensure that it follows your business rules and core values, while reaping the most benefits? It’s all about building trust in AI.
Let’s take a look at the four main principles that govern ethics around AI and how these can help build trust.
- Principle 1: Ethical Purpose
- Principle 2: Fairness
- Principle 3: Disclosure
- Principle 4: Governance
Principle 1: Ethical Purpose
Just like humans, AIs are subject to perverse incentives, maybe even more so than humans. So, it stands to reason that you need to choose carefully the tasks and objectives, as well as the historical data, that you assign to AIs.
When assigning a task to an AI, consider asking questions such as: Does the AI free up your staff to take on more fulfilling human tasks? Does your new AI task improve customer experience? Does it allow you to offer a better product or expand your organization’s capabilities?
In addition, there is more to this than merely considering the impacts upon your organization’s internal business goals. Consider the negative externalities, the costs suffered by third parties as a result of the AI’s actions. Pay particular attention to situations involving vulnerable groups, such as persons with disabilities, children, minorities, or to situations with asymmetries of power or information.
Principle 2: Fairness
Most countries around the world have laws protecting against some forms of discrimination, including everything from race and ethnicity to gender, disability, age, and marital status. It goes without saying that companies need to obey the law with regard to protected attributes. But beyond that, it is also good business practice to safeguard certain sensitive attributes, such as where there is an asymmetry of power or information.
If the historical data contains examples of poor outcomes for disadvantaged groups, then an AI will learn to replicate decisions that lead to those poor outcomes. Data should reflect the diversity of the target population with which the AI will be interacting. Bias can also occur when a group is underrepresented in the historical data. If the AI isn’t given enough examples of each type of person, then it can’t be expected to learn what to do with each group.
The good news is that with AIs, it is easier to detect and remove bias than with humans. Since an AI will behave the same way every time it sees the same data, you can run experiments and diagnostics to discover AI bias.
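As a simple illustration of such a diagnostic, one common experiment is to compare the model's positive-decision rate across groups. The sketch below is not from the paper; the records and the threshold are invented for the example.

// Hypothetical example: compute each group's approval rate and the ratio
// between the lowest and highest rate (a rough demographic-parity check).
const decisions = [
  { group: "A", approved: true },
  { group: "A", approved: true },
  { group: "A", approved: false },
  { group: "B", approved: true },
  { group: "B", approved: false },
  { group: "B", approved: false },
];

function approvalRates(records) {
  const byGroup = {};
  for (const { group, approved } of records) {
    byGroup[group] = byGroup[group] || { approved: 0, total: 0 };
    byGroup[group].total += 1;
    if (approved) byGroup[group].approved += 1;
  }
  const rates = {};
  for (const [group, { approved, total }] of Object.entries(byGroup)) {
    rates[group] = approved / total;
  }
  return rates;
}

const rates = approvalRates(decisions);
const values = Object.values(rates);
const disparityRatio = Math.min(...values) / Math.max(...values);
console.log(rates, "disparity ratio:", disparityRatio.toFixed(2));
// A ratio far below 1 (for instance under the "four-fifths" rule of thumb of 0.8)
// suggests the model should be reviewed for bias against the lower-rate group.

Because the model is deterministic for a given input, the same experiment can be re-run after every retraining to confirm the disparity is not creeping back in.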
For the full list of principles on how to implement ethical AI practices, download our white paper, AI Ethics. This paper also covers how to develop an AI Ethics Statement that will apply to all projects and how DataRobot’s automated machine learning platform can be a valuable tool to implement ethical AIs.
About the Author:
Colin Priest is the Sr. Director of Product Marketing for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
https://www.datarobot.com/blog/ai-ethics-building-trust-by-following-ethical-practices/
What SSTP is and its suitability for VPN connections
SSTP (Secure Socket Tunneling Protocol) is a protocol commonly used to establish secure VPN connections. It entered the industry in the 2000s, introduced by Microsoft as an alternative to the outdated PPTP and L2TP/IPSec. Essentially, SSTP generates reliable tunnels for transporting encrypted data. Thus, it is one of the protocols for creating secure travel paths between the VPN server and your device. However, is SSTP still relevant, or are there better, more modern alternatives?
What is SSTP?
SSTP is a Microsoft proprietary VPN protocol. It uses Transport Layer Security (TLS) to establish safe connections between the VPN client and server. Native support first appeared in Windows Vista and continues in later Windows versions. Additionally, SSTP is available on other operating systems, like Linux and Android.
In simple terms, it is a VPN tunneling protocol responsible for crafting tunnels VPNs use to transfer data. So, SSTP builds roads for the encrypted data to reach the intended recipients fully intact.
Similar to PPTP, SSTP moves PPP (Point-to-Point Protocol) traffic. However, it does so through SSL/TLS channels. As a result, SSTP has more security mechanisms backing up its reliability. For one, its connections have key negotiation, encryption, and traffic integrity checking.
One of the advantages of SSTP is that it allegedly works better for evading VPN blocks. Such benefits come from using SSL/TLS over TCP port 443, the same port HTTPS utilizes.
Pros and cons of SSTP
Like any VPN protocol, there are advantages and disadvantages. Newer standards can push outdated ones out (like PPTP), but others might simply be better suited to serve particular purposes.
- High-end security. Experts consider SSTP a secure protocol, supporting AES-256-bit encryption. The latter is a cryptographically reliable option.
- Difficult to block or detect. SSTP uses TCP port 443, the same as HTTPS. So, it might be challenging to differentiate between SSTP and HTTPS traffic, ending with fewer chances of blocked access.
- Easy setup on Windows. Operating systems like Windows have integrated support for SSTP. Thus, it might be easier to configure SSTP than, say, OpenVPN, which is not built-in to Windows.
- Decent performance in sufficient conditions. The speed of SSTP connections can be satisfactory. However, experts have noted that it might struggle to support activities like online gaming or peer-to-peer sharing.
- Owned by Microsoft. Microsoft is not a role model for preserving users’ privacy. Its questionable activities are frequently overlooked, while the most gruesome privacy invasions get attributed to other big tech companies. In reality, the reliability of SSTP is a matter of perspective. After all, Microsoft allegedly works with the NSA. Over the years, Microsoft has reportedly supplied access to many resources requested by the NSA. So, a dubious dilemma is whether people wishing for more privacy online should go for SSTP.
- Performance limitations. Using a TCP tunnel has its pitfalls. SSTP will indeed function properly if it has enough excess bandwidth. It ensures that the tunneled TCP timers do not run out. If they do, the performance will drop drastically.
- TCP meltdown problem. This is one of the main reasons for significant drops in SSTP performance. It happens when you stack one transmission protocol on top of another; in this case, a TCP tunnel carrying TCP traffic. The underlying layer can identify an issue and solve it by compensating for it. The layer above reacts by overcompensating. This attempt to make up for shortcomings triggers delays and problems with data transfers. As a result, SSTP connections can stall when encountering TCP meltdown.
- Lack of opportunities to test SSTP defenses. Circling back to Microsoft, it also prevents cybersecurity researchers from contributing to the protocol’s reliability. Since the SSTP code is unavailable, it is impossible for volunteer experts to test it. Take WireGuard as a complete opposite: its code is publicly available, meaning anyone can inspect it closely. Thus, it is also impossible to verify that the allegedly close relationship between Microsoft and the NSA does not extend to SSTP.
Modern alternatives for SSTP
While the security of SSTP is similar to OpenVPN's, its other features are not equally adept. Let’s explore the main reasons why WireGuard, IKEv2/IPSec, or OpenVPN are better options.
- SSTP is a closed-source protocol. Lack of transparency makes it challenging to trust SSTP. Its ownership and potential association with NSA is spooky, enough to make privacy-conscious users look the other way. Besides dubious backdoors, the closed-source protocol might have undiscovered or unpatched vulnerabilities. It limits the further development of SSTP, which could potentially strengthen its validity.
- More stability and better security. IKEv2/IPSec and other protocols using UDP are faster than those equipped with TCP. WireGuard also chooses UDP, which has become a standard for VPN connections.
- Lack of compatibility. Microsoft owns SSTP and has made it available for Windows, Linux, Android, and routers. Such cross-platform compatibility is not exactly enough for modern users. VPN protection is essential for most internet-connected devices. WireGuard and IKEv2/IPSec do not face such obstacles.
Products using WireGuard, IKEv2/IPSec, and OpenVPN protocols have proved their reliability and seamless usage. Of course, SSTP has its benefits, like the stronger resistance against VPN blocks. However, its alleged association with NSA is a strong factor and likely one turning heads the other way.
Atlas VPN supports both IKEv2/IPSec and WireGuard protocols. We strongly believe these options to be resistant, trustworthy, and robust. You can select which is more suitable for your online journey via Atlas VPN settings.
https://atlasvpn.com/blog/what-sstp-is-and-its-suitability-for-vpn-connections
In 1999, President Clinton signed the Gramm-Leach-Bliley Act (GLBA) into law. The act essentially updated and replaced the 70-year-old Glass-Steagall Act and provided greater opportunities for financial institutions to offer more services.
Before 1999, banks’ ability to consolidate was quite limited; investment banks, commercial banks, and insurance companies were considered separate, and the merging of any of these services was typically illegal. The GLBA removed this regulation but meant that the financial institutions would be governed more strictly in consumer privacy, consumer data sales, and information sharing. These components are codified in the Financial Privacy Rule, the Safeguards Rule, and the Pretexting Provision of the act.
Since 1999, increased threats of data loss and concerns about data protection have prompted regulators to use the GLBA provisions as grounds for expanding oversight into other institutions which deal with financial data. Any institution that handles consumer finances is bound by the standards of the GLBA. Most recently, this definition has included higher education.
Compliance with the Family Educational Rights and Privacy Act (FERPA) has long been standard operating procedure within higher education. However, increasing cyberattacks and data breaches have prompted the Federal Government to clarify that, due to the large amount of private financial information held by higher education institutions, colleges and universities have the definitional qualification of being a “financial institution.”
GLBA Audits and More
These audits are meant to test existing data protection policies. Per the government’s student aid website, audits mandate that:
1. The institution designates an individual to coordinate its information security program.
2. The institution performs a risk assessment that addresses three required areas:
a) Employee training and management
b) Information systems, including network and software design, as well as information processing, storage, transmission, and disposal; and
c) Detecting, preventing, and responding to attacks, intrusions, or other systems failures.
3. The institution documents a safeguard for each risk identified in step 2 above.1
Of further importance to higher education are the Privacy and Safeguards Rules as found on the FTC webpage. Though they have been in place for years, the extending definition of “financial institution” places colleges and universities under this regulatory burden. The Privacy Rule maintains that users have the right to opt out of third-party information sharing and to receive an annual privacy notice about how their information is being used and protected. The Safeguards Rule requires institutions to have measures in place to keep customer information secure.
Higher Education Institution GLBA Requirements
While this act is beneficial to consumers, the extended rule application adds a significant burden for educational institutions—which are now considered on a similar level as banks. Under the expanded guidelines, institutions must:
- “Develop, implement, and maintain a comprehensive information security program.”
- “Base your information security program on a risk assessment that identifies reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of customer information that could result in the unauthorized disclosure, misuse, alteration, destruction, or other compromise of such information, and assesses the sufficiency of any safeguards in place to control these risks.”
- “Develop administrative, technical, and physical safeguards.”
- “Develop, implement, and maintain a comprehensive information security program that is written in one or more readily accessible part and contains administrative, technical, and physical safeguards that are appropriate to your size and complexity, the nature and scope of your activities, and the sensitivity of any customer information at issue.”
- “Designate a qualified individual responsible for overseeing and implementing your information security program and enforcing your information security program.”
Essentially all university departments are subject to these requirements as part of an organization-wide privacy audit. Protocols must be updated regularly, disseminated across campus, audited frequently, and modified accordingly. The scope of this regulation appears in the act definitions: “Authorized user means any employee, contractor, agent, customer, or other person that is authorized to access any of your information systems or data.” And “Customer information means any record containing nonpublic personal information about a customer of a financial institution, whether in paper, electronic, or other form, that is handled or maintained by or on behalf of you or your affiliates.”
Comprehending governmental regulations can be challenging for even seasoned financial experts. But extending these rules into the realm of university systems means a significantly increased workload for those who are tasked with compliance.
- Who in your university manages non-public student or staff data, financial or otherwise?
- Admissions departments handle names, transcripts, and other data from applications and inquiries.
- Marketing departments receive logs of web traffic, IP addresses, user information, and computer system information.
- Libraries keep data on users. Maintenance has records on who enters which buildings when (via keycard data).
- Student Life keeps logs of living arrangements, conflicts, and disciplinary activity, all linked to student data.
- Residence staff may keep records in their dorm rooms, allowing possible unintended access to others in the building.
- Professors’ files contain personal student data, whether in hardcopy or digital form.
- Personal information may be linked, through syncing software (e.g. Google Drive, Dropbox, etc.) to personal, home-based computers or mobile devices.
As universities continue to venture into the realm of digital information collection, the opportunities for cyber criminals only increase. Remote learning, online offerings, and financial aid applications broaden a school’s repository of vulnerable, private data. Between 2019 and 2020, ransomware attacks on higher education institutions doubled, with the average cost of remediation at $447,000.
Non-Compliance Penalties and Risks
Failure to comply with the updated obligations will result in the institution being reported to the FTC for further investigation. Consequences may include lengthy oversight periods or disabling institutional access to Department of Education information systems. In addition, due to the FTC’s reading of its own authority, it may impose significant monetary fines or even prison time for violators.5 Violations can cost an organization $100,000, and individuals in leadership can be fined up to $10,000 and sentenced to five years in prison. Organizations have already been found to be in violation of this act. For instance, in 2020, Mortgage Solutions FCS agreed to a $120,000 settlement for violating GLBA regulations.6
Finally, it’s worth noting that as hard-hitting as fines would be to a financial bottom line, they are nothing when weighed against the indirect costs to the institution: loss of trust and damage to reputation. Higher education is already under intense competitive pressure. Few schools could afford a financial or reputational hit as well.
What You can Do to Ensure GLBA Compliance
Thankfully, higher education administrators need not tackle this regulatory burden on their own. Ensuring institutional GLBA compliance can begin with a simple phone call. TruOps is a leader in regulatory compliance and incorporates cutting-edge, automated technologies to identify, evaluate, prioritize, and report on risk vulnerabilities within existing information systems. Our experts work with partner institutions to understand their existing processes and address GLBA compliance.
The TruOps Integrated Risk Management platform, comprised of integrated modules, is deployed as a flexible, cloud-based solution. We streamline risk assessment and deliver solutions across both the internal organization and third-party environments. We’ll assemble and implement a multi-faceted approach to overcome regulatory burdens and establish real-time risk awareness with simple-to-understand dashboards and reports.
We know that colleges and universities have a lot on their plates, and we take pride in our ability to provide solutions to quickly help them navigate the full scope of governmental requirements. Using the TruOps Integrated Risk Management solution, our higher education clients can confidently make informed risk and compliance decisions to securely manage their business.
Let us take on the burden of compliance with our proven strategies and extensive experience.
Give TruOps a call today. www.truops.com
©2022 TruOps, LLC. All rights reserved
https://www.gbiimpact.com/news/glba-compliance-for-higher-education
React Framework: The best choice to build modern web apps
And the obvious question is: why? Why use React? What has made React grow so huge, and why are big companies like Facebook, Twitter, Airbnb, Netflix, Uber, Pinterest, and Udemy betting on it? In this article, we are going to reflect on some of the reasons which make React the best choice to build modern front-end apps.
What is React?
React is an open-source JavaScript library, created at Facebook, for building user interfaces out of small, reusable components.
The features of React
Let’s have a look at some core technical features of React and how it differs from other frameworks.
Here is what a component that shows a random number on click looks like when written with JSX.
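The original code screenshot is not preserved in this copy of the article, so the snippet below is an illustrative reconstruction of such a component; the component name is arbitrary.

import React from "react";

class RandomNumber extends React.Component {
  state = { number: 0 };

  handleClick = () => {
    this.setState({ number: Math.floor(Math.random() * 100) });
  };

  render() {
    return (
      <button onClick={this.handleClick}>
        Random number: {this.state.number}
      </button>
    );
  }
}

export default RandomNumber;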
A React component using JSX.
Here is how the same component looks after compilation. We could also write this by hand, but doing so is not efficient and hence not recommended.
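Again, the article's original snippet is missing here; roughly, the same component written without JSX comes down to plain React.createElement calls, as in this sketch.

import React from "react";

class RandomNumber extends React.Component {
  state = { number: 0 };

  handleClick = () => {
    this.setState({ number: Math.floor(Math.random() * 100) });
  };

  render() {
    // What the JSX <button onClick={...}>...</button> compiles down to:
    return React.createElement(
      "button",
      { onClick: this.handleClick },
      "Random number: ",
      this.state.number
    );
  }
}

export default RandomNumber;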
A component without JSX.
The virtual DOM is a component-wise, in-memory representation of the DOM nodes that React has to render to the real DOM. When a component is re-rendered, React diffs the new representation with the previous one and updates only the real DOM nodes that have changed. The whole diffing process is called reconciliation. In this process, React decides whether it is more efficient to create DOM nodes, update the existing DOM, or destroy the previous nodes and create new ones.
State and Props
A component’s state holds the data that drives what the UI shows. For example, when an isLoading boolean is true, show a loader; when it is false, show some other UI. You just update the state, and React takes care of the rendering.
But to make reusable components, there should be a way for a component to get data from outside; with props you can pass data to the component. Similar to state, if the props of a component change, React will re-render that component.
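As a small illustration (the component and prop names are invented), props simply flow in from the parent:

function Price({ amount, currency }) {
  return <span>{amount} {currency}</span>;
}

// Rendered from a parent component as: <Price amount={42} currency="EUR" />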
Lifecycle hooks are methods provided by React that are called at different phases of a component’s lifecycle. For example, when a component mounts, you may want to initiate an API request. You can do that in componentDidMount, but since the request is asynchronous, you would ideally want to show a loader. The render method is called right after mounting, and when the state changes after the API request resolves, the component renders again.
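A sketch of that pattern follows; fetchUser and Spinner are placeholders invented for this example, not APIs from the article.

import React from "react";

class UserProfile extends React.Component {
  state = { isLoading: true, user: null };

  componentDidMount() {
    // fetchUser stands in for whatever async API call you make.
    fetchUser(this.props.userId).then((user) =>
      this.setState({ isLoading: false, user })
    );
  }

  render() {
    if (this.state.isLoading) return <Spinner />;
    return <h1>{this.state.user.name}</h1>;
  }
}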
Now that we have discussed some of the core technical features of React, let’s have a look at some of its other benefits.
React seems less magic compared to other frameworks like Vue and Angular, where you can write for-loops and other logic in HTML tags.
Backward compatible, Progressive enhancement
React is used much more internally at Facebook than Angular is at Google. Because Facebook runs from the master branch of React, it is guaranteed to be backward compatible.
Progressive enhancement means new features will be additive or optional, not a replacement of any existing API. And if there is any breaking change or API deprecation, React will lay down an efficient strategy to migrate existing code to the newer version.
Great community support and packages ecosystem
React is backed by Facebook, adopted by other tech giants, and used by millions of small and medium-scale companies. That means almost every problem or use case you encounter has most probably been encountered by someone else, and you might find a solution for it. Or you can get an idea based on existing solutions to solve your problem. And at last, you can ask people directly on StackOverflow, Reddit, Reactiflux or Twitter.
Exciting things are coming ahead.
Let’s have a very quick overview of some new things in React and what is coming ahead.
Suspense and Lazy loading
Shipping the whole frontend app at once can be a performance problem, especially if your app is huge. This can be avoided with code splitting, which means fetching modules only when they are required at runtime.
With React.lazy you can lazy-load a component like this.
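The article's snippet is not preserved; a minimal equivalent would be the following, where the ./HeavyChart module path is a placeholder.

import React from "react";

const HeavyChart = React.lazy(() => import("./HeavyChart"));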
A lazily loaded React component
I would recommend using dynamic import for route-level components or for huge dependencies which are only used in part of the app.
You may also want to show some loader when this dynamically imported component is being loaded. You can do that by wrapping it in a Suspense component.
Suspense captures promise anywhere in child tree thrown by React.lazy to show fallback content. You can have multiple Suspense components. The nearest ancestor one will be used to show fallback content.
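For example (again with a placeholder module path), the fallback is shown by the nearest Suspense ancestor while the lazily loaded component is being fetched:

import React, { Suspense } from "react";

const HeavyChart = React.lazy(() => import("./HeavyChart"));

function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <HeavyChart />
    </Suspense>
  );
}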
Hooks are a new way of writing components in React. With hooks you can now write functional components with state and lifecycle behaviour. Functional components are more readable than class components. Also, the compiled size of functional components is smaller than that of class components.
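For instance, the random-number component from earlier can be rewritten as a function with the useState hook; this is an illustrative sketch rather than code from the article.

import React, { useState } from "react";

function RandomNumber() {
  const [number, setNumber] = useState(0);
  return (
    <button onClick={() => setNumber(Math.floor(Math.random() * 100))}>
      Random number: {number}
    </button>
  );
}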
React hooks also solve other problems like wrapper hell, render props pattern, and consuming context. They also provide a much better way to separate reusable logic and serves as a base to simplification of API for other libraries like GraphQL Apollo.
The upcoming concurrent mode will give React the ability to pause rendering and work on different priorities. For example, say there is a search input, and you want it to stay responsive while the user is typing; but if React is busy rendering some expensive UI, it might start lagging on low-end devices. In concurrent mode, React will take care of these high-priority tasks itself.
Better event handling
In the ongoing Flare project, React is moving from an imperative way of handling events to a declarative way. If you have ever worked in React on something where you needed to manage focus or listen for hover events, especially from a child component, you know how hard it is. In the new declarative way, instead of passing callbacks from the parent down to child event handlers deep in the tree, you will be able to code like this (experimental pseudo-code):
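The original pseudo-code is not preserved here; a reconstruction in the same experimental spirit might look like the following. The Press and Hover components and the react-events import are illustrative only and were never a stable, shipped API.

import { Press, Hover } from "react-events"; // experimental, illustrative only

function LikeButton() {
  return (
    <Press onPress={() => console.log("pressed")}>
      <Hover onHoverChange={(hovering) => console.log({ hovering })}>
        <div className="like-button">Like</div>
      </Hover>
    </Press>
  );
}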
The new system will have some wrapper event components which will do a lot of heavy lifting for users. Like Press event component will handle some dozen of mouse, pen, touch and keyboard events and will be consistent between different platforms.
React also needs this to support partial hydration, which is another big thing coming to React.
Better debugging tools
The upcoming version of the debugging tools has some great features, including copying props from dev tools, the ability to see why a component re-rendered, hooks support, and better viewing of nested trees.
Cross-platform apps with React Native and Electron
Though you cannot use your React web app code as it is, you can reuse some of the components you write for web apps to make mobile apps using React Native and desktop apps using Electron. And of course, the philosophy of writing component-based apps remains the same on every platform.
React Native apps are real native apps, not just wrappers around web apps made to work on mobile devices. Facebook has recently been pushing harder than ever to make React Native even better.
If you have a small project and an inexperienced team, and you don’t care how things work as long as they work, you should go for Vue.js, as it has the smallest bundle and the easiest learning curve.
If you know upfront that the project is going to be huge, you want your team to learn TypeScript, and you want to hold them to a particular set of libraries and project structure, you should go for Angular; it comes with a huge bundle size and mandatory TypeScript.
https://www.greycampus.com/blog/programming/react-for-web-development
Researchers have developed a new type of smart window that controls varying levels of tint, saving up to 40 percent on an average building’s energy costs. Smart windows normally require external power for operation, which makes them complicated to install in existing buildings. This system, however, features solar cells that selectively absorb near-ultraviolet (near-UV) light, so the new windows are completely self-powered.
Loo, from Princeton University, said this new technology is really about smart management of the entire spectrum of sunlight. Because near-UV light is invisible to the human eye, the researchers set out to harness it for the electrical energy needed to activate the tinting technology.
Using near-UV light to power these windows means that the solar cells can be transparent and occupy the same footprint as the window without competing for the same spectral range or imposing aesthetic and design constraints. Typical solar cells made of silicon are black because they absorb all visible light and some infrared heat.
The researchers used organic semiconductors, contorted hexabenzocoronene (cHBC) derivatives, to construct the new solar cells, because their chemical structure can be modified to absorb a narrow range of wavelengths.
To make the new solar cells, the semiconductor molecules are deposited as thin films on glass, enabling the cHBC semiconductors to produce electricity when sunlight hits them.
The researchers also created a smart window using electrochromic polymers to control the tint; it can operate solely on the power produced by the solar cell. The window changes from clear to dark blue when the near-UV light generates an electrical charge in the solar cell.
The charge triggers a reaction in the electrochromic window, causing it to change from clear to dark blue. When darkened, the window can block more than 80 percent of light.
The research team is also looking to create a flexible version of the solar-powered smart window system that can be applied to existing windows via lamination.
They explained that the near-UV solar cell technology can also power internet-of-things sensors and other low-power consumer products.
“It does not generate enough power for a car, but it can provide auxiliary power for smaller devices, for example, a fan to cool the car while it’s parked in the hot sun,” Loo said.
More information: [nature energy]
https://areflect.com/2017/07/01/smart-window-new-transparent-solar-cell-technology/
Cyber attacks are one of the most common incidents that any organisation can face. In fact, 39% of businesses in the UK reported a cyber security breach or attack in 2020. Every cyber incident, no matter how big or small, will initiate a cyber security incident response effort aimed at mitigating the impact of the event and limiting the damage to the organisation’s operations, finances, and reputation. However, a successful cyber attack incident response begins long before an attack actually takes place.
Cyber security incident response planning requires an understanding of where an attack could come from, and creating plans for each attack vector. Included in those plans are preparing teams for the incident and setting out clear communication plans.
Cyber attacks really are that common
83% of organisations in the UK reported that they identified phishing attacks directed at their organisation in 2020. Phishing attacks are a form of social engineering where criminals send fake emails to an organisation’s staff to either gather useful information such as email addresses, bank account details, or even passwords; or they are used to get users to download malware through a link or attachment.
While phishing attacks are fairly common, it is the far more destructive ransomware that makes the headlines. Ransomware is growing in scope and destructiveness, as are some other forms of cyber attack such as viruses, spyware or malware, denial of service attacks, or hacking of accounts. Organisations operate under constant threat of a cyber attack.
Preparation is key… What should go into a cyber security incident response plan?
The Cyber Security Breaches Survey found that just 31% of businesses and 27% of charities in the UK include cyber security threats in their business continuity plans.
There are many variables to include in a cyber attack incident response plan, but just some of the elements to consider include:
Connect detection controls to a response platform – You can’t respond to incidents you don’t detect. Make sure to get early warning of an attack through effective detection controls and have an efficient security operations centre who can mobilise quickly in an attack.
Create incident response playbooks – Go further than an incident response plan – create a playbook. A playbook will include all the information, contact details, step by step tasks the cyber attack incident response team need to carry out in order to respond to the incident. Included in the playbook will be written guidance on who to notify, including regulators, and communications plans for employees, stakeholders, and the public.
Cyber attacks inherently affect IT systems, so it is important that the playbook is stored somewhere secure, and accessible when the IT network is not working.
Move incident response plans off the page – Test incident response plans with tabletop exercises, or rehearse them with full cyber attack simulations to see what works and what doesn’t work. Acting out cyber attack incident response plans will ensure that everything is covered in the plan, and that everyone knows what to do in the real thing.
Help employees stay safe – It’s no coincidence that phishing and other forms of social engineering are the most common form of cyber incident. Humans have long been seen as the weakest link when it comes to cyber security, and attackers exploit their lack of awareness. Engage employees with the cyber security programme, and train them to become security champions, protecting themselves and the organisation’s network.
Review cyber security incident response plans regularly – Things change. So don’t forget to update plans when they do. When new systems come online, add them to the playbook. Review the list of incident responders and their contact details to ensure that it is as up to date as possible.
Spring into action – speed is essential in a cyber security incident response
When an attack occurs, time is of the essence. Make sure that the incident response team is made aware of the attack as soon as possible with an integrated IT alerting system that will bring the team together in a matter of seconds.
The plans and playbooks are crucial at this point, so it is vital that they are readily accessible at the team’s fingertips. Most of the first few tasks should already be done: the incident response team should know who they are, the Lead Investigation Officer should already be in place, tasks should be mapped out, and communications plans should be in place.
With that said, some of the activities that follow a cyber incident include:
Quickly contain the breach – Understand which servers, devices, systems are impacted and take them off line as quickly as possible. Disconnect everything from the internet while searching for the full scope of the attack, and disable remote access. Change passwords for all systems immediately.
Assess the breach – Gather more information about the breach. Is it a stand alone attack, or are other organisations also affected?
Log everything – Keep a comprehensive log of the incident and response, including when the incident occurred, how it was discovered, the actions undertaken to manage it, the members of the incident response team, and more.
Prioritise what to work on when – Information about the criticality of certain systems, and what to prioritise should be in the cyber security incident response plan. Of highest priority are the systems required to operate, or to return to operations as quickly as possible.
Communicate with stakeholders – Notify employees, stakeholders, customers, and the public as soon as possible. Contact regulators and insurance providers in the first instance.
Recovering from a cyber attack
The initial response to a cyber attack is only the start of an organisation’s recovery from the incident. Once the heat of the response has ended, that’s when the post event analysis kicks in to learn the lessons of that attack and the organisation’s response to it.
Analysing the incident response is important. The organisation will want to understand which containment actions worked, whether the cyber attack incident response plans were effective, and the losses and costs of the attack. Reports and logs that keep record of every action carried out during the response will support the organisation in their efforts to learn from the attack and improve their response in their next incident – whatever it may be.
Supporting your cyber security incident response planning
Creating cyber security incident response plans may seem like a daunting task, but there are platforms available to support organisations through every step of the planning, response, and analysis stages.
At Crises Control, our incident management platform is just one part of our powerful mass communications platform that will support your cyber security incident response with real time alerts, secure, available messaging, and the ability to prepare and execute messaging between the incident response team and the organisation, and with employees, the public, customers and suppliers, and other stakeholders.
Schedule a demo to learn more about how Crises Control can support your cyber security incident response planning today.
https://www.crises-control.com/blogs/cyber-security-incident-response-planning/
JCL (z/OS) - Working with Procedures and Symbols
Previous courses have described many of the statements and parameters to build a basic job. This course looks at some advanced JCL capabilities including the storing of JCL code externally and calling it in the form of a procedure or an INCLUDE group. You will also see how symbols can be incorporated into JCL, and the benefits and flexibility they can provide.
Operators and programmers who need to know how to code and submit JCL batch jobs.
Completion of Interskill's JCL Coding Basics for JOB, EXEC and DD Statements courses, or appropriate knowledge.
After completing this course, the student will be able to:
- Describe how cataloged and in-stream procedures are created and invoked
- Explain how symbols are created and referenced
Working with Procedures
What are procedures and why are they useful?
Coding Catalog procedures
Passing Variables to Procedures
Working with Symbols
What are JCL and System Symbols, and how are they used?
Syntax when Referencing Symbols in JCL
Using the SET statement to define a symbol value
Symbol substitution examples
Common symbol problems
https://bmc.interskill.com/course-catalog/JCL-zOS24-Working-with-Procedures-and-Symbols.html
As climate change advances as a top concern globally, IT experts are pressed to discover more innovative ways to minimize the data centers’ impact on natural resources. The cooling process, which makes up 40 percent of the power consumed by data centers, is getting some of the most intense scrutiny.
According to the Commercial Buildings Energy Consumption Survey, office buildings with data centers use a significantly larger amount of energy than office buildings without them — primarily because of factors like cooling, electricity, and computing demands. Also, since data centers work 24/7, they demand consistent power. In many cases, the survey revealed, cooling electricity intensity in office buildings with data centers was nearly twice as much as those without data center functions.
Here are some cooling and efficiency innovations that have been implemented in recent years, pointing to some variations that could possibly lead to mainstream solutions.
Artificial Intelligence. Google has been using DeepMind technology to test artificial technology on resolving energy inefficiencies in its data centers. Early indications show that it’s working, with an overall power consumption reduction of approximately 40%.
Underwater data centers. Microsoft is also launching a project to reduce data center cooling costs, by going underwater. Project Natick, an underwater data center, makes use of ocean water to cool off the data center infrastructure. As with Google, the results are showing promise. The company is building another underwater data center.
Outdoor air cooling. Perhaps more accessible to the average data center, using outside air to cool data centers has been another approach to reducing energy dependence. Facebook has been using free cooling techniques with success. However, there are other companies that have cited concerns about contaminants getting into equipment when using this approach.
Other innovative approaches being tested include water submersion for cooling; Microsoft tested the waters by placing a self-contained data center in the ocean. More practical cooling applications have included hot-aisle containment (HAC) and cold-aisle containment (CAC), which have led to reduced energy use among numerous companies.
Want to learn why EMP shielding, FedRAMP certification, and Rated-4 data centers are important?
Download our infographic series on EMP, FedRAMP, and Rated-4!
https://lifelinedatacenters.com/data-center/top-concerns-energy-efficiency/
eSports is on a trajectory to become the largest sport in the world. According to Activate Technology and Media Outlook 2020, the industry is expected to hit $7 billion by 2023. However, for many eSports players, there are unexpected benefits to participating. Educators, parents, and eSports athletes themselves note that participating in eSports helps students build critical soft skills, from stronger communication to collaboration and teamwork skills. Here’s a closer look at some of the latest insights from the field and how an eSports program could help your school prepare your students for tomorrow’s most challenging academic and career opportunities.
On the Importance of Soft Skills
According to a recent piece in the Harvard Business Review, one of the most critical areas for skills development is soft skills. As many as 50% of jobs are likely to be automated by 2024. Today’s students can help prepare for the future of work by not only focusing on their technical and hard skills, but also on developing their soft skills.
As the authors note, “In one survey, 93% of employers reported that ‘a candidate’s demonstrated capacity to think critically, communicate clearly, and solve complex problems is more important than his or her undergraduate major.’ In addition, employers seek candidates who have other sorts of ‘soft skills,’ such as being able to learn adaptively, to make good decisions and to work well with others. These sought-after abilities, of course, fit perfectly with the sorts of things that people can do well, but are and will continue to be difficult to automate.”
In other words, even the most sophisticated technology struggles to replicate factors such as emotional intelligence, the ability to understand and adapt decision-making changes to context, and creativity and collaboration. Students can develop important competitive advantages for their future academic and professional performance by focusing on the soft skills that eSports participation provides. The latest research and interviews with industry professionals suggests that eSports is a perfect training ground to let students test and develop these skills.
eSports Cultivates Teamwork
There are numerous studies on the benefits of teamwork, which have led to the traditional classroom recommendation that students participate in team sports. But for students with different interests or different physical capabilities, signing up for the football team or joining Little League isn’t always a possibility. However, eSports relies primarily on technology and is a very inclusive option for a wider range of students. eSports helps develop teamwork skills that include communication, collaboration, and learning how to work effectively with others.
One study in The Sports Journal notes that researchers, “identified team dynamics and communication as potential barriers for esports players in achieving optimal performance. Contrary to stereotypical perception of gamers, esports players need to communicate with teammates effectively and operate as a team member. Furthermore, collective intelligence has been identified as a predictor for the performance of esports teams (Engel et al., 2017). It would seem that group dynamics plays a critical role in team performance for esports in a similar way it does for traditional sports.” Fostering these skills can help students in future academic endeavors, career, and transitioning to leadership roles.
Developing Strong Team-based Problem-Solving Skills
Another advantage that esports offers is the ability to help students develop problem-solving skills under pressure and working collaboratively with others. Dr. Mimi Ito, Professor of Cultural Anthropology at UC Irvine, has conducted research on how students engage with digital technology and notes that it takes significant hard work to excel at eSports.
In an interview with the North American Scholastic Esports Federation, Ito notes “eSports provides a way for young people to hang out with their friends in a really active and positive way… Students are engaged in 21st century skills and problem-solving, and they’re understanding how to connect their own problem-solving with a whole community of players.”
Success at eSports Fosters Self-confidence
Self-confidence is a soft skill that enables students to take on challenges and broaden their horizons. As ET notes, “Achieving and excelling at competitive gaming in a learning environment can do wonders for students who love gaming but may not show any particular interest in traditional curriculum sports and activities. By offering eSports as an alternative, students are given the choice of taking up something they truly enjoy which helps improve self-confidence in their own abilities.” Students that find success in eSports may be more willing to take risks, try new things, and believe in their ability to succeed at challenging endeavors.
Competition, Competitiveness, and Much More
It’s estimated that there are 125 varsity college teams participating in competitive eSports leagues today, and the number is increasing annually. By taking part in competitive activities, students are building a number of soft skills. Often, the popular eSports games rely on teams of players working together to win, rather than the prowess of individual players. Learning how to compete effectively supports an array of skills, from the importance of time management and developing a strong work ethic to managing losses in a healthy way and rejoicing in their own success.
If you need to build a strong case for hosting an esports team at your school, the effectiveness of cultivating soft skills can help. eSports have opened a new avenue of exploration and performance for students. From creating an inclusive environment for students with a wide ranges skills and abilities to helping cultivate stronger teamwork and collaboration, eSports fosters the vital soft skills that will help students be competitive in the future job market.
Since the pandemic introduced the world to the concept of “the new normal,” nothing is normal anymore. A simple incident can be disastrous for something as vital as the supply chain, as we have seen with the blockage of the Suez Canal. The essence of that chain is to give companies the certainty that raw materials and components are available in order to produce finished goods.
The pandemic resulted in lockdowns and, combined with digitisation and the economic recovery, in high demand. That certainty is now being challenged. When manufacturers look for alternative materials and components, they can face re-certification of their products, or newly developed products cannot be released. As a result, existing products must remain available for longer. The fire safety and security market is highly dependent on electronics, so the industry is directly affected by the supply chain crisis.
Supply chains are formed by complex connections between companies. It starts with the raw materials and ends with finished goods for industry and end user. One chain can include up to thousands of companies. This is not a problem because, thanks to proven forecasting methods, the activities of the companies in the supply chain are precisely coordinated. This considers demand, supply, seasonal influences or specific characteristics of regions. What is not considered – and what is not possible – are unknown factors. These can lead to the forecasts no longer being correct. The well-oiled machine of the supply chain then quickly starts to creak and squeak.
One unknown factor the world faced in 2019 was COVID-19. This made it clear that society is not prepared for events that are not likely to happen but can have a major impact on society. Unfortunately, the start of the pandemic happened in a country where a large part of the world’s production takes place. For years, Western companies have located parts of their production there. When the production heart of the world temporarily stops beating, the world comes to a standstill. Problems in the supply chain are a direct consequence.
Several industries had problems even before COVID-19. Producers of chips, computer parts and other components needed for the digitalisation of our society were already under great pressure. The production capacity of these goods is limited worldwide and the slightest change in demand can cause supply problems. This was already the case with smartphones, (game) computers or televisions. Chips had already entered the automotive industry on a large scale and with the electrification of this industry, the demand for chips soared. We see a similar development in industries and parts of society where the (Industrial) Internet of Things is becoming commonplace.
The consequences of the COVID crisis have led many governments to recognise that high dependence on producers from a single region poses a greater risk to certain sectors. The fact that many European countries had no production capacity for the facemasks needed during the pandemic is perhaps the best example of this. For electronic chips and components we face the same challenge: to reduce the risks, there is simply a need for more and better-distributed production facilities.
In the pursuit of lean manufacturing, production has been outsourced to Asia which means that a shutdown of factories in one country can have a global impact. The EU also recognised this even before the pandemic. Accelerated by the pandemic, the EU is focusing its policy, among other things, on increasing domestic capacity and diversifying the number of suppliers.
Following the rapid spread of the coronavirus in China, European companies were affected. The lockdowns introduced in China led to a virtual standstill in production and restricted the freedom of movement of residents, which also brought logistics providers to a standstill. As quickly as companies were caught off guard by these lockdowns, the recovery in demand was also swift.
For many companies that were caught off guard by global lockdowns, the speed of the recovery was almost as disruptive and led to another supply chain crisis during the pandemic. Increased consumer spending, and thus demand for products, combined with delayed transportation by sea and air caused major shortages and record backlogs. The tightness in container capacity is expected to continue for some time, which will not help to clear the shortages of electronic components.
Effects on the fire safety and security industry
The supply chain crisis caused by the pandemic also affects companies in the fire safety and security industry. The effects concern not only the manufacturers of equipment but also companies in the field of service and maintenance of systems. Beyond this, there are other areas that can impact building safety. One example is that recommended emergency escape routes that were in place before the lockdowns are now mixed with the one-way traffic signs intended to allow employees to pass each other at a safe distance.
Manufacturers of electronic fire safety and security equipment are affected by the disruption in transport and shortages of natural resources and core materials. COVID-19 has shown that unexpected events can shatter the basic premise that materials will be easily accessible, disrupting supply chain performance. The chain reaction initially caused by the shutdown of factories affected not only the supply chains but also the workflows within and between companies.
Paul van der Zanden, General Director of Euralarm adds: “Another relevant topic that affects our industry is the compliance of the products that the industry delivers. With electronic components not being available due to the supply chain problems, manufacturers need to reconsider replacement of parts that aren’t available. However, with the replacement of certain components, the conformity of the final product may also be at stake.” This could make it necessary to have the product retested and recertified. High (and unnecessary) costs could result from this.
When service and maintenance companies were faced with problems in reaching the customers during the pandemic, these organisations learned other flexible ways to stay in contact with their customers. Many industries and businesses have started modifying their operational methods. They are now operating their business online. The fire safety and security industry is doing the same by starting virtual offices and using remote service and diagnostic tools to support their customers.
Customers are moving to hybrid working models, which are being applied throughout society and could lead to downsizing or repurposing of buildings. This can also require that fire safety and security provisions be adapted to the new use or size of the building.
Effects for the Green Deal
Securing a sustainable supply of metals and minerals used for components in fire safety and security equipment is also key to meeting the energy and climate targets for 2030 and beyond. The European Green Deal aims to make the EU’s economy sustainable. That creates many opportunities for European society and industry in the current context of both the climate crisis and the COVID-19 outbreak.
However, the transition towards green technologies, like renewable energy, e-mobility and stationary energy storage, relies heavily on critical raw materials, such as cobalt, neodymium and tungsten, for new products and services. Both globally and in Europe, the demand for these materials is expected to continue to increase. This can create challenges for the Green Deal.
The impact of extracting and processing these resources is high while the supply chains are often not transparent and may lack traceability. Another challenge is the recycling of the materials. For most critical raw materials, the recycling efficiencies are low while the dependency on non-EU countries is high and still increasing.
The green ambitions of the EU could therefore also lead to certain activities being brought back to the West, either to reduce the dependency on non-EU countries or to avoid CO2 emissions resulting from transporting goods from other parts of the world to Europe. This could lead to shorter logistics chains and more sustainability in several sectors. In that sense the current crisis in the high-tech supply chains contributes to a greener world and a stronger Europe.
For more information, visit: www.euralarm.org
Staying secure in a changing agricultural landscape
Agriculture is essential for modern society. Arguably, it has never been more important – with climate change, population growth, demographic changes, and water scarcity, it’s vital that the food industry adapts and adopts technology to meet the growing demands on the food supply chain and network.
The UK’s agriculture industry has changed dramatically in recent years. Farms have increasingly adopted emerging technologies as a way to improve efficiency and cut down on costs, and terms such as ‘smart farming’ and ‘precision farming’ have come into popular use.
However, with the increasing adoption of digital technologies comes a growing cyber security threat. While this is true for all industries, the agriculture sector faces its own unique challenges.
For example, concerns around animal welfare mean that agriculture organisations can face threats from hacktivists wishing to cause financial damage. As well as this, the UK’s food sector is classed as critical national infrastructure, which makes it a potential target for nation-state actors.
Having effective cyber security measures in place is crucial if organisations want to implement and maintain effective digital processes. Many of the individual businesses that make up the UK’s agriculture sector are small or micro-enterprises, which are less likely to have strong defences in place. Additionally, complex supply chains and a reliance on third-party infrastructure can make it difficult to quantify cyber risk, which is where government departments and security specialists can step in to help to improve knowledge and share best practice.
Large food processors can also face cyber security threats and be targeted by activists and sophisticated nation-state actors that want to cause disruption to a supply chain. Although food shortages would be unlikely, a successful attack on just one of these large organisations could affect a large number of farmers and growers.
It’s important to understand the potential risks that can arise from the UK’s complex and interconnected food network. From databases containing confidential and often critical data about farm produce and livestock, to internet-connected vehicles and heating, ventilation and air conditioning (HVAC) systems in storage spaces, increasing digitisation can present a vast attack surface.
Along with Harper Adams University, our research team has been working on an in-depth analysis of the UK’s agriculture sector, exploring potential risks and outlining how industry and government can work together to improve the resilience of the nation’s food network.
By the time we get to the 2020 U.S. Census, estimates say there will be about 330 million Americans living in more than 140 million housing units. As the country’s population grows, how can the Census Bureau encourage as many citizens as possible to respond to its surveys?
This was the problem the agency has been trying to solve, according to Deirdre Dalpiaz Bishop, chief of the Census Geography Division.
“The most costly part of conducting a census is when we have to go out and conduct nonresponse follow-up,” she told GovernmentCIO Media at the 2018 Esri Federal GIS Conference on March 20. This follow-up involves physically going door to door around the country to the people who didn’t respond.
By 2020, Census hopes to reduce nonresponse follow-up, increase self-response and better communicate with community leaders with the Response Outreach Area Mapper, or ROAM.
What is ROAM?
An online interactive public map powered by Esri, a provider of GIS mapping software and platforms, ROAM is populated with statistical data from the Census Planning Database. It provides tract-level data on low response scores and information about people and households from the American Community Survey, like poverty status, education level, race and language ability. The data is updated annually and the most recent 2012-2016 numbers will be plugged into the application in June, providing the most up-to-date low response score calculations and predictions.
The Planning Database has been publicly available for several years, but was a large and difficult file to download, use and process.
“It was sort of a specialized file that required a third party application to be able to manipulate it, and look through it,” said Suzanne McArdle, Census computer mapping specialist, who also was at the Esri event.
So, when McArdle and the geography division realized the maps — being the most important part of the puzzle — and the spatial aspect weren't included in that Planning Database, it was time to integrate. Now, anybody with a web browser can interact and see all the data.
“It’s just right out-of-the-box usable,” McArdle said.
Ultimately, ROAM makes it easier to identify areas with typically low response rates for censuses and surveys. It helps the agency plan for the 2020 Census and communicate with tribal, state and local governments and community leaders about where those hardest-to-count populations will be. In turn, community leaders and officials can use that information to educate those populations about why it is important to self-respond.
How Did Census Integrate?
Integrating the platform wasn’t much of a challenge. Census already used Esri tools at the enterprise level, and ROAM is one of the first applications the bureau built using Portal for ArcGIS, an Esri platform for sharing and securing geospatial content.
“That’s actually something that was stood up specifically to help support our work here, because it was already accessible to us,” McArdle said.
It didn’t require a workforce trained in code development, either. Once the geography division figured out what it needed the tool to be and look like, and the cartographic design of the tool, it was easy to put together.
And when it came to staff adoption, Bishop said senior leaders at the bureau were “hungry for this type of application.” They’ve been looking at this data for at least the past 20 years, but 20 years ago, it required geographic specialists to create the maps and print them out for Census leaders and communicators, who would take the printed maps to community leaders.
Now, anyone carrying a device with a web browser can pull up the map and show people where the hardest-to-count populations will live.
The application was also shared with Census’ National Advisory Committee in 2017. The committee consists of people across the country who have been asked to serve the federal government and advise Census about how it’s progressing on its plans for the 2020 Census.
“[The committee] could see the value of a tool like this, and that really helped us get this tool out even faster,” Bishop said.
How is ROAM Used Internally?
The tool was an idea in April 2016 and was released to the public in February. During that time, there were different phases of integrating it into what the geographic and mapping teams were doing.
Leadership at all different levels can easily use the tool to help plan for the 2020 Census. For example, for areas with lower response rates, Census recruitment will know whom to hire to go knocking on doors in certain communities based on the area’s demographics or predominant languages spoken.
The maps project where Census will have the greatest need for hiring and recruiting people, what the unemployment rates are for the future and how it can attract the right people. The tool can help predict where to target outreach teams when working with communication teams and contractors, and how to design a message for TV, radio or internet depending on the population the bureau is trying to reach.
“All of these statistics in this data set are going to help us to make informed decisions,” Bishop said.
What About External Users?
Census has partnership specialists who go out to local communities and meet faith-based and community leaders to show them the tool and how it can be used. The map will even inform local officials of where to allocate resources and teams to encourage self-response.
Census also publicly released ROAM Representational State Transfer (REST) services, which make all of the map data in the application available at a public REST endpoint. This means other developers can create their own web mapping applications with ROAM data as a base.
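For developers who want to experiment with that kind of integration, the sketch below shows one way to query an ArcGIS-style REST endpoint from Python. The service URL and field names are placeholders rather than the actual ROAM endpoint, and the `requests` library is assumed to be installed.

```python
import requests

# Placeholder URL -- substitute the actual ROAM/ArcGIS REST service endpoint.
SERVICE_URL = "https://example.gov/arcgis/rest/services/ROAM/MapServer/0/query"

params = {
    "where": "STATE = '06'",                   # hypothetical filter: one state's tracts
    "outFields": "TRACT,LOW_RESPONSE_SCORE",   # hypothetical field names
    "f": "json",                               # ask the server for a JSON response
}

response = requests.get(SERVICE_URL, params=params, timeout=30)
response.raise_for_status()

# Print the attributes of each returned tract feature.
for feature in response.json().get("features", []):
    print(feature.get("attributes", {}))
```

A local GIS shop could take the returned features and layer its own data, such as youth centers or faith-based organizations, on top of them in its own web map.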
So, as partnership specialists make connections in local communities, municipal GIS shops can build their own applications using Census data and their own local data on top of it. Census doesn’t store youth center location or faith-based organization data on its map, but local communities with access to that information can add that layer.
“If it will help state and local officials to have their own GIS web mapping applications that look similar to ours, but add value, add the local information, we thought that that would be really helpful,” McArdle said.
This feature was initially a recommendation from the National Advisory Committee.
“We keep a running list of enhancements, and so we’re taking note of anybody who comments on how this tool could be better because if we can implement it, we would like to be able to do that,” McArdle said.
ICMP Source Quench
ICMP source quench messages are generated when a gateway device runs out of buffer space to process incoming network traffic. This is an informational message generated in an attempt to tell the remote host producing the traffic to limit the speed at which it is sending.

Affected systems: all connected network gear.

ICMP source quench messages are generated by gateway devices that no longer have the buffer space needed to queue datagrams for output to the next route. This could indicate a routing problem, a network capacity problem, or an ongoing denial-of-service attack.

False positives: legitimate source quench datagrams will trigger this rule.

Impact: denial of service. Attackers could potentially use ICMP source quench datagrams to rate-limit a remote host that listens to unsolicited ICMP source quench datagrams.

Corrective action: use ingress filtering to block incoming ICMP source quench datagrams.
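As a rough illustration of what these datagrams look like on the wire, the sketch below uses Scapy (assumed to be installed, and requiring root privileges to sniff) to report ICMP type 4 (source quench) messages. It is a monitoring aid only, not a substitute for ingress filtering at the network edge.

```python
from scapy.all import sniff, ICMP, IP  # requires scapy and root privileges

ICMP_SOURCE_QUENCH = 4  # ICMP type 4, code 0, per RFC 792


def report(pkt):
    # Called for every sniffed packet matching the capture filter below.
    if pkt.haslayer(ICMP) and pkt[ICMP].type == ICMP_SOURCE_QUENCH:
        print(f"Source quench from {pkt[IP].src} to {pkt[IP].dst}")


# Capture ICMP traffic only; adjust the interface or packet count as needed.
sniff(filter="icmp", prn=report, store=False)
```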
Ever wonder where the Internet lives? (No Ted Stevens, it is not in a “series of tubes.”)
Actually, it’s more like a series of servers, or computers, many of which are located in data center facilities. Some of these are on-premises, which means they’re owned and operated by an organization that provides the Internet-based content, service or application. Others are in the cloud, which is just an airy word for a facility that is owned and operated by a paid vendor, for example, Amazon Web Services or Google Cloud. Some are in colocation facilities, which means the equipment, bandwidth and storage space are rented out to clients.
Regardless of the model used, chances are, most of the Web activity you engage in for work or play on any given day entails your computer connecting to a remote server somewhere. This could be a few hundred miles away, or it could be across an ocean. What’s important to understand is that the Internet is entirely dependent upon these facilities. And while it’s easy for data center managers to get wrapped up in their day-to-day processes and forget just how important security really is, data centers are essentially the backbone of the Internet: Trying to keep Web services up and running without these facilities would be like trying to walk without a spine.
What are the cyberthreats to data centers?
There’s a long list of data center hazards that includes floods, fires and power outages, which have been caused by squirrels chewing on wires, birds flying into transformers and the occasional summer blackout. Every single one of them, plus many other possible threats, can result in downtime that cripples Web services.
But the risk of physical intrusion is just as significant for data center management. According to Data Center Knowledge contributor Jason Verge, thieves actually managed to break into a data center facility in Denmark by cutting a hole in a wall. Verge reported that the perpetrators stole basic equipment, including network cards. While this damage was about as minimal as they could have hoped for, it does raise some questions about how this was allowed to happen.
“How did thieves cut through a wall? How did they get in and out undetected? Why wasn’t security staff aware? What was stolen and how did that hurt the customer?” asked The Data Center Journal contributor Josh Moody. “Security should be multilayered and require multiple points of two-factor authentication along with biometric scanning at every colocation-room door.”
“In many cases, cyberattackers and thieves are after much more than network cards.”
In many cases, cyberattackers and thieves are after much more than network cards. Take the NSA’s data center in Utah: The massive 20-building super complex is home to what the NSA refers to as a “100,000 sq-ft mission critical Tier III data center.”
According to The Atlantic reporter Walter Kirn, the facility was being used, among other things, to house intelligence collected by the NSA. Given the amount of potentially sensitive data, which may or may not be pertinent to national security, it’s hardly surprising that there are an estimated 300 million attempted cyberattacks on the facility every day, according to The Hacker News. Still, that’s an awfully large number of attempted hacks, and if nothing else, it highlights how just how important cybersecurity is in government-operated data center facilities.
Likewise, data centers that house protected health information are ideal targets for hackers. The Identity Theft Resource Center estimated that there were hundreds of health care-related data breaches in 2015. While many of the more infamous incidents entailed elaborate virtual schemes, a cyberattack caused by a physical intrusion of a data storage facility is not outside the realm of possibility.
It might be time to change the locks
Given the undeniable significance of data center facilities as well as possible implications of a physical intrusion – and we don’t just mean downtime here – data center managers need to make sure physical security is as strong as ever. For starters, multifactor authentication at all possible entry points is absolutely essential.
This could entail the use of a one-time password that is sent to a predetermined mobile device each time an employee taps an eID on a card reader. It could also mean fingerprint- and retina-scanning technology, as suggested by Moody. The same goes for internal entryways to sections of a facility where access is limited to a select few employees, as well as doorways in colocation facilities.
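To make the one-time password idea concrete, here is a minimal time-based OTP (TOTP) sketch using only the Python standard library. It follows the general RFC 6238 approach but is illustrative only, not a hardened implementation, and the shared secret shown is a hypothetical example.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# Example with a hypothetical shared secret provisioned to the employee's phone.
print(totp("JBSWY3DPEHPK3PXP"))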
“Mobile device management creates unique identities for mobile devices.”
Stronger data center security also entails securing the mobile devices that may be in use for daily operations such as communication between staff and alert notifications. To this end, mobile device management creates unique mobile identities for devices. This makes them more reliable as authentication tokens in data centers.
Whether you’re running a massive, top-secret facility that houses NSA intelligence, you’re a health care provider that relies on its data center for access to patients’ medical records or you’re a world-leading cloud provider, slacking on authentication is just about the dumbest thing you can do.
Keep your organization’s data – or your customers’ data in the case of a cloud vendor – safe with smart, strong authentication.
The momentum behind cleaning up energy that powers data centers and shrinking battery costs have created an opportunity for a new type of data center technology: smart grid-ready UPS.
The idea is to use energy data centers already store for backup to balance the utility grids, where intermittent renewable energy sources increase load volatility. There is still a long way to go before most electrical grids are made “smart” enough for the idea work, but a convergence of factors has made this an opportune moment for it to move forward, according to the market research firm Omdia.
Electric grids today have “limited ability to store energy, so electricity must constantly be generated to satisfy demand,” Moises Levy, principal analyst with Omdia’s Cloud and Data Center Research Practice, said. “Smart grid is the capability to allow for bidirectional interactive sensing and communication between the utility and the users. This represents a significant opportunity for distributed energy resources to contribute with the electric grid, including UPS and energy storage systems.”
The biggest source of friction in adoption of smart grid-ready data center UPS has been battery technology. The batteries must be reliable, cost-effective, easy to deploy, and environmentally friendly, Levy said. But innovation over the recent years by the electric vehicle industry has driven battery costs down, putting grid-interactive onsite energy storage within data center operators’ reach.
Another major source of friction is the immaturity of the world’s energy markets with respect to smart grids and renewables. New regulations and new market mechanisms must be implemented to allow for greater access to renewable energy and participation in demand response by data centers connected to the grids.
The grids themselves must be upgraded with sensors, data analytics capabilities, and new controls to enable a more fine-grained and dynamic approach to load management.
The wave of government and corporate enthusiasm about sustainability that’s currently rising may drive a larger appetite than before for making the necessary changes.
Asymmetric cryptography, or public key cryptography, uses two keys to establish a secure connection between two entities in a network. The private key is kept only by the owner of the website, the server, or the party you want to communicate with. The public key is distributed among the clients and the user base. Data encrypted with the public key can only be decrypted with the private key. Asymmetric cryptography thus helps protect against man-in-the-middle attacks and attacks where data in transit might be compromised or modified.
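As a minimal illustration of this public/private key relationship, the sketch below uses the widely used Python `cryptography` package (assumed to be installed) to generate an RSA key pair, encrypt with the public key, and decrypt with the private key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a key pair; in practice the private key never leaves its owner.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"data in transit"

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt...
ciphertext = public_key.encrypt(message, oaep)

# ...but only the private key holder can decrypt.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == message
```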
Cloud migration is the process of moving applications into public and private cloud infrastructure to achieve the cloud’s agility, resiliency and scalability and to drive business growth.

Migrating to cloud infrastructure gives a business the ability to change its IT infrastructure according to its requirements.

The cloud model is composed of three service models (SaaS, PaaS, IaaS) and four deployment models: Public Cloud, Private Cloud, Community Cloud and Hybrid Cloud.
Software as a Service (SaaS)
It is a one-stop shop that provides everything you need to run an application. In this model, the application is hosted and managed by the provider, who makes it available to customers over the internet.
The SaaS provides usual components of an on-premises application, including an application layer, data storage, and access via API calls.
With SaaS, the cloud provider is responsible for a number of security concerns, but customers should still ensure the security of their application data and of the endpoints used to access cloud services.
Platform as a Service (PaaS)
Providing the platform online, vendors offer servers, networks, and other system components, and customers cannot see the underlying infrastructure.

This model is suitable for developers who need an application platform. With it, the vendor is responsible for the platform and the physical infrastructure, while the customer takes care of their own implementations built on top of it.
Infrastructure as a Service (IaaS)
With the IaaS service model, compute, network, and storage resources are outsourced to support enterprise operations. The abstracted resources are then “orchestrated” by a set of connectivity and delivery tools.

IaaS is essentially a data center in the cloud: the vendor is responsible for the physical infrastructure and its security, while the cloud user is responsible for everything built on that infrastructure.
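One way to summarize the shared-responsibility split described above is as a simple data structure that a migration checklist script could consume. The categories below are a generalized sketch, not any specific provider’s official responsibility matrix.

```python
# Generalized shared-responsibility sketch for the three service models.
RESPONSIBILITY = {
    "SaaS": {
        "provider": ["physical infrastructure", "platform", "application"],
        "customer": ["application data", "endpoints", "user access"],
    },
    "PaaS": {
        "provider": ["physical infrastructure", "platform"],
        "customer": ["application code", "data", "user access"],
    },
    "IaaS": {
        "provider": ["physical infrastructure"],
        "customer": ["operating system", "applications", "data", "user access"],
    },
}


def customer_checklist(model: str) -> list:
    """Return the items the cloud customer must secure for a given model."""
    return RESPONSIBILITY[model]["customer"]


print(customer_checklist("IaaS"))
```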
Cloud Deployment models
Public Cloud – owned by a cloud service provider and made available to the public via virtual machines (VMs), applications or storage.

Private Cloud – owned and managed by a single organization.

Community Cloud – infrastructure shared by a group of organizations; the deployment can be managed by the community or by a third party.

Hybrid Cloud – a mix of public and private cloud environments that integrates products and services to meet business needs.
Cloud Migration & Security Process Model
The key part is to identify the cloud security requirements, define the architecture, and determine the control gaps based on the existing security features of the cloud platform.
- Identify enterprise governance, risk, and compliance requirements, and legacy mitigation controls.
- Evaluate and select a cloud provider, a service model, and a deployment model.
- Select your cloud provider, service, and deployment models.
- Define the architecture of your deployment.
- Assess the security controls and identify control gaps.
- Design and implement controls to fill the gaps.
- Develop and implement a migration strategy.
- Modify your implementation as necessary.
Cloud vendors secure the underlying infrastructure and provide tools to defend against application-based attacks, such as the OWASP Top 10 risks or automated attacks, and to analyze network traffic.

It is expected that by 2020 most applications will have moved to cloud infrastructure, increasing the chance of exposed vulnerabilities, so it is essential to maintain a high level of security in the cloud.
The COVID-19 crisis has taken away the normal lives of people all over the world. Due to social distancing, people are using their personal devices to communicate with each other more frequently, and more people are working remotely.
Increased use of mobile devices and remote connections directly translates to hackers having more access to sensitive information on both personal and corporate levels, especially for those without a secure connection.
This is a summary of the general cybersecurity trends and examples of cyberattacks that occurred during the first half of 2020—as impacted by the COVID-19 crisis.
Cybersecurity statistics in the first half of 2020
25% of brand impersonations in phishing attacks
Due to quarantine regulations, the use of remote working technology and personal devices has increased. This has inspired hackers to start targeting personal and employee devices with cyberattacks, including phishing.
“Brand phishing,” or brand impersonation phishing entails hackers trying to imitate the official website of a well-known brand: the webpage design, logos, color schemes, and even URL may be near-identical to the original.
Victims of such brand phishing schemes may be redirected to the fake website while web browsing, or intentionally lead to such site through phishing emails or text messages. These fake websites are specifically designed to steal users’ personal information, including payment details.
20% increase in cyber fraud and abuse
Any act that involves deliberate deception for unlawful or unfair gain that occurs online refers to cyber fraud. Some examples of cyber fraud are online credit card theft and the non-delivery of paid products, software, or merchandise that were purchased online.
Fraud would normally die down a bit after the busy holiday season, but COVID-19 and the restriction of face-to-face interactions across the globe have kept cyber fraud crimes active.
The COVID-19 crisis has increased online fraud and abuse by more than 20% in the first quarter of 2020; since the beginning of 2020, 445 million cyber fraud cases have been reported.
200% increase in BEC attacks
Business Email Compromise, or BEC, targets businesses who work with suppliers overseas and conduct online payments or money transfers. Attackers mainly target corporate or publicly available email accounts of high-level employees like CEOs or C-level employees who are related to finance or involved with wire transfer payments.
After the hacker secures the email addresses of company executives, he will trick unsuspecting employees into making online payments and transactions. BEC attackers who perform invoice and payment fraud pose as suppliers, vendors, or customers in order to steal money, using tactics such as hijacking vendor conversations to redirect vendor payments.
From April to May 2020, there has been an increase in BEC attacks by 200%. The attack mainly focused on invoice or payment fraud. These cash-targeted attacks, compared to other types of BEC attacks, involve a much bigger financial loss as they are aimed at business to business transactions.
One example of such larger-dollar fraud was a case that may have caused more than $700,000 in losses. A BEC hacker impersonated an authentic vendor and convinced the employees of a telecommunications provider to change banking details. The Abnormal Security team detected that a legitimate invoice of over $700,000 had been redirected to another account, and prevented the transaction before the payment was made.
BEC attacks are unlike past phishing campaigns that targeted a large number of random people; BEC hackers impersonate a known and trusted figure with authority to mislead specific targets into performing financial transactions. BEC attacks may have been a low profile cybercrime, but their economic costs are becoming increasingly damaging.
Cyberattacks that occurred in the first half of 2020
World Health Organization (WHO)
The WHO is a specialized agency of the United Nations, responsible for international public health. Since WHO is in charge of worldwide health issues, a cyberattack against it is extremely dangerous and can affect all types of people.
From February to March of this year, coronavirus-related email threats from entities disguised as WHO doubled. In fact, a report by WHO shows that phishing attacks increased 15-fold during the first two weeks of March compared with the entire month of January, proving a spike in cyberattacks since the onset of COVID-19.
In one instance, cybercriminals managed to steal patients’ records from Hammersmith Medicines Research (HMR), a UK-based medical facility, and published some of the files on the dark web, demanding a ransom payment. HMR’s Clinical Director Malcolm Boyce stated that the UK medical organization was able to restore its systems without paying the ransom demanded by the hackers, but not before medical questionnaires and passport copies of more than 2,300 patients were leaked on the dark web.
Medical records of patients are highly valuable on the dark web because it contains personal information that hackers are interested in, like a patient’s full name, address, financial information, and much more.
Hackers have also mimicked the WHO’s internal email system in an attempt to steal multiple agency staff’s passwords. Cybercriminals have used the same malicious web infrastructure to target not just WHO but other healthcare and humanitarian organizations as well.
When an organization that handles a large amount of personal information is breached, the risk is huge. Hackers can sell the information they gained to other parties who will use it for illegal purposes. Hackers can also create more effective phishing schemes and commit credit card fraud if they have obtained more details, like the last four digits of credit card numbers.
An Italian email provider experienced a massive data breach in April. Data from more than 600,000 users stolen by hackers is currently being sold on the dark web. The hackers went on Twitter to promote the dark web marketplace where they were selling the company’s data. The hacking group responsible stated that they had been inside the company’s network for more than two years, planted “similar to an APT.”
An Advanced Persistent Threat (APT) attack is an attack deployed over a long period of time. Attackers plan in advance and target large organizational networks that contain valuable data. APT attacks not only steal data but can also sabotage organizational infrastructure or surveillance systems over a long period.
Email.it stated that financial information, business accounts, and paid customers were not stored in the hacked server. However, the company should still be wary and keep their security defenses to defend against such attacks. Because with the stolen data it can be used for espionage and extortion or it may also result in a total site takeover, website defacement, and more.
The largest internet service provider in Austria, A1 Telekom, also experienced a security breach. The company noticed the cyberattack after a month and tried to fix the problem. The malware that hackers sent out had only infected the computers in the company’s office network and not its entire IT system.
Hackers managed to compromise some databases and even ran database queries in order to learn the company’s internal network. Luckily, the complexity of the internal network, which outsiders cannot easily understand, helped the company prevent hackers from gaining access to other systems.
Although it took more than six months to handle the attack, the hackers were not able to get any sensitive customer data. The company was able to clean its network of hackers on May 22 and has since changed all of its employees’ passwords and access keys for all of its servers.
Other companies may not get so lucky in detecting a similar attack or blocking access to sensitive company information. If a company experiences a security breach, the dangers include lost revenue, long-term damage to the company’s brand reputation, online vandalism, and much more.
Although we cannot tell exactly when the pandemic will end, it’s safe to assume that the number of cyberattacks will not easily decrease.
In fact, many security experts predict that the rise in cyberattacks will continue tenfold, as hackers keep taking advantage of economic uncertainty during the COVID-19 crisis. With spikes in social engineering and phishing scams targeting new users within the digital economy, vulnerable individuals and businesses will be exploited.
Hackers are searching for different methods and new ways of gathering sensitive information from individuals and companies. No one is safe, but everyone can learn to better prepare themselves from cyberattacks. Check out Cloudbric’s website security solution for complete web protection.
1 - Getting Started with Microsoft Project
Topic A: Identify Project Management Concepts
Topic B: Navigate in the Microsoft Project Desktop Environment
2 - Defining a Project
Topic A: Create a New Project Plan File
Topic B: Set Project Plan Options
Topic C: Assign a Project Calendar
3 - Adding Project Tasks
Topic A: Add Tasks to a Project Plan
Topic B: Enter Task Duration Estimates
4 - Managing Tasks
Topic A: Create a Work Breakdown Structure
Topic B: Define Task Relationships
Topic C: Schedule Tasks
5 - Managing Project Resources
Topic A: Add Resources to a Project
Topic B: Create a Resource Calendar
Topic C: Enter Costs for Resources
Topic D: Assign Resources to Tasks
Topic E: Resolve Resource Conflicts
6 - Finalizing a Project Plan
Topic A: Optimize a Project Plan
Topic B: Set a Baseline
Topic C: Share a Project Plan
Actual course outline may vary depending on offering center. Contact your sales representative for more information.
Who is it For?
This course is designed for a person with an understanding of project management concepts, as well as general desktop computer skills, and who will be responsible for creating and maintaining project plans. This course will give you the fundamental understanding of Microsoft Project necessary to construct basic project plans.
To ensure your success in this course, you should have basic knowledge and skills using the Microsoft® Windows® operating system—preferably the most current version. While you do not need to be an expert, some experience and competency with Microsoft Office applications, particularly Word and Excel®, will be useful. Finally, having a foundational knowledge of project management concepts will help prepare you for working with Microsoft Project.
Since the release of Windows 2000, you’ve probably heard a lot about public key encryption. I’ve written articles that deal with the ins and outs of this new security mechanism (Understand the differences between public key and symmetric key; Windows 2000 security-enabled protocols). However, before you can truly understand the Windows 2000 implementation of public key encryption, you need to know how certificates work. In this article, I’ll discuss the basics of what a certificate is in Windows 2000 and how to manage certificates.
What’s a certificate?
I’ve often found that the best way to describe an abstract concept in computing is to compare it to something from the tangible world. We’ve all been to car dealerships and seen the lock box that they use to store the keys to the cars. This box is similar to the certificate server, with the car keys representing individual certificates.
As you might have guessed by this little analogy, a certificate server is a secure server that’s responsible for storing and distributing certificates. And a certificate is essentially the key that makes public key encryption possible. As I’ve explained in other articles, public key certificates are the Windows 2000 mechanism that makes secure communications possible between machines on a Windows 2000 network. These certificates are used in everything from network protocols to network authentication to the encryptable file system.
As with most things in Windows 2000, the primary interface for working with certificates is the Microsoft Management Console (MMC). You can access the management console by clicking Start|Run. At the Run prompt, enter the MMC command. Windows 2000 will now load an empty management console. Now, follow these steps:
- Once MMC has loaded, you’ll have to load the Certificates snap-in. To do so, select the Add / Remove Snap In command from the Console menu. When you do, you’ll see the Add / Remove Snap In properties sheet.
- Select the property sheet’s General tab and click Add.
- You’ll see a list of available snap-ins. Select Certificates from the list and click Add.
- You’ll see a dialog box asking whether you want to manage certificates for your user account, a service account, or a computer account. Naturally, the answer that you give will depend on the task that you’re trying to accomplish. For the purpose of this example, select the My User Account radio button and click Finish. A standard user can manage the certificates for their account, but only an administrator may manage service or computer related certificates.
- Click Close to close the list of available certificates. Click OK to close any remaining windows.
The Certificates snap-in is now loaded into the management console.
When the snap-in loads, you’ll see an entry in the left column for Certificates Current User. If you expand this entry, you’ll see five entries below it. These entries provide a storage place for the various types of certificates. The available certificate types are Personal, Trusted Root Certification Authorities, Enterprise Trust, Intermediate Certification Authorities, and Active Directory User Object. By default, each of these categories can be expanded. If certificates exist in any given category, there will be a Certificates entry below the category. The Certificates entry contains the actual certificates. You can see an example of this layout in Figure 1.
Each certificate contains an extensive amount of information. To see more detail on any given certificate, simply double-click on it. When you do, you’ll see the certificate’s properties sheet. The properties sheet’s General tab contains a basic summary of the certificates purpose. It details such information as who the certificate is from and what its intended purpose is. You can see an example of the General tab in Figure 2.
If you require more extensive information, the Details tab is where you want to be. The Details tab contains just about any information that you could ever want to know about a certificate, such as the serial number, issuer, and valid dates. You can even look at the actual public key that the certificate contains, as shown in Figure 3.
Importing and Exporting Certificates
The Certificates snap-in is more than just a handy way to view the certificates installed on your machine. You can use this interface to manipulate certificates, as well. For example, suppose that you have a user who likes to encrypt files on their local machine by using the encryptable file system. Now suppose that the user gets a new machine. However, the encryptable file system uses certificates for the encryption and decryption process. This means that if you copy the user’s files to the new machine, the files will remain encrypted through the copy process. Once on the new machine, the user will be unable to decrypt the files because the machine lacks the proper certificate. This means that you’ll have to either permanently decrypt the files before attempting to copy them, or you’ll have to copy the associated certificate to the new machine. As you might have guessed, copying the certificate is the preferred method. However, copying the certificate is only part of the process. As a security-conscious administrator, you’ll want to remove the certificate from the user’s old machine to keep the certificate from falling into the wrong hands.
The process of moving the certificate between machines involves using the import and export features. To export the certificate from the old machine, you must begin by locating the certificate in the Certificates snap-in. Doing so can be a little difficult, because a machine may contain hundreds of certificates. To make this process easier, right-click on Certificates Current User and select the Find Certificates command from the resulting context menu. Doing so will launch the Find Certificates utility, which allows you to search by particular aspects of the certificate.
Once you’ve found the certificate, right-click on it and select the All Tasks | Export commands from the resulting context menu. At this point, Windows will load the Certificate Export Wizard. Click Next to get started. When you’re exporting a certificate for purposes such as the one that I discussed earlier, you’ll almost always be exporting private key certificates. As you may already know, private key certificates are password protected for security. If you are trying to export a private key certificate, the wizard will display a warning screen that indicates that it may be necessary to enter a password later on. If you receive this warning, and you know the associated password, select the Yes, Export The Private Key radio button and click Next.
The next screen that you’ll see deserves a little explanation. It asks what format to export the certificate in. DER and Base-64 are intended for single certificates, while PKCS #12 is capable of exporting an entire certificate chain. My recommendation is that unless you know what format to use, go with the default selection. As you can see in Figure 4, you have some options under PKCS #12. You may do things like include all certificates in the path, enable strong protection, and delete the original key if the export is successful.
At this point, you may be asked for a password if you’re exporting a private key. Enter and confirm the password and click Next. Finally, you’ll be asked for the path and filename to export the certificate to. Enter this information and click Next. The following screen will display a summary of the options that you’ve chosen. If this information appears to be correct, click Finish to complete the export process.
Before I tell you how to import a certificate, I should give you a word of caution. Be very careful when importing root certificates. Root certificates are the basis for most certification operations. Therefore, check out root certificates thoroughly before importing them.
With that said, you can import a certificate into any of the five categories I mentioned earlier. To do so, right-click on the category into which you want to import the certificate and select the All Tasks | Import command from the resulting context menu. Doing so launches the Certificate Import Wizard. This wizard isn’t nearly as complicated as the export wizard. It will ask you for the name and location of the certificate and possibly for a password. If you are prompted to enter a password, note that on the password screen there’s an option to make the certificate exportable. If you may ever need to move the certificate to another machine, be sure to check this option. Now, simply complete the wizard and your certificate will be imported.
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the Director of Information Systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it’s impossible for him to respond to every message, although he does read them all.
Explanations of terms and information from the world of IT security
Our knowledge database provides you with valuable information on various topics in the field of IT security. Learn which dangers exist and how you can specifically counter these threats to ward off CEO fraud, ransomware, phishing and the like. In addition, you’ll find an overview of relevant terms in the field of information security.
The abbreviation DDoS stands for Distributed Denial Of Service. A DDoS attack is a type of DoS attack in which several hijacked systems are used to carry out an attack against the target system.
With the establishment of cryptocurrency, the era of a new means of payment has been ushered in. To better understand the crypto miners’ gold rush, we have summarized the most important facts about crypto mining.
Even with the best technical security precautions, every company has a risk factor that is difficult to control: the human one. What exactly is social engineering and how can you protect yourself?
Spear phishing is a cyber attack with extremely malicious intent that is derived from traditional phishing. In a conventional phishing attack, the target persons fall randomly into the attacker’s grid.
Business Email Compromise (BEC) is characterized according to its different forms. In addition to compromising an employee’s email account, methods such as spear phishing or CEO fraud are also used, the latter being preferred by criminals for gaining access to confidential company information or money.
Today, encryption is mainly thought of as an IT term, because data, e-mails, computers etc. are encrypted. But that was not always so. Encryption actually has its origins back in the year 480. And until a few years ago, encryption was primarily used in espionage or in top-secret government communications.
The most important IT news. Read our latest blog posts
Ransomware attacks continue increasing: 20% of all reported attacks occurred in the last 12 months – new survey
Hornetsecurity’s 2022 Ransomware Report found that 60% of attacks came from phishing attempts
A survey of over 2,000 IT pros revealed that a quarter either don’t know or don’t think Microsoft 365 data can be affected by ransomware
The report also found that hackers continue...
TA joins existing investors PSG and Verdane in a co-control partnership alongside the Hornetsecurity management team
HANOVER, Germany; September 6 2022 – Hornetsecurity (the “Company”), a leading international cloud security and compliance SaaS provider,...
The survey, conducted by Hornetsecurity, reveals that organizations activated more Microsoft 365 security features as they were increasingly targeted by cyber-attacks in the last year. Hanover, Germany (21 June 2022) - A global IT security and compliance survey...
|
<urn:uuid:36a8e716-aaba-4e6e-b88b-d1093455ead6>
|
CC-MAIN-2022-40
|
https://www.hornetsecurity.com/en/knowledge-base/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00460.warc.gz
|
en
| 0.948655 | 573 | 2.75 | 3 |
Would you fall for a social engineering tactic? It’s harder than you think to identify them.
Attacks against enterprises and SMBs using social engineering are not only growing more frequent, but they're also increasingly more sophisticated. Enterprises must exercise due diligence in order to keep one step ahead of cybercriminals since they are coming up with even more clever ways to trick people into handing over valuable company data.
Social engineering is the term used for a broad range of malicious activities used by cybercriminals to trick users into making security mistakes or giving away sensitive information.
Any successful cyber-attack that employs social engineering preys on one basic human instinct: trust. According to the 2022 Verizon Data Breach Investigations Report, 82% of breaches involve the human element, whether that be through pretexting, phishing or use of stolen credentials.
All it takes is one email, phone call, or text message that appears to be coming from a recognized person or organization to fall through the cracks. After the deception works and the attack succeeds, the cybercriminals can expose sensitive information, use it to their benefit, or take control of corporate devices, systems and networks.
Suspicious links are so common online that most of us are uneasy about clicking on any links in almost any situation.
You may be thinking, “Surely this can’t still be happening.”
Three billion fraudulent emails are sent out every day to try to compromise sensitive information. And, according to the 2021 edition of Terranova Security’s Phishing Benchmark Global Report, 19.8% of total participants click on the phishing email links.
It can be harder than you think. Try this quiz to see if you can tell what’s fake and what’s real.
Social engineering is so effective and dangerous because people make mistakes.
Successful social engineering scams rely on that knee-jerk human reaction to trust the sender and believe the message. Being busy, not paying close enough attention or complacency can lead to users being too trustful.
The best examples of social engineering are the ones that play all the right notes on a victim’s emotional scale. The social engineering attacks prey on human emotion whether that be fear, greed, curiosity or helpfulness.
Everyone within an organization must know what social engineering attacks look and/or sound like. Otherwise, the risk of data or system exposure through a malicious email link or attachment can increase significantly.
Let us take a closer look at the various forms that cybercriminals can use to package their social engineering attempts.
Phishing encompasses a wide range of devious tactics, including deceptive emails, fake websites, and misleading text messages. They all have the same goal: to steal confidential data belonging to an individual or organization. Phishing attacks are typically successful when they appear to come from a well-known source, trusted acquaintance or organizational entities.
Pretexting is a social engineering technique where a false identity dupes a victim into giving up sensitive information. For instance, a cybercriminal may know that the targeted individual recently bought an item from Apple and pretends to be a company customer service representative to acquire credit card information or other confidential details.
Quid Pro Quo
Quid pro quo scams rely on an exchange of information to convince a victim to act. Often, they offer to provide a service in exchange for a benefit. A common tactic in this category is when a cybercriminal impersonates an IT support employee and calls victims who recently opened a support ticket, promising to fix a virus-related issue if they are provided with login credentials.
Spear phishing is a cybercrime that deploys targeted attacks against individuals and businesses using relevant and well-crafted messages. Hackers collect details about the targeted parties and use that information over email to appear familiar to the victim. Though often used simply to steal user data, spear phishing can also be a means to install malware or ransomware onto someone’s device.
Vishing uses phone calls or voicemail to convince victims that they need to act quickly. Typically, messages will dangle the threat of being subjected to legal action or a criminal attack, such as one urging the victim to reset their banking information because their account has been hacked.
Water-holing targets a group of users and websites they frequent. The cybercriminal looks for a security vulnerability in one of these websites and then infects it with malware. Eventually, a member of the targeted group will be victimized by the malware. This specific social engineering technique is also very hard to detect.
Baiting is both an online and physical social engineering attack that promises the victim something in exchange for an action. This can include plugging in a USB key or downloading an attachment to receive free movie downloads for life. The computer and the network can be targets of malicious software that captures login credentials or sends fake email messages.
Fake malware-removal messages trick victims into paying for a tool to remove viruses or other nefarious software from their devices. Depending on the scam, the criminal can steal the victim’s credit card information or install a different malware or ransomware program onto the computer or mobile device. Keep an eye out for malware emails – nearly 95% of malware payloads are delivered this way.
The reality is that people keep falling for these social engineering attacks. That’s why attackers keep using them: they work. Their methods are constantly evolving, so the techniques we see today will keep changing, hence the importance of continually educating your employees. Training and testing them should be part of your cybersecurity plan.
Present partners with Terranova Security, a Gartner magic quadrant leader, to offer a high-quality customizable training program. Contact one of our cybersecurity experts to learn more!
The right use of technology addresses business challenges and drives business growth in all areas of an enterprise. We hope this blog will offer insight into developing strategies and tactics to enable you to identify those key drivers of growth and keep pace with and anticipate the rapid technology change of today.
|
<urn:uuid:401b3132-b09f-4c5d-8e3e-7e358367428d>
|
CC-MAIN-2022-40
|
https://blog.present.ca/how-confident-are-you-that-you-can-identify-a-phising-scam
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00460.warc.gz
|
en
| 0.92785 | 1,264 | 2.75 | 3 |
Data discovery is a term probably most mentioned in relation to business intelligence and data science. In this context data discovery can be seen as a more experimental and preliminary activity that can lead to a more continuous and integrated form of reporting and predictive analysis when hidden data sources, relationships and patterns are identified.
However, data discovery is useful in other data management disciplines as well.
With the increasing awareness of data security, data protection and data privacy – and the regulatory compliance enforced in this space – it is crucial for organisations to know what kind of data flows and is stored within the organization. While you may argue that this should be available in already existing documentation, I have yet to meet an organization where this is the case. And I get around a lot.
Data discovery is also a component of test data management, and tool vendors package their offerings in this space with capabilities for data masking, data subsetting and data discovery in order to answer questions such as:
- Where are the data elements that should be masked when using production data in test scenarios without violating data privacy regulations?
- How can you subset (minimize) test data sets derived from production (covering several databases) and still have proper relationships covered?
Within Data Quality Management, Data Governance and Master Data Management (MDM) data discovery also plays a role similar to the role in data reporting. We can use data discovery to map data lineage, find potential data relationships where data matching, data cleansing and/or data stewardship might help with ensuring data quality and business process improvement and explore where the same data have different labels (metadata) attached or the same labels are used for different data types.
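As a toy illustration of the discovery step described above, the sketch below scans a few text “columns” for patterns that typically indicate personal data (email addresses and card-like numbers) and flags them as candidates for masking. The column names and regular expressions are illustrative assumptions rather than anything from a specific tool, and real data discovery products go far beyond simple pattern matching.

```python
import re

# Illustrative patterns for two common kinds of personal data (assumed, simplistic).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def discover(records):
    """Return, per column, the kinds of sensitive data found in it."""
    findings = {}
    for column, values in records.items():
        for value in values:
            for kind, pattern in PATTERNS.items():
                if pattern.search(value):
                    findings.setdefault(column, set()).add(kind)
    return findings

sample = {
    "notes": ["Call back john.doe@example.com tomorrow"],
    "payment": ["4111 1111 1111 1111"],
    "city": ["Copenhagen"],
}
print(discover(sample))  # e.g. {'notes': {'email'}, 'payment': {'card_number'}}
```

Columns flagged this way are the natural candidates for the masking and subsetting questions listed above.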
|
<urn:uuid:fa85076a-2f24-40a0-b9eb-3afc02d88971>
|
CC-MAIN-2022-40
|
https://liliendahl.com/2019/07/19/the-role-of-data-discovery-in-data-management/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00460.warc.gz
|
en
| 0.915978 | 337 | 2.796875 | 3 |
Burlington/Seattle. New York. New Jersey. Minnesota. Orlando. San Bernardino. Chicago… and on and on.
How do communities, states, and nations stop mounting violence?
It requires leadership across organizations, communities, states, and nations to actually make changes.
Leaders from all levels of government (and organizations too) have been “talking about preventing violence” and “talking about changes” for years, but in reality people are creatures of habit and rarely change until the pains get so bad they have to go from talking about changes to making changes.
How much more pain will you and your community allow and endure before you start making changes?
What Changes Need to Be Made?
Currently we rely on law enforcement to stop violence, but law enforcement personnel are First Responders. Their primary responsibilities are to respond to crimes and violence, minimize the damages and apprehend those evil individuals that have committed a crime or a violent attack. Law enforcement has done a good job responding and apprehending, but First Responders are not First Preventers.
Making changes starts with these three changes:
First Change: Leaders from organizations, communities, states, and nations must immediately realize First Responders are very different from First Preventers.
Second Change: Leaders from organizations, communities, states, and nations must make (not talk about) immediate changes to establish First Preventers and equip First Preventers to stop and prevent violence BEFORE evil and radicalized individuals escalate and execute their plans of violence.
Third Change: Leaders from organizations, communities, states, and nations need to realize stopping and preventing violence is not about politics or religion or race… it is about intervening and preventing evil doers from killing and ruining the lives of innocent children and adults.
What Is the Difference Between “First Responders and First Preventers”?
It is football season so let’s use a football team analogy. First Preventers and First Responders are similar to Offensive Coordinators and Defensive Coordinators on football teams. To be successful, football teams need both Offensive and Defensive Coordinators. Football teams that invest almost 100% of their budget into a Defensive Coordinator and Defensive Players (First Responders) and their training and tools would clearly not be very successful in winning their games. Based on evidence from post-event reports and based on the number of daily headlines involving violence, most organizations and communities are not successfully preventing mounting violence and they are constantly in “defense” mode EVEN THOUGH almost all incidents and tragedies were found to be preventable. The bottom line is this, it is nearly impossible for a “team” to win their “war or game” if their primary option depends on Defensive Coordinators and Defensive Players who, like First Responders, are constantly reacting and responding to the “other side”.
Why “First Preventers” Make Sense?
Emotionally – 99.9% of people prefer Preventing, yet most organizations and communities do not have First Preventers who are trained and properly equipped to prevent.
Financially – The costs associated with Preventing are a fraction of the costs of Responding. AND the costs associated with First Preventers and First Preventer tools are a fraction of the costs of First Responders and First Responder tools and equipment.
Evidentially – Evidence overwhelmingly reveals most incidents/tragedies were Preventable because the “pre-incident indicators and pieces of the puzzle” existed BEFORE the incident/tragedy. However, without First Preventers and First Preventer tools, the indicators and pieces of the puzzle were not collected, and not assessed and the dots were not connected BEFORE the incident/tragedy.
“Making Changes” Will Stop Violence and Change the World
My plea to Mayors, Police Chiefs, Governors, and Leaders of Organizations is this: please take time to understand the difference between First Responders and First Preventers and contact me immediately to discuss how you can take action. Your First Responders are good at what they do, so now you need a Prevention Specialist like me to help your organization or community implement a proven First Preventer game plan and proven First Preventer tools to immediately start stopping and preventing violence in your community or your organization.
Violence is already bad, and getting worse every day… evidence from prevention failures and prevention successes is overwhelming and clear that preventing violence is possible. Don’t wait until violence gets so bad that it impacts you and impacts the lives of innocent people. And don’t let evil doers and violence change our world, because together we can make changes and change the world in a good way.
Evidence reveals violence will not be stopped with more talk and more First Responders… stop and prevent violence with First Preventers who are trained, and equipped, and ready to PREVENT.
|
<urn:uuid:e6d168f5-0ff9-40e1-be38-78d1babf68cf>
|
CC-MAIN-2022-40
|
https://www.awareity.com/2016/09/26/stop-mounting-violence-first-preventers-first-responders/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00460.warc.gz
|
en
| 0.938639 | 995 | 2.75 | 3 |
In recent times, we have seen many attacks on organizations targeting their data. Data security is one of the essential aspects of an organization, as everything runs on data. Attackers have mostly targeted small businesses these days: small businesses hold less data and have less secure networks, paving an easy way for cyberattacks. To gain in-depth knowledge about cybersecurity, you can undergo CyberSecurity/CyberArk certification training. However, it is very crucial to analyze and follow the tips and recommendations of experts to save the business from attacks. In this blog, you will understand why cybersecurity is essential, along with the recommendations to prevent security attacks.
Why is Cybersecurity important?
All organizations maintain data and databases related to their business, including confidential data that cannot be shared with anyone. While working with sensitive data and databases, it is vital to ensure that the data is not shared with or leaked to outsiders. Any such exposure allows attackers to change, steal or damage the data, which leads to further operational and security concerns.
Cybersecurity is the discipline that has emerged to ensure that the data and information of the organization stay in safe hands rather than in the hands of attackers. Cybersecurity is essential in every organization because they all work on sensitive data, and any mishandling of that data could bring the business down. All we need to do is make sure that we work on the data in the right way, without giving the attackers a chance.
Recommended Steps or tips to prevent from Cyberattacks:
All entrepreneurs and business teams should concentrate on data security, as data is what attackers go after to do malicious acts and damage the reputation of the organization. Security clearly plays a vital role, so are there recommendations or tips to ward off cyberattacks? The answer is yes. Experts have given several recommendations to save the business from attacks. Let us have a quick review of the tips to follow when working with data related to the organization.
- Use highly secured passwords:
All employees in an organization have their own unique set of usernames (logins) and passwords to access its tools and software. Recent surveys reveal that 63 percent of attacks happen through weak, stolen or lost passwords. Employees should create passwords that are not easy to guess, using a combination of letters, numbers and symbols, ideally managed with a password manager. Organizations usually set up password expiration, and employees are required to change or update the password every 30-60 days. However, it is the responsibility of the employees to keep the password safe. A short example of generating such a password programmatically appears just after this list.
- Installation of anti-malware software:
Management often assumes that all employees are aware of phishing emails, but not every employee is knowledgeable and capable of identifying such emails. A recent survey revealed that 30 percent of employees still open phishing emails, unknowingly contributing to cyberattacks. To prevent such attacks, it is essential to have anti-malware software installed on all devices, and it is equally necessary to train the employees on using it.
- Firewall usage:
A firewall is considered the barrier between the data and the attackers, so it is always recommended to use one to help prevent cyberattacks. Small businesses are the biggest victims of cyberattacks. Recent analysis shows that some organizations have started using internal firewalls that add more security to the data in the organization. Now that so many people are working from home, organizations have also started installing firewalls on home networks, which is a plus.
- Create a plan for mobile devices:
According to recent research and analysis, only about half of organizations follow a documented policy for mobile devices. Such policies include the precautionary measures to be taken to prevent cyberattacks. With the advancement of wireless-capable devices like fitness trackers and smartwatches, it is essential that all such devices are added to the policy as well.
- Data back up management:
It is possible to prevent many attacks, but at some point a threat will arrive that we are not aware of. Employees use many different data sources to work and perform operations, including spreadsheets, word documents, files, databases and more. It is crucial to back up all of this data, for example to the cloud, and to make sure that a backup copy is also stored in a different location. Back up the data frequently; this data will be what saves you when there is an attack.
- Train and educate employees or individuals:
It is important to educate and train employees on the different types of attacks and on the policies and practices that keep data from being compromised. As employees work on confidential data, this is a significant aspect for the organization to focus on. Many organizations assign security-awareness training to their employees, helping them understand the importance of data and security.
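Coming back to the first tip in the list above, here is a minimal sketch of generating a strong random password. It uses Python’s standard secrets module; the length and character set are arbitrary illustrative choices rather than recommendations from this article.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    # secrets.choice draws from a cryptographically secure RNG,
    # unlike random.choice, which is predictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A password manager can then store the generated value so employees never have to memorize or reuse it.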
Individuals who are interested in cybersecurity can also learn these policies and apply them in their daily lives. Some of them may want to develop their career in the cybersecurity field and can dive deeper to understand the concepts behind cybersecurity.
Security is an aspect that every organization or business has to think about and focus on. Cybercriminals keep developing their techniques with the latest technologies, so it is essential to ensure that security measures are enabled and followed to run a successful business. As an individual or employee, it is essential to follow the recommendations above to ensure that the data stays secure.
Author Bio: I am Preethi, working as a content writer at HKR Training, with good experience in handling technical content writing and an aspiration to learn new things to grow professionally. I specialize in delivering content on in-demand technologies. You can reach me on LinkedIn and Gmail.
|
<urn:uuid:56ec1c98-e525-40f4-8242-0cd696dc06c7>
|
CC-MAIN-2022-40
|
https://gbhackers.com/cybersecurity-recommendations-to-prevent-from-security-attacks/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00460.warc.gz
|
en
| 0.960509 | 1,263 | 2.671875 | 3 |
Embedded programming has a long history of making devices do what people need. However, it remains largely overshadowed by application programming. When application programmers were embracing relatively high-level object-oriented languages like C++ or Java, or graphical application development environments like MATLAB, embedded programmers were only moving from assembly into C. They were always outnumbered by app programmers. Today, even hobbyists can develop an app using an easy language and share it with the world, while embedded programmers need to have deep knowledge of hardware and firmware, and how to write programs that can execute in often highly resource-constrained environments. With the emergence of the Internet of Things (IoT), the balance can finally shift. Now that many new thermostats, toasters, watches and light bulbs are equipped with processors and connectivity, the market needs more embedded programmers to program these devices and simpler tools to allow these programmers to write code without plunging into the low-level hardware.
What Is Embedded Programming?
Techopedia defines embedded programming as “a specific type of programming that supports the creation of consumer-facing or business facing devices that don’t operate on traditional operating systems the way that full-scale laptop computers and mobile devices do.” The idea of embedded programming is part of what drives the evolution of digital appliances and equipment in today’s IT markets.
In simpler words, embedded programming is designing and writing programs for small “computers” that are embedded within devices other than traditional PCs, laptops or smartphones. It’s that which enables microcontrollers to awaken previously “dumb” devices—e.g. thermostats, lighting systems, parking meters, etc.—and give them some ability to “reason” about their environment.
Embedded Programming and IoT
From an engineering perspective, the Internet of Things describes a network of embedded, microprocessor-controlled devices, where that network is connected directly or indirectly to the web. The three pillars of IoT are, therefore:
- Embedded programming
- Network technology
- Information technology
IoT is soon to be everywhere. Embedded devices are, therefore, soon to be ubiquitous as well.
Here is a brief glance at some of the ways in which IoT is changing industries:
- Industry — Industrial machinery and control, temperature monitoring and cognitive anomaly detection.
- Healthcare — Blood pressure monitors, heartbeat monitors, fitness trackers, embedded medication delivery.
- Aerospace and Defense — Flight control systems, actuation, air and thermal management, engine power monitoring and control.
- Smart Homes — Home security systems, digital cameras, televisions and kitchen appliances.
Diving Into Embedded Systems
Some say that every complex system in the world can be reduced to two conceptual spheres: software and hardware. An embedded system represents, more or less, the intersection of those two spheres.
Exploring Embedded Hardware
A typical embedded development board is divided into five “modules”: the processor, memory, input devices, output devices and bus controllers.
Hardware Components of an Embedded System
Embedded processors can be broken down into two categories: ordinary microprocessors that use separate integrated circuits for memory and peripherals, and microcontrollers that have on-chip peripherals, reducing power consumption, size and cost. Some examples of these include:
- Microcontroller (CPU) — an intelligent device that computes the tasks assigned by the user and is used to build small applications with precise calculations.
- System on Chip (SoC) — comprises a CPU, peripheral devices (timers, counters, etc), Communication interfaces (I²C, SPI, UART), and power management circuits on a single integrated circuit.
- ASIC processor (Application Specific Integrated Circuit) — designed for a specific application by a company or manufacturer.
- DSP processor — removes the noise and improves signal quality in audio and video applications.
Memory is used to store data that’s being used on the device. Some examples of the types of memory used in embedded systems include Non-Volatile RAM (Random Access Memory), Volatile RAM, DRAM (Dynamic Random Access Memory), etc.
Input devices, such as sensors, switches, photodiode, optocouplers, etc., capture data from the outside world to be processed or exported from the device.
Output devices, including LCD (Liquid Crystal Display) or LED (Light Emitting Diode) displays, seven segment displays, buzzers and relays, respond to input events from outside the microcontroller.
The bus controller is a communication device that transfers data between the components inside an embedded system. The most widely used bus controllers are serial buses (I2C, SPI, SMBus, etc.), RS232, RS485 and Universal Serial Bus (USB).
Exploring Embedded Software
Embedded software, sometimes called firmware, is written for the device drivers, operating system and applications, as well as for error handling and debugging.
Software Components of an Embedded System
A device driver is a piece of embedded code written for a specific piece of hardware.
Operating System (OS) or MicroOS
Embedded systems have a range of operating systems, including RTOS (Real-time Operating Systems), mobile embedded, stand-alone and network embedded systems.
Most embedded software is now written in two languages: C and C++. Their syntax is closely related, but C++ adds features such as classes and richer abstractions for modeling real-world applications, while C is prized for its simplicity, predictable performance and direct interaction with the hardware.
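It helps to see the shape of a typical embedded program: an endless loop that reads an input, makes a decision and drives an output. The sketch below uses MicroPython’s machine module purely as an illustration (production firmware would normally be C or C++ as noted above); the pin numbers and the threshold are board-specific assumptions.

```python
from machine import ADC, Pin   # MicroPython's hardware-access module
import time

sensor = ADC(26)            # assumption: an analog sensor wired to ADC channel 26
led = Pin(25, Pin.OUT)      # assumption: an LED wired to GPIO 25
THRESHOLD = 30_000          # arbitrary raw-ADC threshold for this illustration

while True:
    reading = sensor.read_u16()                   # raw reading in the range 0..65535
    led.value(1 if reading > THRESHOLD else 0)    # a simple decision drives the output
    time.sleep_ms(100)                            # run the control loop ten times a second
```

The same read/decide/actuate structure appears, written in C, in everything from thermostats to engine controllers.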
Key Considerations When Creating an Embedded Product
To develop a viable product you should take the following steps:
Step 1. Learn C or C++
This is where many stop since these languages can be hard to learn. However, if you want to write embedded software, you have to learn C/C++ (and maybe eventually Rust).
Step 2. Learn Some Basic Electronics
At least to the extent that you understand what voltage, current, power, resistance and ohms law are.
Step 3. Get the Basic Equipment
Embedded programmers interact with the physical world, so things like a soldering iron, Digital Multi-Meter (DMM) and a hardware debugger/ JTAG adapter (such as an ST-Link, or OLMEX adapter) or a Logic Analyzer would help.
Step 4. Choose a Microcontroller and Toolchain
To make your program run, you’ll need a microcontroller to actually run it, a compiler that compiles the code for the microcontroller and other tools to load the program onto your hardware. An example of the combination of microcontrollers with a toolchain is the STM32 microcontrollers that are supported by the arm-gcc along with openOCD toolchain.
Step 5. Understand the Datasheets
Before actually sitting down to write the first line of your code, you need to understand the (end user) specifications.
Step 6: Examine the Components
Analyze and pick up the components (software and hardware) required to make the product.
Step 7: Design a Product
Designing is always the most critical phase of any development cycle. The peculiarity of embedded programming is that you have to develop the hardware and software parts individually and then integrate them.
Step 8: Develop a Prototype
A prototype is a sample version created to test the concept that’s developed according to the specifications using the selected hardware and software tool.
Step 9: Test the Application
Now that you have a prototype, it’s possible to run test cases to tease out the potential of the application.
Step 10: Deploy the Application
After testing the application, the result is checked in a real environment to realize the Proof Of Concept – a technique used to validate an idea.
Step 11: Support and Upgrade
If needed, you should be ready to provide support and upgrade the application with new features.
And now you’re ready to start changing the world!
|
<urn:uuid:b0fc8367-454c-4479-8223-f3a390bb4786>
|
CC-MAIN-2022-40
|
https://www.iotforall.com/embedded-programming-iot
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00660.warc.gz
|
en
| 0.896478 | 1,700 | 3.78125 | 4 |
Holy basil (Tulsi) has a long history as a medicine for different human disorders. Hence the study team screened different components of Tulsi leaf and found that eugenol, but not other major components (e.g. ursolic acid, oleanolic acid and β-caryophylline), inhibited the interaction between spike S1 and ACE2 in an AlphaScreen-based assay.
Utilizing in silico analysis and a thermal shift assay, the study team also observed that eugenol associated with spike S1, but not ACE2.
Eugenol also reduced SARS-CoV-2 spike S1-induced activation of NF-κB and the expression of IL-6, IL-1β and TNFα in human A549 lung cells.
Importantly, oral treatment with eugenol reduced lung inflammation, decreased fever, improved heart function, and enhanced locomotor activities in SARS-CoV-2 spike S1-intoxicated mice.
The study findings were published in the peer reviewed Journal of Neuroimmune Pharmacology. https://link.springer.com/article/10.1007/s11481-021-10028-1
Holy basil or Tulsi (Ocimum tenuiflorum) is cultivated in Southeast Asia for religious and traditional medicine purposes (Cohen 2014). Tulsi is known to augment immunity that may help fight viral, bacterial and fungal infections.
For example, in a 4-week study in 24 healthy individuals, it has been found that supplementation of 300 mg of holy basil extract is capable of increasing levels of IFN-γ, IL-4 and percentages of T-helper cells and natural killer cells (Mondal et al. 2011).
These are the immune cells that are beneficial in protecting and defending the human body from viral infections. In addition, many cell culture and animals studies have delineated anti-inflammatory, antioxidant, anti-cancer, hepatoprotective, radioprotective, anxiolytic, adaptogenic, metabolic, and anti-diabetic effects of Tulsi leaf (Prakash and Gupta 2005; Baliga et al. 2013; Cohen 2014; Jamshidi and Cohen 2017).
The coronavirus disease 2019 (COVID-19) pandemic that started from the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in December 2019 is still continuing to kill thousands of people daily worldwide.
To date, officially 4.8 million people lost their lives in the world from COVID-19. Although affected individuals manifest a wide array of symptoms, common symptoms of COVID-19 are fever, cough, and shortness of breath (Ledford 2020; Machhi et al. 2020).
Severity to COVID-19 increases with age as well as preexisting conditions, such as hypertension, obesity, asthma, or diabetes. It has been found that severely ill COVID-19 patients suffer from cytokine storm, lung injury and multi-organ failure (Pia 2020).
Although underlying mechanisms are poorly understood, COVID-19 is more lethal in men than it is in women (Mukherjee and Pahan 2021). While vaccination is underway and more than 50 % people in USA are fully vaccinated, a specific and an effective antiviral and anti-inflammatory agent is also needed to treat this viral pandemic.
Angiotensin-converting enzyme 2 (ACE2) is a beneficial molecule as it converts angiotensin II (AngII), a vasoconstrictor, to Ang1-7, a vasodilator (Vickers et al. 2002; Zaman et al. 2002). Since the spike protein on the surface of SARS-CoV-2 binds to ACE2 (Machhi et al. 2020; Stower 2020) to enter into human cells and the spike S1 subunit harbors the receptor-binding domain (RBD), we screened different components of Tulsi leaf and found that eugenol was capable of inhibiting the interaction between spike S1 and ACE2.
In addition, eugenol inhibited the entry of pseudotyped SARS-CoV-2, but not VSV, into human ACE2-expressing HEK293 cells and suppressed spike S1-induced activation of NF-κB and expression of proinflammatory cytokines in human lungs cells. Oral administration of eugenol also decreased lung inflammation, reduced fever, inhibited arrhythmias, and enhanced locomotor activities in an animal model of COVID-19, indicating that naturally available eugenol may be beneficial for COVID-19.
|
<urn:uuid:0437dc90-d57f-47a6-97cb-50cbadfd04cc>
|
CC-MAIN-2022-40
|
https://debuglies.com/2021/11/08/phytochemical-eugenol-extracted-from-holy-basil-and-cloves-is-effective-against-the-covid-19-disease/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00660.warc.gz
|
en
| 0.912193 | 961 | 3.15625 | 3 |
Is your organisation implementing zero-trust correctly? While thousands of enterprises flocked to implement a zero-trust framework during the Covid-19 pandemic, many have struggled to effectively deploy it within their environments.
Research released earlier this year found that while 100% of organisations believe zero-trust architecture is important in reducing cyber risk, only 21% have so far adopted zero-trust as a security model.
There are many reasons why organisations are struggling to deploy zero-trust, but one of the most significant is the fact that organisations are attempting to manage user access to infrastructure rather than to the underlying data itself.
Data-centric security is an essential component for implementing effective user access controls and ensuring that confidential or regulated information stays out of the hands of unauthorised users.
What is data-centric security?
Data-centric security and zero-trust
The term data-centric security refers to a security approach where an enterprise secures access to critical data assets at the data level, rather than ringfencing and protecting at the infrastructure or server level.
Under a data-centric security framework, an organisation catalogues data throughout on-premises and cloud environments; deploys access controls to determine who has access to what information; and monitors that access to ensure no malicious changes are made.
This approach enables security teams to quickly identify if an unauthorised individual starts accessing important files so they can take action to control the incident.
According to the National Security Agency (NSA) data-centric security is essential to implementing zero-trust, as it enables an enterprise to protect critical data assets in real-time, and apply the principle of least privileged access to each access decision.
In other words, if organisations want to implement zero-trust, they need to move beyond the traditional network security mindset of protecting key resources and servers and start identifying and protecting key data assets.
How to implement data-centric security
At a foundational level, data-centric security is about collecting information on the relationship between users, data, and apps. This means understanding the level of data sensitivity across your network and the cloud as well as user access permissions, and access activity.
This information helps you to employ the principle of least privilege and monitor how users interact with critical data assets so it is easier to identify malicious insiders and hackers who’ve bypassed your preventative controls.
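As a conceptual sketch of those two ideas (least-privilege decisions made per data asset, and logging every access so unusual behaviour stands out), consider the following. The resource names, users and log format are invented for illustration; this is not how Varonis or any particular product implements it.

```python
from datetime import datetime, timezone

# Explicit allow-list: who may do what to which data asset (least privilege).
ACL = {
    ("alice", "hr/salaries.xlsx"): {"read"},
    ("bob", "hr/salaries.xlsx"): {"read", "write"},
}

access_log = []  # in a real deployment this would feed monitoring and alerting

def request_access(user, asset, action):
    """Allow the action only if it is explicitly granted, and record the attempt."""
    allowed = action in ACL.get((user, asset), set())
    access_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "asset": asset,
        "action": action,
        "allowed": allowed,
    })
    return allowed

print(request_access("alice", "hr/salaries.xlsx", "write"))  # False: never granted
print(request_access("bob", "hr/salaries.xlsx", "write"))    # True
```

Every denied or unusual entry in such a log is exactly the kind of signal a behaviour baseline is built from.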
The Varonis platform provides a solution to implement these controls, enabling you to build a baseline behaviour profile for every user to detect real-world attacks.
It also offers post-event controls, such as automated roll-back of environment-wide changes, so you can revert changes to users/groups, folder permissions and AD group memberships.
The challenge of data-centric security
While implementing data-centric security is a necessity, many organisations struggle due to the complexity of identifying data within their environments.
According to MongoDB, 80% to 90% of the data collected by modern companies is unstructured, meaning it's not only difficult to discover, but also to classify. Unfortunately, some of this data can be exposed in public-facing assets like APIs, which means threat actors can still get hold of it.
The problem is that many organisations don’t have the expertise they need to discover this information and are unaware of the true volume of data that’s exposed to malicious entities.
To address this challenge, Integrity360 has launched the Managed Varonis Data Security Service, which helps users to integrate data sources and directories, and discover and classify sensitive data on-premises and in the cloud, so that it can be monitored and secured.
This approach enables organisations that don’t have the in-house expertise to discover and classify unstructured data to implement a data-centric security solution with the support of a 24x7 SOC and infrastructure support team.
Integrity360’s team can help configure effective data protection policies, while providing continuous security incident investigation, analysis, and management to ensure you have the ability to detect and respond to data breaches in the shortest time possible.
Guarantee your data security
Zero-trust has the potential to redefine enterprise security and make it much more difficult for attackers to gain access to high value information.
However, data-centric security, with access controls implemented at the data level, is critical to reducing the likelihood of unauthorised access to regulated data.
If you don’t have the internal capability needed to deploy these controls, Integrity360 can provide you with full managed service support so you can gain a complete inventory of your data and stop cyber criminals in their tracks.
Want to find out how our Managed Varonis Data Security Service can help you identify, classify, and protect your mission-critical data?
Contact us today for more information and to request a free data risk assessment to help you to identify and address any potential data security risks within your business. You can also download our Varonis Service eBook HERE and Brochure HERE
|
<urn:uuid:5031e02e-b77c-494e-a642-55902dfe94bd>
|
CC-MAIN-2022-40
|
https://insights.integrity360.com/why-data-security-is-the-key-to-implementing-zero-trust
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00660.warc.gz
|
en
| 0.911887 | 1,050 | 2.65625 | 3 |
Instructional technologist Steven Anderson, and technology facilitator Sam Walker, from North Carolina have created a program for school kids, aimed at teaching safe and smart social media skills. Their goal is to establish an environment within schools that is able to teach students how to behave, and survive in the digital world.
Roughly three out of four teens use social networking sites – it’s imperative that this population knows how to utilize these sites safely and effectively. These social networking websites include Facebook, Twitter, and YouTube. These programs are aimed at teachers as well as students. They believe that social media should be integrated into lesson plans, and that educators should teach social media skills to their students.
Some of the guidelines that they teach include:
- Don’t share secrets
- Protect your privacy
- Be honest
- Respect copyright laws
- Think about the consequences
These are pretty basic rules we all generally try to follow, but that kids may not have thought of. Most young people enter the social networking world at 13 or 14; while their navigating skills may improve quickly, their social judgment may not. Learning these skills at a young age will benefit them when they become post-secondary students and adults.
Technology in the classroom is becoming used more and more. As social media becomes more prevalent in our lives, it makes sense that students are taught how to use it properly. What do you think about teaching social media skills in the classroom?
|
<urn:uuid:fd105056-0362-4401-aff0-01ec4ad48aa5>
|
CC-MAIN-2022-40
|
https://www.faronics.com/news/blog/safe-and-smart-social-media-skills
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00660.warc.gz
|
en
| 0.961093 | 303 | 3.96875 | 4 |
What is the biggest difference between humans and animals? This question has been asked many times, evoking a range of different answers. "Being able to use tools" is one of the most frequently cited differences. However, it was found that in fact there are animals that use objects such as stones or twigs to catch prey, in some cases even shaping these objects before use. So the description of the difference has been amended to "Only humans create tools to make other tools."
Be that as it may, it certainly can be said that the advancement of humanity and the progress of civilization was driven by the discovery and continuous improvement of tools. And in our modern age, many of these tools are being made more convenient, more powerful, and more functional by the application and evolution of electronics technology.
■Cars and electricity have been closely linked from the start
A prime example of this development is the automobile. Even in its earlier stages, the car would not have been possible without electricity. The internal combustion engine requires an electrical spark from a spark plug to ignite the fuel, and without the electric starter motor, even getting the engine to run would be a major undertaking. Without headlights or windshield wipers, the car could not drive at night or in the rain, and without brake lights or winkers, the number of collisions would certainly rise. We can categorize these types of equipment as "electrical equipment necessary for moving the car."
Electrical equipment commonly used in automobiles
On the other hand, the value of the car as a product has increased through the addition of electrical equipment that makes driving more pleasurable, such as air conditioning, car audio systems, car navigation systems, etc. Various sensors are indispensable for electronic systems controlling the engine and other aspects of the car. In recent years, it has become more common to take some of the information provided by these sensors and present it to the driver in an easy to grasp format, thereby contributing to more efficient and better driving.
For example, an indicator that directly shows the actual fuel consumption at any moment helps enormously with realizing fuel economy, and indicators showing the timing for oil changes and other necessary actions help to keep the car in optimum condition. A system that warns the driver when the external temperature drops below three degrees centigrade alerts him or her to the possibility of road surface freezing. Recently, some manufacturers equip their cars with systems that can analyze driving patterns and provide guidance for safe and economical driving.
Tire pressure warnings, anti-lock braking systems (ABS), electronic slip control (ESC), collision prevention systems and similar features that contribute actively to driving safety are being increasingly included as standard equipment. We may call this category "electrical equipment for comfort and safety."
Over the course of the past twenty years or so, the importance of new electronics technologies has increased notably. In order to preserve the earth's environment and resources, improving the fuel economy of cars has become a critical and pressing goal, and electronic systems that directly contribute to better performance are attracting wide interest. Developments in this field began in the mid-1970s, starting with electronic control for ignition and fuel systems. What originally had been performed by purely mechanical means now was put under electronic control, resulting in drastically improved flexibility. Suddenly, it became possible to adjust the amount of fuel supplied to the engine as well as the ignition timing over a much wider range. This in turn enabled designers to successfully combine output performance with cleaner exhaust emissions.
Nevertheless, until about ten years ago, a car with a displacement in the 2-liter class consumed about 1 liter of fuel for every 10 kilometers when driving in an urban environment. By contrast, cars in the same class these days habitually get about 15 to 20 kilometers per liter. The biggest reason why this improvement in fuel economy came about is the introduction of fuel economy standards worldwide, such as the CAFE (Corporate Average Fuel Economy) standard.
If average fuel consumption figures calculated on the basis of every car sold by a manufacturer do not meet certain CAFE standards, the manufacturer's name may be made public, penalties may apply, or a limit may be imposed on the number of cars that can be sold. Because this affects especially manufacturers with high sales figures for large and luxurious cars that tend to consumer more fuel, there is strong pressure on improving the fuel economy of all models in a company's lineup.
■Electronics technologies helping to improve fuel economy
There are largely four different approaches to improve the fuel economy of an automobile. The first is improving the fuel performance of the engine itself. The second approach involves assistive systems or devices that provide improvements in areas where the engine is not good at. Third is the reduction of air resistance while the car is driving. And finally, there is reduction in the weight of the automobile. Two of these approaches are intricately linked to electronics technology.
Electrical equipment for improving the fuel performance of an engine
The first task is improving the fuel performance of the engine itself. In fact, there is not all that much than can be done in this area. The three possible aspects are "improved combustion," "reduced resistance," and "reduced losses." And for each of these, electronics technology presents an effective solution.
A representative example of an effective technology for improving combustion is known as the variable valve lift and timing system. A fuel air mixture is introduced into an internal part of the engine called the cylinder. It is then compressed and made to ignite, allowing the retrieval of kinetic energy. When combustion is finished, the remaining gas is expelled to the outside, and the process starts all over again. During the compression and combustion phases, the cylinder must be tightly sealed, but to allow the introduction of air, the so-called intake valve must be opened, while the exhaust valve must be opened in order to release the exhaust gas. The amount of air that can be introduced and expelled depends on the open/close timing of the valve as well as on the degree to which it is opened, which is called lift. The parameters for optimal valve open/close timing and valve lift differ significantly according to the actual driving conditions of the car.
In combustion engines of about 20 years ago, the valve open/close timing was governed by a mechanical arrangement with a fixed operation scope. If lift and timing were optimized for the relatively low engine speeds prevalent under normal driving conditions, the engine could not really be driven into high rpms, resulting in an engine that was considered to be deficient in power.
This limitation was removed by the introduction of variable valve timing systems, allowing the open/close timing to be adjusted according to engine speed. While early systems used a simple hydraulic mechanism capable of switching only between two stages, this has been progressively replaced by continuously variable systems driven by an integrated electric motor or similar, thereby allowing detailed and continuous control over valve timing. The powerful modern engines of today with good fuel economy would not be possible without this development.
Increasingly, such systems not only control the open/close timing, they also allow adjustment of valve lift, resulting in complex systems known under names such as "continuously variable valve lift and timing." Different manufacturers employ different construction principles, but the use of oscillating cams controlled by stepping motors or similar is the most common approach.
Also, in order to ensure that the open/close timing of valves is always optimal, the condition of various parts of the engine must always be monitored very closely and accurately. For this purpose, a large number of sensors are mounted to provide data about temperature, pressure, and other engine parameters. These data assist the variable system to achieve the best timing for the current situation. The sensor information is also used for integrated control of ignition timing and fuel injection timing by an ECU. The high performance of modern sensors, combined with the high performance of the control systems, is what enables modern engines to deliver both high output power and maintain good fuel economy.
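As a deliberately simplified sketch of that control idea (sensors feed the ECU, which looks up and commands the valve timing best suited to the current operating point), consider the following. The sensor functions, the calibration map and the actuator call are invented placeholders rather than any manufacturer’s actual control strategy, and real ECUs run compiled code on dedicated microcontrollers.

```python
import random

# Placeholder sensors and actuator; a real ECU reads memory-mapped hardware registers.
def read_engine_rpm():
    return random.randint(800, 6000)

def read_coolant_temp_c():
    return random.uniform(20.0, 95.0)

def set_intake_valve_advance(degrees):
    print(f"commanding intake valve advance: {degrees:.1f} deg")

# Toy calibration map: upper rpm limit -> valve timing advance in crank degrees.
TIMING_MAP = [(1500, 5.0), (3000, 15.0), (4500, 25.0), (float("inf"), 35.0)]

def control_step():
    rpm = read_engine_rpm()
    coolant = read_coolant_temp_c()
    advance = next(adv for limit, adv in TIMING_MAP if rpm <= limit)
    if coolant < 60.0:                 # cold engine: keep the timing conservative
        advance = min(advance, 10.0)
    set_intake_valve_advance(advance)

control_step()
```

Real systems close this loop hundreds of times per second and blend many more inputs, but the look-up-and-command structure is the same.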
The next aspect is reduced resistance. The most effective way to achieve this is increasing the precision of the parts that make up the engine. Another effective measure is to substitute electric power for driving accessories that used to be driven directly by the engine. A prime example for this approach is the water pump that circulates coolant between the engine and the heat exchanger. In older style engines, a part of the output was used to drive the pump directly, but by driving the pump electrically, the resistance during engine operation can be reduced.
Designing accessories to operate electrically makes it possible to have them function on demand, that is only when needed. This also helps to reduce resistance. A mechanical water pump itself is not equipped with a means for flow control. Rather, it is always operating and coolant temperature is adjusted by a thermostat. An electric water pump on the other hand can be made to operate only when a change in coolant temperature is required, thereby preventing unnecessary transfer of thermal energy to the coolant. Cars equipped with an automatic function to turn off the engine when idling use a dedicated electric oil pump for generating the required hydraulic pressure while the engine is stopped. This can also be considered as an aspect that contributes to enhanced fuel economy.
■Hybrid: a smarter solution
Finally, consider the possibilities for fuel economy improvement through assistive devices other than the combustion engine. When starting to move from the stopped condition, or when accelerating rapidly, a car requires a high amount of power. However, at other times, such as when driving at a constant speed for an extended period of time, the power required is said to be only on the order of some 30 kW. By selecting an engine with a smaller displacement and therefore better basic fuel economy, and only providing additional power from an electric motor when higher power for acceleration is required, one gets a hybrid vehicle.
Until quite recently, hybrid configurations could be divided into two main types: "using an electric motor to allow the combustion engine to operate always in its efficient range" and "assisting the combustion engine in its weaker range with an electric motor." More complex hybrid configurations have appeared recently, but the basic fact that an electric motor in conjunction with a combustion engine is used to improve fuel economy still applies. In addition, features such as regenerative braking, turning off the engine when stopped, and EV drive mode also help to save fuel.
A new type of hybrid power unit
Hybrid vehicles with simpler configurations than current designs will probably make their appearance before long. In particular for cars with a transverse combustion engine and front-wheel drive, a hybrid configuration where the output of the electric motor is introduced to the final gear reduction unit is expected to become widely adopted. This configuration offers a number of advantages, such as the fact that it is relatively easy to realize also in lower priced models, and weight increase can be kept to a minimum.
It is certain that automotive engines will be designed to interact even more closely with electric motors in future. For example, in the pinnacle of motor sports, the F1 power unit will employ a so-called MGU-H (Motor Generator Unit - Heat) configuration from 2014. MGU-H is a combination of turbo charger and generator. At low revolution speeds, when the engine exhaust power is low, the generator functions as a motor that quickly increases the revolution of the turbo charger. As soon as the turbo charger revolution speed is high enough, the generator produces electricity for charging the battery bank of the drive assist system. Because this allows the combination of even higher fuel efficiency with high output power, the concept is garnering worldwide attention and may come to represent a new racing engine generation.
MGU-H system to be incorporated in F1 engines from 2014
The progress of automobile engines from now on is intricately linked to electric motors, sensors, and microcontrollers. The combination of these elements working together in unison has reached a level that in effect could be called "robotic." The increasing sophistication of car electronics technology makes cars more intelligent and smart, acting as a motivating force towards exploring new frontiers.
|
<urn:uuid:20539084-9cae-4d91-8073-7bb4f1576b83>
|
CC-MAIN-2022-40
|
https://www.mbtmag.com/home/whitepaper/13249639/tdk-corporation-of-america-how-electricity-drives-automobile-progress-part-1-electric-motors-working-behind-the-scenes-for-better-fuel-economy
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00660.warc.gz
|
en
| 0.955496 | 2,444 | 3.015625 | 3 |
The traditional POTS lines that convert audio signals into electronic pulses to deliver from one point to another.
DTMF is the abbreviation for dual-tone multi-frequency signaling. Essentially, dialing a phone number is DTMF. The different tones tell the data where it needs to go.
Federal Communications Commission is the government authority that regulates interstate and international communications, whether they are by radio, tv, wire, satellite, and cable.
A Hosted PBX is a PBX that is in the cloud and connected over IP.
LTE, or Long Term Evolution, is a standard for wireless broadband communication for cell phones and mobile devices.
M2M stands for machine to machine and refers to any communications that occur directly between one device and another.
PBX is the abbreviation for private branch exchange and is a private telecom network that facilitates both internal and external communications.
Short for Plain Old Telephone Service, POTS refers to the analog twisted copper wires that have serviced our telecommunications since inception.
POTS IN A BOX®
POTS IN A BOX® is an LTE/Cellular/Wi-Fi/PSTN/FirstNet-capable router that can enable many combinations of legacy analog wireline in-band Voice, M2M. Data, DTMF, Analog Data Modem Tones, Fax and Alarm System Signals.
PSTN, or Public Switched Telephone Network, is the traditional switched network that has been in use since the 1800s and uses underground copper wires to transmit signals. As the adoption of VoIP and other digital communications grows, the PSTN is replacing the copper wires of analog communications with newer, digital IP connections.
The acronym “SIP” stands for Session Initiation Protocol and refers to a TCP/IP-based network protocol that can be used to establish and control communication connections of several subscribers. SIP is used in VoIP telephony to establish the connection for telephone calls.
Telecom is the abbreviation for telecommunications.
TELECOMMUNICATIONS (see also: telecom)
Telecommunications encompasses all communication that occurs over a distance, whether by cable, telephone, telegraph, or broadcasting (i.e. radio/TV).
UNIFIED COMMUNICATIONS (UC)
This term describes the combination of several forms of communication into one solution. Text message chat, video conferencing, voice calling, and web conferencing all fall into this bucket.
UNIFIED COMMUNICATIONS AS A SERVICE (UCaaS)
UCaaS is the process of providing unified communications through a cloud delivery process.
VoIP, also known as Voice over Internet Protocol, is the process of converting voice signals into digital signals so they can be sent over an internet connection instead of traditional phone lines.
Wi-Fi is the wireless technology that enables computers, laptops, cell phones, tablets, and any other internet-capable device to connect through a wireless connection rather than a hard-wired one.
|
<urn:uuid:3c1e49fb-64ea-4fda-99f4-386ea1c3be9e>
|
CC-MAIN-2022-40
|
https://mixnetworks.com/terminology-dictionary/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00660.warc.gz
|
en
| 0.893211 | 684 | 3.03125 | 3 |
Everyone has at some point faced a situation where they had to choose what to do, and that question of "what to do" is answered by making smart decisions under different conditions.
In childhood those decisions might have been simple: what to wear, what to eat, whether or not to go to school. As an adult, the decisions become more serious because they are directly or indirectly tied to profitability, and they become more complex still when they are made from a business perspective.
For example, a marketing manager wants to identify the customers most likely to purchase more products, while a loan manager wants to identify risky loan applications in order to lower the loan failure rate.
A decision tree is a predictive modeling technique and a decision-support tool. It uses a tree-like representation of decisions and their possible outcomes to draw accurate inferences.
The basic goal of a decision tree is to build a model that predicts the value of a target variable by taking a set of attributes into account and making decisions accordingly.
The decisions generally take the form of if/else conditional statements. The deeper the tree, the more complex the rules and the more closely the model fits the training data. Decision trees are among the most popular methods in supervised learning and have a wide range of applications. A tree has a flowchart-like structure, constructed algorithmically to identify how the data should be split under different conditions.
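As a minimal illustration of those if/else splits in code, the sketch below fits a small tree with scikit-learn; the toy data and the max_depth value are arbitrary choices for demonstration.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data: [age, hours_of_exercise_per_week] -> fit (1) / unfit (0)
X = [[25, 6], [42, 0], [31, 3], [55, 1], [19, 8], [47, 5]]
y = [1, 0, 1, 0, 1, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The learned tree is just a set of nested if/else rules on the features.
print(export_text(model, feature_names=["age", "exercise_hours"]))
print(model.predict([[36, 2]]))  # predict for a new person
```

Printing the tree with export_text shows exactly the nested rules the model has learned.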
In terms of structure, the flowchart contains a root node, where model building starts; internal nodes, each representing a test on a feature; branches, showing the outcomes of those tests; and leaf nodes, each holding a group of observations with the same value, formed after decisions have been made on all relevant attributes.
Decision trees are widely used in regression and classification problems. They build automated predictive models with many applications in machine learning, data science, data mining, and statistics. Tree-based models deliver high accuracy and stability while remaining highly interpretable, which makes them easy to understand, and unlike linear models they capture non-linear relationships quite well.
A decision tree is also very fast compared with other techniques. Its main limitation is overfitting, which arises as trees grow deep and complex; to overcome this, we can use a random forest, which is simply a group of decision trees, each making decisions on a subset of the dataset. This reduces the chance of overfitting while remaining fast.
How does a decision tree work?
In machine learning terms, decision trees are a supervised learning algorithm applied mostly to classification and regression problems. They work with both continuous and categorical variables: the entire population or sample (the dataset) is divided into a number of sub-populations based on different attributes.
Decision tree algorithms identify the most significant variables and the split values that best separate the data into further sub-populations.
The image below shows the workflow of a decision tree: the data is divided into training and test datasets, decision tree algorithms are applied, and model performance is evaluated afterwards.
Workflow structure of the Decision Tree
Let me explain with a very simple example: say we want to check whether a person is fit or not (the root of the tree). There are parameters, or features (the internal nodes), on which decisions are taken (producing the branches of the tree). Suppose the first parameter checks whether a person under 30 years of age is fit or not.
The first split is made on that parameter; further splits require other parameters, such as whether the person eats a lot of food or exercises in the morning, and so on. At the end we get the results (the leaves of the tree: everything that is not a root or a branch).
Leaves are the final decisions and do not split further; the tree has decided whether a person is fit or not. This can be clearly understood from the working chart of the example below.
Example of the Decision Tree
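The same example can also be written directly as nested if/else rules. The hand-coded sketch below simply mirrors one plausible reading of the flowchart above; it is not produced by any training algorithm.

```python
def is_fit(age: int, eats_a_lot: bool, exercises_in_morning: bool) -> str:
    """A hand-built decision tree for the 'is this person fit?' example."""
    if age < 30:
        # Branch for people under 30: the deciding attribute is diet.
        return "unfit" if eats_a_lot else "fit"
    # Branch for people 30 and over: the deciding attribute is exercise.
    return "fit" if exercises_in_morning else "unfit"


print(is_fit(age=24, eats_a_lot=False, exercises_in_morning=False))  # fit
print(is_fit(age=45, eats_a_lot=False, exercises_in_morning=True))   # fit
```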
A decision tree has its own representation for solving a problem; as mentioned above, it contains a root, branches, internal nodes, and leaves. The following steps build the tree representation (a minimal sketch of the procedure appears after the list):
Choose the best attribute and place it at the root of the tree.
Split the dataset into subsets, such that each subset contains records with the same value for the chosen attribute.
Repeat steps 1 and 2 on each subset until you reach leaf nodes for all branches of the tree.
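Here is a bare-bones sketch of that recursive procedure. It scores candidate splits with Gini impurity, which is one common choice (CART uses Gini, while ID3 uses information gain), and it omits stopping criteria and full tree construction for brevity.

```python
from collections import Counter


def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())


def best_split(rows, labels):
    """Find the (feature, threshold) pair that minimises weighted impurity."""
    best = None
    best_score = float("inf")
    for feature in range(len(rows[0])):
        for threshold in {r[feature] for r in rows}:
            left = [l for r, l in zip(rows, labels) if r[feature] <= threshold]
            right = [l for r, l in zip(rows, labels) if r[feature] > threshold]
            if not left or not right:
                continue  # a split must send data down both branches
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if score < best_score:
                best, best_score = (feature, threshold), score
    return best
```

Applying best_split to each new subset in turn, until the subsets are pure or a stopping rule fires, is exactly the recursion described in the steps above.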
Analysis of Decision tree
Decision tree as a classification tree or regression tree
The loan manager mentioned above is a simple example of classification: loan applications are classified as safe or risky on the basis of a set of attributes, where the attributes are the real or possible events on which the decision depends. This is the decision tree acting as a classification tree.
Classification is basically a two-step process: first a learning step, in which a model is built on a given set of training data, and then a prediction step, in which the model is used to predict the response for new data.
Sometimes a decision must be made on continuous data, where the target variable is a real number: for instance, predicting the price of a product from the cost of the raw material used to manufacture it, or estimating a customer's salary from the spending, job, location and other information provided on an application form. Here the target variable is a real value, or part of the dataset used to predict it is continuous.
This is the decision tree acting as a regression tree. A regression tree takes observations about the various features of an object and trains a model to predict a meaningful continuous output.
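As a small illustrative sketch with toy data and an arbitrary depth, a regression tree can be fitted with scikit-learn in much the same way as a classification tree; each prediction it returns is the mean of the training observations in the corresponding leaf.

```python
from sklearn.tree import DecisionTreeRegressor

# Toy data: raw-material cost -> product price
X = [[10], [15], [22], [30], [41], [55]]
y = [24.0, 31.5, 44.0, 61.0, 80.5, 108.0]

reg = DecisionTreeRegressor(max_depth=2, random_state=0)
reg.fit(X, y)

# Each prediction is the mean target value of a leaf (a region of the data).
print(reg.predict([[18], [48]]))
```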
Decision trees need careful data preprocessing: a dataset may contain many attributes we do not actually need, and because every attribute can contribute to the splits the algorithm makes, it is important to clean and prepare the data so that irrelevant attributes do not produce unwanted results.
Similarities and Differences between the Regression and Classification tree
Let's discuss the primary differences and similarities between regression trees and classification trees. Regression trees are used when the target variable is continuous, and the value assigned to a terminal node is the mean response of all the training observations that fall in that region.
If a new observation falls in that region, the prediction is that mean value. In contrast, classification trees are used when the target variable is categorical, and the value assigned to a terminal node is the mode of all the training observations in that region; a new observation falling there is predicted as that mode.
As for similarities: both trees split the predictor space into distinct, non-overlapping regions using a recursive binary approach, i.e. splitting starts at the top of the tree with all observations in a single region and then divides the space into two new branches. Splitting continues until a user-defined stopping criterion is met, producing a fully grown tree.
Decision trees have some advantages and disadvantages
Implementing decision trees in machine learning has several advantages:
As seen above, decision trees can work with both categorical and continuous data and can generate multiple outputs.
Decision trees are easy to interpret and understand; even someone from a non-technical background can follow a prediction from the tree's pictorial representation.
The model produces interpretable, accurate results, and a tree's reliability can be tested and quantified.
Decision trees require less time for data preparation because they do not need dummy variables, data normalization, replacement of missing values, and so on.
They also shorten data exploration: finding the most important variables and their relationships with other variables, and creating new features that strengthen the target variable.
Decision trees are helpful in data cleaning as well; the process takes much less time than with other modeling techniques because trees are not strongly affected by outliers or missing values, up to a point.
Decision trees are considered a non-parametric method: they make no assumptions about how the data is distributed in space or how the classifier should be structured.
Non-linear relationships between features do not degrade the performance or efficiency of trees.
Now let's look at some disadvantages of decision trees:
When dealing with categorical data that has many levels, the information gain becomes biased in favor of the attributes with the most levels.
When a dataset has attributes with many interrelated levels, the calculations become more complex.
Decision trees often struggle with overfitting: an over-complex tree may not generalize the data well. Constraining the number of samples at a leaf node or setting the maximum depth of the tree minimizes this problem (see the sketch after this list).
Very small changes in the data can produce a completely different tree. This is termed variance, and it makes decision trees unstable; concepts such as bagging and boosting were introduced to address it.
Compared with some other modeling techniques, a single tree can give relatively low prediction accuracy.
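The sketch below illustrates the two standard remedies mentioned above: pruning-style constraints on a single tree, and a random forest that averages many trees. The dataset and parameter values are arbitrary and only for demonstration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Constrain tree growth to curb overfitting and variance.
pruned_tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=5, random_state=0)

# A random forest bags many trees, each trained on random subsets of rows and features.
forest = RandomForestClassifier(n_estimators=100, random_state=0)

print(cross_val_score(pruned_tree, X, y, cv=5).mean())
print(cross_val_score(forest, X, y, cv=5).mean())
```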
I hope this has given you an idea of the basics of decision trees and inspires you to study them more deeply. Decision trees belong to the class of supervised learning; related methods such as random forests and gradient boosting are also widely used for solving data science problems. Decision trees are mainly used for regression and classification problems. For more blogs on analytics and new technologies, read Analytics Steps.
|
<urn:uuid:45b861c5-3ff3-480e-b8d1-638bdb867475>
|
CC-MAIN-2022-40
|
https://www.analyticssteps.com/blogs/decision-tree-machine-learning
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00660.warc.gz
|
en
| 0.943159 | 2,177 | 3.28125 | 3 |
Kids today are being raised in a never-before-seen environment. Children of the 21st century have grown up in an all-digital world, and nobody is completely sure what the consequences will be. Click or tap to see why some experts are skeptical about kids’ cell phone use.
But the phenomenon of “digital natives” hasn’t been lost on scientists. In fact, one team of researchers has put together a study surrounding the effects of technology on developing minds.
If you have kids or grandkids, you won’t want to miss this important study about how screens are affecting children’s brains. Plus, we’ll show you how you can set reasonable limits for media usage.
Too much of a good thing?
According to a study at Cincinnati Children’s Hospital Medical Center, children who get more screen time than recommended show key differences in brain development compared to kids who spend less time with technology.
These changes occurred in the parts of the brain associated with language and self-regulation, and it’s currently unknown how these changes will affect development later in life.
The study involved multiple tests and brain scans of 47 local children between the ages of 3 and 5. The children were all verified as healthy before signing on, and the study was able to map how screen use potentially affects brain processing speeds, among other factors.
Researchers provided parents screening tools for the study, which assigned number scores depending on each child’s screen time. The higher the score, the more time children spent on devices.
The kids with higher scores showed significantly reduced expressive language and processing speeds. Higher scores also correlated with reduced white matter in the brain, which affects organization skills and impulse control.
This comes on the heels of a similar study from Canada that explored how screen time can negatively affect the attention spans of preschoolers. The Cincinnati study can be found in the JAMA Pediatrics journal.
What can I do to keep my kids safe?
The American Academy of Pediatrics (AAP) has screen time guidelines for kids and recommends children younger than 18 months avoid all screen-based media with the exception of video chatting. Children aged 2-to-5 should limit screen time to up to an hour per day.
The experimental group of children in the study exceeded these guidelines and spent more time with screens than recommended, while the control group fell within recommended guidelines. The study’s findings do support the AAP’s screen limit recommendations.
Aside from limiting screen time, a valuable tool you can use with your older kids is simply talking to them about what constitutes healthy productive screen time. A great way to do this is to discuss reasonable limits and set rules that keep them safe.
We’ve put together a safety contract you and your kids can sign that outlines appropriate internet behavior and rules. Click or tap here for Kim’s tech safety contract for kids and parents.
Listen to Kim talk about the effects of technology on young children in this episode of Consumer Tech Update.
|
<urn:uuid:9b10d62c-b076-419f-9feb-623e1339beb6>
|
CC-MAIN-2022-40
|
https://www.komando.com/technology/too-much-screen-time-changes-kids-brain-development/611654/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00660.warc.gz
|
en
| 0.943962 | 620 | 3.5 | 4 |
Blockchain looks to be one of those up and coming technologies that is constantly being talked about. Many of the largest IT companies – IBM, Microsoft, and Oracle to name few – plus a not-for-profit or two are heavily promoting blockchain. Clearly, there is intense interest, much of it fueled by exotic-sounding cryptocurrencies such as Bitcoin and Ethereum. The big question I get asked – and analysts are supposed to be able to answer the big questions – is “What can I use blockchain for?”
To begin with, the best applications of blockchain are those that require an authenticated source. Blockchain provides (sort of) immutable proof of the existence of something and transactions involving it. This is what made it attractive as a currency. It was hard to create a forgery of the “coin” while supporting changes of ownership.
Another indicator that blockchain may be useful for an application is lack of a central authority to manage transactions. Blockchain allows for participants to interact as peers without a clearinghouse to moderate the transactions. Credit cards, for example, are transactions between consumers who want to buy something and merchants who want to sell them. Visa, American Express, Discover, and Mastercard enable these transactions by clearing them between the banks of the buyer and seller. Without the clearinghouse, credit cards wouldn’t work. In the case of Bitcoin, changes in ownership are recorded in the blockchain that underlies the currency and distributed to all parties participating in Bitcoin.
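A toy sketch of that ledger idea is shown below: each block stores the hash of its predecessor, so altering any past transaction changes every later hash and is immediately detectable by anyone holding a copy. This is only a skeleton for illustration; it ignores consensus, digital signatures and distribution across peers.

```python
import hashlib
import json


def make_block(transaction: dict, previous_hash: str) -> dict:
    """Create a block whose identity depends on its payload and its parent."""
    body = {"transaction": transaction, "previous_hash": previous_hash}
    block_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": block_hash}


# A tiny ledger recording changes of ownership of a part.
genesis = make_block({"part": "P-100", "owner": "factory"}, previous_hash="0" * 64)
transfer = make_block({"part": "P-100", "owner": "distributor"}, genesis["hash"])

# Tampering with the first block would break the link to every block after it.
print(transfer["previous_hash"] == genesis["hash"])  # True
```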
So, the best blockchain applications will be those that require authentication and lack a central authority to grant it. Some examples of this type of application are:
• Material supply chains. Blockchain holds a lot of promise to keep counterfeit parts from entering the supply chain. The blocks in the chain represent parts that can be transferred from owner to owner, producing a history of where each part originated and where it has been as it moved through the supply chain. A hash can identify the part, and the ledger is agreed to by all participants in the supply chain since they all hold a copy of it.
• Transportation. Similar to material supply chains, blockchain holds the potential to help track shipments along a series of routes, even while the shipments change hands between different carriers.
• Smart contracts. Blockchains can represent a contract, its amendments, and agreement to the final document. Unlike many electronic contracts, all parties would have a complete copy of the entire history of the agreements made within the contract, and it would be hard to dispute the “signatures” later. The contract can also be agreed without a third party such as DocuSign, and without the potential for forged signatures.
• Professional credentials. It is not news that job seekers sometimes inflate or falsify their academic credentials; in some cases they go so far as to claim doctorates they never earned. Now imagine how easy it might be to claim technical credentials conferred by training organizations that may not exist forever. There are also professions, such as medicine, dentistry, and law, where a constant stream of new learning is required to maintain a license. In all of these cases, blockchain could be used to create a trustworthy method of verifying credentials that can easily be shared with anyone to prove their authenticity.
• Personal identification. Almost everyone has had their email account hacked at some point, which is an example of someone stealing an aspect of an electronic identity. The same is true for credit card numbers, social security numbers and other forms of personally identifying information. If personal identification were handled on a blockchain, this would be much more difficult, since thieves would have to steal something that is never shared online. Blockchain holds the promise of online authentication that is harder to hack than even two-factor authentication.
Some of these are purely speculative, though plausible. Others, especially the professional credentials, transportation, and material supply chains applications are already under development or released.
Like so many interesting technologies, blockchain is attracting hype that is a bit early and probably overstates its capabilities. We shouldn’t let the hype undermine blockchain’s potential, nor dissuade developers from exploring its usefulness. There are many instances where verification of transactions is hard but necessary and a central authority doesn’t exist or is undesirable. These are just the early emerging applications, the low-hanging fruit, and it is hard to predict the good ideas developers will come up with for blockchain.
|
<urn:uuid:a46a37d0-d6d6-405a-b74f-5a7516af6d75>
|
CC-MAIN-2022-40
|
https://amalgaminsights.com/2018/02/26/blockchain-what-is-it-good-for/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00660.warc.gz
|
en
| 0.957733 | 895 | 2.96875 | 3 |
Social engineering is a tactic used by cyber criminals to manipulate individuals into giving up confidential information such as social security numbers, credit card numbers, and passwords.
In the cyber security world, the weakest link in the security chain is the user, which is why people are the target when it comes to social engineering. It doesn’t matter how many security measures you have in place: you can have locks on your doors, an alarm system, the latest firewalls, and network or security monitoring tools, and all it takes to hack into your network is to trick a user into clicking on a malicious link they think came from a social media site.
Understanding How Social Engineering Works
Social engineering is responsible for many recent major attacks, from the Sony Pictures hack to the White House. Attackers will take whatever means necessary to break into a company's network and steal information, and the most successful method by far is social engineering. Criminals will sometimes spend weeks or even months researching companies and their employees on social media such as LinkedIn, Twitter, or Facebook before ever coming in the door.
In your workplace, how often have you heard “Could you hold the door, please? My hands are full” or “I forgot my badge”? Even though the individual may not seem suspicious, this is a very common tactic used in social engineering. On the phone, a social engineer might call and pretend to be a trusted person (law enforcement, a co-worker, IT support, a bank auditor, etc.).
4 Most Common Social Engineering Attacks
Phishing is the most common technique used in social engineering: convincing people to open emails or attachments infected with malware. Criminals usually start by creating a web page that looks like Outlook, Amazon, or another trusted service. They then send a crafted email to the company without targeting a specific user. Clicking on any link in these emails takes users to a login page asking them to provide their credentials, which eventually leads to requests for credit card details or other potentially sensitive information.
As a precaution, never open links or attachments from unknown sources. When in doubt, it is best to report the message; this reduces the risk of getting compromised and raises the level of awareness.
Pretexting is another form of social engineering in which attackers pretend to be someone else to obtain sensitive information. Pretexting can be used to create a whole new identity, which is then used to manipulate users. For instance, a criminal may call claiming to be from the HR department and ask you a few questions. Once the criminal has the information he wants, he can sell it to people who may use it to steal your company’s assets or even sue you.
Tailgating usually starts with a criminal striking up a friendly conversation to talk their way into a restricted area of your business. It can be as simple as an employee opening a door and holding it open for another person, without any proof that the person they let in is authorized to enter.
Baiting is simply offering users something for free. An attacker might offer free movie or music downloads which, of course, contain malicious programs. In another variation, an attacker leaves an infected USB flash drive in a public place, hoping someone will pick it up and plug it into their device.
Protecting Your Business Against Social Engineering
Social engineering should be a concern for organizations of any size big or small. Therefore, prevention and education play a key role in avoiding incidents. Integrity can assist and support your organization with a customized security bundle that addresses these common threats. The goals are to minimize your risk associate with these threats, reduce the likelihood of a security breach, help your people become "protectors of information" and demonstrate due diligence on behalf of your organization related to security compliance.
Integrity's Information Security Advisor and dedicated Security Services Team are ready to assist you.
Contact us for more information.
|
<urn:uuid:82f1bada-0cd8-42e3-a2c2-783a560451d9>
|
CC-MAIN-2022-40
|
https://blog.integrityts.com/social-engineering-is-your-company-at-risk
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00660.warc.gz
|
en
| 0.946483 | 804 | 3.078125 | 3 |
Rapidly evolving digital technology is central to society, bringing with it several opportunities, challenges, and criticisms. Sustainability ranks high because of the tech industry’s impact on climate change, electronic waste (e-waste), water waste, and other notable problems. For instance, digital technology consumes great amounts of electricity, and many tech products end up as e-waste.
However, there are steps data centers and tech companies can take to improve sustainability, including the use of immersion cooling. The sooner you invest in sustainable technologies, the more you’ll do for the planet—and for your bottom line. Let’s explore how digital technology impacts the environment and how you can make a difference while saving time and money.
Digital Technology Uses Polluting Electricity
The digital economy is on an unsustainable track to use excessive amounts of electricity. The power sources are often dirty and emit vast amounts of carbon. Data centers alone account for a sizable chunk of total electricity use, although the numerous digital devices used throughout our economy are adding to this disaster.
It takes more electricity to manufacture devices than to use them, and tech manufacturing places a denser demand on the grid than other industries. For example, producing a regular phone emits 55 kilograms of carbon, and the processors behind data centers are far more resource-intensive. In fact, digital technology may account for a fifth of total electricity production in 2025.
While much of this power goes to obvious uses like servers and storage devices, a substantial amount supports tasks like cooling. Air-cooling systems are inefficient and can use a larger share of a data center’s electricity than the IT equipment itself. Therefore, upgrading your data center’s cooling method to liquid immersion is a practical way to improve tech sustainability.
As technology advances, new challenges will arise. For instance, blockchain and the metaverse require more computational and electrical power than predecessor technologies—the cryptocurrency Bitcoin uses as much power as entire countries. Tech companies must start adopting more sustainable practices that use renewable sources of energy.
Technology Produces E-Waste
Digital technology also produces unsustainable amounts of e-waste. While digital products represent only 2% of the products we buy, they are densely packed with toxins and produce 70% of toxic waste. Outdated devices get thrown into landfills and other waste sites, often releasing toxic materials into the environment. Data centers also produce a particularly large amount of e-waste because many of these facilities lack a recycling program.
To minimize e-waste, get the maximum amount of use out of each part. With newer and faster digital devices coming out often, it’s common to replace old parts rather than repair them. However, repairing parts can extend their life and use. You should also take steps during device installation to protect parts. For example, the environmentally friendly cooling oil in liquid immersion systems protects IT infrastructure against corrosion and wear.
Measure the amount of digital products going into your facility against the amount going out for disposal, recycling, and repair. Also, aim to minimize the use of products containing environmentally sensitive materials. Equipment designed from the start to be readily recycled prevents e-waste.
Digital Technology Wastes Water, Minerals, and Land
Using digital tools also wastes scarce natural resources including water, minerals, and land. Water is of utmost concern, with much of its consumption coming from electricity generation. Companies in many industries are starting to pay attention to their water use, and data centers should too—they already use more than half a billion cubic meters of water each year. Finding a system that saves both water and energy is essential.
Liquid immersion removes server heat more efficiently than previous cooling systems and runs servers with fewer resources. Liquid-cooled data centers also use half the electricity of air-cooled ones, eliminating much of the water waste while improving performance. Green Revolution Cooling’s (GRC) liquid immersion can cool even high-density installations without a water chiller, which results in massive water and energy savings.
Land and minerals are also endangered by digital technology. Land is initially mined for minerals used to build digital equipment, but it too is ultimately consumed—data centers need more land as they grow. The e-waste produced often goes back into the land rather than being recycled. Both the mining and disposal of uncommon substances for digital technology can occur under hazardous conditions. Poor countries usually suffer the most.
Instead of degrading the world’s water, land, and minerals, use sustainable technologies like liquid cooling that preserve resources. GRC’s immersion cooling uses less real estate than conventional data centers and contains long-lasting hardware. By upgrading, you’ll be protecting the earth’s scarce assets.
Invest in Environmentally Sustainable Practices Early
Companies should modify their digital behaviors to reduce their unwanted effects on the environment. The sooner you start, the better results you’ll see over the long term. An easy first step is to put environmental responsibility policies in place. These will clearly define your standards and aims and help you track key performance metrics to ensure your policies are working.
Next, show your green credentials by using eco-friendly resources and building to environmental construction standards. For instance, some data centers source their electricity from renewable resources like fuel cells and solar energy, which reduces emissions. Or, you can use energy-efficient products that require fewer resources, such as GRC’s liquid immersion cooling solutions.
Don’t build a data center facility that’s bigger than your needs. Instead, use technologies that meet your current demand and can scale with you as you grow. In addition, recycle the heat produced by servers by redirecting the heat to where it’s needed, such as to warm the building’s air and water.
Finally, you can invest in the newest innovations in battery technology, which are more efficient than older models. This move complements liquid immersion cooling, which has far lighter electrical requirements and therefore cuts back on batteries and emission-producing generators.
Liquid Immersion Cooling Makes Data Centers Sustainable
With digital technology now taking the spotlight for emissions and climate change, it’s time for the data center industry to do its part. Liquid immersion cooling is next-gen cooling technology that submerges servers in an electrically safe fluid.
Immersion uses only 5% of cooling electricity compared to traditional data centers, which instantly eliminates unnecessary resource use and toxic byproducts. It also eliminates half the total electricity used by the data center and cuts out vast amounts of water waste.
GRC’s game-changing solutions improve data centers’ sustainability while reducing costs. For instance, liquid immersion keeps hardware running longer by protecting your electronics. This, in turn, minimizes e-waste, mineral extraction, and disposal into incinerators and other polluting outputs.
Liquid cooling systems also cut electricity and water use tied to global warming, making the cooling solution a favorite among eco-friendly tech companies.
Use Digital Technology Sustainably With GRC
Digital technology isn’t going away, and neither is environmental concern about its effects. Each year data centers take on a more prominent role in the economy, which puts the industry on course for a reckoning with sustainable practices.
There are several ways data centers can become more sustainable, but immersion cooling stands out as the best way to reduce the use of electricity, carbon, water, and other resources. With GRC’s liquid immersion cooling, you can finally use servers, storage, networking, and other digital technologies in a responsible manner. This efficient system cools data centers without the destructive waste of previous cooling methods.
Use this green solution to increase the sustainability and performance of your data center. Contact GRC to get started.
|
<urn:uuid:727266a4-e185-4b66-9c93-a05cd5e46f7f>
|
CC-MAIN-2022-40
|
https://www.grcooling.com/blog/digital-technology-sustainability/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00660.warc.gz
|
en
| 0.921671 | 1,585 | 3.5 | 4 |
Hard hats save lives: but only if people wear them. Discover how intelligent, AI-powered hard-hat cameras are helping to ensure workers in dangerous locations stay safe at all times.
In 1919, the E.D. Bullard Company patented the “Hard Boiled hat”, based on the steel helmets used by soldiers in World War One. These hats were made of steamed canvas and glue and were designed as daily protective headgear for miners.
Over the course of the 20th century, the hard hat evolved to become the brightly coloured, impact-resistant headgear of choice for those working in hazardous environments. And today, many countries have legislated to ensure hard hats are worn at all times in such places – with good reason.
Failing to wear a hard hat in a hazardous workplace can be fatal. In the US in 2012, more than 1,000 people died from head injuries while working. What’s more, data from the US National Safety Council suggests that such fatalities are increasing, with construction, transportation/warehousing and agriculture reporting the highest number of preventable deaths in 2016 and 2017.
Importantly, not wearing safety headgear not only costs lives: it can also cost companies money in lawsuits, compensation and life insurance payouts and lost labour.
Enforcing safety in a busy environment
In spite of the risks, enforcing hard hat use can be challenging, as people can forget or decide not to put them on. This means site managers must keep a constant lookout, but that’s not always easy in a distracting, noisy environment with vehicles, materials and people always moving around, often over multiple levels.
That’s why many organisations are turning to AI-powered solutions that intelligently help identify if people are complying with hard-hat safety rules.
AI cameras watching out for workers
In real life, artificial intelligence technology is being used around the world every single day to make workplaces operate more efficiently, more productively and of course more safely.
The latest hard-hat detection video cameras use embedded AI algorithms to ‘learn’ what a person wearing a hard hat should look like. They then apply this algorithm while scanning a site, rapidly identifying if anyone is working without a hard hat and alerting management teams so they can take action. Of note, when a violation is detected, management teams can also send an auditory warning through the on-site speaker to remind people of the rules.
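In broad strokes, the per-frame logic looks something like the sketch below. The detect_people and alert callables are placeholders for whatever trained detector and notification channel a given camera embeds; vendors do not publish their algorithms, so this illustrates the general approach rather than any product's actual code.

```python
def check_frame(frame, detect_people, alert):
    """Scan one video frame and raise an alert for any person without a hard hat.

    `detect_people` is a placeholder for a trained detector that returns, for each
    person found, a dict such as {"id": 3, "wearing_hard_hat": False}.
    `alert` is a placeholder for the speaker or dashboard notification channel.
    """
    violations = [p for p in detect_people(frame) if not p["wearing_hard_hat"]]
    for person in violations:
        # e.g. sound the on-site speaker and notify the management dashboard
        alert(f"Hard-hat violation detected (person {person['id']})")
    return violations
```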
AI-powered Hard Hat Detection from Hikvision
Hikvision’s AI-powered Hard Hat Detection Cameras are equipped to intelligently detect if workers are or are not wearing safety headgear. The high definition cameras constantly scan your site, rapidly sounding an alert if someone is identified as breaching defined rules. Cameras can also be linked with access control systems, to ensure that members of staff are wearing hard hats from the moment they enter a hazardous location.
Accelerate your business with AI
To find out more about the application of hard hat detection technology, visit the industrial park solution pages for more information.
|
<urn:uuid:8f8ede9f-d414-42f4-885e-0fee41ac83b5>
|
CC-MAIN-2022-40
|
https://internationalsecurityjournal.com/how-ai-powered-cameras-are-keeping-workers-safe/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00660.warc.gz
|
en
| 0.94002 | 703 | 3.140625 | 3 |
Unfortunately, there are still some out there that do not have quality application control software in place to protect themselves from the full range of attacks they may face. However, a new style of cyber attack has the potential to stymie even the highest quality layered security system.
Distributed denial-of-service attacks (DDoS) have been around for over a decade and are an easy way for hacktivists and cybercriminals to temporarily take down a website. In a DDoS attack, hackers use servers to flood a website with fake connection requests. Because each request is bogus, the hosting server cannot complete the connection and wastes resources waiting for responses that never come. This overwhelms the hosting server, which can degrade website performance or even take it offline entirely, according to CNET.
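One basic building block of DDoS mitigation is tracking how many requests each client opens within a short window and dropping the outliers. The sliding-window sketch below is illustrative only; the threshold is arbitrary, and real mitigations layer many more signals on top of simple rate counting.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100  # illustrative threshold, not a recommendation

_recent = defaultdict(deque)  # client IP -> timestamps of recent requests


def allow_request(client_ip, now=None):
    """Return False when a client exceeds the per-window request budget."""
    now = time.time() if now is None else now
    window = _recent[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # forget requests that fell out of the window
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False  # likely part of a flood; drop or challenge the request
    window.append(now)
    return True
```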
However, CSO reported that cyber security experts have become much better at targeting and taking down DDoS attacks. That’s because in the past many attacks came through a number of infected servers known as a botnet. Once the servers were detected, they could be taken offline, thus ending the threat.
A new breed of DDoS emerges
A hacktivist group recently found a way to launch a DDoS attack without the need for a botnet, taking the websites of five major U.S.-based banks offline as a result in the past few weeks. Instead of using a central hub, the Izz ad-Din al-Qassam Cyber Fighters have targeted websites using a more scattered approach that is more difficult to detect, CSO reported.
The group relies on recruits who are instructed to download a program available at two different peer-to-peer file sharing websites. Once the program is on a machine, users can start it with just one click and then continuously send fraudulent server requests. While it is relatively easy to detect a botnet, it is much more difficult for websites to distinguish a genuine connection request from one sent via this program since, to the host server, both look like commands coming from ordinary home networks, according to CSO.
Using this DDoS method, the group has temporarily taken down the websites of Bank of America, Wells Fargo, JPMorgan Chase, Citigroup and U.S. Bank. The group says it is targeting the banks in retaliation for a YouTube video that mocks the Islamic faith.
What layered security methods do you rely on to prevent DDoS attacks from taking a website offline? What steps would you recommend banks and others to take to prevent attacks like the one carried out by the hacktivist group? Leave your comments below to let us know what you think about this issue!
|
<urn:uuid:d6025ee7-622f-4f79-8a8d-52a1293787eb>
|
CC-MAIN-2022-40
|
https://www.faronics.com/news/blog/hacktivists-use-new-tactic-to-take-down-websites
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00660.warc.gz
|
en
| 0.951168 | 540 | 2.53125 | 3 |
Published on October 16, 2018
The GDPR endeavors to balance the capabilities of biometric data with organizations’ responsibility to carefully gather and protect the data.
More organizations are accumulating biometric data through fingerprint and retina scans, facial recognition and even ear-canal authentication. Biometric authentication has the potential to become the most accurate identification method. But this data, like any other type of data, is not immune to security issues. In fact, the stakes are higher for biometrics because the data is so personal. After all, you can cancel and replace a credit or debit card if your account is compromised, but you can’t exactly replace your face if you’re relying on facial recognition.
While there is no current law addressing biometric data, the General Data Protection Regulation (GDPR) covers biometrics in detail. According to the GDPR, biometric data is defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioral characteristics of a natural person, which allow or confirm the unique identification of that natural person.” Biometrics is one of the “special categories of personal data” that can only be used if the data subject has given clear consent.
The GDPR endeavors to balance the advanced capabilities that biometrics affords with organizations’ responsibility to carefully gather and protect the data.
Biometrics has various advantages over other methods of authentication. Because biometric traits are unique to each individual, identification based on them is highly dependable. When implemented as part of a layered authentication system, biometrics dramatically decreases the opportunities for hackers to breach authorized users’ accounts.
Some organizations are using biometric data for progressive innovative research and data analytics. The GDPR does not prohibit this kind of practice, but organizations should provide security warnings. One must have lawful grounds for processing personal data. Organizations have the responsibility to use best practices to securely store and maintain this highly sensitive data.
Security should always be the number one priority. Biometrics has spurred exciting technological innovation, but if the biometric data are more sensitive than the data the identification allows you to access, it may be optimal to use a less demanding method of authentication.
The GDPR requires data processors to employ proper technical and organizational procedures such as one-way coding to keep data secure. One-way coding keeps biometrics templates from being reverse engineered and reconstructed. These procedures can be complex, but by clearly explaining your data-security measures to organizations, you can inspire confidence and help them understand why collecting these data is both necessary and safe.
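As a rough illustration of the one-way idea, the sketch below stores only a salted hash of an enrolled value and compares later readings against it. Real biometric template protection relies on dedicated schemes (for example cancelable templates or fuzzy extractors) because raw biometric scans never match bit-for-bit, so treat this strictly as a conceptual example.

```python
import hashlib
import hmac
import os


def enroll(template_bytes: bytes):
    """Store a salted one-way hash of a template instead of the raw template."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + template_bytes).hexdigest()
    return salt, digest


def verify(template_bytes: bytes, salt: bytes, stored_digest: str) -> bool:
    """Recompute the hash and compare in constant time; the original is never stored."""
    candidate = hashlib.sha256(salt + template_bytes).hexdigest()
    return hmac.compare_digest(candidate, stored_digest)
```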
|
<urn:uuid:aabb947c-a3e4-40bc-9320-e7ecf2485536>
|
CC-MAIN-2022-40
|
https://www.ironmountain.com/blogs/2018/managing-biometric-data-the-gdprs-requirements
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00660.warc.gz
|
en
| 0.896902 | 522 | 2.84375 | 3 |
Digitalization is an irreversible trend shaping the future, with countless benefits for people and industries. However, with almost all aspects of life, work and commerce now online, data protection and data security is a critical concern for all industries. Data theft is the objective of most cybercrime, and data breaches have serious ramifications for companies, including loss of customers and revenue, downtime or brand and reputation loss. Data protection begins with a thorough understanding of data classification, and what constitutes sensitive data.
What is sensitive data? A simple definition
Sensitive data is any information which is confidential, that needs to be kept safe and out of reach from unauthorized users. It is accessible only by those with relevant permissions.
Data is classified from levels zero up to three, based on the extent of damage that it would cause, if it were available in the public domain, whether intentionally or unintentionally. There are various data classifications followed by government and non-governmental organizations.
| Data classification | Government | Non-government | Potential adverse impact from a data breach |
| Class 0 | Unclassified | Public | No damage caused |
| Class 1 | Confidential | Sensitive | Some damage caused |
| Class 2 | Secret | Private | Serious damage caused |
| Class 3 | Top Secret | Confidential / Proprietary | Exceptionally grave damage |
Data is classified according to the adverse impact it can cause if leaked
When you hear that a movie was behind a data leak from Sony Pictures Entertainment, on the face of it the data seems like it should be classified as level zero: a movie is public information, and what damage could be caused if it was intended for public viewing anyway? However, the 2014 hack of Sony Pictures shows the complexity of data classification and risk. Sony had produced a movie called ‘The Interview’, a comedy parody about two Americans who assassinate North Korean leader Kim Jong Un. The hackers, believed to be working on behalf of North Korea, leaked embarrassing information about employees to the media and eventually demanded that Sony cancel the release of the movie. Although Sony initially decided not to screen the movie, critics including Obama were against giving in to terror demands. Sony went on to screen the movie, but theatres then received threats and refused to show it, and Sony eventually released the film online. This incident demonstrated that a hack connected to a movie release could have grave adverse effects for the movie industry, the general public and even foreign policy, changing notions of warfare itself.
The leakage of private or personal data is also becoming an issue governed by regulatory compliance requirements in many countries. Examples of data classified as private include anything containing personally identifiable information (PII) or protected health information (PHI). For organizations, leaks of employee or payroll data are considered to have serious consequences. Employers who violate the General Data Protection Regulation (GDPR) can face fines of up to 20 million euros or 4% of annual revenue, whichever is higher.
Addressing human error through data security automation
A joint study from Stanford University Professor Jeff Hancock and security firm Tessian revealed that 88% data breach incidents are caused by human errors. IBM’s Cost of a Data Breach Report states that the average cost of insider cyber incidents, across sectors, due to human error is estimated to be $3.33 million.
We have to assess this problem at two levels – the user perspective and from the perspective of automated technology solutions and their implementation.
To address the problem of data security, security professionals recommend best practices such as asking users to maintain strong passwords and enabling two-factor authentication for email and sensitive data. However, relying on users to follow these best practices is unreliable.
Deploying additional technology-led security, combined with an expert keeping guard is the better option. Such ‘Managed Detection and Response’ (MDR) services provide organizations with trained and skilled analysts using cutting-edge security tools and with access to global databases, who can keep track of evolving cyberthreats. Using the latest in SOAR (security orchestration, automation and response), external security providers are able to streamline incident response workflows, automate data aggregation to assist human and machine-led analysis and coordinate response actions.
Both MDR and SOAR incorporate the latest automation and endpoint detection and response (EDR) tools. But, while solutions may be completely automated in terms of taking a defensive posture to protect the organization, data security itself is not addressed. In a situation where despite the best security practices an attack is successful, the data becomes available to hackers with malicious intentions. This is why in addition to automating data security, the data itself needs to be protected through encryption or other means.
Data protection – encryption vs tokenization
Most developers and security experts who give recommendations on data protection focus on encrypting sensitive data such as passwords. In such cases there may be some comfort in the fact that a hacker can only see salted, hashed passwords; on the flip side, though, the hacker still has other PII or private information such as addresses, names, mobile numbers or SSN details. DLP and CASB solutions protect data from exfiltration to a large extent, but insider threats, or circumstances in which a professional hacker bypasses all of these controls, will still leave your data compromised.
Shareholders, senior management, and CXOs rely on standards set by compliance frameworks, but these are usually the bare minimum. Measured against the cutting-edge techniques used in the latest breaches and advanced hacks, there is no fully automated solution that focuses on data security itself.
Data encryption is a step in the right direction, but it must be implemented in such a way that it’s not just passwords that are protected. Protecting all types of data helps increase trust and manage risk. Encryption uses an encryption key to temporarily alter data, making it unreadable. However, the drawback is that with sufficient effort, any encryption can be broken. Because encryption is reversible, PCI Security Standards Council and other regulatory bodies still consider encrypted data as sensitive data. This could still attract fines due to non-compliance.
We at Entersoft recommend and implement tokenization for our customers. Tokenization replaces sensitive data with a non-sensitive equivalent, referred to as a token. When tokenization is properly implemented, even if data is lost, there is no way for a hacker to digest or use that information. While encryption is secured by an algorithm that can be figured out, tokenization replaces the data with randomly generated non-sensitive values, while the sensitive data itself is securely stored. Even if a hacker gets hold of the tokens, they cannot use them: the user of a token goes through additional security checks before the data is swapped back. And unlike encrypted data, the tokens themselves have no intrinsic value and cannot be broken, which helps meet compliance regulations while providing a cost-effective and highly secure way of protecting organizational data.
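A stripped-down sketch of a token vault is shown below: each token is random, carries no information about the original value, and can only be exchanged back by the service holding the vault after additional checks. Production systems add access control, auditing and hardened storage around this idea; the class and method names here are illustrative.

```python
import secrets


class TokenVault:
    """Minimal illustration of tokenization: random tokens map to stored secrets."""

    def __init__(self):
        self._vault = {}  # token -> original sensitive value (kept server-side)

    def tokenize(self, sensitive_value: str) -> str:
        token = secrets.token_urlsafe(16)  # random, bears no relation to the input
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token: str, caller_is_authorized: bool) -> str:
        if not caller_is_authorized:
            raise PermissionError("additional security checks failed")
        return self._vault[token]


vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")  # store a card number
print(token)                                    # safe to pass around or store
```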
Tokenization platforms provide data security and allow businesses to leverage data
End-to-end automated data protection platforms combine data classification and data security, taking into account the unique regulatory and business needs of the organization. In addition, sophisticated platforms can secure the data while still allowing it to be leveraged for insights that serve business or operations purposes. As organizations look to leverage their data for competitive advantage, operational efficiency and savings, data security is paramount. Investing in a secure and mature platform or service to protect data at an organizational level provides a shield against data breaches, reputation loss and non-compliance. Having done this, companies can then freely use their data to power their growth journey.
|
<urn:uuid:bb026a3a-6b83-48ae-ba0d-8783424deb33>
|
CC-MAIN-2022-40
|
https://blog.entersoftsecurity.com/data-classification-protection/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00060.warc.gz
|
en
| 0.929372 | 1,562 | 3.390625 | 3 |
What is vectoring technology?
Vectoring technology has emerged as a means of increasing broadband speeds without investing in an extensive fibre roll-out.
The technology uses noise cancellation, in a similar way to noise-cancelling headphones, to increase data speeds on existing copper infrastructure.
How exactly does vectoring technology work?
According to research from Alcatel-Lucent, vectoring technology works by addressing the gap between theoretical maximum speeds and the speeds that service providers can deliver in typical field conditions.
The company’s xDSL strategist for fixed access, Paul Spruyt, and marketing director for wireline fixed access, Dr Stefaan Vanhastel, identify crosstalk as one of the reasons that the highest theoretical download speeds cannot be achieved on copper infrastructure. Crosstalk is where cables that are bundled closely together interfere with each other. The more cables bundled together, the more crosstalk that is generated. Vectoring technology continually measures the crosstalk from all other lines in a bundle and works to remove it by generating anti-phase signals to cancel out the crosstalk signals. This results in almost no noise on a line.
To calculate crosstalk, vectoring technology measures and cancels interference across hundreds of lines over the full frequency spectrum they occupy. The interference is processed by subdividing the spectrum into narrow frequency bands, known as tones, and processing each tone independently. All copper lines deploying vectoring technology are processed simultaneously and the results are used in real time to develop anti-phase compensation signals for each line, based on the actual signals transmitted on other lines in the bundle. This calculation is extremely complex, which is why vectoring only emerged as a viable option for providers with recent advances in silicon technology.
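The numerical idea can be sketched per tone in a few lines of NumPy: if the coupling between two lines is known, a copy of the disturbing signal scaled by that coupling is subtracted before transmission (the anti-phase signal), so the crosstalk cancels at the receiver. The figures below are arbitrary, and the model is a one-tone, two-line toy rather than the full multi-line matrix computation.

```python
import numpy as np

rng = np.random.default_rng(0)

victim = rng.standard_normal(8)     # signal intended for line A (one tone)
disturber = rng.standard_normal(8)  # signal transmitted on neighbouring line B
coupling = 0.3                      # estimated crosstalk coefficient B -> A

# Without vectoring, the receiver on line A sees its signal plus crosstalk.
received_plain = victim + coupling * disturber

# With vectoring, the transmitter pre-adds an anti-phase copy of the expected crosstalk.
precompensated = victim - coupling * disturber
received_vectored = precompensated + coupling * disturber

print(np.allclose(received_vectored, victim))  # True: crosstalk cancelled
```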
Which equipment vendors are developing vectoring products?
Ovum’s network infrastructure analyst, Kamalini Ganguly, says that almost every major broadband hardware equipment vendor has some type of vectoring product in development. Ericsson was one of the first to announce a live lab demonstration in 2009, and this was followed by announcements from Nokia Siemens Networks, Huawei, ZTE and Alcatel-Lucent. Of these, Alcatel-Lucent has been the first to reach commercial availability.
What speeds can vectoring technology achieve?
According to Alcatel-Lucent, vectoring deployed on VDSL2 lines can reach downstream speeds of 100Mbps at distances of up to 400 metres, while 40Mbps can be supported with loops as long as 1,000 metres. In Alcatel-Lucent’s field trials with a number of service providers, including Belgacom, A1 Telekom Austria, Swisscom, Orange P&T Luxemburg and Turk Telekom, vectoring improved downstream bit rates by 90% to 150%. Alcatel-Lucent also achieved speeds of 300Mbps through the use of vectoring in conjunction with its VDSL2 bonding and phantom mode solutions.
What are the advantages of vectoring technology?
Vectoring technology has obvious cost advantages over fibre, as it reuses existing infrastructure. This also means it can offer a much quicker time to market. Another advantage of the technology is its reliance on DSL, which remains the main method of connecting to the internet worldwide. According to research group, Dell’Oro, two thirds of the world’s broadband subscribers are connected through DSL.
What are the disadvantages?
According to Alcatel-Lucent, sophisticated noise cancellation is CPU intensive and therefore works best over a few hundred lines. The noise cancellation process also requires measurements to be available from all lines, meaning that the lines all need to be under full control of a single service provider in order to achieve best performance. In addition, over longer distances vectoring technology is less effective at improving download speeds. This means that in some rural areas, where homes and businesses are thousands of metres away from the street telecoms cabinet, the technology will not significantly enhance the existing connection.
Where is vectoring technology being deployed?
Telekom Austria’s domestic subsidiary, A1, is one of the first companies to deploy the latest generation of vectoring technology. A1 has begun deploying Alcatel-Lucent’s VDSL2 solution in the state of Lower Austria, with a nationwide roll-out planned for mid-2012. Belgacom is also introducing vectoring technology in its domestic Belgium market through a partnership with Alcatel-Lucent. Western European countries are considered to have the most to gain from the technology due to their extensive copper infrastructure.
|
<urn:uuid:13dfdf80-12dd-4ab8-90ae-501da5e5816b>
|
CC-MAIN-2022-40
|
https://www.capacitymedia.com/article/29ot4zbr84apiu3vdmv40/news/what-is-vectoring-technology
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00060.warc.gz
|
en
| 0.937684 | 944 | 2.875 | 3 |
It sounds like science fiction: Take a supercomputer and immerse it in tanks of liquid coolant, and keep that coolant cool without using any water. This sci-fi scenario has created a real-world scientific computing powerhouse.
The Vienna Science Cluster uses immersion cooling, dunking SuperMicro servers into a dielectric fluid similar to mineral oil. Servers are inserted vertically into slots in the tank, which is filled with 250 gallons of ElectroSafe fluid, which transfers heat almost as well as water but doesn’t conduct an electric charge.
The system has emerged as one of the world’s most efficient supercomputers, as measured by Power Usage Effectiveness (PUE), the leading metric for the efficiency of data center facilities. The Vienna Science Cluster 3 system touts a mechanical PUE of just 1.02, meaning the cooling system overhead is just 2 percent of the energy delivered to the system. A mechanical PUE doesn’t account for energy loss through the power distribution system, which means the actual PUE would be slightly higher.
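PUE itself is a simple ratio: total facility power divided by the power delivered to the IT equipment. The minimal sketch below uses invented numbers to show why a 1.02 mechanical PUE translates into a somewhat higher overall figure once power-distribution losses are counted, as the article notes.

```python
def pue(total_facility_power_kw: float, it_power_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_power_kw / it_power_kw

# Illustrative numbers only, not measurements from the Vienna installation.
it_load = 500.0                     # kW delivered to the servers
cooling_overhead = 0.02 * it_load   # 2% overhead, i.e. a 1.02 mechanical PUE
distribution_loss = 0.05 * it_load  # assumed electrical losses (not counted above)

print(pue(it_load + cooling_overhead, it_load))                      # 1.02 (mechanical)
print(pue(it_load + cooling_overhead + distribution_loss, it_load))  # ~1.07 (closer to full PUE)
```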
The end result: 600 teraflops of computing power uses just 540 kilowatts of power and 1,000 square feet of data hall space.
“We are very impressed by the efficiency achieved with this installation,” said Christiaan Best, CEO and founder of Green Revolution Cooling, which designed the immersion cooling system. “It is particularly impressive given that it uses zero water. We believe this is a first in the industry.”
Why Liquid Cooling Matters
Liquid cooling can offer clear benefits in managing compute density and may also extend the life of components. The vast majority of data centers continue to cool IT equipment using air, while liquid cooling has been used primarily in high-performance computing (HPC). With the emergence of cloud computing and “big data,” more companies are facing data-crunching challenges that resemble those seen by the HPC sector, which could make liquid cooling relevant for a larger pool of data center operators.
Last fall at the SC14 conference, a panel of HPC experts outlined their expectation for a rapid expansion for liquid cooling that may extend beyond its traditional niches. At Data Center Frontier we’ll be tracking this transition, and keeping readers posted on relevant innovations in liquid cooling, such as the water-less implementation in Vienna.
The Vienna Scientific Cluster combines several efficiency techniques to create a system that is stingy in its use of power, cooling and water.
Water management is a growing priority for the IT industry, as cloud computing is concentrating enormous computing power in server farms supported by cooling towers, where waste water from the data center is cooled, with the heat being removed through evaporation. Most of the water is returned to the data center cooling system, while some is drained out of the system to remove sediment.
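To put a rough number on that evaporation, the sketch below estimates the water boiled off per unit of heat rejected, using the latent heat of vaporization of water. It counts evaporation only (not blowdown or drift), assumes all heat is rejected evaporatively, and treats 1 kg of water as 1 litre, so it is an order-of-magnitude figure rather than a design calculation.

```python
# Why cooling towers consume water: evaporating 1 kg of water absorbs ~2.26 MJ.
LATENT_HEAT_WATER_MJ_PER_KG = 2.26
MJ_PER_KWH = 3.6

def water_evaporated_litres(heat_rejected_kwh: float) -> float:
    # 1 kg of water is roughly 1 litre
    return heat_rejected_kwh * MJ_PER_KWH / LATENT_HEAT_WATER_MJ_PER_KG

# Example: a 1 MW IT load running for a day rejects roughly 24,000 kWh of heat.
print(f"{water_evaporated_litres(24_000):,.0f} litres of water evaporated per day")
```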
The fluid temperature in the immersion tank is maintained by a pump with a heat exchanger, which is usually connected to a standard cooling tower. The Vienna Scientific Cluster uses a closed loop dry cooler as the final method of heat rejection, requiring no water at all. Energy use may rise slightly in the summer, but should still remain near the 1.1 to 1.2 level seen among leading hyperscale data centers.
The novelty of the Vienna design is that it combines a water-less approach with immersion cooling, which has proven effective for cooling high-density server configurations, including high-performance computing clusters for academic computing, seismic imaging for energy companies, and even bitcoin mining.
Breaking the CRAC Habit
While not seen often in today’s enterprise and cloud data centers, liquid cooling isn’t new. If you’ve been around the industry for a few years, you’ll recall the days when water-cooled mainframes were standard in corporate data centers. But that soon shifted to racks of servers cooled by air using the familiar “hot aisle/cold aisle” design seen in most data centers today, with water chilling loops confined to the air handlers and “CRACs” (computer room air conditioners) housed around the perimeters of the data hall.
The alternative is to bring liquids into the server chassis to cool chips and components. This can be done through enclosed systems featuring pipes and plates, or by immersing servers in fluids. Some vendors integrate water cooling into the rear-door of a rack or cabinet.
Immersion takes a different approach, sinking the equipment in liquid to cool the components.
Green Revolution has been in the forefront of the recent resurgence of interest in immersion. In addition to supporting extreme power density, immersion cooling offers potential economic benefits by allowing data centers to operate servers without a raised floor, computer room air conditioning (CRAC) units or chillers. It also eliminates the need for server fans, which can also be power hogs.
The VSC-3 was installed in 2014, with Green Revolution Cooling working with Intel, ClusterVision, and Supermicro. It supersedes the VSC-2 cluster, which used a rear-door cooling solution that achieved a mechanical PUE of 1.18. VSC-3 features 2,020 compute nodes, each with 16 processor cores, housed in the CarnotJet tanks.
The Cost Component of Cooling
Liquid cooling often requires higher up-front costs, which can be offset by savings over the life of a project. Economics were a key driver for the Vienna design.
“The value proposition (of the GRC system) was extremely impressive,” said Christopher Huggins, Commercial Director at ClusterVision, a leading European HPC specialist. “The whole data center and cluster was far less expensive than it would have been with any other cooling solution on the market. We are certain we will be using the GRC solution on more projects in the future.”
|
<urn:uuid:3c174824-c51a-453a-8037-d123a4356f77>
|
CC-MAIN-2022-40
|
https://datacenterfrontier.com/immersion-supercomputer-no-water-extreme-efficiency/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00060.warc.gz
|
en
| 0.935141 | 1,224 | 2.84375 | 3 |
Data center processing has now reached an inflection point. The heat being generated by chips and servers is so high that the cooling methods most data centers employ are no longer completely viable.
The demand for video content streaming, application services and data-intense technologies like artificial intelligence (AI), IoT and 5G is growing rapidly. High performance computing and cryptocurrency mining often require cooling capabilities beyond what air and other traditional cooling methods are capable of.
Enter 2-phase immersion cooling.
This groundbreaking technology immerses servers and other IT equipment in a non-conductive fluid that has excellent thermal characteristics, providing thousands of times more heat rejection than air cooling. The server components (like CPUs, GPUs, ASICs and power supplies) heat the fluid until it is boiled into a vapor. The heat energy in the vapor is then transferred through a condensing coil placed just above the ‘vapor zone’, rejecting that heat to an outside fluid loop typically connected to a fluid cooler (also known as a dry cooler since no water is consumed to reject the heat). The condensed vapor falls back into the tank in the form of a liquid, hence completing a perpetual, self-contained, 2-phase cooling cycle: Liquid – Gas – Liquid.
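A back-of-the-envelope way to picture the cycle is to ask how much fluid must boil off to carry a given heat load. The sketch below assumes an illustrative latent heat of roughly 100 kJ/kg, which is the right order of magnitude for engineered dielectric fluids but is not a datasheet value for any particular product.

```python
# Boiling side of a 2-phase tank, to first order: heat load / latent heat
# gives the mass of fluid vaporized (and condensed back) per second.
heat_load_kw = 100.0            # IT heat being rejected by the tank (example value)
latent_heat_kj_per_kg = 100.0   # assumed, order-of-magnitude latent heat

vapor_mass_flow_kg_s = heat_load_kw / latent_heat_kj_per_kg
print(f"~{vapor_mass_flow_kg_s:.1f} kg/s of fluid boils off and is condensed back")
```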
2-phase immersion cooling drastically reduces energy consumption, water use, data center floor space, and horizontal and vertical IT equipment space. The six major factors driving the need for this revolutionary technology are:
- Data center efficiency gains have stalled since 2018
- Chip power / IT power densities are increasing rapidly
- Data center water use has surpassed energy use as an environmental concern
- Compute power is moving toward the edge and compacting
- Data center e-waste is a growing problem
- Billions are being invested in corporate sustainability initiatives
Data Center Efficiency Gains Have Stalled Since 2018
Data center efficiency gains have flat-lined and reversed direction. The last two decades realized steady improvements in air cooling, including hot-cold aisle arrangements, close-coupled cooling, fan speed control, staged and inverter-driven compressors, adiabatic assist and many other innovations. But according to the Uptime Institute, since 2018 data center PUEs are actually on the rise (meaning the wrong direction!). This is due in part to the fact that air cooling technology has reached a ‘technology development tap-out’, coupled with the fact that today’s higher powered chips and processors are too energy dense to be efficiently cooled with air.
Chip Power / IT Power Densities Are Increasing Rapidly
While Moore's Law long ago established that processor speed would double every eighteen months, the speed of AI processing now doubles every three and a half months. Handling these speeds requires the most powerful chips ever designed, and these chips generate massive amounts of heat, which cannot be effectively or efficiently cooled with air.
For example, in April 2021 Cerebras released its new WSE 2 chip, which boasts 2.6 trillion transistors and 850,000 AI-optimized cores, and draws 23 kW of power. Most air cooling systems in data centers can only handle about 8kW to 12kW per rack, so even though you could fit three WSE 2 chips in a rack, you might not be able to blow enough air through the rack to cool even one of them. Even if you miraculously achieved an air cooling solution, with AI power doubling every quarter, this approach still wouldn’t be sustainable for long.
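The sketch below shows why those numbers matter, estimating the airflow needed to carry away a given heat load with air using the standard relation Q = P / (rho * cp * deltaT). The 15 K supply/return temperature difference is an assumption, and the 23 kW case is included purely for scale.

```python
# Airflow required to remove a heat load with air: Q = P / (rho * cp * deltaT).
RHO_AIR = 1.2     # kg/m^3, near sea level
CP_AIR = 1005.0   # J/(kg*K)

def airflow_m3_per_s(heat_load_w: float, delta_t_k: float = 15.0) -> float:
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k)

# A typical air-cooled rack budget vs. a single 23 kW accelerator-class device.
for load_w in (12_000, 23_000):
    q = airflow_m3_per_s(load_w)
    print(f"{load_w / 1000:.0f} kW -> {q:.2f} m^3/s (~{q * 2119:.0f} CFM)")
```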
Data Center Water Use Has Surpassed Energy Use as an Environmental Concern
We all know that the majority of electricity we use is generated by fossil fuels. But what many don’t know is that power plants use these fossil fuels to heat water, generating steam to turn the turbines that ultimately create the power. Furthermore, chillers and air handling units can consume massive volumes of water to reject the heat from data centers. These compute facilities therefore use billions of liters of water per year. According to recent estimates, data centers consumed over 660 billion liters of water in 2020 alone.
Compute Power is Moving Toward the Edge and Compacting
Edge computing means processing data within devices or at edge data centers located geographically close to end users, rather than within a centralized data center. The proximity to users and bypassing of centralized data centers means data has much less backhaul time and cost, while the close proximity of regional and edge data centers to the applications and users drastically reduces response times. This results in significantly higher bandwidth and ultra-low latency, enabling the use of technologies that require real-time data relay of massive datasets, such as 5G, industrial IoT, AR and VR gaming, smart city sensors, drones, and so on.
Not only does edge computing require an enormous amount of processing power, but dense user populations are typically located in cities, where space is limited and priced at a premium rate. Edge data centers must often support dense servers and heavy workloads in compact spaces, located in harsh climates with high or low temperatures, dust, dirt, contaminates and particulates. Air cooling methods do not allow for this compaction because the coils, fans, compressors and ducting take up a significant amount of space. Furthermore, because these air-cooled systems rely on airflow to reject heat, that airflow naturally carries with it a host of dust and debris which can clog coils, reduce performance and even lead to premature failure.
Data Center E-Waste is a Growing Problem
In addition to wasted space, water and energy, we must add the physical waste associated with outdated cooling systems. Since high power air-cooled servers require larger fans and heat sinks, they take up even more space, are shrouded in more sheet metal, require more racks, and generate vast volumes of waste packaging. All of this Electronic Waste creates a negative impact on the environment. The idea is to do more with less. Liquid cooling provides the opportunity to achieve exactly that.
Billions Are Being Invested in Corporate Sustainability Initiatives
There is now a heightened social, corporate, governmental, and consumer attention on the environmental impact of corporations, the resources they consume and the impact on our ecosystem. Sustainability initiatives are no longer a nice-to-have; they are a high-priority directive being driven by the C-suite and boards of directors.
There is no better example of organizations putting their money where their mouth is and implementing real-world sustainability initiatives than the “big three” cloud providers — AWS, Google Public Cloud, and Microsoft. Microsoft has pledged to be carbon negative by 2030. Google has set an ambitious goal to run solely on carbon-free energy at all data centers by 2030. And AWS has pledged to power all operations with 100% renewable energy by 2025.
Since cooling and thermal management can consume approximately 40% of the energy data centers use, replacing outdated cooling methods is low-hanging fruit for enterprise sustainability goals.
2-Phase Immersion Cooling Will Be The Standard
2-phase immersion cooling is a tailor-made solution to the trends discussed above. The savings in energy costs, space, and carbon emissions will not only contribute significantly to corporate sustainability goals, but will enable exponentially higher compute power and a brave new technological future.
|
<urn:uuid:30ef6339-6f17-437e-a937-f5c12f846c84>
|
CC-MAIN-2022-40
|
https://liquidstack.com/blog/what-will-accelerate-the-adoption-of-2-phase-immersion-cooling
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00060.warc.gz
|
en
| 0.926947 | 1,486 | 2.5625 | 3 |
Intelligent Process Automation (IPA) is part of the wave of technologies that enable organizations to reduce operational costs, improve efficiency, and deliver better employee and customer experiences. Born out of rule-based task automation, Intelligent Process Automation is the next iteration of Robotic Process Automation (RPA) that continues to drive digital transformations forward.
What is Intelligent Process Automation?
Intelligent Process Automation is the combination of different technologies to automate more complete, end-to-end business processes. It is the evolution of basic, rules-based task automation into the management and automation of entire business processes made up of numerous tasks.
At its core, Intelligent Process Automation is the convergence of RPA and different Artificial Intelligence (AI) technologies to automate larger decision-based business processes that traditionally required an employee to intervene and execute.
The promise of Intelligent Process Automation is rooted in its ability to elevate automation to another degree of complexity, creating even more efficiency, operational cost reduction, agility, and better experiences all around.
What Technologies Make Up Intelligent Process Automation?
At a high level, Intelligent Process Automation is essentially made up of two major market technologies: RPA and AI.
However, at a more granular level, there are specific Artificial Intelligence technologies that serve different purposes but are key components of IPA that enable the cognitive capabilities to automate more complex business processes.
Here's a rundown of how each technology makes up Intelligent Process Automation:
- Robotic Process Automation (RPA) – the automation of repetitive, rules-based business tasks.
- Artificial Intelligence (AI) – a combination of the technologies below that enables systems to perform tasks that require reason, judgment, and decision-making. Put simply, AI is a computer's ability to collect and extract information and apply logic to that data to make a decision.
- Machine Learning – An AI technology that can be defined as systems using data to improve those systems' performance without explicit instructions. A good example is discovering patterns in data and then using those patterns to make accurate predictions.
- Natural Language Processing (NLP) – An AI technology that interprets language and uses that information to make decisions and take an action. Common examples that use NLP are chatbots or virtual home assistants like Amazon's Echo or Google Home.
- Computer Vision – An AI technology that enables computers to parse and interpret images. There are examples in banking where computer vision is used to detect fraudulent banknotes to enhance security.
How is Intelligent Process Automation Different from Robotic Process Automation?
The difference between the two technologies is that while both deal with automation, RPA is simply one of the technologies that make up Intelligent Process Automation.
Robotic Process Automation is the automation of repetitive, rules-based tasks that present little variation. The processes selected for automation with RPA are typically smaller, decomposed processes of much larger, complex ones. For example, extracting information from invoices to input them into ERPs (enterprise resource planner) is a very common task that's automated using RPA. Still, it's only a small piece of the much larger flow to process invoices end-to-end.
Intelligent Process Automation would automate much more of that process and remove the human intervention that introduces error and decreases execution speed. For example, machine learning can be used to review the invoice for compliance. Decision modeling software can be leveraged to automate the checks that managers or finance teams would perform manually, and so on.
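As a purely hypothetical illustration of the checks that replace manual review in such a flow, the sketch below applies a few simple rules to an already-extracted invoice. The field names, vendor list and thresholds are invented, and a real IPA deployment would typically combine rules like these with trained models rather than rely on rules alone.

```python
from datetime import date

APPROVED_VENDORS = {"ACME Supplies", "Globex"}   # placeholder vendor master data
AUTO_APPROVE_LIMIT = 5_000.00                    # invented approval threshold

def review_invoice(invoice: dict) -> str:
    """Return a routing decision for an invoice that has already been extracted."""
    issues = []
    if invoice["vendor"] not in APPROVED_VENDORS:
        issues.append("vendor not on approved list")
    line_total = round(sum(l["qty"] * l["unit_price"] for l in invoice["lines"]), 2)
    if invoice["amount"] != line_total:
        issues.append("line items do not add up to the invoice total")
    if invoice["due_date"] < date.today():
        issues.append("invoice is already past due")

    if issues:
        return "escalate to finance: " + "; ".join(issues)
    if invoice["amount"] > AUTO_APPROVE_LIMIT:
        return "route to manager for approval"
    return "auto-approve and post to ERP"

example = {"vendor": "ACME Supplies", "amount": 1200.00, "due_date": date(2030, 1, 1),
           "lines": [{"qty": 10, "unit_price": 120.00}]}
print(review_invoice(example))   # auto-approve and post to ERP
```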
What are the Benefits of Intelligent Process Automation?
According to Mckinsey, organizations in different industries that have experimented with Intelligent Process Automation have seen impressive returns that include:
- The automation of 50-70% of tasks translating into 25-35% annual run-rate cost efficiencies
- A 50-60% reduction in straight-through process time
- Return on investments in the triple-digit percentages
It's safe to say the main benefit of Intelligent Process Automation is a significant amplification of the returns RPA offers.
By combining the task automation RPA provides with AI technologies like machine learning, NLP, and computer vision to automate more complex, end-to-end business processes, the natural result is:
- Increased cost reduction than RPA alone can deliver
- Improved process output quality
- Greater efficiency
- Freeing up even more employee time to focus on strategic, mission-critical initiatives
What are Some Use Cases for Intelligent Process Automation?
Similar to RPA, Intelligent Process Automation can be applied to various industries, departments, and functions. Additionally, just like the early days of RPA, there are specific industries that are early adopters with common use cases already leveraging Intelligent Process Automation:
IPA is already being experimented with and applied in the Financial Services sector to build more precise credit models to reinforce lending processes, optimize trade execution and routing, and leverage analytics to understand client price sensitivity and preferences.
Another great example comes from BBVA (Banco Bilbao Vizcaya Argentaria) using computer vision to accelerate and enhance the onboarding process of new customers. Using this technology, prospects can open bank accounts by merely taking a selfie.
Chatbots that have Natural Language Processing at their core are actively being used within insurance companies to automate and improve customer experiences. Specifically, they are used within an IPA framework to automate appointment scheduling and implement a self-service model for customers to select an insurance policy easily.
A big advantage of Intelligent Process Automation is visualizing data in real-time and bringing it to customers without manual intervention. As an article in Information Age points out, pharmaceutical companies and medical device manufacturers are using the greater visibility of data IPA affords to reinforce compliance by reducing fraud and errors while increasing security, safety, and accuracy.
The digitization and automation of document handling and regulatory monitoring are also helping the healthcare industry improve drug discovery and vaccine development.
What is Intelligent Process Automation's Role in the Future of Automation?
Intelligent Process Automation is the future of automation. While not quite at saturation, Robotic Process Automation is definitely out of the hype cycle and implemented widely.
Even though RPA growing pains have been abundant as organizations struggle with underwhelming ROI from burdensome RPA maintenance and support and a fragile digital workforce from poor automation design practices, those challenges will eventually be overcome to usher in IPA.
While integrating AI technologies with RPA to deliver Intelligent Process Automation is not quite ready for prime time yet, there is ample experimentation, and the early adopters are already seeing strong returns, which means it's only a matter of time before it becomes red hot.
Learn More: Top 7 Predictions for RPA in 2022
How Does Blueprint Enable Intelligent Process Automation at Scale?
The successful application of Intelligent Process Automation is dependent on synergy. By definition, artificial intelligence is combined with RPA to marry the task execution of bots with the intelligence and use of analytics that AI provides so complex, end-to-end business processes can be automated for bigger returns.
Therefore, a solution is needed to consolidate these tools; otherwise, a disconnected siloed IPA architecture will simply result in future failure.
That's where Blueprint comes in. Blueprint's Business Transformation Platform is the heart of your Intelligent Process Automation toolchain.
It delivers the most powerful automation design environment on the market, where end-to-end automations that combine RPA and AI technologies can be designed, planned, and managed in what we call a Digital Blueprint.
Digital Blueprints contain everything you need to drive intelligent automation in your organization. From detailed process flows, functional and non-functional requirements, user stories, compliance, and regulatory requirements, to both functional and acceptance tests, among other critical information for successful solution delivery.
They can be used to drive any number of Intelligent Process Automation use cases such as RPA, the migration of legacy applications, custom development, or for commercial off-the-shelf technology implementations.
Download a copy of the Blueprint Business Transformation Platform datasheet and discover how Blueprint enables Intelligent Process Automation at scale.
|
<urn:uuid:da48e656-b27f-44a2-8826-a630ac128ff7>
|
CC-MAIN-2022-40
|
https://www.blueprintsys.com/blog/rpa/what-is-intelligent-process-automation-ipa?utm_medium=blog&utm_source=5-automation-trends-in-2022&utm_content=inline
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00060.warc.gz
|
en
| 0.897716 | 1,675 | 2.921875 | 3 |
Eta Compute Inc. is shipping the first production silicon for what it says is the first-ever artificial intelligence multicore chip for embedded sensors. In some sensor uses, the machine learning ECM3532 processor can run on just microwatts of power, compared to the 200+ watts needed for typical server processors.
The neural sensor processor ships with Eta’s patented continuous voltage-frequency scaling capability, which adjusts the chip’s internal clock rate and supply voltage based on the workload it is experiencing.
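The sketch below is an illustrative governor loop, not Eta Compute's actual algorithm. It picks the slowest frequency/voltage pair that still meets the workload's demand, and uses the familiar proportionality of dynamic power to V^2 * f to show why running a light sensor workload at a low operating point saves energy superlinearly. The operating points are invented.

```python
# Invented (frequency in MHz, core voltage in V) operating points, slowest first.
OPERATING_POINTS = [(10, 0.60), (25, 0.70), (50, 0.85), (100, 1.10)]

def pick_operating_point(demand: float) -> tuple:
    """demand = fraction of full-speed capacity the workload needs (0..1).
    Choose the slowest point that still leaves ~20% headroom."""
    max_freq = OPERATING_POINTS[-1][0]
    for freq, volt in OPERATING_POINTS:
        if demand * max_freq <= 0.8 * freq:
            return freq, volt
    return OPERATING_POINTS[-1]

def relative_dynamic_power(freq_mhz: float, volts: float) -> float:
    return volts ** 2 * freq_mhz   # capacitance ignored; it cancels in ratios

busy = pick_operating_point(0.60)    # (100, 1.10)
idle = pick_operating_point(0.05)    # (10, 0.60)
print("power ratio busy/idle:", relative_dynamic_power(*busy) / relative_dynamic_power(*idle))
```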
Using less power is a self-rewarding virtue given climate change, but it also makes the ECM3532 a good fit for edge systems — such as IoT sensor nodes — which often need to be power-sippers. The company claims that its power management “eliminates battery capacity as a barrier” for industrial and consumer system deployments.
Also onboard the processor are flash memory, SRAM, I/O, peripherals and a machine learning software-development platform.
Executives have already publicly demonstrated the chip performing image recognition and other applications in sensing at the device edge.
A growing list of power-sensitive processors
Eta has a lot of company when it comes to designing for more environmentally neutral devices.
StartUs Insights, an Austrian business research firm, has created a list of more than 200 companies globally that make energy harvesting IoT sensors.
Among its top four picks is Kinergizer, which harvests waste energy from vibrations, pressure, and strain. That energy is then used to power the sensors, according to the Dutch startup.
Kinergizer’s products convert waste energy into useful electricity using electroactive polymers, electrostatics, and electromagnetism.
E-peas, a Belgian startup, has designed Power Management Integrated Circuits (PMICs) which capture waste thermal energy from equipment, including power generators, which produce copious heat and need to be cooled frequently. Thermoelectric generators make the conversion and send the electricity to sensors.
E-peas also offers low-energy microcontrollers which manage data sensing, collecting, processing and transmission.
Another Dutch startup, Nowi, supplies advanced embedded products that harvest light energy for powering IoT sensors and devices.
Executives claim that batteries are unnecessary in edge settings because Nowi systems capture light from the sun and conventional lights using photovoltaics.
U.S. startup Switches and Sensors makes sensors with energy-harvesting capabilities. In this case the devices monitor electromagnetic equipment using excess electromagnetic energy from the equipment itself.
|
<urn:uuid:720818e1-9b14-4b19-a18e-a21047a9b58b>
|
CC-MAIN-2022-40
|
https://www.edgeir.com/edge-vendors-are-finding-new-ways-to-make-a-little-electricity-go-a-long-way-20200224
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00060.warc.gz
|
en
| 0.904854 | 568 | 2.8125 | 3 |
The Impacts of the Pandemic on Email Security
The COVID-19 pandemic had a dramatic impact on all aspects of business, including cybersecurity. The sudden shift to remote work caught many organizations unprepared, forcing them to rapidly deploy and expand infrastructure to support a remote workforce. Often, the focus was on ensuring that the infrastructure was capable of supporting the new remote workers and not on security.
Cybercriminals have taken advantage of how the pandemic has impacted businesses. Cyber attacks increasingly are targeting remote work infrastructure such as virtual private networks (VPNs) and the remote desktop protocol (RDP).
During the pandemic, phishing attacks have also been on the rise as it provided cybercriminals with many pretext options to use in their attacks. Additionally, employees working from home do not always have the same protections as when they are working from the office.
The Importance of Email Security
Email is one of the most commonly used attack vectors for cybercriminals. The ubiquity of email in the workplace means that most employees use it and are conditioned to trust it, making it a technique with a high probability of reaching the target. Additionally, phishing and other email-based attacks are easy to perform and can have significant payoffs for an attacker.
These factors make email security a vital component of an enterprise cybersecurity strategy. Email-based attacks work well for attackers, so they are unlikely to be abandoned any time soon. Only by deploying comprehensive, targeted email protections will organizations protect themselves from the email threat.
Types of Email Security Threats
Email security threats can come in different forms. Some of the most common email-based attacks include:
- Spam: Spam is unsolicited emails sent out in massive blasts. While modern spam filters catch and block most spam emails, it is possible that one might slip through and deliver malicious content to a user’s inbox.
- Phishing: Phishing emails use social engineering, spoofing, and other techniques to trick the user into doing something for the attacker. Phishing attacks can be used to accomplish a variety of goals, including stealing user credentials, data, or money.
- Business Email Compromise (BEC): BEC attacks are a specific form of phishing email designed to steal money from an organization. The phisher will impersonate someone high in an organization’s hierarchy and use the status and authority of that individual to instruct an employee to send money to an attacker-controlled account.
- Malware Delivery: Emails can carry malware directly in their attachments or point recipients to malicious sites that deliver malware. Phishing emails are one of the leading delivery mechanisms for ransomware, trojans, and other types of malware.
- System Takeover: A successful phishing attack may compromise user credentials or deliver malware to a recipient’s computer, enabling the attacker to take over that computer. The computer can then be added to a botnet for use in distributed denial of service (DDoS) and other attacks.
Best Practices to Ensure Email Security
Implementing email security best practices is essential to protecting the organization against email-borne threats. Some of the more important email security controls that companies should put in place include:
- Educate Employees: Most email-based attacks are designed to trick the recipient into doing something that hurts them and helps the attacker. Training employees to recognize phishing emails and to appropriately report suspected attacks is essential to managing an organization’s cybersecurity risks.
- Deploy Anti-Phishing Solutions: Anti-phishing solutions have the ability to identify the red flags that indicate potential phishing emails and to block malicious content from reaching the recipient’s inbox. By deploying anti-phishing solutions, an organization minimizes the risk that a thoughtless click will lead to a cybersecurity incident.
- Implement Data Loss Prevention (DLP): Phishing campaigns are commonly designed to steal and exfiltrate sensitive information from an organization via email. DLP solutions can help to prevent these attacks by inspecting outgoing emails for potentially sensitive content.
- Use Safe Browsing Solutions: Phishing emails commonly attempt to direct users to browse to a malicious link that points to a phishing site. Safe browsing solutions can perform URL filtering to block users from visiting any known bad URLs or sites hosting phishing content.
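As a toy illustration of two of the controls above, DLP on outgoing mail and URL filtering, the sketch below scans a message body for card-number-like strings and for links to blocklisted domains. The patterns and the domains are placeholders, and commercial products use far richer detection than a pair of regular expressions.

```python
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")        # crude card-number heuristic
URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)
BLOCKED_DOMAINS = {"login-verify-example.test", "free-gift-cards.test"}  # placeholders

def check_outgoing_email(body: str) -> list:
    """Return a list of policy findings for an outgoing message body."""
    findings = []
    if CARD_PATTERN.search(body):
        findings.append("possible payment card number in body (DLP)")
    for domain in URL_PATTERN.findall(body):
        if domain.lower() in BLOCKED_DOMAINS:
            findings.append(f"link to blocked domain: {domain}")
    return findings

msg = "Please confirm at http://login-verify-example.test/reset, card 4111 1111 1111 1111"
print(check_outgoing_email(msg))
```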
Secure Your Email with Check Point
Check Point and Avanan believe that a prevention-focused approach is best for email security. By blocking malicious emails from reaching the intended recipient’s mailbox, they eliminate the risk that these email threats post to the organization.
To learn more about email security solutions from Check Point and Avanan, check out Harmony Email and Office. You’re also welcome to sign up for a free demo to learn about our anti-phishing and account takeover prevention solutions.
|
<urn:uuid:1e199f2f-7c05-46d3-9def-50b939953fc6>
|
CC-MAIN-2022-40
|
https://www.checkpoint.com/cyber-hub/threat-prevention/what-is-email-security/top-5-email-security-threats/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00261.warc.gz
|
en
| 0.937091 | 980 | 2.609375 | 3 |
Are computers and laptops a recreational tool first and then a conduit for knowledge? This issue was brought up in K-12 Technology group discussion on Linked In regarding managing students’ use of technology in classrooms.
This concern is not unique to classrooms: look at employers trying to limit Facebook time and boost productivity of employees. The students are growing up without developing the abilities to focus on studies and control distractions and attention shift. Their emotional wellbeing is affected by Facebook. There are even services popping up that offer help in curbing social media dependency.
As we know technology develops at increased speed and opens freedoms and possibilities to learn and access limitless information. Yet humans have the same limitations as before – thirst for entertainment, engaging with friends, need to shift attention constantly and pursue new “shiny” engaging activities rather than concentrate on studies or work.
Schools play important role in instilling in students ability to focus, establishing boundaries of online behavior and teaching students to use technology productively. Here are some examples of schools that were able to foster learning environments with technology use:
Wolf Creek Public Schools in Alberta introduced a BYOD initiative. They did not just stop at supplying infrastructure for wireless connectivity and technical support of devices. They focused on a shift in pedagogy and developed new ways of teaching with technology. The curriculum was adjusted so that assignments were posted well in advance and students could choose when to work on completing them. Forums were developed for collaborating on assignments, and students were encouraged to post their work for others to look at and comment on. That motivated students to put more effort into their work.
Palmdale High School recognized that, when used properly, technology could significantly boost teacher effectiveness and student learning. The school introduced a solution into its lab environment that allows teachers to stop classroom distractions from taking place right at the source – student workstations – and to draw students' attention to the teacher when required.
What’s your take: fun or productivity first when using computers in schools or workplaces? Should control be left with the user and should there be limitations on accessing entertainment sources?
|
<urn:uuid:f87f3091-10f4-4ad1-a09e-c624ece7a654>
|
CC-MAIN-2022-40
|
https://www.faronics.com/news/blog/computers-which-comes-first-work-or-play
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00261.warc.gz
|
en
| 0.96579 | 421 | 3.328125 | 3 |
SD-WAN, or Software-Defined Wide-Area Networking, is a technique for using software to make wide area networks more intelligent and flexible. It typically begins with connecting sites directly to the internet over commodity broadband links instead of sending all traffic back to a regional office via private lines (which are often based on older, expensive technologies such as MPLS). Configurations and access policies are centrally managed and easily applied across all sites, removing the need to manually administer each WAN device individually.
Why is SD-WAN Important?
Digital transformation, the use of modern, cloud-based applications and technologies to empower new ways of doing business, is driving changes across every industry. The first step for many organizations is to ensure that their increasingly distributed workforce has safe, fast, always-on access from every appropriate location. Unfortunately, traditional ways of connecting widely dispersed stores, branch offices, and remote offices often aren’t up to the challenge. Old hub-and-spoke networks built on private links can quickly buckle under the strain of Office 365, video training and teleconferencing, just to name a few examples. In such environments, IT faces a big challenge: how to optimize network performance without getting stuck on an endless treadmill of throwing money at the problem, upgrading hardware, and reconfiguring the network over and over.
Today, organizations need agile, flexible and cost-effective IT solutions if they want to compete effectively. They need solutions that are easy to implement, that are scalable and that meet the needs of growing businesses. Also, in a world where downtime can affect both reputation and the bottom line, they need to be confident that the networking solutions they choose are always on.
An SD-WAN solves these problems and more, especially with new approaches that also bring enterprise scale and security. That's why it's becoming one of the most popular networking solutions available today.
The Difference Between WAN and SD-WAN
Just a few years ago, organizations looking to enhance their existing WAN environments would need to invest heavily in special network links, network equipment, and expertise in setting it all up. Then, they would often spend days and even weeks configuring the equipment to function properly on their network.
SD-WAN works differently. It lets organizations use whichever inexpensive internet service provider (ISP) connections are available at each location rather than requiring specific, expensive ones such as MPLS lines obtained from telecom providers. Many SD-WAN solutions even mix and match different connection technologies and ISPs intelligently, boosting the overall performance of the network at each site. Configuration of all locations is done centrally, eliminating the need to manually edit setup files on each device. Administrators have full visibility across the entire network, not just a “peephole” glance into individual WAN routers, so they can understand what is happening and respond faster to incidents and potential problems.
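The sketch below is a hypothetical picture of what that central management looks like in practice: one application policy document, rendered into a per-site configuration for every location, instead of a hand-edited file on each WAN router. The site names, link types and QoS classes are invented, and a real controller would push such configurations to edge devices over its own API.

```python
# One centrally managed application policy, shared by every site.
POLICY = {
    "office365": {"preferred_link": "broadband", "qos_class": "high"},
    "voip":      {"preferred_link": "any",       "qos_class": "realtime"},
    "backups":   {"preferred_link": "broadband", "qos_class": "bulk"},
}

SITES = ["store-014", "store-027", "branch-berlin", "branch-austin"]

def render_site_config(site: str, links: list) -> dict:
    """Produce the per-site configuration an SD-WAN controller would push."""
    return {"site": site, "links": links, "app_policy": POLICY}

for site in SITES:
    config = render_site_config(site, ["broadband", "lte-backup"])
    print(config["site"], "->", list(config["app_policy"]))   # controller would push this
```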
The Benefits of SD-WAN
Many businesses or government agencies look to SD-WAN to reduce or eliminate their dependence upon slow, costly MPLS lines (Learn more about SD-WAN vs MPLS). However, that’s just the start of what SD-WAN can do for organizations :
Lower Connectivity Costs – SD-WAN can reduce ongoing operating expenses by switching from expensive MPLS lines to commodity broadband like fiber, cable, DSL, or even mobile technologies.
Higher Performance for cloud apps – With SD-WAN, new lines can be added quickly and easily to sites that need more capacity. And, by connecting sites directly to the internet, SD-WAN reduces the bottlenecks and delays that are common in older WANs.
Multiple Link Resilience – Traditional WAN environments usually have a single network link going into each location. With SD-WAN, multiple links from different ISPs can be used, eliminating a single point of failure that could take the network down.
Greater Agility – When you are opening up new branch offices, time is money. SD-WAN allows you to set up reliable and secure networks fast, using whichever ISPs are most appropriate to each location.
Optimized Use of Resources – SD-WAN enables you to intelligently assign key applications to different links, including internal lines as well as Internet connections, assigning different Quality of Service (QoS) guarantees to each. This lets you apply the right resources in each situation to maximize performance and productivity while minimizing cost.
Combining SD-WAN with Scale and Security
Early SD-WAN implementations focused primarily on connectivity for organizations with dozens of sites. But, new enterprise-focused SD-WAN solutions are making it possible to have more than 1500 sites managed on a single pane of glass and building in the same full next-generation firewall (NGFW) security that’s needed wherever your network touches the internet.
Transitioning to SD-WAN With Confidence
Moving to an SD-WAN solution can help you control costs, enhance business agility and accelerate cloud initiatives with confidence. But no matter what type of network environment you choose, it needs to be secure. With cyber attacks and data breaches on the rise, it is imperative that you protect your data, reputation and bottom line with best in class IT solutions.
Forcepoint Secure Enterprise SD-WAN allows you to safely and efficiently extend your network from your data centers and headquarters out to your remote branch offices and into the cloud. In addition, it gives you seamless control over access to web content and enables you to decrypt traffic, all while safeguarding privacy. For distributed enterprises looking to enhance performance and scalability without compromising security, Forcepoint Secure Enterprise SD-WAN connects and protects your people and the data and applications they need more simply than ever before.
|
<urn:uuid:918accbb-1f9f-4764-9fe2-6d8745c8d976>
|
CC-MAIN-2022-40
|
https://www.forcepoint.com/cyber-edu/sd-wan
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00261.warc.gz
|
en
| 0.935604 | 1,191 | 2.796875 | 3 |
Serious Meltdown and Spectre Flaws Make CPUs Exploitable
Modern Processors From Intel, AMD and ARM Vulnerable to Kernel Data Theft
"Replace CPU hardware."
That's the only full solution listed by Carnegie Mellon University's CERT Coordination Center for serious flaws in microprocessors that run millions of PCs, cloud services, servers, smartphones and other devices.
Thankfully, many security experts believe that full-blown hardware replacement is an option that few individuals or organizations will have to seriously consider when mitigating the flaws.
But they do recommend patching without delay (see Meltdown and Spectre: Patches and Workarounds Appear).
The CPU flaws, known as Spectre and Meltdown, exist in millions of modern processors built by Intel, AMD and ARM, leaving them and the operating systems that run their hardware vulnerable to remote attacks that could steal data directly from the systems. In particular, information leaks could be triggered via side effects of speculative execution, a CPU optimization technique, to steal data from the kernel, the core of the operating system. Such attacks could leave encryption keys, passwords and sensitive data in open and running applications exposed to remote attackers.
"Meltdown breaks the mechanism that keeps applications from accessing arbitrary system memory. Consequently, [potentially malicious] applications can access system memory," according to a group of researchers who independently discovered the flaws. "Spectre tricks other applications into accessing arbitrary locations in their memory. Both attacks use side channels to obtain the information from the accessed memory location."
The researchers say exploitations of Meltdown or Spectre would likely leave no trace.
The attacks could also be used to gain access to all instances on a virtual machine or cloud server. "Testing also showed that an attack running on one virtual machine was able to access the physical memory of the host machine, and through that, gain read-access to the memory of a different virtual machine on the same host," Google's Matt Linton and Pat Parseghian say in a blog post.
The only full fix comes by replacing flawed processors, which in practice would mean acquiring new systems. "The underlying vulnerability is primarily caused by CPU architecture design choices," CERT/CC's vulnerability alert reads. "Fully removing the vulnerability requires replacing vulnerable CPU hardware."
Thankfully, patches and workarounds for the flaw are starting to appear. Some reports have suggested that the workarounds may result in decreased processor speed because the fixes require disabling "speculative execution," which is a time-saving feature.
Intel, however, has tried to downplay such assertions. "Contrary to some reports, any performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time," Intel says in a security alert.
Three Attacks Identified
Researchers have identified three attacks that could be used to exploit vulnerable processors:
- Variant 1: Bounds check bypass (CVE-2017-5753);
- Variant 2: Branch target injection (CVE-2017-5715);
- Variant 3: Rogue data cache load (CVE-2017-5754).
The Spectre attack refers to attack variants one and two; Meltdown refers to variant three.
"For a few Intel and AMD CPU models, we have exploits that work against real software," Google's researchers report.
"All three attack variants can allow a process with normal user privileges to perform unauthorized reads of memory data, which may contain sensitive information such as passwords, cryptographic key material, etc.," they say. "There is no single fix for all three attack variants; each requires protection independently. Many vendors have patches available for one or more of these attacks."
Real-World Threat: Mostly Low
Patch but don't panic, security experts advise. "This is the sort of problem that affects vast swathes of machines, is serious enough that it needs to be fixed but the likelihood of it being used - if you practice good security hygiene - is relatively low," says Alan Woodward, a computer science professor at the University of Surrey.
In part, that's because it's not clear that Spectre or Meltdown attacks are practical for anyone except well-resourced nation-states' intelligence apparatuses.
"It's remarkably hard to make use of snippets of memory you can retrieve anyway. Think about Heartbleed," Woodward says, referring to a vulnerability in OpenSSL, an open-source implementation of the SSL and TLS protocols that's used to secure data sent between clients and servers, that was discovered and publicly detailed in 2014, when patches and fixes were released (see Heartbleed Lingers: Nearly 180,000 Servers Still Vulnerable).
"Was it ever actually used in the wild by criminals?" he says. "This is the [same] sort of complex side-channel attack that you use against high-value targets - it takes a lot of effort, assuming it hasn't been closed off altogether already by patching, and the return is not that great. It may be used by nation-states, but criminals like easier meat. I'd worry about ransomware more."
Coordination Comes Apart
Google's Project Zero says it developed proof-of-concept exploits for Meltdown and Spectre and reported the flaws to Intel, AMD and ARM on June 1, 2017. As part of a coordinated vulnerability program, all involved researchers and notified organizations agreed to not publicly announce the flaw until Jan. 9. But efforts by other researchers led to increased attention on the flaw, leading Google and others to publish full details of the vulnerability on Wednesday.
"I must confess a few of us thought there was something bubbling under when we saw the research papers earlier last year," Woodward says, referring to the Meltdown and Spectre research. "That obviously spurred others" - notably Google - "to look more closely."
Bug bounty expert Katie Moussouris, CEO of consultancy Luta Security, says the premature disclosure demonstrates the difficulty of attempting to coordinate so many organizations and such big fixes.
Today, infosec Twitter (re)learned the following are hard:
1. Fixing design bugs in chips
2. Multiparty Coordinated Vuln Disclosure
3. Differentiating authoritative fact vs speculative hype
4. Holding embargoes
5. Naming things so they don't sound goofy #Meltdown #Spectre pic.twitter.com/K6lSqfwmQu
— Katie Moussouris (@k8em0), January 3, 2018
Intel CEO's Stock Trades Raise Questions
One senior executive whose company's wares are vulnerable to Meltdown and Sceptre is facing questions about whether he inappropriately used knowledge of the vulnerability information in advance of it being made public, for personal gain.
A Securities and Exchange Commission filing in late November by Intel reported that CEO Brian Krzanich sold a large chunk of his Intel stock for about $39 million, apparently netting about $25 million. According to a Motley Fool report, that move left Krzanich with the bare minimum of stock that an Intel CEO would be required to own.
In the wake of last year's Equifax breach, the SEC has signaled that it plans to tighten requirements for when senior executives are allowed to sell stock, including during the period after a security problem has been discovered but before it has been made public (see SEC Plans Cybersecurity Guidance Refresh: What to Expect).
But an Intel spokeswoman tells Information Security Media Group that "Brian Krzanich's sale is unrelated" to the timing of the CPU flaws being discovered or remedied. "It was made pursuant to a pre-arranged stock sale plan (10b5-1) with an automated sale schedule," she says. "Brian continues to hold shares in-line with corporate guidelines."
Patching: The Long Tail
Rather than replacing devices that have vulnerable processors, many information security experts expect that patches and workarounds now being rushed out will be good enough for many, and that only critical environments might need to look at ripping and replacing systems that use the flawed CPUs.
But as previous flaws of this nature have shown, many devices never get patched and continue to be used. And that leaves those organizations and individuals at increased risk from malware-wielding attackers.
"The patches will be available within days, but as with Heartbleed there will be a long tail of those who don't patch," Woodward says. "Obviously, it'll need to be designed out in the microarchitecture of future chips, but the interesting technical question is how can they maintain performance without the sort of mechanism that this is exploiting."
Cybersecurity expert Chris Pierson, CEO of risk advisory firm Binary Sun Cyber, says the CPU flaws are a reminder that engineers need to be taught not just how to build great technology but also more secure technology. "We need to focus on how we are training our engineers to imagine differently and attack what they create to ensure more secure systems from the ground up," he says.
Lessons to Learn
As with Heartbleed and other flaws discovered before and since, the future will inevitably see more major flaws get discovered that put a large swath of a business's systems at risk, says David Stubley, head of Edinburgh, Scotland-based incident response and penetration testing firm 7 Elements.
So plan for these types of scenarios in advance in part by putting in layers of information security defenses designed to block undiscovered attacks from succeeding. "Obviously, prevention is better than cure, and putting in place defenses against attacks should always be a priority," he says. But ideally, organizations will also be practicing a risk-based approach that prioritizes "detecting problems, reacting to them and recovering as quickly as possible," no matter what they are, he says (see Ransomware School: Learn Lessons From How Others Fail).
|
<urn:uuid:64acd5d8-e414-4e1e-abf5-2c2c332c9ee1>
|
CC-MAIN-2022-40
|
https://www.databreachtoday.com/serious-meltdown-spectre-flaws-make-cpus-exploitable-a-10557
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00261.warc.gz
|
en
| 0.956582 | 2,077 | 3.09375 | 3 |
Top ten best British tech inventions
From the computer to the World Wide Web, the iPod to the typewriter, a lot of modern technology started out in the UK
4) TV and possibly radio
In 1880, inventor David Edward Hughes demonstrated to the Royal Society the "aerial waves" he was able to send and receive, but they were dismissed as not being conducted through the air, and the research was not pursued further until after Marconi's famous radio experiments in Cornwall (when the first antenna blew off the cliff in a storm).
Scottish inventor John Logie Baird chalked up a lot of firsts for TV in the UK; the first public demonstration of televised silhouettes at Selfridge's department store in March 1925, the first live transmission of moving images in his Soho lab in October - where he gave a demo of a 30-line, 12.5 frames per second television broadcast in January 1926, the first transatlantic television broadcast from London to New York in 1928, the first outside broadcast from the Derby in 1929 and the first public television service was broadcast by the BBC in November 1936.
Baird also created the first video disk recording system, Phonovision, in 1927, as well as colour (1928), infrared and stereoscopic 3D television. Like the telegraph and radio, television brought news and information but quickly became home entertainment.
5) The Web
Brit Sir Tim Berners-Lee was working at CERN when he proposed a global hypertext system in March 1989, saying the problems scientists faced when tracking all the information involved in building the Large Hadron Collider were going to come to everyone soon, with documents and directories that couldn't keep up with changes.
He designed a system to allow "a pool of information to develop which could grow and evolve with the organisation" and described it as a "web" of notes with links (like references) between them. He created the first World Wide Web server, running info.cern.ch, in 1990 along with the first browser (which was also an editor). The rest is literally history.
6) The PDA
Before we had smartphones, we had personal digital assistants. The Palm Pilot might have captured the market, but the 1984 Psion Organizer was the first PDA and it came from David Potter's UK company, previously known for building some of the first games for the Sinclair ZX81 and Spectrum.
The 1991 Psion Series 3 was a worldwide success plus its EPOC operating system evolved into the Symbian phone OS which dominated the smartphone market for a decade. Before Psion, the idea of a computer you could put in your pocket was pure science fiction; now nearly everyone has one.
|
<urn:uuid:e89aa881-3b6a-4536-b43a-2e4f03cc9a26>
|
CC-MAIN-2022-40
|
https://www.itpro.com/strategy/leadership/22586/top-ten-best-british-tech-inventions/page/0/1
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00261.warc.gz
|
en
| 0.967525 | 657 | 2.671875 | 3 |
Why are data breaches so commonplace? Whether the attacks are against the energy sector, as reported in July 2014[i] when over 1,000 energy companies in North America and Europe were reported to have been compromised, or against other sectors (e.g. Operation Troy, Operation High Roller, Night Dragon), it would appear that no sector is immune from data breaches. One common theme amongst these and other attacks is the initial infection vector, namely exploiting the subconscious of a trusted employee. The modus operandi for most of the common data breaches is to leverage some form of social engineering to coerce the user into an action that facilitates malware infection.
The prevalence of social engineering in many publicly disclosed cyber-attacks demonstrates either an inherent weakness in victims' ability to distinguish malicious communications, or that cybercriminals are using more complex methods to bypass the 'human firewall'. The answer likely lies somewhere in between these two statements, but regardless of the root cause it does demonstrate that the first line of defense is evidently failing. The default position is to blame users as the cause of breaches, which is not entirely fair. Whilst there will be examples where clearly unsafe practices are being employed, our latest whitepaper "Hacking the Human Operating System" demonstrates the techniques attackers use to bypass the consciousness of their targets and manipulate victims by leveraging subconscious levers of influence.
The paper reviews the concept of social engineering: the techniques used within many of the recent cyber-attacks, the levers used to influence victims, the communication channels used, and suggested controls to reduce the risk. Much has been written about social engineering, and the content of these sources varies widely, from definitions to mitigation. The purpose of the paper is to define the concepts and introduce mitigations that go beyond simply suggesting that awareness is a panacea.
Unless we address the first line of defense, data breaches will continue to hog our Twitter timelines, and support the ever burgeoning cost of cybercrime.
Phones are equipped with a number of different buttons or keys: number keys (0-9) for dialling, as well as so-called function keys. The number and type of function keys depend very much on the phone and on whether or not it is connected to a PBX. The simplest function keys, present on almost all phones, are the redial key, "#" and "*". With these keys alone, users can control many of the features modern telephone networks offer.
In companies with large and very powerful PBXs, so-called system phones are often used. These support the functions and features of the telephone system in question and, depending on the type and use, have different kinds of function keys. Frequently used function keys include:
- Speed dial keys, for calling predefined numbers with one touch
- The redial button, to redial the last number called
- The recall key, to put your call on hold so you can dial another number
- The call forwarding key, to divert all incoming calls to a predefined number or the answering machine
- The conference call button, for setting up multiple simultaneous conversations and initiating a conference call
- The Busy Lamp Field (BLF), to display which colleagues are currently in calls
- The intercom key, to connect the user directly with the loudspeaker of the answering party at the touch of a button
The DECT (Digital Enhanced Cordless Telecommunications) standard is the accepted standard in Europe for wireless telephony inside buildings. It enables mobile telephony, with good voice quality and with a range of approximately 50 metres or more around the DECT base station. Outdoor distances of up to 300 metres are theoretically possible.
In Europe, DECT uses the frequency range of 1880-1900 MHz for radio transmission. This range is different from that used by other short-distance wireless technologies such as Wi-Fi and Bluetooth; as a result, the technologies do not interfere with each other and can be operated in parallel. For the transmission of the actual voice audio, the G.726 codec is used, which requires a bandwidth of 32 kbit/s per voice connection. Most private households use the solution known as Single Cell DECT: a single DECT base station connects to the telephone network and allows up to six cordless handsets to register with it. For larger businesses and buildings in which a greater number of cordless phones are to be supported, the Single Cell DECT solution is no longer sufficient. In such cases, Multi Cell DECT installations are used.
Multi Cell installations like this support numerous base stations and provide a comprehensive DECT radio network with multiple cells. This makes it possible, depending on the number of supported cells and base stations, to operate hundreds of handsets throughout the premises. As the individual radio cells overlap within the building, connections are automatically transferred when a handset moves from one cell to another, and calls continue without interruption. This is known as a "seamless handover". For callers, this makes it possible to move freely within the area served by the Multi Cell DECT installation during a phone call.
[TECH BRIEF] GPRS Tunneling Protocol (GTP) Processing
GPRS Tunneling Protocol, or GTP for short, is a mechanism used exclusively in cellular networks to tunnel IP packets through a mobile network core. The protocol was introduced in the late 1990s when the first generation of packetized data, known as General Packet Radio Service or GPRS, was adopted. GPRS is often referred to as 2.5G because it runs over GSM (2nd Generation, or 2G, mobile technology). GTP has moved on from those humble beginnings and is used in an updated form in both 4G (LTE) and emerging 5G cellular networks.
A comprehensive discussion of the GTP protocol and how an Accolade adapter can help with GTP deduplication.
- GTP is used exclusively in mobile networks
- Accolade ANIC adapters can fully parse GTP packets and offer value-added capabilities such as deduplication
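To make the encapsulation concrete, here is a minimal Python sketch that parses the mandatory 8-byte GTPv1-U header and builds a naive deduplication key from the TEID and the encapsulated payload. It illustrates why GTP must be parsed before tunneled traffic can be deduplicated; the function names and the simple key construction are assumptions made for this example, not Accolade's implementation, which performs this work in adapter hardware.

```python
import struct

def parse_gtpu_header(packet: bytes) -> dict:
    """Parse the mandatory 8-byte GTPv1-U header (simplified sketch)."""
    if len(packet) < 8:
        raise ValueError("truncated GTP header")

    flags, msg_type, length, teid = struct.unpack("!BBHI", packet[:8])
    version = flags >> 5          # GTPv1 encodes version 1 here
    has_opt = flags & 0x07        # E, S or PN bit set -> optional 4-byte field

    offset = 8 + (4 if has_opt else 0)
    # Extension headers (E bit) are not walked in this sketch.
    return {
        "version": version,
        "msg_type": msg_type,     # 0xFF (255) = G-PDU, i.e. tunneled user traffic
        "length": length,
        "teid": teid,
        "payload_offset": offset, # where the inner IP packet begins
    }

def dedup_key(packet: bytes) -> bytes:
    """Build a naive deduplication key: TEID plus the inner payload bytes.

    A hardware adapter would typically hash the inner 5-tuple instead;
    this only shows why the tunnel header must be stripped first.
    """
    hdr = parse_gtpu_header(packet)
    return struct.pack("!I", hdr["teid"]) + packet[hdr["payload_offset"]:]
```

In a capture pipeline, two packets that yield the same key on different taps would be treated as duplicates of the same user-plane packet.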
|
<urn:uuid:08a0c146-0391-435e-95e7-74ddf8da7267>
|
CC-MAIN-2022-40
|
https://accoladetechnology.com/tech-brief-gprs-tunneling-protocol-gtp-processing/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00261.warc.gz
|
en
| 0.918909 | 215 | 3.421875 | 3 |
The Internet is a wild place. Malware and attacks are getting sophisticated, and there are many different ways your computer can get infected or compromised.
We can achieve safer web browsing through isolation: use a dedicated machine only for web browsing and do not store any important data on it. If the machine gets infected, we simply erase the hard drive and reinstall the OS.
But that means an extra machine, along with the space, cost and time it requires.
Thanks to virtualization technology, you can now achieve the same isolation virtually, by creating a virtual machine instead of using a separate physical machine.
You can also use it to test-run software and make sure it is legitimate and functions properly.
A virtual machine is much faster and easier to replace: all you need to do is terminate the infected virtual machine and start a new one from a clean snapshot.
Hyper-V, Microsoft's virtualization technology, is now available for Windows 10. You can create a Windows 10 virtual machine and use it for browsing or for testing out software.
IMPORTANT: In terms of Windows 10 licensing, while the Hyper-V feature itself is free, Microsoft treats a virtual machine as an independent machine and requires a separate license for the copy of Windows running inside it.
An alternative is to create an Ubuntu desktop virtual machine instead, which is free.
Hyper-V is available on these Windows 10 editions:
- Windows 10 Pro
- Windows 10 Enterprise
- Windows 10 Education
Windows 10 Home Edition can be upgraded to Windows 10 Pro. To do so open up Settings > Update and Security > Activation. Here you can visit the store and purchase an upgrade.
The upgrade cost, as far as I know, is $99.
In addition, the hardware must meet these requirements:
- 64-bit processor with Second Level Address Translation (SLAT)
- CPU support for VM Monitor Mode Extensions (VT-x on Intel CPUs)
- Minimum of 4 GB of memory; I recommend 8 GB for a better experience
Check CPU Compatibility
Some of these hardware requirements can be hard to verify by hand.
Fortunately, Windows has a built-in command, systeminfo, that reports whether your machine is compatible. To check:
- run PowerShell or Command Prompt (search for PowerShell or command in Cortana and hit 'Enter')
- a console window will show up; run the command systeminfo there and look for the 'Hyper-V Requirements' section near the end of the output
If all four entries say Yes, Hyper-V is available.
If any entry says No, for example Virtualization Enabled In Firmware, you will need to go into your computer's BIOS/UEFI settings and enable virtualization there (provided the CPU supports it).
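If you prefer to script this check, the small Python sketch below runs systeminfo and scans the output for the four requirement lines. It assumes Python is installed on the Windows machine and is purely a convenience; running systeminfo by hand, as described above, is enough.

```python
import subprocess

# The four lines systeminfo prints under "Hyper-V Requirements".
REQUIREMENTS = [
    "VM Monitor Mode Extensions",
    "Virtualization Enabled In Firmware",
    "Second Level Address Translation",
    "Data Execution Prevention Available",
]

def hyperv_ready() -> bool:
    """Return True if every Hyper-V requirement line reports 'Yes'."""
    output = subprocess.run(
        ["systeminfo"], capture_output=True, text=True
    ).stdout

    # If Hyper-V is already enabled, systeminfo reports that a hypervisor
    # has been detected instead of listing the individual requirements.
    if "hypervisor has been detected" in output.lower():
        return True

    lines = output.splitlines()
    for name in REQUIREMENTS:
        matches = [line for line in lines if name in line]
        if not matches or "Yes" not in matches[0]:
            print(f"Requirement not satisfied (or not reported): {name}")
            return False
    return True

if __name__ == "__main__":
    print("Hyper-V requirements met." if hyperv_ready() else "See messages above.")
```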
Enable Hyper-V on Windows 10
- Right click on the Windows button and select Apps and Features
- Select Programs and Features on the right under related settings
- Select Turn Windows Feature on or off
- Select Hyper-V and click OK
Restart your computer after installation has completed.
Obtain Windows 10 installation image
Download the media creation tool
- Go to Download Windows 10 at Microsoft
- click on Download tool now
- run the downloaded Media creation tool
- Read license terms and proceed if you accept the terms
- select Create installation media
- click Next
- click Next again
- select ISO file
- click Next
- select where to save the ISO file
- sit back and wait for the download to complete
Create Windows 10 virtual machine
- open Hyper-V Quick Create from the start menu
- click on Local installation source
- then click on Change installation source
- select the Windows 10 installation ISO file you have
- click Create Virtual Machine
- click Connect
- click Start
- press any key when you see ‘Press any key to boot from CD or DVD…’
- if you miss it, turn off the virtual machine and start again
The normal Windows installation process should start. Proceed to finish the Windows installation.
In addition to Hyper-V Quick Create, there is also Hyper-V Manager, where you can start and stop your virtual machines.
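If you later want to script the routine of tearing down an infected browsing VM and starting a fresh one, the Hyper-V PowerShell cmdlets (Get-VM, Start-VM, Stop-VM) can be driven from a script. The sketch below calls them from Python; the VM name 'Browsing VM' is a placeholder, and the commands normally need to run under an account with Hyper-V administrator rights.

```python
import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its standard output.

    Assumes the Hyper-V PowerShell module is present (it is installed
    together with the Hyper-V feature) and sufficient privileges.
    """
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True,
    )
    return result.stdout.strip()

# List all virtual machines and their current state.
print(run_ps("Get-VM | Select-Object Name, State | Format-Table -AutoSize"))

# Start the browsing VM; later, turn it off before reverting to a clean checkpoint.
run_ps("Start-VM -Name 'Browsing VM'")
run_ps("Stop-VM -Name 'Browsing VM' -TurnOff")
```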
What is e-Procurement
The acquisition and sale of supplies, equipment, works, and services via the internet or other networked system are known as electronic procurement or e-procurement. In other words, e-procurement is the electronic exchange of goods and services between buyers and sellers.
In today’s economy, e-procurement has become a powerful tool for businesses to cut costs and increase efficiency. But what does it mean for government agencies? The private sector's successful implementation of new and innovative e-business and e-commerce models is now compelling governments to rethink their existing hierarchical, bureaucratic organizational models and frameworks.
As the global population continues to rise, governments worldwide face mounting challenges in providing basic social services, such as healthcare, education, and housing. This means they also struggle to meet the demands of citizens for better quality public services.
To address these challenges, governments are turning to innovative solutions such as e-procurement to improve their operations. E-procurement saves them time and money by reducing paper transactions, improving service delivery, and increasing transparency.
In the United States alone, over $1 trillion worth of goods and services are purchased through government contracts each year.
Acquisition and procurement have always been high on the government's priority list. Though not a new concern, the need to procure wisely and avoid debt is more crucial now. Government officials have been seeking ways to automate procurement and get a better hold on spending, allowing for more control and transparency while shortening procurement periods. They are modernizing their procurement systems and processes by leveraging e-procurement solutions.
Benefits of e-Procurement:
- Minimizes cost: The built-in monitoring capabilities in the e-procurement system regulate costs and maximize efficiency while eliminating paperwork and overheads.
- Automates time-consuming Tasks: Fully automated solutions optimize procedures and potentially shorten the time it takes from order creation to fulfillment while offering access to a broader range of products and services. Tasks like auctioning orders and documentation for purchase orders, analyzing and selecting suppliers, negotiating contracts, agreeing and storing supplier contracts, and more could be automated, freeing up staff for other tasks.
- Increased transparency: With e-procurement, all data is centralized and may be shared with stakeholders. E-procurement gives insight into controlling non-compliant spending, identifying areas for supplier consolidation, and using purchasing power to negotiate cost savings.
E-procurement systems have a slew of cutting-edge features to improve procurement efficiency and total cost of ownership, making it a convenient and profitable choice for maximizing profits and enhancing processes. The e-procurement technology is designed to centralize and automate transactions between the organization and its value chain partners.
Technology has the power to transform the way in which we look after the world around us. Below, we discuss five ways in which the latest video technology is being used for faster, more efficient, and more effective environmental monitoring.
As economic development and growth continues across the world, pressure on the natural world is increasing rapidly. In this context, the responsibility to minimize our impact on the planet is arguably greater than it’s ever been. But importantly, there is no panacea; solving the problems requires creative thinking, innovation, and collaboration between a variety of people and organizations.
The environmental power of intelligent video
One innovative tool that is already proving itself invaluable in the field of environmental protection is video technology.
The latest HD video cameras feature artificial intelligence technologies that are incredibly powerful for environmental monitoring. By linking these cameras to a variety of Internet of Things (IoT) devices – sensors, video equipment, unmanned aerial vehicles, and more besides – those working on environmental projects can gather and analyze all manner of data about air, water, soil, and local ecology in real time.
This information can be used in a whole host of inventive ways. Below, we explore some current applications of this technology in more detail.
1. Preventing Air Pollution
As urbanization increases, dealing with air pollution is becoming more and more challenging. And how best to reduce pollutants and improve air quality is the subject of intense debate.
In urban areas, video technology is already being used to monitor air quality on construction sites, where swirling dust is constant, and heavy plant emits significant exhaust fumes. Drones are also being used for environmental monitoring on a large, city-wide scale. In industrial zones, cameras can help management teams to monitor factory gas emissions in real time. And in the countryside, video technology is used to detect a variety of sources of air pollution in farmland, such as burning straw.
2. Tackling Water Pollution
More than 99 percent of Earth's water is unusable by humans and many other living things - only about 0.3 percent of our fresh water is found in the surface water of lakes, rivers and swamps. With this in mind, the protection of global water resources is vital.
Here, video technology can help to eliminate hazardous water pollutants in order to safeguard the health of local residents and wildlife. Using intelligent video analysis tools, users can remotely monitor water sources for a variety of potential issues – from floating objects and changes in water color, to illegal occupation of riverways, illegal construction, and illegal dumping of waste. Today’s video technology is also robust and compact enough to be installed at a variety of locations, from the water source, to river cross-sections, to river mouths.
What’s more, to overcome any potential restrictions of working in more challenging waterside environments, video technology can be utilized in combination with other supporting technologies. For example, when monitoring the draining of sewage at a river mouth, users can install thermal imaging cameras into drones and unmanned vessels. This ensures the clearest view day and night, without the need to be present at the site.
3. Preventing Wild Fires
Wild fires and forest fires can be devastating to people, animals and the environment. But today’s video surveillance technologies can allow a fire to be detected early, before it has fully broken out.
The most advanced thermal and optical cameras will feature fire detection algorithms, employing deep learning and artificial intelligence to provide highly accurate alarms at the earliest stage of a fire. Armed with this technology, users can build a smart early warning and control system, for 24/7 uninterrupted forest monitoring and fire prevention.
4. Intelligent Ecological Protection
Traditional environmental inspection and monitoring processes can involve a lot of traveling on foot and by car, which can be time-consuming, labor-intensive, and even damaging to particularly sensitive locations. Through video, however, authorities can manage sites remotely, monitoring for ecological problems without setting foot in vulnerable areas.
For example, aerial surveillance solutions can be used to quickly detect illegal construction in protected places. Combined with thermal imaging equipment, users can also accurately identify poaching or over-grazing around ecologically fragile or sensitive areas. What’s more, state-of-the-art video cameras with built-in speakers can be used to warn off intruders should they step into sensitive locations.
5. “Zero Waste Cities”
More and more urban residents are paying attention to the issues of garbage and waste treatment in China. As a result, we are seeing an emerging trend for "zero waste cities”, with a focus on reduction, recovery, utilization and disposal of hazardous solid waste.
Here, video can be used with a variety of supporting technologies to enable transparent, intelligent solid waste management – from waste production, to waste transportation and waste disposal.
At the point of production, radio-frequency identification (RFID) equipment can be used to accurately sort, track and trace waste. When waste is transported, vehicles can be equipped with mobile surveillance equipment, which transmits video images to a central management system in real time via a high-speed data network. What’s more, satellite navigation systems can intelligently raise an alarm if there is any deviation from the correct driving route.
When it comes to waste disposal, the latest intelligent video technology can be used to enhance the safety of the disposal process; for example, incinerators can be video monitored to prevent the outbreak of fire.
What’s more, we strive to be at the forefront of the industry. In June 2019, the main session for 2019 World Environment Day was held in Hangzhou, China. The event aimed to raise awareness about air pollution and discuss efforts to tackle it. To get a clearer picture of how tech companies can make a difference in environmental protection, more than 50 international environmental experts from this event visited Hikvision. Together, we discussed current best practices and future developments in the field of environmental protection technology.
Following on from that discussion, Hikvision will continue to explore the potential of video technology to empower environmental protection, monitoring, and early warning. It is our belief that by creating innovative solutions that combine intelligent cameras, drones and sensors, we can transform ecological monitoring, empower the early warning of critical events, and help to maintain essential biodiversity in increasingly innovative ways.
In the years to come, technology is set to become a game-changing element of environmental protection activities all over the world. At Hikvision, we are proud that our technology is playing a key role in this vital process.
Editor’s Note: As BroadbandBreakfast.com begins publishing summaries of the national broadband plan and commentaries on it, we also reproduce here the actual text of Chapter 3 of “Connecting America: The National Broadband Plan,” as produced by the Federal Communications Commission. Chapter 3 is available here; the plan is available in PDF form at http://broadband.gov/download-plan
To see how broadband is transforming American life, walk down a busy street or pay a visit to any school, business or airport. Parents on business trips use their smartphones to check e-mail or watch short videos of their children playing soccer, hundreds, if not thousands, of miles away. Americans work together in real time on complex documents from different desks in the same office, and workers in different offices around the world collaborate via videoconferencing technology. Sales and field maintenance personnel use mobile devices to access inventory information in their businesses, place orders and update records, increasing efficiency and productivity. Students draw on the richness of the Internet to research historical events or watch simulations of challenging math problems. People are using broadband in ways they could not imagine even a few years ago.
To understand how this transformation will evolve, it is important to understand the forces shaping the broadband ecosystem in America today (see Exhibit 3-A).
Exhibit 3-A: Forces Shaping the Broadband Ecosystem in the United States
The broadband ecosystem includes applications and content: e-mail, search, news, maps, sales and marketing applications used by businesses, user-generated video and hundreds of thousands of more specialized uses. Ultimately, the value of broadband is realized when it delivers useful applications and content to end-users.
Applications run on devices that attach to the network and allow users to communicate: computers, smartphones, set-top boxes, e-book readers, sensors, private branch exchanges (PBX), local area network routers, modems and an ever-growing list of other devices. New devices mean new opportunities for applications and content.
Finally, broadband networks can take multiple forms: wired or wireless, fixed or mobile, terrestrial or satellite. Different types of networks have different capabilities, benefits and costs.
The value of being connected to the network increases as more people and businesses choose to adopt broadband and use applications and devices that the network supports. Several factors contribute to their decisions. These include whether they can afford a connection, whether they are comfortable with digital technology and whether they believe broadband is useful.
Networks, devices and applications drive each other in a virtuous cycle. If networks are fast, reliable and widely available, companies produce more powerful, more capable devices to connect to those networks. These devices, in turn, encourage innovators and entrepreneurs to develop exciting applications and content. These new applications draw interest among end-users, bring new users online and increase use among those who already subscribe to broadband services. This growth in the broadband ecosystem reinforces the cycle, encouraging service providers to boost the speed, functionality and reach of their networks.
While the explosive growth in the use of broadband suggests that many aspects of the American broadband ecosystem are healthy, there are many ways America can do better.
Users benefit directly from the applications and content they access through broadband networks. Applications help people purchase products, search for jobs, interact with government agencies and find information related to their health. Users also spend considerable time using broadband for banking, shopping, entertainment, social networking and communication (see Exhibit 3-B).
Home broadband use has increased from roughly 1 hour per month in 1995, to more than 15 hours per month in 2000, to almost 29 hours per month today, as consumers find more valuable applications and content online. Increased hours of use are correlated with increased actual speeds of broadband connections to the home. As connection speeds have grown and more applications have been developed, the amount of data consumers download has increased. Today, the average Internet user with a fixed connection consumes 9 gigabytes of data per month over that connection. But that consumption varies significantly across user types, with some heavy users consuming upwards of 1,000 GB or more each month. Total data use per fixed residential connection is growing quickly, by roughly 30% annually.
Almost two-thirds of the time users spend online is focused on communication, information searching, entertainment or social networking. However, use patterns vary significantly. Except for high-definition video, most applications in use today can be supported by actual download speeds of about 1 Mbps (see Exhibit 3-C).
Exhibit 3-C: Actual Download Speeds Necessary to Run Concurrent Applications (Mbps)
Broadband applications are helping businesses improve internal productivity and reach customers. Many businesses use at least basic applications: 97% of small businesses use e-mail; 74% have a company website. There is evidence that broadband applications may improve individual companies’ productivity. Though gains vary drastically depending on the size and type of firm, as well as breadth of implementation, broadband-based applications may allow faster product development cycles, access to new geographic markets, and more efficient business processes and allocation of resources.
These productivity gains benefit the entire economy. Investment in information and communications technologies accounted for almost two-thirds of all economic growth attributed to capital investment in the United States between 1995 and 2005.
Businesses also find it valuable to collect and aggregate information derived from use of broadband applications. More sophisticated digital profiles of Internet users allow businesses to better understand user buying patterns. This information is also useful for advertising or other purposes. Businesses are creating services tailored to individual consumers that improve their health, help them reduce their carbon footprint, track students’ educational progress and target appeals for charitable, social and political causes.
Businesses often use broadband in ways that are fundamentally different from how consumers use it. For example, high-capacity broadband service is often used to connect PBX’s for business voice and local area networks. These mission critical uses require broadband service with business-grade performance and customer support levels.
Both consumers and businesses are turning to applications and content that use video. Video is quickly becoming an important element of many applications, including desktop videoconference calls between family members and online training applications for businesses. Cisco forecasts that video consumption on fixed and mobile networks will grow at over 40% and 120% per year, respectively, through 2013.
User-generated video and entertainment—from sites such as YouTube and Hulu—are a large portion of the total video traffic over broadband connections. Increasingly, video is embedded in traditional websites, such as news sites, and in applications such as teleconferencing. Skype reports that video calls account for over one-third of its total calls, and that number is growing rapidly.
Video, television (TV) and broadband are converging in the home and on mobile handsets. The presence of broadband connections and TVs in the home could facilitate the development of a new medium for accessing the Web and watching video content. Traditional, or “linear,” television still accounts for more than 90% of all time spent watching video. Video consumed over the Internet still represents a small portion of overall video consumption at less than 2% of all time spent viewing.
Broadband-enabled video could grow as more innovative and user-friendly devices reach the home, allowing access to both traditional linear and Internet content via the TV.
Cloud computing—accessing applications from the Internet instead of on one’s own computer—is also growing as more companies migrate to hosted solutions. Software based in the cloud may allow more small businesses and consumers to access applications that were once only available to large corporations with sophisticated information technology departments in the applications and content markets.
There are several issues that are important for the development of applications and content.
Illegal distribution of copyright-protected content over the Internet continues to be an issue. Although there have been promising results from technologies such as content fingerprinting and from industry-led initiatives to develop guidelines for dealing with illegal content, piracy is still present in the broadband ecosystem.
Increased use of personal data raises material privacy and security concerns. Almost half of all consumers have concerns about online privacy and security, which may limit their adoption or use of broadband. Better security and more control over private information may trigger a more robust applications market.
By making more of its information freely available, government can make it easier for companies to develop applications and content. The Global Positioning System (GPS) industry was born after the U.S. Department of Defense opened its fleet of GPS navigational satellites to the public and the National Oceanic and Atmospheric Administration made public its satellite data. More recently, Sunlight Labs sponsored Apps for America, a competition to build useful applications with federal government data available on Data.gov. One application was FlyOnTime.us, which gives average flight delay information by airline and between U.S. cities. Moving forward, government information can unleash additional new applications that help drive the growth of the broadband ecosystem.
Devices continue to grow in number and variety as more computers, phones and other machines connect to the Internet. New devices have repeatedly revolutionized the personal computer (PC) market in the past three decades. Today, about 80% of U.S. households have some sort of personal computer. Although desktops initially dominated the market, 74% of all new personal computers sold today are laptops. Many predict that, over the next 5 years, growth in the netbook and tablet markets will far outpace growth in the traditional PC market.
The mobile phone market has also seen robust innovation. There were more than 850 different certified mobile products in the United States in 2009. In that same year, approximately 172 million mobile phones were sold in the United States. Of these, 27% were Internet-capable smartphones manufactured by a wide variety of firms, including Apple, HTC, LG, Motorola, Nokia, Palm, RIM, Samsung and Sony-Ericsson. Analysts expect smartphone sales to overtake standard mobile phone sales soon.
Countless other Internet-capable devices come to the market each year. Companies are building smart appliances that notify owners of maintenance issues over broadband networks and communicate with the electric grid to run at off-peak hours when prices are lowest. E-book readers deliver books almost instantly to consumers anytime and anywhere, often at lower prices than traditional editions. Devices monitor patients at home and wirelessly transmit data to doctors’ offices, so problems can be identified before they become too serious.
Devices already are starting to communicate with each other, keeping humans out of the loop. Increasing machine-to-machine (M2M) interaction will occur over the network, particularly for mobile broadband. A pioneering example of machine-to-machine communication for consumer use is General Motors’ OnStar, an M2M system for automobiles in which an onboard sensor automatically notifies OnStar’s network if there is an accident or system failure. M2M communications are used in many industries, often to collect information from sensors deployed remotely. For example, devices tracking the heart rate or blood-sugar level of patients with chronic conditions can transmit the information to a monitoring station that will trigger an alarm for a nurse or doctor where an abnormal pattern is detected. Networked sensors in a power plant can collect and transmit data on how generators are operating, to allow analysis by sophisticated predictive methods that will diagnose potential faults and schedule preventive maintenance automatically.
The emergence and adoption of new technologies such as radiofrequency identification and networked micro-electromechanical sensors, among others, will give rise to the “Internet of Things.” Billions of objects will be able to carry and exchange information with humans and with other objects, becoming more useful and versatile. For example, the Internet of Things will likely create whole new classes of devices that connect to broadband, and it has the potential to place fundamentally different requirements on fixed and mobile networks: these devices will require more IP addresses, will create new traffic patterns that may demand changes in Internet routing algorithms, and will potentially drive demand for more spectrum for wireless communications.
Significant competition and innovation exist for most classes of devices that interact with broadband networks. But one class of devices has not faced substantial competition in recent years: the television set-top box. The Telecommunications Act of 1996 contained provisions designed to stimulate competition and innovation in set-top boxes. Two years later, the FCC, in partnership with industry, developed the CableCARD standard to incent competition in the set-top box market. Yet by 2008, two manufacturers shared 92% of the market, up from 87% in 2006. Only 11 set-top boxes have been certified for retail sale, in contrast to the more than 850 unique handsets that were certified to operate on mobile networks in 2009 alone. In addition, 97% of CableCARD-deployed set-top boxes installed between July 2007 and November 2009 were leased from operators rather than purchased at retail.
Set-top boxes are an important part of the broadband ecosystem. An estimated 39 million set-top boxes were shipped in the United States in 2007 and 2008 combined. The lack of innovation in set-top boxes limits what consumers can do, their choices for consuming video, and the emergence of new uses and applications. It may also be inhibiting business models that could serve as a powerful driver of adoption and utilization of broadband, such as models that integrate traditional television and the Internet.
Network service providers are an important part of the American economy. The 10 largest providers have combined annual revenue of more than $350 billion and annual capital investments in excess of $50 billion. These investments have led to the deployment of multiple networks that today bring fixed and mobile broadband to end-users via the telephone, cable television, satellite and third-generation (3G) and fourth-generation (4G) mobile networks.
Terrestrial Fixed Broadband Availability
Today, 290 million Americans—95% of the U.S. population—live in housing units with access to terrestrial, fixed broadband infrastructure capable of supporting actual download speeds of at least 4 Mbps. Of those, more than 80% live in markets with more than one provider capable of offering actual download speeds of at least 4 Mbps. Meanwhile, 14 million people in the United States living in 7 million housing units do not have access to terrestrial broadband infrastructure capable of this speed. Although housing units without access to terrestrial broadband capable of 4 Mbps download speeds exist throughout the country, they are more common in rural areas (see Exhibit 3-D).
Businesses and community anchor institutions are often served by broadband. Ninety-six percent of all business locations have access to Digital Subscriber Line (DSL) service, and 92% have access to cable broadband service. In addition, 99% of all health care locations with physicians have access to actual download speed of at least 4 Mbps (see Exhibit 3-D). Finally, 97% of schools are connected to the Internet, many supported by the federal E-rate connectivity programs. But crucial gaps exist: More than 50% of teachers say slow or unreliable Internet access presents obstacles to their use of technology in classrooms, and only 71% of rural health clinics have access to mass-market broadband solutions. Further, many business locations, schools and hospitals often have connectivity requirements that cannot be met by mass-market DSL, cable modems, satellite or wireless offers, and must buy dedicated high-capacity circuits such as T-1 or Gigabit Ethernet service. The availability and price of such circuits vary greatly across different geographies, and many businesses and anchor institutions face challenges acquiring the connectivity to support their needs.
Typical advertised broadband speeds that consumers purchase have grown approximately 20% each year. This growth has been driven by a shift in consumer preferences to faster, more advanced technologies, improved performance of different technologies and large investments by service providers in network upgrades.
Both telephone and cable companies continue to upgrade their networks to offer higher speeds and greater capacities. Many have announced specific upgrades. For example, Verizon plans to pass over 17 million homes by the end of 2010 with its FiOS fiber-to-the-premises (FTTP) service, three million more than today. AT&T has announced it will build fiber-to-the-node (FTTN) infrastructure to serve 30 million homes by 2011, 11 million more than today. In addition, many smaller companies plan to aggressively build FTTP networks. If the targets in these public announcements are met, at least 50 million homes will be able to receive peak download speeds of 18 Mbps or more from their telephone company within the next 2 years.
Cable companies have also announced that over the next 2-3 years they will upgrade their networks to DOCSIS 3.0 technology, which is capable of maximum download speeds of more than 50 Mbps. One analyst predicts that by 2013, leading cable companies will cover 100% of the homes they pass with DOCSIS 3.0. The top five cable companies currently pass 103 million housing units, or about 80% of the country’s homes.
Exhibit 3-E: Announced Upgrades to the U.S. Fixed Broadband Network (millions of households covered)
As noted in a recent report from the Columbia Institute for Tele-Information (CITI), history suggests that service providers will meet these announced targets. So it is likely that 90% of the country will have access to advertised peak download speeds of more than 50 Mbps by 2013. The affordability and actual performance of these networks will depend on many factors such as usage patterns, investment in infrastructure, and service take-up rates.
However, these major announced buildouts target areas already served by broadband. It is unlikely there will be a significant change in the number of unserved Americans based on planned upgrades over the next few years, although some small companies may upgrade their networks to support broadband in currently unserved areas.
The performance of fixed broadband connections is often advertised in terms of maximum “up to” download and upload speeds. For example, an end-user with a connection for which download speeds are “up to 8 Mbps” can expect to reach 8 Mbps download speeds, but not necessarily reach and sustain that speed all or even most of the time. Data show that actual speeds experienced by end-users differ considerably from the “up to” speeds advertised by service providers. This distinction is important because it is the actual experience of the consumer (not theoretical technical capabilities) that enables or limits the use of different applications by end-users.
Estimates of the average advertised “up to” download speed that Americans currently purchase range from 6.7 Mbps to 9.6 Mbps, with the most detailed data showing an average of approximately 8 Mbps and a median of approximately 7 Mbps. As noted, the average advertised speed purchased by broadband users has grown approximately 20% each year for the last decade. Upload speeds are significantly lower, as the advertised “up to” upload speed typically is closer to 1.0 Mbps.
However, the actual experienced speeds for both downloads and uploads are materially lower than the advertised speeds. Data indicates the average actual download speed in American households for broadband is 4 Mbps (median actual is 3.1 Mbps) (see Exhibit 3-G). Therefore, the actual download speed experienced on broadband connections in American households is approximately 40-50% of the advertised “up to” speed to which they subscribe. The same data suggest that for upload speeds, actual performance is approximately 45% of the “up to” advertised speed (closer to 0.5 Mbps).
Exhibit 3-G: Advertised Versus Actual U.S. Fixed Broadband Residential Download Speeds (Mbps)
Actual download speeds vary by technology as well. While median actual download speeds for fiber and cable are 5-6 Mbps, median actual download speeds for DSL are 1.5-2 Mbps, and under 1 Mbps for satellite (see Exhibit 3-F). Despite this variation in performance across technologies, on a percentage basis, the gap between advertised and actual speeds experienced by consumers is consistent and prevalent across all types of connection technologies.
This performance gap between advertised “up to” speeds and actual performance is consistent with reports published in a number of other countries. A study in the United Kingdom found that average actual speeds were typically about 57% of average advertised speeds. Studies in New Zealand, Australia, Italy and Ireland have shown similar results.
Mobile Broadband Availability
As of November 2009, according to data from American Roamer, 3G service covers roughly 60% of U.S. land mass. In addition, approximately 77% of the U.S. population lived in an area served by three or more 3G service providers, 12% lived in an area served by two, and 9% lived in an area served by one. About 2% lived in an area with no provider.
These measures likely overstate the coverage actually experienced by consumers, since American Roamer reports advertised coverage as reported by many carriers who all use different definitions of coverage. In addition, these measures do not take into account other factors such as signal strength, bitrate or in-building coverage, and may convey a false sense of consistency across geographic areas and service providers. As with fixed broadband, most areas without mobile broadband coverage are in rural or remote areas. In fact, 3G build out is significantly lower in several states—in West Virginia, only 71% of the population has 3G coverage and in Alaska only 77% have coverage.
Additionally, American Roamer also suggests that 98% of businesses have 3G coverage today, although the data have similar limitations regarding signal strength, bitrate and in-building coverage. While most businesses have wireless broadband coverage, nearly 9% of rural business sites still do not have access, compared to less than 1% of business sites in urban or suburban areas. Finally, while a business location may have coverage, the value in mobile broadband comes when employees can access applications everywhere, which limits the importance of this particular coverage metric.
Several operators have announced upgrades to 4G broadband networks. CITI notes that by 2013, Verizon Wireless plans to roll out Long Term Evolution (LTE)—a 4G mobile broadband technology—to its entire footprint, which currently covers more than 285 million people. AT&T has announced it will test LTE in 2010 and begin rollout in 2011. Through its partnership with Clearwire, Sprint plans to use WiMAX as its 4G technology. WiMAX has been rolled out in a few markets already, and Clearwire plans to cover 120 million people with WiMAX by the end of 2010.
Mobile broadband network availability will change rapidly because of these deployments. Improved spectral efficiencies and significantly lower network latencies are some of the features of 4G networks that could lead to a better mobile broadband experience. For example, the spectral efficiency of mobile broadband networks could improve by over 50% with a transition from early 3G networks to 4G, while improvements relative to state-of-the-art 3G networks are likely to be a more modest 10-30%. The extent to which the effect of these advances are reflected in users’ experiences will depend on a variety of factors, including the total amount of spectrum dedicated to mobile broadband and the availability of high-speed backhaul connections from cellular sites.
Evaluating network availability and performance is much harder for mobile than for fixed broadband. For instance, the quality of the signal depends on how far the user is from the cell tower, and how many users are using the network at the same time. Therefore, the fact that users are in the coverage area of a 3G network does not mean they will get broadband-quality performance. Still, as with fixed broadband, it is clear that the speeds experienced on mobile broadband networks are generally less than advertised. Actual average download speeds have been reported to be as low as 245 kbps, while speeds in excess of 600 kbps are advertised. Actual average upload speeds as low as 106 kbps have been reported, versus advertised rates of 220 kbps or higher.
Both mobile network performance and the availability of mobile broadband rely on the availability of spectrum. Carriers and other broadband-related companies agree that more spectrum will be needed to maintain robust, high-performing wireless broadband networks in the near future.
3.4 ADOPTION AND UTILIZATION
Nearly two-thirds of American adults have adopted broadband at home. While adoption likely will continue to increase, different demographic groups adopt at significantly different rates (see Exhibit 3-I). For example, only 40% of adults making less than $20,000 per year have adopted terrestrial broadband at home, while 93% of adults earning more than $75,000 per year have adopted broadband at home (see Exhibit 3-H). Only 24% of those with less than a high school degree, 35% of those older than 65, 59% of African Americans and 49% of Hispanics have adopted broadband at home. Among people with disabilities, who face distinctive barriers to using broadband, only 42% have adopted. Those living on Tribal lands have very low adoption rates, mainly due to a lack of available infrastructure. What little data exist on broadband deployment in Tribal lands suggest that fewer than 10% of residents on Tribal lands have terrestrial broadband available.
Exhibit 3-I: Broadband Adoption by American Adults by Socio-Economic and Demographic Factors
While it is important to respect the choices of those who prefer not to be connected, the different levels of adoption across demographic groups suggest that other factors influence the decision not to adopt. Hardware and service are too expensive for some. Others lack the skills to use broadband.
Broadband adoption among businesses, by contrast, is quite strong: Ninety-five percent of America’s small and medium-sized businesses have adopted broadband. Only 10% of small businesses are planning to upgrade to a faster Internet connection in the next 12 months.
Subsequent chapters address adoption as well as the other elements of the broadband ecosystem that can help ensure America captures the full promise of broadband.
Nielsen Company, Viewership on the Rise as More Video Content Spans All Three Screens, A2/M2 Three Screen Report 2 (2Q 2009) (Nielsen, Viewership on the Rise), available at http://blog.nielsen.com/nielsenwire/wp-content/uploads/2009/09/3ScreenQ209_US_rpt_090209.pdf; Lee Rainie & Dan Packel, Pew Internet & Am. Life Project, More Online, Doing More 3 (2001), available at http://www.pewinternet.org/~/media/Files/Reports/2001/PIP_Changing_Population.pdf.pdf (last visited Feb. 19, 2009); see also Omnibus Broadband Initiative, Broadband Performance (forthcoming) (OBI, Broadband Performance).
comScore database; see also Bowen, Broadband Performance; Cisco Sys., Cisco Visual Networking Index: Forecast and Methodology, 2008–2013, at 4 (2009) (Cisco, Visual Networking Index), available at http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481360.pdf; Letter from Craig Mundie, Chief Research & Strategy Officer, et al., Microsoft Corp., to Marlene H. Dortch, Secretary, FCC, GN Docket Nos. 09-47, 09-51, 09-137 (Sept. 22, 2009) at 3; University of Minnesota, Minnesota Internet Traffic Studies (MINTS), http://www.dtc.umn.edu/mints/home.php (last visited Feb. 19, 2009).
FCC, National Broadband Plan Survey of Businesses, Dec. 9, 2009–Jan. 31, 2010 (2010) (FCC, NBP Survey of Businesses), available at http://fjallfoss.fcc.gov/ecfs/comment/view?id=6015536973.
Cisco Sys., Cisco IT Executive Presentation: Telepresence 6 (3Q 2009), available at http://www.cisco.com/web/about/ciscoitatwork/downloads/ciscoitatwork/pdf/TelePresence_White.pdf
See Cisco, Visual Networking Index 4; Cisco Sys., Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2009–2014, at 1 (2009), available at http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-520862.pdf.
Stevie Smith, Skype 4.0 Looks to Expand Video Calling, Tech Herald, June 18, 2008, http://www.thetechherald.com/article.php/200825/1273/Skype-4-0-looks-to-expand-video-calling; Shamila Janakiraman, Skype Supports Video Calls on PCs and Embeds Skype Software in HDTVs, TMCnet.com, Jan. 6, 2010, http://voip-phone-systems.tmcnet.com/topics/voip-phone-systems/articles/72051-skype-supports-video-calls-pcs-embeds-skype-software.htm.
RAND Corp., The Global Positioning System, App. B—GPS History, Chronology, and Budgets 247–49 (1995), available at http://www.rand.org/pubs/monograph_reports/MR614/MR614.appb.pdf.
Sunlight Labs, Apps for America 2: The Data.gov Challenge, http://sunlightlabs.com/contests/appsforamerica2/ (last visited Feb. 19, 2010); FlyOnTime.us, http://flyontime.us (last visited Feb. 19, 2010).
See Consumer Elec. Ass’n, US Consumer Electronics Sales & Forecasts 2005–2010, at 33 (2010) (CEA, Electronics Sales & Forecasts) (87 percent); Niki Scevak, Forrester Research, Inc., Forrester Research Online Population Access and Demographic Model (2010) (81 percent); Horrigan, Broadband Adoption and Use in America at 13 (79 percent).
CEA, Electronics Sales & Forecasts 33 (“Netbooks will overtake all other notebooks by 2011”); Goldman Sachs, Adobe Systems Inc. (ADBE) PC Refresh Beneficiary 15 (2009) (citing forecast of about 50 million units by 2013).
Number calculated using Commission data. See Office of Engineering and Technology, FCC, Equipment Authorization Search, https://fjallfoss.fcc.gov/oetcf/eas/reports/GenericSearch.cfm (last visited Feb. 22, 2010). The data represents applications for grants issued for new FCC IDs for equipment class parameters “PCE-PCS Licensed Transmitter held to ear” and “TNE-Licensed Non-Broadcast Transmitter Held to Ear.” Data does not include applications for permissive changes and counts multiple entries for the same FCC ID only once.
Carolina Milanesi et al., Gartner, Inc ., Forecast: Mobile Devices, Worldwide, 2003–2013, at tab 2 (Devices) (2009). We took the information from column L (2012 year), added rows 40 (Basic Phones) and 41 (Enhanced Phones) together (95 million) and compared the number with the number received when rows 43 (Smart Phones—Entry Level) and 44 (Smart Phone—Feature) are added together (109 million). This plan contains several references to Gartner. The Gartner Report(s) described herein, (the “Gartner Report(s)”) represent(s) data, research opinion or viewpoints published, as part of a syndicated subscription service, by Gartner, Inc. (“Gartner”), and are not representations of fact. Each Gartner Report speaks as of its original publication date and the opinions expressed in the Gartner Report(s) are subject to change without notice.
See OnStar Explained, http://www.onstar.com/us_english/jsp/explore/index.jsp (last visited Mar. 1, 2010) (discussing OnStar).
Section 629 covers equipment used to receive video programming—including cable set-top boxes, televisions, and DVRs—as well as equipment used to receive other services offered over MVPD systems, including cable modems. See 47 U.S.C. § 549 (codifying section 629 of the Telecommunications Act of 1996); Implementation of Section 304 of the Telecommunications Act of 1996; Commercial Availability of Navigation Devices, CS Docket No. 97-80, Report and Order, 13 FCC Rcd 14775 (1998).
Cf. CableLabs, Certified, Verified and Self-Verified Cable Products, (Aug. 26, 2009) (reporting 11 certified set-top boxes), with supra note 22 (calculating 850 wireless devices).
Letter from Neal M. Goldberg, Vice Pres. and Gen. Counsel, National Cable & Telecommunications Association, to Marlene H. Dortch, Secretary, FCC, CS Docket No. 97-80 (Dec. 22, 2009) at 1 (presenting report detailing CableCARD deployment and support).
Housing units are distinct from households. “A housing unit is a house, an apartment, a mobile home, a group of rooms, or a single room that is occupied (or if vacant, is intended for occupancy) as separate living quarters.” U.S. Census Bureau, Households, Persons Per Household, and Households with Individuals Under 18 Years, 2000 http://quickfacts.census.gov/qfd/meta/long_71061.htm (last visited Feb. 28, 2010). In contrast, “A household includes all the persons who occupy a housing unit. . . . The occupants may be a single family, one person living alone, two or more families living together, or any other group of related or unrelated persons who share living arrangements.” Id. There are 130.5 million housing units and 111.7 million households in the United States. U.S. Census Bureau, Census Bureau Reports on Residential Vacancies and Homeownership (press release), Feb. 2, 2010, at 3 tbl. 3, http://www.census.gov/hhes/www/ housing/hvs/qtr409/files/q409press.pdf (Census Bureau, Residential Vacancies and Homeownership). Unoccupied housing units (the difference between the count of households and of housing units) include housing units vacant for sale or rent and those for occasional, temporary or seasonal use.
See OBI, The Broadband Availability Gap. Seven million housing units without access to 4 Mbps terrestrial service are outside the cable footprint and are more than approximately 11,000 feet from the nearest DSLAM location; 6 million housing units with 12 million people do not have access to any always-on service with actual download speeds of 768 Kbps or higher as they are more than approximately 16,000 feet from the nearest DSLAM. Note that the analysis excludes satellite broadband because satellite capacity is limited, as discussed in the working paper.
See OBI, The Broadband Availability Gap. In general, availability of access infrastructure capable of supporting a given download speed does not guarantee that service providers will offer service at those speeds. Note that these numbers do not take into account quality of service.
See OBI, The Broadband Availability Gap. Coverage reflects access at download speeds consistent with residential discussion; it does not necessarily reflect access to business-class broadband services.
See OBI, The Broadband Availability Gap; National Atlas of the United States, 2005-06, County Boundaries of the United States, 2001: National Atlas of the United States, Reston, VA (presenting map boundaries).
National Center for Educational Statistics, Internet Access in U.S. Public Schools and Classrooms: 1994–2005, at 4 (2006), available at http://nces.ed.gov/pubs2007/2007020.pdf.
Dep’t of Educ., Evaluation of the Enhancing Education Through Technology Program: Final Report 12 (2009), available at www.ed.gov/rschstat/eval/tech/netts/finalreport.pdf.
See infra Chapter 10; see also Letter from Theresa Cullen, Rear Admiral, U.S. Public Health Service, Chief Information Officer and Director, Indian Health Service, to Marlene H. Dortch, Secretary, FCC (Feb. 23, 2010) Attach. In this instance, “mass market” refers to non-dedicated line solutions for businesses, which are similar to residential broadband but called “small business” or “business packages” by carriers.
Along with aggregate growth in broadband speeds, each technology has shown speed increases internally. For instance, cable typical advertised speeds have migrated from 1 Mbps in the late 1990s to roughly 10 Mbps today, a 20% annual growth rate. See OBI, Broadband Performance.
Robert C. Atkinson & Ivy E. Schultz, Columbia Institute for Tele-Information, Broadband In America: Where It Is And Where It Is Going (According To Broadband Service Providers) at 8 (2009) (Atkinson & Schultz, Broadband Report), available at http://www4.gsb.columbia.edu/citi/; see also Census Bureau, Residential Vacancies and Homeownership 3 tbl. 3.
See Organisation for Economic Co-Operation and Development (OECD), Average advertised download speeds, by country (Sept. 2008) http://www.oecd.org/dataoecd/10/53/39575086.xls (last visited Dec. 22, 2009) (9.6 Mbps); FCC, 2008 Form 477 database (accessed Dec. 2009) (on file with the Commission) (6.7 Mbps). Note that 477 data is collected in speed “tiers” and reflects 2008 data. See OBI, The Broadband Availability Gap.
comScore database. The median speed is more representative of the speeds seen by the typical American consumer because the average speed is skewed upwards by a limited number of high-speed connections (>15 Mbps advertised). comScore monitored 200,000 computers for data usage and consumption, selected to represent American usage broadly (types of services, service providers, geographies, demographics, etc.). Speed testing was attempted every 36 hours at varying times of day and only done when a given computer was otherwise inactive. Speed tests were conducted using packets sent in ever-increasing size to measure average speeds experienced to end-users. Maximum speeds on each connection were determined based on maximum speeds achieved (+/- 10%) and with confirmation on a sample of bills in tandem with the FCC. Speed testing was conducted from the computer/device to the nearest Akamai server. This approach has been used for speed claims by 5 of the top 10 ISPs in America. See OBI, The Broadband Availability Gap (discussing the methodology and data further).
Note that speeds experienced by the end-user can be impacted by many factors including the user’s own equipment, the service provider network and the applications and sites being accessed online. In the first half of 2009, the median actual speed for those that subscribe to broadband in the United States was 3 Mbps download speed. comScore database. Given past annual growth rates in subscribed speed of approximately 20–25% per year, the median could exceed 4 Mbps by the end of 2010. Cf. Akamai, The State of the Internet, 3rd Quarter, 2009, at 10 (Jan 2010) available at http://www.akamai.com/dl/whitepapers/Akamai_State_Internet_Q3_2009.pdf?curl=/dl/whitepapers/Akamai_State_Internet_Q3_2009.pdf&solcheck=1& (registration required) (finding average download speeds to be 3.9 Mbps in the third quarter of 2009); see also OBI, Broadband Performance (discussing past growth rates).
comScore database. Note that fiber in the database refers to both fiber to the premises (FTTP) and short-loop fiber to the node (FTTN). According to the Form 477 database, FTTP advertised download speeds were 3-4 Mbps faster than comScore fiber average. For more data and detail on methodologies see OBI, Broadband Performance.
comScore database. Commission 477 data mirrors comScore advertised speed ranges of different technologies and relative advertised speeds, with important methodology differences for fiber. See Bowen, Broadband Performance.
SamKnows Limited Comments in re NBP PN #24 (Comment Sought on Broadband Measurement and Consumer Transparency of Fixed Residential and Small Business Services in the United States—NBP Public Notice #24, GN Docket Nos. 09-47, 09-51, 09-137, Public Notice, DA 24 FCC Rcd 14120 (WCB, rel. Nov. 24, 2009) (NBP PN #24)), filed Dec. 16, 2009; Ofcom, UK Broadband Speeds 2009, at 8 (2009), available at http://www.ofcom.org.uk/research/telecoms/reports/broadband_speeds/broadband_speeds/broadbandspeeds.pdf.
See American Roamer Advanced Services database (accessed Aug. 2009) (aggregating service coverage boundaries provided by mobile network operators) (on file with the Commission) (American Roamer database); see also Geolytics Block Estimates and Block Estimates Professional databases (2009) (accessed Nov. 2009) (projecting census populations by year to 2014 by census block) (on file with the Commission) (Geolytics databases). The approximate figure of 60% is based on total landmass area. In 2008, this figure was 39.6%. Implementation of Section 6002(b) of the Omnibus Budget Reconciliation Act of 1993; Annual Report and Analysis of Competitive Market Conditions With Respect to Commercial Mobile Services, WT Docket No. 08-27, Thirteenth Report, 24 FCC Rcd 6185, 6257, tbl. 9 (WTB 2009).
Data from American Roamer shows geographic coverage by technology. The actual service quality of data connections experienced by end-users will differ due to a large number of factors, such as location and mobility. Further, the underlying coverage maps do not include information on the level of service (i.e., signal quality and the speed of broadband service) provided; nor is coverage defined by providers in the same way. Thus, coverage as measured here does not correspond to a specific minimum signal quality or user experience. See American Roamer database; see also infra Chapter 4, Section 4.1 (Competition in Residential Broadband Networks) (discussing the American Roamer methodology). Population is based on projected census block figures from Geolytics. See Geolytics databases.
Data from American Roamer applied to business locations will suffer from the same quality of service issues (in-building coverage, varying bit rates) as residential. See American Roamer database; see also GeoResults National Business and Telecom database (accessed Nov. 2009) (projecting business locations) (on file with the Commission) (GeoResults database).
See Atkinson & Schultz, Broadband Report 8; see also Verizon Wireless, Network Facts, http://aboutus.vzw.com/bestnetwork/network_facts.html (last visited Feb. 28, 2010) (providing Verizon’s 4G roll-out plan, and coverage of 285 million people by its 3G network).
See comScore database (discussing data on upload and download speeds); Chetan Sharma & Sarla Sharma, State of the (Mobile) Broadband Nation: A Benchmarking Study (2009), available at http://www.chetansharma.com/State%20of%20the%20Broadband%20Nation%20-%20Chetan%20Sharma%20Consulting.pdf (Reprinted with permission. Copyright © 2009 Chetan Sharma Consulting. All rights reserved. Based on data compiled by Root Wireless, Inc.).
For the purposes of the Plan, we define “Tribal lands” as any federally recognized Tribe’s reservation, pueblo and colony, including former reservations in Oklahoma, Alaska Native regions established pursuant to the Alaska Native Claims Settlement Act (85 Stat. 688), and Indian allotments. The term “Tribe” means any American Indian or Alaska Native Tribe, Band, Nation, Pueblo, Village or Community which is acknowledged by the Federal government to have a government-to-government relationship with the United States and is eligible for the programs and services established by the United States. See Statement of Policy on Establishing a Government-to-Government Relationship with Indian Tribes, 16 FCC Rcd 4078, 4080 (2000). Thus, “Tribal lands” includes American Indian Reservations and Trust Lands, Tribal Jurisdiction Statistical Areas, Tribal Designated Statistical Areas, and Alaska Native Village Statistical Areas, as well as the communities situated on such lands. This would also include the lands of Native entities receiving Federal acknowledgement or recognition in the future. While Native Hawaiians are not currently members of federally-recognized Tribes, they are intended to be covered by the recommendations of this Plan, as appropriate.
Ookla Has Verizon as Fastest Q1 Fixed Provider, T-Mobile Takes Top Spot for Mobile
T-Mobile was also named the most consistent mobile operator and topped 5G download speeds.
WASHINGTON, April 18, 2022 – A market report released Friday by performance metrics web service Ookla named Verizon the fastest fixed broadband provider in the U.S. during the first quarter of 2022, and T-Mobile as the fastest mobile operator during the same period.
Verizon had a median download speed of 184.36 Mbps, edging out Comcast Xfinity’s speed of 179.12 Mbps. T-Mobile’s median mobile speed was 117.83 Mbps.
Verizon had the lowest latency of all providers, according to Ookla, well ahead of Xfinity’s fourth place ranking, yet sat at third for consistency behind both Xfinity and Spectrum.
T-Mobile was also the most consistent mobile operator during the first quarter, achieving an Ookla consistency score of 88.3 percent, which along with median download speed represented an increase from the fourth quarter of 2021.
The company also achieved the fastest median 5G download speed, coming in at 191.12 Mbps.
Verizon also notably increased its 5G download speed from its Q4 metric, attributed in part to new C-band spectrum switched on in January after deployment delays and protests from airlines. For mobile speeds, it stood second behind T-Mobile, bumping AT&T to third. These rankings were the same for mobile measures of latency and consistency.
Yet on 5G availability, AT&T remains ahead of Verizon.
The Samsung Galaxy S22 Ultra came in as the fastest popular device in the country, running at 116.33 Mbps.
Ookla is a sponsor of Broadband Breakfast.
FCC’s Rosenworcel: Broadband Nutrition Labels Will Create New Generation of Informed Buyers
The FCC hopes companies will make it easier for consumers to choose a broadband plan that fits their needs.
WASHINGTON, March 11, 2022 – The Federal Communications Commission’s broadband nutrition labels will usher in a new era where buyers have simple information about what they’re buying, agency Chairwoman Jessica Rosenworcel said Friday.
Consumers should know what they’re signing up for when they spend hundreds “or even thousands” of dollars per year for internet service, she said, speaking at Friday’s commission hearing on the agency’s so-called broadband nutrition label initiative.
The hearing comes on top of a public comment period on the initiative. Many providers are pushing for more flexible regulations on compliance.
When consumers choose a broadband provider for their household, Rosenworcel said, many people make decisions with “sometimes incomplete and inaccurate information.”
“The problem for broadband consumers isn’t a total lack of information, but there’s loads of fine print,” Rosenworcel said. “It can be difficult to know exactly what we are paying for and these disclosures are not consistent from carrier to carrier,” which makes comparing prices and services harder and more time-consuming for consumers.
The comments built on other recent speeches in which Rosenworcel promoted the initiative and encouraged state attorneys general to enforce companies’ commitments through their states’ consumer protection statutes.
The FCC began a plan in 2015 for broadband labels that was voluntary. The new initiative directed by last year’s bipartisan infrastructure law makes this effort mandatory for broadband providers.
Matt Sayre, managing director of cross-sector economic development firm Onward Eugene, said residents in rural Oregon would benefit from simple information when considering broadband providers. At a time when dial-up and satellite-based offerings were primarily available, Sayre said, his neighbors “never used terms like latency or packet loss.”
“These are important aspects of good internet service, but not easily understood by most people,” Sayre said. “Citizens understood they needed better service but were uncertain about what tier of service they needed. This is where broadband labels can be very helpful.”
The hearing was the agency’s first on the initiative.
Small ISP Organizations Push FCC for Flexibility on Broadband Label Compliance
Advocates say strict compliance requirements may economically harm small providers.
WASHINGTON, March 11, 2022 – In comments submitted to the Federal Communications Commission Wednesday, organizations representing small internet providers are pushing for flexible regulations on compliance with a measure that requires clear reporting of broadband service aspects to consumers.
The measure was adopted at the commission’s late January meeting, mandating that providers list pricing and speed information about their services in the format of a “broadband nutrition label” that mimics a food nutrition label. Congress’ bipartisan infrastructure bill, enacted in the fall, required that the FCC adopt such a policy.
The organizations that submitted comments Wednesday say that strict compliance requirements for the new measure may economically harm small providers.
Among those leading the charge are trade associations Wireless Internet Service Providers Association, NTCA – The Rural Broadband Association and America’s Communications Association as well as provider Lumen Technologies.
The comments cited smaller providers’ limited resources as a factor that could put them at a disadvantage in complying with the measure to the FCC’s standards, and several organizations asked that small providers be given extra time to comply.
In separate comments, internet provider Lumen said that the FCC must make multiple changes to its approach if it is to “avoid imposing new obligations that arbitrarily impose excessive costs on providers and undermine other policy goals.”
Last month, FCC Chairwoman Jessica Rosenworcel said that she looks forward to increased coordination between the FCC and state attorneys general for the enforcement of the measure.
The healthcare industry is experiencing a surge in data breaches, security incidents, and criminal attacks—exposing millions of patients and their medical records, according to the Ponemon Institute.
The study reveals that criminal attacks in healthcare are up 125 percent since 2010 and are now the leading cause of data breach. The findings also show that most healthcare organizations are still unprepared to address this rapidly changing cyber threat environment and lack the resources and processes to protect patient data.
According to the FBI, criminals are targeting the information-rich healthcare sector because individuals’ personal information, credit information, and protected health information (PHI) are accessible in one place, which translates into a high return when monetized and sold.
“We are seeing a shift in the causes of data breaches in the healthcare industry, with a significant increase in criminal attacks. While employee negligence and lost/stolen devices continue to be primary causes of data breaches, criminal attacks are now the number one cause,” said Dr. Larry Ponemon, chairman and founder, Ponemon Institute. “Since first conducting this study, healthcare providers are starting to make investments to protect patient information, which need to keep pace with the growing cyber threats.”
A criminal attack is the deliberate attempt to gain unauthorized access to sensitive information, usually to a computer system or network, resulting in compromised data. Criminal attacks are often referred to as cyber-attacks, but can also include malicious insiders and/or paper medical files.
Medical records are greatly susceptible to threats and fraudulent activity because of the value of their information and because they are accessible at many points. The study indicates that medical files, as well as billing and insurance records, are the top stolen targets.
Since sensitive patient data can be easily transmitted and exposed, no organization is immune from data breach. Those especially vulnerable are healthcare organizations such as hospitals, clinics, and private or public healthcare providers, also referred to as “covered entities” (CEs), and their “business associates” (BAs), including patient billing, health plans, claims processing, and cloud services.
A business associate is a person or entity that performs services for a covered entity that involves the use or disclosure of PHI, according to the U.S. Department of Health & Human Services. Small to middle market organizations are at greater risk for data breach, as they have limited security and privacy processes, personnel, technology, and budgets compared to their enterprise or large corporate counterparts.
As part of everyday business, there are exponentially more security incidents than data breaches. Under federal law, all security incidents need to be assessed to determine if they are data breaches that require reporting. The study’s findings indicate that organizations are not thoroughly assessing their security incidents. In fact, one-third of the respondents do not have an incident response process in place.
Key findings of the research:
Data breaches in healthcare are rising
All healthcare organizations, regardless of size, are at risk for data breach. Ninety-one percent of healthcare organizations had at least one data breach; 39 percent experienced two to five data breaches; 40 percent had more than five data breaches over the past two years. In comparison, 59 percent of business associates experienced at least one data breach; 14 percent experienced two to five data breaches; 15 percent experienced more than five data breaches over the same period. Half of all healthcare organizations, both CEs and BAs, have little or no confidence in their ability to detect all patient data loss or theft. Data breaches are costing the healthcare industry $6 billion annually; the average economic impact of data breaches per organization is $2,134,800.
Criminal attacks are the new leading cause of data breach in healthcare
Criminal attacks in healthcare are up 125 percent compared to five years ago. In fact, now, nearly 45 percent of data breaches in healthcare are a result of criminal activity. The percentage of criminal-based security incidents is even higher; for instance, 78 percent of healthcare organizations and 82 percent of BAs had web-borne malware attacks. Yet, only 40 percent of healthcare organizations are concerned about cyber attacks.
Security incidents part of everyday business
Sixty-five percent of healthcare organizations and 87 percent of BAs experienced electronic information-based security incidents over the past two years, and approximately half of all respondents suffered paper-based security incidents. However, organizations lack the financial and personnel resources to protect patient information. More than half of healthcare organizations and half of BAs don’t believe their incident response process has adequate funding and resources. In fact, one third of respondents don’t have an incident response process in place. Healthcare organizations remain unsure if they have sufficient technologies and resources to prevent or detect unauthorized patient data access, loss or theft. In addition, the majority of them fail to perform a risk assessment for security incidents, despite the federal mandate to do so.
The threat of medical identity theft to breached individuals is growing; however, harms are not being addressed
According to the Ponemon/Medical Identity Fraud Alliance study, 2014 Fifth Annual Study on Medical Identity Theft, medical identity theft nearly doubled in five years, from 1.4 million adult victims to over 2.3 million in 2014. Yet, the Fifth Annual Benchmark Study on Privacy & Security of Healthcare Data further reinforces that the harms to individuals affected by a breach are not being addressed. Nearly two-thirds of both types of respondents do not offer any protection services for patients whose information has been breached.
Recently, the IBM i Open-Source Software team ported their IBM i Access ODBC Driver so it could work directly on IBM platforms. While this might seem like a minor technical update, it is worth noting because of what it could mean for companies using IBM i (or managed services for IBM i), especially when it comes to development.
ODBC and the IBM i Access ODBC Driver: What are they?
ODBC stands for Open Database Connectivity. Put in the most basic terms, it is a software protocol that allows the exchange of data between different proprietary databases.
An ODBC driver creates an ODBC interface, which allows applications to access data in database management systems (DBMS) using a set standard, facilitating maximum interoperability. In other words, a single application can access different DBMS directly as needed. The ODBC interface has many advantages when it comes to developer productivity and support, which has made it a widely adopted standard. It is believed to support somewhere in the neighborhood of tens of thousands of custom corporate applications.
To use ODBC, one must have an ODBC driver manager for the target operating system, along with an ODBC driver for the DBMS in question. IBM i and Linux, for example, use a driver manager called unixODBC. The individual drivers vary by name depending on the DBMS: an application talking to Microsoft SQL Server needs Microsoft's ODBC driver for that database, while an application talking to Db2 on IBM i uses the IBM i Access ODBC Driver (the "Access" in its name refers to IBM's client access product family, not to Microsoft Access).
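As a rough illustration of what this looks like from application code, the sketch below uses the third-party pyodbc package to open an ODBC connection and run a query. The driver name, host, credentials, and catalog query are assumptions for illustration and will vary by platform and installation.

```python
# Minimal sketch: query Db2 on IBM i through ODBC, assuming the pyodbc package
# and an installed IBM i Access ODBC Driver. Host and credentials are placeholders.
import pyodbc

conn_str = (
    "DRIVER={IBM i Access ODBC Driver};"
    "SYSTEM=my.ibmi.example.com;"   # hypothetical host name
    "UID=appuser;PWD=secret;"
)

conn = pyodbc.connect(conn_str)
cur = conn.cursor()

# Query a catalog view; the table and column names are illustrative.
cur.execute("SELECT TABLE_NAME FROM QSYS2.SYSTABLES FETCH FIRST 5 ROWS ONLY")
for (table_name,) in cur.fetchall():
    print(table_name)

conn.close()
```

The same pyodbc calls should work unchanged whether the script runs on Windows, Linux, or IBM i itself, as long as a suitable driver is registered with the local driver manager.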
Why Care About ODBC and IBM i Generally?
The above might seem like quite a bit of technical detail if you are not versed in database management. But this is exactly the kind of detail that has ramifications for what one can do, and not do, within certain operating environments.
Take, for example, cross-platform development, talent recruitment, and dashboards.
Easier Cross-Platform Development
ODBC allows a team to develop applications on one system, say a Windows or Linux system, and then move those applications to IBM i servers when ready for deployment, without rewriting large parts of the database-access code (and while using the same ODBC driver). This gives developers more flexibility to use the systems they want to use and to test applications the way they want to test them. It also makes the development of open source software for IBM i a bit smoother.
Easier Talent Recruitment
Many developers these days learn the ODBC API in order to be most useful, instead of learning the details of individual databases. This means that a developer might not know the intimate details about Db2 on IBM i, for example, but can still create a functioning application using ODBC. This is especially important as the talent pool for IBM i is shrinking. (You can read an interesting case study on IBM i talent shortage here.)
Better Dashboards
More and more organizations are using dashboards that query multiple internal databases. Using ODBC allows those dashboards to access those databases easily and improves performance, which means better data analytics insights for the organization.
ODBC is Big with the Cloud
ODBC is also being developed in tandem with the trend of migrating to cloud environments. Even as organizations migrate to cloud-based or hosted solutions, they want to connect their products with existing data sources (like Microsoft Excel spreadsheets or legacy IBM i apps). The simplest way to do this is to use an ODBC driver that can interface with the cloud data source, allowing the application to connect directly and dynamically.
This has led even more organizations to consider managed clouds or hybrid clouds built partly on IBM systems. In fact, we here at Connectria provide many of our clients Managed IBM Clouds and Hybrid cloud solutions IBM Power Systems.
Learn more about Connectria’s IBM i services here.
A HIPAA violation is a non-compliant disclosure of protected health information (PHI) that compromises healthcare data privacy and security. Simply put, any unauthorized use or disclosure of PHI is considered a data breach and can lead to penalties.
HIPAA violation fines can reach up to $50,000 per occurrence, and the highest annual penalty is $1.5 million per violation. Moreover, such breaches can threaten medical practices, jeopardize the institution’s reputation, and even lead to suspension of the guilty party’s medical license or jail time. That’s why medical organizations need to ensure they are HIPAA compliant at all times, including in the software they use.
As technology continues to enhance patient outcomes and engagement, it’s more critical than ever that healthcare institutions know how to comply with HIPAA and avoid data breaches.
The Department of Health and Human Services’ Office for Civil Rights (OCR) has the power to penalize any involved hospital or health-related service for HIPAA violations of any scale.
HIPAA violations are usually uncovered in one of three main ways: through OCR investigations into reported data breaches, through OCR investigations into complaints about covered entities and business associates, and through HIPAA compliance audits.
The consequences of HIPAA violations can be severe, and it’s important to know what fines OCR can apply even if no breach of PHI has occurred. The financial consequences of a data breach depend on the level of negligence, the number of records exposed, and the risk posed by the unauthorized disclosure.
Ransomware is a type of malicious program that typically arrives through email, a suspicious link, or a corrupted file. Usually, the ransom message states that all captured data from the device, or even the entire network, will be wiped or released to the public if the organization fails to pay a fee. However, there is no guarantee that the organization will regain access to its data even after paying up. Such programs may not only wipe essential data but also shut down entire systems, leading to severe consequences.
Malware and viruses can be sent to destroy data stored on devices. If malware reached a medical institution, it could wipe millions of records containing patient data, resulting in severe consequences.
Emailing ePHI to personal email accounts is one of the more common HIPAA violations, and it may even be routine practice at a healthcare facility with a personnel shortage.
Whatever the intention, whether it is to complete work at home or to catch up on a backlog, it is a HIPAA violation. Emailing ePHI to a personal account could also be considered theft, the consequences of which can be far more severe than termination of employment.
If employees talk about patients to coworkers or friends, it is a HIPAA violation leading to severe consequences. Employees should only discuss patient information privately and only with other medical personnel.
HIPAA compliance for email is not always required if a healthcare provider has an internal email network protected by an appropriate firewall. But messages need to be secured in transit if they contain ePHI and are sent outside a protected internal email network beyond the firewall.
It’s essential to ensure that only authorized personnel have access to data centers, server cabinets, vaults, and any other location where ePHI data is stored.
Hacking is a real threat to medical ePHI, and there are many people who want to use this data for ill-disposed purposes. Hence, medical institutions need to ensure that their data is protected against hacking.
HIPAA Journal data breach statistics show hacking is now the main cause of healthcare data breaches.
Approximately half of all data breaches are the result of device theft. If the data stored on devices is not encrypted or password-protected, the device’s loss or theft becomes a more severe issue.
Some doctors and nurses tend to use their own laptops or smartphones to access patient data after hours. In itself, this isn’t a HIPAA violation, but it can easily turn into one if the screen is left unattended and a family member takes a glance.
One of the essential procedures to enforce is the proper disposal of PHI records. Employees should understand that all data that contains PHI, such as social security numbers, medical practices, diagnoses, should be destroyed or wiped from the hard drive.
If any of this information is left lying around, for example in a computer’s recent-files folder or in a trash can, it could get into the wrong person’s hands, and that would be a severe HIPAA violation.
Examples of HIPAA violations, and the lessons we can learn from them, are a good way to understand how to minimize data breaches. Below are some of the latest and biggest violation cases.
A cancer center located in Texas was forced to pay over $4.3 million in civil penalties after three data breaches that led to HIPAA violations. The OCR investigation showed that PHI belonging to more than 34,000 patients was exposed because three devices were stolen. Although the cancer center had encryption policies intended to prevent a breach resulting from theft, the stolen laptop and USB thumb drives had neither encryption nor password protection.
Lessons to learn: the violation occurred because not all devices were encrypted or password-protected and because unprotected flash drives were in use. Encrypting data on portable devices, or transferring it only within a closed network, is far more secure. Such safeguards are needed to protect the integrity, confidentiality, and availability of PHI.
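As a small, hedged sketch of what encryption at rest can look like in practice, the snippet below uses the third-party cryptography package. The file names are illustrative only, and in a real deployment the key would live in a managed key store rather than be generated next to the data.

```python
# Illustrative only: encrypt a PHI export at rest with the "cryptography" package.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, fetch this from a key management system
cipher = Fernet(key)

plaintext = Path("patient_export.csv").read_bytes()            # hypothetical file
Path("patient_export.csv.enc").write_bytes(cipher.encrypt(plaintext))

# Only a service holding the key can recover the data later.
restored = cipher.decrypt(Path("patient_export.csv.enc").read_bytes())
assert restored == plaintext
```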
Idaho State University’s Medicine Clinic disabled the firewall that was protecting a server holding the medical records of 17,500 patients. The firewall was inactive for ten months, leaving the data exposed to unauthorized third parties for an unacceptable period. To resolve the HIPAA violations, the university agreed with OCR to a settlement of $400,000.
Lessons to learn: had the university reviewed its procedures, policies, and systems as required under the HIPAA Security Rule, it could have identified the deactivated firewall earlier and taken prompt action to address the issue.
“Risk analysis, ongoing risk management, and routine information system reviews are the cornerstones of an effective HIPAA security compliance program,” – said Leon Rodriguez, OCR Director.
The FBI informed Touchstone Medical Imaging that one of its file transfer protocol (FTP) servers was accessible over the Internet and allowed anonymous connections to a shared directory. This breach exposed PHI files of 307,839 individuals, and OCR obliged Touchstone Medical Imaging to pay $3,000,000 to resolve the violations.
Lessons to learn: Touchstone had failed to complete a thorough, organization-wide risk analysis to identify all risks to the confidentiality of ePHI. Moreover, the organization didn’t enter into a business associate agreement with vendors before providing access to systems containing ePHI.
An investigation into Anthem Inc’s massive 78.8 million-record data breach of 2015 revealed multiple HIPAA violations. Cybercriminals had breached Anthem’s defenses and gained access to its systems and members’ sensitive data. The attackers gained a foothold in the network through spear-phishing emails sent to one of its subsidiaries. Anthem agreed to a record-breaking settlement of $16,000,000 with OCR to resolve the violations.
Lessons to learn: insufficient technical controls to prevent unauthorized access to ePHI, and inadequate procedures for reviewing access to electronic information systems, led to the HIPAA violations.
This case began when a patient submitted a complaint to OCR about an impermissible disclosure of PHI in a mailing. Sentara Hospitals reported that the breach affected eight individuals, but the OCR investigation discovered that 577 patients had actually been affected; the case was settled for $2.175 million.
Lessons to learn: It’s essential to revise policies and procedures at least annually, or more frequently if appropriate, ensuring the organization’s compliance with HIPAA Rules.
Many of the most common causes of HIPAA violations can be attributed to a lack of employee education about HIPAA. That’s why it’s essential to provide regular HIPAA training for personnel when there are changes to regulations and then keep the rules fresh in everyone’s mind.
Likewise, healthcare organizations and providers must establish business associate agreements with any third-party solution to ensure data confidentiality. Technology is a great tool to streamline and improve patient care, especially when it is used by companies that value and prioritize HIPAA compliance.
In this article, we shared some of the practices that help to prevent data breaches by ensuring high-level security. But there are more procedures that should be implemented such as administrative, physical, and technical safeguards.
NIX has vast experience in providing software engineering services for the healthcare industry. From mobile to web healthcare solutions, we know how to develop HIPAA-compliant software and are ready to offer technical assistance.
As the world enters 2022, cybersecurity has become more critical than ever. The recently discovered Log4j vulnerability is one example of the potential havoc vulnerable open-source software components can wreak. But what is it that makes open source and third-party software components risky to deal with in the first place?
Since anyone can openly view and edit open-source code, it’s not uncommon for vulnerabilities to appear in codebases and attackers to exploit them through different attack vectors. Open-source code also gets released freely through many software licenses, bringing intellectual property and legal issues for organizations that are not fully aware of the licensing terms.
Still, that’s not to say open-source software should be avoided. There are ways for developers to deal with potential threats and assure software security. Software Composition Analysis (SCA) comes in handy for checking vulnerabilities and licensing issues line by line. It is an automated open-source and third-party code scanning tool you can use to save precious time and resources.
In this brief guide, we’ll explain in detail what SCA is and how your development team can utilize it to secure your software from licensing and security risks.
What Is Software Composition Analysis (SCA)?
Software Composition Analysis (SCA) stands for the process of analyzing your codebase for all its components and dependencies. The goal is to assess the vulnerability and code security risk these components pose before attackers exploit them.
Besides discovering code vulnerability issues, Software Composition Analysis can also help address legal or code compliance issues in the software supply chain. For instance, software using third-party code that does not comply with industry standards or lacks an open-source license may lead to future development issues. SCA can help identify such code components in the codebase.
What Is the Software Supply Chain?
Simply put, the software supply chain is the sum of all components that go into a software’s code or other factors that determine the development operations of the code. These may include the CI/CD pipeline, APIs, and software libraries, to name a few.
The supply chain also keeps track of where these components come from, their known vulnerabilities, and the license information. But how exactly is the software supply chain relevant to Software Composition Analysis? The answer has to do with code security.
These days, software dependencies are the norm, not the exception; it’s common for software to have multiple dependencies, including open source components. Any modern software product or service utilizes what is known as the multi-source development model.
Under this model, an organization’s software development team does not code everything from scratch but uses a combination of the following:
- Proprietary code that the development team writes on their own
- Open-source code available under open source licenses
- Third-party code from commercial software vendors
Benefits of Open Source Software
The primary advantage of the multi-source approach is that it significantly reduces development costs and is much easier to implement. Rather than reinventing the wheel every time, a development team can utilize existing solutions that have proven successful. Furthermore, by relying on multiple sources for software components, an organization can prevent vendor lock-in, reducing dependence on a single software vendor alone.
At the same time, this approach can also lead to more code vulnerabilities. This vulnerability arises because utilizing code from multiple sources significantly increases the attack surface of the codebase, which is the set of all points on the system’s perimeter where an attacker may try to penetrate the system.
Since third-party and open source components can form a good chunk of a software’s codebase, analyzing them for vulnerabilities with recommended application security testing practices is critical for ensuring code security.
The Risks of Open Source Software
Although utilizing Open Source Software (OSS) has many benefits, it does not come without its set of risks. OSS and code components can lead to two general risk categories:
Although OSS is generally free to use, that freedom often comes at a cost. As the saying goes, “there’s no such thing as a free lunch”; so too is the case with OSS. The catch is that programmers release open-source software or code components under a license.
On the one hand, we have permissive software licenses that provide developers virtually unlimited freedom with the software and code components. Developers can freely use, modify, or even sublicense the OSS and code components. Examples include Apache, MIT, and the BSD license.
On the other hand are copyleft licenses such as the GNU General Public License (GPL). Under such licenses, any derivations to the original code must also use the same license and licensing terms as the original. An organization that uses a copyleft license may risk its intellectual property, as it is possible for the original software publisher to claim the organization’s work as a derivative of their own. Worse yet, an organization not correctly understanding a license’s terms can run into legal troubles.
The other major OSS risk category is security. Since OSS is freely available by nature, there are no central security and quality control measures.
In theory, open-source code should be just as secure as proprietary code. In reality, since open source code is available to the public, it is much easier to uncover vulnerabilities and potential exploits in the code. Security researchers, analysts, and hackers are always looking for these vulnerabilities.
The Apache Struts vulnerability CVE-2017-5638, a flaw in the Jakarta Multipart parser, is one example of how attackers abuse vulnerable open source code components. It allowed attackers to execute remote command injection attacks by sending crafted, invalid Content-Type headers that the parser mishandled.
OSS security risks are so pervasive that the Open Web Application Security Project (OWASP) listed Vulnerable and Outdated Components (A06:2021) as the sixth most severe security threat to web application security. OWASP published this ranking in 2021 as the most recent list of top ten web application security threats.
How Can Software Composition Analysis Help?
Given the risks of open-source software, developers utilizing such code components should increase their application security. They can do so by opting for Software Composition Analysis tools built to eliminate the risk of exposing vulnerable open-source software components to attackers.
SCA tools generally have a three-part framework to provide DevSecOps teams a complete picture of their codebase and its vulnerabilities. This framework is summarized as follows:
- Inventory Scan: This SCA tool provides a complete inventory of the codebase and its software component dependencies.
- Analysis and Detection: After the inventory scan, an SCA tool analyzes the codebase to detect known vulnerabilities that potentially pose a threat. This analysis may include CVE codes of the vulnerabilities, CVSS scores, and license compliance risks.
- Control and Remediation: Finally, the SCA tool will provide measures to control and remediate the vulnerabilities. These measures may include suggestions on upgrading outdated and vulnerable code components or using alternatives. A software bill of materials is an essential output of this phase, which contains a list of all code components and their respective licenses.
The three-step framework of Software Composition Analysis solutions can help bridge the gap between the analysis and the remediation phase. As a result, an organization deploying the SCA saves time and resources it would otherwise spend detecting vulnerabilities and fixing them.
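To make the three steps concrete, here is a minimal sketch in Python that builds an inventory from pinned requirements.txt entries and checks each one against the public OSV.dev vulnerability database. The endpoint, response fields, and file layout are assumptions for illustration; a real SCA tool does far more, including transitive dependency resolution, license detection, and remediation advice.

```python
# Minimal SCA-style sketch: inventory scan, vulnerability lookup, simple report.
# Assumes the "requests" package and the public OSV.dev query API.
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # assumed public endpoint

def read_inventory(path="requirements.txt"):
    """Step 1: inventory scan - collect pinned name==version pairs."""
    inventory = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # this sketch skips comments and unpinned entries
            name, version = line.split("==", 1)
            inventory.append((name.strip(), version.strip()))
    return inventory

def known_vulnerabilities(name, version):
    """Step 2: analysis and detection - ask OSV for advisories on this component."""
    payload = {"package": {"name": name, "ecosystem": "PyPI"}, "version": version}
    resp = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json().get("vulns", [])

def report(path="requirements.txt"):
    """Step 3: control and remediation input - a crude bill of materials with findings."""
    for name, version in read_inventory(path):
        vulns = known_vulnerabilities(name, version)
        ids = ", ".join(v.get("id", "?") for v in vulns) if vulns else "no known advisories"
        print(f"{name}=={version}: {ids}")

if __name__ == "__main__":
    report()
```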
Do I Need SAST If I Use SCA?
A common misconception is that SCA alone will cover all vulnerabilities in a codebase and that Static Application Security Testing (SAST) is not needed. While it is true that SCA is the newer technology and has a much broader scope, it is not a replacement for static testing altogether.
SCA is better-suited for analyzing third-party and open source components as a rule of thumb. In contrast, SAST lends itself much better for testing proprietary code. As such, be wary of SCA tools that market themselves as a complete replacement for SAST.
Despite the many advantages of open source software components, developers should be aware of its security and licensing risks. The Log4j vulnerability recently demonstrated that not even the most widely used open-source software components are secure. As such, any DevSecOps team worth its weight in salt should strongly consider using better security tools to protect its software from future risks.
As the saying goes for medicine: prevention is better than cure. So too, is the case with application security tools. When attackers discover a vulnerability and exploit it, it is often too late. The better approach for DevSecOps teams is to integrate application security with the rest of the organization’s software development cycle.
SCA is one solution for discovering future code vulnerabilities during the software design and production phase. When paired with SAST, it can deliver even better results through proprietary, open-source, and third-party code.
Kiuwan is a security solutions provider that offers both SCA and SAST tools for an all-in-one application security package. Rather than using separate code security tools for open-source and proprietary code, developers can secure all their code components using a single platform. All Kiuwan products are fully compliant with the best IT security compliance standards, including NIST, OWASP, and CERT. Contact us today to learn more about Kiuwan security solutions and services and how they can benefit your organization.
National and Civil ID
The largest biometrics programs in the world verify citizen and resident identities for multiple purposes. Civil ID programs register each citizen and resident so that governments can deliver services more effectively while minimizing the costs of fraud and corruption, which are unfortunately significant in many countries. In some countries, fraud in national social programs and voting can be presumed, with clear evidence of its high cost.
Governments have a vested interest in knowing who is being issued cards and other tokens, such as driver’s licenses, voter ID’s, and access to benefits such as state sponsored healthcare and other social welfare programs. In addition, an advanced program design like India UID anticipates commercial utilization of the master ID data base, for example, to authenticate micro-financing transactions or to control restricted services like the sales of cell phone SIM cards.
It is also now generally accepted that the lack of a formal identity registration process in many developing countries contributes to the poverty cycle and societal exclusion for the most needy. In fact, one of the core stated purposes of India UID was “to bring identity to the identity-less”, thereby opening up opportunities for educational and financial inclusion.
Only biometrics can ensure that each citizen or resident gets one and only one card or ID number. Demographics such as the family name and an address, even with background checks, are clearly insufficient to accomplish this. For very large scale projects, the initial value of biometrics is to “de-duplicate” the data base to eliminate duplicate and presumably fraudulent entries.
And yet it is clear that only iris recognition, due to its inherently high biometric information content, can deliver the highest levels of accuracy to efficiently de-duplicate programs with tens of millions (or hundreds of millions!) of people.
Advanced thought on the design of national and civil ID enrollment programs also strongly suggests that the authentication and verification stages of these programs should be built into the design from the beginning. Authentication at the time and place of the delivery of services needs to be fast and effective. Iris recognition has the speed and matching performance to be utilized for many authentication programs with the highest confidence.
CIFS is short for “Common Internet File System.” The protocol originated in the 1980s and was initially known as Server Message Block (SMB). SMB was designed to run over the NetBIOS/NetBEUI API in order to extend local file access into a network file system. The directories on remote hosts made available via SMB were called “shares.” CIFS operations include read, write, create, delete, and rename, all performed on files that reside on a remote server. It is worth mentioning that CIFS is a stateful protocol: maintaining this stateful behaviour is required in order to preserve security contexts, crypto security, and file-access semantics such as caching.
CIFS Working –
The client sends request packets to the server; each packet represents a request, for example to open, read, or close a file. On receiving a packet, the server checks whether the request is legitimate and whether the client has the appropriate file permissions. Once validated, it executes the request and returns a response packet to the client. The client then analyses the response packet to determine whether or not the request was successful.
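As a hedged illustration of this request/response exchange from an application’s point of view, the sketch below uses the third-party smbprotocol package (its smbclient module); the server name, share, and credentials are placeholders. Note that modern libraries speak the newer SMB dialects rather than classic CIFS, but the request/response model is the same.

```python
# Illustrative CIFS/SMB access from Python using the "smbprotocol" package.
import smbclient

# Establish an authenticated session; each call below becomes one or more
# request packets that the server validates before returning a response.
smbclient.register_session("fileserver01", username="svc_reader", password="secret")

# List a share and read a file; paths are UNC-style placeholders.
for entry in smbclient.listdir(r"\\fileserver01\reports"):
    print(entry)

with smbclient.open_file(r"\\fileserver01\reports\summary.txt", mode="r") as fh:
    print(fh.read())
```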
CIFS Footprint –
The CIFS protocol is used universally with Microsoft operating systems. Starting with Windows for Workgroups, every Microsoft operating system has been able to act as both a CIFS server and a CIFS client. CIFS has been used for –
- Remote file operations
- Browsing (on Network Neighborhood icon)
- Authentication on NT and Windows 2000
- Remote printer services.
CIFS has been a remarkably successful and widely preferred protocol. This is substantiated by the fact that Unix flavours also implement a CIFS client/server via the Samba program, and Apple computers likewise have the capability to act as CIFS clients and servers.
Benefits of CIFS protocols –
- Concurrency – The CIFS protocol allows multiple clients to access and update the same file simultaneously.
- Fault tolerance – CIFS can tolerate a considerable number of network and server failures, re-establish lost connections, and resume work on open files once a connection is restored.
- Fine-tuned to support slow-speed links – The CIFS protocol is designed to perform well over slow-speed links such as dial-up lines.
- Security – The CIFS protocol supports both anonymous file transfers and secure, authenticated access to named files.
- Scalability – CIFS servers integrate tightly with the operating system to deliver high system performance while remaining easy to administer.
Features supported by the CIFS protocol include –
- File and printer access – A client can perform a plethora of operations, such as opening, reading, writing, modifying, deleting, and closing multiple files on the same server. Multiple clients can also open the same file simultaneously.
- File and record locking – CIFS not only supports file and record locking, but also opportunistic locking of files, which allows clients to cache data for superior performance.
- Safe caching, read-ahead, and write-behind – The protocol supports caching, read-ahead, and write-behind, even on unlocked files, as long as it is safe to do so.
- File change notification – Applications can ask the server to notify them when a file or directory is modified on the server.
- Protocol version negotiation – There are several versions, or dialects, of the protocol; the dialect and its related features are negotiated on a per-connection basis.
- Extended attributes – CIFS supports sub-protocols that provide direct access to extended server functionality.
- Distributed file system support – The protocol supports file system subtrees that appear to clients as a single volume but in reality span multiple volumes and servers. CIFS provides a single, consistent object-naming scheme that can span an array of different servers.
- Server name resolution using DNS – CIFS supports resolution of server names using DNS, allowing access to the files of other organizations over the Internet, or a hierarchical organization of server names inside an organization.
- Batched requests – The protocol supports batching, in which multiple requests are bundled into a single message, minimizing round-trip latencies.
- No dependence on connection-oriented or connectionless transports – The protocol does not rely on a particular type of transport protocol for message exchange between the client and the server.
- Unicode file names – File names may use both the extended ASCII character set and Unicode.
An expression is a combination of common mathematical values and Neurons for ITSM proprietary functions. You can use expressions in a search, quick action, dashboard, workflow, or business rule. The power of expressions gives you more control over your results. You can enter literal text constants, functions, or expression delimiters.
Six types of fields are supported by expressions:
•Boolean: Also known as a logical field. Used for storing Boolean values such as true or false.
•Number: Used for storing numerical values, for both integer and real numbers.
•Text: Used for storing textual (such as string) data, with support for Unicode characters and HTML.
•DateTime: Used for date and time values. You can optionally include time zone information.
•Currency: Used for currency values.
•Link: Used to link one object to another.
Four other field types are available in Neurons for ITSM, but there are no expression use cases for them. You can refer to these types in an expression, but you cannot manipulate them.
See Using Fields for complete information about these data types.
|
<urn:uuid:ef0cf06f-b0c1-4224-a2f1-d4aab78da6f6>
|
CC-MAIN-2022-40
|
https://help.ivanti.com/ht/help/en_US/ISM/2021/admin/Content/Reference/Expressions/Expressions.htm?Highlight=about%20expressions
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00461.warc.gz
|
en
| 0.839448 | 262 | 2.640625 | 3 |
In part two of our desktop buying guide, we talk about one of the most confusing specifications you’ll see whenever you purchase a computer. We’re going to demystify memory, also referred to as RAM.
In the first post of this series, we went over how to choose a CPU/Processor when picking out a new desktop. Our main focus is on choosing a desktop for your business or home office, but we did talk about a few options that exist for more high-end computers that can handle video editing and gaming. We’re going to stick with this theme here, especially when it comes to talking about RAM.
RAM (which stands for Random Access Memory) is often just referred to as Memory. It’s often confused with the amount of data your computer can store, but that isn’t the case. RAM is used to temporarily store data so it can be instantly recalled without having to pull it from the computer’s storage. If you wanted to compare it to the human brain, it’s sort of like short term memory.
The amount of RAM you have determines how much you can have going on at once, and how quickly your computer performs when a lot is going on. If you read the first post in this series, you might ask ‘hey, isn’t that also what the CPU does?’ and you wouldn’t be wrong. The CPU handles instructions. It processes the data that the RAM holds. More RAM means a bigger stack of data that the CPU can quickly process, and a faster CPU means the CPU will process the data faster. They go together.
How Much RAM Does My Computer Need?
The nice thing about buying a desktop these days is you have pretty limited options as far as RAM goes. That isn’t to say there aren’t dozens of brands with their own clock speeds and special features that you can pick and choose from, but PC manufacturers handle all that for you.
If you were building your own PC at home, or customizing a PC on a site that lets you choose from a wide variety of types of RAM, things will feel more complicated. If that’s the case, this guide probably over-simplifies things for you, but you’ve probably figured that out by now.
When buying a new preconfigured desktop (or laptop), the speed and type of RAM is typically figured out for you based on the manufacturer’s model. The real thing you need to look for is how much RAM is included in the device.
The Scrimping Budget End – Generally speaking, the smallest amount of RAM you will typically see for a Windows 10 device is 4 GB (Gigabytes). You can technically get Windows 10 to run on less, but we wouldn’t recommend it for most desktops. Even 4 GB is pretty meager; you won’t be able to do much very quickly on that device. We’re talking very light document editing, and web surfing. Even then, you’ll need to be gentle and not expect much out of your system.
The Low-End – Most “budget” PCs start with 8 GB of RAM. This is plenty to run the operating system and handle some light office work. Editing documents, looking at photos, and surfing the web should work fine. Much more than that will likely tax the system.
The Mid-Range – Even on a budget, check to see if the desktop can be upgraded to 16 GB of RAM. Often the price difference isn’t very significant, and you’ll be able to get more out of your investment. Often, when older computers start to feel slow for our clients, we’ll upgrade the RAM by doubling it for a low-cost way to get more life out of the system.
What’s nice about having 16 GB of RAM is that this is also the entry-point for gaming systems. We’re not saying that 16 is the magic number, but if you are willing to pay a little to reach it, you’ll likely be in pretty good shape if the rest of your computer can handle what you throw at it.
The High-End – Like everything else, this is where we can really push the ceiling up. For example, the new Mac Pro is boasting that it’s capable of supporting up to 1.5 TB of RAM (That’s a whopping 1500 GB). At the time of writing this, no pricing has been made available for configuring the Mac Pro with 1.5 TB of RAM, but rumors say it could cost up to $20 grand.
If you are designing a gaming rig, a video editing system, or a server, you start to get into the realm of more than 16 GB of RAM. Once you get much past 32 GB of RAM (the next tier) it’s time to leave Best Buy and start consulting with an expert (no offense Geek Squad).
Final Thoughts on RAM
Often, you can upgrade your RAM later, depending on the device. This is more likely in desktops and less likely in laptops.
When in doubt, never settle for less than 8 GB and typically try to shoot for 16 GB.
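If you want to see how much RAM an existing machine already has before deciding on an upgrade, Windows Task Manager or About This Mac will tell you, and so will a few lines of Python using the third-party psutil package. The thresholds below simply mirror the rough guidance in this guide and are not hard rules.

# Quick check of installed and currently available RAM using psutil
# (install it first with: pip install psutil).
import psutil

memory = psutil.virtual_memory()
total_gb = memory.total / (1024 ** 3)        # bytes -> gigabytes
available_gb = memory.available / (1024 ** 3)

print(f"Installed RAM: {total_gb:.1f} GB")
print(f"Available right now: {available_gb:.1f} GB")

# Rough rule of thumb from this guide.
if total_gb < 8:
    print("Below the recommended minimum of 8 GB; consider an upgrade.")
elif total_gb < 16:
    print("Fine for office work; 16 GB would give you more headroom.")
else:
    print("16 GB or more: plenty for most office and general use.")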
We hope this guide was helpful! Be sure to check out part 3 in the next couple of days, and if you need any help when it comes to purchasing computer equipment for your business or keeping your existing computers running smoothly, give us a call at 604.931.3633.
|
<urn:uuid:1a232a26-0d15-4c5c-84d2-78c4b8c3fee9>
|
CC-MAIN-2022-40
|
https://www.activeco.com/desktop-buyers-guide-2019-part-ii-how-much-ram-do-i-need/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00661.warc.gz
|
en
| 0.954563 | 1,150 | 2.78125 | 3 |
What Is VoIP and Why Is It Good for Your Business
You may have heard about VoIP, but you are not sure what it is. Understanding how it works will enable you to use this technology to your company’s advantage.
What is VoIP?
VoIP, which stands for voice over internet protocol, is a technology that allows you to make and receive telephone calls using the internet instead of a traditional phone line.
VoIP converts analog – the traditional – voice calls into packets of data. These packets of data travel over the internet just like any other type of data, such as email. Using a VoIP service, you can call a landline or a cell phone. When calling a landline or a cell phone, the packets of data are converted back to telephone signal – the traditional voice signal – before they reach the person you are calling.
You can also make or receive calls using landline telephones via your VoIP service. For this purpose, you need an analog telephone adapter to be connected to your network. In addition, you can call someone via computer-to-computer, with you – the caller – and the receiver speaking into computer microphones and listening through computer headsets or speakers.
A basic VoIP system only requires a broadband internet connection and a VoIP-enabled phone; a computer with VoIP software and a headset; or a traditional phone connected to an adapter.
VoIP versus Unified Communications
Aside from VoIP, you may have heard about “unified communications”. VoIP refers to the basic internet-based telephony system. Unified communications, meanwhile, is a communication system that includes not just VoIP, but other communication services, including conferencing that combines video, data and desktop sharing. You can also instantly monitor the availability of your colleagues through this unified system.
Benefits of Using VoIP
Your company’s aging telephone system only causes productivity slowdowns, as well as loss of revenue due to poor quality and expensive maintenance. VoIP, on the other hand, gives your business the following benefits:
- It allows for individual employee telephone numbers without the need for multiple physical landlines.
- It reduces local and long-distance charges.
- It reduces travel costs as on certain occasions your staff need not have to travel – thanks to online conferencing, a convenient way to use video calls and other collaboration tools.
- It can easily make changes – adds or moves phone extensions and locations – saving your company money and giving your company more flexibility.
- With the unified communications solution, your employees have more ways to collaborate – through voice calls, video chat, web conference and instant messaging.
- Your customers can contact your staff more easily.
Hosted VoIP versus On-site Installed VoIP
Once you have made the decision to replace your aging telephone hardware with VoIP, you have to choose which solution is best for your small business: a hosted VoIP or an on-site installed VoIP.
Both hosted VoIP and premise-based VoIP have their distinct advantages and disadvantages. Some small businesses favor the greater customization and control of premise-based VoIP, while other small businesses favor the scalability and ease of hosted VoIP.
Your decision on what VoIP solution to choose will depend on how your company views VoIP – whether as an operating expenditure (OpEx) or capital expenditure (CapEx). Your organization’s growth plans, as well as the availability of in-house experts to manage the VoIP will also be factored in when deciding to choose between hosted VoIP and premise-based VoIP.
The following are some of the major differences between hosted VoIP and premise-based VoIP:
1. Installation and Management
One of the main differences between the two is that hosted VoIP can be accessed over the internet as a hosted service, while premise-based VoIP is installed on your local network.
A premise-based VoIP runs on your IT infrastructure and connects to the public switched telephone network (PSTN). If you choose this path, your IT staff or IT partner will be responsible for installing the VoIP system and upgrading the routers needed as a voice gateway to support the system. With a hosted VoIP, there is no need to upgrade the router, as the voice gateway is part and parcel of the hosted VoIP provider's network.
Both hosted VoIP and premise-based VoIP require a fast internet connection to transmit voice traffic.
A premise-based VoIP needs on-site resources and experts to manage the system, a hosted solution, on the other hand, does not need them. This management issue is one of the main reasons why small businesses pick hosted VoIP instead of premise-based VoIP. Some larger small businesses may opt for premise-based VoIP if they have in-house experts as they can have more control over the system. For instance, with premise-based VoIP, your company can upgrade the system anytime, instead of relying on the hosted VoIP provider.
A premise-based solution also enables your company to exercise control over which features should be enabled for an enhanced VoIP system. A hosted solution, on the other hand, offers VoIP features as bundles or packages, so your organization cannot pick and choose the individual features that you want.
If for instance, your company hires a large pool of temporary workers during holidays and you need more phone lines, it makes sense to choose the hosted solution. It is simpler to add more phone lines with hosted solution compared to premise-based solution. With hosted solution, adding more users can be enabled with just a few clicks of the mouse. On the other hand, adding users to the premise-based solution will involve installing network and phone system equipment – an ordeal that makes it difficult to scale.
3. Cost and Pricing Models
Companies often choose their VoIP solution based on the number of users that must be supported. The initial costs and regular costs differ between hosted and premise-based solutions. The premise-based solution needs money to buy the necessary hardware, software, as well as the impending installation fees and assistance from the VoIP vendor.
Hosted VoIP, meanwhile, charge users a monthly fee for the service. This fee is dependent upon the number of phone lines your organization subscribes to. The bundles or packages you choose are also factored into the fee, for instance, if your organization opts to avail of advanced features such as the unified communications or video conferencing.
At GenX, we ensure that your organization gets the best VoIP system by using the following 5-step approach:
- Planning and Assessment
- Cost Analysis
- Long Distance Requirements
- Audio and Videoconferencing
- Implementation and Training
Contact us today to get started – (416) 920-3000.
|
<urn:uuid:673eb387-1944-4864-96b7-00a0a0c2e83a>
|
CC-MAIN-2022-40
|
https://www.genx.ca/voip-good-business
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00661.warc.gz
|
en
| 0.937725 | 1,410 | 2.921875 | 3 |
A number of simulations are provided that the student can use to assess their skills and knowledge in relation to the entering of commands, and interpretation of output produced, when monitoring and managing VTAM.
Junior and senior operators responsible for monitoring z/OS system activity and resolving day-to-day z/OS system issues using console commands.
Completion of Interskill’s VTAM Commands course, or equivalent knowledge, and a solid understanding of the z/OS operating system.
After completing these simulations, the student should be able to enter VTAM commands and analyze responses in order to:
- Monitor VTAM activity
- Identify the status and attributes of VTAM-related resources
- Shutdown and Start-up VTAM
Sim 1 – Running a Trace on a Node
Sim 2 – Displaying Attributes of Major and Minor Nodes and the Users Logged onto them
Sim 3 – Displaying Storage Attributes and Buffer Use
Sim 4 – Displaying Terminal and TSO User Attributes
Sim 5 – Displaying Cluster Information
Sim 6 – Displaying Line and Channel Link Information
Sim 7 – Shutdown and Start-up of VTAM
|
<urn:uuid:e9199158-d5cc-4d06-9619-c4e0d81c1856>
|
CC-MAIN-2022-40
|
https://interskill.com/?catalogue_item=vtam-command-simulations&noredirect=en-US
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00661.warc.gz
|
en
| 0.835611 | 253 | 2.859375 | 3 |
Load balancers are critical components in AWS systems, and selecting the most suitable option might prove confusing for some users. Choosing the right option enables users to distribute various tasks across resources, resulting in an optimized process. Operating a network without load balancers may result in significant delays in web services during a spike in user requests.
The modern digital age has led to a significant increase in user requests from social media use and IoT operations, increasing the importance of load balancers as critical components in web traffic management.
Essentially, a suitable load balancer serves as the gatekeeper or contact point between client devices and backend servers, driving application responsiveness, scalability, and availability while reducing the risk of traffic overload (i.e., increased fault tolerance).
Load balancers follow a preset algorithm with varying complexities, determining request distribution across servers. The most widely used algorithms include round-robin, hashing methods, least response time, and custom loads.
Understanding ALBs (Application Load Balancers)
ALBs operate at the application layer, the seventh layer of the OSI (Open Systems Interconnection) model, which drives communications among multiple systems. An ALB receives the request and evaluates listener rules (a listener is a process that checks for connection requests) in priority order, essentially routing requests based on their content to a specific target group.
Users can configure listener rules to route specific requests to different target groups. Additionally, system administrators can conveniently add or remove target groups according to the changing priorities and demands of a project or organization without disrupting the overall flow of requests to the application.
Users can combine ALB with various other AWS services to optimize the availability and scalability of applications. These services may include AWS Certificate Manager, Amazon EC2, Amazon Route 53, AWS WAF, and Amazon CloudWatch.
For instance, Amazon CloudWatch offers users real-time application monitoring capabilities, providing quick error detection and troubleshooting in response to system anomalies or performance delays. With Amazon Route 53, users can create an alias record, listing multiple IP addresses for responding to DNS requests, an effective web solution for managing geographically distributed servers.
How ALB Works
ALB primarily distributes network load in a public cloud to optimize availability and stability. The ALB monitors the health of applications within the seventh layer of the OSI model and will specifically route traffic to healthy registered targets.
Specifically, ALB assesses data packets identified with HTTP and HTTPS headers, providing developers with detailed reports of each check that zooms in on the specific code and HTTP-related errors encountered.
AWS users can apply ALB through internal load balancing in front of AWS EC2 instances, applications (through a REST API), or containers. Multiple services in a complex environment may share a single ALB through path-based routing, such as routing requests to different server clusters based on application needs. Users can route up to 10 applications behind a single ALB.
Core Concepts of ALB
ALB includes various components that users should familiarize themselves with for optimized network configuration. These include rules, targets, priorities, and conditions. ALB rules define the action taken when a client's request matches a specific condition, such as a path pattern. An ALB evaluates rules in order of priority, from the lowest numerical value to the highest.
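To make these concepts concrete, the sketch below uses the AWS SDK for Python (boto3) to add a path-based rule to an existing ALB listener. It is illustrative only: the ARNs, the priority value, and the path pattern are placeholders, and it assumes AWS credentials and a default region are already configured.

# Illustrative boto3 sketch: add a path-based routing rule to an existing
# ALB listener. The ARNs, priority, and path pattern are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

response = elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/xxxx/yyyy",
    Priority=10,  # rules are evaluated in ascending priority order
    Conditions=[
        {
            "Field": "path-pattern",
            "PathPatternConfig": {"Values": ["/api/*"]},
        }
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-servers/zzzz",
        }
    ],
)
print(response["Rules"][0]["RuleArn"])

Requests whose path matches /api/* are forwarded to the api-servers target group; anything that matches no rule falls through to the listener's default action.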
Understanding ELBs (Elastic Load Balancers)
Introduced by AWS in 2009, the ELB, also known as the classic load balancer, is a software-based load balance that automates the traffic distribution process across multiple targets. These targets may include containers and IP addresses.
The ELB operates at the fourth layer (i.e., the transport layer) of the OSI model and forwards requests, based on protocol and port, to the corresponding backend targets. For instance, when an ELB receives a client request on a TCP port, it routes the request based on the rules pre-configured during load balancer setup.
The classic load balancer serves various functions to provide application stacks with added security, easier management, and reliability.
Specifically, ELB provides web networks with functions that include:
- User verification with a public key
- Centralized administration of SSL certificates
- Traffic distribution among registered and healthy instances
- Support for IPv4 and IPv6
ELB provides a single entry point for users of EC2 instances, efficiently distributing traffic across available targets. With configured health checks, ELBs can closely monitor the health of each registered target and limit traffic distribution to healthy locations, improving fault tolerance.
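For a classic load balancer, the health check that drives this behaviour can be set through the API. The boto3 sketch below is illustrative; the load balancer name and the health check target are placeholders.

# Illustrative boto3 sketch: configure the health check a classic ELB uses
# to decide which registered instances should receive traffic.
import boto3

elb = boto3.client("elb")

elb.configure_health_check(
    LoadBalancerName="my-classic-elb",
    HealthCheck={
        "Target": "HTTP:80/health",   # protocol:port/path that the ELB probes
        "Interval": 30,               # seconds between probes
        "Timeout": 5,                 # seconds to wait for a response
        "UnhealthyThreshold": 2,      # failures before a target is marked unhealthy
        "HealthyThreshold": 3,        # successes before a target is marked healthy
    },
)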
How ELB Works
Usually, with classic load balancers, users register instances directly with the load balancer when creating a load balancer node within an enabled availability zone (AZ).
Having servers in multiple AZs within a region improves availability, enabling the ELB to reroute traffic to the remaining AZs if one becomes inaccessible. By default, ELB routes traffic evenly among the enabled AZs. However, this default setting can lead to load imbalance, for example when the number of healthy, responsive targets differs across zones.
The activation of cross-zone load balancing enables each balancer node to distribute traffic across registered targets across all enabled AZs. Alternatively, disabling cross-zone load balancing limits each balancer node to distributing traffic to its specific AZ. As such, cross-zone load balancing mitigates the risks of potential load imbalances.
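Enabling cross-zone load balancing on a classic load balancer is a single attribute change. A minimal boto3 sketch, with the load balancer name as a placeholder:

# Illustrative boto3 sketch: enable cross-zone load balancing so each node
# of a classic ELB distributes traffic across all enabled Availability Zones.
import boto3

elb = boto3.client("elb")

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",
    LoadBalancerAttributes={
        "CrossZoneLoadBalancing": {"Enabled": True},
    },
)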
Comparing ALB vs. ELB
ALBs and ELBs share several core functions and capabilities despite their specialized features. For starters, they feature high availability and scalability, and users can choose to add or remove resources when required without disrupting the overall request flow from applications. ALB and ELB support primary functions that include:
- Sticky sessions — the system assigns an attribute to users via cookies and IP tracking.
- SSL termination — decrypting encrypted traffic before distribution to registered targets.
- Idle session terminations — the load balancer automatically closes a session after a pre-configured period of inactivity.
- Connection draining — a feature that enables users to safely remove instances without prematurely terminating client connections.
- Health checks — providing health checks to identify anomalies in instances for further action.
The Differences Between ALB and ELB
In 2016 AWS improved its original ELB program with ALB, which provides users with additional features and enhanced functions.
For instance, while ELB enables users to add and remove listeners according to changing priorities, the ALB provides the extra feature of viewing and editing listener routing rules. As such, users can conveniently direct routes to a predefined target group.
ALB also rectifies some of the limitations of ELB, which include:
- The unsupported function of forwarding traffic to more than one port per instance
- Incompatibilities with EKS servers that run on Fargate
- Incapabilities of delivering traffic to IP addresses, which prevents traffic to targets outside AWS
- Lack of support for WebSockets and HTTP/2
- Serves only one permitted domain name
One of the most significant differences between ALB and ELB lies in their routing process. While ELB routes traffic based only on protocol and port number, ALB facilitates context-driven routing based on multiple attributes, including query string parameters, source IP, port number, hostname, and path.
Additionally, ALB supports Server Name Indication (SNI), enabling it to bypass the conventional limitations of the ELB in serving a single domain name. ALB offers users native HTTP/2 and WebSocket support via multiple requests delivered through a single connection.
ALB Provides Built-in Capabilities
ELB only allows routing via a single port, while ALB supports distribution through multiple ports and Lambda functions. Lambda functions enable users to manage and run various functions, build websites through serverless code, and create customized ALB targets through serverless methods.
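When an ALB invokes a Lambda function as a target, the function receives the HTTP request as its event and must return the response in a particular shape. The handler below is a minimal sketch; the body content is just an example.

# Minimal sketch of a Lambda function registered as an ALB target. The ALB
# passes the HTTP request in `event` and expects a response in this shape.
import json

def handler(event, context):
    path = event.get("path", "/")  # request path forwarded by the ALB
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello from Lambda behind an ALB ({path})"}),
    }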
ALB offloads and optimizes server functions by performing advanced tasks directly from its program, including a preconfigured redirection or fixed response and user authentication through LDAP, Microsoft AD, and Google. The added load balancer function enables applications to focus on their business logic for increased performance.
Other notable built-in ALB capabilities include:
- Container-based application support, enabling a single instance to host multiple containers listening for network traffic behind the same target group.
- The capability to perform fine-grained health checks at the port level. ALB also has console support for filtering by tags and resources, and resource-based permissions let users apply IAM policies to implement fine-grained controls.
- Providing detailed access logs stored securely in a compressed format.
ALB's access logs include a detailed breakdown of each request, covering the original HTTP request and response details, the request type, timestamps, and fields such as response_processing_time, which records how long the load balancer spent handling the response.
Summary of ALB vs. ELB
Users might find it advantageous to apply ALB in balancing a load of HTTP/HTTPs web traffic with a specific path or host-based routing that drives context-driven requests. These will help expedite processes in complex environments, such as the microservice landscape.
While the ALB might seem like a complete upgrade of the classic ELB, each load balancing solution has its recommended uses. For instance, ALB functions better for content-based routing, especially in response to modern trends like the rise of microservices that require the rapid and reliable distribution of complex applications.
Users who operate from a network with carefully defined load balancers for each server with direct links between URLs and services will likely find it more cost-effective and practical to apply the classic ELB in handling their traffic needs.
Also, users with old AWS accounts need to note that ELB is the only load balancer that works on EC2-Classic and supports application-defined sticky session cookies. However, ELB/classic load balancer users should note that AWS has not released new updates for the program and will retire the EC2-Classic by August 15, 2022, so users should consider a systematic migration to a VPC, which avoids interrupting their existing workload.
What to Expect With Efficient Load Balancing
Upgrading from a classic load balancer can bring users a wide range of benefits that optimize the overall performance of their networks.
Modern load balancers are compatible with the VPC, which supports multiple security features such as SSL/TLS decryption and user verification. System administrators will have the option of establishing private connections through AWS PrivateLink between the VPC and load balancers through a VPC endpoint, enabling secure offline traffic distributions.
Additionally, modern load balancers continue to include more TLS policies, such as ELB Security Policy FS 2018 06, that control TLS and cipher suites. These implementations will optimize forward secrecy (i.e., safeguarding the security of session keys) across application load balancing.
Users can expect uninterrupted traffic across multiple healthy targets throughout multiple AZs, keeping requests and data running with optimized efficiency.
Modern load balancing enables users to function across AWS and on-premise systems via a single load balancer. System administrators will face less friction in managing, migrating, or transferring control (i.e., failover) to on-premise resources when required.
Updated load balancing enables users to autoscale effectively according to varying application workloads. Additionally, users can host multiple applications from a single instance while maintaining individual security groups for optimized communication between applications.
AWS continues to expand its load-balancing options, giving users greater flexibility in distributing their traffic for efficient server functions. The company launched its Network Load Balancer (NLB) in 2017, aimed at handling millions of requests per second. NLB provides users with a wide range of traffic management improvements, including long-running connections that power demanding IoT, gaming, messaging applications, and failover support.
System developers have also created specialized services to manage authentication, authorization, and accounting through AAA computer security solutions for reduced cost, improved scalability, and optimized efficiency.
With similar fees for each load balancing solution, the price point would rarely serve as a deciding factor. Ultimately, the chosen load balancing method depends on the underlying location where a workload runs. However, the overall progression toward ALB use seems clearly on the horizon.
Overall, AWS’s load balancers integrate seamlessly with the rest of its services. Choosing the most suitable loader ultimately depends on the complexity of existing network infrastructure, environment, and demands.
Sign up to access comprehensive monitoring and alerts across AWS networks across unlimited devices.
|
<urn:uuid:87ffad1c-3349-4849-ad61-2223f6b4a34a>
|
CC-MAIN-2022-40
|
https://www.logicmonitor.com/blog/alb-vs-elb
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00661.warc.gz
|
en
| 0.898613 | 2,619 | 2.75 | 3 |
Big data use cases: A variety of business benefits
Analyzing big data helps companies answer critical questions, test hypotheses, and ultimately improve business outcomes. Well-managed big data also allows organizations to identify the location and proliferation of sensitive data and track its use so companies can spot and act on a potential data breach.
Big data projects may be focused on delivering a specific business benefit—for example, using financial transaction data for real-time fraud detection, building a 360-degree view of customer data for deeper customer understanding, or using predictive analytics to detect and replace mechanical components before they fail. Or they may take the form of broader, enterprise-wide modernization initiatives—for example, building a centralized data lake to store all enterprise data for big data analytics, moving data into a cloud-based data lake, or migrating to a cloud data warehouse.
How big data is used in healthcare
Health-related systems and devices generate an immense volume and variety of data, far beyond the information captured in electronic health records and the content of claims and other transactions. Big data in healthcare also includes data from medical and pharmaceutical research, smartphone apps and wearable devices, equipment tracking sensors, public records, government agencies, and far more—so much data that it has an estimated compound annual growth rate between 2018 and 2025 of 36 percent according to the IDC Data Age 2025 Report.
There are many opportunities to use big data in healthcare. Sophisticated analytics monitor the pharmaceutical industry to ensure that drugs are safe, effective, and high-quality throughout their life cycle. Big data from research trials and medical records can be analyzed to look for situations where medications are incompatible or where a medication may have a new, useful application—both of which can improve patient outcomes and spark new, more effective treatments. AI and machine learning technologies rapidly screen medical images and flag potential issues for human review. Streaming data from medical devices and wearables allows healthcare organizations to spot danger signs and intervene early enough to prevent poor patient outcomes. Natural language processing (NLP) can extract nuanced insights from unstructured text notes in patient records, with many potential uses such as identifying lifestyle, geography, social, and other data that indicate patients who need intervention to avoid slipping through gaps in the system.
And of course, only big data analysis can handle the gargantuan, distributed datasets involved in tracking and investigating the human genome. Big data has already allowed us to create new drugs and therapies that are tailored to specific genetic requirements for efficiency. It also lets us identify genetic patterns that predict cancer and other conditions with sufficient accuracy to be clinically useful. Tomorrow, it may enable us to create therapies and clinical trials for rare diseases that currently have no treatment. Read our blog for more big data healthcare insights.
How big data is used in government
Government agencies collect vast amounts of data about the constituents they serve, from who and where they are to the services they need and use and the environment they live in. When government agencies analyze, augment, aggregate, correlate, and consolidate data across silos and organizations, data can lead to deeper insight and greater efficiency in every aspect of operations, from providing citizen self-service offerings to predicting the impact of natural disasters and developing response plans.
One representative use case is San Diego's 2-1-1 service, a free 24/7 resource and information hub that connects residents in San Diego, California with community, health, and disaster services by phone or online. The city uses big data to create and maintain a 360-degree view of callers across 1,200 different agencies and providers. This enables the 2-1-1 center to make appropriate recommendations, know which agencies are serving which callers, and proactively guide callers to services they may not know they need. Big data also gives San Diego city workers greater visibility into the effectiveness and outcomes of referrals so it understands whether people are finding the help they need—and if not, what the city can do to improve their service. Learn more about San Diego 2-1-1 and its data management requirements.
Another example is the United States Geological Survey (USGS), which runs the National Water Quality Assessment Program to collect and interpret data about the quality of America’s major groundwater and surface water systems. By integrating its big data sources, the agency built the largest available dataset about water quality, which it uses to support scientific research, manage natural resources, and minimize health risks from tainted water. Learn more about the USGS and its large-dataset challenge.
How big data is used in automotive
As vehicles increasingly include telematics—telecommunications and monitoring systems, including GPS diagnostics and other smart or connected systems—the automotive industry is actively developing ways to use this big data resource for vehicle-to-vehicle, vehicle-to-infrastructure, and vehicle-to-everything communications. This has ramifications for every aspect of building, driving, and servicing vehicles, from the factory assembly lines to the open road.
Automobile manufacturers can turn real-time streaming diagnostics data into improved automotive designs and more prompt, predictive maintenance alerts for car owners, service centers, and factories. City planners and transportation authorities can leverage big data about traffic flows, parking needs, and road maintenance requirements to make commutes more efficient and pleasant. Insurance companies can analyze data about driving behavior to manage premiums more accurately and provide truly personalized quotes and service. Rental agencies can use both behavioral and diagnostic data to schedule maintenance, determine pricing, control liability, and even ensure that renters report accidents and traffic violations accurately.
Data engineering and the future of big data
Regardless of your industry, big data analytics can help your organization better understand past performance and current trends so you can make confident decisions for the future. Data—and therefore data management—is moving to the cloud, and the questions that businesses ask of their data now require increasingly advanced analytics and new, AI- and ML-enabled technologies to deliver answers.
In response, big data is moving away from on-premises Hadoop to multi-cloud environments in Spark-serverless mode that increase agility, innovation, and cost-effectiveness. As a result, we're seeing the rise of data engineering, a discipline that enables enterprises to build intelligent, end-to-end data engineering pipelines for AI and ML projects and advanced analytics. Data engineering spans data integration, data quality, cataloging, streaming, masking, and data preparation to deliver faster, better insights.
Learn more about big data use cases and best practices
- Big Data in Healthcare: Driving Digital Transformation: Take a closer look at how big data is used in healthcare
- How Big Data Has Changed Finance: An exploration of how big data is used in the finance industry
- 6 Steps to Improve Banking CX through AI: See how retail banks can differentiate themselves with intelligent big data and customer data management
- Federal Data Strategy Use Cases: Explore the data-driven use cases submitted for consideration in development of the Federal Data Strategy.
- Connected Cars: Learn more about how streaming data is disrupting the rental car industry.
- Big Data Management for Dummies: Our detailed guide to understanding big data and big data use cases
- Big Data Characteristics: Learn how the 4 Vs of big data help you improve business operations
|
<urn:uuid:47ac9950-aa96-42e6-8ac0-66ef487818b2>
|
CC-MAIN-2022-40
|
https://www.informatica.com/nz/resources/articles/big-data-use-cases.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00661.warc.gz
|
en
| 0.906485 | 1,477 | 2.71875 | 3 |
If you are a business owner, you’re likely looking for ways to keep your business safe. One of the best ways to mitigate risk is to conduct a risk assessment. While risk assessments can extend to numerous areas, this article refers specifically to IT.
A risk assessment is a process that helps identify risk in your organization, allowing you the opportunity to mitigate it. Performing one is essential because it reduces the likelihood of major incidents, like breaches or downtime, which can severely impact your business. Risk assessments should be completed annually. Read on to learn more about risk assessments and why they are a critical proactive process every business should use.
Are you looking for a risk assessment? Take our free risk assessment quiz here!
What is a risk assessment?
A risk assessment is a formal process where risk is identified and analyzed, intending to manage it. IT risk assessments identify vulnerable technology assets, processes, and services, such as hardware, software, services, onboarding, offboarding, data, intellectual property, and more. Risk assessments help your organization operate more securely by identifying security gaps.
Why is a risk assessment important?
A risk assessment identifies vulnerability and gives you actionable insight.
- A lack of MFA (multi-factor authentication) on internet-facing systems
- A lack of encryption on PCs that house PII (Personally Identifiable Information)
- An insecure way of offering remote access to your team
- Unexpected folder access to confidential information
- Outdated or missing endpoint protection for your PCs and Servers
- Recommendations to add phish testing and regular training for your team
- A vulnerable system that offers full access to sensitive data
- A weak password policy (e.g., shared accounts, reused passwords, short passwords, etc.)
- A weak offboarding policy that allows employees to have access after termination
By identifying and eliminating weaknesses, organizations improve their security posture. A risk assessment helps protect your business against modern cyber threats, and if performed annually, it is an invaluable process.
When to do a risk assessment
Conduct a risk assessment at least once per year or after significant change within the organization (for example, after a merger) or IT infrastructure (for example, installing new firewalls or servers). IT companies often conduct risk assessments during onboarding to identify existing issues. Businesses improve their overall cybersecurity posture by conducting risk assessments at least once a year to remain current with emerging best practices.
A risk assessment is a formal process to identify risk within an organization. Risk is identified and analyzed with the goal of mitigation. A risk assessment is essential because it identifies vulnerability and gives actionable insight. Conduct a risk assessment at least once a year or whenever a significant change is made. Risk assessments help organizations improve their security posture so long as there is follow-through on the insights received.
|
<urn:uuid:d3189d8a-6923-4db5-9b35-c9032d069752>
|
CC-MAIN-2022-40
|
https://www.sirkit.ca/risk-assessments-and-why-your-business-needs-one/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00061.warc.gz
|
en
| 0.941773 | 584 | 2.546875 | 3 |
What is MIMO?
MIMO or ‘multiple-input, multiple-output’ is a wireless technology that, when deployed, uses multiple antennas at both the source (transmitter) and the destination (receiver). This allows for more data to be sent and received at the same time, unlike in conventional wireless communications where only a single antenna is used.
MIMO utilises a natural radio-wave phenomenon known as ‘multipath’ or ‘multipath wave propagation’.
Multipath effects occur when radio waves encounter obstructions such as buildings, walls, hills, or other objects and scatter, taking various different paths and reaching the destination at different times. Without MIMO this can result in fade-out, intermittent reception or total cut-off.
In the past, multipath caused interference and significantly slowed down wireless networks. However now, by using multiple smart transmitters and receivers, MIMO technology adds another dimension and increases performance and range.
By enabling antennas to combine their data streams that are arriving from different paths at different times, receiver signal-capturing is greatly increased using MIMO.
This methods ability to multiply the capacity of the antenna links has made it an essential element of current wireless standards including Wi-Fi, HSPA+, WiMAX and LTE.
MIMO is one of several forms of smart antenna technology, the others are MISO (multiple input, single output) and SIMO (single input, multiple output). Legacy wireless devices use SIMO and so can only receive one spatial stream at a time.
Due to its nature, MIMO is being adopted more and more with the development of IoT and 5G. BT recently announced a successful collaboration with Bristol and Lund Universities in their quest for highly efficient 5G wireless connectivity.
Get all of our latest news sent to your inbox each month.
|
<urn:uuid:32cc071c-fa57-4dfb-bbec-3767819adeb3>
|
CC-MAIN-2022-40
|
https://www.carritech.com/news/what-is-mimo/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00061.warc.gz
|
en
| 0.931812 | 398 | 3.71875 | 4 |
Hackers have once again demonstrated that the GSM (Global System for Mobile Communications) standard, the most widely used mobile phone standard in the world, can be hacked.
The GSM Association (GSMA) has acknowledged the technology’s flaw, but it said the weakness is not a serious threat and that hackers have not been able to create a practical attack capability that can be used on live, commercial GSM networks.
However, the danger of this latest hack is that it was done with relatively inexpensive equipment, including a PlayStation 3 and open source software, showing that it’s getting cheaper and easier to hack wireless communications.
The Latest Hack
“It was stunning to see what (US)$1,500 of USRP can do,” they wrote in a summary of their presentation at the Chaos Club congress. “Add a weak cipher trivially breakable after a few months of distributed table generation, and you get the most widely deployed privacy threat on the planet.”
GSM is used by nearly 800 mobile carriers in 219 countries worldwide, representing more than three billion connections, according to GSMA statistics.
USRP stands for “Universal Software Radio Peripheral.” A USRP is a high-speed USB-based board for making software radios. It has an open design with freely available schematics, and comes with free software to integrate with the GNU Radio free software toolkit.
Nohl and Paget have created a code book, or lookup table, for the A5/1 cipher using fast graphics cards such as Nvidia and ATI/AMD cards, and Sony PlayStation 3s. While compiling such a code book would take more than 100,000 years on a single CPU, it took three months on 40 Nvidia Cuda nodes.
The most important thing about this latest hack is that it used relatively inexpensive, widely available technology. “Processing power is increasing dramatically, with GPU (graphics processing units) in particular,” said Rob Enderle, principal analyst at the Enderle Group. “This is only the tip of the iceberg when it comes to how this power could be used to hack into otherwise secure data streams.”
Another danger lies in the fact that GSM is being used in an increasing range of sensitive applications, hackers Nohl and Paget said. These include voice calls, banking through SMS and access control.
“Cloning, spoofing, man-in-the-middle [attacks], decrypting, sniffing, crashing, DoS’ing, or just plain having fun — if you can work a BitTorrent client and a standard GNU build process, then you can do it all too,” hackers Nohl and Paget said. “Prepare to change the way you look at your cellphone forever.”
However, at present, it’s not quite clear just who will be impacted. “Opinions are split, even among technologists,” Ozzie Diaz, CEO of wireless intrusion prevention firm AirPatrol, told TechNewsWorld. “Some say this latest hack is significant because wireless networks are purported to be some of the most secure networks in the world, but others say it won’t be an issue at all when you get to 3G and beyond.”
Only select people will probably be at risk from GSM hacks, Enderle told TechNewsWorld. “The most exposed are likely to be celebrities, top executives or board members of large public corporations, politicians, and intelligence organizations,” he explained.
Federal government officials could also be at risk, depending on their jobs and how mission-critical their work is, AirPatrol’s Diaz pointed out.
“The GSMA heads up a security working group, which looks at all issues related to security, and this isn’t something we take lightly at all,” association spokesperson Claire Cranton told TechNewsWorld. The association has a new security algorithm that’s being phased in, she added.
The association might speed up its work in moving to a new algorithm, A5/3. “The GSMA’s security group is set to have a meeting in February to decide whether it will be necessary to upgrade to a stronger code,” Julien Blin, CEO and principal analyst at JBB Research, told TechNewsWorld. “This could be a game-changing factor.”
However, the A5/3 algorithm is also insecure, hackers Nohl and Paget contended. Replacing A5/1 with A5/3 may not be enough because the A5/3 cipher, known as “Kasumi,” has been broken by academic researchers, and A5/3 uses the same keys as A5/1.
In fact, the A5/0, A5/1 and A5/2 algorithms were all broken in 1998, according to a Black Hat briefing in 2008. Key material is reused, key recovery systems are available, and the key is artificially weakened, according to the briefing.
The GSMA does not see these hacks as significant. “Over the past few years, a number of academic papers setting out, in theory, how the A5/1 algorithm could be compromised have been published,” according to a statement the association released. “However, none to date have led to a practical attack capability being developed against A5/1 that can be used on live, commercial GSM networks.”
The GSMA admits that hackers could attack the A5/1 algorithm using a lookup table, but it seems to think the table’s size — 2 TB — will make that difficult. Also, it pointed out that before a practical call can be attempted, the GSM call has to be identified and recorded from the radio interface, which is a complex task. “A hacker would need a radio receiver system and the signal processing software necessary to process the raw radio data,” the association said. “The complex knowledge required to develop such software is subject to intellectual property rights, making it difficult to turn into a commercial product.”
Criminals often disregard intellectual property rights, however, and the USRP seems to have gotten over the difficulties of processing raw radio data, at least to some extent.
On the other hand, the industry’s move to UMTS, 3G and 4G might render the latest hack essentially moot. “3G uses a different algorithm set,” the GSMA’s Cranton pointed out.
“Most carriers are on their way to 2.5G or 3G or even 4G, so the GSM hack might be a problem that’s too late to be called a problem,” AirPatrol’s Diaz said. “It may not be an issue at all once you get to 3G and beyond.”
|
<urn:uuid:588b3940-c336-4b67-856f-6e9878c536ef>
|
CC-MAIN-2022-40
|
https://www.ecommercetimes.com/story/hackers-jimmy-gsm-cellphone-encryption-68997.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00061.warc.gz
|
en
| 0.947795 | 1,456 | 2.734375 | 3 |
Roaming planets, untied to a solar system or stellar orbit, were recently found by a team of astronomers, according to a report in Nature.
The original goal was to look for unknown masses in the galaxy’s composition that they believed could be brown dwarfs or other material, Bennett, who has been working on some of the observations since 1990, told TechNewsWorld.
As the investigations continued and became more advanced, however, they began to entertain the idea that the masses they found were in fact planets about the size of Jupiter.
The Rambling Planets
Unlike the planets we recognize in our solar system like Mars or Venus, which orbit around our sun, these planets seem to be unattached to a stellar path.
“The observations suggest that close planet interactions scatter planets out of their systems. It could be that some of these planets are very far out in their planetary systems, but the indication is that probably most of them are not bound to a star,” said Bennett. It could be that there are more planets in the solar system that were also ejected, he added.
Floating throughout the Milky Way Galaxy, these planets — researchers identified about 10 of them — were probably formed when they were “ejected” from different planetary systems following a violent spatial encounter.
The idea of planets kicked out of their solar systems isn’t new. In fact, in our own system, the Nice model — a possible scenario for our planetary formation — suggests that Jupiter, Saturn, Uranus and Neptune did some rearranging and migrating of their own during their evolutionary stages. But this is the first time observations and scientific data can back up the idea of these lonely planets.
Researchers agree the discovery could change the way they imagine the origins of solar systems.
“In general, this discovery will change the idea of how the planetary system forms,” Sumi told TechNewsWorld.
In the past, the idea of planetary formation was based off what scientists learned about our planet’s formations and its orbit around the sun. With the new data, however, researchers can begin to wonder if “ejected” planets are more abundant in space than previously thought.
“This changes our whole perspective of how planets relate to the population of other solar systems out there. Solar systems like Earth are maybe the exception rather than the rule,” said Mike Malaska, solar system ambassador for NASA/JPL.
The Search Continues
Follow-up investigations such as NASA’s WFIRST are currently being planned to gather more information on galaxy formation. Additional projects are even undergoing right now, like the Kepler Spacecraft, which its website calls a “Search for Habitable Planets.”
Another craft, the European Space Agency’s COROT, is also in orbit searching for planets outside Earth’s solar system.
Researchers and space enthusiasts are excited about the endless possibilities this discovery could lead to.
“Up until now we used to think we had a pretty normal solar system,” Malaska told TechNewsWorld. “Then we start looking at everything and a discovery like this and think, wow, there’s a lot out there, now there’s endless possibilities about other solar systems out there.”
|
<urn:uuid:112f2fd5-f906-4eb3-ab27-1be6be865818>
|
CC-MAIN-2022-40
|
https://www.ecommercetimes.com/story/lone-wanderers-no-warmth-of-the-sun-for-some-planets-72485.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00061.warc.gz
|
en
| 0.959432 | 681 | 3.671875 | 4 |
The demand for cloud computing has driven the growth of Everything-as-a-Service, or XaaS. Easy deployment, accessibility, and cost savings paved the way for service providers to build everything from infrastructure to disaster recovery and deliver it ‘as a service.’
Who knew the XaaS phenomenon would go beyond the legitimate and legal business world.
Cybercrime-as-a-service can be technically defined as criminal applications of the ‘as-a-service’ business model for online attackers. Such practices turn out to be dangerous, especially for newcomers who can easily launch an attack without bearing much technical knowledge. CaaS offerings include malware, botnets, hacking specialists, databases of stolen personal information, penetration testing of potential targets’ networks, open-source research, and a whole lot more.
The other names for CaaS include “attack-as-a-service,” “malware-as-a-service,” and “fraud-as-a-service.”
Cybercrime-as-a-Service (CaaS) service models
Following are some of the basic types of Cybercrime-as-a-Service (CaaS) service models –
- Collecting victims’ information through legal or illegal means
- Reselling stolen personal data or email addresses
- Determining and selling zero-day vulnerabilities
- Hosting malware on secure networks
- Leveraging established botnets for distributed denial-of-service (DDoS) attacks
- Hosting cloud operations
- Leasing out sophisticated exploits and other malware
- Creating and deploying customized solutions
- Tutorials explaining how to handle and defeat advanced cybersecurity defenses
- Designing malware for niche markets
- Fully outsourcing a complete cyberattack
- Assisting with technical support for cybercriminal activities
- Adding stolen data into a robust infrastructure
- Tutorials for technical expertise needed for the attacks
- Selling ransomware used for attacks
- Tutorials on how to use various ransomware variants
- Leasing out ransomware operation infrastructure
- Providing access to command-and-control (C and C) servers
- Providing spyware and other malware for phishing attacks
- Tutorials on performing phishing attacks
- Leasing botnets to distribute phishing emails
- Selling premade phishing forms and pages
Some facts on Cybercrime-as-a-Service (CaaS)
Following are some of the essential facts business leaders should know about Cybercrime-as-a-Service. They illustrate the phenomenon and how it shapes the IT security threat landscape –
Buying and selling of Cybercrime-as-a-Service
There is a hidden layer of the internet called the Dark Web, where users operate anonymously and relatively few people venture. It is here that aspiring cybercriminals access hacking tools and services.
Its anonymity has made the Dark Web a hub of illegal activity. Cybercriminals routinely visit the Dark Web to connect with others and trade stolen credentials or other data, services, and tools that help them perform cyberattacks.
Tools, services traded in CaaS Market
Similar to the legal cloud services market with a different range of offerings, sellers in the cybercrime-as-service environment offer various services and tools.
Following lists some of the tools available in the cybercriminal world:
- Password stealing programs
- Exploit kits for lease (e.g., there are WordPress and Microsoft Office exploit kits for sale at daily, weekly, and monthly rates)
- Botnets for rent
- DDoS attacks as a service
- Account hacking programs
- Hacking-related tutorials
Cybersecurity, businesses preparing to defeat CaaS
Initially, a criminal who wanted to enter the cybercrime world needed to know how to code and possess real technical knowledge, which meant only a limited number of people were able to perform cyberattacks.
Additionally, they needed to spend money on infrastructure, such as building a botnet for spreading spam and phishing emails; that in turn required compromising large numbers of computers with malware and turning them into bots.
The growth of the CaaS market has shown that aspiring cyberattackers need not possess technical expertise or talent to gain unauthorized access to sensitive data. Moreover, the easy availability of CaaS offerings is responsible for an increase in the cybercriminal population and all but assures the market's continued growth.
Given the continuous emergence and advancement of IT security threats and the rapid growth of new forms of malware and attacks, it is challenging for organizations to keep track of them and maintain a strong cybersecurity strategy.
Next-generation firewalls, employee security training, security risk assessments, penetration testing, compliance as a service, and endpoint protection are some of the solutions and services that can help IT service providers manage the situation.
Any individual or company should be aware of cybercrime. About 60 million Americans have fallen victim to identity theft over the years. It is estimated that a breached organization can lose as much as USD 3.92 million, and almost 60% of enterprises believe they are at risk of being compromised.
Even the former Federal Bureau of Investigation (FBI) Director Robert S. Mueller III said, “There are only two types of companies—those that have been hacked and those that will be hacked.”
As a truly global threat, CaaS is a powerful and dangerous cybercrime tool. Potential victims must work collaboratively if they are to handle it successfully. The world needs better cybercrime laws that are strictly enforced, and stakeholders must commit to investigating threats together. Governments and law enforcement agencies must share intelligence and know-how, especially since cybercrime so often crosses boundaries and jurisdictions.
To read more, visit our latest whitepapers on security and related topics here.
Has the crisis communications ‘golden hour’ disappeared?
- Published: Wednesday, 14 August 2019 07:20
The rapid growth of social media, fuelled by camera-enabled smart phones, is obvious for all to see, and it has had fundamental impacts on society. But what about its impacts on crisis communications? Victoria Cross suggests that it has resulted in the disappearance of the traditional ‘golden hour’.
Fight, flight, freeze… or film?
Evolutionary science tells us that when faced with a potentially dangerous situation, our sympathetic nervous system is activated and a primitive ‘fight, flight or freeze’ response kicks in. Our body is subsequently flooded with adrenaline and we are poised to make an unconscious decision to help safeguard our survival.
While our basic human instincts remain fundamentally the same as our evolutionary ancestors, the world around us has not. We are now a long way from the sabre-toothed tiger, and even the infamous screech of the Internet dial-up tone, and find ourselves in a state of constant digital overload. We snap, share, like, retweet and comment on almost every aspect of our lives and the lives of those inside and outside of our communities.
There are of course significant benefits to having these digital tools at our fingertips: we can share our lives with friends and family, engage with organizations and brands, and access a wealth of information and resources – including which recalled products or snarled-up roads we might want to avoid! However, in the age of sharing, we’ve seen individuals taking to their smart phones to circulate images and videos of terrorist attacks and other crises online.
The attacker responsible for the deaths of fifty people at two mosques in New Zealand in March 2019 live-streamed his attack on Facebook for 17 minutes, and the video was viewed 4,000 times before it was removed. Two years earlier, when a terrorist attack saw a car driven into pedestrians on London’s Westminster Bridge and a police officer fatally stabbed, photos of casualties flooded social media.
This raises the question: has ‘film’ become a fourth F – a newly evolved basic human response to life-threatening situations? Aside from the obvious ethical implications of this phenomenon, it should also prompt us to consider the impact it could have on an organization caught up in a crisis.
In most cases a crisis escalates extremely quickly, which means a company must act fast to retain control. Traditional crisis communication refers to ‘the golden hour’, the theoretical window within which an organization has the opportunity to establish the facts and, most importantly, its response. The growth of digital and social media has dramatically reduced the golden hour to little more than a few seconds. Anyone with a smartphone is now a citizen journalist who can catapult news of a crisis, and your brand, into the public domain almost instantaneously and all-too-often inaccurately.
Organizations now need to be more prepared than ever for a crisis. Are there sufficient resources in place to be drawn upon at a moment’s notice, including pre-prepared template holding statements? Are you aware of all your stakeholders, both internal and external, and how to communicate appropriately with them during an incident? Do you have trained crisis media spokespeople on hand?
These are just some of the questions to consider as part of an organization’s wider crisis management planning, but there are many more. No matter how thorough an organization’s approach to risk mitigation, we all know the unexpected can still happen. The best thing your organization can do is be prepared to communicate in a timely, professional and empathetic manner.
Victoria Cross is Head of Instinctif Partners’ Business Resilience Practice. Instinctif offers CrisisCommsOptic, a solution to help benchmark your crisis communications readiness against industry best practices.
Records through the Ages: From Ur to Washington
Records in the Ancient World
You may think that Records Management is a concept that is relatively new to history, dating back only 200 or 300 years. The truth, however, is that the history of Records Management begins almost 6,000 years ago with the invention of the archive. In about 4,000 B.C. the first archive was created by the Sumerians. They used cuneiform writing on clay tablets to record property ownership and commercial activity.
Around a millennium later, the Egyptians expanded the uses of archives by creating and housing military records. 800 years after that, a revolt spread throughout Egypt, leading to the eventual burning down of a records office. The mob called it “the custodian of hated property rights,” marking the first time that records were noted as tools of political oppression. The first mention of record retention occurs in Mesopotamia around this time as well. Short-retention records (bookkeeping records, letters) were discarded after a certain period, while long-retention records (legal documents) were stored in more permanent housing.
Records in Classical Antiquity
While ancient Greece had several private archives for many years prior, it appears that the first public archive was created in Rome in 509 B.C. Nearly 100 years later, Athens gave public access to its archives, which also included manuscripts of plays by Sophocles and Euripides.
Alexander the Great was a fervent believer in the power of the written record. During one of his conquests in the early 300s B.C., the tent housing his chancery burnt down and all of the records within were lost. He was so dismayed by this that he ordered his staff to reconstruct everything – even going as far as to obtain copies of documents from throughout the Greek Empire. The first historical example of a catalog was also created around this time: archives in Mesopotamia (modern Iraq) used numbering systems on the sides of their clay tablets, making them more easily retrievable.
Records in Post-Classical History
Justinian I is most famous for unifying the Byzantine empire with his code of 529 A.D. The code itself was written with the assistance of archived documents and emphasized the importance of archiving in a public place of deposit. In Justinian’s Code, a transparent public archive is noted as guaranteeing integrity and authenticity.
The Venerable Bede wrote the Ecclesiastical History of the English People in 731 A.D., drawing heavily on the archived records of England. During this time the church took to unique methods of protecting its records from theft: at the end of every document or manuscript, a curse or prayer was added to ward off thieves.
Venice and Florence created their city archives roughly 200 years apart, in the 11th and 13th centuries. Towards the end of the 12th century, England began to centralize all of its government archives, and by the 13th century the Tower of London was storing England’s scrolls, even taking in all of Britain’s Chancery records. The invention of the printing press in 1440 allowed for the creation of the first chronologically organized bibliography.
Records in Modern History
Sir Thomas Bodley of England was a Records Management Rockstar. He opened the library which created the first general catalog to ever be printed in Europe. In 1620 his library made the first alphabetical author-title catalog. His library’s final contribution to the world of Records Management was in the form of the first detailed catalog guidelines.
When the Spanish conquered the Americas in the 16th century, it was considered essential to destroy the record repositories of conquered peoples such as the Inca. Cortés also instated notaries in every conquered territory, who then sent their records back to Spain.
During the French Revolution many archives were attacked or destroyed by angry mobs. They reasoned that the records were a source of their oppression, perhaps because of their inaccessibility to the public. In 1790, France created a new National Archive which was open to the public and held accountable by the Assembly. Four years later, French National Archives were given jurisdiction over the records of government agencies, provinces, communes, churches, universities and noble families. This made it the world’s first centrally controlled archive system.
During the 1800s, most countries in Europe used France as a model to develop their own centralized national archive systems. Unlike the others, however, England was still deliberating over what to do with its scattered private archives. By 1838 England had passed the Public Records Act, merging all the records from ancient courts into a single location in Central London. These centralized records allowed for the publication of historical documents such as the “Rolls Series” and “Calendars of State Papers.”
The United States was also well on its way to establishing a centralized archive with the Act of April 28, 1810. The Act removed all offices except those of the Departments of State, War, and Navy from the building and created fireproof rooms for those departments to deposit their records.
In 1877, when a fire destroyed part of the Interior Department building, President Hayes appointed a commission to investigate. The commission found troves of paper that were no longer needed and only added to the combustibility of the building. In 1888 Senator Francis Cockrell wrote the bill that brought us the Act known as “An act to authorize and provide for the disposition of useless papers in the Executive Departments.”
Records Over the Past 100 Years
By the 1930s it was known that the paper production process led to rapid, acid-based decay. American chemist William Barrow introduced the field of conservation to paper deacidification when he published an article on the acid paper problem. In the United States, a national archive was finally established in 1934 – more than 150 years after the Declaration of Independence was signed!
The River Arno in Florence, Italy flooded in 1966, damaging and destroying millions of historical documents and works of art. This great loss led to the development of restoration laboratories and new methods in records conservation. In the 1970s the United States began to store cataloged material in machine-readable format. This development started the age of digital record keeping, which continues to evolve to this day.
Every computing device you own contains some sort of storage. An iPhone or iPad contains flash memory, and a desktop or laptop computer contains either a solid state drive (SSD), which is flash memory, or a hard disk. Macs are currently sold with three types of storage devices: hard drives (only in the base 21.5″ iMac and Mac mini), SSDs, and fusion drives. And you can buy external or internal drives of three types: SSD, hybrid (fusion) drive, or hard drive.
You might be wondering: what’s the right disk for your Mac? Choosing which drive to use in a computer involves a trade-off between speed, capacity, and cost. In this guide, you will learn the differences between the drive types as well as the advantages and disadvantages of each.
Ignoring the rise and fall of the floppy disk, for a long time, hard disks were the most common storage devices. They are reliable, have large capacities, and are relatively inexpensive. Of course, they weren’t always cheap. In 1985, Apple sold a 20 MB hard disk for the astounding sum of $1,495. This disk was a lot slower than current hard disks, spinning at only 2,744 RPM.
Current hard drives generally spin at 5,400 or 7,200 RPM, though there are some that are faster. (Performance is not just about speed, there are other features that can make a drive read or write data faster.) From the limited 20 MB storage devices sold in the 1980s, we have gone to the relatively common capacity of 4 TB (even 8 TB for hard drives). Disk manufacturers have released drives that are 10 and 12 TB, and we should even see a 16 TB hard drive later this year.
In terms of cost for storage, hard drives are the cheapest. As a disadvantage, however, they have moving parts, which means they are susceptible to failure if something goes wrong or if you drop a laptop containing a hard drive. They are also heavier and they make noise. This latter point may not bother most people, but I prefer not to hear anything spinning in my Macs.
Solid State Drives (SSDs)
Solid state drives, or SSDs, use flash memory to store data. When they’re built into a computer, in appearance they’re just a few chips on a circuit board. (You can also buy them in 2.5″ format to install in a laptop, or in an external enclosure.)
SSDs are compact, quiet and very fast, especially when you start up a computer or wake the computer (hard disks may go to sleep when not used for a certain time, and take a few seconds to spin up). SSDs also use less power, run cooler, are lighter, and have no moving parts, which makes them ideal for laptops.
If you drop a laptop when its hard drive is spinning, the drive can be damaged, and you can lose data. SSDs tend to be more reliable overall, and if they fail, you can still read data (unless the actual memory chips are damaged), whereas you may not be able to do this with a hard disk.
However, SSDs are much more expensive when you look at the cost to storage ratio. Currently, you can buy an 8 TB external drive for less than $150, whereas that amount of money will only buy you a 500 GB SSD.
There is another kind of drive that combines the two technologies: the hybrid drive, or what Apple calls the fusion drive.
Hybrid drives combine a standard hard drive with an SSD element, usually from 6 to 128 GB. (Apple’s fusion drive has a 24 GB SSD in the 1 TB model, but the 2 TB and 3 TB drives have 128 GB SSD.) The drive copies the most frequently used files to the flash storage, so they can be accessed more quickly. This generally includes the operating system, apps you use often, and files you access regularly. The first time you boot and launch apps, files are read from the hard disk, and then moved to the SSD part of the drive; subsequently, accessing those files is much faster.
These drives offer a compromise between speed and storage, still being a bit slower than SSDs, but at a much nicer price point: You can buy a 2 TB hybrid drive with 6 GB SSD for less than $100. However, hybrid drives have all the disadvantages of hard drives, and only some of the advantages of SSDs.
Choosing the Best Hard Disk Drive
The speed and reliability of SSDs make them the ideal solution for today’s computers. Most people do not want a Mac without an SSD, because an SSD enables it to boot much faster, apps launch faster, and files copy more quickly. However, if you need much more storage than what an SSD can offer, perhaps because of a large media collection, you have two options: a hybrid drive, or an external hard drive.
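If you are not sure what kind of drive is inside the Mac you already own, you can check from the Terminal before deciding whether to upgrade. The sketch below is a minimal Python example that shells out to macOS’s built-in diskutil tool; the exact field labels in its output (such as “Solid State” and “Device / Media Name”) are assumptions based on recent macOS releases and may differ on older systems.

```python
import subprocess

def describe_boot_volume():
    """Print the boot volume's media name and whether macOS reports it as solid state.

    Relies on the `diskutil info /` command bundled with macOS; treat the field
    names being parsed here as illustrative rather than guaranteed.
    """
    result = subprocess.run(
        ["diskutil", "info", "/"], capture_output=True, text=True, check=True
    )
    for line in result.stdout.splitlines():
        stripped = line.strip()
        if stripped.startswith(("Solid State", "Device / Media Name", "Disk Size")):
            print(stripped)

if __name__ == "__main__":
    describe_boot_volume()
```

A fusion drive will typically show up as a logical volume spanning two physical devices, so running diskutil list as well can help if the output above is ambiguous.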
On a new iMac that comes by default with a 2 TB fusion drive, you can pay an additional $200 to opt for a 512 GB SSD, or $600 more for a 1 TB SSD. (Going to 2 TB SSD is a whopping $1,400 extra.) And Apple’s laptops are only offered with SSDs.
If you consider that the cost of an external hard drive is around $100 for 2 TB, then picking the internal SSD and external hard drive will run you about $300 more than the fusion drive—but your Mac will run much faster.
I have found that the combination of an internal SSD and an external hard drive is the best compromise, since I don’t want a fusion drive (because of the noise and moving parts). If you do choose the fusion drive, you should avoid the 1 TB model, because of its smaller SSD element; it only has 24 GB compared to the 128 GB on the 2 TB and 3 TB models. The larger SSD element means more files can be cached, and your Mac will run a lot faster. Though if you only have simple needs—if you only use a few apps, and don’t work with big files—it won’t make a difference, and upgrading to the 2 TB fusion drive is an extra $200.
The Future of Disks
The future is flash. We’ll eventually see affordable SSDs at multi-terabyte sizes, and the choice will be simple: You’ll only buy a hard disk if you really need a lot of storage.
Prices are also dropping: The first Mac with an SSD was the 2008 MacBook Air, which boasted a 64 GB SSD that cost an extra $1,300. In early 2021, upgrading a 256 GB SSD to 1 TB only costs an additional $400.
In truth, however, there is no reason to buy a huge SSD, unless you work with very large files (such as video), or you need to carry lots of files with you on the road and cannot use a portable hard drive. Most of the files on a big SSD will just sit around, never being accessed, yet costing a lot.
For now, most Mac users can get by with 256 or 512 GB SSDs, and if you have lots of files, an external drive—or even the cloud—will save you money.
How can I learn more?
Each week on the Intego Mac Podcast, Intego’s Mac security experts discuss the latest Apple news, security and privacy stories, and offer practical advice on getting the most out of your Apple devices. Be sure to follow the podcast to make sure you don’t miss any episodes.
You can also subscribe to our e-mail newsletter and keep an eye here on Mac Security Blog for the latest Apple security and privacy news. And don’t forget to follow Intego on your favorite social media channels: Facebook, Instagram, Twitter, and YouTube.
Failover is a backup operational mode that automatically switches to a standby database, server or network if the primary system fails, or is shut down for servicing. Failover is an extremely important function for critical systems that require always-on accessibility. Failover functionality seamlessly redirects requests from the failed or downed system to the backup system that mimics the operating system environment.
System designers create failover capability in servers, backend database support, or networks with a need for constant availability and exceptional reliability. Failover can:
- Protect your database during maintenance or system failure. For example, if the main server onsite suffers a hardware failure, the backup server (onsite or in the cloud), can immediately take over hosting responsibilities without manual input.
- Allow maintenance jobs to run automatically without the need for supervision. An automated switchover during scheduled software updates allows for immediate and seamless protection against cyber security risks.
- Be completely customized to suit your hardware and network configurations. While maintaining a database, an administrator can have not only an A, B system of two servers running in tandem to protect each other against failure, but also can use a cloud server as well to allow for full on site troubleshooting repair and updating, all without connectivity issues.
Failover can apply to any aspect of a system:
- On a personal computer or mobile device, a hardware or software trigger can protect the device when a component, such as a processor or even a battery cell fails.
- Within a network, failover can apply to any individual network component, even a system of components, such as a connection path, storage device, or Web server.
- With a hosted database or web application, failover is what allows multiple local or cloud based servers to maintain a constant and secure connection with little or no interruption of service.
Failover as a service is functionally similar to switchover, the difference being that failover can occur automatically and without warning, while switchover necessitates human intervention in order to start. Switchover often occurs when an administrator wants to apply hardware or software updates, bug fixes, or feature testing, to either the main or backup system without terminating connectivity for the user.
At the server level, failover automation often incorporates a heartbeat system. This system, in basic terms, connects two servers either physically through a cable or over a wireless network. As long as the pulse between the two servers continues uninterrupted, the second server will not go online.
Often, depending on the complexity of the hosting, a system might even have a third server that is running the basic components required to prevent any downtime during switching. The heartbeat communication exists between the two servers as a way of keeping the second server ready to switch over if needed. Multiple paths, redundant components, and offsite or cloud-based support all help to assure a secure and always connected pathway.
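As a rough illustration of the heartbeat idea described above, the sketch below shows a standby node that repeatedly checks whether the primary is reachable and promotes itself only after several consecutive missed pulses. It is a minimal, hypothetical example: the address, port, and thresholds are invented, and a production failover system would also need fencing, state synchronization, and client redirection.

```python
import socket
import time

PRIMARY_ADDR = ("10.0.0.1", 9500)      # hypothetical heartbeat endpoint on the primary
INTERVAL_SECONDS = 2                   # how often the standby expects a pulse
MISSED_BEATS_BEFORE_FAILOVER = 3       # tolerate brief network hiccups before promoting

def promote_to_primary():
    # Placeholder: in a real system this would mount volumes, take over a
    # virtual IP, or tell a load balancer to redirect traffic to this node.
    print("Primary unreachable - standby is taking over.")

def standby_monitor():
    """Run on the standby node: watch for heartbeats and fail over when they stop."""
    missed = 0
    while True:
        try:
            # A successful TCP connection counts as a heartbeat from the primary.
            with socket.create_connection(PRIMARY_ADDR, timeout=INTERVAL_SECONDS):
                missed = 0
        except OSError:
            missed += 1
            if missed >= MISSED_BEATS_BEFORE_FAILOVER:
                promote_to_primary()
                return
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    standby_monitor()
```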
A systems administrator will sometimes build an automated notification signaling to users that a switch has taken place. Alternatively, some systems will notify the working technician of a need for switchover, who can then manually initiate the switch to the secondary server. This is called an automated with manual approval configuration.
The increased prevalence of virtualization software has reduced failover’s reliance on physical hardware. This has been made possible by migration, in which an active virtual machine is transferred from one physical host to another, allowing for a completely smooth continuation of service.
Failover and its systems give customers comfort in knowing that they will be able to rely on a secure and protected connection, without unforeseen interruptions. Failover integration may seem like an unnecessary financial burden, but it is in fact an important insurance policy that provides safety and security.
Failover’s main purpose is to prevent, or at the very least reduce the impact of, complete system failure. Fallover, the term used to describe the customer impact of a systems failure, is an important measure of a business’s reliability as a service. Failover is an integral part of any business’s disaster recovery plan. If the network infrastructure is configured correctly, then failover and failback will be a seamless and total safeguard against most if not all service disruption. Any hiccups of real measure are usually caused by the size of the data changeover occurring, the available bandwidth, and how the data is being transferred, mirrored or replicated to the second location.
For a systems engineer, the focus should be on minimizing data transfer while maximizing the quality of synchronization between the two sites. After securing data transfer quality, the next concern is how to trigger failover while reducing the change-over time.
How Barracuda Can Help
To ensure unbeatable, cost-efficient connectivity, Barracuda CloudGen Firewalls provide a wide range of built-in uplink options including unlimited leased lines, up to twelve DHCP uplinks, and up to four xDSL uplinks. By eliminating the need to purchase additional devices for link balancing, security-conscious customers have access to a WAN connection that never goes down, even if one or two of the existing WAN uplinks are severed.
Barracuda Web Application Firewalls can be clustered in active / passive or active / active pairs with failover to ensure instant recovery. Security configurations and deployments are automatically synchronized between the clusters, providing instant recovery from any outages.
To protect your backup systems from failures, all Barracuda Backup products include Energize Updates that automatically apply software updates to improve performance and keep you protected from the latest threats. Updates are sent as frequently as needed to protect you from zero-day security threats.
Do you have more questions about Failover? Contact us now.
Business may be booming for the cybercrime underworld at large, but that doesn’t mean that any old scheme will be profitable. Botnets, armies of thousands of bots, give criminals the scale they need.
A botnet is any large network of web-based malicious applications or “bots.” Some botnets operate out of data centers, while others are made up of real internet users’ devices infected by malware. Some send millions of spam emails; some take down websites and hold them for ransom; some perform account takeover and fake account creation; some steal from programmatic advertisers through ad fraud.
While botnets themselves vary widely, they have remained a favored tool of the most sophisticated cybercriminals. Here are some of the botnets that have come to define cybercrime:
EarthLink Spammer - 2000
Any good history starts at the beginning. The first botnet to gain public notoriety was a spammer built by Khan K. Smith in 2000. The botnet sent 1.25 million emails – phishing scams masked as communications from legitimate websites – in a little over a year. Smith hoped to collect sensitive information like credit card numbers, or to download viruses onto victims’ computers that would remotely feed him information. Eventually, Smith was sued for $25 million by EarthLink for using its network for his spam scheme, which had earned him at least $3 million.
Storm - 2007
Storm was one of the first known peer-to-peer botnets — that is, it was among the first to be controlled by several different servers. The network was tremendous, ranging from 250,000 to 1 million infected computers, and could be rented out to any criminal willing to pay for it on the dark web. Because of this, Storm was involved in a wide range of criminal activities, from DDoS attacks to identity theft. Some of Storm’s servers were shut down in 2008, and today the botnet is thought to be more or less inactive.
Cutwail - 2007
In 2009, the spam botnet Cutwail was sending 51 million emails every minute, contributing up to 46.5% of the entire world’s spam volume at the time. Since Cutwail comprises around 1.5 million infected machines, attempts to shut it down have been frustratingly ineffective. Even after an attempted takedown by the FBI, Europol, and other law enforcement agencies in 2014, the botnet remains active and available for rent today.
Grum - 2008
Grum was a spam botnet specializing in pharmaceutical spam, but had massive scale. In 2009 it was capable of sending 39.9 billion messages per day, or 18% of the world’s spam. Law enforcement discovered Grum command and control centers in locations around the world, from the Netherlands to Panama, successfully shutting the operation down in 2012.
Kraken - 2008
It’s hard to know exactly how big the Kraken botnet was, but its massive reach is undeniable. It’s been estimated that Kraken infected 10% of all Fortune 500 companies, and that each of its 495,000 bots could send as many as 600,000 emails per day. The botnet was one of the first observed to use evasion techniques that allowed it to avoid being detected by anti-malware software, even when auto-updated. While Kraken is inactive today, its remnants have been spotted by security systems in the past and may well resurface again one day.
Mariposa - 2008
Mariposa was a botnet of Spanish origin, capable of stealing millions of dollars from unsuspecting users by taking their credit card numbers and passwords to their accounts on financial services sites. It used malvertising – the use of digital ads to spread malware – to take over a whopping ten million machines, making it the second largest botnet discovered to date. However, Spanish law enforcement was able to bring down the operation in one fell swoop when they discovered a record of everyone who paid to rent the network.
Methbot - 2016
Methbot fraudulently acquired hundreds of thousands of IP addresses from two global internet registries and associated them with US-based ISPs. Methbot’s operators created more than 6,000 domains and 250,267 distinct URLs that appeared to come from premium publishers, got advertisers to bid on them, then sent their bots to "watch" as many as 300 million video ads every day. Methbot was discovered and uprooted by White Ops in 2015, but we’re always looking out for signs of it resurfacing.
Mirai - 2016
The Mirai botnet was behind a massive distributed denial of service (DDoS) attack that left much of the internet inaccessible on the U.S. east coast. But what made Mirai most notable was that it was the first major botnet to infect insecure IoT devices. At its peak, the worm infected over 600,000 devices. Most surprising of all: the botnet was created by a group of college kids looking to gain an edge in Minecraft.
3ve - 2018
The need for gender diversity in technology is undisputed, and according to reports, there are still very few women leaders in technology. In a recent World Economic Forum article, “Why do so many women leave engineering?”, a study found that in group situations, especially during internships and summer jobs, female engineering students were often given less challenging problems and were relegated to doing routine ‘managerial and secretarial’ tasks instead of the ‘real’ engineering work. Without opportunities and adequate support, it’s no surprise that women may choose a different career path.
As business leaders, it is our responsibility to recognise the benefits of creating a more gender inclusive environment. It goes beyond our corporate responsibility as employers. It is an absolute business imperative. As an article presenting a 2014 Gallup survey on the subject put it, gender equality is vital “not just because it’s a laudable goal” but because “it simply makes bottom-line business sense.”
While there are numerous organisations and events that have an impact, more needs to be done to attract, hire, retain and promote women in the tech industry.
What’s holding women back?
A recent article in the Harvard Business Review by researchers from Stanford’s Clayman Institute for Gender Research highlights that a lack of visibility is what holds women back from reaching the highest echelons of technology careers.
Moreover, a Lean In / McKinsey study has shown that women are 30 per cent less likely to get regular feedback and input on their performance, which becomes an obstruction to improving their performance over time. The same study found that for every 100 women, 130 men are promoted from entry-level roles to a managerial role. This shows that women are behind from the very beginning of their careers.
The McKinsey research notes that even employers that believe in, and support, women in the workplace may have employed management practices that curb opportunities for women. According to the analysts, this is unfortunate because women are often more likely to be intuitive, show empathy and collaborate across all executive board areas – all of which are fundamental attributes for many strategic roles from the lowest levels. This setback occurs once women are already settled in the workplace; however, the route to employment is also riddled with setbacks. If we are to effect change in the long term then there must be a concerted effort to encourage young women into traditionally male-dominated STEM subjects.
Teach them young
Teaching children technical skills is no longer a choice, but a necessity. Including girls in STEM education, from an early age, is crucial to ensure a steady growth of women in ICT.
The private sector can assist here by connecting the dots between secondary education and careers in technology. Particularly as organisations work to develop business technology pathways directly to STEM degrees and even STEM-based careers.
The reality is that in today’s digital economy most professions will be based, and even dependent on, technology. So, while it’s not a given that every student will want to pursue a career in technology, many will have careers that utilise tech and this is what primary and secondary education supports. Supporting these careers is a win-win for both business and wider society.
Walking the walk
Although we still have a long way to go, leaders are beginning to understand that there is power in reflecting the community in which they operate, and that diverse points of view are differentiating. Not to mention, conducive to the bottom line.
In technology alone, women leave positions at a rate two times that of men. Yet women drive 70 – 80 per cent of consumer purchasing through a combination of buying power and influence. They also bring specific skills to gender-diverse teams in the wake of digitisation.
Ultimately, businesses must make a true commitment to providing greater opportunities for women and this should start with recruitment. HR professionals have the important job of finding and keeping qualified candidates with a keen eye on diversity.
But it doesn’t end there: after onboarding female talent, employers should make every effort to create a thriving women’s network. This needs to prioritise diversity and inclusion, with the aim of helping women advance their careers by building strong relationships, sharing professional insights, developing skills and seizing career-advancing opportunities. A rich mix of gender perspectives helps to drive innovation and enables companies to better serve customers, and this approach should be earmarked with goals.
For example, SAP committed to having 25 per cent women in leadership by the end of 2017, and has created numerous initiatives to reach that goal. Its award-winning Leadership Excellence Acceleration Program (LEAP) for women is one of the most innovative leadership development programmes in the industry. The Women’s Professional Growth Webcast Series reaches thousands of colleagues and customers (women and men) each year, and its male advocacy programme enables genders to collaborate more effectively. This is just one example of how we are working to minimise the ‘man made’ gender gap.
Can technology solve the industry-wide problem?
We have access to technology today that lets us know, for instance, when a team’s gender balance is out of synch, or when too many people with a certain skill or experience level are leaving faster than others. Still, the reality is that by the time your software or your HR partner reports back to the head of business that the balance is out of synch, it will be too late.
But what if companies were instead focused on applying machine learning, intelligent services, artificial intelligence – built into the software – to enable you to identify the origins of bias before you get to that ‘out of synch’ place?
Case in point: to track our goal of 25 per cent of leadership positions filled by women, a dashboard was created to provide continuous status of the women-in-management KPI, as well as the gender split by career level, in career movement (such as promotions), progressions and hires, and in terminations. SAP HR professionals can now easily track progress, see which geographies or businesses need action, and proactively take action in processes like recruitment, mentoring and succession planning with the relevant business management representatives.
Fundamentally, we can continue to talk about the lack of women in tech, or we can do more to encourage them to become and remain an intrinsic part of the technology mainstream. We need to do this by tapping into their young aspirations, nurturing their path of study and supporting their leadership goals. Only then can we take steps towards equalising the gender imbalance and give women the opportunities in tech that they deserve.
Miguel Castro, Lead for Culture and Identity, SAP Global Diversity and Inclusion Office
Ransomware as a Service (RaaS) is a way for cybercriminals to make money from ransomware while minimizing their own efforts.
It works by selling off-the-shelf malware to willing customers without them coding it themselves.
A RaaS provider will typically provide its clients with an easy-to-use interface inside which the customer can customize their own malware with a few clicks. A RaaS customer can be anyone willing to pay for the service, and they only need to have some basic computer knowledge.
Customers can also modify the ransomware’s behavior, such as how much money they want it to demand, and whether they want it to behave like a worm. Since all of this is done without any coding or technical expertise, RaaS customers can remain anonymous and their location hidden from third parties.
The only thing someone needs in order to purchase this malware is a computer with an Internet connection and some cash in their account.
How RaaS attacks work
- A criminal buys a copy of the RaaS software from an online black market. The software is purchased as a service, which means that it can be used multiple times to attack many computers. This way, the criminal pays for the malware once and then uses it again and again without having to pay again.
- Next, the attacker configures the software by choosing his desired language and setting up options such as how much money he wants to demand from victims.
- Finally, after all this is done, the attacker receives a unique cryptographic key associated with his copy of the software that will be used to encrypt files on victims’ computers. The criminal can now launch attacks on targets, knowing that files with the targeted extensions on those victims’ computers will be encrypted with his key.
Criminals can also use numerous RaaS providers to increase their revenue and launch mass attacks that would be difficult for a single individual to do on their own due to the amount of effort required.
Why is Ransomware as a Service so dangerous?
Ransomware as a Service (RaaS) has become very popular among cybercriminals in just a few years since its first instance, Cryptolocker, was discovered in 2013. According to Kaspersky, 3 out of 4 new ransomware families are now being distributed through RaaS channels.
There has been an increasing number of cases where computer systems have been attacked by large numbers of victims all at once, which is highly alarming behavior that indicates the involvement of professional cybercriminals. RaaS makes it very easy for anyone to become involved in ransomware attacks, and that means we may see more dangerous attacks in 2021 than before.
How to prevent ransomware as a service (RaaS) attacks
Because ransomware attacks rely on users falling victim to social engineering tactics and opening infected email attachments or clicking malicious ads on websites, you can significantly reduce your risk by following some basic rules:
- Don’t open suspicious emails or attachments from unknown senders
- Keep all software up-to-date
- Don’t visit questionable websites or click dubious links
- Use reputable web protection tools to stop online threats in real-time
Ransomware as a service threats
Ransomware as a Service (RaaS) attacks are one of the hottest trends in the cybercrime ecosystem currently, and they will most likely increase in popularity as time goes by. Cybercriminals are also constantly developing new ways to use RaaS platforms so that their ransomware becomes more lucrative.
The most dangerous threat we see now is the targeting of IoT devices with RaaS software. By compromising shared servers and using them as a botnet, attackers can harness millions of compromised machines at once with just one click, and launch devastating attacks on a target country or even on an entire continent.
Examples of RaaS
Locky is a type of malware that was released in 2016. Locky reaches people through emails carrying fake invoices, each with an attached Microsoft Word document containing malicious macros. When the user opens this document, it appears to be full of gibberish apart from the phrase “Enable macro if data encoding is incorrect,” a social engineering trick used by hackers to get unsuspecting users to run something they don’t want on their computer. If the victim enables the macros, they save and execute a binary file that downloads the actual encryption Trojan, which then encrypts all files matching certain extensions. This is the point at which a victim’s files become worthless to them.
The Jokeroo RaaS is a newly discovered ransomware offering that has been spreading like wildfire through underground hacking forums and via Twitter. The story was first reported by a malware researcher, Damian, who found out about it on Exploit.in, an online forum where hackers meet to share their knowledge and help each other with new hacks or attacks they are working on. The ransomware has been using the same email spamming techniques as Locky, but is more sophisticated in its approach to tricking users.
LockBit ransomware is malicious software designed to block user access to computer systems in exchange for a ransom payment. It will automatically vet for valuable targets and spread the infection, encrypting all accessible devices on a network. This self-piloted cyberattack has made its mark by threatening organizations globally.
RaaS Revenue model
Ransomware as a Service is the newest and most dangerous revenue model in cybercrime. The possibility to hold an individual or organization’s data hostage for an increased profit means that ransomware threats will continue to grow in size, strength, and duration.
Cybercriminals rely on RaaS because it saves their time and money when it comes to implementing an attack. That also helps them avoid detection by law enforcement agencies who have been after ransomware attacks since they started spreading so quickly throughout the world. With RaaS, professional hackers can just buy everything they need from vendors offering plug-and-play hacking solutions at affordable prices instead of spending hours or days writing custom malware code that could be detected by antivirus programs.
Frequently asked questions about ransomware as a service
How can you protect your business from RaaS?
Since RaaS is about renting ready-made malware for the most convenient price possible, your business should focus on cybersecurity solutions that can protect it in real-time.
The best way to do this is by having an updated and tested antivirus solution that will detect ransomware attacks before they begin. Next, scan all incoming emails with an advanced mail gateway system, so you can protect yourself from phishing attempts to deliver malware as well. Finally, place web filters between employees and the Internet so that any compromised sites cannot be accessed and used for spreading malicious software.
Why would people use RaaS to commit cybercrime, and what are the implications for victims of these crimes?
Cybercrime involving RaaS relies on a business model that offers criminals a decentralized, automated method for spreading ransomware.
This means that attacks are more frequent and have become increasingly sophisticated, resulting in criminals making more money than ever before. Ransomware targets organizations of all sizes, with recent media reports claiming that even hospitals and police stations have fallen victim to such attacks. Because the ransom is typically higher when victims are institutions, cybercriminals may be moving towards personalized attacks against valuable targets as opposed to random users who may not pay up.
What types of businesses / industries need to be aware of this threat?
Any organization can fall victim to this type of attack, whether it stores sensitive corporate data online or operates purely physical storefronts. Ransomware may have the potential to lock down an entire database, system, or server, but it is also effective at targeting individual systems and files. With RaaS, it doesn’t matter if you’re a Fortune 500 business with a multibillion-dollar revenue stream or just an individual using your home computer for personal tasks – everyone can be targeted.
What mitigation strategies are available to help prevent victims from paying ransoms?
No one wants to pay any amount of money to cybercriminals as this will only encourage them to continue their criminal activities. In addition, there’s no guarantee that paying the ransom will result in getting access back into your own data since there’s no way of knowing whether criminals really have your data or files.
In the event that an attack is detected, there can be serious implications for victims who decide to pay. One of the main reasons why cybercriminals are using RaaS is because it’s so difficult to track them in this decentralized model. This means their money-laundering operations will go completely undetected if they continue receiving payments from several different victims.
What other repercussions do businesses face by paying ransoms?
Paying a ransom could make you a target for further attacks since criminals might assume that you have deep pockets and would be easy to extort again in the future. In addition, paying ransoms doesn’t guarantee access back into your systems as those behind these types of attacks have no real incentive to release data once they get paid – and no fear of getting caught or punished.
Since RaaS is a pay-as-you-go business model, it can generate more profits for those behind this type of crime than other models that involve larger upfront costs. There are several factors to consider when making decisions about paying ransoms including the overall cost of the attack itself versus what you might lose in potential losses from not being able to access your systems in real-time.
How do RaaS operations get started?
Criminals can subscribe to a RaaS service that involves all the steps needed to launch a ransomware attack, including delivery mechanisms and payment methods. Cybercriminals have their pick of the litter in choosing from different services depending on how they want to target victims. Some criminals may prefer to use websites for malware distribution because this is most likely what those behind these types of attacks know best – whereas others will prefer more sophisticated methods like email since it’s already been proven as effective at compromising organizations through phishing and spear-phishing campaigns.
How should businesses that fall victim report these incidents?
In order for law enforcement agencies to take action against cybercriminals who are using RaaS services, there needs to be solid intelligence with regards to the identities of those behind these operations. Identity is key since law enforcement agencies will need to have enough information so they can take action against individuals – rather than entire networks of command-and-control servers and other infrastructure involved with delivering attacks.
What are some of the most effective ways businesses can protect themselves from Ransomware as a Service?
The best way to avoid falling victim to RaaS is by being proactive, which includes using advanced threat detection tools that alert security teams about suspicious activity in real-time. Security analysts should also be trained on recognizing activities that might seem innocent but could lead to a compromise in order to prevent future incidents from happening. Another important consideration for businesses involves training users on cyber threats including how they should act upon receiving an alert (even though it might seem like a false alarm) and how to take other proactive steps such as making copies of data regularly in order to mitigate the effects of future attacks.
What kind of revenue model is associated with RaaS?
Ransomware as a Service (RaaS) is an up-and-coming model that has motivated criminals to launch new attacks using ransomware because they can set their own pricing and get paid for every single attack. This business model has been in operation since 2013, but it’s only now beginning to gain significant traction among cybercriminals since the financial gains are so high with minimal costs. Companies who fall victim will likely pay several times more than what it would cost for those behind these types of operations to develop malware from scratch.
Unlike other models that involve higher development costs, RaaS requires a payment structure based on how many victims are attacked instead of capital expenditures or operating expenses on infrastructures such as servers and command-and-control infrastructure.
How do businesses normally get infected with Ransomware As A Service?
One of the most common ways cybercriminals deliver malware is through websites that have been compromised and serve as distribution sites hosting malicious payloads such as ransomware, spyware, adware, etc. Another method involves phishing emails that contain links leading to binaries hosted on services like Dropbox or Google Drive, which may also contain ransomware.
There are other targeted methods involving social engineering, where victims are convinced into giving up their credentials so that attackers can gain access to systems. This might involve using fake job postings, or accounts designed to look like legitimate users within an organization – such as a CEO – who send out messages telling employees they will help them fix an issue that was detected on their device.
How can governments help to prevent or minimize the impact of ransomware attacks on their citizens?
Governments should invest in cybersecurity at a national level to ensure businesses have the resources they need to prevent attacks from taking place. In addition, individual governments need to engage with other countries that are linked to cybercriminal activities so that proper actions can be taken against them. Building relationships with these countries will also pave the way for more effective international coordination and cooperation between law enforcement agencies when it comes to fighting back against RaaS activities by sharing intelligence about their identities and locations.
How does Ransomware As A Service impact individuals?
Individuals who become victims of Ransomware as a Service (RaaS) could lose access to all of their personal files including images, videos, documents, bank statements, or even keys to their cars, homes, or offices. For organizations, this could mean a complete shutdown of business operations until they are able to restore critical systems and files from backups, which can be lengthy and costly if it’s not properly planned for in advance.
In our previous post you can read about servers: what a server is and how it makes its resources available to client computers. Now we will look at the other part of the client-server architecture: the host in the role of client.
Simply put, a host is a machine with a unique address that is active on the network; it can operate either as a client or as a server. Its main function is to distribute information and related services through one or more addresses. A client, in turn, is a computer application used to access the services provided by a server, which is also a computer system. The word “client” was originally used for dumb terminals, which could not run their own programs but could interact with remote computers over the network.
The client/server architecture is still in use, but it now operates at a more advanced level than before. Client and server programs can run on the same computer system using inter-process communication techniques such as shared memory, and online chat can involve multiple clients. Moreover, the steady migration of large client programs to websites is turning the browser into a universal client. The main purpose of this shift is to avoid the hassle of downloading large software onto a computer before an application can be used on it.
Connecting to a server:
A client uses internet sockets to connect to a server, which offers its service over a (usually remote) network using the internet protocol suite. The connection process involves setting up a listening socket on the server side and initiating the connection from the client side; the server then needs to accept the initiated connection. For a concrete example, think of a web browser as a client that connects to a web server in order to retrieve requested web pages.
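To make that connection process concrete, here is a minimal Python sketch of both sides. The host, port, and message format are invented for illustration; run the server function in one process and the client in another.

```python
import socket

HOST, PORT = "127.0.0.1", 8080   # hypothetical address of the server host

def run_server():
    """Server side: set up a listening socket and accept one incoming connection."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()                      # the listening socket is now set up
        conn, addr = srv.accept()         # accept the connection the client initiated
        with conn:
            request = conn.recv(1024)     # e.g. b"GET /page"
            conn.sendall(b"Server response for: " + request)

def run_client():
    """Client side: initiate the connection and request a resource."""
    with socket.create_connection((HOST, PORT)) as cli:
        cli.sendall(b"GET /page")
        print(cli.recv(1024).decode())
```

A web browser and a web server go through essentially the same handshake, just with HTTP as the message format and port 80 or 443 instead of the made-up port above.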
Client types vary, and the main ones can be described as follows:
Fat clients (full computer systems) offer both local storage and local processing facilities. A fat, or ‘rich’, client is a PC or laptop that can operate largely independently of the server. Programming environments such as Delphi, Java and the .NET Framework are used in the development of rich clients.
As the name suggests, a hybrid client combines features of the fat and thin client models: it can process data locally, in the same way a fat client does, but it depends on the server for persistent data storage.
The NSA has created a tool for transcribing phone calls en masse and converting them into searchable text, according to documents released by the whistleblower Edward Snowden.
Called "Google for Voice", the nine-year-old programme enabled spies to extensively search conversations using keywords, and included an algorithm for flagging particular records.
Dan Froomkin, a journalist at the Intercept, released the latest files, which claimed the tool was used in war zones such as Iraq and Afghanistan but may have been employed more widely.
"Spying on international telephone calls has always been a staple of NSA surveillance, but the requirement that an actual person do the listening meant it was effectively limited to a tiny percentage of the total traffic," he wrote on the journal’s website.
"By leveraging advances in automated speech recognition, the NSA has entered the era of bulk listening."
A document released by the Intercept showed that the British spy agency GCHQ had been investigating speech-to-text tools since at least 2001, when IBM informed them that its speech recognition technology was not yet ready for use in the field.
One obstacle in the British use of the technology was that early systems were mostly tested on American accents, which prompted GCHQ to set up its own scheme to test it on British ones, including 56 hours of intercepted calls from Northern Ireland.
However the document also claims that the automatic transcription systems used in 2009 had word error rates of 30-40%, and such programmes required significant investments in training in order to be exploited.
Because of these costs and the limits of the systems, the chair of Speech Technology Working Group within GCHQ recommended that British spying agencies collaborate to maximise their potential.
However the chair also said: "[Speech-to-text technology] has still to prove itself in large-scale applications, but the potential for major benefits in productivity in the future is clear, given sufficient investment in further developing the systems for our target speech."
|
<urn:uuid:a27af392-b7f1-4493-a991-401e8c0e0cd8>
|
CC-MAIN-2022-40
|
https://techmonitor.ai/technology/cybersecurity/nsa-created-google-for-voice-to-better-snoop-on-phone-calls-4570121
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00261.warc.gz
|
en
| 0.964293 | 395 | 2.640625 | 3 |
With each passing generation of GPU accelerator engines from Nvidia, machine learning drives more and more of the architectural choices and changes and traditional HPC simulation and modeling drives less and less. At least directly. But indirectly, as HPC is increasingly adopting AI techniques, having neural networks learn from real-world or simulated data rather than run a massive calculation predicting the behavior of something, the difference between HPC and AI may be moot in the next decade or so.
This is, in a nutshell, the bet that Nvidia is making as it focuses its GPU compute engines on neural network transformer models and expands its DGX systems to being able to support trillions of parameters in a machine learning training run.
And this bet, we think, is a good thing, since in the long run, if Nvidia is right, more parts of the simulations performed in the HPC centers of the world will be inferred rather than numerically calculated. While dense linear algebra calculations will still be important – notably with simulations providing datasets for physical phenomena that cannot be directly viewed and therefore have no real-world data – the inside of a star or the inside of an internal combustion engine are two good examples – the ratio between single-precision FP32 and double-precision FP64 math and other kinds of math on the GPU is going to continue to shift down to lower precisions.
This has certainly happened with the new 8-bit FP8 floating point format in the new fourth generation Tensor Core that is at the heart of the new “Hopper” GH100 GPU chip from Nvidia. The lower precision data formats in the vector and matrix math units in CPUs and GPUs, including 4-bit and 8-bit integer formats (INT4 and INT8 in the lingo), have not been useful for AI training, but only for AI inference. But with the FP8 format, for many models, a mix of a lot of FP8 and some FP16 with a smattering of FP32 and a taste of FP64 is now sufficient to do training, and FP8 can be used for inference as well without having to do the tedious data conversion to INT4 or INT8 formats for new data to be run against the neural network so it could be identified by the model or converted to another type of data – speech to text, text to speech, video to speech, or speech to video, for example.
It is not hard to imagine a day when Nvidia might be able to create a GPU compute engine that only has floating point matrix math and supports all levels of mixed precision, perhaps all the way down to FP4 and of course all the way up to FP64. But like other compute engine makers, Nvidia has to keep backwards compatibility for software written for its older devices, and that is why we see a mix of 32-bit and 64-bit vector engines (which have the integer support as well as floating point support) and the Tensor Core matrix math engines. We have been cautioned before that there are plenty of calculations that cannot be done efficiently in a matrix unit and vectors will still be necessary. (You will have to pardon our enthusiasm for wanting someone to create the most efficient math engine with no dark silicon.)
The good news is that the streaming multiprocessors, or SMs, in the new “Hopper” GPU have the ability to do math on lots of both vector and matrix data.
SMs are roughly analogous to the cores in a CPU, and in fact, when you look at the core count on a hybrid supercomputer on the Top500 list, that core count is the combination of the number of cores on the CPUs and SMs on the GPUs in that system. SMs have a lot more arithmetic units than CPUs and have schedulers that are explicitly designed to hide latencies across tens of thousands of threads that are comprised of the fairly modest cores that, collectively, provide an order of magnitude or more of performance than CPUs that run at roughly twice the speed. Slower and wider is better for certain kinds of calculations than fast and skinny – at least when you are constrained by chip size, electricity consumption, and heat dissipation and you need to scale up to petascale and now exascale processing.
Here is what the new Hopper SM looks like:
The SM is organized into quadrants, each of which has 16 INT32 units, which deliver mixed precision INT32 and INT8 processing; 32 FP32 units (we do wish Nvidia didn’t call them CUDA cores but CUDA units); and 16 FP64 units. There is a new Tensor Core design, and Nvidia is intentionally obfuscating about the architectural details of this core. Each quadrant has its own scheduler and dispatch unit, which can do 32 threads per clock, and while that scheduler can juggle multiple unit types at the same time, it cannot dispatch to all of them simultaneously. (The ratio with the “Ampere” GA100 GPU was the scheduler could ship work to three out of the five unit types at the same time. We don’t know what it is for the GH100 GPU.) Each Hopper SM quadrant has 16,384 32-bit registers to maintain state of the threads that are being pushed through the quadrant, and eight load/store units and four special function units. Each quadrant has an L0 cache (which sounds like it should be empty, and while it isn’t empty, we don’t know the capacity). The SM is wrapped by a Tensor Memory Accelerator (more on that in a moment), 256 KB of L1 data cache, and an unknown amount of L1 instruction cache. (Why not just tell us?)
It is hard to get the brain wrapped around what a Tensor Core is, but we think of it as a hard-coded matrix math engine where all of the inputs in the matrix go from registers, pour through the unit, and it does all of the multiplication across the matrix elements at the same time and accumulates it in one fell swoop. This is in contrast to lining up some of the vectors individually in the vector units, multiplying them out, stashing the results in the registers, grabbing some more vectors, doing the multiply, and finishing up by accumulating the whole shebang.
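As a rough illustration of the operation itself – not of Nvidia's hardware – the basic step a Tensor Core hard-codes is a fused matrix multiply-accumulate, D = A × B + C, with the inputs held in a lower precision and the accumulation done in a higher one. A NumPy sketch of the Volta-style 4×4 case:

import numpy as np

# 4x4 FP16 inputs with an FP32 accumulator -- the shape of the original Volta-style operation.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

# A Tensor Core performs the whole multiply-accumulate as one hard-wired operation;
# a vector unit would instead work through these products a few elements at a time,
# parking intermediate results in registers along the way.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)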
Here is how Nvidia illustrated the Pascal matrix math using FP32 units on a 4×4 matrix compared to the Volta Tensor Core units and to the Ampere Tensor Core units, both of which hard-coded the 4×4 matrix math and the latter of which had a sparsity data compression trick that doubled the throughput without sacrificing AI accuracy noticeably:
As you can see, the Volta Tensor Core implemented a pair of hard-coded 4×4 matrix by 4×4 matrix multiplies in FP16 mode, with FP32 accumulate. With sparsity on, the A100 Tensor Core effectively became a math unit that was equivalent to doing calculations on a 4×8 matrix multiplied by an 8×8 matrix, yielding a 5X improvement over the V100 Tensor Core.
In the comparison below, Nvidia appears to be showing the Tensor Cores at the SM level for the GA100 and GH100 GPUs, with four Tensor Cores each:
So we know that the GA100 had four Tensor Cores per SM (which Nvidia revealed in its GA100 architecture paper) and we infer from this diagram that the GH100 also has four Tensor Cores per SM (which Nvidia did not disclose in its GH100 architecture paper). And we can also see that in FP16 mode with sparsity on, the Hopper GPU Tensor Core is effectively doing a multiplication of a 4×16 matrix by an 8×16 matrix, which is three times the throughput of the Ampere Tensor Core with sparsity support on.
If you do the math on all of this, and assign the P100 FP64 vector engines a value of 1 multiplying a 4×4 matrix by another 4×4 matrix, then the V100 Tensor Core was 8X more powerful, the A100 Tensor Core was 20X more powerful and 40X more powerful with sparsity support (where applicable), and the H100 Tensor Core is 60X more powerful and 120X with sparsity support.
The number of physical Tensor Cores varies by GPU architecture (672 for Volta, 512 for Ampere, and 576 for Hopper SXM5), and the number of activated cores on the die also varies (640 for Volta, 432 for Ampere, and 528 for the Hopper SXM5). And further complicating peak performance comparisons, the GPU clock speed also varies by architecture, too: 1.48 GHz for Pascal SXM, 1.53 GHz for Volta SXM2, 1.41 GHz for Ampere SXM4, and an estimated 1.83 GHz for Hopper SXM5. So the raw Tensor Core performance per GPU wiggles up and down based on all of those variables, generation to generation, GPU to GPU.
Just like vector units are getting wider and wider – 128 bits, 256 bits, and 512 bits – to stuff more FP64, FP32, FP16, or INT8 numbers through them to get more work done in each clock cycle, Nvidia is making the Tensor Core matrices wider and taller; presumably this can be used to do math on large matrices, but also to stuff more smaller matrices into them to get more work done per clock.
The important thing is that for certain kinds of matrix math, Hopper just blows away the direct use of FP32 or FP64 units to multiply numbers, albeit at reduced precision. The Tensor Cores also support higher FP32 and FP64 precision, and support twice as much FP32 and FP64 throughput as do the actual FP32 and FP64 units on the GPU. The TensorFlow32 (TF32) format has 8X the throughput with sparsity as the regular FP32 unit. To keep that ratio right on traditional vector units, Nvidia has had to keep increasing the number of FP32 cores and FP64 cores across the generations, averaging about a 1.9X increase across the Kepler through Hopper generations.
You can see this all in the compute capabilities tables for Nvidia GPU compute engines below:
The Hopper GH100 GPU has 144 SMs in total, with 128 FP32 cores, 64 FP64 cores, 64 INT32 cores, and four Tensor Cores per SM. Here is what the schematic of the Hopper GH100 looks like, and you will have to click on that image to zoom in on it because it is a monstrous chip at 80 billion transistors:
As was the case with the GA100, the GH100 is organized into eight GPU processing clusters (GPCs), which correspond to the Multi-Instance GPU (MIG) partitions that the GH100 can be carved up into and virtualized – now with full isolation. The GPCs have nine Texture Processing Clusters (TPCs), each comprising two SMs. At the top of the chip is the uber-scheduler, the GigaThread Engine, as well as a PCI-Express 5.0 host interface. Four of the GPCs are linked to each bank of L2 cache, and there are two banks with a total of 60 MB of capacity.
Along the sides of the GH100 GPU there are a dozen 512-bit memory controllers, which feed out to six HBM3 memory banks. And along the bottom is a high speed hub that all of the GPCs are linked to and that feeds out to 18 NVLink 4.0 ports, which have a combined bandwidth of 900 GB/sec.
To get respectable yields, Nvidia is only selling H100s that have different ratios of compute activated, as is common with compute engines. With the H100 in the SXM5 form factor, all eight of the GPCs are active but only 132 out of the 144 SMs are active and only five out of six of the HBM3 memory banks and associated memory controllers are working. So about 8 percent of the GH100's compute capacity and 16.7 percent of its memory capacity and bandwidth are dark – a similar derating to what was dudded on the GA100 GPU from two years ago. With the PCI-Express 5.0 version of the H100, either seven or eight GPCs are active, and in either case, only 114 of the 144 SMs are active across those GPCs. The same five out of six HBM3 memory banks are active.
Rather than move to chiplets, as Nvidia Research has shown recently that it can do, the GH100 is a monolithic chip in either version.
“We are not averse to chiplets,” explains Jonah Alben, senior vice president of GPU engineering, referring directly to the co-packaged “Grace” Arm server CPU and the Hopper GPU. “But we are really good at making big dies, and I would say that I think we were actually better with Hopper than we were with Ampere at making a big die. One big die is still the best place to be if you can do it, and I think we know how to do that better than anybody else. So we built Hopper that way.”
The GH100 chip is implemented in a custom variant of Taiwan Semiconductor Manufacturing Co’s 4 nanometer 4N process, and consumes 700 watts in the SXM5 form factor, which is driving memory bandwidth to 3 TB/sec instead of the 2 TB/sec in the PCI-Express 5.0 variant of the card, which weighs in at only 350 watts for nearly the same compute performance. (We will be going over performance, price, and power of the Hopper GPU compared to its predecessors in a separate story, but it is worth pointing it out briefly here.)
The shrink from 7 nanometer processes used in the Ampere GPU to the 4 nanometer processes used with the Hopper GPU allowed Nvidia to cram more compute units, cache, and I/O onto the die while at the same time raising the clock speed by 30 percent. (The precise clock speeds have not been finalized by Nvidia yet.)
There are a whole bunch of new technologies that enable the Hopper GPU to offer up to 6X more performance than the Ampere GPU it will replace when Hopper starts shipping in the third quarter of this year. We have already talked about dynamic programming and the Transformer Engine acceleration with Ian Buck, who is general manager of hyperscale and HPC at Nvidia. But briefly, the Transformer Engine can selectively apply the new 8-bit FP8 data format to machine learning training or inference workloads and also invoke other reduced precision to speed up transformer neural network models. Importantly, this adaptive precision is dynamically adjusted, based on lots of simulating done on the Selene supercomputer, to maintain accuracy while trying to boost performance to the largest possible extent.
There are actually two FP8 formats in the Hopper GPU: One that maintains the same numerical range as FP16, but has substantially reduced precision, and one that has slightly higher precision but a smaller numerical range. The FP8 matrix math in the Tensor Core can accumulate into FP16 or FP32 formats, and depending on the bias in the neural network, the output can be converted to FP8, BF16, FP16, or FP32 formats.
When you add it all up, the move to the 4 nanometer process allowed the GH100 clock speed to increase by 1.3X and the number of SMs to increase by 1.2X. The new Tensor Core and the new FP32 and FP64 vector units all provide 2X performance boost per clock compared to those in the GA100, and for transformer models, the Transformer Engine with its FP8 precision boosts machine learning throughput by another 2X. That works out to 3X more performance on the vector engines commonly used for HPC and 6X more performance for the Tensor Core engines commonly used in AI.
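Those multipliers compound, which is where the headline figures come from. A quick back-of-the-envelope check in Python, using only the factors quoted above:

clock_gain = 1.3   # 4 nm process lifts clocks roughly 1.3X over the GA100
sm_gain    = 1.2   # roughly 1.2X more SMs
per_clock  = 2.0   # new vector and Tensor Core units do 2X the work per clock
fp8_gain   = 2.0   # the Transformer Engine's FP8 path doubles ML throughput again

hpc_speedup = clock_gain * sm_gain * per_clock             # ~3.1X, quoted as 3X
ai_speedup  = clock_gain * sm_gain * per_clock * fp8_gain  # ~6.2X, quoted as 6X
print(f"HPC (vector) speedup: {hpc_speedup:.1f}X")
print(f"AI (Tensor Core plus FP8) speedup: {ai_speedup:.1f}X")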
|
<urn:uuid:fe8e5fab-6177-4c15-a86a-b67a9d40b38e>
|
CC-MAIN-2022-40
|
https://www.nextplatform.com/2022/03/31/deep-dive-into-nvidias-hopper-gpu-architecture/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00261.warc.gz
|
en
| 0.937388 | 3,326 | 2.6875 | 3 |
Chrome Headless is “Chrome without Chrome,” in the words of Chrome developer and engineer Eric Bidelman. It’s the functionality of Chrome, but operated from the computer’s command line.
That’s what’s meant by a “headless browser,” which makes now a good time to answer:
What is a headless browser?
A headless browser is a browser without a graphical user interface. Instead of controlling the browser’s actions via its graphical user interface (GUI), headless browsers are controlled using the command line.
Don’t worry. All will become clearer as you read on.
Why use Chrome Headless?
Chrome Headless is used for crawling (by Google), testing (by developers), and hacking (by hackers). It’s also used by:
- Search engines, which use it to render pages, generate dynamic content, and index data from single-page web apps.
- SEO tools, to analyze websites and make suggestions on how to improve them.
- Testing tools, to render pages and compare them to previous versions, in order to track changes in the user interface.
The major advantage of using Headless Chrome is that users can write script to run the browser programmatically, doing tasks like scraping, analyzing, or imaging websites rapidly and at scale without having to open the browser’s GUI and click a million things.
Doing that requires three things: Headless Chrome, DevTools Protocol, and Puppeteer.
You’ve already met Chrome Headless. DevTools Protocol is a remote instance of Chrome DevTools, open in another browser, which allows you to see “through the eyes” of Headless Chrome without running the browser’s GUI. And Puppeteer is a Node library that gives developers tools to programmatically control Headless Chrome via the DevTools Protocol.
Combine all three, and you have a way to script repetitive, large-scale actions using Headless Chrome and run them at scale fast.
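Puppeteer itself is a Node library, but the idea is easy to sketch. The example below uses pyppeteer – a community Python port of Puppeteer that drives Headless Chrome over the same DevTools Protocol – and the URL and output filename are just placeholders; treat it as a rough sketch rather than the canonical Puppeteer workflow.

import asyncio
from pyppeteer import launch  # pip install pyppeteer (first run downloads its own Chromium)

async def snapshot(url: str, out_path: str) -> None:
    # Launches a headless Chromium and talks to it over the DevTools Protocol.
    browser = await launch(headless=True)
    page = await browser.newPage()
    await page.goto(url)
    await page.screenshot({"path": out_path, "fullPage": True})
    await browser.close()

asyncio.run(snapshot("https://example.com", "example.png"))

Looping the same handful of calls over a list of URLs is all it takes to scale this to hundreds of pages, which is the point of scripting the browser rather than clicking through it.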
How does Headless Chrome compare to other versions of Chrome?
Chrome releases four standard channels, plus Chromium builds that match Chrome release numbers and the Chrome OS for Chromebooks. Those channels are:
Chrome Stable is the mainstream release that most users have. Its features are tried and tested, and it hardly ever crashes.
Chrome Beta is tomorrow’s Stable, and thus isn’t quite as stable. The trade-off is more new features, sooner.
Chrome Dev is aimed at developers, updated much more frequently and much more prone to crashes. It primarily exists to let developers test their apps on the Chrome of the future and avoid obsolescence.
Chrome Canary is updated daily and especially prone to crashing and glitching. It’s an early testbed for features and ideas, and it’s the only Chrome channel that runs in its own instance automatically.
Finally, if you are—in Google’s words—“absolutely crazy,” there’s Chromium Raw, a hastily assembled, wildly unstable look into one of Chrome’s potential futures.
Headless Chrome isn’t a different channel. It’s a different way to run the same application. Later in this post you’ll find out how to launch both Chrome Stable and Chrome Canary in Headless mode. It’s the absence of a GUI that makes the first impression of difference; functionality is the same, you just have to access it differently.
Normally, when you launch Chrome, you’ll click on the application icon—either in your dock or Applications folder, or in your Start menu if you’re a Windows user. Chrome opens like any other application, in a window on your desktop that you can make fullscreen if you want.
You can enter URLs or search terms, navigate to websites, view them, and interact with them. If you want the browser to do different things or display a different website, you use a set of clickable dropdown menus in the application’s GUI or in your OS to do that.
Chrome is designed to be simple and intuitive to get started and its GUI is easy to get used to.
If you’re a more advanced user, you can open Chrome’s powerful, flexible DevTools and modify the way websites are displayed and the way they work in your browser, in real time, right in front of you. All that takes place inside the application window, with web pages rendered and displayed, inside Chrome’s GUI.
All this is true of the other types of Chrome—even Chrome Canary, the unstable, bleeding-edge Chrome version that’s updated daily. Whichever channel or build of Chrome you’re running, this relationship between the application and the user remains the same.
Headless Chrome is not the same.
In Headless Chrome, you’re not going to see any of these familiar elements of Chrome. There is no user interface. This means there’s nothing to interact with in the way we’re used to. So a new set of tools is needed to interact with Chrome. It also means that you can easily use Chrome Headless to do things that don’t need a UI or where a UI would actively get in the way, like testing and web scraping.
Instead, you’re going to start Chrome from the command line. What you’ll see is just text in the Terminal or Command Line window. Chrome will be doing its thing without any of the superstructure that normally shows you, the user, what designers and developers wanted you to see. You’ll see what goes on under the hood instead.
Let’s get started.
Getting started with Chrome Headless
To open Chrome Headless you need to open a Chrome binary in the command line. If you only recognize a couple of words in that sentence, don’t worry. It’s simple and we’re about to walk through it step by step.
First, open your command line application.
- For Mac users this is Terminal, which is usually in the Utilities folder in Applications.
- For Windows users, it’s Command Line. You’ll find that by opening Start, going to “Search” or “Run,” and typing “cmd” (short for “command”) and hitting Enter.
Once you have your command line tool open in front of you, it’s time to use it to open Chrome.
To do that you need to know where Chrome is on your computer—where it really is, not where your computer’s GUI shows you it is.
In nearly every case if you’re using a Mac, this is what you’ll use:
/Applications/Google Chrome.app/Contents/MacOS/Google Chrome
Windows users should use this filepath:
The problem with this is that if you’re reading this in Chrome—which statistically you are—Chrome won’t open a new browsing session, just a new window in your extant browsing session.
You need a version of Chrome that runs separately as a different application. (You can also do some stuff with aliases that makes normal Chrome work like this, but that’s a bit complex for a beginner’s guide.)
Time to download Google Chrome Canary:
Having downloaded Chrome Canary, we’re going to open it in the command line. Again, for Mac users you want:
/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary
Copy and paste that into Terminal and you should see Canary open a new window.
Windows users should amend the filepath to lead to Canary in their C drive.
Now you have opened a Chrome binary. How do we make it headless?
Shut Canary—just using the normal UI for now—and go back to Terminal/Command Line. Now, reenter the same command you used before but append this to it:
--headless
So if you’re a Mac user you’re copy-pasting this:
/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary --headless
This is the headless “flag”—not to be confused with Chrome flags, which are internal to Chrome and are experimental features you can enable by going to chrome://flags.
Windows users should do the same thing in their Command Line tool.
You’ll see the yellow Chrome Canary symbol appear in your Dock and then immediately disappear. Chrome Canary is now running in Headless mode.
One thing about a tool with no UI is, it’s tough to interact with—what can we really do with this tool right now?
But we can use a version of DevTools to manage this headless Chrome instance and do stuff like throttling tests, device emulation, code coverage checks, and plenty more. Anything you can do from inside Chrome’s DevTools, you can do programmatically in Headless Chrome, automatically and a lot faster.
You can also do some fast, simple things to get you started.
Things you can do with Chrome Headless right now
Now that you’ve learned how to launch and kill Chrome Headless from the command line, there’s a ton you can do with it. Here are a few basics to get you started deriving some actual value from this tool.
1. Visit a website
Before you do anything else in Chrome Headless you need to give it something to chew on. Launching the browser in headless mode isn’t enough.
To visit a website in Chrome Headless, all you have to do is add the URL after the headless flag in the command line.
Mac users should use this:
/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary --headless https://nira.com
Again, you’ll see the Canary icon jump up and disappear in the Dock. But that’s all you’ll see. To see more of what’s happening you can screenshot the page in the command line, or use DevTools from the command line.
2. Take a screenshot
Screenshots can be done with a flag:
--headless --disable-gpu --screenshot
Add that to your command line text:
/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary --headless --disable-gpu --screenshot https://nira.com
You’ll see a notification in the command line telling you where the image is. By default it will be a file called screenshot.png:
[0329/141521.683403:INFO:headless_shell.cc(620)] Written to file screenshot.png
Macs will save it to the Home folder automatically. Be aware that each new screenshot will be screenshot.png, and will overwrite the last one.
This is just a screen’s worth of imagery. On a longer page, everything after the first screen will be missing. What if you want a complete web page? Then you should make a PDF. Incidentally, this is one of the easiest and quickest ways to make a non-watermarked PDF of a website, using nothing but your (headless) browser.
3. Create a PDF
Add this flag to your command line script:
--print-to-pdf
If you’re a Mac user that script should now read:
/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary --headless --disable-gpu --print-to-pdf https://moz.com/
I’m using Moz’s homepage rather than ours because it’s longer, so the effect is easier to see.
That will produce a file called output.pdf, which again will be in the Home folder by default if you’re a Mac user.
[0329/142229.301088:INFO:headless_shell.cc(620)] Written to file output.pdf
Again, this file will be overwritten every time you do this.
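Because each run overwrites the previous output, batching is easiest from a small script that renames the files as it goes. Here is a minimal sketch in Python that wraps the exact flags shown above – the binary path is the Mac one used throughout this piece, and the page list is just an example:

import shutil
import subprocess

CHROME = "/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary"
PAGES = {
    "nira": "https://nira.com",
    "moz": "https://moz.com/",
}

for name, url in PAGES.items():
    # Render the page headlessly; Chrome writes output.pdf into the working directory.
    subprocess.run([CHROME, "--headless", "--disable-gpu", "--print-to-pdf", url], check=True)
    # Rename the file before the next run overwrites it.
    shutil.move("output.pdf", f"{name}.pdf")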
4. Use DevTools from the command line
You can open a remote instance of Chrome DevTools and use it to control your Headless Chrome. Just add this flag to your command line text:
--remote-debugging-port=9222
You can use any port, but if you don’t have much experience with this, stick to the default. Your command line script should look like this now:
/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary --headless --remote-debugging-port=9222 https://nira.com
I recommend quitting and reopening Canary when you do this.
Chrome’s DevTools will let you know they’re ready to help you:
DevTools listening on ws://127.0.0.1:9222/devtools/browser/e9deca6c-777b-4615-b313-9b0103cf7566
Then drop this URL into a new tab on the Chrome you’re actually using to read this:
http://localhost:9222
Obviously the numbers have to match—if you used a different port, use those numbers in your URL.
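You can also check what the headless instance is doing without opening a browser tab at all: the debugging port serves a JSON list of inspectable targets. A small sketch, assuming the default port 9222 used above:

import json
from urllib.request import urlopen

# Each entry describes one inspectable page, including its title, URL,
# and the webSocketDebuggerUrl that tools like Puppeteer attach to.
with urlopen("http://localhost:9222/json") as response:
    targets = json.load(response)

for target in targets:
    print(target.get("title"), "->", target.get("url"))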
What if you’re not using Chrome?
I know the odds are that you’re already using Chrome to read this but in case you’re not, this works fine in any browser. All it does is show you what’s happening as you manage DevTools through the command line. You’ll see a tab marked Headless, with Inspectable WebContents at the top, and your page meta title one line down. That’s a link. Click it and you’re in. You’ll see the page and the code next to it in a remote instance of DevTools.
|
<urn:uuid:791be551-4234-42a4-8777-93bfa30eaab0>
|
CC-MAIN-2022-40
|
https://nira.com/chrome-headless/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00261.warc.gz
|
en
| 0.858609 | 3,015 | 3.046875 | 3 |
It has been roughly a year and a half since Hewlett Packard Enterprise first announced its intent to create a completely different kind of a system. The company may have announced it then, but you can bet that its researchers and engineers had long been playing with integrating many of the very concepts that you have been reading about recently in The Next Platform. High on this list would include the now simple realization that technology was soon to be developed that would allow the basic organization of computer systems that we have had forever to change.
It wasn’t really all that long ago that solid-state storage started to range from a partial to a complete substitute for spinning disk drives. This fast, non-volatile flash storage (also known as NAND and NOR memory) was then used simply as a faster form of I/O-based persistent storage; we even called them flash drives. Realizing that there were other non-volatile storage technologies not all that far into our future that are or will be still faster, still more dense, and considerably more reliable, it now seems like a no brainer to be asking why such memory needs to be relegated to the far side of any I/O infrastructure. Why can’t this persistent memory be a computing system’s main memory?
And not just main memory, but all memory. If it’s persistent, and if there is enough of it somewhere in the system, why would there be a need for rapidly accessible I/O-based storage at all?
For those of you following The Machine, as we have been doing here at The Next Platform, you know that HPE has been pushing a type of persistent storage based on memristor technology. HPE has struggled to bring that form of persistent storage to market, and although it will certainly be a feather in its cap if it does get commercialized, memristor-based storage is only a small part of the broader vision for The Machine. Any number of other potential technologies, some already very near to the marketplace, could serve the same purpose. What HPE seems to be envisioning for The Machine is a complete solution: fast multi-processors, both rapid dynamic and persistent memory, many optically-linked nodes, with all persistent memory being byte addressable from all processors in the system. In addition, this total solution includes operating systems, file systems, database managers, and tools that are well aware of this difference in system topology. Picture your changing database records as being assured of the protection of persistent storage merely by having a processor copy data into it, or even just flushing processor cache.
What does this system called The Machine really look like? Answering that question is the purpose of this series of articles.
Since it is a series, we will start with something normally relegated to the end of any paper, a bibliography of sorts. Most of these are videos of presentations by HPE personnel. The last is a nice paper on a possible programming model relating to the use of non-volatile memory.
- Developers, start your engines; open source software for The Machine, HPE Discover 2015 London
- Programming with non-volatile memory; Hewlett Packard Labs makes it easy, HPE Discover 2015 London
- HP Labs Peeks Under the Hood of The Machine, HPE Discover 2015 Las Vegas
- NV-Heaps: Making Persistent Objects Fast and Safe with Next-Generation, Non-Volatile Memories
Before proceeding, I would like to thank Paolo Faraboschi and the Hewlett Packard Labs team for answering a number of my questions.
The (Not-So) Basic Node Of The Machine
Let us start with a picture of an exemplary node from The Machine from a Hewlett Packard Enterprise presentation at HP Discover 2015. I have identified portions within this node per this presentation.
Interesting in itself, but I have taken the liberty of abstracting the mock up as in the following figure to include the interconnects as I understand them to be. This represents only a single node, part of a much larger single system built using many of these. So let’s next take a pass at understanding what the topology of a single node implies. We will be looking at the system made from many of these shortly.
The entity called the processor SOC (short for system-on-chip) could be most anything. HPE has spoken of this potentially being 64-bit ARM or X86 processors. As interesting as a processor’s instruction set is, this series of articles will spend no real time there; the storage and programming model is more our focus. So let’s for now assume that this SOC is a single-chip, multi-core package, each core and chip with a sufficient amount of cache, each chip with direct connections to up to (based on the mock up) eight DRAM DIMM slots. For the time that this is likely to come out, let’s call this at least 1 TB (per node) of rapidly accessed volatile memory.
The same processor chip additionally supports a high bandwidth interconnect to a module called the local fabric switch. For only this local node, the purpose of this unit is to enable the processors here to access the contents of local persistent memory units by way of modules called media controllers. Think of these media controllers as serving the same function as the on-chip memory controllers on the SOC complex, and more, but here relative to the local persistent memory. That local fabric switch goes on and supports a connection to an off-node optical adapter. (More on that later.)
Along with this memory-based connection to other nodes, the nodes in The Machine also support more traditional means of doing inter-node communications and, of course, communications into the broader world. Think Ethernet.
So that is our starting point. It represents just one node, one node interconnected to many more via this optical adapter by way of what I’ll call for now a switch (for now picture it as a top-of-rack unit), all together comprising what is ultimately just the hardware of The Machine.
Of Volatile Cache, Volatile DRAM, And Non-Volatile Persistent Memory
This might seem an odd place to start in characterizing The Machine, but you will find that knowing the following will make The Machine, and all of its cool differences, seem obvious to you.
Even though all DRAM memory in almost any computer system can be considered to be byte addressable, and so processors can access individual bytes of DRAM, in reality far and away most accesses of DRAM are done as block reads and writes. A “block” here is the size of a processor’s cache line, one of many in the cache of the processor cores. Said differently, assuming that a cache line is 128 bytes in size, DRAM is not accessed as individual bytes or words, the DRAM data block being accessed is 128 bytes in size and aligned on a 128-byte boundary. DRAM memory is being accessed to do cache fills – and therefore block reads – or cache line flushes – and therefore block writes. Such blocks are the unit of storage that are typically passing between the processor chip and the DRAM; it is a rare event that individual byte accesses to/from DRAM ever occur. Accesses of your data as the typical integer value are done from the cache.
Stepping up a level, even though no programming language gives you visibility to it, for most program execution time, a program accessing what it thinks of as “memory” is instead typically accessing the data and instruction stream residing in a processor’s cache. The programming model is such that a program’s instructions act as though they are accessing the memory’s real address space, but in reality the processor cache – a much faster memory – steps into the middle of the access and transparently serves up the needed data.
When the cache does not have what is needed, the processor’s core is effectively unaware and is simply told to wait a while, allowing the cache controller to find the requested data block, no matter where it is in the entire system. When the cache needs more space for the next cache fill from the DRAM, it may flush the contents of some cache lines as data block writes back to memory. The location of the write in the DRAM is dictated by a block’s real address held along with the cache line. Again, it does not matter where in the entire system the target real address happens to be pointing, the flushed data block will make its way back to that location. That’s basic cache theory as it relates to what every system has dealt with up to now, the accessing of the DRAM.
Let’s switch gears and, still limiting ourselves to one node, consider The Machine’s node-local persistent memory. Just like DRAM, its content is capable of being byte addressable and byte accessible. That’s relatively new for persistent storage. The processor could read byte-by-byte from persistent memory, but, as you will see in a bit, the processor – and the performance of a program – does not want to wait for individual byte reads from persistent memory any more than it does for reads from DRAM. So here too, with persistent memory, it is completely logical that reads and writes to it are done as block-sized cache line fills and flushes. It is also completely reasonable to observe that most of the accesses a program thinks it is doing from persistent memory’s data are actually being done as accesses to and from the cache; once a data block is in the cache, subsequent accesses of that same block are done from there. A program may act as though it is storing to an integer variable at a location in persistent memory, but such stores are – at least temporarily – being done first in the cache, into cache lines tagged with the real address of persistent memory.
Given that The Machine’s processor caches are intended to hold such byte addressable data from both the volatile DRAM and the non-volatile persistent memory, it follows that the unit of storage being requested from and returned to the persistent memory – via the fabric switch and the media controllers – is that of a data block, just like with the DRAM.
Sure, it’s all done via the cache, but there is ultimately a big difference between cache-with-DRAM and cache-with-persistent memory. Either way, for the data remaining in the processor’s cache, if power is lost to its processors, you can say good bye to the contents of the cache. It does not matter whether those cache lines were tagged with the real addresses of DRAM or real addresses of persistent memory; if the power is off, the data is gone. Further, if persistence is required, the cached data needs to be flushed out to the persistent memory and to have actually made it there.
When the power is lost, the data in persistent memory remains; it persists. And the beauty is that it took not much more than flushing such cache lines out to Persistent Memory to make these blocks persistent. (Before going on, pause for a moment and consider: What does it really take to ensure data’s persistence in today’s typical system? Functionally, there is a big difference to make data persistent. From a performance point of view, well, there is no comparison. It should go without saying that we are talking about multiple orders of magnitude difference.)
This approach to memory is fast, but as you will see, enabling this takes a different attitude. From the point of view of a program and DRAM, cache is generally transparent. For the programmers out there, when was the last time that you thought about cache? That’s just as the processor designers intend. On the other hand, from the point of view of persistent memory, making the contents of the cache ultimately persistent is something that seems to need our awareness. But that’s no big deal, right? Making data persistent today requires our full awareness as well.
Still, consider, from the high level point of view, the process of a program saving a file. Aside from the definite speed difference between saving to persistent memory and saving onto I/O-based persistent storage devices, saving the file rather looks the same. At such a high level, do we really care whether my file system resides on one versus the other? We just say save or commit a change to a file, and the file system manages the writes no matter the location of the persistent storage. But at the lower level of actually making it happen, it rather does matter. At that low level, there is a huge difference between what it takes to drive a file to disk versus driving that same data to the right location in persistent memory.
For persistent memory, if you are programming at a low level, the trick is first knowing:
- That the cache lines are tagged with a real address of said file in persistent memory (i.e., when a cache line is flushed, it writes its contents into its particular block location in persistent memory) and
- That the data flushed from the cache really had become successfully and completely stored into the persistent memory. It’s not persistent until your program (or later recovery) knows it’s persistent. That fact that you know that the data is on its way toward the persistent memory buys you no guarantees. (Recall that your program did not need to know when data was on its way into the DRAM; a power failure affects both the cache and the DRAM the same way.)
If you are familiar with the basic workings of cache, you know that – with enough time and disuse – changed data residing in the cache does, sooner or later, make its way out to memory. Hardly any program bothers to think in those terms; it does not really matter to a program whether data is in some cache or in DRAM. Indeed, the processor designers work hard and long to ensure that the cache is effectively transparent to any program; this remains true even in a multi-threaded, multi-core, shared memory system. Hardly any code takes the existence of the processor cache into account; it just works. Again, in the fullness of time, changed cached data sooner or later shows up again in memory.
But, historically, everyone understood that to make data persistent, we also needed to programmatically force the changes out into I/O space and then on its way onto the likes of disk drives, often with your program waiting for it to complete. Your program, or some entity working on your behalf, spent the very considerable time, complexity, and processing to ensure that your data made its way out there. Your program made the simple request to make the data persistent, and then something else took over to make it so. And, often, you waited.
Taking all of this into account, let’s consider a program on The Machine having its changed data still residing in a processor’s cache. Let’s also say that the changed data is in cache lines tagged with real addresses associated with persistent memory. Being tagged in this way, in some future time these changed data blocks may well make their way out into persistent memory on their own. Its future may be persistent, but it certainly is not now. These data blocks can also stay in the cache – or even in the cache of another processor – indefinitely. But if your program requires explicit knowledge that these same changed blocks really have become persistent, an explicit request – a forced write, a cache line flush – is needed to begin the process of returning the cached data to persistent memory. It will not take long, and it is not particularly complex to initiate such cache flushes (HPE will be providing a number of APIs that make this so), but if your program should not continue until such data is known to be persistent, your program needs an explicit means of knowing that the cache flushes really are complete.
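The Machine’s own APIs for this are not spelled out here, and an ordinary memory-mapped file is not the same thing as byte-addressable persistent memory, but the programming discipline is analogous and easy to sketch: store through the mapping, explicitly flush, and only then treat the data as durable. A rough Python illustration of that discipline:

import mmap
import os

PATH = "record.dat"   # stand-in for a region of persistent storage
SIZE = 4096

# Create and size the backing file, then map it into the address space.
fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

# "Store" through the mapping: at this point the update may still live only
# in volatile caches and pages, so it is not yet persistent.
buf[0:11] = b"hello world"

# Explicitly flush and wait for completion -- only after this returns may the
# program assume the update is durable.
buf.flush()

buf.close()
os.close(fd)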
Why tell you all this? It is to start to make it clear that creating The Machine is a lot more than just hanging persistent memory off of a processor. Creating The Machine also means creating the operating system, compilers, APIs, and overall software infrastructure to make this all happen, to make it implicitly obvious to use, and to make this thing really fast. It also means, as you will see shortly, that if you have performance needs that require your own programs to work at this low level – and now with The Machine, it can – it will take a different mental model to support it well and correctly. It is all just memory, after all, but it is also conceptually different.
We will take a closer look at a number of other aspects of The Machine in subsequent articles.
|
<urn:uuid:4e56b10d-5615-4312-a7b2-92abc2171725>
|
CC-MAIN-2022-40
|
https://www.nextplatform.com/2016/01/04/drilling-down-into-the-machine-from-hpe/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00261.warc.gz
|
en
| 0.951513 | 3,474 | 2.71875 | 3 |
Hong Kong Scientists Improve Efficiency of Quantum Memory
(AsianScientist) Scientists in Hong Kong have found a way to improve the efficiency of quantum memory by cooling rubidium atoms to nearly absolute zero temperatures and increasing the signal-to-noise ratio of single photons.
Quantum memories are essential components for quantum computers. However, the production of highly efficient quantum memories remains a major challenge as it requires a perfectly matched photon-matter quantum interface.
In the present study, researchers led by Professors Du Shengwang and William Mong at the Hong Kong University of Science and Technology created a quantum memory device by trapping billions of rubidium atoms into a hair-like tiny space. Professor Shengwang explained, “Although the quantum memory demonstrated in this work is only for one qubit operation, it opens the possibility for emerging quantum technology and engineering in the future.”
|
<urn:uuid:8a7823f9-7390-4622-b946-a034937303f1>
|
CC-MAIN-2022-40
|
https://www.insidequantumtechnology.com/news-archive/hong-kong-scientists-improve-efficiency-quantum-memory/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00461.warc.gz
|
en
| 0.901699 | 176 | 3.125 | 3 |
2FA (Two Factor Authentication)
2FA (Two Factor Authentication) is a process that uses two steps to authenticate a user.
Rather than asking for just a single piece of information to verify a user, an additional step is added. For example, a temporary identity token that is good for only minutes (rather than a permanent password) can be sent to a cell phone or generated by an authenticator, and is required to access the account.
Often, a third-party authenticator (TPA) app enables two-factor authentication, usually by showing a randomly generated and frequently changing code to use for authentication.
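As a rough illustration of how such a frequently changing code can be generated, here is a minimal time-based one-time password (TOTP, RFC 6238) sketch in Python; the base32 secret is a made-up example, and real deployments should rely on vetted libraries rather than hand-rolled code.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    # Derive the current one-time code from a shared base32 secret.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # changes every `period` seconds
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a six-digit code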
|
<urn:uuid:26279a85-fc37-4bfa-b00f-776e1c6ce756>
|
CC-MAIN-2022-40
|
https://abusix.com/glossary/2fa-two-factor-authentication/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00461.warc.gz
|
en
| 0.903635 | 126 | 3.359375 | 3 |
SOC (Security Operations Center)
A Security Operations Center (SOC) is a centralized cyber security function within an organization that employs people, processes, and technology to continuously monitor and improve an organization’s security posture by preventing, detecting, analyzing, and responding to cybersecurity incidents.
A Cyber Security SOC acts as the command center by taking in telemetry from an organization’s network, devices, and information systems, regardless of the location of those assets. By collecting context from all sources, advanced threats are more likely to be identified. Ultimately, over time, the SOC becomes the cyber security center in which every event within the organization is logged, correlated, and monitored.
For each of these events, the SOC then decides how the event is managed and acted upon.
|
<urn:uuid:98648338-0463-4b1d-97df-59b70ee90028>
|
CC-MAIN-2022-40
|
https://abusix.com/glossary/soc-security-operations-center/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00461.warc.gz
|
en
| 0.950411 | 167 | 3.0625 | 3 |
What Are Digital Badges and How Do They Support Your Professional Development?
You may have heard the term “badging” or “badges” thrown around before, especially in the workplace or academic setting. Many people use badges, also referred to as learning badges, skill badges, or training badges, to highlight their professional development. During the job search and application process, many forward-thinking job candidates employ badges to visually showcase their accomplishments and skillsets to potential employers. These badges help candidates stand out amongst competition.
What is the Purpose of a Training Badge?
Think of a badge like a certificate — just created for digital use. Badges are used to publicly illustrate a person’s credentials, skill level, or accomplishments, such as the mastery of a skill or completion of a program. Unlike certificates, they’re designed for digital settings and act as a visual verification of the recipient’s skills.
Why Do I Need Badges?
Badges allow peers, colleagues, and employers to quickly determine your skill level and competencies. Let’s say you’re a candidate looking for a new job. If a recruiter or hiring manager is looking at 30-40 resumes a day, your resume might stand out if your acquired badges are listed across the top. Not only are badges designed to be eye-catching and visually appealing, but they are specifically created to showcase your skills quickly and efficiently.
Do Training Badges Replace Diplomas?
Comparing badges to diplomas is like comparing apples to oranges. Diplomas are given in academic and collegiate settings, but many employers (even Elon Musk and Bill Gates) say that degrees are a thing of the past. Today, employers are looking for candidates with job skills, who are appropriately equipped to do the job on day one. Badges are a great way to showcase that capability— whether it’s professional development acumen or job-specific skills, they show the employer that you have exactly what they’re looking for.
Who Gives Out Badges?
Any creditable source can give out badges! Some colleges and boot camps will give badges to students for completing certain courses, while employers may give out badges to employees for completing professional development training. HubSpot issues badges for passing their digital marketing courses and Microsoft offers them to those who pass their exams. The important thing to remember, though, is that a badge is only as credible as the organization who provides them.
What Kind of Skills Would I Receive a Badge For?
Just about anything! If you scour the internet, you can find a badge for all sorts of things— but keep in mind that if you plan to utilize badges for a professional purpose, you’ll want to be strategic about which badges you earn. At Centriq, we offer an entire program that focuses solely on technologies you’ll use on the job and a professional development badging program that clearly emphasize skills students gain throughout our Online IT Training Program, including:
- Career Readiness
- Program Graduate
- Featured Student
- Perfect Attendance
Do Badges Benefit IT Professionals?
Without a doubt! That’s why Centriq places such a large emphasis on badging. Because degrees aren’t required for tech jobs, having badges that are relevant for the IT field shows exactly which skills you have and how they will benefit the employer. Take the Systems and Security Administrator badge for example: students only receive the badge once they complete rigorous technical training and have a verifiable ability to work across functional IT networks.
Badges are a great way to succinctly show off your skillset and accomplishments. During the job application process, badges make it easy for a hiring manager to quickly ascertain your qualifications. If you’re already making a hiring manager’s job easier, then you’re one step closer to snagging an interview.
Want to speak to an Admissions Advisor more about our process? Fill out our online form or call us today.
|
<urn:uuid:68a8b940-2438-4667-8d71-52042a104056>
|
CC-MAIN-2022-40
|
https://centriq.com/blog/what-are-digital-badges-and-how-do-they-support-professional-development/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00461.warc.gz
|
en
| 0.946488 | 824 | 2.8125 | 3 |