text: string, length 234 to 589k
id: string, length 47
dump: string, 62 classes
url: string, length 16 to 734
date: string, length 20
file_path: string, length 109 to 155
language: string, 1 class
language_score: float64, 0.65 to 1
token_count: int64, 57 to 124k
score: float64, 2.52 to 4.91
int_score: int64, 3 to 5
What Is Data Standardization?
In data standardization, the software algorithms that execute data transformations are decoupled from the systems that house the data. These algorithms are not persisted in code; their logic is maintained as human-readable rules that non-developers can manage through visual interfaces, without relying on IT. Data standardization abstracts away the complex semantics of how the data is captured, standardized, and combined. It gives aggregators the agility to onboard new partners quickly, refine the rules that logically blend a new provider’s data with existing data, and deliver faster, more accurate analytics to the business.

Why Is Data Standardization Important?
Data mapping is here to stay: the world is not about to adopt a unified way of defining every business data element any time soon. The good news is that mapping doesn’t have to be painful. A modern strategy is to virtualize the entire process.

Organizations often hard-code their standardization logic in the systems that house and move data. Such tight coupling means organizations spend significant time creating, maintaining, and debugging standardization code spread across several locations, with limited ability to ensure its quality and reusability. With complex standardization logic, organizations struggle to onboard new partners quickly, causing them to miss onboarding milestones and new revenue opportunities.

An alternative approach uses virtualization to decouple and abstract away standardization code, enabling business users to define standardization rules in a visual interface that converts the logic to code at query time. With this type of virtualization, organizations increase their business agility and onboard new partners faster.

The Data Standardization Process
When a new data provider is onboarded, the analytics automation platform uses its proprietary Data Scanner to understand the source data, regardless of its format or the system it resides in. The platform builds a universal, virtual data layer that is automatically enhanced with pointers to the new raw data elements and includes all the transformation logic the business requires. These virtual data columns and their transformations allow the platform to query the raw data at any time, eliminating data moves and copies and ensuring that query results reflect the latest changes in the raw data. When schema changes are detected, the platform adjusts the data layer so that it continues to point to the raw data elements correctly.

With the virtual data columns added, business users define virtual rules to standardize and blend the data. The rules are virtual because they are not persisted in code; they are kept in human-readable form that business users maintain. Only at query time does Alteryx automatically generate and execute the code needed to create tables and views.

There are three types of rules that business users maintain for data transformation:
Taxonomy rules: These rules map the columns and values of the partner’s data to the aggregator’s. For instance, a partner might describe its transactions with two columns: a settlement amount and a type, where the type can be one of three options.
Reshape rules: These rules specify how to pull data elements together from the partner’s side and how to distribute them on the aggregator’s side. For example, a retailer might provide all transaction data in a single file, but the aggregator needs to split it into three tables: one for transactions, another for retailer data, and a third for consumers.
Semantic rules: These rules articulate the meanings of data elements and how the business uses them to describe its domain. For example, what constitutes a successful transaction? And how should its final settled amount be computed after accounting for refunds? Each data provider has semantics that make sense in the context of its own operations, but the data aggregator must reconcile them with every other provider’s data definitions.

You can define these rules declaratively using a visual tool with a rich set of transformation functions that make standardization easy. For instance, users can map columns and translate values to a standard set, or pull data together from multiple file formats, including XML, CSV, JSON, and EDI. Common problems such as reordered columns, renamed columns, and changes to column values or types can be handled automatically. Users can also use a SQL console to express more complex logic. In addition, users can build data validations and reports to monitor and confirm that all standardizations were applied correctly. As soon as a new file or record is added or changed, a Data Scanner detects it, applies the relevant standardization rules (by dynamically generating and executing the appropriate SQL) and exports the data to a standard format.

The Future of Data Standardization
Standardizing business data from multiple partners is a critical and common task that will only become more important and frequent as economic developments create opportunities to partner with more stakeholders, and as these data providers continue to shape their datasets according to their own business logic. Given the impact that data standardization has on business agility and performance, organizations that aggregate data from multiple sources should carefully consider the infrastructure and workflows they put in place and their ability to onboard new partners.

Getting Started with Data Standardization
Organizations often face a similar challenge: how to ingest datasets that each arrive formatted according to their provider’s custom business logic, and standardize them so they can be compared, aggregated, and otherwise analyzed consistently. The Alteryx Analytics Automation Platform helps companies prepare data across disparate sources without requiring engineering teams to build ETL and data pipelines. Customers unlock the full value of their data by empowering business users to work with datasets that are hard to understand, reconcile, and blend, enabling them to instantly capture and validate business logic in support of a wide range of use cases.
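To make “rules that convert to code at query time” concrete, here is a minimal illustrative sketch in Python. It is not Alteryx’s actual engine or rule format; the rule structure, column names, and value codes are assumptions invented for the example.

```python
# Illustrative only: a toy rule-to-SQL translator, not Alteryx's actual engine.
# The rule format and all names below are hypothetical.

taxonomy_rule = {
    "source_table": "partner_transactions",
    "column_map": {"settlement_amt": "amount", "txn_type": "type"},
    "value_map": {"type": {"S": "sale", "R": "refund", "C": "chargeback"}},
}

def rule_to_sql(rule):
    """Generate a SELECT that standardizes column names and translates coded values."""
    select_parts = []
    for src, dst in rule["column_map"].items():
        vmap = rule["value_map"].get(dst)
        if vmap:
            cases = " ".join(f"WHEN '{k}' THEN '{v}'" for k, v in vmap.items())
            select_parts.append(f"CASE {src} {cases} END AS {dst}")
        else:
            select_parts.append(f"{src} AS {dst}")
    return f"SELECT {', '.join(select_parts)} FROM {rule['source_table']}"

# Prints a single SELECT ... FROM partner_transactions statement built from the rule.
print(rule_to_sql(taxonomy_rule))
```

A real platform would layer reshape and semantic rules, validation, and schema-change handling on top of a translation step like this.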
<urn:uuid:aca42461-4180-4dbc-8adf-64d59221b3e0>
CC-MAIN-2022-40
https://www.alteryx.com/glossary/data-standardization
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00263.warc.gz
en
0.910896
1,232
2.734375
3
A new survey by the Pew Research Center on how Americans view cybersecurity finds that most people are concerned about online security but forgo the steps necessary to protect themselves.

The survey of 1,040 US adults shows that 41% of them have shared the password to one of their online accounts with friends or family members. Young adults are especially likely to engage in this behavior – 56% of online adults ages 18 to 29 have shared passwords. Along with sharing passwords, 39% say they use the same password or very similar passwords for many of their online accounts. And 25% often use passwords that are less secure than they’d like because simpler passwords are easier to remember than more complex passwords.

"When it comes to passwords, very few of us are acing the test," says Aaron Smith, a co-author of the report and associate director of research at Pew. "And no age group is doing particularly well."

Smith says the study also found that people feel they have lost control over their personal information. For example, the study found that 64% have directly experienced some type of significant data theft or fraud and 49% think their personal data has become less secure in recent years. Americans have also lost confidence in major institutions to protect their data, most notably the federal government (28%) and social media sites (24%). In contrast, 42% of respondents say they are "somewhat confident" and another 27% say they are "very confident" that their credit card companies can be trusted to protect their data.

"In some ways it’s not a fair comparison because social media sites especially don’t have a full customer service staff and 1-800 numbers to call," says Eddie Schwartz, board director at ISACA. "Social media sites like Facebook are free and you get what you pay for."

Schwartz adds that for the most part the Pew data meshes with a recent ISACA/RSA study from last year, in which 74% of respondents said they expected to fall prey to a cyberattack in the next year and 60% had experienced a phishing attack. "So yes, we know these cyberattacks are happening, we know they are bad, we’re afraid, but not always willing to do something about it," Schwartz says.

On a more positive note, the Pew study found that 52% of those surveyed use two-factor authentication on at least some of their online accounts. And 57% say they vary their passwords across their online accounts.

Here’s a sampling of the report’s other findings:
- Roughly 10% of those surveyed say they never update the apps on their smartphone, and only 32% do so automatically. Another 14% say they never update the operating system.
- 51% of those surveyed say a major attack on our nation’s public infrastructure will "probably" happen in the next five years, while 18% say it will "definitely" happen.
- 75% of Americans have heard at least something about the Target breach, and 47% have heard "a lot" about it. Only 33% of those surveyed are aware of the OPM attack, with only 12% hearing "a lot" about it.
- Americans are divided over encryption. 46% believe that the government should be able to access encrypted communications to investigate crimes, while 44% say that technology companies should be able to use encryption tools that are unbreakable to law enforcement. Democrats and younger adults tend to support strong encryption, while Republicans side with law enforcement.
<urn:uuid:b2bde338-1815-4a4a-8469-52dac6d0639e>
CC-MAIN-2022-40
https://www.darkreading.com/cloud/pew-research-study-exposes-america-s-poor-password-hygiene
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00263.warc.gz
en
0.963424
728
2.828125
3
In the coming weeks, we will be discussing different aspects of autonomous vehicles in this blog. Everyone is familiar, at least at a high level, with the concept of an autonomous vehicle, or in other words a vehicle that can drive itself. However, the complexities involved in making this happen are among the biggest challenges in engineering today. Imagine all the different scenarios and corner cases that we humans deal with today when we drive. We must negotiate rush hour traffic, anticipate a child running in front of our car, weave around debris in the road, merge into high-speed traffic, react to traffic signals, follow detour signs and so much more. These do not seem like a big deal to humans because we have a highly developed brain, but how do we train a computer or series of computers to perform these tasks without failure? The answer has many facets that we will explore, including how modern technologies such as cameras, FPGAs, lidar and artificial intelligence are used, and what "autonomous" really means. We will begin the exploration in the next post by discussing the five levels of autonomy as defined by the Society of Automotive Engineers (SAE).
<urn:uuid:dff76987-0500-4b48-9403-b93d4efa2d88>
CC-MAIN-2022-40
https://accoladetechnology.com/autonomous-vehicles/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00263.warc.gz
en
0.960518
238
3.125
3
Every business faces risk as long as it has something of value. The more valuable the assets of the company are, the more risk it faces. Data value increases when the amount of information in a database grows and the data can be harvested more effectively. Data should be protected or secured at a reasonable cost that is a fraction of the value of the data.

The cost of attacking a corporation’s data assets usually decreases as technology improves. To attack and exploit a company’s data center or reach a certain asset, an attacker must make a given investment to gain access to the data and benefit from it. The cost of certain attacks may be very low, and the enterprise needs to guard against these attacks. If the cost of an attack becomes less than the value of the data, then the security for that asset should be upgraded to deter the attacker. Unfortunately, most attackers do not do a cost-benefit analysis on the victim before attacking. Many low-cost methods of attack, like kiddie scripts (attack modes that are obtainable for free on the Internet), are done for kicks. Attackers may not benefit from the attack, but the attack may hurt the owner of the data. Enterprises need to fight against all types of attacks that threaten their assets or their ability to do business.

A general definition of risk will help show how threats are a factor in determining risk. Risk due to security attacks is the product of the threat, times the vulnerability to the threat, times the value of the asset. Since companies want to increase the value of their assets and cannot stop all threats, they must decrease their vulnerability to a given attack. To find the total risk that a company faces, the company must inventory its data assets. With each asset tallied, the company can estimate the probability of the threats to each asset and the vulnerability to each threat in terms of a probability. The total risk will be the sum of the risks to each asset in terms of dollars. To justify a security upgrade, the company may evaluate the reduction of risk due to that upgrade. Dividing the reduction in risk by the cost of the security upgrade gives the return on security investment (ROSI). This analysis gives the user an estimate of how much countermeasures lower risk. Reduction of risk makes the enterprise safer than if the threats are ignored. Enterprises can choose to install countermeasures before the attack or deal with the consequences after an attack.

Risk always starts with a threat. Threats can be broken up into three basic levels. The first level of threats is unintentional and due to accidents or mistakes. While not intentional, these threats are common and can cause downtime and loss of revenue. The second level of threats is a simple malicious attack that uses existing equipment and possibly some easily obtainable information. These attacks are less common but are intentional in nature and are usually from internal sources. The third level of threat is the large-scale attack that requires an uncommon level of sophistication and equipment to execute. A third-level attack is usually from an outside source and requires access, either physical or virtual. Third-level attacks are extremely rare in SANs today and may take considerable knowledge and skill to execute. Table 1 summarizes the three levels of threat:

Level 1 – Unintentional: accidents or mistakes; common; can cause downtime and loss of revenue.
Level 2 – Simple malicious attack: intentional; uses existing equipment and easily obtainable information; usually from internal sources.
Level 3 – Large-scale attack: requires uncommon sophistication and equipment; usually from an external source; extremely rare in SANs today.

Level 1 attacks are unintentional and are usually the result of common mistakes. A classic example of a Level 1 attack is connecting a device to the wrong port.
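(A brief aside before continuing with Level 1 attacks: the risk and ROSI arithmetic described above can be sketched in a few lines of Python. The asset names, probabilities, and dollar figures below are invented purely for illustration.)

```python
# Hypothetical numbers, for illustration only.
# Risk = threat probability x vulnerability x asset value (in dollars per year).
assets = [
    # (asset, threat probability, vulnerability, asset value in $)
    ("customer database", 0.30, 0.40, 2_000_000),
    ("backup volumes",    0.10, 0.60,   500_000),
]

def total_risk(assets):
    return sum(threat * vuln * value for _, threat, vuln, value in assets)

risk_before = total_risk(assets)          # $270,000 per year

# Suppose a $50,000 upgrade (e.g., fabric authentication) halves both vulnerabilities.
mitigated = [(name, t, v * 0.5, val) for name, t, v, val in assets]
risk_after = total_risk(mitigated)        # $135,000 per year

upgrade_cost = 50_000
rosi = (risk_before - risk_after) / upgrade_cost
print(f"Risk before: ${risk_before:,.0f}, after: ${risk_after:,.0f}, ROSI: {rosi:.1f}x")
```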
While unintentional, a miscabling could allow a device to have unauthorized access to data or cause a disk drive to be improperly formatted. The incorrect connection could even join two fabrics, enabling hundreds of ports to be accidentally accessed. The unfortunate aspect of this attack is that it can be executed with little skill or thought. Fortunately, Level 1 threats are the easiest to prevent.

Level 2 threats are distinguished by the fact that someone maliciously tries to steal data or cause disruption of service. The variety of Level 2 attacks increases because the intruder (anyone initiating the attack) is attempting to circumvent barriers. An intruder impersonating an authorized user would be a common Level 2 attack. To prevent a Level 2 threat, the SAN will need to add processes and technology to foil the attack.

Level 3 threats are the most troublesome. These are large-scale offensives that are usually perpetrated by an external source with expensive equipment and sophistication. An example of this attack would be installing a Fibre Channel analyzer that monitors traffic on a link. Equipment to crack authentication secrets or encrypted data would be another example of a Level 3 attack. These cloak-and-dagger attacks are difficult to accomplish and require uncommon knowledge and a serious commitment to perpetrate. Level 3 attacks are rare and complex and are beyond the scope of this white paper.

The three levels of attack are helpful in categorizing threats, but an in-depth analysis is required to address each threat. The next section enables a systematic approach to dealing with individual threats.

Administrator’s Perspective – Storage Network Points of Attack
Threats to storage networks come from many places. Each point of attack may be used as a stepping-stone for later attacks. To provide high levels of security, several checkpoints should be placed between the intruder and the data. The various points of attack are helpful in identifying security methods to thwart different attacks. Similar to how castles have several defense mechanisms to defend against invaders, the enterprise should install many barriers to prevent attacks. The point of attack helps the discussion of individual threats. The threats that will be discussed in this paper include:
– Unauthorized access
– Spoofing
– Sniffing (theft of data in flight)

Unauthorized access is the most common security threat because it can run the gamut of Levels 1 to 3 threats. An unauthorized access may be as simple as plugging in the wrong cable or as complex as attaching a compromised server to the fabric. Unauthorized access leads to other forms of attack, and is a good place to start the discussion of threats. Access can be controlled at the following points of attack:

1. Out-of-Band Management Application – Switches have non-Fibre Channel ports, such as an Ethernet port and Serial Port, for management purposes. Physical access to the Ethernet port may be limited by creating a private network to manage the SAN that is separate from a company’s Intranet. If the switch is connected to the company Intranet, firewalls and Virtual Private Networks can restrict access to the Ethernet port. Access to the Serial Port (RS-232) can be restricted by limiting physical access and having user authorization and authentication. After physical access is obtained to the Ethernet port, the switch can control the applications that can access it with access control lists.
The switch may also limit the applications or individual users that can access it through point of attack 3.

2. In-band Management Application – Another exposure that a switch faces is through an in-band management application. The in-band management application will access the fabric services, such as the Name Server and Fabric Configuration Server. Access to the fabric services is controlled by the Management ACL (MACL).

3. User to Application – Once a user has physical access to a management application, they will have to log into the application. The management application can authorize the user for role-based access depending on their job function. The management application will need to support access control lists and the roles for each user.

4. Device to Device – After two Nx_Ports are logged into the fabric, one Nx_Port can do a Port Login (PLOGI) to another Nx_Port. Zoning and LUN masking can limit the access of devices at this point. The Active Zone Set in each switch will enforce the zoning restrictions in the fabric. Storage devices maintain the LUN masking information.

5. Devices to Fabric – When a device (Nx_Port) attaches to the fabric (Fx_Port), the device sends a Fabric Login (FLOGI) command that contains various parameters like the Port World Wide Name (WWN). The switch can authorize the port to log into the fabric or reject the FLOGI and terminate the connection. The switch will need to maintain an access control list (ACL) for the WWNs that are allowed to attach. The real threat to data will occur after the device is logged into the fabric and can proceed to point of attack 4 or 5.

6. Switch to Switch – When a switch is connected to another switch, an Exchange Link Parameters (ELP) Internal Link Service (ILS) will send relevant information like the Switch WWN. The switch can authorize the other switch to form a larger fabric, or the link can be isolated if the switch is not authorized to join. Each switch will need to maintain an ACL of authorized switches.

7. Data at Rest – Stored data is vulnerable to insider attack, as well as unauthorized access via fabric and host-based attacks. For example, since storage protocols are all cleartext, administrators for storage, backup and hosts have access to stored data in raw format, with no access restrictions or logging. Storage encryption appliances provide a layer of protection for data at rest, and in some cases provide additional application-level authentication and access controls.

Controlling access with access control lists (ACLs) prevents accidents from leading to catastrophes. ACLs will not stop attackers who are willing to lie about their identity. Unfortunately, most thieves don’t have a problem with lying to get what they want. To prevent spoofers (someone who masquerades as another) from infiltrating the network, the entity that is being authorized must also be authenticated.

Spoofing is another threat that is related to unauthorized access. Spoofing has many names and forms and is often called impersonation, identity theft, hijacking, masquerading and WWN spoofing. Spoofing gets these names from attacks at different levels. One form of attack is impersonating a user, and another is masquerading as an authorized WWN. The way to prevent spoofing is by challenging the spoofer to give some unique information that only the authorized user should know. For users, the knowledge that is challenged is a password. For devices, a secret is associated with the WWN of the Nx_Port or switch.
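The challenge-response idea behind CHAP can be sketched in a few lines of Python. This is a simplified illustration using HMAC-SHA-256 from the standard library, not the actual Fibre Channel DH-CHAP exchange, and the WWN and secret below are made up for the example.

```python
import hmac, hashlib, os

# Secrets the switch has registered for each authorized WWN (illustrative values).
registered_secrets = {"10:00:00:05:1e:35:aa:01": b"per-device-shared-secret"}

def issue_challenge():
    """Switch side: generate a random, single-use challenge."""
    return os.urandom(16)

def respond(secret, challenge):
    """Device side: prove knowledge of the secret without ever sending it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(wwn, challenge, response):
    """Switch side: recompute the expected response and compare in constant time."""
    secret = registered_secrets.get(wwn)
    if secret is None:
        return False  # unknown WWN: reject the login
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = respond(b"per-device-shared-secret", challenge)
print(verify("10:00:00:05:1e:35:aa:01", challenge, response))  # True
```

Mutual authentication simply runs the same exchange in both directions.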
Management sessions may also be authenticated to ensure that an intruder is not managing the fabric or device. Spoofing can be checked at the following points of attack:

1. Out-of-Band Management Application – When a management application contacts the switch, the switch may authenticate the entity that is connecting to it. Authentication of the users is addressed in point of attack 6.

2. In-band Management Application – The in-band management application will use Common Transport (CT) Authentication to prevent spoofing of commands to Fabric Services.

3. User to Application – When the user logs into the application, the management application will challenge the user to present a password, secret or badge. The application could authenticate the user with biometric data like fingerprints, retina scans or even DNA samples.

4. Device to Device – After an Nx_Port receives a PLOGI, the Nx_Port can challenge the requesting port to show its credentials. CHAP is the standard Fibre Channel mechanism for authenticating Nx_Ports. The requesting Nx_Port should also challenge the other Nx_Port so that both ports are sure of the authenticity of the other. Two-way authentication is known as mutual authentication.

5. Devices to Fabric – When a device sends a Fabric Login (FLOGI) command, the switch could respond with a CHAP request to authenticate the device. The Nx_Port should respond to the CHAP challenge and challenge the switch as well for mutual authentication.

6. Switch to Switch – When a switch is connected to another switch, both switches should authenticate each other with CHAP.

To authenticate every point, four types of authentication are possible:
1. User Authentication
2. Ethernet CHAP Entity Authentication
3. CT Message Authentication
4. Fibre Channel DH-CHAP Entity Authentication

After entities and users are authorized and authenticated, traffic should be able to flow securely between authorized devices. Data flowing on the link could still be stolen by a sniffer. Sniffers are the final threat investigated here.

Data can be stolen in many ways. One way to steal data is sniffing it while it is in flight. Sniffing is also referred to as wire-tapping and is a form of man-in-the-middle attack. A Fibre Channel analyzer is a good example of a sniffer that can monitor traffic transparently. Sniffing does not affect the operation of the devices on the link if done properly.

A cure for sniffing is encryption. Encryption transforms raw data so that it is unreadable without the correct secret. Without the correct key, the stolen data is worthless. Several encryption methods work, and there are different encryption algorithms for different kinds of traffic. Rather than discussing encryption techniques for each point of attack, it is enough to distinguish in-band and out-of-band traffic: Encapsulating Security Payload (ESP) can encrypt Fibre Channel traffic to ensure confidentiality, and Ethernet traffic can be encrypted with Secure Sockets Layer (SSL) or similar protocols. These techniques can use different levels of encryption to make stolen data worthless.

As SANs have grown in complexity, with many terabytes of data aggregated and replicated in shared systems, customers are increasingly concerned with the security of data at rest. Government regulations around privacy have further increased the importance of protecting stored customer information.
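As an illustration of the encrypt-before-writing principle for data at rest (not a model of any particular appliance), here is a minimal sketch using the Fernet recipe from the Python cryptography package; in practice the hard part is key management rather than the encryption call itself.

```python
from cryptography.fernet import Fernet

# In practice the key lives in a key-management system, never beside the data.
key = Fernet.generate_key()
f = Fernet(key)

record = b"LUN 7, block 42: cardholder data"   # illustrative payload
ciphertext = f.encrypt(record)                  # what actually lands on the array

# Without the key, the ciphertext is worthless to a sniffer or an insider.
assert f.decrypt(ciphertext) == record
```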
McDATA has developed integrated solutions with partners to provide transparent, wire-speed encryption for data at rest. These appliances use hardware-based encryption and key management to lock down data at rest, and enforce overall fabric security and access controls. These McDATA-certified solutions have been deployed by government and enterprise customers with minimal impact to application performance or management overhead. McDATA’s experienced consultants can work with your team to ensure a seamless integration that addresses your unique security requirements.

The most common threats are mistakes made by users. Various access control lists can limit the risk due to many forms of errors. Access control lists only stop users that do not spoof an authorized device. To prevent spoofing, authentication services are required to catch a lying intruder. If the intruder still manages to obtain physical access to the infrastructure and attaches a sniffer onto a link to steal data, encryption can render the stolen data worthless. These three common threats and solutions help IT organizations manage the risk associated with various attacks.

Referring back to the levels of attack: access control lists prevent Level 1 attacks by stopping a miscabling from proceeding past the initialization stage. If an intruder is trying to use an application under an authorized user’s name, the Level 2 attack will be stymied by authentication. If the intruder installs a wiretap in a Level 3 attack, encryption can spoil the intruder’s ill-gotten goods. The three levels of attack require different types of defensive maneuvers. Each threat must be dealt with individually.
<urn:uuid:12aa85ad-49f4-4c3f-8e1b-4ce5e6c35e12>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2005/07/11/risks-and-threats-to-storage-area-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00463.warc.gz
en
0.931112
3,152
3.03125
3
"The Power of Ten" refers to a set of ten rules developed by Gerard Holzmann of the NASA Jet Propulsion Laboratory for use in writing safety-critical software. The rules are simple, but they specify strict limits on the forms code can take. These limits support code clarity and analyzability, which are especially important for safety-critical applications. In addition see the Power of Ten website for more information. The rules and rationales are described in a 2006 paper: Gerard J. Holzmann, "The Power of 10: Rules for Developing Safety-Critical Code,"Computer, 39(6), pp. 95-97, June 2006. Relevant Warning Classes The following PDF shows the CodeSonar warning classes that are associated with Power of Ten rules. The tenth rule, POW10:10, is associated with some checks and also requires the use of a static analysis tool such as CodeSonar. These warning classes are matched with CodeSonar 7.1.
<urn:uuid:84c5cf7f-2431-4933-961a-0a6a79069db3>
CC-MAIN-2022-40
https://support.grammatech.com/hc/en-us/articles/4575245897361
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00463.warc.gz
en
0.922622
204
3.21875
3
Prerequisites:
- Basic programming knowledge
- A computer that can run a Windows virtual machine
- An interest in disassembling things and understanding how they work!
- Patience and perseverance to “try harder”

The aim of this course is to provide a practical approach to analyzing ransomware. Working with real-world samples of increasing difficulty, we will:
- Deep dive into identifying the encryption techniques,
- Navigate through various evasion tricks used by malware writers,
- Have fun discovering flaws in their logic or the implementation, and
- Work out automated ways to recover the affected files.

If you’re already familiar with the basics and want to dive straight into advanced samples, navigate anti-virtualization and anti-analysis tricks, and write C and Python decryptors for custom crypto algorithms, please check out our Advanced Reverse Engineering Ransomware course!
<urn:uuid:1b8bbfa8-cd58-4fd8-beb1-96db21b3bf26>
CC-MAIN-2022-40
https://cybermaterial.com/reverse-engineering-ransomware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00463.warc.gz
en
0.861758
179
3.171875
3
Imagine a world where wireless devices are as small as a grain of salt. These miniaturised devices have sensors, cameras and communication mechanisms to transmit the data they collect back to a base for processing. Today, you no longer have to imagine it: microelectromechanical systems (MEMS), often called motes, are real and they very well could be coming to a neighbourhood near you. Whether this fact excites you or strikes fear in you, it’s good to know what it’s all about.

What can smart dust do?
Outfitted with miniature sensors, MEMS can detect everything from light to vibrations to temperature. With an incredible amount of power packed into a small size, MEMS combine sensing, an autonomous power supply, computing and wireless communication in a space that is typically only a few millimetres in volume. With such a small size, these devices can stay suspended in an environment just like a particle of dust. They can:
- Collect data including acceleration, stress, pressure, humidity, sound and more from sensors
- Process the data with what amounts to an onboard computer system
- Store the data in memory
- Wirelessly communicate the data to the cloud, a base or other MEMS

3D printing on the microscale
Since the components that make up these devices are 3D printed as one piece on a commercially available 3D printer, an incredible amount of complexity can be handled, and some manufacturing barriers that previously restricted how small things could be made have been overcome. The optical lenses created for these miniaturised sensors can achieve the finest quality images.

Practical applications of smart dust
The potential of smart dust to collect information about any environment in incredible detail could affect many things in a variety of industries, from safety to compliance to productivity. It’s like multiplying the Internet of Things technology millions or billions of times over. Here are just some of the ways it might be used:
- Monitor crops on an unprecedented scale to determine watering, fertilisation and pest-control needs.
- Monitor equipment to facilitate more timely maintenance.
- Identify weaknesses and corrosion prior to a system failure.
- Enable wireless monitoring of people and products for security purposes.
- Measure anything that can be measured nearly anywhere.
- Enhance inventory control with MEMS to track products from manufacturing facility shelves to boxes to pallets to shipping vessels to trucks to retail shelves.
- Possible applications for the healthcare industry are immense, from diagnostic procedures without surgery to monitoring devices that help people with disabilities interact with tools that help them live independently.
- Researchers at UC Berkeley published a paper about the potential for neural dust, an implantable system to be sprinkled on the human brain, to provide feedback about brain functionality.

Disadvantages of smart dust
There are still plenty of concerns with wide-scale adoption of smart dust that need to be sorted out. Here are a few disadvantages of smart dust:
Many who have reservations about the real-world implications of smart dust are concerned about privacy issues. Since smart dust devices are miniature sensors, they can record anything that they are programmed to record. Since they are so small, they are difficult to detect. Your imagination can run wild regarding the negative privacy implications if smart dust falls into the wrong hands.
Once billions of smart dust devices are deployed over an area, it would be difficult to retrieve or capture them if necessary. Given how small they are, it would be challenging to detect them if you weren’t made aware of their presence. The volume of smart dust that could be deployed by a rogue individual, company or government to do harm would make it challenging for the authorities to control if necessary.

As with any new technology, the cost to implement a smart dust system that includes the satellites and other elements required for full implementation is high. Until costs come down, it will be technology out of reach for many.

What should you do to prepare?
The entities who have led the development of smart dust technology since 1992, and large corporations such as General Electric, Cargill, IBM, Cisco Systems and more who have invested in research for smart dust and viable applications, believe this technology will be disruptive to economies and our world. At the moment, many of the applications for smart dust are still in the concept stage. In fact, Gartner listed smart dust technology for the first time in its Hype Cycle in 2016. While the technology has forward momentum, there’s still quite a bit to resolve before you will see it impacting your organisation. However, it’s important to pay attention to its trajectory of growth, because it’s no longer the fodder of science fiction. We might not know when it will progress to the point of wide-scale adoption, but we certainly know it’s a question of when rather than if.
<urn:uuid:e2fdec58-3c8e-44b0-82ad-11aa8ee22a13>
CC-MAIN-2022-40
https://bernardmarr.com/smart-dust-is-coming-are-you-ready/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00463.warc.gz
en
0.933336
998
3.46875
3
Older WiFi standards and technologies were designed for performing simple tasks that require connectivity. Now there is a new standard in WiFi technology which provides service providers and organizations with more advanced capabilities and performance.

Not too long ago, WiFi connectivity and equipment were designed to provide a cost-effective and convenient way to connect to the Internet. This type of WiFi connectivity was typically designed for home environments and areas with low interference. Although this represented a significant breakthrough which paved the way to new technologies, the older WiFi equipment does not address the current needs of individuals and businesses, which include flexibility, scalability, and support for a large number of simultaneous users. The older WiFi technologies which were once considered the latest innovation are now inadequate for addressing the connectivity challenges associated with new demands.

The latest WiFi technologies have proved successful in meeting some of the connectivity challenges businesses and individuals face when it comes to increasing productivity and offering enhanced customer service solutions that meet the demands of the mobile consumer. New wireless technologies and network configurations enable businesses of all sizes to increase productivity without being held back by the shortcomings of older technologies.

Why are New WiFi Technologies Necessary?
If you have ever been in an environment where you experienced a weak or intermittent WiFi connection, the usual explanation is building structures, metal objects, walls, and other obstructions which interfere with wireless connectivity. Although this is true when it comes to older WiFi technologies, there are other factors which come into play. Many older wireless technologies fail to provide the signal range required to operate a large organization or deliver long-range connectivity in outdoor areas. Typically with older technologies the range is anywhere from 50 to 100 feet, which is hardly adequate for today’s connectivity demands. Additionally, the internal architecture of building structures often obstructs the wireless connection. This is especially true in the case of a Local Area Network (LAN) connection, which can result in an intermittent or unreliable wireless connection as a result of building structure. This challenge is often solved using wireless solutions designed for use in both indoor and outdoor environments, which can leave an organization susceptible to security issues.

By implementing the latest wireless technologies, organizations of all sizes and industries can improve internal productivity while providing their customer base with access to a secure wireless connection whenever it is warranted for a specific industry. This is what motivates businesses to implement the latest wireless solutions, such as Ruckus, which address these demands and challenges.

Overview: Who is Ruckus and What Type of Technology Do They Provide?
Ruckus is a wireless solutions provider which has existed since 2004 and is considered one of the pioneers in wireless technology. Recently, the company has taken significant strides to make wireless connectivity better and smarter to meet the ever-growing challenge of voice, video, and data that can be accessed from anywhere and at any time without the hassles associated with older technologies.
Ruckus technologies ensure faster and more reliable wireless access, thanks to improvements in signal-to-noise ratios, more dBs (decibels) out of a WiFi connection, and better control over polarization of transmissions. There is so much wireless noise in the air, with the increased deployment of wireless hotspots, that it is difficult for devices to communicate. As a result of years of testing, Ruckus devised a way to shoot wireless signals only to locations where they are needed. The patented technology predicts the high-capacity channels it should use at any given time, determines methods for getting a high volume of traffic on and off the network simultaneously, and implements solutions to circumvent all the noise. This way packets do not get dropped and the end user accesses a consistent, high-performance network.

Ruckus came up with an exclusive patented technology known as Smart WiFi which delivers high performance in heavily loaded environments such as stadiums, convention centers, airports, and outdoor public areas. This is carrier-class WiFi that can adapt to interference, integrate easily with existing cellular networks, and scale better. Smart WiFi also offers higher capacity with fewer access points, stronger signals, and super-fast WiFi connections at longer distances, all for a lot less than you would pay other wireless solution companies.

Advantages: Why Choose Ruckus Over Other Wireless Manufacturers?
The main reason why many businesses are opting to use Ruckus technologies is the patented Smart WiFi mentioned above. Some of the specialized Ruckus technologies such as Zoneflex, FlexMaster, SmartCell and ZoneDirector offer consistent and reliable wireless solutions which are capable of accommodating individual business requirements. Compared to traditional wireless solutions, Ruckus technologies offer targeted wireless connectivity which integrates WLAN with enhanced security and uninterrupted connections.

What Are the Different Types of Ruckus Technologies?
To help you understand the advantages better, let’s take a look at the various technologies included in the patented Ruckus Smart WiFi.

Zoneflex utilizes the patented Ruckus BeamFlex technology, which delivers customized wireless solutions that provide an extended service range with targeted signaling that adapts to the surroundings and the environment. This means that the wireless connection is never compromised by building structure, a large number of concurrent users, or the weather. This type of technology is not offered by any other wireless solutions provider. Zoneflex works through the consolidation of both Local Area Network (LAN) and Wireless Local Area Network (WLAN) technologies. The controllers included in Zoneflex technology function on a path separate from the data path, which ensures consistent and reliable service. This means the controller is out of the path of a wireless LAN client and only facilitates management traffic. This is advantageous for both the indoor internal productivity setting and outdoor environments. The outdoor Zoneflex wireless solutions offer access to high-performance 802.11n connectivity with more than 150 Mbps between mesh nodes. This extends the connection range to more than 900 feet, as opposed to conventional connections of approximately 100 feet.
This enables organizations and their customer base to access a high-performance WiFi connection from just about anywhere, indoors or out. Zoneflex technology is ideal for businesses which require consistent and reliable connectivity both indoors and outdoors, in industries such as restaurants, hotels with outdoor facilities, stadiums, convention centers, and similar venues.

The Ruckus patented SmartCell technology is designed to provide SMEs and large enterprises with improved gateways and access points. Additionally, the SmartCell technology offers a product known as Insight which provides the enhanced protection required for larger organizations. SmartCell gateways provide Wireless Local Area Network (WLAN) controllers which are flexible and scalable and offer 3GPP compatibility for enhanced operability. The management system integrates all wireless components by combining a mobile packet core with fully scalable WLAN controllers that are 3GPP (3rd Generation Partnership Project) compatible for optimal functionality. This helps to eliminate the problems associated with conventional wireless solutions, since the access point is integrated with small components designed for high-density utilization. It also solves the challenges associated with urban areas by offering the capability to accommodate 3G, LTE, and WiFi networks simultaneously for consistent and reliable performance. This type of Ruckus technology is designed for configuration in outdoor environments utilized by businesses in the retail, hospitality, and restaurant sectors. It is equipped with a hardened enclosure to stand up to harsh weather, is mountable on walls, poles, or ceilings, and has a BeamFlex adaptable antenna width of 120 degrees. The BeamFlex technology allows the smart antenna to automatically rank antenna configurations and then reconfigure itself to the best-performing one.

ZoneDirector is a Ruckus technology which works with Ruckus Smart/OS to make configuration and administration of a Wireless Local Area Network (WLAN) easy. ZoneDirector is a high-end Smart WLAN controller which runs with Smart/OS using a web interface. Under typical circumstances, wireless LAN controllers can require years of training and acquired experience to implement and maintain. However, this is not the case with a ZoneDirector controller, since it delivers all of the necessary features, such as adaptive wireless meshing, advanced user access controls, flexible WLAN groups, extensive authentication support, and other essential features, in one centralized management interface accessed via the Web. These features are typically only offered as costly add-ons in other centrally managed systems. ZoneDirector delivers all the essentials and is offered fully equipped as a package deal while providing advanced security enhancements in one centralized system. It is also easy to set up and configure and removes all of the hassles associated with conventional methods of WLAN configuration and administration. ZoneDirector patented LAN controllers are used with Ruckus Smart/OS by a large number of businesses and organizations covering a variety of industries, including sports and convention center venues, retail and hospitality, healthcare, transportation and warehousing, and telecommunications organizations.
Ruckus FlexMaster technology provides a centralized WiFi management system which allows a WiFi network to be securely controlled and monitored from any location around the globe. FlexMaster works with any Internet connection, LAN, or private IP network. The technology operates on a Linux-based platform which acts as a managed service to perform configurations. FlexMaster technology also allows for easy auditing, fault detection, optimization, and performance management for thousands of Smart WiFi access points and Smart WiFi LANs/WLANs, all from a centralized platform. The management system is a scalable element management system designed to manage standalone access points as well as ZoneDirector-controlled environments. The technology offers carrier-class WiFi management and functionality for large-scale rollouts capable of handling thousands of devices and users, advanced troubleshooting, and comprehensive reporting for compliance. It also offers end-to-end visibility, which allows you to see from the CPE (Customer Premise Equipment) to the access points and ZoneDirector controllers. FlexMaster is a software-based application which is licensed to scale to the needs of your access points and devices. Once it is loaded it is very simple to configure. Many organizations involved with the telecommunications industry, such as Time Warner and other multi-service operators, are ideal candidates for FlexMaster technology.

Where Can You Obtain Ruckus Wireless Technologies?
If you are a business user then I would not advise setting up Ruckus equipment in an office environment unless you have network and wireless network experience. There are many third-party vendors out there that can help you, Our IT Department being a prime example – http://www.ouritdept.co.uk/ruckus-wireless-solutions/

Since Ruckus has been around for close to a decade, you can learn more about their technologies by accessing the wealth of information offered on the official Ruckus Wireless website. The technology has been introduced in some countries by telecommunications providers which offer Ruckus technologies to home users for Internet and cable connectivity. In this instance, the configuration and setup are taken care of by the telecommunications provider using the interface technology we described earlier. In the case of Ruckus Wireless implementation for organizations and large businesses, the technology should be installed by a Ruckus Wireless partner that has achieved qualifications through the Ruckus Big Dog Partner program. This requires the Ruckus partner to complete an application, purchase a Ruckus kit, and undergo a series of trainings that award status as a Ruckus Authorized Partner. This means you should make certain you are working with a Ruckus Authorized Partner before implementing any of the Ruckus technologies we described in this article. To learn more about Ruckus technologies, feel free to visit the Ruckus Wireless website.

Author: Sue is an IT expert with over 15 years’ extensive experience writing technology articles and blog entries. Sue has an interest in Apple technology and network security.
<urn:uuid:503fb8b1-eab2-4c63-8a8c-c8662ed77f6b>
CC-MAIN-2022-40
https://computerbooksonline.com/n2tech/what-is-so-good-about-ruckus-wireless-technology-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00463.warc.gz
en
0.945007
2,402
2.578125
3
Most businesses and government organisations are now aware that cybersecurity is not merely the responsibility of IT. They recognise that everyone is accountable for protecting systems, people and information from attack. They also know that many attacks occur from within rather than from external parties. So how can they make cybersecurity part of their business culture?

Education is key. An education program should complement and explain robust security policies that detail the assets a business or organisation needs to protect, the threats to those assets and the rules and controls for protecting them.

An effective program makes every worker acutely aware of cyber threats, including emails or text messages designed to trick them into providing personal or financial information, entice them to click links to websites or open attachments containing malware, or deceive them into paying fake invoices that purport to be from a senior executive. It teaches them how to recognise common threats, the actions they need to take and the people they need to inform when targeted, and the steps to take if they do fall victim to a malicious individual or software. In addition, the program should teach workers how to recognise and respond to poor – or suspicious – cybersecurity behaviour by a colleague.

Cybersecurity education also needs to extend to a business or government organisation’s senior leadership team, who should visibly support its objectives and model appropriate behaviours. It should also encourage workers and managers to pass on lessons learned to friends and family to help them avoid being compromised by malicious cyber activities.

Perhaps most importantly, it is not good enough to run a cybersecurity education program once and consider it a box ticked. A business or government organisation should run programs regularly and update them as needed to account for changes in policies and the threat landscape. It should also provide ongoing information and direct people to resources such as the Australian Cyber Security Centre for assistance.

Cybersecurity policies and education programs also need to complement the effective use of proven, regularly updated security products to protect systems, people and information from cyber threats.

For more information, contact us at: [email protected]
<urn:uuid:a49ac80b-516e-47ff-a4bb-07bfe0d2ada3>
CC-MAIN-2022-40
https://firstwave.com/blogs/how-to-make-cybersecurity-part-of-your-business-culture/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00463.warc.gz
en
0.956987
432
2.796875
3
You can assign an aggregate function to columns that contain numeric and non-numeric values in your report. Aggregate functions return a single value (for example, Average, Maximum, Minimum) calculated from the values in a column. For example, the sum of a column results from adding all the values in the column.
- Click the down arrow next to a report column that contains numeric values.
- Select Aggregation from the menu, then choose the aggregation type:
  None: No aggregate function assigned
  Average: Calculates the average value in a given column
  Count: Counts the items in a column; does not require a numeric value
  Count Distinct: Counts the distinct occurrences of a certain value in a column; does not require a numeric value
  Max: Identifies the highest or largest value in a column
  Min: Identifies the lowest or smallest value in a column
  Sum: Calculates a running total sum of the specified column
  The values in the column update.
- Save the report.
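The aggregation types listed above have direct analogues outside the report designer. As an illustration only (plain pandas in Python, not the reporting tool's own engine; the column names and values are made up), here is how each function maps onto a data frame column:

```python
import pandas as pd

# Hypothetical report data: one row per order line.
df = pd.DataFrame({
    "region": ["East", "East", "West", "West", "West"],
    "amount": [120.0, 80.0, 200.0, 200.0, 50.0],
})

# The aggregation types from the list above, applied to the "amount" column.
print(df["amount"].mean())       # Average
print(df["amount"].count())      # Count
print(df["amount"].nunique())    # Count Distinct
print(df["amount"].max())        # Max
print(df["amount"].min())        # Min
print(df["amount"].sum())        # Sum

# Grouped by another report column, as a report with row groups would show it.
print(df.groupby("region")["amount"].agg(["mean", "count", "nunique", "max", "min", "sum"]))
```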
<urn:uuid:b274bf4a-682a-4aee-a519-8282d3da88d0>
CC-MAIN-2022-40
https://help.hitachivantara.com/Documentation/Pentaho/5.2/0L0/120/020/010/020
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00463.warc.gz
en
0.698646
198
2.78125
3
A blockchain is a database that stores data in the form of chained blocks. A blockchain can be used in a centralized or decentralized manner. Decentralized blockchains are not owned by a single entity – rather, all users have collective control. Take Bitcoin, for example: Bitcoin’s network consists of thousands of computers, called nodes, which are operated by individuals or groups of people in different geographic locations. Decentralized blockchains are immutable, which essentially means that data transactions are irreversible.

Blockchain technology’s decentralized nature makes it ideal for cybersecurity. Blockchain technology automates data storage, provides data integrity and is transparent. Let’s explore the potential use cases of blockchain technology for cybersecurity.

Recent Cybersecurity Statistics and Trends
With each coming day, the amount of data that we are generating is increasing exponentially. As we continue to develop sophisticated technologies to ensure security, hackers are devising new techniques and adopting more capable technology to execute cybercrime. All of this has culminated in some damning cybersecurity statistics and trends:
- 95% of data breaches are caused by human error.
- In the first half of 2020, data breaches exposed over 35 billion records.
- Of recorded data breaches, 45% featured hacking, 22% involved phishing and 17% involved malware.
- Over 90% of malware is delivered by email.
- Over 200,000 malware samples are produced daily. This number is expected to rise with time.
- It took 34% of businesses hit with malware a week or more to regain data access.
- Cybercrime-related damage is projected to cost $6 trillion annually in 2021.
- Ransomware costs businesses over $75 billion annually.
- Fake invoices are the leading disguise for malware distribution.
- 50% of global data breaches will occur in the United States by 2023.

The Future of Cyber Attacks
With the rollout of 5G networks, download speeds will increase substantially, in turn creating more opportunities for hackers to exploit security weaknesses. Faster download speeds will encourage larger-scale cybercrime as well. The number of globally connected Internet of Things (IoT) devices is projected to reach 13.8 billion in 2021. As there is a huge commercial appetite for IoT, enterprises are coming up with a range of applications, from wearables to smart homes. Patchy security features could be exposed by miscreants. This is where blockchain technology comes into the picture.

Blockchain’s Role in Cybersecurity
Here is how blockchain technology can strengthen cybersecurity:
- The number of social media platforms that we use is on the rise and most are protected by weak and unreliable passwords. Large quantities of metadata are collected during social media interactions, and hackers can create havoc if they gain access to this data.
- Blockchain technology can be used to develop a standard security protocol, as it is a sounder alternative to end-to-end encryption. It can also be used to secure private messaging by forming a unified API framework to enable cross-messenger communication capabilities.
- Through edge devices, hackers have been able to gain access to overall systems in the past. With the current craze for home automation systems, hackers can gain access to smart homes through edge devices like smart switches, if these IoT devices have dodgy security features.
- Blockchain technology can be used to secure such systems or individual devices by decentralizing their administration.
- By decentralizing Domain Name System (DNS) entries, blockchain technology can help prevent Distributed Denial of Service (DDoS) attacks.
- With increasingly large quantities of data generated each day, storing data in a centralized manner leaves it exposed, because a single vulnerable point can be exploited by a hacker. Storing data in a decentralized form using blockchain makes it far harder for miscreants to compromise data storage systems.
- Blockchain technology can be used to verify software artifacts such as patches, installers, and firmware updates.
- Blockchain technology can be used to protect data from unauthorized access while it is in transit, by utilizing encryption.

The inherently decentralized nature of blockchain technology has several applications, and cybersecurity is one of the most promising. Data on blockchains is very difficult to tamper with, as network nodes automatically cross-reference each other and pinpoint any node presenting misrepresented information. Blockchain technology provides high standards of data transparency and integrity, and because it automates data storage, it removes much of the human error that is the leading cause of data breaches. Cybercrime is one of the greatest threats to enterprises, and blockchain technology could go a long way in fighting it.
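To make the tamper-detection claim concrete, here is a minimal, hypothetical Python sketch of hash chaining, the mechanism that lets nodes spot a modified block. It is illustrative only: there is no networking, consensus, or real transaction format, and the block fields are invented for the example.

    import hashlib
    import json

    def block_hash(index, data, prev_hash):
        # Hash the block's contents together with the previous block's hash,
        # so changing any earlier block invalidates every block after it.
        payload = json.dumps({"index": index, "data": data, "prev": prev_hash},
                             sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    def build_chain(records):
        chain, prev = [], "0" * 64          # genesis placeholder
        for i, data in enumerate(records):
            h = block_hash(i, data, prev)
            chain.append({"index": i, "data": data, "prev": prev, "hash": h})
            prev = h
        return chain

    def verify_chain(chain):
        prev = "0" * 64
        for block in chain:
            expected = block_hash(block["index"], block["data"], block["prev"])
            if block["prev"] != prev or block["hash"] != expected:
                return False, block["index"]   # points at the first bad block
            prev = block["hash"]
        return True, None

    chain = build_chain(["tx: A pays B 5", "tx: B pays C 2"])
    chain[0]["data"] = "tx: A pays B 500"       # simulate tampering
    print(verify_chain(chain))                  # (False, 0)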
<urn:uuid:ee287222-8f43-472d-a6c2-a6ccd52cbb0f>
CC-MAIN-2022-40
https://www.itbusinessedge.com/security/potential-use-cases-of-blockchain-technology-for-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00663.warc.gz
en
0.904766
942
3.140625
3
The Covid-19 pandemic has made it almost impossible to conduct business through physical contact. People are staying home and going online to work and meet, which has put pressure on data centers and communications infrastructure. It is vital that data centers are resilient, but also sustainable, and there are two opposing forces at work: as demand grows, data centers expand and their energy use increases, yet at the same time there is public and government pressure to become carbon neutral.

The pandemic accelerated digital transformation; progress that would normally take years happened in only a few months, moving us closer to an all-digital world. Expect more human activity, both professional and personal, to be digitized. To meet the increasing demand, data centers must be adaptable and resilient, but also sustainable and efficient. The usual response to resilience requirements is to add redundancy, achieving greater resilience at the expense of efficiency and sustainability. Because that approach requires increased power consumption, it is no longer workable. Designers can instead use a portable, scalable approach to handle megawatt growth and avoid over-provisioning. Data center construction and operation plans also need to include resource lifecycle optimization, transitioning from disposable, single-use designs to reusing the available resources.

Solving the resilience and sustainability paradox together

In designing the new data center, the industry must address these challenges:
● Sustainable – meeting business needs without compromising our shared future
● Efficient – optimizing cost, speed, and capital to increase return on investment
● Adaptive – future-ready designs that accommodate new technologies
● Resilient – reduced vulnerability to unplanned downtime

Designing and Building the Next Generation of Sustainable Data Centers

Data centers consume large amounts of resources and have an undesirable impact on the environment. To reduce that impact, the design, build, and operation of data centers should reduce their carbon footprint. This must go beyond the usual goals of minimizing the use of land and water; it should focus on ways to cut emissions and waste. The conventional data center is designed around cheap extraction: it uses and then discards resources, often wasting value in the process. The long-term results of this mass consumption of natural resources include regular idleness held as standby capacity, an intense carbon footprint, and harmful emissions generated by the mass consumption of fossil fuels.

Rethink clean energy. Plan for clean energy in every data center facility. Renewable energy sources should be utilized on site or near-site for consumption.

Rethink and Make a Sustainable Change

Improved sustainability within the data center industry is gaining momentum, and data center managers have promised clean energy and increased renewable energy procurement. The data center industry has grown at roughly 6% per year globally over the last twenty years, making data centers a major consumer of global resources, and this trend will likely continue and even accelerate. The sustainable data center must be designed around Sustainable Cost of Ownership (SCO) and must avoid the overprovisioning that redundancy brings. One example of overprovisioning is diesel generators used as an auxiliary power source: they add cost, require more space, and give off emissions.
They are only needed when the power grid fails.

Because of this rapid expansion, sustainable strategies must be modular. A modular approach makes building a data center faster, designing each module to be sustainable is easier than doing so for a whole building, and as computing demand grows, new modules can be deployed for rapid scaling. Cooling, which uses water as its main coolant, is the second largest use of power in data centers around the world, so data centers need to reduce the resources that cooling consumes to improve sustainability.

Modular data centers under construction

Responsible Power Generation and Consumption. Today’s data centers rely on redundant, pollution-causing auxiliary power systems to ride through power outages, because electrical grids are not reliable enough to meet 99.999% uptime. The industry must move towards renewable, sustainable onsite power generation.

Minimum Waste and Landfill Impact. Data centers generally do not monitor the waste produced by construction and operation. Tracking all waste produced by a site over its entire lifetime should be considered, and waste should be minimized through re-use and recycling.

Direct Clean Power. Locate data centers close to reliable and renewable power sources, such as hydroelectric power plants, and shift to renewable sources so data centers use clean power and reduce their carbon footprint.

High-density and scalable IT modules. The biggest challenge for next-generation data centers is being both resilient and powered by clean energy. This is a particular obstacle for the thousands of existing data centers, many of which currently have no access to available, affordable clean power.

Fuel cells offer a good alternative, and powering data centers with them is feasible today. It can also save money compared to the traditional approach of purchasing power from the grid. One design technique is Rack Level Fuel Cells (RLFC), in which power is provided onsite by fuel cells located at the rack level. When supplied with renewable natural gas (RNG) as a fuel source, fuel cells become a reliable power choice that is environmentally friendly.

Microsoft fuel cell powered racks

Fuel cells allow the data center to be decoupled from the cost of the electric grid; they are reliable and cut carbon footprints. RLFC can deliver 99.999% resiliency as a power source, eliminating the need for backup generators, and with RNG there is no need to pursue power purchase agreements. RLFC data centers can therefore control responsible power generation and consumption. Data centers normally purchase energy to accommodate absolute peak power consumption, and that peak drives up the cost of power even though normal demand is much lower. With RLFC, power is produced at the IT rack, which gives more than a 50% improvement in generation efficiency, thanks to modern fuel cell technology and virtually no transmission loss, because generation sits next to the data center racks.

As the new normal continues to change everyone’s lifestyle, the data center has become the behind-the-scenes enabler of this new arrangement. Because of this increasing dependency, outages that knock out video conferencing, remote monitoring or streaming video services are coming under closer scrutiny. It is critical for data centers to be both sustainable and resilient.
With the new designs for data centers, the industry will not only minimize its environmental impact but also serve as an important contributor to human welfare in the Global Connected Era. This emphasizes the need for data center managers to achieve sustainability in next-generation data centers. The industry must strive to improve usage effectiveness beyond Power Usage Effectiveness (PUE), and to expand Total Cost of Ownership (TCO) beyond the direct business cost per megawatt installed.

Simon Fraser University has constructed a new research data center, a 175-rack facility which houses Cedar, the world's 50th largest supercomputer. Construction was recently completed with 107 racks fully populated. AKCP sensor solutions were chosen to monitor the data center and the mechanical room which houses the chilled water plant equipment. In the initial phase, spot and rope water sensors have been installed in the mechanical room, connected to AKCP sensorProbeX+ SNMP-enabled base units. The spot and rope water sensors are placed at key points where leaks are most likely to occur, and rope water sensors are laid out along the lengths of pipes or around the perimeter of an enclosure.
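Since the piece treats Power Usage Effectiveness as the headline efficiency metric, a small worked example may help. The Python below uses invented monthly figures for a single modular data hall, not measurements from any real facility; PUE is simply total facility energy divided by the energy delivered to IT equipment.

    def pue(total_facility_kwh, it_equipment_kwh):
        # 1.0 is the theoretical ideal: every kilowatt-hour reaches the IT load,
        # with nothing spent on cooling, power conversion, or lighting.
        return total_facility_kwh / it_equipment_kwh

    total_kwh = 1_450_000   # assumed monthly draw for the whole facility
    it_kwh = 1_000_000      # assumed monthly draw of servers, storage, network

    print(f"PUE = {pue(total_kwh, it_kwh):.2f}")      # PUE = 1.45
    print(f"Overhead = {total_kwh - it_kwh:,} kWh")   # the cooling/power-chain energy to target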
<urn:uuid:a8c9eb66-2ddd-485b-97a0-08672c2e7eea>
CC-MAIN-2022-40
https://www.akcp.com/blog/sustainable-and-resilient-data-centers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00663.warc.gz
en
0.931628
1,567
2.9375
3
It isn’t easy to imagine a world without the internet, even though it existed not that long ago. Almost every aspect of modern life, especially in business, involves the internet at some point. It’s an incredible resource, but it also comes with a few risks you should be careful to avoid. It’s no secret that the internet can be a dangerous place if you’re not careful. You might’ve heard that hackers attack every 39 seconds, but did you know that figure is from 2007? Internet usage has skyrocketed since then, so it’s safe to say cyberattacks have probably shot up too. Considering these rising threats, online security for your business isn’t optional. If you don’t know where to start, follow these eight steps. If there’s only one thing you do to increase your cybersecurity, it should be using an antivirus service. You most likely already use one, but it’s such an essential step, it’s still worth the mention. No matter how careful you are, things can slip through the cracks. Without this software, you’re at risk. Even though they’re called antivirus, these services protect against all kinds of malware. The internet is full of threats like trojans and spyware, and the more you work online, the more exposed you may be. Malware is too prevalent a risk to rely on your computer’s built-in antivirus. From email to cloud services to social media, working online involves creating and managing many accounts. You’ve probably heard it before, but it bears repeating — you need to use different passwords for each account. Even though 91% of surveyed users understand the risk of password recycling, 59% still do it. It can be challenging to create and remember all these secure passwords, so that disconnect makes sense. To help you keep track of all your business and personal accounts, use a password manager. These services generate and store secure passwords, so you don’t have to worry about reusing any. Even with a password manager, relying on a password alone isn’t the safest option. Although it’s a lot more challenging for a hacker to compromise a secure password, it’s not impossible. Turning on multi-factor authentication adds another layer of security, further securing your accounts. Many online services include multi-factor authentication, but it’s not turned on by default. Even though it may seem like a pain at first, enabling this feature is a straightforward but effective way to improve online security. The extra time it takes to log on is well worth the protection it gives you. Now that you can perform many business tasks online, remote work is a viable option. If you work remotely, you may occasionally go to other places like coffee shops or coworking spaces to work. While you’re at these locations, be wary of using their Wi-Fi, as public networks are usually unsecured. If you’re using public Wi-Fi, you should use a virtual private network (VPN) on your computer and phone. VPNs hide your IP address and encrypt your data, so if someone does look at your system, they’ll just see gibberish. It’s also a good idea to avoid public Wi-Fi as much as possible, too. Identity theft may seem like something that happens to “someone else.” Still, there are 15 million cases each year in the U.S. It’s become such a common problem partly because of how much personal information is available online. Even if you use protected or private accounts, your data may be more accessible than you realize. Double-check your privacy settings on all online accounts to see how they use your info. 
Many sites will give your information to advertisers, which can present a security risk. To stay truly private online, you need to tell websites not to share your data.

You probably use different email accounts for your work and personal life, and you should do the same for online accounts. As with passwords, using different emails ensures that a hacker can’t breach everything with one piece of information. You should use at least two — one for communication and another for making accounts. Using multiple email addresses will also help you sort phishing from actual alerts. If you get a seemingly important email on your spam account, you’ll know it’s a scam. While you’re making various accounts, consider using one email exclusively for password resets.

No matter how robust your cybersecurity is, you can never be certain accidents won’t happen. Cybercriminals aside, bad luck like a program glitch or coffee spilled on a computer can lead to file loss. To make sure these accidents don’t hurt you, back up sensitive data. You don’t need to back up everything, but it’s a good idea for anything that would be detrimental to lose. You can store backups in the cloud or on a physical device like an external hard drive. Whatever method you go with, make sure it’s just as secure as your primary storage option.

Cybersecurity is dynamic, so you need to update everything as soon as possible. As new threats emerge, developers release patches to protect against them, so you’re vulnerable if you don’t update. This step is especially pertinent when working online since the internet landscape changes so frequently. In 2019, the UN suffered a major data breach when a hacker exploited a vulnerability in Microsoft SharePoint. Microsoft had released a patch for this hole before the attack, though. If the UN had updated their software as soon as possible, the cyberattack would have been unsuccessful.

If you want to stay ahead in today’s business world, you need to leverage the internet. As you move more of your processes online, though, you need to be careful to avoid cyber threats. No one practice will protect you entirely, so you should take a broad, varied approach. Online security is a serious issue, but with the right tools, you can stay safe. Follow these eight steps, and you’ll find that you’re far more protected than before.
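A password manager does the generation for you, but the underlying idea is easy to illustrate: draw characters from a cryptographically secure random source rather than anything guessable. The Python sketch below uses only the standard library; the length and character set are arbitrary choices and should be adjusted to whatever a given site accepts.

    import secrets
    import string

    def generate_password(length=20,
                          alphabet=string.ascii_letters + string.digits + string.punctuation):
        # secrets draws from the OS's cryptographically secure RNG,
        # unlike the random module, whose output is predictable.
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())   # e.g. 'r#V9kQ...' -- different every run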
<urn:uuid:ea598e3c-bf55-4eb1-a83e-f7f355d7b950>
CC-MAIN-2022-40
https://www.colocationamerica.com/blog/increasing-your-online-security
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00663.warc.gz
en
0.928049
1,309
2.921875
3
Government researchers are looking for common patterns in the systems they study, to see if they can be applied to the development of robust complex networks.

FORT LAUDERDALE, Fla. — Government researchers in fields as diverse as biotechnology, ecosystems and behavioral science are looking for common patterns in the systems they study, to see if they can be applied to the development of robust complex networks, whether for computer systems or organizational structures. A panel convened at the Association of the United States Army winter symposium yesterday discussed some of the parallels between biological systems, such as the circulatory, respiratory and central nervous systems in fish, the behaviors of proteins in bacteria and the organization of an airline's flight routes, to show how their behaviors may be mirrored in the performance of networks.

Understanding biological, molecular and economic networks is necessary to design large, complex networks whose behaviors can be predicted in advance, said Jagadeesh Pamulapati, deputy director for laboratory management and assistant Army secretary for acquisition, logistics and technology. The search centers on finding the answer to, 'What are the underlying rules in common?' he said. Can a common language be used to describe all these systems? Is there a mathematical formula to describe their behaviors and relationships?

Jaques Reifman, chief scientist for advanced technology and telemedicine in the Army's Medical Research and Materiel Command, said that modeling protein interactions inside e. coli and plague bacteria is a form of comparing networks to understand 'why in two related viruses, sharing more than 50 percent of proteins, one's more virulent, more deadly, than the other.' Reifman offered the theory that proteins can be judged for 'essentiality' based on how many connections they make with other proteins, and these hub proteins are more likely to be centrally located within the network of interactions.

'I study fish because it's the data we can get,' said Lt. Col. John Graham, assistant professor for behavior sciences and leadership at West Point. Humans are resistant to providing access to their e-mail traffic, for instance, to allow the generation of very large datasets for study. But the understanding of networks is critical, he said, because 'the bad guys are getting good at network science.' Graham took some of the conclusions from the study of biological networks and demonstrated how they can be applied to social networks, including prospective terrorist networks. The most effective attacks will target 'boundary spanners,' the people who bridge gaps in communications, he said.
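Reifman's idea of judging 'essentiality' by connection count maps onto a simple degree calculation. The toy Python sketch below uses an invented five-node interaction network; real analyses run on far larger graphs and use richer centrality measures than raw degree.

    from collections import defaultdict

    # Toy network: an edge means two proteins (or two people) interact.
    edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]

    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    # The highest-degree nodes are the candidate hubs: "essential" proteins
    # in Reifman's framing, or "boundary spanners" in a social network.
    hubs = sorted(degree.items(), key=lambda kv: kv[1], reverse=True)
    print(hubs)   # [('A', 3), ('B', 2), ('C', 2), ('D', 2), ('E', 1)]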
<urn:uuid:733bd1e9-a082-4846-8779-8e404e954da6>
CC-MAIN-2022-40
https://gcn.com/2007/03/network-science-is-about-more-than-computer-systems/310404/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00063.warc.gz
en
0.941235
522
2.515625
3
Network Security 101: Network Firewall Best Practices all Businesses must follow

Most firewalls can provide some level of security straight out of the box. However, simply trusting your firewall to do its job on faith won’t guarantee your business’s network security. To prevent malicious devices or software from accessing your network, firewall management best practices must be incorporated into your network security administration. Hackers can quickly probe for and identify businesses with a weak firewall system. Once identified, they can focus their efforts on breaching the company’s cyber defenses. To prevent malicious attacks, your firewall’s security must be frequently reviewed and optimized. All of this is covered as we identify network firewall best practices in the article below. Before we get into the leading firewall best practices, let’s first review why firewalls are necessary and why they matter to an organization’s cybersecurity posture.

What is a Network Firewall, and How does it work?

A network firewall is a security device and an organization’s first line of defense against malware, viruses, and attackers attempting to maliciously access an organization’s internal network and systems. Network security firewalls are implemented as hardware, software, or both, and monitor incoming and/or outgoing network traffic, blocking unauthorized traffic based on predefined rules. A firewall acts as a traffic controller or gatekeeper, creating a barrier between your business network and other networks such as the Internet. It helps protect your network and data by managing your network traffic, blocking unauthorized incoming network traffic, and validating access.

How does a Network Security Firewall work?

The data that comes into your business network arrives in the form of packets (small units of data). Each packet contains information about its source and destination, but the intent behind an individual packet cannot be determined. A firewall therefore only lets through connections that it has been set up to allow. It does this by permitting or blocking data packets based on IP addresses known to be safe. By determining each packet’s source (its IP address), firewalls filter traffic and allow only legitimate traffic to enter your corporate network. Most operating systems and security programs come with a firewall installed. It’s sensible to ensure these features are on and your security settings are configured so that updates run automatically.

Why does my Business need a Firewall?

Security and protection. Firewalls, along with corporate anti-virus software, form the cornerstone of any information security strategy. Firewalls prevent unwelcome traffic from passing through: they stop external threats from reaching your network and help contain internal threats. Here are some reasons why your business needs a network firewall:
- The first line of Defence from Unauthorised traffic: A firewall protects your corporate network from the Internet. Without that barrier, external users can access your confidential corporate assets, and business data and assets are in danger. A firewall can protect you from hacking attempts, data breaches and, in some cases, even DDoS attacks.
- Block access to Malicious Websites: A firewall can stop your users from going to certain websites outside of your network. It can also stop unauthorized users from getting into your network.
For example, you could set up a rule that prevents your network from accessing social media sites like Facebook or YouTube.
- Protect Businesses from Malicious Code: Firewalls analyze incoming and outgoing network traffic and log infiltration attempts and policy infractions. Modern firewalls often include Data Loss Prevention (DLP) features, which can detect sensitive data and prevent it from being leaked.
- Meter Bandwidth: A firewall does more than keep things safe. It can also measure and control the amount of network bandwidth that flows through it. For example, through QoS (Quality of Service), firewalls can limit the network bandwidth used by non-business videos, music, and images, saving bandwidth for more important business traffic.
- Provide VPN Services: Many firewalls can connect two sites through Virtual Private Network (VPN) services. This feature allows mobile device users and remote sites to access your internal network resources safely, making it easier for people to work together and share information.
- Regulatory Compliance: Many data privacy and protection laws, regulations, and policies require firewall use either explicitly or implicitly. Examples include PCI DSS (the Payment Card Industry Data Security Standard), HIPAA (the Health Insurance Portability and Accountability Act), and GDPR (the General Data Protection Regulation). A firewall can improve your chances of meeting these regulatory mandates.

Types of Firewalls in Network Security

Structurally, firewalls can be software, hardware, or a combination:
- Software Firewall: A software firewall runs on computers and is usually included in operating systems and security programs, such as Avast Internet Total Security. Software firewalls help prevent attacks from the outside, such as unauthorized access and other malicious activity, for example by warning you when a harmful email is opened or an insecure website is visited.
- Hardware Firewall: A physical device placed at the network boundary. All network connections that cross this boundary go through this firewall, which lets it inspect incoming and outgoing network traffic and enforce access controls and other security policies.

There are also different kinds of firewalls based on how they work, and each kind can be set up as either software or a physical device. Four common types work in different ways:
- Packet Filters: Also known as static firewalls, these control network access by watching outgoing and incoming packets and letting them through or stopping them based on source and destination Internet Protocol (IP) addresses, protocols, and ports.
- Stateful Inspection Firewalls: These use dynamic packet filtering to control how data packets move through the firewall. They can check whether a packet belongs to a particular session and will only allow communication if the session between the two ends is properly established; if it isn’t, the communication is blocked.
- Application Layer Firewalls: These firewalls can inspect information from the application layer of the OSI model, such as an HTTP request. If a suspicious application that could harm your network, or that is not secure, is found, it is immediately blocked.
- Circuit-level gateways: A circuit-level gateway is a firewall that protects User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) connections.
It works between the transport and application layers of the Open Systems Interconnection (OSI) model, at the session layer.

What are Next-generation Firewalls (NGFW)?

Often referred to as intelligent firewalls, NGFWs accomplish the same tasks as the other types of firewalls already mentioned. However, they also incorporate extra features like application awareness and control, integrated intrusion prevention, and cloud-delivered threat intelligence.

5 Firewall Best practices to improve your Network Security

Your firewall’s effectiveness depends on where it is placed and how it is configured. These are just a few of the factors that affect it.

1. Harden the Base Configuration

Block traffic by default, permit only specific traffic to certain known services, and monitor user access (a brief illustrative sketch of this default-deny evaluation appears below). This approach allows you to control who has access to your network and helps prevent security breaches. Firewalls are a business’s first line of defense against threats, so they must themselves be protected from unauthorized access. User permission control is required to ensure that only authorized administrators can modify firewall configurations, and a log must be kept for compliance and audits whenever an administrator changes the configuration. Unwarranted configuration changes can then be detected, and the configuration can be restored if necessary. Separate user profiles can be created to give different levels of access to IT staff, and firewall logs should be regularly monitored to detect any unauthorized break-ins from outside or inside the network.

You can take further measures to protect your firewall and maintain its integrity. These are some firewall-hardening best practices you can follow:
- Make sure your firewall software/firmware is up to date: This makes it much harder for anyone to exploit known vulnerabilities in your firewall.
- Replace the default factory password: Use a complex password that includes alphanumeric and non-alphanumeric characters and both uppercase and lowercase letters.
- Use the principle of least privilege for firewall access: Only authorized administrators should be able to log in to make changes to your firewall.
- Do not use insecure protocols like HTTP, Telnet, and TFTP: It wouldn’t take long for someone intercepting that traffic to obtain sensitive information (e.g., usernames and passwords).

2. Implement Firewall Change Control

Every day there are changes that affect your IT infrastructure. New applications are installed, new network equipment is deployed, user numbers increase, and non-traditional work methods may be adopted. Your IT infrastructure’s attack surface changes as a result. Changes to your firewall shouldn’t be taken lightly: one simple error can take services offline, disrupt critical business processes, or expose ports to the outside world and compromise security. You must have a plan for managing change before you make any changes to your firewall. The plan should outline the changes you intend to make and the goals you have in mind, and your change control mechanism must include anticipated risks and measures to mitigate them. Keep track of all relevant information when you are creating your plan, and when executing it, record who made each change and what they did. This gives you an audit trail that you can refer back to if anything goes wrong.
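Pulling together the packet-filtering description and the 'block by default' posture above, here is a deliberately simplified Python sketch of first-match rule evaluation with a default-deny fallback. The rule format, prefixes and ports are invented for illustration; real firewalls match on CIDR ranges, protocols and connection state rather than string prefixes.

    # Each rule: (source_prefix, dest_port, action). First match wins;
    # traffic matching no rule is dropped, which is the default-deny posture.
    RULES = [
        ("10.0.0.",    443, "allow"),   # internal clients to an HTTPS service
        ("203.0.113.",  22, "deny"),    # block SSH from a known-bad range
    ]

    def evaluate(src_ip, dest_port):
        for prefix, port, action in RULES:
            if src_ip.startswith(prefix) and dest_port == port:
                return action
        return "deny"                    # nothing matched: drop it

    print(evaluate("10.0.0.15", 443))       # allow
    print(evaluate("198.51.100.7", 3389))   # deny (no rule matches)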
After you have improved the performance of your firewall, it is time to ensure that it remains secure. This plan will minimize the negative impact firewall changes can have on your business.

3. Plan and perform regular Firewall Security Audits

Companies have good reasons to perform compliance audits frequently; for example, they typically conduct PCI DSS audits at least once per year. Compliance is generally only relevant for a specific point in time, since there is no guarantee that a firewall device will remain compliant. To identify policy violations, your IT security team, whether in-house or outsourced, should conduct regular and routine firewall security audits. These audits check whether firewall rules align with organizational security policies and compliance requirements; unauthorized firewall policy changes could lead to non-compliance and expose your business to cyber threats. An audit also helps forecast how network changes may affect your security profile and should be performed every time you:
- Install a new firewall
- Conduct a firewall migration on the network
- Implement multiple firewall configuration changes

A standard firewall audit includes:
- Collecting relevant information such as previous audit reports, firewall and network security policies, network diagrams, and the legitimate applications within your network.
- Ensuring employees comply with firewall access control policies, with access granted only to authorized administrators.
- Checking firewall change management plans and making sure employees follow them, and re-examining your firewall monitoring process: someone should be monitoring the firewall logs to detect potential threats and faulty rules.
- Reviewing firewall rules and access control lists to confirm they are still suitable for your current network setup.

To aid post-incident reporting and investigations, send your firewall logs to a security information and event management (SIEM) system. Audits are also a way to ensure that firewalls you have acquired or migrated comply with your firewall policies. Combining firewall configuration best practices with regular firewall security audits helps keep you both compliant and secure. A firewall management plan was briefly mentioned earlier; here are the details and the reasons you should have one.

4. Optimize the Firewall Rules

For optimal protection, firewall rules must be well defined and tuned. De-cluttering your firewall rule base can improve network security: the rule base may include redundant components, duplication, or bloated, superfluous rules that are ineffective, and it’s vital to eliminate such rules to keep the policy explicit. You can clean your firewall rule base by:
- Removing redundant or duplicate rules, which slow firewall performance by forcing the firewall to process more rules than necessary.
- Removing rules that are no longer needed. They make firewall management more complicated and threaten network security.
- Eliminating shadowed rules that aren’t essential, since they can lead to more important rules being neglected.
- Resolving conflicting rules.
- Correcting any mistakes or inaccuracies, which could otherwise lead to malfunctions.

If possible, use real-time monitoring so you are alerted to any firewall changes.
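As a companion to the cleanup list above, the hypothetical Python sketch below flags exact duplicates and simple shadowed rules, meaning rules that can never match because an earlier, broader rule already covers their traffic. The rule format and wildcard handling are invented for the example; real rule analysis has to reason about address ranges, protocols and services in far more detail.

    # A rule is (source, port, action); "any" is a wildcard source. First match wins.
    rules = [
        ("any",        443, "allow"),
        ("10.0.0.0/8", 443, "deny"),    # shadowed by the broader rule above
        ("any",        443, "allow"),   # exact duplicate of the first rule
    ]

    def covers(earlier, later):
        # An earlier rule covers a later one if its source is at least as broad
        # and it applies to the same port, so the later rule can never fire.
        return (earlier[0] == "any" or earlier[0] == later[0]) and earlier[1] == later[1]

    seen = []
    for i, rule in enumerate(rules):
        if rule in seen:
            print(f"rule {i} duplicates an earlier rule: {rule}")
        elif any(covers(prev, rule) for prev in seen):
            print(f"rule {i} is shadowed and will never match: {rule}")
        seen.append(rule)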
5. Use a Centralized Management Tool for Multi-Vendor Firewalls

Multi-vendor firewall estates are common: to provide additional security, many companies prefer to install firewalls from different vendors. The problem is that firewalls from different vendors have different architectures, so to ensure they are all working correctly, it is essential to manage them centrally. Multi-vendor firewall management tools let you view all firewall policies and rules in one place, making it easy to manage and compare them. A centralized management tool will also allow you to run security audits and reports, troubleshoot configuration problems, and support firewall migration.

Firewall management is part of a broader Cybersecurity Strategy

Firewalls are an essential part of your business, but just like other devices and applications, they require attention to work at their best and deliver results. A poorly installed firewall is worse than no firewall at all, and the same is true of firewalls that have not been adequately planned or audited. These common mistakes lead many businesses to poor network security and a failed investment, yet they can be avoided by adhering to the firewall best practices outlined above. Firewalls are a significant investment for any business, ranging from $1,000 to more than $100,000; by following these best practices, you can maximize your return on that investment and increase the effectiveness of your firewall.
<urn:uuid:641285d4-1daf-47d8-bf9e-da3691e16530>
CC-MAIN-2022-40
https://www.businesstechweekly.com/cybersecurity/network-security/network-security-101-network-firewall-best-practices/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00063.warc.gz
en
0.914553
2,950
2.671875
3
Microsoft is testing a cooling technology known as submersion cooling that could make it more efficient to run high-performance, low-latency workloads such as machine learning and artificial intelligence applications at the edge. Microsoft is not the first company to experiment with submersion cooling but claims to be the first to use two-phase immersion cooling in a production cloud service environment.

Using liquids to cool computers has a history that goes back to the first IBM mainframes. Submersion cooling is a technique that has been pioneered in the world of supercomputers by companies like Cray Supercomputer and has found traction in applications such as cryptocurrency mining. Servers are placed in a tank of dielectric cooling fluid (in this case, the fluid is supplied by 3M); the fluid boils at a temperature far lower than water. As the liquid boils, heat is carried away from the servers, which means they can operate at full power without risk of failure through overheating. Microsoft says this technology will make it easier to run demanding applications in edge locations such as the base of a 5G tower, according to a company blog post.

Liquid cooling is widely used elsewhere in engineering; most cars, for example, use it to prevent engine overheating. Applying the same approach to chips running in servers makes sense because chips draw ever more power as manufacturers push performance upward. Now that transistor widths have shrunk to the size of a couple of atoms, we are reaching a point where Moore’s Law, which says the number of transistors on a chip will double every two years, will shortly cease to apply. But while we might have reached the physical limit of chip architecture, demand grows and grows, and manufacturers have turned to power to improve performance. Microsoft says typical CPU power usage has doubled from a hundred and fifty watts a chip to three hundred, and GPUs can use upwards of 700 watts of power. The more power passed through a chip, the hotter it runs and the higher the risk of malfunction. Air cooling has long been the solution to the problem, but that is ‘no longer enough’, especially for artificial intelligence workloads.

(Hardware engineers on Microsoft’s team for data center advanced development inspect the inside of a two-phase immersion cooling tank at a Microsoft data center. Source: Microsoft)

Microsoft says immersion cooling will enable Moore’s Law to continue at the data center level. The reduced failure rates expected also mean it might not be necessary to replace components immediately when they fail, which would allow deployment in remote, hard-to-service areas. Microsoft is currently running a tank in a hyperscale data center, but it could also see tanks used at the base of a 5G cell tower for applications such as self-driving cars. It’s easy to imagine the submersion systems being deployed in lots of other classic edge locations, such as oil and gas wells or factories, where maintaining system reliability and energy efficiency is critical to operations. To that end, keep an eye on companies like Submer Technologies and GRCooling, which have been commercializing immersion cooling systems that will benefit from the exposure Microsoft has brought to the topic.
<urn:uuid:55383c83-d2d4-4c98-bc3b-48fb8dac0a0c>
CC-MAIN-2022-40
https://www.edgeir.com/microsoft-brews-up-plan-to-use-liquid-for-edge-data-center-cooling-20210420
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00063.warc.gz
en
0.950485
682
3.25
3
On Dec 8, 2017, 4iQ reported the discovery of a database on the dark web containing 1.4 billion credentials—in clear text.1 The fine writers of the aforementioned article note that they’ve “tested a subset of these passwords and most of them have been verified to be true.”

A standard calculator (like the one on your smartphone) cannot display 1,400,000,000 without using scientific notation. I tried; my poor iPhone can only manage to display 140 million. There aren’t enough digits on a standard calculator to deal with numbers of this magnitude. Our brains, it turns out, are similarly limited. The numbers associated with recent breaches are so large that scientists tell us we can’t really comprehend them.2 That’s usually why we talk in terms of percentages and round to numbers that are easier to swallow, because we are much better at grasping their implications. This innate inability is problematic, because as researchers note, “the larger a number grows, the harder it becomes to deal with. But sometimes, extremely large numbers lurking in the levels of billions and trillions and more, actually are relevant to the lives of everyday people. Take the national debt and government deficit for example. In order to understand such numbers, it's important to have an understanding of the context that number falls into.”

So, let’s put them into context. As of June 2017, there were approximately 3.8 billion Internet users across the globe,3 and 1.4 billion is just over one-third (37%) of all Internet users. That means if just three of us get together, the credentials of one of us were likely in that database. If you’re uncomfortable with this revelation, let me make you even more uncomfortable: that’s just the tip of the proverbial iceberg. F5 Labs gathered and analyzed data related to a decade of data breaches and discovered that 1.4 billion is a mere pittance when viewed against the almost 12 billion records (of all kinds) compromised over the past ten years.

“In 338 cases, almost twelve billion records (11,768,384,080) were compromised. That’s an average of 34,817,704 records per breach! To put that figure into perspective, the current world population is 7.5 billion, and the population of people online as of June 30, 2017 was 3.8 billion. That’s roughly 1.6 records breached per person in the world (just because you’re not online doesn’t mean your data isn’t), or 3 records per person online that have been breached.”

This research does not include the recent find by 4iQ. If we include that, the number of records breached rises to 13 billion, or an average of 3.5 records per person online. The point is not to scare you into a fetal position under your desk. It is to ignite awareness that we are experiencing a very real and troubling credential crisis that cannot be managed simply by changing passwords anymore. Moore’s law and cloud computing are completely indifferent as to their application. They work just as well for the defenders as they do for the attackers.

Source: Lessons Learned from a Decade of Data Breaches, F5 Labs, Nov 2017

With literally billions of credentials stashed on the “dark web” and accessible to anyone with the means to pay for them, we must consider that we might be feeding the trolls by changing passwords without simultaneously re-examining our app protection and access strategies. Because 3.4 million secret question and answer records were also among the records breached.
Changing passwords won’t stop someone from resetting a password by correctly answering the canned questions presented by most organizations. What will help is a strategy that includes protection from compromised credentials being used from attacker machines, and protecting applications from being exploited in the first place: - Multi-factor authentication (MFA) to prevent stolen credentials being used from unknown (attacker) systems - A hyperactive approach to patching vulnerabilities in platforms and applications - Use of both reactive and proactive security technologies to protect and defend apps against exploitation - Vigilant monitoring of applications, databases, endpoints, and network activity, including decrypted traffic Stay safe out there.
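One practical complement to the list above is screening new or reset passwords against corpora of already-breached credentials. The Python sketch below is purely local and uses a made-up three-entry corpus; it only illustrates the prefix/suffix split behind the k-anonymity range lookups that public breached-password services offer, and none of the hashes or passwords refer to real breach data.

    import hashlib

    # Hypothetical local corpus of SHA-1 hashes of known-breached passwords.
    BREACHED = {
        hashlib.sha1(pw.encode()).hexdigest().upper()
        for pw in ("password", "123456", "letmein")
    }

    def is_breached(candidate):
        digest = hashlib.sha1(candidate.encode()).hexdigest().upper()
        # k-anonymity idea: only the 5-character prefix would ever leave your
        # systems in a real lookup; suffix matching happens locally.
        prefix, suffix = digest[:5], digest[5:]
        return any(h.startswith(prefix) and h.endswith(suffix) for h in BREACHED)

    print(is_breached("letmein"))       # True  -- reject it at signup or reset time
    print(is_breached("kx9!TqWz#4vd"))  # False (almost certainly not in the corpus)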
<urn:uuid:80d07b71-1087-4e0a-9f0e-546f335b18dc>
CC-MAIN-2022-40
https://www.f5.com/labs/articles/threat-intelligence/the-credential-crisis-its-really-happening
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00063.warc.gz
en
0.947969
901
2.640625
3
Only a few organic creatures specifically designed for that environment are able to do it well. When you think of military robots or drones, images of unmanned aircraft that have been patrolling the skies in combat zones for many years like the durable Predators or Reapers probably come to mind. Or you might think about even more advanced flying machines like the autonomous Skyborg being designed for the Air Force to act as a wingman to human pilots in combat situations. Flying drones are so popular and well established these days that there are even hundreds of models available for civilian use. Designing drones that can fly makes a lot of sense, because other than during takeoffs and landings, it’s easy to keep them from coming into contact with the ground or any of the many hazards that can impact travel on land. Once you get a flying drone above the treetops, there is not too much to worry about in terms of things that the aircraft can crash into. Then about five years ago, we started to see drones and robots evolving into new environments, specifically with many being engineered to conquer the underwater world. Right now, most underwater robots are designed for maintenance, various surveillance or scientific tasks, or simply for exploration. But if you think about it, drones swimming in the ocean makes a lot of sense too, because like the drones that can fly, there is not too much for them to run into once launched. Recently, the U.K. military held large exercises where swarms of drones and other autonomous vehicles were tested in various combat roles. Some even bridged the gap between sea and sky, diving down into the water and swimming for a while before zooming back up into the sky to complete their missions. But if there is one place that gives drones trouble, it’s the ground, or more specifically, the underground. Moving and operating underground isn’t an easy task. Only a few organic creatures specifically designed for that environment like earthworms, ants and moles are able to do it well. Creating a functional robot that can dig its own path seems almost impossible. And yet, the Defense Advanced Research Projects Agency came up with quite a few potential uses for a robot that could navigate underground. It announced the Underminer program last year, tasking three organizations with trying to develop a tunneling robot that could navigate the underground world without human intervention. According to DARPA, the goal of the program was to “develop and demonstrate tactical uses for rapidly created underground infrastructure in contested environments.” Dr. Andrew Nuss, the Underminer program manager in DARPA’s Tactical Technology Office said: “The ability to quickly bore tactical tunnels could benefit contingency operations such as rapid ammunition resupply, rescue missions or other immediate needs.” This week, scientists working at the GE Research testing site in Niskayuna, New York successfully deployed one of the first and most impressive digging robots in the world. Funded with $2.5 million from the DARPA Underminer program, the scientists at GE were able to task their robot with quickly digging a small tunnel under their research facility’s grounds. The robot operated independently without human intervention during the test. Unlike most robots with rigid bodies designed to protect their vulnerable electronics housed inside, the GE robot has a soft body that resembles an earthworm with lots of articulated joints. 
This gives the robot the strength to push through soft ground, but also to make very tight turns in a way that is impossible for solid drill bits and most other modern digging technology. When you look at the robot navigating through a clear tube set up by the researchers to study its locomotion, it really does look like a giant earthworm. According to GE, the worm robot is capped with a piloting tool on its nose designed to stir and soften the ground as it moves forward. As the dirt loosens, the inside of the worm begins to churn in a way that mirrors how earthworms navigate their underground environment. Fluid flows through seven internal chambers that act as muscles, flexing as water is pumped in and out of them. To move forward, a swelling hydraulic muscle inflates, anchoring the tail of the robot while the auger churns at the head. The worm then elongates to push itself forward. The worm is equipped with ultrasound technology which allows it to track its movements as well as the angle and position of the entire robot. GE has gone through several prototypes prior to this model. Future worms might be even smarter, able to quickly navigate to wherever they are needed to carry out missions. The new DARPA-funded worm robot proves that autonomous, robotic vehicles have a bright future in both the military and civilian organizations. Having conquered the sky, the seas and now the underground, there is almost nothing that can stop them from continuing to advance and complete increasingly complex missions in almost any environment. John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys
<urn:uuid:0367e7d5-e99b-4211-b329-01c9ba5f69b3>
CC-MAIN-2022-40
https://www.nextgov.com/ideas/2021/07/military-robots-dig-new-underground-environments/184101/?oref=ng-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00063.warc.gz
en
0.960295
1,046
3.25
3
Cybersecurity continues to be a top concern for businesses in Canada. This is not a surprise: in 2021, 85.7% of Canadian organizations experienced at least one cyberattack within a 12-month period, according to the 2021 Cyberthreat Defense report by the CyberEdge Group. The bad news is that there is no miracle solution that will 100% guarantee a business or organization will never be the victim of a cyberattack. The good news is that there are measures that can be taken to minimize the risks and, in the worst-case scenario, provide the cyber resilience to minimize the impact on the organization. That is the idea behind the cybersecurity onion: multiple layers of security that minimize risk and increase cyber resilience.

Areas of analysis

As Technology Service Providers, when we think about the cybersecurity of a business, we look at the following areas. Graphically represented, it looks like the image below, hence the onion reference, with its different layers. Let’s take a deeper look at the different areas and the security solutions businesses need to consider.

People can be the weakest link in your cybersecurity posture, or they can be your best line of defense; businesses should strive for the latter. There are some key solutions to consider implementing to create a human firewall:
- Cybersecurity Awareness Training – teaching people about cyberthreats such as phishing and how to recognize them to stay safe. The training should include regular phishing simulations, which help people learn to identify phishing attempts.
- Password Management – implementing password policies that make it as difficult as possible for cybercriminals to crack passwords. The table below, developed by Mike Halsey, gives a good overview of how long it takes cybercriminals to crack passwords depending on their length and complexity. It may be surprising, but it is true. Note that this table has been updated on several occasions as cybercriminals adopt more sophisticated cracking tools. Implementing password management policies and solutions can help protect against cyberattacks.
- Multi-factor or Two-factor Authentication (MFA/2FA) – an authentication method that requires the user to provide two or more verification factors to gain access to a resource. MFA is a core component of identity and access management policies. Some may see it as an inconvenience, but this tiny inconvenience can be the difference between being hacked and not being hacked.

The perimeter is the border between one network and another. Creating a security perimeter means placing the necessary safeguards at the entrance to the network to secure it from hackers. Some of the solutions that help secure the perimeter of an organization’s network include:
- Firewall – establishes, with the proper configuration of security rules, a barrier between a trusted network and an untrusted network, such as the Internet.
- Spam Filter – detects unsolicited, unwanted, and virus-infected emails and prevents those messages from reaching a user’s inbox.
- Dark Web Monitoring – watches for any user information, such as passwords, that may have been compromised and is being sold on the dark web.
- Penetration Testing – also referred to as a pen test or ethical hacking, this is an authorized simulated cyberattack on a computer system, performed to evaluate the security of that system.

Protecting the network itself involves implementing additional security such as:
- Security Information and Event Management (SIEM) – software products and services that combine security information management and security event management, providing real-time analysis of security alerts generated by applications and network hardware.
- Security Operations Centre (SOC) Services – services that continuously monitor and improve an organization’s security posture while preventing, detecting, analyzing, and responding to cybersecurity incidents.
- Network Segmentation – separating different parts of a computer network, or network zones, with devices like bridges, switches, and routers. A few key benefits of network segmentation are:
- Limiting access privileges to those who truly need them.
- Protecting the network from widespread cyberattacks.
- Boosting network performance by reducing the number of users in specific zones.
- Wireless Authentication – enables you to secure a network so that only users with the proper credentials can access network resources.

Securing the endpoints involves:
- Monitoring and alerting services that look for unusual or suspicious activities at the endpoint level.
- Endpoint Detection and Response (EDR) – an integrated endpoint security solution that combines real-time continuous monitoring and collection of endpoint data with rules-based automated response and analysis capabilities. EDR is often referred to as next-generation antivirus.
- Patch management – the process of distributing and applying updates to software. Although sometimes overlooked, these patches are often necessary to correct security vulnerabilities and bugs in the software.
- Drive Encryption – technology that protects information by converting it into unreadable code that cannot easily be deciphered by unauthorized people.
- Vulnerability scanning – enables organizations to monitor their networks, systems, and applications for security vulnerabilities.

Data security mainly involves backing data up. Best practices for secure backups are often summarized by the 3-2-1-1-0 backup rule, which is described in the graphic below.

At this layer, security involves cyber resiliency: having a plan in place to respond to an incident as well as a plan for keeping the business operational when an incident occurs. Having plans in place, communicating them throughout the organization, practicing them, and reviewing them on a regular basis allows businesses to be prepared for a worst-case scenario.

In the end, the goal of a layered approach to cybersecurity is to make it as hard as possible for cybercriminals to hack your business. The requirements and needs of every business are different, and engaging with an IT Service Provider, such as MicroAge, to determine the right solutions for your business is part of the process. Contact us today to see how we can help.
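The password-cracking table referenced earlier in this piece boils down to arithmetic on the size of the search space. The Python sketch below assumes an offline attacker testing ten billion guesses per second against a fast hash; that rate is an illustrative assumption rather than a measurement, but it shows why both length and character variety matter.

    def crack_time_days(length, alphabet_size, guesses_per_second=1e10):
        # Worst-case exhaustive search over every possible combination.
        combinations = alphabet_size ** length
        return combinations / guesses_per_second / 86_400

    # 8 lowercase letters versus 12 characters drawn from the full keyboard.
    print(f"{crack_time_days(8, 26):.5f} days")    # roughly 21 seconds of guessing
    print(f"{crack_time_days(12, 94):,.0f} days")  # hundreds of millions of days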
<urn:uuid:bf0db5c0-e4ce-416c-a88f-011ac50cbadb>
CC-MAIN-2022-40
https://microage.ca/at/en/have-you-heard-about-the-cybersecurity-onion/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00264.warc.gz
en
0.939068
1,292
2.640625
3
The one-room schoolhouse. The barber shop doubling as a healthcare clinic. These images of small town living tend to stand as notions of how things “used to be.” This is not to say that pared down public facilities have ceased to exist, or that all communities require or could support a sprawling institution of higher learning or a top-notch research hospital. However, what is working to bridge the resource gap in education and healthcare between cities and rural areas is fixed wireless broadband: Medicine: At MetLife Stadium earlier this year, a medical trailer parked in the lot became a world-class emergency room through wireless broadband. This rapid-response trailer was outfitted with point-to-point technology, connecting the medical personnel in the trailer to their counterparts at Hackensack University Medical Center, a 900-bed teaching and research hospital in Hackensack, NJ. Education: Similarly, distance learning via wireless broadband connection spreads high-quality education farther from its source. With an enhanced Internet connection enabling swift transfer of large files, video communication and multi-campus collaboration, students in disparate communities can learn from one another, and educators can swap best practices and further their training and attain new credentials. To the latter application, wireless broadband provides connectivity from the University of Belize in the capital, Belmopan, to six primary schools. This high-bandwidth connectivity allows for widespread access to learning modules offered by the Caribbean Centre of Excellence for Teacher Training. Wireless broadband’s prowess over water and other challenging environments also makes it ideal for cost effectively connecting buildings and campuses over varied terrain, in addition to long distances. At Weymouth College, a 160-year-old institution of more than 7,000 students in the UK, wireless broadband is cost effectively providing carrier-grade connectivity between the main campus and four satellite campuses, one connection stretching across Weymouth Bay. These applications show how wireless broadband can extend the reach of smart city smarts to communities near and far, but certainly it’s just the tip of the iceberg in truly proliferating equal access to world-class healthcare and education. If you have witnessed a unique application of wireless broadband in healthcare or education, let us know in the comments below.
<urn:uuid:a8a376ce-c1fc-4333-a02f-b1088dab6c47>
CC-MAIN-2022-40
https://www.cambiumnetworks.com/blog/a-wireless-broadband-gateway-to-smarter-schools-hospitals/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00264.warc.gz
en
0.942697
458
2.671875
3
It’s seems like almost every part of our lives is now being supported by emerging technologies, from predictive analytics and artificial intelligence to the Internet of Things (IoT). First, we had smart phones, then smart watches and now smart cities. Currently, more than half of the world’s population lives in towns and cities, and by 2050 this number could rise to 66 per cent. This is resulting in a growing need for solutions to effectively manage city infrastructure and cope with the rising population, all while keeping up with modernisation. There are vast amounts of benefits when it comes to smart cities, such as wireless connectivity for utilities and intelligent transport systems. Through IoT, smart cities provide effective and innovative solutions to the growing number of challenges facing communities today. For example, sensor-enabled traffic lights can alert city maintenance workers about a burnt-out light bulb, ensuring public safety as well as saving valuable time and money. Security vulnerabilities and risks to smart cities It is clear that the future benefits of IoT-enabled cities are enormous. However, these benefits come with a significant array of challenges and risks, one being security. Though city administrators undoubtedly attempt to prevent attacks, we would be naive to ignore the possibility of something falling through the cracks. History has shown us that security measures that have even the smallest of vulnerabilities will be quickly identified and exploited by criminals and smart cities are no different. As smart city technologies rely on digital networks, cyber criminals can take advantage of a number of vulnerabilities from a distance. The growth of IoT has been rapid, yet it has not been matched with adequate protection. Due to inadequate software security, many smart city systems have been constructed with minimal end-to-end security, as many of the devices used assumed a safer environment with a smaller user community in mind. Cities and local councils are also under increasing pressure to make savings, so it’s no surprise that the use of legacy systems which have not been upgraded for several years is commonplace, leaving cities wide open to cyber threats. A cyber-attack or extreme weather conditions, such a storm or heavy rain, potentially resulting in millions of residents being left with no electricity supply, are therefore very real threats to smart cities. And, in these hyper-connected environments, an outage can have cascading effects. For example, if an electricity grid is affected, power could be cut to homes, workplaces and various essential infrastructures, leaving thousands, if not millions, without power or heat for hours and even days. This is similar to the 2015 BlackEnergy cyber-attack in Ukraine where hackers accessed a power plant system, causing a power outage and leaving a whole city of 230,000 citizens without electricity for light or heating. The preventative approach to smart city continuity In the past, the standard approach to attacks or outages was addressed through the recovery process, however this is no longer enough to keep us safe. Without taking a preventative approach, smart cities are at risk as even the latest and most resilient technology is unable to completely eliminate security risks and vulnerabilities. So, how can we address these risks to ensure continuity when things do inevitably go wrong? As cities adopt smart technologies, making data security a priority is crucial. 
We are witnessing the increasing adoption of smart devices within homes, providing a wealth of new potential data streams that could inform smart city services – as long as they are secured. For example, live video feeds from smart home security cameras could be used to help inform city police services. This raises the issue of security and continuity at a network level, as it opens up possibilities for cyber attackers to hack into households. Security features on smart devices are essentially non-existent – potentially leaving a city vulnerable, along with a family's online privacy.

With the introduction of cloud technology, smart city continuity can now be ensured. Smart city systems can be backed up and restored at lightning speed. Enabling cloud also provides an "air gap" in critical systems, which can be forcibly shut down when systems are hacked or at risk. This leaves time to resolve vulnerability issues, prevent further damage and get things back up and running – allowing cities to not only avoid massive outages, but to also recover from them.

As smart cities move from concept to reality, securing their foundation will ensure the safety of the digitally connected communities of the future. Decision makers cannot afford to put the public at risk by not implementing the right processes to support the infrastructure. While deploying security solutions to prevent things from going wrong will help, using continuity as a framework for building resiliency is the way forward for smart cities. There is no substitute for being prepared, but investing in the right solutions to get cities back up and running, with immediate access to heating, power or electricity, is non-negotiable. High-quality investment in cloud backup and disaster recovery ahead of time is imperative. Taking a proactive, rather than reactive, approach will ensure that systems are protected and resilient in the face of foreseeable or unforeseeable attacks and outages.
<urn:uuid:e8569b76-d702-4fc7-8065-bc7550f2d1ce>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2018/12/18/delivering-security-smart-cities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00264.warc.gz
en
0.952833
1,024
3.296875
3
Are Gas-Fired Peaking Plants on the Way Out?

The electric utility industry has seen very significant changes in the last decade, especially with the advent of renewables and other technologies that facilitate energy efficiency. The latest projection is that, before long, natural gas-fired peaker plants will be replaced by "solar plus storage" technology.

To date, "solar plus storage" has been a very effective strategy for utilities that want to guard against short, intermittent outages or to stabilize voltages, in that the technology can "keep the lights on" for a few minutes. More recently, though, "solar plus storage" technology has been advancing such that it is capable of powering the grid for a few hours, rather than just a few minutes. As a result, industry observers see a day in the not-too-distant future when utilities will be relying more on "solar plus storage" to handle peak demands (such as afternoon spikes in demand on hot summer days) than on traditional gas-fired peaker plants.

A February 13, 2018 article in the Wall Street Journal looked at this trend, noting that, "Giant batteries charged by renewable energy are beginning to nibble away at a large market: the power plants that generate extra surges of electricity during peak hours." It went on to note that peaker plants are now becoming vulnerable to "inroads from lithium-ion batteries, which have fallen in price in recent years, and are emerging as a competitive alternative for providing extra jolts of electricity."

But it is not only the ever-lowering prices of "solar plus storage" that are making the technology more attractive. It is government directive. In late February 2018, the Federal Energy Regulatory Commission (FERC) unanimously approved Order 841, which, according to GTM Research, "opens the floodgates for storage participation" in wholesale power markets. Order 841 directs operators of RTOs and ISOs to come up with market rules for energy storage to participate in the wholesale energy, capacity, and ancillary services markets that recognize the physical and operational characteristics of the resource. Prior to the release of Order 841, studies conducted by GTM Research were showing that energy storage could be competitive with new-build gas-fired peaker plants in five to ten years, but only if the right market mechanisms were in place. With Order 841, these market mechanisms will soon be in place.

It is important to note that, as of today, while most "solar plus storage" is less expensive than new-build gas-fired peaker plants, most such projects are still more expensive than existing gas-fired peaker plants. However, that is already starting to change in certain locations around the country. For example, First Solar Inc. recently won a contract to supply Arizona Public Service with a 65 MW solar farm that will feed a 50 MW battery system, with a price that not only beat out other "solar plus storage" competitors, but also gas-fired peaker plants. Neighboring Tucson Electric Power recently signed a power purchase agreement for a "solar plus storage" project that is "significantly less than $0.045/kWh over 20 years," said Carmine Tilghman, senior director of energy supply for the utility. He added that he believes that the solar portion of the PPA is "the lowest price recorded in the U.S." The project, being built by NextEra Energy, calls for a 100 MW solar array and a 30 MW, 120 MWh energy storage facility. And, in Long Beach, California, Fluence Energy (a joint venture of AES Corp.
and Siemens AG) is building a battery array three times the size of the Tucson project, making it the largest lithium-ion battery project in the world. Another Arizona utility, Salt River Project, recently signed a 20-year power purchase agreement with NextEra Energy for a project involving a 20 MW solar array with a 10 MW lithium-ion storage system.

A third driver of the explosive growth of "solar plus storage," in addition to reduced pricing and the new FERC order (which had been in discussion since 2015), is the current 30 percent investment tax credit, which is in place for both solar and storage when they are combined.

A fourth driver is that several state governments are encouraging the introduction of "solar plus storage" as a way to reduce dependence on gas-fired peaker plants. In December 2017, for example, the California Public Utilities Commission issued a resolution that would direct Pacific Gas & Electric to open competitive solicitations for distributed energy resources (primarily energy storage) to cover grid capacity and voltage needs that are currently being served by three gas-fired peaker plants in the northern part of the state. A recent report by GTM Research noted that, "Over the past five years or so, California's push to replace natural-gas power plants with a portfolio of solar PV, batteries, thermal storage, demand response, and energy efficiency has grown from an experiment to a cost-effective reality." Additional proof of California's commitment to shifting from gas-fired peaker plants to "solar plus storage" came in the fall of 2017, when the California Energy Commission said that it planned to reject a utility's proposed new gas-fired peaker plant, leading the utility to suspend its application process.

And it is not just states in the Southwest that are moving away from dependence on gas-fired peaker plants. New York, Massachusetts, and Oregon are all introducing directives and mandates designed to encourage the growth of "solar plus storage" projects.

The trend, of course, is of concern to the natural gas industry. A representative of the American Petroleum Institute (API), a lobbying group representing natural gas producers, noted in the Wall Street Journal article that the API "applauds batteries," but believes they should compete on a level playing field, pointing to the 30 percent investment tax credit still available to "solar plus storage" projects. "It appears that battery technology is now ready to compete in the market," said the API spokesperson. "This means the financial support provided by governments intended to encourage the development and deployment of the technology can be eliminated."

The future? It remains to be seen. While three of the four current drivers of "solar plus storage" (continuing declines in technology costs, FERC Order 841, and pressure by state governments for utilities to shift from gas-fired peaker plants to "solar plus storage") seem to be in place for the long run, the future of the 30% investment tax credit is still an unknown. If this tax credit does end, it could make "solar plus storage" a bit less attractive from a financial standpoint, but, of course, not from an environmental standpoint. And it still remains to be seen what effect the Administration's new 30 percent tariff on imported solar panels will have on the price of "solar plus storage" projects.
If you are interested in learning more about this concept and solution at your company, please contact us for more information and consulting to assist you in meeting the challenges of being a reliable and affordable energy provider. [email protected]
<urn:uuid:6bb308ea-ae52-4fb7-bbbf-ea0c27e9cbd0>
CC-MAIN-2022-40
https://finleyusa.com/whitepaper/are-gas-fired-peaking-plants-on-the-way-out/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00264.warc.gz
en
0.951544
1,576
2.921875
3
Denial of service (DoS) attacks are any attacks that attempt to make your Internet-based services inaccessible when you need them. An attacker who manages to knock your email server offline, for example, has executed a denial of service on your email.

Types of DoS Attacks

There are generally two types of DoS attacks we need to be prepared for: volume-based and logic-based.

Volume-based Attacks. This type of DoS is the one most people are familiar with. In a volume-based attack, the attacker simply attempts to flood a service or connection with so much traffic that communications either become unusably slow, or the service crashes altogether.

Logic-Based Attacks. In a logic-based attack, the attacker attempts to exploit a flaw in a service to take it offline. For example, if an attacker finds a vulnerability in an unpatched web server, and uses that vulnerability to degrade the performance of the server, that would be a logic-based attack. It didn't take a lot of resources; it only took a "logic" flaw to degrade service.

You may have also heard of a "Distributed Denial of Service" attack - or DDoS. A DDoS attack is usually a volume-based attack that uses a network of computers to attack a single source. The attacks are most often conducted using machines that don't belong to the attacker (previously compromised) and can be both highly effective and highly challenging to stop.

Preventing Denial of Service

DoS prevention is not an easy task, but there are a few things you can do to prepare:

Trust the experts in DoS mitigation. Many providers in the marketplace will handle DoS for you. They do this (usually) by acting as a middle-man between your users and your network, blocking attacks on their own networks before the attackers can ever reach your network. While we don't endorse any single vendor, the Coalition team of engineers entrusts Cloudflare with this task.

Patch your servers. In logic-based attacks, the attackers are looking for flaws and known exploits in your software in an attempt to take it offline. One of the best ways to prevent this kind of DoS attack is to always run updated software. While this seems like an easy thing to do, it's simple advice that most don't follow. It can often be more complicated in large environments, but it's nonetheless important.

Use Application-Layer Firewalls. Application-layer firewalls work to prevent logic-based attacks by intercepting and testing all connections to your servers before they get to your servers. Think of this as a lock and key - if the traffic (key) doesn't fit the lock (your server), it won't get in. Once again, there are many vendors in this space, but one of the best and most affordable solutions on the market is Cloudflare.

Know your network. While it may sound simple, knowing how your network (and cloud services) work within your organization is very important. When an attack happens, you need to be able to react quickly, potentially rerouting traffic using your DoS mitigation company - and that will require an understanding of your network.

Denial of service attacks can be tricky to defend against. We recommend using third parties for DoS prevention whenever possible, as they have the tools and techniques required. As with all things in cybersecurity, this is not a 100% guarantee. However, using these services is a fantastic way to reduce your risk to a manageable level in most cases. For more information on this topic, please reach out to us; we're here to help!
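To make the idea of throttling volume-based floods at the application layer concrete, here is a minimal token-bucket rate limiter in Python. It is an illustrative sketch only - it is not Cloudflare's implementation, and a real deployment would track one bucket per client IP and sit behind a dedicated mitigation service.

```python
import time

class TokenBucket:
    """Tiny per-client rate limiter: allow short bursts, refuse sustained floods."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.burst = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, burst=10)  # limits chosen purely for illustration
allowed = sum(bucket.allow() for _ in range(100))
print(f"{allowed} of 100 back-to-back requests allowed")  # roughly the burst size
```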
<urn:uuid:574c7fe2-5516-4dda-9ed6-74e71c0c4bd8>
CC-MAIN-2022-40
https://help.coalitioninc.com/en/articles/2754843-understanding-and-preventing-denial-of-service-dos-ddos
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00264.warc.gz
en
0.957024
759
3.34375
3
Malicious software — or malware — is the term used when dangerous software is used to access and infect a computer or network without being detected. A lot of people hear terms like viruses, adware, spyware, ransomware, worms or Trojan horses; these are all different types of malware that can severely damage computers. Cybersecurity companies are constantly on the lookout to find these treacherous codes and put a stop to them before they cause significant damage.

All forms of malware are extremely dangerous once they infiltrate a device, but the way malware functions differs depending on the type. Below is a list of several types of malware and short definitions of how they work.

Viruses
- Function — Software that replicates itself over and over again once it is activated
- Threat — Viruses will corrupt or delete data
- Location — Comes in emails

Adware
- Function — Software that throws advertisements up on your screen (pop-ups)
- Threat — Can also corrupt your server and disable you from getting online
- Location — Found on the web; can appear through potentially unwanted programs

Spyware
- Function — Software that sneakily clings to your computer's OS
- Threat — Collects all kinds of information
- Location — Can come in through terms and conditions you agree to

Ransomware
- Function — Software from crypto-virology that will lock you out of your own files
- Threat — Will block you from your own files until you pay a ransom
- Location — Usually carried in through a download or an attachment in an email
- Extra — Outlaws

Worms
- Function — Software that relies on vulnerabilities in a computer and spreads like a virus
- Threat — They replicate to a point that your network is damaged and bandwidth consumed
- Location — Found in vulnerable code

Trojan horses
- Function — Software that looks legitimate but is activated once its program is clicked
- Threat — Designed to damage your computer in any way — disrupt, steal, infect, etc.
- Location — They appear in what look like normal social media ads or other normal links

The above descriptions only summarize some of the key components of what these malicious software programs can do, but they all carry an equal amount of concern if your computer is affected. In addition, there are a few other types of malware that we haven't discussed — botnets, rootkits, spam, etc.

When malware was first created, it was used for pranks and experiments, but now it is always destructive. Cybersecurity companies use programs to combat these dangerous attacks. If your network or computer is infected by some kind of malware, you should contact a cybersecurity company immediately. Malware is not the only form of dangerous online attack; there is also spoofing, phishing and other security hackers that might be able to penetrate your computer's basic line of defense. To detect any threats before they become a problem, contact ICS today.
<urn:uuid:a472e105-a508-4a05-ab50-7a5dad55397d>
CC-MAIN-2022-40
https://icscomplete.com/uncategorized/understanding-different-types-of-malware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00264.warc.gz
en
0.928539
601
3.5
4
Some of the greatest technological breakthroughs aren't always immediately apparent. They can slip quietly under the radar when they first come out, and their importance only becomes realised later on. That will probably be the case with Raspberry Pi, a brilliant idea that will make sure kids around the world can be computer literate in the future, and at relatively little expense.

The Raspberry Pi is a pocket-friendly, ARM-based single board computer the size of a credit card that was developed here in good old Blighty by the Raspberry Pi Foundation, a charity whose aim is to help children to learn about computers and programming. It plugs into a TV and keyboard and can do much of what a desktop PC does, including word-processing, spreadsheets, games and high definition video. Indeed, the Raspberry Pi's great strength is its great flexibility; there are plenty of add-ons so you can customise it to do what you want it to.

Early designs for the Raspberry Pi were developed in 2006 by Eben Upton and his colleagues at the University of Cambridge's Computer Laboratory, who challenged teachers and computer experts to create a computer that would inspire and engage children. They'd noticed a decline in the numbers and skills levels of the students applying to do degrees in Computer Science. In the 1990s, kids who applied had a good working knowledge of programming, but a decade on the applicants didn't have that knowledge and were more into web design.

Upton and his team identified a number of issues with the way children were using computers. They noticed that the ICT (Information and Communications Technology) curriculum in schools seemed to be dominated by working on Word and Excel or creating web pages, while home PCs and games consoles were all ready-to-use and didn't require any knowledge of programming or how computers work; very different to the BBC Micro, Spectrum ZX and Commodore 64 machines the previous generation had learnt about computers on.

Determined to get programming back at the heart of home computing like it had been in the 1980s and 1990s, Upton and his colleagues designed several versions of what would become the Raspberry Pi between 2006 and 2008. With the power and affordability of processors available to mobile devices by then, it became apparent that their computer could also provide multi-media as well as programming to make it more attractive to the younger generation. In 2008, Upton and some like-minded colleagues formed the Raspberry Pi Foundation to try to make the concept a reality.

By October 2011, a demo version of the Raspberry Pi was made public, and the first models went on sale in February 2012. Within a year, more than one million Raspberry Pi boards had been sold through licensed manufacturing deals. The response from schools, colleges and universities was particularly encouraging, with Raspberry Pis becoming increasingly commonplace in the classroom. Aside from the education sector, there's been great interest from hospitals and museums too, who recognise that the Raspberry Pi offers a much cheaper alternative to conventional PCs. For the same reason, there's a lot of interest from the developing world in this affordable but powerful technology.

It's in the classroom where the Raspberry Pi is set to have the most dramatic effect though. Whole communities have grown up around the small computer as kids learn how to use and get creative with it.
And as more schools and homes find that the Raspberry Pi is a great – and cheap – way to learn about computers and computing, we'll see a whole new generation of truly computer-savvy individuals coming through. As the Raspberry Pi Foundation says: "We want to see cheap, accessible, programmable computers everywhere… we want to break the paradigm where without spending hundreds of pounds on a PC, families can't use the internet. We want owning a truly personal computer to be normal for children and we're looking forward to what the future has in store." There's a quiet revolution going on and the Raspberry Pi is at the heart of it. Until next time…
<urn:uuid:5515763b-c0dd-42d3-917d-024a8981c3f5>
CC-MAIN-2022-40
https://www.comms-express.com/blog/raspberry-pi-small-but-powerful/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00264.warc.gz
en
0.968414
818
3.328125
3
Cracking or discovering a simple password is not difficult, but it can be time-consuming. It just got a bit quicker. We discuss and demonstrate multiple ways to do that in Learning Tree Course 468, System and Network Security Introduction. One mechanism we do not demonstrate is the use of graphics cards (GPUs) to do the processing. That is becoming an increasingly popular method for password cracking as graphics cards become more powerful.

A case in point is the new Nvidia GeForce RTX 3090. Designed for gaming, the card lists for around USD 1,500, but comes with a market-topping 10,496 cores and 24 GB of GDDR6X memory! The huge number of cores and the large memory can make tasks such as password discovery very quick. The Passcovery tool already supports the new card. I haven't seen any password cracking benchmarks for the new card at this writing, but it is expected to support well over a billion guesses per second. That could spell doom for simple passwords not on the haveibeenpwned.com list. The list contains many simple passwords and can, of course, be searched quickly, but the new card will make searching for variants easy.

I have written about the dangers of weak passwords before. The issue is more serious now. What is considered "simple" has been significantly expanded. Sure, a serious attacker could have used a collection of graphics cards to attack passwords in the past. It could even have been a network of compromised computers. But a fast new card makes it even easier.

What should I do? Change all your simple passwords, of course. It can be difficult to manage more complex passwords, especially when they need to be longer and use a richer character set than in the past. Let's face it: a random mixture of 12 or 14 upper and lower case characters along with digits and punctuation is difficult to remember. Dozens of such passwords is impossible for all but the memory champions. If you have been procrastinating about getting a password manager, it is time to get to it.

The reality is processing power will continue to increase. The gaming market - and not the password cracking market - is driving the increase. Artificial Intelligence is another driver as AI techniques evolve. These are competitive markets with significant amounts of money. I hope that as the cards continue to get faster and cheaper, alternative authentication methods will become more viable. Few organizations have moved completely to tokens or biometric devices. Tokens as part of a password are common internally in many enterprises but are rare on the Internet. Unfortunately, it is websites where passwords tend to be vulnerable to disclosure. It is more difficult to deploy authentication using biometrics or USB tokens over the Internet. As text passwords become less safe, researchers are likely to find a safe way to deploy a stronger method.
Text messages or email as a second factor is a significant improvement, but the password is still a weak link.
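To put the guessing-rate numbers above into perspective, the following sketch estimates brute-force search times for random passwords drawn from upper case, lower case, digits and punctuation, assuming roughly one billion guesses per second (the figure quoted for an RTX 3090-class card). The character counts and guess rate are assumptions for illustration only.

```python
CHARSET = 26 + 26 + 10 + 32      # upper, lower, digits, common punctuation: ~94 symbols
GUESSES_PER_SEC = 1e9            # ~1 billion guesses/s, as estimated for the new card
SECONDS_PER_YEAR = 3600 * 24 * 365

for length in (8, 10, 12, 14):
    keyspace = CHARSET ** length
    years = keyspace / GUESSES_PER_SEC / SECONDS_PER_YEAR
    print(f"{length:2d} chars: {keyspace:.2e} combinations, about {years:.2e} years to exhaust")
```

Even at a billion guesses per second, a truly random 12-to-14-character password remains far out of reach of exhaustive search; the real danger is to short passwords, dictionary words and their simple variants.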
<urn:uuid:56e1a239-de4c-474a-ade9-8650128734a5>
CC-MAIN-2022-40
https://www.learningtree.ca/blog/password-cracking-just-got-easier/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00264.warc.gz
en
0.919011
805
3.4375
3
A Code Page (also referred to as Character Set or Encoding) is a table of values where each character has been assigned a numerical representation. A code page enables a computer to identify characters and display text correctly. Alteryx supports many code pages that you can select when you input and output data files via the Input Data tool and Output Data tool, or when you convert data types with the Blob Convert tool. Additionally, the ConvertToCodepage functions (available within tools that have an expression editor) can use code page identifiers to convert strings between code pages and Unicode®, the universal character-encoding standard for all written characters as created by the Unicode Consortium.

Alteryx assumes that a wide string is a Unicode® string and a narrow string is a Latin 1 string. If you convert a string to a code page, it will not display correctly. Therefore, code pages should only be used to override text encoding issues within a file. Code pages can be different on different computers or can be changed for a single computer, leading to data corruption. For the most consistent results, use Unicode®, like UTF-8 or UTF-16 encoding, instead of a specific code page, which allows different languages to be encoded in the same data stream. UTF-8 is the most portable and compact way to store any character and is used most often. Both UTF-8 and UTF-16 are variable-width encodings, but UTF-8 is compatible with ASCII and the files tend to be smaller than with UTF-16. For more information on code pages, go to MSDN Library.

To support the same functionality on Linux, Alteryx employs the ICU library. We use the same IDs that are on Windows, converting them to string ICU converters. ICU does not support the whole list of Windows encodings, and there can be differences when converting the data from one code page to another.

Code Page Identifiers

These code page identifiers are supported with the ConvertToCodepage functions. Go to Functions for more information.

| Identifier | Code Page | Support |
|---|---|---|
| 37 | IBM EBCDIC - U.S./Canada | Original engine and AMP. |
| 500 | IBM EBCDIC - International | Original engine and AMP. |
| 932 | ANSI/OEM - Japanese Shift-JIS | Original engine and AMP. |
| 949 | ANSI/OEM - Korean EUC-KR | Original engine and AMP. Not supported for the Download and Blob Convert tools. |
| 1250 | ANSI - Central Europe | Original engine and AMP. |
| 1251 | ANSI - Cyrillic | Original engine and AMP. |
| 1252 | ANSI - Latin I | Original engine and AMP. |
| 1253 | ANSI - Greek | Original engine and AMP. |
| 1254 | ANSI - Turkish | Original engine and AMP. |
| 1255 | ANSI - Hebrew | Original engine and AMP. |
| 1256 | ANSI - Arabic | Original engine and AMP. |
| 1257 | ANSI - Baltic | Original engine and AMP. |
| 1258 | ANSI/OEM - Vietnamese | Original engine and AMP. |
| 10000 | MAC - Roman | Original engine and AMP. |
| 28591 | ISO 8859-1 Latin I | Original engine and AMP. |
| 28592 | ISO 8859-2 Central Europe | Original engine and AMP. |
| 28593 | ISO 8859-3 Latin 3 | Original engine and AMP. |
| 28594 | ISO 8859-4 Baltic | Original engine and AMP. |
| 28595 | ISO 8859-5 Cyrillic | Original engine and AMP. |
| 28596 | ISO 8859-6 Arabic | Original engine and AMP. |
| 28597 | ISO 8859-7 Greek | Original engine and AMP. |
| 28598 | ISO 8859-8 Hebrew: Visual Ordering | Original engine. |
| 28599 | ISO 8859-9 Latin 5 | Original engine and AMP. |
| 28605 | ISO 8859-15 Latin 9 | Original engine and AMP. |
| 54936 | Simplified Chinese GB18030 | Original engine and AMP. Not supported for the Download and Blob Convert tools. |
| 65001 | Unicode UTF-8 | Original engine and AMP. |
| 1200 | Unicode UTF-16 | Original engine and AMP. |
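To see why Unicode encodings travel better than a locale-specific code page, here is a short Python illustration (using Python's codec names, not Alteryx function syntax): the same string encodes cleanly to UTF-8 and UTF-16, while a single-locale code page such as Windows-1252 cannot represent characters outside its repertoire.

```python
text = "Grüße 日本"  # Latin-style accented characters plus CJK characters

for encoding in ("utf-8", "utf-16", "cp1252"):
    try:
        raw = text.encode(encoding)
        print(f"{encoding:>7}: {len(raw)} bytes")
    except UnicodeEncodeError as err:
        bad = err.object[err.start:err.end]
        print(f"{encoding:>7}: cannot represent {bad!r}")
```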
<urn:uuid:74ec0e41-4f50-4350-8729-9afff7f3dadf>
CC-MAIN-2022-40
https://help.alteryx.com/20221/designer/code-pages
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00264.warc.gz
en
0.680532
991
2.984375
3
TCP and UDP are the two major protocols operating at the transport layer. The two protocols operate in different ways, and the choice between them is made based on the requirements of the application. TCP stands for Transmission Control Protocol, which guarantees data packet delivery, and UDP stands for User Datagram Protocol, which operates in datagram mode. TCP is a connection-oriented protocol, whereas UDP is a connectionless protocol. The following sections describe how TCP and UDP operate.

TCP is referred to as a reliable protocol; it is responsible for breaking messages up into TCP segments and reassembling them on the receiving side. The major purpose of TCP is to provide a reliable and secure logical connection, service or circuit between pairs of processes. Offering this type of service on top of a less reliable internet communication system requires facilities in areas such as security, precedence, multiplexing, reliability, connection management and basic data transfer. Two central functions of TCP are flow control and error recovery.

Because TCP is a connection-based protocol, it establishes a connection before any data is allowed to flow and terminates the connection upon completion. During connection establishment, the client and server agree on initial sequence and acknowledgement numbers, and the client implicitly notifies the server of its source port. The sequence number is a characteristic of each TCP data segment: it begins with a random number, and each time a new packet is sent, the sequence number is incremented by the number of bytes sent in the previous segment. Acknowledgement segments work in much the same way, but from the receiver's side. They contain no data, and their acknowledgement number is equal to the sender's sequence number increased by the number of bytes received. The ACK segment acknowledges that the host has received the data that was sent.

Because TCP is connection-oriented, the devices must open a connection before transferring data and must close the connection gracefully after the transfer is complete. TCP also assures reliable delivery of data to the destination. The protocol offers extensive error-checking mechanisms, including acknowledgement of data and the flow control mentioned above; it is relatively slow precisely because of these extensive error-checking mechanisms. Multiplexing and demultiplexing are possible in TCP by means of TCP port numbers, and retransmission of lost packets is only possible in TCP.

A larger maximum transmission unit (MTU) brings greater efficiency. The MTU is a necessary concept in packet-switching systems, and the path MTU equals the smallest link MTU on the path from the source to the destination. Path MTU discovery that relies on TCP to probe an internet path with progressively larger packets is most efficient when used in conjunction with an ICMP-based path MTU mechanism, as described in RFC 1191 and RFC 1981, but it resolves many robustness problems of the classic techniques because it never depends on the delivery of ICMP messages.

Internet Protocol version 6, also known as IP next generation, was proposed by the IETF as the successor to Internet Protocol version 4. The most significant difference between version 4 and version 6 is that version 6 increases the IP address size from 32 bits to 128 bits.
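The sequence and acknowledgement arithmetic described above can be sketched in a few lines of Python. This is a simplified model for illustration only (real TCP also consumes one sequence number for the SYN and FIN flags, which is ignored here).

```python
import random

seq = random.randint(0, 2**32 - 1)  # initial sequence number chosen at connection set-up
print(f"ISN = {seq}")

for payload in (b"GET / HTTP/1.1\r\n", b"Host: example.com\r\n", b"\r\n"):
    expected_ack = (seq + len(payload)) % 2**32  # receiver ACKs the next byte it expects
    print(f"send seq={seq:>10} len={len(payload):>2} -> expect ack={expected_ack}")
    seq = expected_ack  # the next segment starts where this one ended
```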
The links that a packet passes through from source to destination can have a variety of different MTUs. In IPv6, when the packet size exceeds a link's MTU, the packet can be fragmented at the source so as to reduce the processing pressure on forwarding devices and use network resources rationally. The path MTU (PMTU) mechanism identifies the minimum MTU on the path from source to destination.

MSS stands for maximum segment size. It is a parameter of the TCP protocol that specifies the largest amount of data a segment may carry; the default TCP MSS is 536. Every TCP device has an associated ceiling on segment size that is never exceeded regardless of how large the current window is. This ceiling is the maximum segment size. To decide how much data to put into a segment, each device in a TCP connection chooses the quantity based on the current window size, in conjunction with various algorithms, but never so large that the quantity of data exceeds the MSS of the device to which it is being sent. The MSS is the largest quantity of data that a communication or computer device can handle in a single, unfragmented piece. For optimum communication, the number of bytes in a data segment plus its headers must be less than the number of bytes in the MTU.

The MSS is an essential consideration in internet connections, especially in web browsing. When TCP is used over an internet connection, the computers that are connected must agree on, and set, a maximum transmission unit size acceptable to both. The typical MTU size for home computer internet connections is either 1500 or 576 bytes. The headers are 40 bytes long, so the MSS is equal to the difference: either 1460 or 536 bytes. In some cases the MTU size is less than 576 bytes, and data segments are smaller than 536 bytes.

As data is routed over the internet, it has to pass through multiple gateway routers. Ideally, each data segment passes through every router without being fragmented. If a data segment is too large for any of the routers through which it passes, the oversize segment is fragmented. This slows down the connection speed as seen by the computer operator, and in some instances the slowdown is dramatic. The likelihood of such fragmentation can be minimized by keeping the MSS as small as reasonably possible. For most computer operators, the MSS is set automatically by the operating system.

The speed of a data transfer such as TCP is largely determined by the line speed, while the delay is the round-trip time (RTT) of each data packet. Regardless of processor speed or software efficiency, it takes a finite amount of time to manipulate and present data. Whether the application is a web page showing a live camera or the latest news shot showing a traffic jam, there are many ways in which an application can be affected by latency. The key causes of latency are propagation delay, serialization, data protocols, switching and routing, and buffering and queuing. Any time a client computer asks a server for data, there is an RTT delay until it receives the response. Each data packet has to travel through a number of high-traffic routers, and there is always the speed of light as a limitation, considering the huge distances involved in internet communication.
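The MSS arithmetic above is simple enough to verify directly; the sketch below assumes the default 20-byte IP and 20-byte TCP headers with no options.

```python
def mss_for(mtu: int, ip_header: int = 20, tcp_header: int = 20) -> int:
    """MSS = link MTU minus the IP and TCP headers (40 bytes by default)."""
    return mtu - ip_header - tcp_header

for mtu in (1500, 576):
    print(f"MTU {mtu:>4} -> MSS {mss_for(mtu)}")
# MTU 1500 -> MSS 1460
# MTU  576 -> MSS 536
```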
The throughput of a TCP connection is limited by two windows: the congestion window and the receive window. Each TCP segment contains the current value of the receive window. The TCP windowing concept is mainly used to avoid congestion in the traffic: it controls the amount of unacknowledged data a sender may send before it must receive an acknowledgement back from the receiver confirming that the data arrived. The bandwidth-delay product (BDP) determines the amount of data that can be in transit in the network. It is an important concept in window-based protocols such as TCP, because throughput is bounded by the BDP. The TCP receive window and the BDP limit the connection to the product of latency and bandwidth; the transmission rate will not exceed the value RWIN / latency. In other words, this is the amount of data that can be sent before one should reasonably expect an acknowledgement.

TCP global synchronization in computer networks happens to TCP flows during periods of congestion, because every sender reduces its transmission rate at the same time when packet loss occurs. All the TCP streams behave the same way, so they eventually become synchronized, ramping up until they cause congestion and then backing off at roughly the same rate. This produces the familiar saw-tooth pattern in bandwidth utilization graphs. RED and WRED help to alleviate it.
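A quick calculation shows how the receive window caps throughput at RWIN / RTT, and how much data must be in flight to fill a given path. The link speed, RTT and window size below are assumed values for illustration.

```python
bandwidth_bps = 100e6        # 100 Mbit/s link (assumed)
rtt_s = 0.080                # 80 ms round-trip time (assumed)
rwin_bytes = 64 * 1024       # 64 KB receive window (assumed)

bdp_bytes = (bandwidth_bps / 8) * rtt_s    # bytes that must be in flight to fill the pipe
ceiling_bps = (rwin_bytes * 8) / rtt_s     # throughput cannot exceed RWIN / RTT

print(f"Bandwidth-delay product: {bdp_bytes / 1024:.0f} KB")
print(f"Throughput ceiling with a 64 KB window: {ceiling_bps / 1e6:.1f} Mbit/s")
```

With these numbers the 64 KB window fills only a fraction of the roughly 1 MB bandwidth-delay product, which is why window scaling matters on long, fast paths.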
The User Datagram Protocol (UDP) is a datagram-oriented protocol with no overhead for opening a connection with a three-way handshake, closing the connection, or maintaining the connection. UDP is very efficient for multicast or broadcast types of network transmission. Its only error-checking mechanism is the checksum. There is no sequencing of data in UDP, and delivery of the data cannot be guaranteed. It is simpler, more efficient and faster than TCP, although it is less robust. Demultiplexing and multiplexing are possible in UDP by means of UDP port numbers; however, there is no retransmission of lost packets in UDP. As a connectionless protocol, it is not a reliable protocol compared to TCP. It performs only fundamental error checking and never offers any sequencing of data, so data may arrive at the destination device in a different order from which it was sent. This happens in large networks like the internet, where datagrams take various paths to the destination and experience different delays at different routers. UDP is essentially IP with transport-layer port addressing, which is why it is sometimes known as a wrapper protocol.

The last 16 bits of the UDP header are reserved for a checksum value. The checksum is used as an error-detection tool, and the checksum calculation also includes a 12-byte pseudo header that contains the destination and source IP addresses. This pseudo header is useful for checking that the IP datagram arrived at the correct station.

TCP starvation, or UDP dominance, is experienced at times of congestion when TCP and UDP streams are assigned to the same class. Because UDP has no flow control to cause it to back off while congestion is taking place, but TCP does, the TCP flows end up backing off and allowing ever more bandwidth to the UDP streams, to the point where UDP takes over completely. This is not helped by WRED, because drops caused by WRED do not affect the UDP streams. The best way to resolve the issue is to classify the TCP and UDP streams separately wherever possible.

Latency is the end-to-end delay. As mentioned above, UDP is connectionless, so the real effect of latency on a UDP stream is simply a greater delay between the sender and the receiver. Jitter is the variance in latency, and it causes problems for UDP streams; jitter can be smoothed by buffering.

From the sections above, it is possible to learn TCP and UDP operations in detail, and it is essential to understand the differences between the two. Connection-oriented and connectionless protocols are used for a variety of purposes depending on usage and requirements. This explanation should help you understand the operations, as well as MSS, latency, global synchronization, bandwidth-delay product, windowing, and IPv4/IPv6 path MTU under TCP, and latency and starvation under UDP.
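As a concrete companion to the comparison above, the snippet below shows the difference at the socket level: a UDP datagram can be handed to the network with no handshake, while TCP must complete its three-way handshake in connect() before any data moves. The addresses and ports are placeholders and assume local listeners are running.

```python
import socket

# UDP: connectionless -- the datagram is sent immediately, with no handshake
# and no guarantee of delivery, ordering, or retransmission.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"telemetry sample", ("127.0.0.1", 5005))
udp.close()

# TCP: connection-oriented -- connect() performs the three-way handshake,
# the stack numbers every byte, and lost segments are retransmitted.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 5006))   # raises ConnectionRefusedError if nothing is listening
tcp.sendall(b"telemetry sample")
tcp.close()
```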
<urn:uuid:e1ac7c71-54f0-4ac5-bdca-c99840416510>
CC-MAIN-2022-40
https://www.examcollection.com/certification-training/ccnp-explain-tcp-and-udp-operations.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00264.warc.gz
en
0.94772
2,417
3.9375
4
The Internet of Things movement has been emerging as widespread network connectivity and devices that can communicate with one another automatically over the network proliferate. These technological advances come together to form an Internet of Things - a network in which a variety of devices, ranging from toasters to sensors, drones and even light bulbs, can respond intelligently to data from other nearby devices.

"BPM solutions put IoT data in the right context for end users."

According to a recent Mashable report, the breadth of IoT capabilities is set to revolutionize urban landscapes the world over, presenting a variety of new challenges and possibilities for engineers, city planners and other urban stakeholders.

The connected city is not a future pipe dream

According to the news source, the long-term vision for intelligent systems for street lamps, traffic lights, utility systems and even sidewalks is not some kind of science-fiction dream. Smart cities are emerging around the world, ranging from locations like Masdar City in Abu Dhabi that are fully embracing IoT technologies to Los Angeles, where street lights are synchronized to improve traffic management.

Jason Kelly Johnson, cofounder and design partner at Future Cities Lab, told the news source that the range of IoT possibilities is huge. "I can't honestly think of a field where [the IoT] won't have some effect," Johnson told Mashable. "In architecture, specifically, it will in fact shape public space; it will intersect in a visible and tangible way."

Cisco estimates that approximately 50 billion devices will be connected to networks. The number of connected devices already exceeded the number of humans in 2008. The IoT is using this widespread network access to fuel communication between devices, adding a layer of intelligence to machine-to-machine communications that could have a huge impact on what people are able to achieve.

Unlocking the full potential of the IoT through business process management

BPM tools empower organizations to connect data and processes across disparate user groups. For example, a municipal government using the IoT to manage its utility operations can get data from sensors on power lines in real time, immediately notifying employees if a line is damaged, leading to a power outage. This information on its own is extremely useful, but BPM enables organizations to establish an application platform that also sends the outage details to relevant emergency responders, government departments and other user groups that will be impacted by the outage. The IoT gets the data to relevant users, and machines, efficiently, and BPM solutions put that information in the correct context to be of maximum use to different users.

Appian is the unified platform for change. We accelerate customers' businesses by discovering, designing, and automating their most important processes. The Appian Low-Code Platform combines the key capabilities needed to get work done faster, Process Mining + Workflow + Automation, in a unified low-code platform. Appian is open, enterprise-grade, and trusted by industry leaders.
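As a conceptual sketch of the pattern described above - sensor data fanned out to every user group whose process depends on it - consider the following Python fragment. The event types, group names and routing table are hypothetical; they are not Appian objects or APIs.

```python
# Hypothetical routing table mapping an IoT event type to interested user groups.
SUBSCRIPTIONS = {
    "power_line_fault": ["utility_field_crew", "emergency_responders", "city_operations"],
    "traffic_signal_fault": ["traffic_management", "city_maintenance"],
}

def route_event(event: dict) -> list:
    """Fan a sensor event out to every group whose process depends on it."""
    groups = SUBSCRIPTIONS.get(event["type"], [])
    for group in groups:
        print(f"notify {group}: {event['type']} at {event['location']}")
    return groups

route_event({"type": "power_line_fault", "location": "feeder line, sector 7"})
```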
<urn:uuid:b7239ffc-89a5-4332-990a-a8bd911068df>
CC-MAIN-2022-40
https://appian.com/blog/2015/breadth-of-iot-highlights-importance-of-process-innovation.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00264.warc.gz
en
0.931675
590
2.71875
3
We live in an age where data must be secure. Even when the threat of malicious access to data is non-existent or minimal, we must do our best to guarantee the security of data for compliance reasons. One of the simplest ways to do this is by using self-encrypting drives, a hardware, on-disk form of encryption available from all the storage array makers.

Data lives on storage media - for most of its existence on spinning disk and flash. Access to these has to be secured in the datacentre, when drives are removed from the array, and at the end of life when they are disposed of; in other words, when data is at rest.

Self-encrypting drives carry encryption hardware, with a data encryption key built in during manufacturing. When drives ship, this key is set as unlocked by an authentication key and data can be read by any device. Encryption on the drive is activated by the customer changing the factory authentication key to a private one, with that key held on a key management server. Subsequently, this key is applied during boot-up and encrypts data on the drive all the time it is running. When drives are inactive, however, they are fully protected and can't be run on another machine (excepting the malicious circumstances described below).

More on compliance and storage
- GDPR puts tough requirements on organisations that store "personally identifiable data". We look at what is needed to achieve compliance.
- The General Data Protection Regulation is upon us. Mathieu Gorge, CEO of Vigitrust, talks you through the key areas needed for compliance in storage of data subjects' data and how to find it quickly on request.

There is no performance impact because drive unlocking occurs at start-up. The key encryption standard used is the Trusted Computing Group's Enterprise standard (TCG-E) and, for consumer hardware such as laptops, the TCG's Opal 2. TCG standards employ Advanced Encryption Standard (AES) 256-bit encryption. TCG-E compatibility is available in SAS and SATA drives, which can be spinning disk or flash, while it appears that NVMe flash drives are currently TCG Opal-only.

It should be noted that self-encrypting disks address a limited set of threats. They don't protect the data while the server or array is running, only when it is powered down or removed. If there is a threat to self-encrypting drives, it is from malicious insiders. That said, self-encrypting drives do deal with basic compliance requirements. But they are vulnerable to a number of types of attacks, according to this report at Blackhat.com. These include various methods by which attackers either physically access the drive, force a restart so it gives up its data to another OS on the attacker's machine, or gain the authentication key. As mentioned above, though, for any of these vulnerabilities to be exploited, an attacker needs physical access to the drives.

Sometimes, use of self-encrypting drives is known as full disk encryption. If the term simply refers to the use of self-encrypting disks, it is sometimes known as hardware full disk encryption. There is also software full disk encryption, in which an encryption application protects data. This has to be authenticated by the user and has a performance overhead. It is often used with laptops and can be managed from a central console in an enterprise scenario.
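To make the AES-256 building block concrete, here is a small software illustration using Python's cryptography library. It shows authenticated AES-256-GCM encryption of data at rest; it is not what drive firmware runs internally, and in a real deployment key handling belongs in a key management server as described above.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES-256, the strength used by TCG standards
aead = AESGCM(key)

nonce = os.urandom(12)                      # must be unique for every encryption with this key
plaintext = b"contents of a sensitive file"
ciphertext = aead.encrypt(nonce, plaintext, None)

assert aead.decrypt(nonce, ciphertext, None) == plaintext
print(f"{len(plaintext)} plaintext bytes -> {len(ciphertext)} ciphertext bytes (includes 16-byte auth tag)")
```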
<urn:uuid:d5db4b4c-f1f0-44d1-9dd5-66acd43831a1>
CC-MAIN-2022-40
https://www.computerweekly.com/feature/Storage-101-Self-encrypting-drives-benefits-and-limitations
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00264.warc.gz
en
0.950171
736
2.53125
3
The United States is reliant on space-based capabilities. In-orbit platforms, payloads, and other satellites form a virtual exoskeleton for our nation's critical infrastructure. Communications, transportation, trade, financial services, weather monitoring, and critical defense systems all depend on our expansive network of satellite constellations and other space-based assets. Historically, the vulnerability of these resources has often been overlooked in wider discussions of cyber threats to our critical infrastructure. However, like any digitally networked system, these assets are highly vulnerable to cyber-attacks.

Cyber risks for space-based assets take many forms. While one attack might involve the jamming, spoofing, or hacking of communications and navigation systems, another may target critical control systems or specific mission payloads, shutting down satellites, altering their orbits, or permanently damaging assets through deliberate exposure to harmful radiation. Moreover, cyber attacks on space-based assets often have widespread collateral effects. For instance, the US National Oceanographic and Atmospheric Administration (NOAA) Satellite Data Information System was taken offline in September 2014 after a serious hacking incident, denying high volumes of data to worldwide weather forecasting agencies for 48 hours.

From a national defense standpoint, Invictus recognizes the impact is even greater. A cyber attack on a critical DoD asset has the potential to undermine the integrity of a strategic weapons system and destabilize the deterrence strategy meant to dissuade an adversary. An attack on an asset may also cast doubt on collected intelligence, increasing the risk of misperception during a time-sensitive military crisis.

Traditionally, cyber protection of space assets is based upon a hardened ground segment combined with encrypted communications relays to and from the spacecraft. In orbit, the assumption is made that the encrypted streams ensure appropriate levels of cyber security across onboard systems and controls, and few additional cyber defenses are integrated. As a result, if an actor is able to gain access to the ground segment or insert malware into an onboard component, there are inadequate preventative measures to stop direct, full control of the asset.

In response, US policy has recently pivoted from enhancing the basic "survivability" of an asset (active/passive defense measures) to driving the "resilience" of space-based missions. Resilience incorporates traditional approaches to survivability with the operational aspect, enabling a mission to endure the loss of one or more nodes, assets, or ground system elements.

Invictus understands cyber resilience can only be achieved by addressing the catalyst of our vulnerabilities. A tradeoff must be performed for each mission to understand whether increasing the survivability of a single asset, or approaching the mission through the proliferation of smaller, more agile assets, better guarantees mission success. This tradeoff must further be in alignment with the Risk Management Framework (RMF) to ensure policy and standards are met. As experts in cyber governance for the Intelligence Community, Invictus is growing our capability to apply RMF to non-traditional IT systems and operational assets, such as aircraft, ships, space systems and other platforms. Our team understands cyber resiliency and stands ready to address the challenge of "defending the gates" from orbit.
<urn:uuid:e37664eb-5ba5-4d90-a02d-425f5c3e9235>
CC-MAIN-2022-40
https://invictusic.com/articles/driving-cyber-resilience-for-space-based-missions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00264.warc.gz
en
0.925781
660
3.015625
3
To gain an understanding of what a Cross-site Request Forgery (CSRF) vulnerability is, please see the article entitled: "What Is Cross-Site Request Forgery?". For insights into how to detect Cross-site Request Forgery (CSRF) vulnerabilities within web applications, see the article entitled: "How To Test For Cross-Site Request Forgery".

Cross-Site Request Forgery

For the purposes of this explanation, we will assume that you have either detected a critical transaction that has been determined to be vulnerable to Cross-site Request Forgery (CSRF) attack, OR are in the process of developing a critical transaction and wish to avoid the problem. We will refer to the subject HTTP request as the "transaction under consideration".

Recall from "What Is Cross-Site Request Forgery?" that exploitation involves an attacker tricking an authenticated end-user into submitting an HTTP request representing a critical transaction. Preventing Cross-site Request Forgery (CSRF) is as simple as making sure the application verifies that the transaction under consideration actually came from the client application and not a link outside the application.

The solution is simple: generate a (cryptographically strong) random code on the server and store it in the user's session. Whenever a page associated with a critical transaction is produced, include the secret code in a manner that will be provided in the HTTP request WITHOUT USING A COOKIE. Verify the secret code against the value stored in the session when the request is received, and reject the request if the code is missing or does not match. This ensures that the request was submitted by the client interface, because an attacker is unaware of the value of the secret code.

Using a Hidden Field

In the simplest (and most common) case, the transaction under consideration is the result of a form that the end-user must complete and submit. The solution is to include a hidden field within that form that contains the secret code, which is submitted with the form and verified by the application when the request is received.

Using a URL Parameter

Another design scenario would be a standalone button that must be protected. This could be accomplished by building the secret code into the button's "action" URL as a query parameter value that would be verified by the application when the request was received.

Using an HTTP Request Header

Another potential approach would be to add the secret code to an HTTP request header. A slight variation on the approach would be to generate the secret code more frequently than once per session. This would be advised in the design scenario in which the secret code appears within URLs.

It is also worth noting that some development frameworks automatically provide anti-CSRF tokens in their forms. It is worth investigating whether a turn-key solution is available in your development environment before you "roll your own" solution. Frameworks that include anti-CSRF support include:
- ASP .NET

Notice that cookies are NOT a viable solution for transferring the secret code, because a cookie would be automatically returned with every request to the application, defeating our purpose in trying to thwart the CSRF attacker. Additional details on the problem and other solution approaches can be found in the OWASP XSRF Prevention Cheatsheet.

In short, Cross-Site Request Forgery (CSRF) is not difficult to prevent and there are several viable approaches.
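A minimal sketch of the synchronizer-token pattern described above, in Python (framework-agnostic; the session dictionary stands in for whatever server-side session store your application uses):

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a cryptographically strong token and keep it server-side in the session."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embed in a hidden field, URL parameter, or request header -- never a cookie

def verify_csrf_token(session, submitted):
    expected = session.get("csrf_token")
    if not expected or not submitted:
        return False
    return hmac.compare_digest(expected, submitted)  # constant-time comparison

# Rendering the page: embed the token in a hidden field.
session = {}
token = issue_csrf_token(session)
form = (f'<form method="POST" action="/transfer">'
        f'<input type="hidden" name="csrf_token" value="{token}">...</form>')

# Handling the POST: reject the request if the token is missing or wrong.
assert verify_csrf_token(session, token)
assert not verify_csrf_token(session, "value-guessed-by-attacker")
```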
The challenge often turns out to be awareness of the problem and identifying the critical transactions within the application.
If your system spreads across multiple networks, or you allow your users to connect to the main server over the Internet, you must configure the network view and add the additional networks.
- Open the Network view task.
- If you are creating a subnet, select the parent network in the network tree. Otherwise, select the Default network.
- Click Network, and enter the name of the network entity. You are automatically placed in the network's Properties tab.
- From the Capabilities drop-down list, select the data transmission type for streaming live video on the network. Tip: Always select the largest set of capabilities that your network supports (a conceptual sketch of this ordering follows the steps).
  - Unicast TCP: Unicast (one-to-one) communication using the TCP protocol is the most common mode of communication. It is supported by all IP networks, but it is also the least efficient method for transmitting video.
  - Unicast UDP: Unicast (one-to-one) communication using the UDP protocol. Because UDP is a connectionless protocol, it works better for live video transmission. When network traffic is heavy, UDP is much less likely to cause choppy video than TCP. A network that supports unicast UDP necessarily supports unicast TCP.
  - Multicast: The most efficient transmission method for live video. It allows a video stream to be transmitted once over the network and received by as many destinations as necessary. The gain can be very significant when there are many destinations. A network supporting multicast necessarily supports unicast UDP and unicast TCP. NOTE: Multicast requires specialized routers and switches. Confirm this with your IT department before setting the capabilities to multicast.
- Under the Routes section, verify that all the routes created by default are valid.
  - You may have to change the default capabilities, or force the use of private addresses when public addresses cannot be used between servers within the same subnet. To edit a route, select it in the list and click Edit the item.
  - If there is no connection between this network and another network on the system, select the route and click Delete.
  - You may want to add a direct route between this network and another child network, bypassing its parent network.
- Click Apply.
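The capability levels described above are cumulative: multicast implies unicast UDP, which in turn implies unicast TCP. The short Python sketch below is purely conceptual, is not part of the Security Center product or its API, and simply encodes the "pick the largest supported set" rule from the tip above.

```python
from enum import IntEnum

class Capability(IntEnum):
    """Ordered so that a higher value implies support for all lower ones."""
    UNICAST_TCP = 1
    UNICAST_UDP = 2
    MULTICAST = 3

def best_capability(supported):
    """Return the largest capability in the set the network supports."""
    return max(supported)

def implied_capabilities(selected):
    """A network configured at `selected` also supports everything below it."""
    return [c for c in Capability if c <= selected]

network = {Capability.UNICAST_TCP, Capability.UNICAST_UDP}
choice = best_capability(network)                      # -> UNICAST_UDP
print(choice.name, [c.name for c in implied_capabilities(choice)])
```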
NASA and the European Space Agency have tested a prototype system that could truly give new meaning to long-distance calling. It could help enable Internet-like communications between Earth and other planets. To demonstrate very long-distance remote control, ISS Commander Sunita Williams last month conducted an experiment using NASA’s Disruption Tolerant Networking (DTN) protocol to drive a Lego robot located on Earth at the European Space Operations Center in Germany. Unlike other types of communication, DTN is not built around direct line of sight. “The alternative means of communication is essentially direct line of sight,” said Kevin K. Gifford, Ph.D., of the University of Colorado’s Aerospace Engineering Sciences department. “From there, you have a direct network. That means the Mars Rover needs to be on a particular side of Mars for someone on Earth to control it,” he explained. While popular science fiction might suggest instant communication across the cosmos is possible, the truth is that it’s unlikely to occur anytime in the near future. Since direct line of sight occurs only when planets line up, an alternate technology is needed to solve the communications problem. This is where DTN comes in — but it still won’t resolve the time-delay issue. “For Mars, there is a minimum of eight minutes to communicate based on the orbit location,” said Gifford, who was part of the team at CU Boulder that worked with NASA to build the DTN network for the ISS. “DTN is not going to overcome the speed of light, which is the fastest means of electromagnetic communications,” he pointed out. Instead, it works by storing data at intermediate nodes, such as a satellite, until a downstream communication channel is available. This allows the communication to sidestep the line-of-sight requirement for connecting data, which is in essence an all-or-nothing option. With line of sight, the information either gets through or it does not. DTN technology sends information a relatively short distance, where it can be stored, and then sends again when a communication channel opens up. “It is a little like a basketball as you are moving the ball towards the goal, and a player can wait to pass the ball,” Gifford told TechNewsWorld. “Anytime there is not a direct path, the ball can be held in place. This makes it more efficient for communicating over long distances.” Outer Space Networking The technology would of course limit real-time communications — but then again, even direct line of sight can’t overcome the eight-minute delay between Earth and Mars, for example. Taking that into account, DTN could be more like email than instant messaging. However, it has other advantages as well. “Think of this technology as ultra-high latency and ultra-reliable,” said Rob Enderle, principal analyst of the Enderle Group. “It is designed to function where no other networking technology could function. It uses a lot of prediction and redundancy to compensate for the problems it is designed to solve, and it drives a lot of intelligence into the network,” he told TechNewsWorld. It could be used to keep information running and communication going, albeit slowly, even in situations when normal networks might have failed catastrophically. “This is on the evolutionary path to communications technologies used for space exploration and it’s ideal for warfare — and Defense is likely the biggest funding source for it,” Enderle said. 
“It could also mitigate problems associated with storm-damaged networks, which is topical at the moment; networks in emerging countries, which aren’t very reliable; networks that are under cyberattack; and networks that are failing for other reasons,” he suggested.

DTN for First Responders
Because it has the ability to act as a backup system of sorts when other networks go down, DTN is also being integrated into emergency networks. “This is the national public safety broadband network, which was funded in March of this year,” said Gifford. “It provides (US)$7 billion to upgrade the U.S. first responder network.” DTN could enable communications if existing networks should fail or become inaccessible. Data could be transmitted to various nodes, held until communications were restored or accessed, and the information could then be passed on. DTN will be optimized through the commercial infrastructure in the event of an emergency, Gifford added. “It has direct applications for the public safety network.”
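The store-and-forward behaviour described above (holding data at an intermediate node until a downstream link becomes available) can be illustrated with a short Python sketch. This is a toy model for intuition only; it is not the actual DTN bundle-protocol implementation used by NASA, and the node names and message are made up.

```python
from collections import deque

class DtnNode:
    """A toy delay-tolerant node: bundles are stored until a forwarding contact occurs."""

    def __init__(self, name):
        self.name = name
        self.buffer = deque()   # bundles held in custody until the next hop is reachable

    def receive(self, bundle):
        # The node takes custody of the data immediately, even with no onward route yet.
        self.buffer.append(bundle)

    def contact_window(self, next_hop):
        """Forward everything held while a communication window (e.g., line of sight) is open."""
        while self.buffer:
            next_hop.receive(self.buffer.popleft())

# Earth ground station -> relay satellite -> rover, with intermittent links.
ground, relay, rover = DtnNode("ground"), DtnNode("relay"), DtnNode("rover")
ground.receive("drive 2 m forward")   # command queued while no link exists
ground.contact_window(relay)          # first hop becomes available
relay.contact_window(rover)           # later, the relay comes into view of the rover
print(rover.buffer)                   # deque(['drive 2 m forward'])
```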
If you are still running an insecure HTTP (Hypertext Transfer Protocol) website, many of your visitors might already be greeted with a 'Not Secure' message in their Google Chrome browser, warning them that they cannot trust your website to be secure. For every HTTP site, Chrome shows a 'Not Secure' label in front of the web address (URL), indicating that the connection is not secure because there is no SSL certificate to encrypt the data transmitted between the visitor's computer and the site's server. This 'Not Secure' label has a real impact on your site, and switching to HTTPS addresses it:
- The label makes your visitors think they are not on a secure connection.
- HTTPS will improve your SEO and is reflected in Google Search results.
- HTTPS improves your site's security and gains users' trust.
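A quick way to check whether your site has already moved off plain HTTP is to see if HTTP requests are redirected to HTTPS. The sketch below uses the third-party requests library; the example domain is a placeholder, and in practice you would point it at your own site.

```python
import requests  # third-party: pip install requests

def check_https_redirect(domain):
    """Report whether http://<domain> ends up on an https:// URL after redirects."""
    response = requests.get(f"http://{domain}", allow_redirects=True, timeout=10)
    redirected = response.url.startswith("https://")
    hops = " -> ".join(r.url for r in response.history) or "(no redirects)"
    print(f"{domain}: final URL {response.url}")
    print(f"  redirect chain: {hops}")
    print("  OK: visitors are upgraded to HTTPS" if redirected
          else "  WARNING: site is still served over plain HTTP")

check_https_redirect("example.com")  # placeholder domain
```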
When deploying an Internet of Things (IoT) strategy, it is important to consider the longevity of the network as well as the longevity of the devices that will be attached to said network. The benefit of the IoT lies in its relatively low device costs, but this benefit can be all but negated when the device needs regular servicing or replacement. Avoiding an expensive service truck roll is paramount, especially when it is just to change out a device battery. Low-power networks typically serve devices which are battery powered, and ideally, these devices should remain operational for years, even decades without the need for attention. Changing IoT device batteries can harm profitability, but it can be even more harmful to the environment. According to the U.S. Environmental Protection Agency (EPA), each year Americans throw away more than three billion batteries, which equates to about 180,000 tons of waste. While modern batteries do not contain the same harmful properties of their predecessors, choosing to recycle these batteries is still a better choice as they can take decades to break down in a landfill. Although the U.S. is a leader in IoT device adoption, it lags far behind other countries in battery recycling initiatives. It is estimated that European countries such as Switzerland, Belgium and Sweden recycle more than 50% of all batteries, while the United States is in the low double digits for battery recycling, despite the prevalence of clean initiatives throughout the nation. Aside from changing habits, we should look at changing consumption. As advances in battery life technology continue to progress, devices can utilize fewer and fewer batteries throughout their lifetimes. Additionally, designing an IoT network to effectively transmit data at very low power can also help extend battery life. So, when implementing an Internet of Things strategy, remember that the battery is one of the most important “things” that should be considered. For more discussion on this topic, download our white paper, The Six Secrets to Extremely Long Battery Life.
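To see why transmit power and duty cycle dominate battery life, consider a rough back-of-the-envelope estimate. The figures below (battery capacity, sleep and transmit currents, reporting interval) are illustrative assumptions, not values from the white paper referenced above.

```python
# Rough battery-life estimate for a low-power IoT sensor (illustrative numbers).
BATTERY_MAH = 2600.0        # one AA-sized lithium cell, assumed usable capacity
SLEEP_MA = 0.005            # 5 microamps while sleeping
TX_MA = 45.0                # radio current while transmitting
TX_SECONDS = 1.5            # airtime per report
REPORTS_PER_DAY = 24        # one report per hour

seconds_per_day = 24 * 3600
tx_time = REPORTS_PER_DAY * TX_SECONDS
sleep_time = seconds_per_day - tx_time

# Average current is the time-weighted mix of sleeping and transmitting.
avg_ma = (SLEEP_MA * sleep_time + TX_MA * tx_time) / seconds_per_day
life_hours = BATTERY_MAH / avg_ma
print(f"average current: {avg_ma:.3f} mA, estimated life: {life_hours / 24 / 365:.1f} years")
```

With these assumed numbers the estimate comes out above a decade, which is why reducing transmit time and power matters far more than the choice of battery alone.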
The Complete IP Phone System Configuration Guide Did you know that business phones with richer features that are more affordable to use and maintain than your current landline phones are already available? Did you also know that these advanced office phones are similar to your personal smartphones — mobile and portable? These latest communication devices, known as IP phones are taking the world by storm. Experts predict that Internet Protocol (IP) phones will be a major contributor to the fast growth of the VoIP market worldwide. Global Market Insights, a market research firm based in Delaware, projects the global Voice over Internet Protocol (VoIP) industry will reach around $55 billion by 2025 up from $18 billion in 2018. Why are IP phones growing in popularity? Does your business really need one? Read on and find out. What Is An IP Phone System? First of all, let’s understand what VoIP is by defining it. VoIP is a communications technology that allows users to conduct telephone calls through an internet connection rather than via an analog phone system or a telephone line. It does not primarily depend on physical networks such as wires and cables. Mobile phones or landlines connected to the internet can make and take VoIP calls, which can also be conducted on desktop computers or laptops with microphones, speakers, or headsets. A high-speed internet connection, some communication devices, and a phone service provider that regularly offers you VoIP service are essentially what comprises a VoIP phone system in your business. That said, IP phones are actual office phones using a VoIP phone system. They look like traditional desk phones and you may have come across or used these latest devices without knowing it. An article on the data solutions provider TechTarget website explains that IP phones are hardware and software-enabled devices built to use VoIP technology to make calls via an IP network such as the internet. These phones can send calls through the internet by turning analog audio format into digital format. The incoming digital signal is then changed from the internet to regular phone audio. IP phones consist of features that are missing in conventional analog handsets. Since phone calls are made through the internet rather than the Public Switched Telephone Network (PSTN), these latest phones need extra performance requirements. An IP phone system consists of an IP phone and an IP PBX, otherwise known as a VoIP private branch exchange (PBX). The system is run by a VoIP service provider using your office computer network called the local area network (LAN). What is an IP PBX? An important part of the IP phone system is the IP PBX. The traditional or standard PBX is known as “on-premise,” meaning that the hardware system is within your office or property. On this platform, you and your staff handle and maintain the system’s equipment and devices. Conversely, a hosted IP PBX is a system that stores data and information in the cloud, which can only be accessed via the internet. With this setup, there is no need to install a hardware system that needs your routine care. A service provider or a third-party vendor will host and maintain this cloud-enabled telephony service for you in a safe and secure off-site data center. IP PBX, likewise known as cloud PBX or virtual PBX, is a PBX solution delivering features and services that are not different from an on-premise or on-site PBX. 
This means you still get the advantages the network brings to your organization, such as remote connectivity, call routing, conference calls, and call transferring. However, all of your communication activities are conducted virtually or in the cloud, so a physical location for storing the necessary equipment is no longer needed. Additionally, an external provider that offers IP PBX service has experts, professionals, or technicians who can guarantee that your network runs at maximum capacity to deliver high-quality calls and reliable security for your business. The provider also oversees the data centers to prevent serious disruptions in your communications system. How Does An IP Phone System Work? With regard to carrying out a phone call, an IP phone works almost the same as a traditional telephone but with a few technical variations. An IP phone transforms voice calls into digital signals that are transmitted mostly through the internet. It can either run through VoIP-based physical phones or as a virtual phone software installed on a mobile device (smartphone or tablet) or a computer (desktop or laptop). To switch to an IP phone, the Dynamic Host Configuration Protocol (DHCP) requires certain networking components to assign IP addresses to your devices. The DHCP instantly sets your system and its parameters. A domain name system that considers the IP addresses is also needed for your office phones to work. Features Of An IP Phone System IP phones provide more capabilities than their counterparts and the number of features depends on the brand or model. The features may include recordings or call logs, integration with customer relationship management (CRM), easy conference call access, online faxing, call recording, call routing, auto-attendant, text messaging, video calling, team chat, mobile and desktop apps, call analytics, and speech-to-text transcription, IP phones are built with Bluetooth technology in devices for microphones, speakers, handsets, and headsets. Another unique feature of an IP phone system not found in a traditional one is that you can conveniently transfer your current phone numbers from one provider to another. You don’t need special numbers or to change your existing phone numbers should you switch to a new service provider. This device lets you keep your number when changing your vendor. You can also achieve a cloud-based business phone system for small businesses or home offices with IP phones. A phone network relying on cloud technology enables your business to make phone calls, organize conference calls, or transmit messages from any VoIP-powered device, whether it be a desktop or mobile device, without acquiring additional phones. Despite these benefits, an IP phone system has some drawbacks. It’s essential to weigh the advantages and disadvantages before making a decision to adopt the network. The features can be helpful but may not be suitable for your current business situation. IP Phones: Software And Hardware-Based The market is packed with IP phones of different shapes and sizes. Regardless of their models, brands, or versions, they can all be classified into two main types: software-based and hardware-based. A software-based IP phone, more popularly known as a softphone, is a virtual phone software installed on a device. Its appearance is similar to that of a phone handset with a touchpad and a caller ID display. 
You need a headset with a microphone attached to your computer or an identical device with a pre-installed speaker and mic before making calls. Some of the softphone’s common features are call transfer, call conferencing, and voicemail, to name a few. Its additional capabilities, that can be from your service provider include instant messaging and video conferencing. A hardware-based IP phone resembles your regular cordless telephone. It consists of physical features, such as a caller ID display, a microphone, a speakerphone, and a touchpad. This device also provides enhanced features, including call transfer, multiparty calling, support for various VoIP accounts, and video phones that enable callers to see each other. Types Of Hardware-Enabled IP Phones Videophones, conference phones, standard desktop phones, and USB phones are all examples of hardware-based IP phones. - Videophones. These look like standard desktop phones. The main difference is that they have a USB-based or built-in camera and a wide, full-color screen built for video-calling. Although they appear fancy, these devices are not well-known since many users depend on computers and mobile gadgets for their video-calling needs. Only selected industries or businesses still use this technology. - Conference phones. Designed to improve sound quality during conference or group calls, these phones contain several microphones. Each microphone is electronically balanced to boost call clarity. These devices are commonly shaped like the letter V or Y, with two or three separate speakers. Each speaker is digitally stabilized to ensure the voices are always clear. - Standard desktop phones. These devices are similar to and function in the same way as traditional office desk phones. Many of these products provide LCD screens with features such as caller ID that displays essential caller information. The new models are composed of rich color touchscreens and can handle multiple phone lines to boost call efficiency. Some brands also feature Bluetooth technology for wireless headsets to help work productivity. - USB phones. They are convenient and low-cost devices connecting directly to a computer via a USB port. These devices are best for small business and home office users. However, these phones lack LCD screens or advanced features and take calls through the workstations to which they are attached. They are ideal for use \with softphones. IP Phone System Versus Legacy Phone Systems Traditional phones are dependent on physical infrastructures for communication services – landlines require physical wiring to the PSTN infrastructure. Cellular phones need cellular sites or towers. IP phones, by contrast, run via IP networks, a collection of computers connected through different IP addresses. Apart from having more capabilities beyond voice calls, IP phones are more cost-effective to roll out than landline phones. However, this internet-based phone system is prone to online hacking and other cyberattacks as well as sporadic weak connections. Compared to mobile phones, IP phones offer an affordable business phone system cost when implemented on the same level. No matter how advanced smartphones are nowadays, they are still not as feature-rich as IP phones in the areas of enterprise-focused functions, such as unified communications, analytics, software integration, and CRM. The Benefits Of IP Phones Cost savings is considered the main advantage of IP phones over other systems. 
If your small business uses standard desk phones, then switching to an IP phone system will reduce your costs over the long term. Traditional phones have minimal installation costs but have steep maintenance and upgrading expenses. What’s more, charges to IP phone calls are based on a local rate no matter where they call recipient is, allowing you to save a great deal of money on long-distance and international calls. Integration is another edge with IP phones. This means you can incorporate other business software or apps with your office phone to improve communication and boost productivity. IP phones allow organizations to combine customer relationship management programs (CRM) to examine call logs and analytics for multiple sales leads and prospects. Additionally, IP phones are more mobile and flexible than traditional phones. Standard phones work because of embedded physical wiring that cannot be reinstalled at your new office address. Conversely, IP phones are scalable, which means that you add more phones to your system by just requesting a higher network bandwidth from your service provider. You can carry these portable devices with you anywhere, without losing touch with your existing customers or clients. So if your office’s physical location changes, obtaining new phone lines is not needed. IP phones also work best for a multi-line phone system for your small organization. With a multi-line system, you can simultaneously handle at least two calls. It allows users to perform multiple tasks in a short span of time, including dialing internal or outside numbers, going back and forth between lines, or putting calls on hold. Multi-line networks can be supplied with IP phones that rely on different data centers for redundancy to make sure your business operations go smoothly. The Drawbacks Of IP Phones When the internet connection is poor or unreliable, the IP phone performance will be adversely affected. A weak connection leads to bandwidth constraints, resulting in dropped calls, delays, choppy calls, or other latency issues. Your IP phone system can experience problems during power outages that break off the online connections. It’s recommended to have your own backup power supply when there is a power interruption. Otherwise, you have to wait until the electricity is restored for your phone to reactivate. By contrast, landlines can operate almost anytime even during a blackout because power suppliers are equipped with backup generators dedicated to landlines. These generators enable you to continue conducting calls in the midst of electricity failure. The struggle of making emergency calls is another disadvantage of IP phones. IP addresses cannot give the caller’s exact location, preventing 911 operators from routing calls made with IP phones to the appropriate emergency call center. Furthermore, the 911 system is meant for traditional phones, such as landlines and similar devices. Standard phones permit 911 operators to instantly locate the call source upon receiving an emergency call. With an IP phone, the operators will be unable to identify your correct whereabouts. They will only get the address you give them on the date of your phone’s setup. How Much Is An IP Phone System? Industry experts say standard VoIP service costs include basic plans and no installation fees. Plans are typically priced for as low as $16 monthly per user. The price difference is subject to the subscription length and number of users. 
Some external providers offer discounts or lower rates to subscribers who pay on a yearly basis. Most VoIP providers usually do not ask for a setup fee when no physical hardware is deployed in your office. The IP phone system is accessible almost as soon as the vendor provides your phone numbers or port numbers and account details. VoIP phone providers can also throw in additional phone features without extra charges. In particular, their high-tiered or premium plans include the most advanced features while the basic options contain limited functional ability. Regardless of the classification, the basic plan that they provide still has much more to provide than the standard phone service well-known telecommunications operators offer. To install a multi-line IP phone system for a small business having not more than 20 employees, the initial cost is $4,500 give or take a few hundred dollars. Afterward, the service provider will charge you, on average, $700 per month, or $8,400 yearly. Altogether, your hardware and software costs for the first year are placed at close to $13,000. VoIP providers normally promote the prices of their services as per user, per line, and per month subscription fees. VoIP plans generally cost between $15 and $60 monthly per line. The providers will offer large organizations long-term contracts or discounts of as much as $2 monthly per line. On the other side, the IP PBX phone systems many large businesses use are the most complex of the high-end IP desk phones, with prices between $200 and $1,000 per unit. Additional expenses are routers for internet-enabled systems ranging from $75-$400 and headsets for desk phones from $15 to $150. The wide price difference is due to the variety of features service providers offer and their system compatibility. To install a two-line phone system or any other type of phone network including the IP phone platform, you should set aside a budget of not less than $500 as payment for IT professionals or consultants who will set it up. It takes up to five hours to completely install your system with an average fee of about $100 per hour. Some experts note that the basic setup cost of a VoIP system is between $100 and $200 for each IP phone. Service providers normally charge an installation or labor fee ranging from $20 to $40 per phone. If your current numbers need migration, then an extra $10-$20 fee should be paid to the vendors On top of that, an IP phone system has a regular charge for each extension per month, ranging from $10 to $50 per line for cloud-based services. Based on these estimates, if a company has 10 employees and each needs a phone, the estimated investment should be as much as $2,000 to pay for the migration costs and labor, plus $100-$500 monthly in recurring or regular fees. Other costs are associated with setting up an IP phone system. These are softphones ($50 monthly per unit), ethernet ports ($150 installation cost per port), additional phone numbers (about $5 per listing), adaptors ($60-$80 per phone), extra features (such as conference calling or call forwarding for $5 or more per month per service), an internet connection (starting as low as $50 per month for high-speed broadband or fiber-optic connection), to name some. Switching to an IP phone system can result in annual productivity gains of about $480 per user. This means that if your organization has 50 users, you can save up to $24,000 yearly when adopting a cloud-hosted phone network such as an IP phone system. 
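The ballpark figures quoted in this section can be combined into a rough first-year estimate. The sketch below only reuses the article's ranges (roughly $100-$200 setup per IP phone, $10-$20 to migrate each number, and $10-$50 per line per month); the defaults chosen are mid-range assumptions, and your provider's actual pricing will differ.

```python
def first_year_cost(lines, setup_per_phone=150.0, migration_per_number=15.0,
                    monthly_per_line=30.0):
    """Rough first-year VoIP cost from the per-line figures quoted in the article."""
    upfront = lines * (setup_per_phone + migration_per_number)
    recurring = lines * monthly_per_line * 12
    return upfront, recurring, upfront + recurring

# Example: the 10-employee company described above, using mid-range assumptions.
upfront, recurring, total = first_year_cost(lines=10)
print(f"upfront: ${upfront:,.0f}, recurring: ${recurring:,.0f}/yr, first year: ${total:,.0f}")
```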
IP Phone System And Call Flow Businesses that mainly engage in customer services, such as call centers or contact centers, rely on IP phone systems offering capabilities that maximize customer service process flow. A number of service providers extend VoIP solutions to enhance call flow, an indispensable tool call center representatives use to effectively communicate with customers. The call flow is important because it assists employees in properly dealing with calls from the moment they respond up to the time of call completion. This procedure is a must-have to take care of the many different call situations, especially the tough calls. After all, great customer experience is always the end goal. Among the VoIP’s call-flow features is the interactive voice response (IVR). This is an automated phone system that interacts with and gathers information from a caller. Based on the caller’s answer and the details provided, the IVR takes an action when a phone keypad is pushed or the caller responds with a voice. In addition, IVR boosts the parts of a call flow as it helps with routing customers to the appropriate department or team within the organization. When operated properly, an IVR can manage calls on certain matters, such as bill payment, account balance inquiries, and scheduling. Do VoIP Phones Work Over Wifi? An online article on the website for technology resource Lifewire says VoIP phones over wireless networks such as Wifi (wireless fidelity) are now a reality. Communications devices that are not connected by cables or similar wires to access the internet are becoming widely accepted especially in homes. Current wireless network technologies, including Wifi, works for VoIP communication. Local area networks (LANs) that link phones and other communication devices within your office or property are wired with RJ-45 jacks on an ethernet system. With Wifi, however, wireless routers that connect to the hardware wireless adapters enable internet telephony within a specific range in your office. Using WiFi for internet access, you can use an IP phone, smartphone, laptop, desktop, tablet, or similar device to conduct a call. You can make calls regardless of your location in the office as long as the device being used is within the WiFi range or signal. Even if your smartphone has a poor cellular signal in the office, the WiFi will pick up the slack. If you’re mobile or always on the go within the office, then WiFi-enabled VoIP phones are ideal for you. It allows you to talk on a phone anywhere as far as the WiFi signal can reach. This setup not only contributes to your convenience and productivity but also saves time and energy. In particular, workers in a medical organization or in a factory can benefit the most from wireless VoIP since their regular duties involve frequently moving from one area to another within a premise. Lifewire adds that there are some disadvantages to using wireless VoIP. One is that VoIP on LANs is carried out mainly in corporate settings instead of residences. Another issue is that the service quality is still not on par with wired networks. However, connection reliability is improving as time goes by. Another drawback is that the financial and technical costs of installing and maintaining a WiFi-based IP phone system are higher than that of a wired network. Lastly, VoIP using wireless technology is more prone to security threats such as cyber intrusion or attacks due to the multiple access points or wireless LANs. 
According to online technology magazine Plentyofgadgets, one disadvantage of WiFi VoIP phones is the presence of electronic magnetic interference (EMI). This occurs when one device disrupts another because of the electromagnetic fields resulting from its operation. The interruption causes data transfer breakage and loss. Some of the products that can cause EMI are air conditioning units, doorbell transmitters, power lines, vacuum cleaners, and pencil sharpeners. How Can I Use My IP Phone Like A Normal Phone? Your landline phone is restricted to a particular location and requires a wiring system or telephone lines to connect to a call. With IP phones, the limitation of locality is eliminated because all calls are conducted online. Even a global reach is possible. With recent technological developments, using an IP phone as a regular or normal phone can now be done by following some easy steps. Converting this latest phone will not cost you a fortune. The first thing you need to do is to buy a VoIP adapter. Also called Analog Telephone Adapter (ATA), this device connects the analog telephone service and the digital network that sends calls. The prices of VoIP adapters are between $20 and $100, depending on the brand or model. Next, attach the adapter to a wireless router or a modem that uses an ethernet cable. Then switch on your computer and follow the instructions in the manual included with the adapter. Check for the IP address on the adapter and type it on the computer. To configure the adapter, complete the set of instructions on the screen. When the configuration is finished, plug your standard phone into the VoIP service provider. The ATA and the ethernet cable are the two devices you need to transform your analog phone into an IP phone. Your analog phone will connect to the internet to make calls instead of the conventional PSTN, which will show the phone system as an IP phone. How Does VoIP Connect To The Internet? Thanks to the internet, a VoIP system can conduct calls anywhere and anytime. The network that connects all computers and other digital communication devices all over the world acts as a medium between your business phones and the service provider to interact with each other. IP phones are typically connected to a computer network with cables that come from the communication room in your office or directly from your router. They are operated and linked to the network with a cable. A networking device, known as a power over ethernet (PoE) switch, is needed to activate the IP phones. PoE switches work best for large organizations equipped with their own communication room while PoE injectors or adapters are most suitable for home offices and small- and medium-sized businesses due to their small size and affordability. In other words, IP phones are linked via an ethernet connection to your network of wires, switches, cables, and routers called a local area network (LAN), which connects to your phone service provider through the internet. To set up an IP phone system, a single ethernet cable is required for every desktop or workstation. IP phones consist of a 10/100/1000 built-in network switch that can be attached to the ethernet cable from the wall, which is directly connected to the PoE switch. Then install an ethernet patch cord from your phone port to the workstation. For your IP phone platform to connect to the internet, you need a modem and router. These are the essential parts of the present internet setup. 
As soon as you download the IP phone’s calling software, you can start making calls from your IP phones or from any digital communication device. Other types of equipment also need to be taken into account. One is VoIP desktop phones designed to function with IP technologies for transmitting calls. Unlike landline phones, these wireless devices don’t need an outlet to be connected. They have features resembling those of business phones, such as conference calls, do not disturb, and call waiting. These types of phones are simple to install and you can use them immediately upon setup at the office or at home if your team members work remotely. You should also obtain a VoIP headset. This gadget lets you and your employees talk easily without the need to hold the phone receiver or rely on external sound from the computer. A VoIP headset has a built-in microphone and uses two common types of connections, wired and wireless. The wireless type uses Bluetooth technology to connect to your system in a manner similar to that of cellular phones. You can also try the other version known as the USB VoIP headset, a device that combines headphones and a microphone. It requires a USB connector to connect to computers or related gadgets. Basic VoIP headsets are priced at about $20; high-end ones can reach as high as $400 depending on the features, version, and model. How Do I Set Up My VoIP Phone System? Below are five steps to help you install your own VoIP phone system. 1. Select between a cloud PBX and on-premise (on-site) PBX An on-premise PBX (private branch exchange), also called on-site PBX, is a type of communication system operated and maintained in your own location. Although this platform involves initial high installation and hardware costs, it becomes more affordable to maintain over the long term. With it, you have great control over your system because all of the components and requirements are physically placed in your office or property. By contrast, with a cloud-hosted PBX system, a service provider hosts the PBX on its own location, not on your premises. Requiring no setup or installation costs, the platform’s service can only be accessed via the service provider’s internet-based data center called the cloud. On-premise PBX: the highs and lows As mentioned above, the main benefit of using an on-premise PBX system is great control over the network. The hardware and equipment including networking devices, servers, desktop phones or handsets, and other gadgets that come with this system are situated within your office or premises. The long-term benefit of choosing this system is its cost-effectiveness. This setup is recommended for organizations that have the financial resources to afford the network’s pricey hardware. However, the equipment and the phone lines included in this platform are unaffected by price instabilities or hikes. In the course of time, the maintenance costs will go down. The high upfront and maintenance costs are the on-premise PBX system’s primary drawback. Aside from investing in telephone devices, servers, networking equipment, and other related hardware, you will also have to allocate funds to maintain, update and keep the on-site system. Since it consists of large equipment, this platform needs a storage area large enough to accommodate its sizable equipment. The system is impractical for small businesses and home offices with restricted financial resources and physical space. 
Cloud PBX’s: the highs and lows Low investments and affordability are the primary advantages of having your own cloud PBX system. Since the service provider will carry all the financial costs for the equipment located in its own data center, purchasing expensive hardware can be avoided. The only costs to shoulder are the provider’s service subscription fee, which is either monthly or yearly and the desk phones your employees will use. Also, the provider’s technical team will be responsible for your system’s routine maintenance. The cloud-enabled PBX is built with flexibility and scalability, allowing you to modify your communication system if a reorganization in your company occurs or when there’s a change in the business situation. It can quickly adapt according to your current business needs, such as switching to a remote working environment, decreasing or increasing your workforce, or moving your office to a new physical address. The system’s flexibility allows customers, clients, and suppliers to conveniently contact you regardless of your location using IP phones, smartphones, or computers. Since the hardware and other equipment are stored and operated away from your office, the cloud PBX system does not require a physical site. For this reason, the area that would be reserved for the hardware can instead be used for extra workstations or for otherwise more productive purposes. The cloud PBX solution has its own drawbacks as is the case with other systems available on the market. One of them is its complete reliance on the internet. If the internet connection is poor or unsteady for a certain period of time, then your VoIP phone calls will go through technical difficulties, such as dropped or patchy phone calls and other latency issues. Not to mention that picking a cloud PBX system will entail less authority over your phone system in terms of overall network management. It also needs a significant amount of time for your employees to get accustomed to it. For one, the platform operates differently from traditional telephone devices and requires staff training or upskilling for efficient operation. 2. Determine the most relevant VoIP features and capabilities A crucial element to consider before setting up your VoIP system is to determine the specific features you think will work best for your business. Suppose the nature of your small business requires frequent travel. Acquiring a VoIP system providing rich mobile features is recommended. If your business needs at least 10 phone lines, then deploy a VoIP platform that has broad call routing capabilities. Listed below are the features generally regarded as the most essential elements of VoIP phone systems to help business operations and productivity as well as enhance the customer call experience.: - Auto-attendant, an automated answering service, is useful to greet callers with programmed messages that route them based on their calling needs and preferences. For instance, “Press 1” is for paying their bills, “Press 2” is for checking an account balance, “Press 3” is for speaking with a customer representative, and so on. - The “Do Not Disturb” feature blocks inbound calls while you’re on the line answering a critical call with an important client, supplier, or customer. - A program, software, or application with first-rate conferencing capabilities that can organize video calls while doing other tasks, including sending emails or transferring files. 
- A call recording function is also necessary, especially when you’re engaged in business process outsourcing. This is an indispensable tool to boost the customer-employee relationship and to assist in training call center agents or representatives.
- Voicemail is needed so that employees or customers can leave messages when one person is temporarily unable to answer the call.

Other VoIP features that can benefit your business are:
- Caller ID. This provides basic caller information before you or your employee answers the phone. It displays information including the caller’s name and phone number, as well as the city and state from which the call originates. This default feature is useful for stopping unknown or unwanted calls and is ideal to use with your personal phone.
- Automatic Call Distribution (ACD). This feature puts callers in touch with the most qualified agent or customer service person according to their particular needs.
- Business tools integration. This allows you to connect your work data with external business apps, such as support ticket systems, email, and chat, to consolidate conversation records. Team members working from home will benefit from this. Advanced features offer sales teams tools that join caller data with CRM software. Such a feature allows employees to verify past conversations using a call tracking record.
- Call flip. This function helps users transfer calls from one communication gadget to another. Your employees don’t have to use call parking anymore. All they have to do is press a button, and the line will not be cut off or interrupted. This best fits a smartphone or any mobile device that is running low on battery or about to die out; you can swiftly switch to a desk phone or computer. Those who are often mobile or on the go and want to maintain contact with clients can enjoy this feature.
- Call park. A common and widely accessible feature that lets you place an active call on hold to avoid being interrupted while engaged with your current caller on a different line. This also gives your employees time to answer the phone, swap devices, or transfer a call.
- Call forwarding. A regular phone feature that guarantees replies to all customer calls. When used, inbound calls are automatically redirected to a different phone number or extension. This tool is a favorite among salespeople who want to avoid losing potential sales. If nobody answers from a desk phone, then the calls can be diverted to their personal mobile phones.

3. Draw up a budget for your VoIP phone system
First, select the VoIP features you think are essential to your business requirements. A service provider will offer you a VoIP plan packed with many capabilities and add-ons, at least some of which will not be applicable to your setting. After making your choices, estimate an initial budget, always keeping in mind to set aside a reserve, if possible, for additional future expenses. Controlling unnecessary expenses is important when selecting the VoIP pricing and plan most suitable for you.

Before deciding what kind of VoIP system is best for your business, do research on the different plans and examine each. The pricing and plan should match your existing needs and funds. A VoIP service provider should meet your business needs. When reviewing each plan, carefully consider the details, including the hardware and software offered, the warranty, and the charges or fees for installation, maintenance, support, return policies, cancellation, and usage limits.
Go over the plans and pricing structures of the selected service providers. Prepare an exhaustive list of questions and pose them to the representative. Don’t forget that many providers charge for their services on a per-user and per-month basis. Requesting add-ons or extra features will raise your expenses. Define which particular plan you intend to subscribe to from a service provider and then check if it is within your budget. As a rule of thumb, an annual payment plan should be a priority in order to get your full money’s worth. A typical yearly payment to a provider is estimated at $20 monthly per user while its monthly rate per user is close to $30. A quick calculation reveals that you’ll be able to save approximately $240 per year when going for the annual plan. A provider can also grant you more VoIP features or extra services when you avail your business of the 12-month package for its premium or high-end plans. Being aware of the direct and indirect costs when procuring a VoIP system will assist you in developing a sufficient budget. The different types of costs are upfront costs (the cost of buying the system), implementation costs (the professional fees of a service provider implementing or rolling out the system), operational costs (the monthly or periodic expenses for the VoIP plan you selected), training costs (the expenses on upskilling your workforce to use the system efficiently), and upgrade costs (special expenses related to upgrading or improving your VoIP network’s capabilities). 4. Search and pick your VoIP service provider After identifying the VoIP features you need, the number of users, and the estimated budget, it’s time for you to decide on the right service provider. Industry experts offer several useful tips on how to hire the ideal service provider for your business phone system. One suggestion is to check online customer reviews and ratings. This is a proven means of determining existing and previous client satisfaction. Positive feedback can build your confidence in the service provider and is an added factor. Verify the number of subscribers and how many customers are well-known and established. Another piece of advice is to choose a service provider that offers a quality-of-service guarantee. Your prospective third-party vendor should ensure you first-rate service in terms of call quality, bandwidth, and call traffic. It should give you an assurance that weak internet connectivity and low bandwidth leading to poor customer experience are kept to a minimum. You should also look into the provider’s service accessibility level and customer support quality. As a general rule, a service provider should be available round the clock, especially on weekends or holidays. Asking their current clients and other references about quickness of response and dependability is recommended. A third-party vendor offering a variety of customer services to hone your VoIP system, including phone support, technical assistance, and live chat support is also advisable. Last but not least, request several references from a potential service provider. You should also ask your potential partner for case studies and testimonials. Try to avoid prospects that are unable to hand you a number of references; these are crucial to learning more about their service quality and image. 5. Be certain your VoIP system is protected and secure Not all data that moves across the internet is fully secure. 
For this reason, your planned VoIP system should include stable and robust security, with strong end-to-end encryption and multi-factor authentication to protect your data. To establish effective VoIP network security, periodically training your workforce with the right tools and knowledge should be a top priority. Staff negligence and security leniency are two of the main reasons why online intrusions succeed.

As much as you can, stay away from making international phone calls. Don’t include an international calling feature if it’s not necessary for your business operations. Many hackers strike from countries outside the United States. If they are able to get through your system, they will exploit it to conduct numerous overseas calls with you footing the bill.

Internally, always ensure that your system is equipped with passwords that are difficult to guess, and make sure they are updated often. Ideally, each of your network devices should have its own strong password.

Lastly, virtual private networks (VPNs) and firewalls are also vital to strengthening your VoIP system's security. The session-signaling layer, the Session Initiation Protocol (SIP), and the phone network's software setup are the parts most exposed to cyber intruders. A firewall can deal with these weaknesses. Similarly, a VPN is needed to secure your system when your organization regularly uses VoIP phones for remote employees working from home, or voice over LTE (VoLTE) on mobile phones.

What Is Required For VoIP Setup?
In addition to the hardware and other related equipment (as discussed above) needed to set up your VoIP phone system, the other requirements you should meet are the exact number of users, network compatibility, network capacity, and internet connection speed.

Determining the number of employees or users who will depend on the system is necessary to optimize network implementation and facilitate the process. Once you know how many actual users there will be, you can also determine the number of phone lines required and how fast your internet connection needs to be to handle the expected volume of customer calls.

It goes without saying that your entire network setup should harmonize with the planned VoIP system. Examine all your hardware and networking equipment, such as routers, firewalls, security devices and programs, switches, and physical wiring, to check that they are compatible with your potential VoIP solution.

It’s also essential that your existing network infrastructure has the capacity to properly manage the call volume and quality. Your current platform should accommodate the projected number of calls while keeping voice quality high. Voice calls are susceptible to jitter and packet loss during periods of high call traffic.

Finally, your business internet connection should be fast enough to support the rest of the requirements. IP phones primarily depend on stable, high-quality internet connections for seamless calls. Check the speed to ascertain whether it can cope with your chosen VoIP service. Call your internet service provider to determine your exact connection speed, and ask for details about the service quality and reliability at your physical location. Bear in mind that the quality of your online connectivity also depends on the types of communications devices you use. You can also regularly and freely check the connection speed and performance by visiting a web service such as Speedtest.net, TestMy.net, Speedsmart, or Internet Health Test.
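Bandwidth planning can be made concrete with a small calculation: estimate the bandwidth of one call from the codec's packet size and rate, multiply by the number of concurrent calls, and compare the result against your measured uplink. The codec and overhead figures below are common textbook values for G.711 with 20 ms packets, stated here as assumptions; real networks add their own overhead, and the 5 Mbps uplink in the example is arbitrary.

```python
def per_call_kbps(payload_bytes=160, packets_per_sec=50, overhead_bytes=58):
    """G.711 at 20 ms: 160-byte payload, 50 pkt/s; overhead = RTP+UDP+IP (40) + Ethernet (18)."""
    return (payload_bytes + overhead_bytes) * packets_per_sec * 8 / 1000

def uplink_ok(concurrent_calls, uplink_kbps, headroom=0.75):
    """Keep voice below roughly 75% of the measured uplink to leave room for other traffic."""
    needed = concurrent_calls * per_call_kbps()
    return needed, needed <= uplink_kbps * headroom

needed, ok = uplink_ok(concurrent_calls=10, uplink_kbps=5000)   # assumed 5 Mbps uplink
print(f"~{per_call_kbps():.1f} kbps per call, {needed:.0f} kbps for 10 calls, fits: {ok}")
```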
In the USA, the Department of Energy has awarded $425 million to IBM and NVIDIA to develop supercomputers with a data-centric computing architecture. “Data-centric computing, or the idea of locating computing resources all places where data exists to minimize data movement, is a necessary and critically important architectural transition that IBM is announcing with the Coral project,” explained David Turek of IBM in a recent video, “because not only are the government labs in the US experiencing a dramatic impact of huge amounts of data, but also are industries around the world.”

U.S. Secretary of Energy Ernest Moniz stated, “High-performance computing is an essential component of the science and technology portfolio required to maintain U.S. competitiveness and ensure our economic and national security.” The bottom line is to have an upper hand in data processing and computing technology over any other country.

About $325 million has been allocated for two high-performance supercomputers, a step on the path toward systems capable of performing 10^18 floating-point operations per second, also known as one exaflop. The remaining $100 million will go toward a program called FastForward2 to develop next-generation supercomputers that are 20 to 40 times faster than current supercomputers. The two computers (Summit, to be built at Oak Ridge National Laboratory, and Sierra, to be built at Lawrence Livermore) will have peak performance of around 150 petaflops when they are completed in 2017-2018. Summit is expected to be five times more powerful than the current system at Oak Ridge, Titan, and the Sierra supercomputer is expected to be at least seven times more powerful than Lawrence Livermore’s current machine, Sequoia.

The joint Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) was established in early 2014 to leverage supercomputing investments, streamline procurement processes, and reduce the costs of developing such supercomputers. “DOE and its National Labs have always been at the forefront of HPC and we expect that critical supercomputing investments like CORAL and FastForward 2 will again lead to transformational advancements in basic science, national defence, environmental and energy research that rely on simulations of complex physical systems and analysis of massive amounts of data,” Moniz stated.

IBM's Power architecture, NVIDIA's Volta GPU, and Mellanox's interconnect technologies will be combined to advance key research initiatives for national nuclear deterrence, technology advancement, and scientific discovery. Jen-Hsun Huang, CEO of NVIDIA, said, “Scientists are tackling massive challenges from quantum to global to galactic scales. Their work relies on increasingly more powerful supercomputers. Through the invention of GPU acceleration, we have paved the path to exascale supercomputing, giving scientists the tool for unimaginable discoveries.”

Turek added that data-centric computing has been set up as a new architectural paradigm meant to deal with the problem of big data, and that no one is immune to it: the issue of big data cuts across all market segments and technologies and will eventually affect all consumers using smart devices.
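The motivation for moving computation to the data, rather than the reverse, is easy to quantify. The sketch below is a simple illustration with assumed link speeds, not figures from the CORAL procurement: even on a fast network, shipping a petabyte-scale dataset takes hours, while shipping the analysis code is effectively free.

```python
def transfer_hours(data_bytes, link_gbps):
    """Time to move a dataset over a network link, ignoring protocol overhead."""
    return data_bytes * 8 / (link_gbps * 1e9) / 3600

petabyte = 1e15
for gbps in (10, 40, 100):                      # assumed link speeds
    print(f"1 PB over {gbps:>3} Gb/s: ~{transfer_hours(petabyte, gbps):.1f} hours")
# By contrast, the code that analyzes that petabyte is typically a few megabytes,
# which is why data-centric designs ship the computation to where the data lives.
```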
<urn:uuid:06308d02-3d5d-418c-b347-555234ee94da>
CC-MAIN-2022-40
https://dataconomy.com/2014/11/data-centric-computing-to-expedite-us-department-of-energy-operations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00464.warc.gz
en
0.920167
658
2.59375
3
In the wake of increasing privacy concerns and the arrival of new regulations and attack vectors, protecting intellectual property (IP) has never been more critical to ensuring business survival. IP is what sets companies apart from their competition, framing the business, its purpose and its growth potential. While losing customer data can damage a brand's reputation, trust, and even bottom line, losing intellectual property can threaten a company's very existence. Therefore, preventing valuable corporate information from leaving the confines of the business and falling into the wrong hands is fundamental.

Just as identity theft represents a huge risk for individuals, organised groups of fraudsters are looking for new ways to exploit both public and private sector organisations. Business identity theft was up 46% year on year in 2017, according to the latest figures from Dun and Bradstreet, and research from KPMG Forensic reveals that business fraud rose by 78% in 2018, with most crimes perpetrated by employees and managers already inside the company. A prime example is the Chinese-American engineer charged with stealing General Electric trade secrets to take to China – smuggling the data out via code hidden in a digital photo of a sunset.

Most cyber security firms can tell you when a breach or attack has taken place and data has been lost. But it's critical to be able to stop confidential data leaving company devices once the attack has taken place – especially when insiders are the perpetrators.

Intellectual property being stolen – often unknowingly

The inherent mobility of today's workforce makes it increasingly difficult for companies to keep track of what's happening on every device. Also, a high proportion of device transactions take place in the background, without the user's knowledge – often resulting in sensitive company data unknowingly being sent to unidentified servers in regions where high levels of cyber-attacks originate.

Hackers have become increasingly sophisticated and are attacking organisations from all directions. In fact, Avast research reported that mobile cyber-attacks grew by 40% in 2017 alone. Malicious actors profile employee behaviour as they browse the Web and use applications – collecting valuable data right across company networks and using spyware, phishing, ransomware, and malvertising to bait unsuspecting targets and then extract intellectual property. And the Dark Web, used by over 80% of all ransomware, provides the perfect haven for illegal activity – facilitating anonymous access to content whilst preventing the identification of both the request and the destination.

Detecting unusual behaviour and shutting it down

Despite the prevalence of security solutions that focus on intrusion detection, such as firewalls and anti-virus, together with malware solutions that remove known infections, it seems inevitable that attackers will infiltrate the company network at some point. So, when hackers break into company networks, it's critical that they are met with the next line of defense – preventing the transmission of data from one device or network to another and essentially blocking all outbound traffic. Preventing data loss is only part of the security equation, with organisations left vulnerable by what they can't see. Companies need intruder intelligence across all devices and endpoints – the ability to track attackers' movements in real time, to see where they go and what confidential company data they're attempting to take.
And, with 77% of successful attacks now using fileless exploits, defense tools have their work cut out – continuously monitoring unusual behaviour before shutting it down. It’s rather like taking preventative medicine; simply installing the defense tool on every device prevents infection through immunisation. One company benefiting from data loss prevention is IT consulting and solutions provider, CYANIT – blocking as many as 3000 threats per month and decreasing suspicious behaviour by 18%. To ensure maximum protection of company intellectual property or IP, organizations need a multi-layer defense system that prevents data loss, as well as unauthorised data profiling and data collection. History tells us that cyber criminals will always find a way of getting into organisations to steal company credentials or that special formula that makes the company tick. Attackers must be stopped from removing or leaking confidential data, before it causes untold damage and potentially brings the company down.
<urn:uuid:f675ca80-a5f6-49de-bd59-ced875802d0c>
CC-MAIN-2022-40
https://www.blackfog.com/the-inside-track-on-protecting-intellectual-property-ip/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00464.warc.gz
en
0.935035
841
2.546875
3
Brian Johnson, ABB data centre segment head, explores how digitalisation is shaping the evolution of green data centres, and how it is possible to deliver both cost and energy efficiencies despite growing demand for data. Even in the face of rapid digital acceleration, the vast proliferation of smart devices and an upward surge in data demand, data centres are only estimated to consume between one and two percent of the world’s electricity. A recent study even confirmed that, while data centres’ computing output jumped six-fold from 2010 to 2018, their energy consumption rose only six percent. That said, the need for additional data from society and industry shows no sign of stopping, so data centres need to find a way to meet this increased demand, without consuming significantly more energy. The key to this is digitalisation. By digitalising data centre operations, data centre managers can react to increased demand without incurring significant additional emissions. Running data centres at higher temperatures, switching to frequency drives instead of dampers to control fan loads, adopting the improved efficiency of modern UPS and using virtualisation to reduce the number of underutilised servers, are all strong approaches to improve data centre operational efficiency. So, how can data centres deliver power and cost savings without compromising on performance? There are several actions that can be taken. Digitalisation of data centres One of the most recent developments has been the implementation of digital-enabled electrical infrastructure, including the use of sensors instead of traditional instrument transformers, which communicate digitally via fibre optic cables. This reduces the total number of cables by up to 90% vs traditional analog, and they also use low energy circuits, which increases safety. The resultant digital switchgear can then be manufactured, commissioned and repaired much more easily thanks to both the sheer number of cables and the intelligent nature of the connections. Other innovations allow circuit protective devices to be configured wirelessly, and even change their settings when alternate power sources are connected. Minimising idle IT equipment There are several ways data centres can minimise idle IT equipment. One popular course of action is distributed computing, which links computers together as if they were a single machine. Essentially, by scaling-up the number of data centres that work together, operators can increase their processing power, thereby reducing or eliminating the need for separate facilities for specific applications. Virtualisation of servers and storage Undergoing a program of virtualisation can significantly improve the utilisation of hardware, enabling a reduction in the number of power-consuming services and storage devices. In fact, it can even improve server usage by around 40%. A server cannot tell the difference between physical storage and virtual storage, so it directs information to virtualised areas in the same way. In other words, this process allows for more information storage, but without the need for physical, energy consuming equipment. More storage space means a more efficient server, which saves money and reduces the need for further physical server equipment. Blade servers can help drive consolidation as they provide more processing output per unit of power consumed. Consolidating storage provides another opportunity, which improves memory utilisation while reducing power consumption. 
Some consolidation methods can use up to 90% less power once fully operational. Managing CPU power usage More than 50% of the power required to run a server is used by its central processing unit (CPU). Most CPUs have power-management features that optimise power consumption by dynamically switching among multiple performance states based on utilisation. By dynamically ratcheting down processor voltage and frequency outside of peak performance tasks, the CPU can minimise energy waste. Distribution of power at different voltages Virtually all IT equipment is designed to work with input power voltages ranging from 100V to 240V AC, in accordance with global standards and the general rule is of course, the higher the voltage, the more efficient the unit. That said, by operating a UPS at 240/415V three-phase four wire output power, a server can be fed directly, and an incremental two percent reduction in facility energy can be achieved. Plugging into the smart grid Smart grids enable two-way energy and information flows to create an automated and distributed power delivery network. Data centre operators can not only draw clean power from the grid, they can also install renewable power generators within their facility to become an occasional power supplier back into the grid. In short, as many global economies push for net zero emissions over the next 30-40 years, the role of the ‘green’ data centre will become increasingly important. There are several ways data centre operators can optimise efficiency without reducing performance – for further advice on safe, smart and sustainable digitalisation tactics which support the green data centre masterplan, visit: https://new.abb.com/data-centers
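As a small, hedged illustration of the CPU power-management point above: on a Linux host, the active cpufreq governor and current clock speed can be read straight from sysfs, which is a quick way to confirm that dynamic voltage and frequency scaling is actually in effect. The paths below are Linux-specific assumptions and may differ or be absent on other platforms; this is a monitoring sketch, not part of any ABB tooling.

import java.nio.file.Files;
import java.nio.file.Path;

// Reads the active cpufreq governor and current frequency for CPU 0 on a Linux host.
public class CpuPowerCheck {
    public static void main(String[] args) throws Exception {
        Path base = Path.of("/sys/devices/system/cpu/cpu0/cpufreq"); // Linux-specific path
        String governor = Files.readString(base.resolve("scaling_governor")).trim();
        long curKhz = Long.parseLong(Files.readString(base.resolve("scaling_cur_freq")).trim());
        long maxKhz = Long.parseLong(Files.readString(base.resolve("cpuinfo_max_freq")).trim());

        System.out.println("Governor: " + governor);
        System.out.printf("Current frequency: %.2f GHz of %.2f GHz max%n",
                curKhz / 1e6, maxKhz / 1e6);
        if (!governor.equals("powersave") && !governor.equals("ondemand")
                && !governor.equals("schedutil")) {
            System.out.println("CPU may be pinned to a fixed frequency; dynamic scaling not active.");
        }
    }
}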
<urn:uuid:e4b9c058-27cf-4aae-98c5-50bf0510448f>
CC-MAIN-2022-40
https://datacentrereview.com/2021/04/digitalisation-and-the-evolution-of-green-data-centres/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00464.warc.gz
en
0.903937
979
2.90625
3
How can a wireless access point create a lifelong dream to explore the world's oceans? Well, the answer is simple. Well, not "really" simple. But if the access point is located in an elementary school that leverages CBTS' scalable Network as a Service (NaaS) solution, then the following scenario could unfold:

Miss Wheeler's 2nd grade class has been waiting for this day all year. The Woods Hole Oceanographic Research Institute has selected her class to participate in a remote session with marine biologists aboard the research vessel Atlantis, which is currently located just off the Galapagos Islands. The students will listen to and speak with the scientists about the research they are conducting. The grand finale includes the children using remote technology to control the robotic arm of Alvin, the Institute's advanced research submersible, a mile and a half deep and almost 3,000 miles away. The children are excited; a few, growing up in the Midwest, have never seen anything like this before.

Miss Wheeler is excited for them and keeps checking with Steve, the school's IT administrator, who is on hand for the event to ensure everything goes smoothly. She would hate to see the children disappointed, as the school's Internet can sometimes be a little questionable (apparently the 5th grade teachers like to binge-watch The Great British Baking Show while their classes are at gym, lunch or recess, and use all the bandwidth).

Steve, however, is comfortable, because he knows the school system recently converted to NaaS from CBTS, the most flexible, reliable, secure and scalable solution available to them. Steve knows that CBTS manages the solution for him with a staff of world-class engineers and operators ready, at a moment's notice, to solve any issue. Steve also knows about The British Baking Show marathon, so, at the click of a button, he requested a private SSID for this specific event with traffic shaping enabled, allocating priority to the remote session traffic. All changes were executed on his existing NaaS hardware, with zero downtime, via rapid and flexible cloud management. Oh, and he requested a filter for Netflix traffic on the remainder of the network.

So ultimately, Steve can relax knowing that CBTS NaaS will do its part to ensure that Miss Wheeler's 2nd grade class enjoys their opportunity of a lifetime. Miss Wheeler can smile as one little girl steps up to the screen, sees an image transmitted from 3,000 miles away of a hydrothermal vent a mile and a half under the ocean, then reaches out and controls, in real time, the robotic arm of a real submersible, fueling dreams of one day becoming a marine biologist herself.

Or that wireless access point could belong to another, less reliable solution. In this scenario, as the little girl steps excitedly up to the screen, her dream dies before it ever has a chance as she sees "Safari cannot open page". Meanwhile, just down the hall, consuming all the school's available bandwidth, the 5th grade teachers look on aghast as yet another baker is sent home for having presented a scone that was just not round enough.

CBTS NaaS incorporates features designed to meet the needs of educational institutions of all sizes. As solutions become more complex and the demands for growth, security and operational oversight increase, you need a solution that keeps pace now and in the future.
CBTS NaaS provides the capabilities to meet and exceed those needs with key benefits such as: Leveraging CBTS NaaS in your school system or campus environment provides the robust capabilities to ensure students of all ages have the ability to achieve great learning opportunities while giving IT administrators the peace of mind to know that the network is ready for what comes next. And, our man Steve? He also knows that if, on a really slow day, he lets the 5th grade teachers watch The British Baking Show season finales, well, he’s got the pick of the doughnuts in the teachers’ lounge for life.
<urn:uuid:aa1980c7-a7ab-4189-9ccf-faf07cf13e1a>
CC-MAIN-2022-40
https://www.cbts.com/blog/cbts-naas-helping-build-next-generation-explorers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00464.warc.gz
en
0.958805
839
2.578125
3
Written by Mark Gardner, Director of Global Sales, NETSCOUT Test Optimization Business Unit Cybersecurity is built to protect computer systems and networks from theft, damage, and service disruption from attacks such as distributed denial-of-service (DDoS). DDoS attacks work by taking a target website or online service offline by overwhelming the target or its surrounding infrastructure with a flood of internet traffic. Although DDoS attacks have been around for more than 20 years, they remain something of a moving target as cybercriminals regularly discover and weaponize new attack vectors and techniques, such as the following: - Launching different types of attacks such as volumetric, TCP state-exhaustion, and application-layer attacks simultaneously as multivector attacks, each with a unique signature. - Using different botnets to change the source of attacks and stay one step ahead of blocked IP addresses. - Using DDoS attacks as a smoke screen to distract from the real cybercrime underway. DDoS traffic can consist of incoming messages, requests for connections, or fake packets. But here’s the catch: attacks are based on legitimate traffic, and it can be difficult to determine which traffic is legitimate “good” traffic and which is the “bad” traffic. Therefore, you must continually test your web servers and services, cloud offerings, and network topology for their ability to allow good traffic to pass through while stopping the bad traffic. The reality is that a DDoS attack is a matter of “when,” not “if.” With that in mind, this is what we recommend for verifying your resiliency to DDoS attacks: 1. Test your solutions. All DDoS mitigation solutions are tested. The question is whether the testing is conducted in a proactive, controlled manner or by a real attack. Proactive testing is a far better plan, because it gives you a chance to fix issues outside the stress of a real attack in which services might be failing. All public-facing services are subject to attack and should be tested. In addition to web servers, this includes session border controllers (SBCs), unified communication and collaboration (UC&C) systems, edge routers, and others. 2. Test regularly, particularly after significant upgrades. For example, one U.S. service provider tests the resiliency and vulnerability of cloud-based virtual environments prior to providing them to its commercial accounts. A second company—a network equipment manufacturer—tests for DDoS resiliency during preproduction testing of embedded mitigation software in a series of its hardware and software solutions. In one test, for example, the company found a product’s CPU (I/O card) was pegged at 99 percent after sending only 1 Gbps of TCP SYN traffic, which blocked “good” traffic from passing as initially expected. The company was therefore able to adjust the software prior to commercial launch. 3. Test by using customized attack simulations. One of the best ways to check how well your defenses can differentiate between good traffic and bad traffic is to launch attacks alongside good traffic. A good testing tool will let companies easily create custom multivector attacks that integrate into the existing test and mitigation infrastructure. Launching simulated attacks allows companies to find and fix issues before they are discovered in the heat of a real attack. DDoS attacks are on the rise exponentially—in terms of both frequency and size (bandwidth consumed). 
The latest NETSCOUT Threat Intelligence Report highlighted record-breaking DDoS attack activity in 2020, with more than 10 million observed attacks. Additionally, DDoS attack costs are increasing globally. According to a recent NETSCOUT Worldwide Infrastructure Security Report, the cost of downtime associated with internet service outages caused by DDoS attacks was $221,836.80, while a report from Allianz Global Corporate & Specialty found that the average cost of a cybercrime to an organization increased by 70 percent over five years, to $13 million. Can your business really afford not to test your DDoS resiliency? Learn more about how to test the resiliency of your node, endpoint, web server or web service, cloud offering, application, network, or topology against DDoS attack by using NETSCOUT’s SpectraSecure DDoS resiliency test tool.
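As a small, hedged sketch of recommendation 3 above, the idea of measuring how much "good" traffic still gets through while a controlled attack simulation runs can be illustrated in a few lines of Java. The endpoint URL, thread count and probe count are placeholders, this is not NETSCOUT's tooling, and it should only ever be pointed at infrastructure you own and are authorized to test.

import java.net.URI;
import java.net.http.*;
import java.time.Duration;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sends steady "good" traffic at your own test endpoint during a controlled attack
// simulation and reports how many requests still succeed within their timeouts.
public class GoodTrafficProbe {
    public static void main(String[] args) throws Exception {
        URI target = URI.create("https://staging.example.internal/health"); // placeholder URL
        int probes = 200;
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2)).build();
        ExecutorService pool = Executors.newFixedThreadPool(20);
        AtomicInteger ok = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(probes);

        for (int i = 0; i < probes; i++) {
            pool.submit(() -> {
                try {
                    HttpRequest req = HttpRequest.newBuilder(target)
                            .timeout(Duration.ofSeconds(3)).GET().build();
                    if (client.send(req, HttpResponse.BodyHandlers.discarding())
                              .statusCode() == 200) {
                        ok.incrementAndGet();
                    }
                } catch (Exception ignored) {
                    // timeouts and connection errors count as failed probes
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        System.out.printf("Good traffic delivered: %d/%d (%.1f%%)%n",
                ok.get(), probes, 100.0 * ok.get() / probes);
    }
}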
<urn:uuid:cf633996-f0d6-413d-8344-8a6bed7c6047>
CC-MAIN-2022-40
https://www.netscout.com/blog/top-three-tactics-optimizing-ddos-resiliency-testing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00464.warc.gz
en
0.945824
890
2.703125
3
This post explains why, in Java, most command-line injection vulnerabilities cannot be exploited in the common cases with payloads such as:
- && dir
- ; ls
There are two options for running a command:
- Send the whole command to the OS shell (CMD or /bin/sh) and let the shell parse and run it.
- Split the words of the command into an array, execute the first word, and pass the rest as parameters.
The difference shows up when, for example, the command is:
Notepad.exe a.txt && dir
The first method will run both commands (open Notepad with the file a.txt and, if that succeeds, run the command dir). The second method will pass '&&' and 'dir' as parameters to the notepad.exe program, so '&&' and 'dir' will not run as a second command. This is also the difference between the 'system' function in the C language, which works like the first method, and the 'Runtime.exec' function in Java, which works like the second method.
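A minimal Java sketch of the two approaches, using ProcessBuilder (which, like Runtime.exec, does not involve a shell when given an argument array). The Windows commands mirror the post's example; the "unsafe" variant simply shows that the injection only comes back if you explicitly hand the string to a shell yourself.

import java.io.IOException;

// Demonstrates why shell metacharacters in an argument are not interpreted when the
// command is passed as an array: "a.txt && dir" is handed to notepad.exe as a literal
// argument instead of being parsed by a shell.
public class ExecDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        String userInput = "a.txt && dir";   // attacker-controlled value

        // Array form: no shell involved, so "&&" and "dir" are just argument text.
        Process safe = new ProcessBuilder("notepad.exe", userInput).start();
        safe.waitFor();

        // Explicitly invoking a shell re-introduces the injection:
        // cmd.exe parses the string and runs "dir" as a second command.
        Process unsafe = new ProcessBuilder("cmd.exe", "/c", "notepad.exe " + userInput)
                .inheritIO().start();
        unsafe.waitFor();
    }
}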
<urn:uuid:e4fe49ac-67cc-401b-b14c-1dbd4fc06a28>
CC-MAIN-2022-40
https://appsec-labs.com/portal/2014/03/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00664.warc.gz
en
0.892349
238
3.34375
3
What is a ransomware attack?

Hackers launch ransomware attacks after an initial attack, such as a brute force or credential stuffing attack, that gives them access to the sensitive data they're targeting. Once the attacker is in, they lock down files, folders, and databases and demand the victim pay a ransom to regain access to the infected system.

A more modern version of the ransomware attack is called cryptoransomware. Rather than changing file access permissions, the attacker encrypts the files and databases on the infected systems and then demands payment via cryptocurrency (which is much harder for law enforcement agencies to track) to receive the decryption key. The attackers will threaten to release the sensitive information they obtained if the ransom is not paid.

Popular ransomware variants

While there are quite a few ransomware variants in the wild, the following are currently among the most common:
- Ryuk: First spotted in 2018, this ransomware variant is delivered via a malicious attachment in a phishing e-mail or through compromised accounts. Attackers encrypt the files, then demand a ransom. Ryuk ransomware attacks are the most costly: average ransom demands are well over $1 million.
- REvil (Sodinokibi): REvil, otherwise known as Sodinokibi, may not demand as high a ransom as Ryuk, but it has five times the market share. This ransomware variant targets mainly enterprise networks, and the Russian REvil group carries out the attacks. REvil has increasingly employed "double extortion" in its attacks, which involves not only encrypting files but also threatening data disclosure if the victim doesn't pay the ransom. It also offers Ransomware-as-a-Service (RaaS) options, which broaden its reach.
- Conti V2: Conti ransomware attacks are as common as REvil attacks and use many of the same tactics. The two ransomware families account for nearly a third of all known attacks. What makes Conti V2 so damaging is the speed at which it works: the ransomware attacker may be able to breach a vulnerable firewall in as little as 16 minutes, according to Sophos. How is it so fast? Conti V2 is human-operated, allowing it to be more nimble, and uses fileless attack methods to evade detection.
- Avaddon: While REvil and Conti V2 are the most commonly spotted ransomware families, Avaddon quickly gained market share thanks to its SaaS-like platform, which allowed ransomware attackers to launch attacks rapidly without doing any of the groundwork normally needed to mount such an attack. While Avaddon closed up shop in mid-2021, other Ransomware-as-a-Service (RaaS) offerings have replaced it, including REvil, DarkSide, Dharma, and LockBit.

As you can see, RaaS is becoming an ever larger problem. So what is RaaS, and why is it so dangerous?

What is ransomware-as-a-service (RaaS)?

RaaS allows people to create ransomware without needing any technical skills or knowledge of malware development. RaaS kits can be purchased on the dark web and are occasionally found on the open web if ransomware attackers know where to search. The primary revenue models for RaaS include:
- Subscriptions (monthly or annual)
- Affiliate networks, where the ransomware operators get a cut of any ransom payments
- Licensing fees

Like SaaS, RaaS is sold on these sites, complete with 24/7 support, add-on bundles, user reviews, community forums, and more. Fees range from as little as $40 per month to thousands of dollars, all scalable -- although payment is typically in cryptocurrency.
The most sophisticated RaaS platforms even offer dashboards that the attacker can use to launch and monitor infections, information on their targets, and maintain their accounts much like a legitimate SaaS. The simplicity of RaaS is the primary reason why RaaS providers are the source of many recent attacks. How ransomware attacks work In the planning phase the ransomware perpetrators look for targets. The goal is to target sensitive files that will extract the highest ransom: this is why mid- and larger-sized businesses are commonly targeted. These organizations have the highly sensitive data attackers want and large enough bank accounts to pay the ransom. The attacker searches for security holes in the victim's computer or attempts to break in using a phishing email using malicious websites or attachments, disguising the malware as a legitimate file. This phase is by far the most important -- if the attacker doesn't find valuable enough data or is thwarted by good enterprise network security, the whole attack may fail. Once planning is complete and a target is selected, the attacker gets to work. They break into the device or network using an operating system, network, or application vulnerability. From there, the ransomware software begins to encrypt the targeted files, folders, or databases -- typically looking for specific file extensions or specific sensitive files stored on the victim's computer. Depending on the ransomware attack used, the victim might not even know it is happening until it's too late. At the same time, the attacker may also take the opportunity to install additional malware to help spread the ransomware throughout the company network and beyond. The first time a victim may notice a ransomware infection has occurred is upon trying to access an encrypted file. The user might find a digital ransom note instead of the file they're trying to access. Also, a pop-up may appear on the infected computer's screen demanding a ransom to restore access to the user's files. Next, the attacker threatens deletion or disclosure of the data. At this point, paying the ransom is the only way to get access to the decryption key necessary to restore access. Payment is often demanded in bitcoin to prevent disclosure of the attacker's identity. Remember, a ransom payment doesn't always guarantee access to the infected computer or device. The attacker may simply take off with the money, leaving you with no decryption key and no access to your encrypted file. Ransomware attacks in the news Ransomware is quite common, with several recent high-profile attacks in the news: - Colonial Pipeline: Perhaps the most consequential ransomware attack of the last several years, April 2021's Colonial Pipeline attack disrupted gas supplies all along the east coast of the US, and was the result of a compromised account. Carried out by the DarkSide ransomware group, the attack caused consumers to panic-buy gas. While Colonial Pipeline eventually agreed to pay the $4.4 million bitcoin ransom, the FBI was able to recoup most of the company's money through monitoring cryptocurrency transactions. - Acer: Acer's attack, carried out by the REvil hacker group, stands out as one of the largest ransoms demanded to date with a $50 million request. Hackers used a vulnerability in the Microsoft Exchange server to gain access to Acer's file servers and leaked sensitive documents to the Web. 
Making matters worse, Acer confirmed a second attack in October at its Indian offices, where hackers made off with about 60GB of stolen data. - JBS: REvil hackers were also behind a ransomware attack involving international meat processor JBS in spring 2021. It appears it was tracked to compromised accounts found on the Dark Web. The attackers moved 45GB of stolen data to a sharing site known as Mega, and demanded a ransom of $11 million. While it was able to recover most of its data without the private key, JBS chose to pay the attackers. JBS’s CEO Andre Nogueira argued the payment was "to prevent any potential risk for our customers." How to protect against ransomware attacks The best way to handle ransomware is to not get infected with it in the first place, but unfortunately ransomware gangs can be persistent in their attempts to hack databases. Regardless of whether you are a victim of a ransomware attack or not, take the following steps now to protect against and prevent ransomware attacks. - Eliminate passwords: Replacing passwords with stronger authentication prevents stolen credentials, which are the main entry point for ransomware, from accessing an organization’s network. Learn more about passwordless authentication today and keep your most critical applications secure. - Set up risk-based authentication and a zero trust architecture: Use risk signals like IP address, geolocation, device type, and more to set up a strong access control policy and never trust, always verify, anyone trying to access the network. - Consider continuous data backups: Data backup is often an afterthought. Instead of a once nightly (or weekly, etc.), consider a strategy of much more frequent backups. This effectively nullifies the damage of a ransomware attack since they are betting you don't have a recent copy of the encrypted files readily available. - Secure your backups: Place the backups on a server that isn't accessible for deletion or modification from the server where the data resides. This prevents your backup files from falling victim to an attack (attackers will look for backup files, too!). - Train your users on how to spot ransomware threats: Many ransomware attacks start through phishing emails. This doesn't just include malicious attachments: many of today's phishing emails use links instead, which could look legitimate. - Keep your organization's network, devices, and applications up to date: The attacker's entry point is typically an exploit in the network protocols, devices, or applications your organization uses. Keeping these up to date is a priority. - Regularly scan for malware, and keep your antivirus software up to date: Make sure you keep this up to date, as it provides you an extra layer of protection just in case the user accidentally installs malicious software or clicks on a malicious link. - Avoid unsecured Wi-Fi networks: Train your users to avoid Wi-Fi networks that aren't secure when accessing your corporate network and applications remotely. Attackers can snoop over these unsecured networks and potentially discover usernames and passwords. If you are in a situation where it is too late to protect against ransomware since you have already fallen victim there are ways to respond to ransomware so you can save as much data as possible. Responding to a ransomware infection If you are a victim of a ransomware attack, it is crucial to take a moment and understand the situation first. Paying the ransom should be the last thing you do, not the first. 
The FBI recommends not paying the ransom at all. Gather information that you have, and contact law enforcement immediately. Next, if you can determine the ransomware variant the attacker used, search for free decryptors from trusted security sources. While a decryptor won't be available for some ransomware variants, it's an excellent place to start. We recommend leaving this work to an experienced technician: the wrong decryptor may make the problem worse. If you're able to track down the affected device or computer, disconnect that from the internet immediately. Once disconnected, the ransomware software can no longer communicate with the command and control server and cannot spread further. Scan all devices and systems with antivirus software, and be sure to alert any third parties with access so that they can take steps to protect themselves. At this point, investigate your data backups, and begin the ransomware removal process. You may be able to restore files from these backups to replace the encrypted data.
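As a small, hypothetical illustration of the backup advice above: before restoring, it helps to verify that backup files still match a checksum manifest kept off-host, so you are not restoring from copies the attacker has touched. The directory path below is a placeholder, and this is only a sketch of the idea, not a complete backup-verification tool.

import java.nio.file.*;
import java.security.MessageDigest;
import java.util.HexFormat;

// Computes SHA-256 checksums for files in a backup directory so they can be compared
// against a manifest stored off-host before restoring from them.
public class BackupChecksums {
    public static void main(String[] args) throws Exception {
        Path backupDir = Path.of("/mnt/backups/latest");  // placeholder path

        try (var files = Files.walk(backupDir)) {
            files.filter(Files::isRegularFile).forEach(file -> {
                try {
                    MessageDigest md = MessageDigest.getInstance("SHA-256");
                    // fine for a sketch; stream the file in chunks for very large backups
                    md.update(Files.readAllBytes(file));
                    System.out.println(HexFormat.of().formatHex(md.digest()) + "  " + file);
                } catch (Exception e) {
                    System.err.println("Could not hash " + file + ": " + e.getMessage());
                }
            });
        }
    }
}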
<urn:uuid:01204a78-0e92-4794-91f3-6d58d6ba0fe2>
CC-MAIN-2022-40
https://www.beyondidentity.com/glossary/ransomware
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00664.warc.gz
en
0.938714
2,353
2.828125
3
DNS over TLS (DoT) and DNS over HTTPS (DoH) are two new versions of DNS designed to encrypt the communication between DNS clients and recursive DNS servers. Both are good things, closing a longstanding gap in which DNS queries were transmitted unencrypted. The unfortunate part is how they have been implemented. Apple's implementation, in particular, effectively changes the way DNS works at the operating system level.

New Changes Bring New Problems

So why do these new DNS privacy standards create problems? Let's first talk briefly about one of the two protocols: DoT. DoT traffic is encrypted, but its use of a well-understood port (TCP port 853) makes it easier for network administrators to monitor and control encrypted DNS when it appears. Similar to standard DNS, it's also used by a single stub resolver, making it easier to manage on a client-by-client basis. That makes it easy to control and monitor.

DoH is the real troublemaker here. It leverages HTTPS to provide encryption and authentication between a DNS client and server. But since it uses the same TCP port (443) that all HTTPS traffic uses, troubleshooting DoH-related DNS issues becomes a real challenge because DoH-based DNS requests cannot be distinguished from regular HTTPS requests. DoH also relies on a handful of centralized, external cloud DNS providers, so users essentially bypass existing corporate DNS services: DoH DNS requests are encrypted and invisible to third parties, including cybersecurity software that may rely on passive DNS monitoring to block requests to known malicious domains.

But that's not all. Apple's recently released versions of iOS and macOS now support both the DoT and DoH protocols. That's an important development because it affects a far greater number of devices: as users upgrade to newer versions of Apple's operating systems on their handhelds, tablets, desktops, and laptops, these changes are deployed automatically, without user interaction. From an implementation standpoint, these settings can be applied selectively, ranging from the entire operating system (through MDM profiles or a network extension) down to individual applications or even selected network requests within applications. The latter is the most interesting: developers can use DoT and DoH directly from individual apps, which potentially opens the door to a proliferation of resolvers maintained by various entities. Beyond Apple, we expect future Microsoft operating systems to adopt similar implementations.

Communications Service Providers Are Not Immune

Communications service providers (CSPs) have invested heavily in their networks to provide a safe, reliable and fast network experience for their subscribers. We all depend on the Internet more than ever: beyond providing entertainment at home, for many it is now their primary means of work, education and, for some people, their healthcare. CSPs rely on their DNS investment as a significant element of their network control plane to ensure fast network experiences and keep users safe from malware and other Internet-borne threats. If CSP DNS infrastructure is no longer in the path of subscriber DNS requests, CSPs can't offer DNS-based, network-level content filtering and protection. That means parental controls won't work, and households with children will need to set up and manage parental controls on a per-device and even per-application basis.
And as I mentioned in a previous blog, parents are not keen on being sysadmins. Let’s not forget about security. With CSP infrastructure bypassed, DNS-based security controls that protect subscribers from common threats such as lookalike domains and malware sites are also bypassed, increasing exposure to data exfiltration and malware proliferation on the provider network. It’s one of the reasons why the United States National Security Agency (NSA) recently posted guidance that organizations host their own DoH resolvers and avoid sending internal DNS traffic to external third-party resolvers. Within months of being released, we saw malware that used DoH to encrypt malicious communications allowing it to hide in regular HTTPS traffic and install malware that can steal data or add a victim to a botnet. From a regulatory perspective, implementing regional or in-country obligations will be a significant challenge when there is no business relationship or legal authority over a company outside of the CSP’s network or country. Consider that CSPs must block access to pornography for children or block access to terrorist websites in many countries. How can providers meet these obligations if their content controls no longer reside on their network? Last but not least: speed and performance. I stream—a lot. And I can’t stand buffering. DNS is the first message to initiate most IP conversations, and low latency is critical to get the best performance and experience. That’s why many CSPs have invested in on-net content caching to provide subscribers with the best content experiences. DoT and DoH DNS requests will travel off-network, effectively bypassing this local on-net content. That means that some subscribers can receive less localized content or be directed to non-geographically optimal CDN caching locations. And DNS is used to optimize connectivity to streaming video caches and other content based on the client computer’s IP address. With DoT and DoH, how CDNs localize clients (meaning, the ability to direct traffic to the most geographically optimal CDN caching node) is affected. If CSPs cannot view subscriber DNS queries, they may not be able to route subscribers to the geographically closest or most efficient CDN node. The Solution: Provide On-Network DoT and DoH Services Using DoT and DoH to encrypt traffic between DNS clients and recursive DNS servers is not going away (with the adoption of DoH taking the lead), and this adoption will only increase. The only way for CSPs to tame the beast is to deploy encrypted DNS servers on their networks. The solution? Offer on-premise DoT/DoH capabilities to the subscriber base. Luckily, Infoblox Encrypted DNS for CSPs provides efficient encryption while delivering Infoblox best-in-class DNS. Infoblox Encrypted DNS enables Infoblox to encrypt last-mile DNS communications between their endpoints and DNS servers regardless of which protocol the endpoint supports while solving performance concerns associated with the additional overhead related to encrypted DNS communications. The ability to accommodate encrypted DNS allows CSPs to maintain control over their DNS investment and continue to offer the safest and fastest experience possible with microsecond latency.
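To make the mechanics concrete, here is a minimal Java sketch of a DoH lookup using the JSON API that some public resolvers expose. The resolver URL is an example only, and the exact endpoint and response format vary by provider; the point is that the query and answer ride inside ordinary HTTPS on port 443, which is exactly why passive DNS monitoring on port 53 never sees them. An on-net CSP resolver would simply expose its own DoH endpoint in its place.

import java.net.URI;
import java.net.http.*;

// Minimal DNS-over-HTTPS lookup using the JSON variant offered by some public resolvers.
public class DohLookup {
    public static void main(String[] args) throws Exception {
        String name = "example.com";
        URI uri = URI.create("https://cloudflare-dns.com/dns-query?name=" + name + "&type=A");

        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Accept", "application/dns-json")   // ask for the JSON answer format
                .GET().build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The DNS answer arrives inside ordinary HTTPS traffic on port 443.
        System.out.println(response.body());
    }
}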
<urn:uuid:698a0dad-b924-4bbd-afc4-aa20bf0b1ae0>
CC-MAIN-2022-40
https://blogs.infoblox.com/community/dot-and-doh-impacts-for-service-providers-and-how-to-overcome-them/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00664.warc.gz
en
0.928683
1,354
2.640625
3
We are living in a fascinating era when technologies that seemed like science fiction are entering our everyday lives. Or, at least, they're making their first shaky steps toward being part of our regular reality. One great example of such tech is the direct neural interface. On the surface it is just another method of human-machine interaction, but it really is something much more revolutionary.

Modern PC input devices are the mouse, the keyboard, and the touch-enabled display. Voice and gesture input are becoming more and more pervasive. A computer is already capable of tracking your eye movements and identifying the direction a user is looking. The next stage of human-machine interaction is the direct processing of nervous system signals, which is what direct neural interfaces provide.

How it all started

The first theoretical insights into this concept are based on the fundamental research carried out by Sechenov and Pavlov, the founding fathers of conditioned reflex theory. In Russia, the development of the theory that serves as the basis for such devices started in the middle of the 20th century. Practical applications, both in Russia and abroad, appeared as long ago as the 1970s. Back in those days, scientists implanted various sensors into lab chimps and had them manipulate robots by force of thought in order to get bananas. Curiously, it worked. As they say, where there's a will, there's a way. The key challenge was that, to make the whole thing work, the scientists had to equip their 'mind machine' with a set of electronic components that occupied an entire adjacent room. Now this challenge can be tackled, as many electronic components have become minuscule. Today, any geek can play that chimp from the '70s. Not to mention the practical uses of such technologies and the merits they bring to disabled or paralyzed people.

How it works

To put it simply, the human nervous system generates, transmits, and processes electrochemical signals in different parts of the body. The 'electric part' of those signals can be 'read' and 'interpreted'. There are different ways to do this, and all of them have their advantages and drawbacks. For instance, you can collect the signals via magnetic resonance imaging (MRI), but the necessary equipment is too bulky. It is possible to inject special liquid markers to enable the process, but they may be harmful to the human body. Finally, one may use tiny sensors. Such sensors are, in general, what direct neural interfaces rely on. In our everyday lives we may find one such appliance in a neurologist's office: it looks like a rubber cap with a ton of sensors and wires attached to it. It is used for diagnostics, but who said it cannot serve other purposes?

We should differentiate between direct neural interfaces and brain-machine interfaces. The latter is a derivative of the former and deals only with the brain, while direct neural interfaces deal with different parts of the nervous system. In essence, we are talking about a direct or indirect connection to the human nervous system which we can use to transmit and receive certain signals. There are many ways to 'connect' to a human, and all of them depend on the sensors used. For example, sensors vary in how deeply they are placed; there are the following types:
- Non-submerged sensors: the electrodes are positioned on the surface of the skin, or are even slightly detached from it, like those used in the aforementioned 'medical cap'.
- Half-submerged sensors: the sensors are positioned on the surface of the brain or close to the nerves.
- Submerged sensors: the sensors are implanted directly and spliced into the brain or nerves. This method is highly invasive and has a lot of side effects: you may accidentally disturb a sensor, which would in turn trigger rejection. Well, anyway, this method is spooky, but nevertheless, it is used.

To ensure a higher-quality signal, the sensors may be moistened with special liquids, or the signal may be partially processed 'on the spot', and so on. The registered signals are then processed by purpose-made hardware and software and, depending on the purpose, yield a certain result.

Where it can be used

The first purpose that springs to mind is research. The early studies were animal experiments, and this is where it all started: mice or chimps were implanted with tiny electrodes, and their brain zones or nervous system activity were monitored. The collected data enabled in-depth studies of brain processes.

Next is medicine. Such interfaces have been used for diagnostics in neurology. When the person being examined is shown the readings, a process called neurofeedback can begin: an additional channel for the organism's self-regulation is awakened, the physiological data is presented to the user in a comprehensible manner, and they learn to manage their own condition based on the received input. Such appliances already exist and are in use.

Another promising use is neuroprosthetics, where scientists have already achieved some solid results. Should there be no way to 'repair' damaged conductive nerves in a paralyzed limb, electrodes can be implanted to conduct signals to the muscles. The same applies to artificial limbs, which can be connected to the nervous system in place of lost ones. Or, in a more extravagant case, such systems might be used to manipulate 'avatar' robots. There is one more branch we should mention: sensor-enabled prosthetics. Cochlear implants, which help people restore their hearing, are already a reality, and there are neural retina implants which partially restore eyesight.

Games leave a lot of room for imagination — and not just for virtual reality: even such a down-to-earth idea as manipulating RC toys via neural interfaces sounds fabulous. If the ability to read signals is augmented by a reverse process of transmitting signals back, stimulating certain parts of the nervous system, it (in theory) means a lot of exciting opportunities for the gaming industry.

Is it possible to read and write down thoughts? Given the present state of the tech, the answer is both yes and no. The signals we read cannot be considered thoughts per se, so one cannot 'read' what another person is thinking. Those signals are just traces, imprints of nervous system activity, mixed with noise and delivered a second late. It is not even a single neuron that is being read — it's merely the activity of a certain brain region or part of the nervous system. Catching a single thought in this pool of information does not seem feasible. On the other hand, there are MRI-based studies that 'decipher' the images a person is looking at. The reconstructed images are not very clear, but they can be used to pull together the general picture. Writing down someone's thoughts seems even more complex.
There are no openly available studies on this subject, but we can extrapolate from adjacent fields of research. Take electroshock therapy: it can be used to erase a patient's memory and affect their cognitive abilities. Likewise, deep brain stimulation is successfully used to treat Parkinson's disease.

How it relates to information security

Strange as it seems, this topic has a direct bearing on information security. We don't feel now is the time to discuss the ethics of neural interface usage; only time will settle that. But what we should bear in mind is that, like any other sophisticated piece of tech, such appliances need to be protected.

Now, when everything is connected, neural devices are bound to become connected too. An obvious case that immediately comes to mind is using the Internet to send the data obtained while diagnosing either a device or its user. Where there is a connection, there is a possibility of it being hacked. And that is before we even mention a not-so-distant future in which direct neural interfaces are ubiquitous. Imagine you use implants to enhance your eyesight or hearing, and someone uses them to spam you with visual or aural ads, or even to transmit false information for unfriendly reasons. Reading minds sounds even scarier, let alone recording memories. If it is possible to read visual images (even noisy ones) today, give it several years and this tech will evolve, so what could happen then?

It may sound like geeky mumbo-jumbo today, but, considering the pace at which new tech is developed and deployed at scale, neural appliances, and the collateral damage resulting from their careless use, may become a real problem sooner than it seems.

P.S. By the way, check out a nice gizmo which I happen to have at my work desk. Should anyone from Kaspersky Lab's Moscow office be interested, feel free to drop by and have a look when you're free.
<urn:uuid:f1eb9edc-d1e1-480e-b574-6364006fda2c>
CC-MAIN-2022-40
https://www.kaspersky.com/blog/direct-neural-interfaces/8560/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00664.warc.gz
en
0.953518
1,931
3.171875
3
Generally, a cyberattack can be described as any form of digital assault launched by one computer against another computer(s) or network. Cyberattacks can also be defined by one of two categories – active attacks and passive attacks. Active Attacks – The attacker’s end goal is to disable the device and/or its online accessibility. Passive Attacks – The attacker’s end goal is to gain access to the device and/or network data. In addition, a recent study has revealed nearly one in five businesses polled have been affected by a cyber attack in the last two years - with nearly half estimating the attack cost them over $100,000 (others had no idea how much they spent as a result of the breach). In addition, half of small to medium-sized business (SMB) owners surveyed believe their organization is vulnerable to a cyber security attack, with the same number stating they are worried about how a breach could affect their operations. However, the good news is, a solid understanding of the types of threats to which your SMB is most vulnerable, combined with the right IT solutions, will prevent your business from being victimized by malicious cyber criminals. Below are four of the most common types of cyber attacks being used against SMBs today and how to prevent them: Phishing attacks are an extremely common form of cyber attack, with four in every ten Canadian SMBs affected. While phishing attacks can come in many forms, they are typically disguised as a well-intentioned email, masquerading as a credible company, institution, or colleague. Here, cyber attackers attempt to trick the victim into entering personal or company information or to download a malicious attachment to their device. See our previous post: 6 Common Phishing Attacks and How to Prevent Them Short for “malicious software”, malware can be defined as any type of software exclusively created to cause damage to devices, wreak havoc on sites and steal data. Delivered in the form of ransomware (more on ransomware below), viruses, Trojans, spyware and more, malware can lead to serious data breaches as well as damage to devices and networks. Prevention: Employee education on email best practices, reliable anti-virus software with regular updates, regular firewall updates. Just as the name implies, ransomware is when hackers hold the victim’s device and data “ransom” until their demands (usually monetary) are met. In this scenario, the attacker will encrypt the user’s files, forcing them to either pay to obtain the decryption key, or spend potentially thousands in an attempt to restore the hijacked data. Prevention: Employee education on phishing scams and email best practices, regular operating system and security software updates, cloud solutions and data back ups. Denial of Service (DoS) Attack A DoS attack is where cyber criminals shut down an organization’s website by means of overwhelming it with traffic and data. This form of attack can render your site virtually unusable for customers and can be extremely costly in terms of both lost sales, downtime, and website repairs. Prevention: Regular traffic monitoring, security patches and security software updates. At GAM Tech, we work hard to earn the trust of small to medium-sized companies. Responsive, reliable, and accountable, you can count on us to act in the best interest of your business 24/7/365. For more information on our variety of affordable services, Book Your Free Consultation or reach out to us, we’ll be happy to tell you more.
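As a rough illustration of the "regular traffic monitoring" prevention step mentioned above for DoS attacks, here is a hypothetical Java sketch of a per-client request counter that flags any source exceeding a threshold within a time window. The threshold, window size, and IP address are illustrative placeholders only; a production defence would be considerably more sophisticated.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Tiny per-client request counter: flags a source that exceeds a threshold in a window.
public class RequestRateMonitor {
    private static final int WINDOW_MS = 10_000;   // illustrative window
    private static final int THRESHOLD = 500;      // illustrative limit per window

    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();
    private volatile long windowStart = System.currentTimeMillis();

    public boolean record(String clientIp) {
        long now = System.currentTimeMillis();
        if (now - windowStart > WINDOW_MS) {       // roll over to a new window
            counts.clear();
            windowStart = now;
        }
        int n = counts.computeIfAbsent(clientIp, k -> new AtomicInteger()).incrementAndGet();
        return n > THRESHOLD;                      // true means "suspicious volume"
    }

    public static void main(String[] args) {
        RequestRateMonitor monitor = new RequestRateMonitor();
        for (int i = 0; i < 600; i++) {
            if (monitor.record("203.0.113.7")) {   // documentation-range example address
                System.out.println("Flagged 203.0.113.7 after " + (i + 1) + " requests");
                break;
            }
        }
    }
}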
<urn:uuid:4c57e190-ae2a-481c-b3d0-d9ddcd825c03>
CC-MAIN-2022-40
https://www.gamtech.ca/category/blog/the-top-4-most-common-cyberattacks-to-threaten-smbs
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00664.warc.gz
en
0.946072
748
3.34375
3
Latency is bad. If a thread needs data from main memory, it must stop and wait for the equivalent of around 1,000 instructions before the data is returned from memory. CPU caches mitigate most of this latency by keeping a copy of frequently used data in local, high-speed memory. This allows the processor to continue at full speed without having to wait.

The problem with Internet scale is that it can't be cached. If you have 10 million concurrent connections, each requiring 10 kilobytes of data, you'll need 100 gigabytes of memory. However, processors have only 20 megabytes of cache -- 5,000 times too small to cache everything. That means whenever a packet arrives, the memory associated with that packet will not be in cache. The CPU will have to stop and wait while the data is retrieved from memory.

There are some ways around this. Specialty network processors solve this by having 8 threads per CPU core (whereas Intel has only 2 or even 1 thread per core). At any point in time, 7 threads can be blocked waiting for data to arrive from main memory, while the 8th thread continues at full speed with data from the cache. On Intel processors, we have only 2 threads per core. Instead, our primary strategy for solving this problem is prefetching: telling the CPU to read memory into the cache that we'll need in the future. For these strategies to work, however, the CPU needs to be able to read memory in parallel.

To understand this, we need to look at the details of how DRAM works. As you know, DRAM consists of a bunch of capacitors arranged in large arrays. To read memory, you first select a row, and then read each bit a column at a time. The problem is that it takes a long time to open the row before a read can take place. Also, before reading another row, the current row must be closed, which takes even more time. Most of memory latency is the time that it takes to close the current row and open the next row we want to read.

In order to allow parallel memory access, a chip will split the memory arrays into multiple banks, currently 4 banks. This allows memory requests to proceed in parallel. The CPU issues a command to memory to open a row on bank #1. While it's waiting for the results, it can also issue a command to open a different row on bank #3. Thus, with 4 banks and random memory accesses, we can often have 4 memory requests happening in parallel at any point in time. The actual reads must happen sequentially, but most of the time, we'll be reading from one bank while waiting for another bank to open a row.

There is another way to increase parallel access: using multiple sets, or ranks, of chips. You'll often see that in DIMMs, where sometimes only one side is populated with chips (one rank), but other times both sides are populated (two ranks). In high-density server memory, they'll double the size of the DIMMs, putting two ranks on each side.

There is yet another way to increase parallel access: using multiple channels. These are completely separate subsystems: not only can there be multiple commands outstanding to open a row on a given bank/rank, they can be streaming data from the chips simultaneously too. Thus, adding channels adds to both the maximum throughput and the number of outstanding transactions. A typical low-end system will have two channels, two ranks, and four banks, giving a total of eight requests outstanding at any point in time. Given a single thread, that means a C10M program with a custom TCP/IP stack can do creative things with prefetch.
It can pull eight packets at a time from the incoming queue, hash them all, then do a prefetch on each one's TCP connection data. It can then process each packet as normal, being assured that all the data is now going to be in the cache instead of waiting on memory. The problem here is that low-end desktop processors have four-cores with two-threads each, or eight threads total. Since the memory only allows eight concurrent transactions, we have a budget of only a single outstanding transaction per core. Prefetching will still help a little here, because it parallel access only works when they are on different channels/ranks/banks. The more outstanding requests, the more the CPU can choose from to work in parallel. Now, here's where DDR4 comes into play: it dramatically increases the number of outstanding requests. It increases the number of banks from the standard 4 to 16. It also increases ranks from 4 to 8. By itself, this is an 8 fold increase in outstanding commands. But it goes even further. A hot new technology is stacked chips. You see that in devices like the Raspberry Pi, where the 512-megabyte DDR3 DRAM chip is stacked right on top of the ARM CPU, looking from the outside world as a single chip. For DDR4, designers plan on up to eight stacked DRAM chips. They've added chip select bits to select which chip in the stack is being accessed. Thus, this gives us a 256-fold theoretical increase in the number of outstanding transactions. Intel has announced their Haswell-E processors with 8 hyperthreaded cores (16 threads total). This chip has 4 channels of DDR4 memory. Even a low-end configuration with only 32-gigs of RAM will still give you 16 banks times 2 ranks times 4 channels, or 128 outstanding transactions for 16 threads, or 8 outstanding transactions per thread. But that's only with unstacked, normal memory. Vendors are talking about stacked packages that will increase this even further -- though it may take a couple years for these to come down in price. This means that whereas in the past, prefetch has made little difference to code that was already limited by the number of outstanding memory transactions, it can make a big difference in future code with DDR4 memory. This post is about getting Internet scale out of desktop hardware. An important limitation for current systems is the number of outstanding memory transactions possible with existing DRAM technology. New DDR4 memory will dramatically increase the number of outstanding transactions. This means that techniques like prefetch, which had limited utility in the past, may become much more useful in the future.
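A tiny sketch of the arithmetic above, assuming the unstacked DDR4 figures quoted in the text; the point is simply how the per-thread budget of outstanding memory transactions is derived from banks, ranks, and channels.

// Reproduces the DDR4 arithmetic from the text: banks x ranks x channels gives the number
// of memory transactions that can be in flight, spread across the hardware threads.
public class MemoryParallelismBudget {
    public static void main(String[] args) {
        int banks = 16, ranks = 2, channels = 4;   // unstacked DDR4 example from the text
        int hardwareThreads = 16;                  // 8-core, 2-way hyperthreaded part

        int outstanding = banks * ranks * channels;
        System.out.println("Outstanding transactions: " + outstanding);          // 128
        System.out.println("Budget per thread: " + outstanding / hardwareThreads); // 8
    }
}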
<urn:uuid:35f998b7-1cc3-40c4-8c9a-d2a9620debe2>
CC-MAIN-2022-40
https://blog.erratasec.com/2014/08/c10m-coming-ddr4-revolution.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00664.warc.gz
en
0.949066
1,309
3.578125
4
The flight agency is looking to adapt and expand its mission to meet the demands of growing space commercialization. The SpaceX Dragon launches at Cape Canaveral in April 2016. (Image courtesy: NASA)

Space traffic is on the rise. The Federal Aviation Administration saw a 55 percent increase in the number of launch applications filed by private companies in fiscal 2016 compared to the year before, according to the agency's administrator. Those applications, according to FAA Administrator Michael Huerta's remarks at the 20th Annual Commercial Space Transportation Conference in Washington, D.C., covered a range of ever-more diverse space vehicles, from reusable and small-payload rockets to high-altitude balloons and space vehicle carrier aircraft.

Companies such as SpaceX, Blue Origin and Virgin Galactic are evidence of a growing and diversifying space vehicle industry. Huerta likened the phenomenon to the early stages of the aviation industry a century ago. "Space is not the exclusive domain of the government. Industry is democratizing space, launching more vehicles from more launch sites than ever," he said.

The FAA is moving to smooth out its own processes and technology to allow for more frequent launches with more safety checks. Those moves include a possible takeover of space traffic control, or space situational awareness, from the Department of Defense. It is also investigating how to restructure its current regulatory framework, which focuses on aircraft, to make it appropriate for a variety of airborne vehicles, including high-performance jets, balloons and the aircraft portion of hybrid systems that also contain a rocket-powered launch vehicle.

Huerta said the FAA's tests of a system that can more quickly turn air restrictions on and off over geographic areas for an increasing number of launches are progressing well at Florida's Cape Canaveral. The system, he said, is needed as not only the number of launches increases, but the locations of those launches multiply as spaceports are built across the country. "Space launches are now exceptional" events that can shut down commercial aircraft flight paths for hours, he said. With the growing number of space or near-space launches and vehicles, such delays can't continue or they will threaten commercial flight paths and companies. Tests of a system that can automatically determine and apportion airspace between commercial space launches and ordinary air traffic are underway. The Space Data Integrator system test at Cape Canaveral in December saw "extremely positive results," he said.

Along with those activities, Huerta called on the burgeoning commercial space industry to help develop categories of emerging space ports. Vehicles launched into space, or near space, can range from small rockets to large aircraft-sized "lifting bodies" that boost payloads into orbit, making a categorization system for the places they launch from necessary -- what's safe for a small rocket probably won't be for a larger vehicle.

Despite the growing workload to support commercial activities, the agency is working with a limited commercialization budget, said George Nield, the FAA's associate administrator for commercial space transportation. The department, he said, gets about $10 million, which hasn't increased under the continuing resolution. Ninety percent of that budget, he said, goes to employee salaries and benefits.
Despite the flat budget, the office saw launches and re-entries shoot up from 17 in fiscal 2016 to between 36 and 43 in fiscal 2017. He said he expects that number to double by fiscal 2018. Integrating launches with regular air traffic, he said, is a priority for his office, and he also hopes to advance work with the Defense Department on what amounts to space traffic control. To do that, he said, FAA needs authorization to use the space situational data as well as immunity from lawsuits in using the data, just like DOD.
<urn:uuid:d72e072d-9490-4d56-9dc4-6c5abdb5046c>
CC-MAIN-2022-40
https://fcw.com/it-modernization/2017/02/faa-challenged-by-growing-commercial-space-industry/257702/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00664.warc.gz
en
0.952989
774
2.671875
3
Digital communications allow businesses to cut their phone bills – in large part by reducing long distance charges to nothing or close to nothing. International long distance has not quite reached the same no-cost status, even though the per-minute cost is substantially reduced. Here's the story behind international long distance, and why cost varies so much across VoIP providers.

All the countries that are just getting the Internet now aren't "wired" the same way the United States is wired. And they are often better at wireless technology because they never buried phone lines. Their communications were built after the invention of cell towers, which are much less expensive to build. This is one of the fundamental differences in the way signals move around these countries, versus ours.

Each country works differently in the way it regulates its telecommunications, but most have built a gateway into the country, and they charge to give you access to their country's network. That's why calls to some countries cost so much more. It all depends on how much the country charges you to enter its network.

SIP providers will do the same thing in those countries as SIP providers do in the United States. They will give you access to phone numbers within the target country for a purchase price. And because that London phone number goes to an IP address, and the IP address is in New York, you can bypass those charges.

Oh great, you say, so why am I still charged 10 cents a minute for my calls to London? Here's why. Those SIP providers are routing the call across the public Internet for roughly 6,000 miles. And call quality can and does suffer. Come back next week for a full explanation of the dangers of sailing the seas of the public Internet.
<urn:uuid:879768cb-3b91-40a8-a5e3-f34c21d52d8a>
CC-MAIN-2022-40
https://www.centerpointit.com/voip-basics-5-international-long-distance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00664.warc.gz
en
0.954465
367
2.875
3
Most IoT products are not secure by design. In fact, they're a primary reason for the recent explosion in cybercrime. One report showed 2.9 billion cyber incidents in the first half of 2019. The researchers specifically called out the proliferation of the Internet of Things (IoT) — along with Windows SMB — as a leading cause. Decentralized technology through chip-to-cloud IoT may be the answer to these problems.

IoT introduces almost countless opportunities for improvement at the consumer and commercial technology levels. Personal uses include smart homes and smart cars. IoT provides overwatch across numerous cyber-physical systems in industrial settings and facilitates data mobility like never before. Rampant cybersecurity concerns surrounding IoT are leading some technologists to call for a different kind of architecture. Chip-to-cloud IoT looks like a promising way to build a more secure, useful, and decentralized technology for all.

What's Wrong with Current IoT Security?

There are likely to be 50 billion internet-connected devices worldwide by 2030. These include smartphones, personal computers, sensors, and embedded logic controllers across vehicle and machine fleets in the industrial world. Each of these devices is an internet node, and any one of them could be a weak link in the cybersecurity chain. There are several reasons why the current approach to building IoT devices is foundationally insecure:

- IoT devices usually lack the processing capacity to run onboard security tools.
- Adopters of IoT devices frequently fail to change factory-default passwords, which hackers can guess easily.
- IoT is a relatively immature technology, and it's easy for inexperienced users to misconfigure networks and settings.

Despite their limited computational power, IoT devices can still become infected with malware. For all their potential to improve enterprise planning and maintenance responsiveness, connected machines and devices can offer an unguarded backdoor into facility intranets and corporate networks. This is especially true if they're not built and deployed properly. That's why IT experts frequently subdivide networks to isolate and hide IoT devices from the rest of the organization's business or guest traffic. However, this still doesn't solve the problem of IoT's fundamental shortcomings. People need to rethink how the chips in IoT products communicate with the internet and with each other to bring about more robust IoT security.

Chip-to-Cloud Is Decentralized IoT

Chip-to-cloud architecture provides a way to create networks of secure, low-energy devices that have direct connectivity to IoT cloud platforms. Organizations everywhere are finding they must shift to a cloud-first technology stack, but so far, the collective efforts to build this infrastructure have been wasteful, inefficient and insecure. The traditional route IT professionals took to secure IoT devices was based on firewalls and other security products not hosted on the machine itself. This is the fundamental weakness that chip-to-cloud architecture seeks to address. One of the major weaknesses of IoT devices is their lack of computational power and onboard security measures. Items built with chip-to-cloud IoT security are more powerful and secure than their predecessors while remaining energy-efficient, due to several features:

- Onboard cryptography engine
- Random number generator
- Sufficient random-access memory (RAM)

These chipset features grant an additional security boon.
Thanks to the cryptographic uniqueness of each IoT node, it becomes far more difficult for a hacker to spoof its identity and hijack its access to the wider business network. If previous incarnations of IoT were a part of Web 2.0, then chip-to-cloud IoT is a decisive step toward Web 3.0, or just “Web3.” The technologists who are active in this emerging space promise these design principles will shift the power balance back to the beneficiaries of data mobility — the consumers of IoT devices or value-creators — instead of centralized providers. Data gathered at the edge can be processed and used there, too. Chip-to-cloud makes it faster than ever by eliminating traffic stops between the edge nodes and the logic program standing ready to act on the information. The phrase “secure by design” applies to chip-to-cloud IoT architecture. This new generation of tools aims to provide value-adding data-mobility capabilities to new and legacy equipment, just like current IoT. However, chip-to-cloud chipsets remain perpetually connected to the cloud. This should substantially improve asset availability and make digital communication between nodes, departments, or facilities even faster. How Does Chip-to-Cloud Decentralize IoT? Essentially, what it decentralizes most are security protocols. Traditional IoT involved placing firewalls and other third-party protections over an existing, otherwise-unprotected cloud of connected devices. Picture a single umbrella protecting a family of 12 — or trying to. Now, picture that same family of 12, but each member has their own umbrella during the downpour. This is chip-to-cloud IoT. Each device possesses a powerful, individually protected chipset, making it a far stronger link in the chain than ordinary IoT devices. Chip-to-cloud is decentralized technology in that each node reports directly to the cloud controller or analytics program rather than an intermediary. This represents another win on the security side of things, plus a way to cut down on latency and loss as data packets move between recipients. What Comes After Chip-to-Cloud? Recent global events are seeing massive investments in cloud technology broadly and chip-to-cloud architecture specifically. Companies and organizations require data to power their client relationship portals, enterprise planning tools and machine-maintenance platforms — and IoT can provide it. However, IoT won’t be sufficiently safe even with chip-to-cloud technology. Organizations need detailed device management protocols for each new IT investment, a culture focused on security vigilance and training, and the know-how to choose technology partners wisely. The transition to Web 3.0 will continue from here. In time, chip-to-cloud will likely be joined and combined with other Web 3.0 technologies, like blockchain, to further enhance its usefulness and security robustness. It will be interesting to see what the world’s real value-creators do with these powerful new tools.
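As an illustration of the "cryptographic uniqueness of each IoT node" mentioned above, here is a minimal sketch of per-device signing, assuming the third-party Python cryptography package. The article does not name a particular algorithm or API, so the Ed25519 choice, the function names, and the sample payload are all illustrative assumptions.

```python
# Hypothetical sketch: per-device keys make spoofed telemetry detectable.
# Assumes the third-party `cryptography` package; the article does not name
# a specific algorithm, so Ed25519 here is an illustrative choice.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Provisioning: the key pair would live in the chip's secure element;
# only the public half is registered with the cloud platform.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

def publish(reading: bytes):
    """Device side: sign each reading before sending it upstream."""
    return reading, device_key.sign(reading)

def ingest(reading: bytes, signature: bytes) -> bool:
    """Cloud side: accept the reading only if the signature verifies."""
    try:
        registered_public_key.verify(signature, reading)
        return True
    except InvalidSignature:
        return False

reading, sig = publish(b'{"temp_c": 21.5}')
assert ingest(reading, sig)                 # genuine device
assert not ingest(b'{"temp_c": 99}', sig)   # spoofed payload is rejected
```

Because verification only needs the registered public key, the cloud side can reject hijacked or spoofed nodes without sharing any secret with them.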
<urn:uuid:fa3290d3-b0fa-48aa-80b6-141394ba8741>
CC-MAIN-2022-40
https://www.iotforall.com/chip-to-cloud-iot-web-3-0
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00664.warc.gz
en
0.930271
1,290
2.96875
3
As companies navigate the ever-changing political landscape, data sovereignty has grown into an important topic – especially for companies looking to transition to the cloud. This topic is not going away any time soon.

Data sovereignty focuses on the idea that data has a "home." This implies that data is subject to the laws and governance structures that are in charge of the territory where the data was collected. This trend, which has come into action with the rise of cloud computing, means that many countries have passed laws to regulate and control data storage and transfers. Such regulations reflect measures of data sovereignty. As of now, over 100 countries have some sort of data sovereignty laws in action.

Data requirements used to be easy to navigate – in the days of on-premises computing. This was when data was stored in data centers owned by companies locally. Cloud data, on the contrary, is stored in different places and accessed across borders, which forces companies to pay close attention to how they are managing their data in different geographies. As SaaS, cloud and hosted services are being adopted more rapidly than ever, it is hard to ignore data sovereignty issues. Nevertheless, some organizations assumed that data sovereignty would not affect their business, but this is where they have miscalculated.

Why is data sovereignty important?

Data sovereignty is becoming more of a challenge for businesses once they're moving to the cloud. In Europe, organizations can be fined up to €20 million if they break the General Data Protection Regulation (GDPR). GDPR applies not only to EU countries, but to companies that receive data from organizations or people living in the EU. This law has imposed major restrictions for organizations that conduct international business and are executing a cloud-first approach.

What's more, some countries have their own data sovereignty laws that are difficult to decode and even harder to obey. For example, Egypt's new Law on the Protection of Personal Data (October 2020) does not impose a data localization obligation on cross-border data transfers, but the law does require a license for such transfer of data. The general rule is that data must be transferred to a jurisdiction that offers an equivalent level of cloud data protection to that provided under Egyptian law.

What is data sovereignty in the cloud?

Renewed attention is being directed to issues that arise globally from the storage of business and personal data in the cloud. Besides existing requirements to keep certain types of data within the country of origin, some nations' data sovereignty laws introduce significant limitations on data transmission outside the country of origin. Some countries also have privacy laws that limit the disclosure of personal information to third parties. This means that companies doing business in such countries may be prohibited from transferring data to a third-party cloud provider for processing or storage.

Cloud data can be subject to more than one nation's laws. Depending on where it is being hosted or by whom it is controlled, different legal obligations regarding privacy, data security and breach notification may be applicable. In some cases, large categories of data may not be allowed to be transmitted beyond the country's geographic borders or outside its jurisdiction.
Such restrictions affect businesses that employ a hybrid cloud strategy – they use multiple cloud providers that maintain local data centers and comply with the separate, local legal requirements for each country. Despite the benefits of flexibility, scalability and cost savings offered by cloud infrastructure, companies adopting cloud need to consider potential security and data sovereignty issues.

Most common questions about cloud data sovereignty:

- What is data sovereignty in the cloud?
- What happens if you ignore data sovereignty in the cloud?
- Does it really matter where data is stored, or by whom?
- How can cloud services be used safely and when can they be dangerous?

In this way, companies using cloud infrastructure cannot leave data sovereignty analysis to the chief information officer alone. The legal department, information technology security, procurement and risk managers, corporate audit – they all need to be involved in corporate governance and risk management practices.

Getting into compliance – creating a data protection strategy

So, what should companies storing data in the public cloud look for to ensure they are compliant with data sovereignty laws? If you are considering working with a service provider, be prepared to gather answers to a few key questions. As a company, you should understand how you plan to use and share your own data. You also need to make sure your service provider has systems in place. That will determine whether the service provider can comply with regulations in countries where your company plans to do business.

To create a data protection strategy which supports your data sovereignty needs, we recommend following these nine steps:

- Understand applicable data residency requirements for your business. Consider them for any location where your company operates or has customers.
- Consult with your legal and/or compliance departments to review legal requirements.
- Take inventory of all cloud data assets and classify them. Identify assets that may contain compliant data from highly restrictive countries.
- Leverage service provider capabilities based on the list of your restricted data locations.
- Encrypt your data. Service providers will have keys and other tools to perform base-level encryption. Check to determine if a specific country requires stricter practices with certain kinds of data (see the sketch at the end of this article).
- Develop a key scoping process. You can determine that you need a key that protects specific data assets or data that might touch a certain geography. That will give you the ability to customize rules to protect data specific to a particular country.
- Develop a compliance monitoring plan. If your data leaves the region, you have the ability to monitor when it leaves, so you can manage it and ensure that it stays in compliance.

Your team must understand what tools and capabilities are available from your cloud service provider. Developing a structured approach to data protection that includes classification, tagging, encryption and monitoring will make it easier to address your data sovereignty needs. Ongoing carefulness about which regulations apply to your customer base and operating environment is essential.

Multi-cloud architecture and sovereignty issues

What does a typical multi-cloud architecture consist of? It can be two or more public clouds and potentially additional private clouds. This multi-cloud environment helps eliminate dependence on a single cloud provider.
Also, in a multi-cloud environment, synchronization between different vendors is not crucial to complete the computation process. But in order to manage data orchestration, the company must stipulate storage locations. Because of sovereignty issues, your multi-cloud architecture can be at risk of violating multiple nations' data sovereignty regulations, specifically because running your applications and services with data centers scattered across geographies means some data center locations can be under strict sovereignty laws. That's why you need to carefully choose your cloud vendor.

How to select a prospective cloud vendor

Why InCountry is your ideal partner for solving cloud data sovereignty challenges

SaaS cloud solutions can be a great cost-saving solution for many companies but, depending on your data retention regulatory requirements, may fall short, putting you at risk. Remember, not all clouds are created equally. Unlike other cloud sovereignty solutions for SaaS, partnership with InCountry will become your fastest way to comply with data residency regulations and unlock new territories. With InCountry you will:

- Use our platform to localize cloud data – without repeatedly building your full stack.
- Enter new markets and win more customers – and therefore create new revenue streams from scaling internationally.
- Comply easily with ever-changing data sovereignty regulations in 90+ countries.
- Spend less time on infrastructure and software and have more time to focus on core customer and product experiences.

Frequently asked questions about InCountry

Which compliance and security audits has InCountry undergone?
Information Security is not just a buzzword for InCountry – it's our daily work, our passion and the principle that drives us. InCountry is SOC 2 Type II audited and will be certified against internationally recognized standards such as ISO 27001 for Information Security. Globally, InCountry is compliant with local data regulations in operational countries. Now InCountry also complies with Federal Law No. 152 "On Personal Data" and is FZ-152 compliant.

In what countries does InCountry operate?
InCountry's platform operates in 90+ countries.

What physical data center security do you provide?
We ensure all data center facilities we use are of at least the Tier III level and are at the minimum certified with ISO 27001 or SOC 2. The InCountry platform can also be deployed on the data center host provider of choice.

What type of encryption do you use?
InCountry uses SHA-256 for hashes and AES-256 for data payloads.

Does InCountry work with a third-party key management system?
Yes, InCountry integrates with third-party enterprise key management solutions (KMS) provided by third-party vendors and supports bring-your-own-key (BYOK) scenarios.

Who owns the data? How will the data be handled in case of contract termination?
The customer owns their data. In the case of contract termination, InCountry will retain the customer data as agreed in the contract for data retrieval purposes or migration.

Ready to take the next step with data sovereignty?

Right now, companies are facing the challenge of navigating through the rules that individual countries develop to ensure their own citizens' data is being protected. There are no universal, global standards around data sovereignty on the horizon, and "splinternet" is the new trend. Although a unified set of rules across all countries would simplify moving to the cloud, most nations have their own interests.
InCountry is here to help you solve data sovereignty issues quickly and efficiently.
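To make the "Encrypt your data" and key-scoping steps above more concrete — and to echo the AES-256/SHA-256 combination mentioned in the FAQ — here is a minimal sketch in Python using the third-party cryptography package. The region names and in-memory key handling are simplifications of my own; a real deployment would scope and store keys in a KMS or BYOK arrangement, as discussed earlier.

```python
# Hypothetical sketch of per-jurisdiction key scoping with AES-256-GCM and
# SHA-256, the primitives named in the FAQ above. In production the keys
# would come from a KMS / BYOK integration rather than living in memory.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One 256-bit key per data-residency region (illustrative region codes).
region_keys = {region: AESGCM.generate_key(bit_length=256)
               for region in ("eu", "eg", "us")}

def protect(record: bytes, region: str) -> dict:
    """Encrypt a record with the key scoped to its home region."""
    aead = AESGCM(region_keys[region])
    nonce = os.urandom(12)  # 96-bit nonce, as recommended for GCM
    return {
        "region": region,
        "nonce": nonce,
        "ciphertext": aead.encrypt(nonce, record, region.encode()),
        "sha256": hashlib.sha256(record).hexdigest(),  # integrity/lookup hash
    }

def reveal(blob: dict) -> bytes:
    """Decryption only succeeds with that region's key, so copies stored or
    mirrored outside their home jurisdiction stay unreadable there."""
    aead = AESGCM(region_keys[blob["region"]])
    return aead.decrypt(blob["nonce"], blob["ciphertext"], blob["region"].encode())

blob = protect(b"customer-record", "eu")
assert reveal(blob) == b"customer-record"
```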
<urn:uuid:c5f85606-3327-4c60-9fba-a34db75b1121>
CC-MAIN-2022-40
https://incountry.com/blog/understanding-data-sovereignty/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00064.warc.gz
en
0.919798
1,990
2.609375
3
In less than a decade, cybersecurity has become a critical systemic issue for the world economy. More than ever, modern life and international commerce depend upon a functioning and accessible Internet. According to Cisco, 66% of the global population will have access to it within two years, by which time there will be 5.3 billion total Internet users. Further, more than 70% of the global population will have mobile connectivity, and the number of devices connected to IP networks will be three times greater than the total number of people on Earth. In this context, cyber incidents and attacks are flourishing, but they're nothing compared to what will happen as the majority of the world joins the digital mainstream. And since ISPs and hosting providers are at the leading edge of the digital tidal wave, it's no surprise that they've become prime targets for cybercriminals.

Attacks Are More Costly to Combat

For organizations, building cyber resilience is growing more complex and costly. Accenture's "9th Annual Costs of Cybercrime Study" reports that malware, Web-based attacks, and distributed denial-of-service (DDoS) attacks are the most expensive attack types and are "the main contributing factors to revenue loss." But some sectors are victimized more often than others. For ISPs or hosting providers — and e-commerce, online gaming, and gambling — uptime is paramount, and every minute of downtime equals money lost. In 2020, a quarter of enterprise respondents reported the average hourly cost of server downtime ran between $301,000 and $400,000, as highlighted on Statista.

Botnets on the Warpath

The root cause of these financial setbacks is cybercriminals who use every means at their disposal — or "carpet bombing" — to exploit network vulnerabilities to cause havoc, extort money, or both. Carpet bombing is an example of an attack type that is becoming ubiquitous due to the easy availability of cheap DDoS services on the Dark Web. Almost anyone can pay for a botnet to seriously disrupt the company or government agency of their choice. The rapidly expanding Internet of Things (IoT) might also explain the rise in carpet bombing, since most devices are poorly protected against hostile takeovers and easily converted into bots. ISPs and hosting providers are like red flags to carpet bombers. Some lack basic DDoS mitigation tools, while others use outdated ones. The results are predictable. In November 2018, customers of the Cambodian ISPs EZECOM, SINET, Telcotech, and Digi suffered a week of intermittent connections caused by a 150 Gbit/s DDoS attack. A few months later, a series of carpet-bombing DDoS attacks crippled a South African ISP for an entire day.

Extortion on the Rise

Since mid-2020, a new type of extortion campaign has moved into the spotlight. Cybercriminals claiming to be part of the nation-state-backed groups Fancy Bear, Lazarus Group, and the Armada Collective delivered ransom demands in emails that threatened the recipients with DDoS attacks of up to 2 Tbit/s unless they made a 20-Bitcoin payment within a week. Many organizations ignored the emailed threats without consequence. Others — including some well-known ones — suffered substantial operational setbacks as a result of subsequent attacks, as reported by the FBI. The FBI attributed previous extortion campaigns in 2017 and 2019 to the same cybercrime groups, which at that time targeted financial institutions, retailers, and e-commerce firms.

Attacks Growing Exponentially

But carpet bombing isn't the only cyber threat out there.
There are scores of others, and they're increasing in number and frequency so quickly that it's becoming increasingly difficult to beat them off using traditional tools or on-site appliances. One reason for this is that current attacks can be more than 100 times larger than a company's available pipe or backbone. As a result, the entire system collapses and all traffic (including legitimate IP traffic) is blackholed for hours or days. According to the US Department of Homeland Security, the scale of attacks has increased tenfold in recent years, and "it is not clear if current network infrastructure could withstand future attacks if they continue to increase in scale." In October 2019, Amazon Web Services (AWS) was hit by a major DDoS attack roughly eight hours long that prevented users from connecting. The attack caused AWS to miscategorize legitimate customer queries as malicious. Google Cloud Platform experienced problems at roughly the same time, but the company says the incident was unrelated to DDoS. In February 2020, AWS reported a 2.3 Tbit/s attack — in other words, a little under half of all the traffic that telecom BT sees on its entire UK network during a normal working day. Hosting providers and ISPs are increasingly being exposed to cyber threats, but during the pandemic, as use of these services has skyrocketed, cyberattackers have broadened their reach to include targets in vertical markets such as e-commerce, online gaming and gambling, healthcare, and educational services (think homeschooling). No DDoS mitigation solution is foolproof, so it makes sense for organizations to beef up their existing tools with as much timely and reliable threat intelligence as possible. By blocking bad actors from their networks, ISPs can avoid falling victim to carpet-bombing attacks that can cripple their operations. If they're attacked, they should never pay a ransom. Doing so only emboldens the bad guys and supports further criminal activity.
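The closing advice — use timely, reliable threat intelligence to block bad actors — can be sketched as a simple prefix-list check. The feed format and sample networks below are invented for illustration; in practice providers push such lists into routers, firewalls or scrubbing appliances rather than filtering in application code.

```python
# Hypothetical sketch: drop traffic whose source appears in a threat-intel
# blocklist. The feed format and sample networks are illustrative only; real
# deployments would program such lists into routers/firewalls (for example
# via ACLs or BGP Flowspec) rather than filter in Python.
import ipaddress

def load_blocklist(lines):
    """Parse one CIDR prefix per line, ignoring comments and blank lines."""
    nets = []
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#"):
            nets.append(ipaddress.ip_network(line, strict=False))
    return nets

def is_blocked(src_ip: str, blocklist) -> bool:
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in blocklist)

feed = ["# example threat-intel feed", "198.51.100.0/24", "203.0.113.7/32"]
blocklist = load_blocklist(feed)
assert is_blocked("203.0.113.7", blocklist)
assert not is_blocked("192.0.2.10", blocklist)
```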
<urn:uuid:f0fa67cc-f1fc-4bdf-bd3a-7aa3907e3641>
CC-MAIN-2022-40
https://www.darkreading.com/attacks-breaches/under-attack-hosting-internet-service-providers
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00064.warc.gz
en
0.957543
1,122
2.890625
3
Test your knowledge about all things technology with Techy Trivia, a fun part of your Forthright Flyer! This month’s trivia question and answer: Q: Created in 1990, what was the name of the first internet search engine? Short for “Archives,” Archie was an index of computer files stored on FTP websites and was developed by a student at McGill University in Montreal. Q: When was the first portable computer made? The first portable computer, named Osborne 1, was made by the Osborne Company and designed by American engineer Lee Felsenstein. The computer weighed approximately 24 pounds. Read more here. Look for more trivia in upcoming issues of the Forthright Flyer!
<urn:uuid:49dcb584-1795-479e-9be4-470c82f263d3>
CC-MAIN-2022-40
https://www.forthright.com/forthright-flyers-techy-trivia/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00064.warc.gz
en
0.957184
160
2.625
3
Fiber optic line drivers are much better for communications than copper-wire alternatives because they offer three main advantages: superior conductivity, freedom from interference, and security. The glass core of a fiber optic cable is an excellent signal conductor. With proper splices and terminations, fiber cable yields very low signal loss and can easily support data rates of 100 Mbps or more. Because fiber optic line drivers use a nonmetallic conductor, they don’t pick up or emit electromagnetic or radio-frequency interference (EMI/RFI). Crosstalk (interference from an adjacent communication channel) is also eliminated, which increases transmission quality. Signals transmitted via fiber optic line drivers aren’t susceptible to any form of external frequency-related interference. That makes fiber connections completely immune to damaging power surges, signal distortions from nearby lightning strikes, and high-voltage interference. Because fiber cable doesn’t conduct electricity, it can’t create electrical problems in your equipment. Electronic eavesdropping requires the ability to intercept and monitor the electromagnetic frequencies of signals traveling over a copper data wire. Fiber optic line drivers use a light-based transmission medium, so they’re completely immune to electronic bugging.
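A rough link-budget calculation shows why the "very low signal loss" point matters in practice. The attenuation, splice and connector figures below are assumed typical values for single-mode fiber, not numbers taken from the text, so treat the sketch as illustrative only.

```python
# Back-of-the-envelope link budget for a fiber run. The per-km attenuation,
# splice and connector losses below are assumed "typical" figures for
# illustration only -- they are not taken from the text above.
def received_power_dbm(tx_power_dbm, km, splices, connectors,
                       atten_db_per_km=0.35,    # assumed, single-mode @ 1310 nm
                       splice_loss_db=0.1,      # assumed per fusion splice
                       connector_loss_db=0.5):  # assumed per connector pair
    loss = (km * atten_db_per_km
            + splices * splice_loss_db
            + connectors * connector_loss_db)
    return tx_power_dbm - loss

# Example: a 20 km run with 4 splices and 2 connector pairs.
rx = received_power_dbm(tx_power_dbm=-3.0, km=20, splices=4, connectors=2)
print(f"Received power: {rx:.1f} dBm")  # about -11.4 dBm with these assumptions
```

As long as the received power stays above the receiver's sensitivity, the link sustains its full data rate, which is why well-spliced fiber supports 100 Mbps and far beyond over distances copper cannot match.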
<urn:uuid:7406e4e5-c55f-4496-8f73-7e3f6c0955f0>
CC-MAIN-2022-40
https://www.blackbox.com/en-au/insights/blackbox-explains/inner/detail/networking/connectivity/advantages-of-fiber-optic-line-drivers
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00064.warc.gz
en
0.871081
250
2.75
3
Tunneling, VPN, and IPSec

We learned about some of the more common remote access protocols in use today. You should recall that a remote access protocol allows remote access to a network or host and is usually employed in dial-up networking. Alternatively, some remote access technologies are involved in remote control of a host, such as through secure shell or Telnet. However, another class of remote access technologies does exist. This class is related to two of the fundamental aspects of information security: confidentiality and availability. This type of remote access technology allows a user to securely dial in or otherwise access a remote network over an encrypted and difficult-to-intercept connection known as a "tunnel." These protocols are therefore usually referred to as tunneling or secure remote access protocols.

A virtual private network is a pseudo-LAN that is defined as a private network that operates over a public network. It allows remote hosts to dial into a network and join the network basically as if it were a local host, gaining access to network resources and information as well as other VPN hosts. The exam will test you on your ability to recognize different applications of VPN networks. Use common sense here! Obviously, VPN networks would likely be employed in settings in which information security is essential and local access to the network is not available. For example, a VPN might be utilized by a telecommuting employee who dials into the office network.

PPTP, or Point-to-Point Tunneling Protocol, is a commonly implemented remote access protocol that allows for secure dial-up access to a remote network. In other words, PPTP is a VPN protocol. PPTP utilizes a similar framework as PPP (Point-to-Point Protocol) for the remote access component but encapsulates data into undecipherable packets during transmission. It is as its name implies: an implementation of PPP that utilizes tunneling by encapsulating data.

IPSec is a heavily tested area of the Security+ exam. You will inevitably see at least one question on IPSec and probably around three, so it will be to your benefit to understand IPSec well. IPSec allows for the encryption of data being transmitted from host-to-host (or router-to-router, or router-to-host… you get the idea) and is basically standardized within the TCP/IP suite. IPSec is utilized in several protocols such as TLS and SSL. You should know that IPSec operates in two basic modes. We will now study these modes in greater detail.

- Transport Mode – Provides host-to-host security in a LAN network but cannot be employed over any kind of gateway or NAT device. Note that in transport mode, only the packet's information, and not the headers, are encrypted.
- Tunneling Mode – Alternatively, in tunneling mode, IPSec provides encapsulation of the entire packet, including the header information. The packet is encrypted and then allowed to be routed over networks, allowing for remote access. Because of this, we are usually most interested (at least for exam purposes) in the Tunneling mode.

IPSec is comprised of two basic components that provide different functionality:

- AH – Authentication Header (AH) can provide authentication of the user who sent the information as well as the information itself.
- ESP – Encapsulating Security Protocol (ESP) can provide actual encryption services which can ensure the confidentiality of the information being sent.
L2TP, or Layer 2 Tunneling Protocol, is an alternative protocol to PPTP that offers the capability for VPN functionality in a more secure and efficient manner. Rather than actually replacing PPP as a remote access protocol or IPSec as a security protocol, L2TP simply acts as an encapsulation protocol on a very low level of the OSI model – the Data Link layer. L2TP, therefore, commonly utilizes PPP for the actual remote access service and IPSec for security. Note that L2TP operates on a client/server model with the LAC (L2TP Access Concentrator) being the client and the LNS (L2TP Network Server) acting as the server.

**Source: Wikipedia**

To become certified for CompTIA Security+, please visit the link below.
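A toy sketch can make the transport-versus-tunnel distinction above concrete. This is emphatically not a real IPSec/ESP implementation — it ignores AH, security associations, sequence numbers and the actual packet formats — it only shows that transport mode leaves the original header readable while tunnel mode encrypts the entire original packet inside a new outer header. The header strings and the AES-GCM choice are illustrative assumptions.

```python
# Toy illustration only -- NOT a real IPSec/ESP implementation. It merely
# contrasts what gets encrypted in the two modes described above.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)

def encrypt(data: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, data, None)

def transport_mode(ip_header: bytes, payload: bytes) -> bytes:
    # Transport mode: only the payload is protected; the original IP header
    # stays readable so ordinary hosts and routers can still deliver it.
    return ip_header + encrypt(payload)

def tunnel_mode(ip_header: bytes, payload: bytes, gateway_header: bytes) -> bytes:
    # Tunnel mode: the entire original packet (header + payload) is
    # encapsulated and encrypted, then wrapped in a new outer header --
    # which is why this mode is the one used for VPN-style remote access.
    return gateway_header + encrypt(ip_header + payload)

pkt_hdr, pkt_data = b"[src=10.0.0.5 dst=10.0.0.9]", b"secret application data"
print(transport_mode(pkt_hdr, pkt_data)[:27])                    # inner header still visible
print(tunnel_mode(pkt_hdr, pkt_data, b"[src=gw1 dst=gw2]")[:17]) # only the outer header is visible
```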
<urn:uuid:4389d4b3-b485-403b-9f08-f38d5998a74c>
CC-MAIN-2022-40
https://asmed.com/comptia-security-tunneling/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00064.warc.gz
en
0.929819
885
3.65625
4
Chillies - Some Like it Hot

They can be red, green, orange or almost the brown colour of chocolate. They can be pointy, round, small, club-like, long, thin, globular or tapered. Their skins may be shiny, smooth or wrinkled and their walls may be thick or thin. You guessed it - chillies!

The chilli pepper comes from a pod-like berry of various species of the capsicum family found in Latin America. Accidentally discovered by Columbus, these fiery little vegetables are utterly delicious and an essential part of the cuisine of many parts of the world. To the chilli connoisseur, each type has its own distinctive flavour, and a particular variety of chilli may be used to lend a specific dish its unique taste. There are over 2000 different varieties of chilli peppers; with only a few exceptions, most of these enchanting little pods have some degree of pungency for the palate.

The colour of chillies is no guide to the intensity of their flavour. All chillies begin life green and turn yellow or red as they ripen, although there is no rule that green or red is hotter. Generally, the smaller chillies are the most pungent. Because some chilli peppers are hotter than others, it pays to know your habaneros from your poblanos and your guajillos from your jalapenos. The heat rating of chilli peppers is related to their Scoville number. The higher the number of Scoville Heat Units (SHUs) assigned to a chilli pepper, the greater will be its burn. Most of the heat is in the seeds and the membrane, so if it's your first time trying chillies, or you don't like too much heat, discard these.

The heat comes from a chemical compound called capsaicin (the active ingredient in chillies), produced within the glands of the chilli, that intensifies as it matures. It is thought that the capsaicin in chillies serves to prevent animals from eating chillies so that birds, which are a better distributor for their seeds, can eat them. Mammals receive a burning sensation from capsaicin but birds do not. If you have braved the effects of eating fresh chillies and your mouth is burning, don't be tempted to drink water, as this can intensify the effect in the short term. Instead, have either yoghurt, sour cream, cheese, milk, cucumber or chopped mint.

Chillies are Nutritious

Chillies are a rich source of vitamin C, and a good source of niacin (vitamin B3) and beta-carotene. That is, if you can eat enough of them! One hundred grams of red chillies contain a week's supply of vitamin C, but a single chilli divided into a dinner for four makes less of a nutritional contribution. Nevertheless, chillies add loads of flavour, have virtually no fat or sodium, and are very low in calories. More than just flavour, there are some advantages of eating chillies.

Some Advantages of Eating Chillies

It has been proposed that the eating of chillies results in the production of endorphins by the body. These are the feel-good chemicals that create a temporary feeling of euphoria. Chillies are reputed to aid digestion. The hot stimulating properties of chilli make it useful in clearing sinus passages, and it has been found to reduce mucous production in certain instances. Chillies appear to have some type of anti-inflammatory property. This is thought to be linked to their ability to cut recovery time of colds and flu, when taken liberally in the early stages of these ailments. Research originating in Thailand has found that people who eat a diet high in red peppers experience a much lower incidence of blood clotting.
Scientists have now concluded that chilli does indeed possess fibrinolytic activity, meaning that it is able to break down blood clots. Chillies are also thought to help buffer pain from ailments such as arthritis, headaches and menstrual cramps. It has been postulated that a high daily intake of chillies may improve circulation to the hands and feet, in those suffering from poor circulation. Other responses of the body to eating chilli include increasing salivation in order to try and refresh the mouth, and an increased rate of sweating. How to Store and Prepare Chillies Chillies keep well in a paper bag for three or four days at room temperature. At their best, chillies will keep for a week in the vegetable rack, although they may change colour. For example, green chillies may turn red after you have stored them a few days. You can alternatively keep them in your refrigerator in a glass jar with the lid on for two to three weeks. Dried chillies will keep for a year. It’s amazing how easily the seeds of chillies get around, so it is important to always prepare chillies wearing disposable gloves, and thoroughly wash all knives, cutting boards and anything else that comes in contact with a cut chilli. Above all, one should ensure never to rub your eyes if you’ve been preparing any kind of chilli dish and not to allow chilli to come in contact with a cut or graze, as it can burn the skin. Chillies can be used to add spice to soup, stews and curries, and are familiar ingredients in Mexican, Indian, Cajun and Thai recipes. Raw chillies can also be used in marinades and salad dressings. Moderation is the key, because large doses of chilli can cause stomach problems and internal burns. When buying chillies, look for fresh, smooth, shiny skins. Avoid those going brown on top and never be tempted to cheat: always use fresh chillies to achieve an authentic, full flavour. So if you have no aversions to hot, spicy foods, start hauling out those recipe books and start experimenting with a bit of chilli. Bear in mind: chillies are also believed to be mildly addictive - the more you eat, the more you want to eat!
<urn:uuid:e4d8a4b7-5dd7-4273-934f-635725153d65>
CC-MAIN-2022-40
https://www.knowledgepublisher.com/article/284/chillies-some-like-it-hot.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00264.warc.gz
en
0.944878
1,318
3.109375
3
A proprietary version of a meta language (ML) in development by Microsoft researchers is causing a stir in the developer community. The issue at hand is whether or not Microsoft is trying to co-opt another programming language for its own purposes, which is what some experts perceived the company did when it crafted C++ and C#.

Called F#, the meta language is designed to solve extensibility issues and problems on the .NET Framework. Meta languages are used for writing tools and compilers, programs that translate source code into object code, or a computer's machine language. CAML, from which F# is derived, is a meta language that was developed by INRIA, a French research institute for computer science. One type of CAML is Objective CAML, which is used for teaching programming. But while they are extolled for such tasks and features as static type checking and fine type inference, Microsoft argues that meta languages are not perfect. Microsoft said there are problems for these languages on the .NET platform because of poor feature interaction between subtyping, overloading and type inference, and because they lack the extensibility mechanisms of class-based languages. F# was forged to address those problems for .NET programmers.

The language, written by Don Syme, of Microsoft Research's Cambridge, U.K., team, joins the software giant's family of programming languages, including C++, C# and J#. "I believe that it is reasonable to innovate with syntax and semantics in order to increase the usability of a programming language," Syme said on the Microsoft research site.

As is common with new programming languages, debate was widespread and fierce on developer-oriented sites such as Slashdot.org. One argument is that functional compilers already exist on the market to convert Standard ML onto the .NET framework, such as SML.NET. But Microsoft contends that it wants to make F# work seamlessly with C#, Visual Basic, SML.NET and other .NET programming languages.

The notion that Microsoft is standardizing another language, as it has with C++ and C#, is the major source of contention for those who have come to distrust Microsoft in the wake of antitrust concerns that came to a head in U.S. courts a few years ago. One anonymous Slashdot poster, distressed by the news, said: "I guess anyone taking computer science will have to learn this, as it is the 'language of the future.' MS has the power to dictate what the future of its monopoly is, and thus also the future of computing. And with computer science graduates familiar with this, they will start to use it. Then, others like myself will either have to learn it or lose their job."

Another rebutted that tack: "F# will be learned by people when managers and not university lecturers decide that it is something that coders need to learn or even when coders decide it's necessary for something. Stop thinking that the world is out to make you use MS products no matter what. The businesses that do the employment and the people who should be advising them (cough -you- cough) are the people who make those decisions."

Microsoft could not be reached for comment as of press time, but analysts familiar with Microsoft's adventures in programming discussed the issue. Stephen O'Grady, of Redmonk, considered the possibility that Microsoft's purpose may be less than altruistic, but downplayed it. "Does Microsoft intend to modify ML for its own purposes? Sure, but some would call that optimizing for the platform and the framework," O'Grady said.
“People do this all the time — SAP in the past has added its own custom classes to JSP libraries, and BEA introduced a proprietary format for its Workshop product in the .JWS extension (although the latter’s been submitted as a standard to the JCP). And as for C++ and C#, I haven’t noticed that they exactly put C out of business.” O’Grady said it is possible that developers may have to get used to Microsoft’s version of meta language in F#. “But I’d say that’s not exactly a near horizon threat. Plus it’s likely that if it becomes perceived as a threat, the Java community will develop CAML or Standard ML plugins to something like the Eclipse framework, if in fact they’re not available already.” ZapThink Senior Analyst Ronald Schmelzer was less concerned about Microsoft’s intentions. He noted that Microsoft is known for doing research on a variety of topics that may never see the light of day as a product, and “this might be one of them.” “However, I don’t think there is cause for alarm here. Microsoft was one of the original creators of XML, and things like SOAP and BPEL, so why would they dump it? By and large, this looks to be a focused research project by an individual or a small group exploring the topic of how to produce better compiled languages. I don’t see any indication that this would replace C, C#, C++, Java, or any other language that Microsoft supports. In fact, the Microsoft CLR that forms the basis of the .NET runtime explicitly supports things like new languages and F# might just be one of those.” But does the programming world need another # language from Microsoft? “Probably not at the moment, but this doesn’t look to be an immediately productized offering. Instead, they’ve used this implementation obviously to offer .NET programmers an ML based language to work from if that’s suitable, but just as much to prove it can be implemented in short order (less than 10K lines of code, apparently),” O’Grady said. F# isn’t the only language Microsoft is working on, although details about an “X#” are rather murky. X# is rumored to be a language focused on more intelligent processing of things like XML documents, much like ClearMethods’ Water language, but there have been denials that the company is working on this.
<urn:uuid:8223a088-2dd6-40a5-8f33-676d6e603fb8>
CC-MAIN-2022-40
https://cioupdate.com/is-f-a-major-or-minor-consideration-for-microsoft/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00264.warc.gz
en
0.957074
1,319
2.546875
3
Where to install a Smoke Detector or CO Detector:

- Mount smoke detectors on the ceiling, as close to the center as possible and away from corners – some local codes may allow wall mounting.
- Check with local codes – smoke alarms should be installed in accordance with NFPA Standard 72 (National Fire Protection Association).
- Locations not recommended for life safety devices:
  - Combustion particles are the by-products of something that is burning. Thus, to avoid nuisance alarms, do not install smoke alarms in or near areas where combustion particles are present, such as kitchens with few windows or poor ventilation, garages where there may be vehicle exhaust, or near furnaces, hot water heaters, and space heaters.
  - In air streams near kitchens. Air currents can draw cooking smoke into the sensing chamber of a smoke alarm near the kitchen.
  - In damp or very humid areas or near bathrooms with showers. Moisture in humid air can enter the sensing chamber and then turn into droplets upon cooling, which can cause nuisance alarms. Install smoke alarms at least 10 feet (3 meters) away from bathrooms.
  - Near fresh air vents or very drafty areas, such as air conditioners, heaters, and fans, where drafts can drive smoke away from smoke alarms.
<urn:uuid:ab4697bd-3252-41ad-8e82-0b3cb1b3f171>
CC-MAIN-2022-40
https://alula.com/knowledge-base/location-of-smoke-or-co-detector-installations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00264.warc.gz
en
0.915832
257
2.796875
3
A look at AV technology trends in today’s school. Just a decade ago, audiovisual (AV) technology in the classroom looked like overhead projectors. Today, that technological landscape in our schools has shifted dramatically, and more and more AV technology is being brought into classrooms to increase collaboration and active learning. Here is a look at a few of the trends that are driving the increase of AV integration in schools. - Content Is Shared Collaboratively. Wireless connectivity has become crucial for schools as teachers across districts share resources and lesson plans. Now, wireless collaboration on tablets, laptops, and even smartphones empowers students to team up with one another. Not only does this take group projects to the next level, it also enables students to leverage one another for an enhanced educational experience. - Projectors Are Different. As mentioned above, overhead projectors have had their day. In their place come new projectors with touch screen integration. Many of these even allow users to operate the touch screen with their finger rather than a stylus, making it easier for any student to hop out of his or her seat and get involved. - Videoconferencing Is On The Rise. In the past, if teachers wanted to bring in an expert to speak to their class, they needed to figure out a way to physically bring him or her in. Today, they can call on anyone, anywhere in the world to expand their class’s exposure to certain subjects. With newer, cheaper, and more reliable videoconferencing programs, students can get face time access to experts all over the world! If you would like to learn more about how new AV technology can be used to enhance the modern classroom, give D&D a call, we have a wide variety of the latest AV tools to create the 21st century classroom at your school. Contact D&D by calling 800-453-4195 or by clicking here.
<urn:uuid:93d1cb52-0fb3-475b-915d-bc6008669d55>
CC-MAIN-2022-40
https://ddsecurity.com/2015/09/10/av-technology-for-schools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00264.warc.gz
en
0.962403
393
2.59375
3
What is multicloud?

Multicloud is the use of cloud services from more than one cloud vendor. It can be as simple as using software-as-a-service (SaaS) from different cloud vendors – e.g., Salesforce and Workday. But in the enterprise, multicloud typically refers to running enterprise applications on platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) from multiple cloud service providers, such as Amazon Web Services (AWS), Google Cloud Platform, IBM Cloud and Microsoft Azure.

A multicloud solution is a cloud computing solution that's portable across multiple cloud providers' cloud infrastructures. Multicloud solutions are typically built on open-source, cloud-native technologies, such as Kubernetes, that are supported by all public cloud providers. They also typically include capabilities for managing workloads across multiple clouds with a central console (or 'single pane of glass'). Many of the leading cloud providers, as well as cloud solution providers such as VMware, offer multicloud solutions for compute infrastructure, development, data warehousing, cloud storage, artificial intelligence (AI) and machine learning (ML), disaster recovery/business continuity and more.

Value and benefits of multicloud

The overarching value of multicloud to the enterprise is that it prevents 'vendor lock-in' - performance problems, limited options, or unnecessary costs resulting from using only one cloud vendor. A multicloud strategy offers organizations:

- Flexibility to choose cloud services from different cloud providers based on the combination of pricing, performance, security and compliance requirements, and geographical location that best suits the business;
- Ability to rapidly adopt "best-of-breed" technologies from any vendor, as needed or as they emerge, rather than limiting customers to whatever offerings or functionality a single vendor offers at a given time;
- Reduced vulnerability to outages and unplanned downtime (because an outage on one cloud won't necessarily impact services from other clouds);
- Reduced exposure to the licensing, security, compatibility and other issues that can result from "shadow IT" - users independently signing up for cloud services that an organization using just one cloud might not offer.

The key to maximizing the benefits of a multicloud architecture is to manage applications and resources across the multiple clouds centrally, as if they were part of a single cloud. But multicloud management comes with multiple challenges, including:

- Maintaining consistent cloud security and compliance policies across multiple platforms;
- Consistently deploying applications across target environments (e.g., development, staging, and production) and various hosting platforms;
- Federating and visualizing events from logging and monitoring tools to achieve a singular view and configure consistent responses.

Organizations use multicloud management tools - or preferably, a multicloud management platform - to monitor and manage their multicloud deployments as if they were a single cloud environment. The best multicloud management platforms typically offer:

- Visibility into, and control over, any cloud resource, including IaaS, PaaS and SaaS offerings and associated data storage and networking resources across public cloud, private cloud and edge deployments.
- Analytics and/or artificial intelligence (AI) capabilities, including artificial intelligence for operations (AIOps).
AIOps uses AI and machine learning to sift through the 'noise' of metrics and telemetry data for insights the organization can use to streamline operations, predict availability or performance issues and even automate corrective actions across the multicloud infrastructure.

Multicloud, hybrid cloud and hybrid multicloud

Hybrid cloud is the use of both public cloud and private cloud environments, with management, orchestration and portability between them that enables an organization to use them as a single, unified, optimized IT infrastructure. Multicloud and hybrid cloud are not mutually exclusive. In fact, most enterprise hybrid clouds are hybrid multicloud, in that they include public or private cloud services from at least two cloud service vendors. Hybrid multicloud builds on multicloud benefits with:

- Improved developer productivity: Hybrid multicloud both enables and is enabled by Agile and DevOps development methods and cloud-native application technologies such as microservices architecture, containers and serverless computing.
- Enhanced security and regulatory compliance: In addition to providing multicloud's broad access to top security and compliance technologies, hybrid multicloud provides the flexibility to deploy and scale sensitive data or highly regulated workloads in the most secure and compliant ways, and the convenience of implementing security and compliance consistently across all cloud services, cloud vendors and cloud environments.
- Greater efficiency and spend optimization: Beyond the flexibility to choose the most cost-effective cloud service, hybrid multicloud provides the most granular control over where workloads are deployed and scaled, enabling organizations to further improve performance (e.g., deploy closer to users to reduce latency) and optimize spend.

Hybrid cloud also helps companies modernize existing applications faster, and connect cloud services to data on cloud or on-premises infrastructure in ways that deliver new value.

Multicloud and IBM Cloud

To advance your digital transformation, IBM offers products and services that support the multicloud approach to application development, migration, modernization, and management. Working with IBM, you'll have access to AI-powered automation capabilities, including prebuilt workflows, to make every IT services process more intelligent, freeing up teams to focus on the most important IT issues and accelerate innovation.

Take the next step:

- Deploy and maintain applications across multicloud environments consistently, with more global visibility and less management complexity, using IBM Cloud Satellite. Use a single API to create a distributed cloud location, then add host machines from any cloud, on-premises data center, or from the edge.
- Jump start hybrid multicloud development and application modernization with Red Hat OpenShift on IBM Cloud. Leverage the enterprise scale and security of IBM Cloud and the automation power of Kubernetes to deploy fully managed, highly available clusters across clouds with a single click.
- Explore IBM Cloud Pak for Watson AIOps, an AIOps solution that deploys advanced, explainable AI across the ITOps toolchain so you can confidently assess, diagnose and resolve incidents across all cloud environments and mission-critical workloads.
- Learn more about IBM multicloud services and hybrid and multicloud strategy (PDF, 427 KB).

Get started with an IBM Cloud account today.
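As a minimal take on the 'single pane of glass' idea described earlier, the sketch below iterates over one kubeconfig context per cloud and collects a combined node inventory, assuming the open-source Kubernetes Python client. The context names are placeholders of my own; a real multicloud management platform layers policy, security and AIOps on top of this kind of basic federation.

```python
# Minimal "single pane of glass" sketch using the Kubernetes Python client:
# loop over one kubeconfig context per cloud and list the nodes in each.
# The context names are assumptions, not real cluster names.
from kubernetes import client, config

CONTEXTS = ["aws-prod", "azure-prod", "ibm-cloud-prod"]  # hypothetical contexts

def list_nodes_across_clouds(contexts=CONTEXTS):
    inventory = {}
    for ctx in contexts:
        config.load_kube_config(context=ctx)   # switch to this cloud's cluster
        v1 = client.CoreV1Api()
        inventory[ctx] = [node.metadata.name for node in v1.list_node().items]
    return inventory

if __name__ == "__main__":
    for ctx, nodes in list_nodes_across_clouds().items():
        print(f"{ctx}: {len(nodes)} nodes -> {nodes}")
```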
<urn:uuid:3ac250d3-1b55-46de-979f-8ce1a1c650d3>
CC-MAIN-2022-40
https://www.ibm.com/cloud/learn/multicloud
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00464.warc.gz
en
0.899064
1,393
2.578125
3
RSSI stands for Received Signal Strength Indicator. It is an estimated measure of the power level that an RF client device is receiving from an access point or router. At larger distances the signal gets weaker and the wireless data rates get slower, leading to lower overall data throughput. Signal strength is measured by the received signal strength indicator (RSSI), which in most cases indicates how well a particular radio can hear the remotely connected client radios.
Indoor RSSI Set-Up Best Practices
Indoor RSSI Maximums
For mixed use networks:
- -75 dBm to -80 dBm
For session-based networks (such as video conferencing, Wi-Fi calling, inventory management, etc.):
- -60 dBm to -65 dBm
Recommended Tx Output Power on APs
For mixed use networks (such as web browsing, accessing email, etc.):
- 18 dBm to 20 dBm on the 5 GHz radio
- 11 dBm to 14 dBm on the 2.4 GHz radio
For session-based networks:
- 11 dBm to 15 dBm on the 5 GHz radio
- 11 dBm on the 2.4 GHz radio
Channel Width Relation to RSSI
Wider channels normally have lower RSSI values, so smaller channel widths are recommended in all but a few special circumstances when configuring EnGenius APs.
Note: Special circumstances are low AP density deployments, such as a small home network. A wider channel setting (40 MHz or 80 MHz channel widths) should only be considered after the RF deployment has been qualified.
Visualize your project's RSSI through the subscription-free network design tool, ezWiFi Planner.
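To see how these thresholds behave in practice, the short Python sketch below estimates RSSI at a given distance using a generic log-distance path-loss model and compares it with the indoor maximums listed above. The model constants (reference RSSI at 1 m, path-loss exponent) are illustrative assumptions, not EnGenius specifications.

# Minimal sketch. Assumptions: -40 dBm reference RSSI at 1 m and a path-loss
# exponent of 3.0 for a typical indoor office; real values come from a site survey.
import math

def estimated_rssi(distance_m, rssi_at_1m=-40.0, path_loss_exponent=3.0):
    """Log-distance path-loss estimate of RSSI (in dBm) at distance_m meters."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return rssi_at_1m - 10.0 * path_loss_exponent * math.log10(distance_m)

def meets_indoor_target(rssi_dbm, session_based=False):
    """Compare an RSSI reading against the indoor maximums quoted above."""
    limit = -65.0 if session_based else -80.0   # session-based vs. mixed-use
    return rssi_dbm >= limit

for d in (5, 10, 20, 40):
    rssi = estimated_rssi(d)
    print(f"{d:>3} m: {rssi:6.1f} dBm  "
          f"mixed-use OK: {meets_indoor_target(rssi)}  "
          f"session OK: {meets_indoor_target(rssi, session_based=True)}")

Running this shows why session-based traffic such as Wi-Fi calling needs APs placed closer together than a mixed-use design: the -65 dBm target is crossed at a much shorter distance than the -80 dBm one.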
<urn:uuid:39540e6e-7df7-44fa-a9c7-f77273f7dfdd>
CC-MAIN-2022-40
https://helpcenter.engeniustech.com/hc/en-us/articles/234761008-What-is-RSSI-and-its-acceptable-signal-strength-?sort_by=created_at
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00464.warc.gz
en
0.892911
355
3.015625
3
The importance of artificial intelligence in delivering the Power to Predict The latest advances in artificial intelligence (AI) use machine learning and deep learning. They give cameras the ability to self-learn and enable built-in Video Analytics to be taught to detect customer-specific objects or situations. They can also transform a task from requiring human input to a successfully automated one. And tackle more complex tasks faster, easier, and with greater accuracy. We continue to leverage the power of AI to enable users to understand their environment more deeply, so they can respond proactively. And ultimately, predict unforeseen or future situations. Having the power to predict can prevent things from happening and strengthen the protection of people and property. It can even help to avoid potential damages and uncover business opportunities that create new revenue streams or reduce operational costs.
<urn:uuid:4fdfe932-bd1b-40ec-b006-9481a803633c>
CC-MAIN-2022-40
https://www.boschsecurity.com/us/en/solutions/video-systems/video-analytics/importance-of-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00464.warc.gz
en
0.918839
161
3.34375
3
TLS (Transport Layer Security) is the mechanism by which two email servers, when communicating, can automatically negotiate an encrypted channel between them so that the emails transmitted are secured from eavesdroppers. It is becoming ever more important to use a company that supports TLS for email transmission, as more and more banks, health care organizations, and other organizations with any kind of security policy are requiring their vendors and clients to use this type of encryption for emailed communications with them. Additionally, if your email provider supports TLS for email transmission, and you are communicating with people whose providers do also, then you can be sure that all of the email traffic between you and them will be encrypted.
How do you find out if someone to whom you are sending email uses a provider whose servers support TLS-encrypted communications? We will take you through the whole process step by step, but first let us note some important truths about TLS connection encryption.
- The use of TLS encryption is negotiated/determined each and every time two servers connect to each other to transmit your email.
- Just because a server supports TLS today does not mean that it will tomorrow - server configurations can change and mistakes can be made. You can, however, be sure that an email will never be sent to someone without TLS - see Enforcing Email Security with TLS when Communicating with Banks.
- If your email is passed between more than one server, then the security of each server-to-server connection along the way needs to be negotiated separately.
- Only the recipient's externally facing email servers can be checked for TLS support. There is no way of checking the back-end servers of a service provider's email system to make sure TLS is supported all the way to delivery to the recipient's mailbox.
- Even if the sender's email servers and the recipient's email servers are configured to use TLS, both parties still need to configure their email clients to connect securely to their respective servers (for the initial sending of the message, and for the final download and viewing of the message) in order to ensure that the email message is transmitted securely during its entire trek from sender to receiver.
To check for TLS you need to use a program called "telnet", a command-prompt utility that exists in all major operating systems. Telnet will be used to simulate the SMTP connection to the server; once connected, we will issue the "ehlo" command. If the server supports TLS encryption then we will see "250-STARTTLS" in the list of supported options. The following steps will need to be repeated for every server provided by your query for MX records.
To use the Telnet program, type "telnet server_name 25", replacing "server_name" with the name of a mail server from your DNS query, and press "Enter". (25 is the standard SMTP port on which all public inbound email servers must be listening for connections.) A few lines will appear, the last starting with "220" and containing the date.
support$ telnet mx1.md3v.net 25
Connected to mx1.md3v.net.
Escape character is '^]'.
220 mx1.md3v.net ESMTP Sendmail 8.13.1/8.13.7; Thu, 05 Dec 2009 10:11:18 -0600
Now type "ehlo" followed by a domain name (your own domain name will work fine) and press "Enter". If you see within the results the line "250-STARTTLS", then that email server is configured to support the use of TLS.
Here is a full example:
250-mx1.md3v.net Hello static-222-111-111-222.myinternet.net [18.104.22.168], pleased to meet you
250-AUTH DIGEST-MD5 CRAM-MD5
250-STARTTLS
221 2.0.0 mx1.md3v.net closing connection
Connection closed by foreign host.
Once again, you will need to repeat this telnet step for each email server listed in the MX records to be sure that TLS support is enabled on all servers processing email for email addresses in that domain. Use of TLS is good practice and very secure when you know that it is implemented through the entire chain of delivery.
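If you would rather script this check than run telnet by hand, the following Python sketch performs the same EHLO test against each mail server using the standard library. The host names are placeholders; substitute the servers returned by your own MX lookup.

# Minimal sketch using Python's standard smtplib; the host list below is a
# placeholder for the servers returned by your MX record query.
import smtplib

def supports_starttls(mail_host, timeout=10):
    """Connect on port 25, send EHLO, and report whether STARTTLS is advertised."""
    try:
        with smtplib.SMTP(mail_host, 25, timeout=timeout) as server:
            server.ehlo()
            return server.has_extn("starttls")
    except (OSError, smtplib.SMTPException):
        return False  # unreachable or refused; treat as no TLS support

for host in ("mx1.example.com", "mx2.example.com"):
    print(host, "advertises STARTTLS:", supports_starttls(host))

As with the manual method, this only tells you what the externally facing server advertises today; it says nothing about the provider's back-end delivery chain.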
<urn:uuid:42efc502-1fda-445c-8cba-b84eae174ad7>
CC-MAIN-2022-40
https://md3v.com/how-to-tell-if-a-server-support-tls-for-secure-email-transmission
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00464.warc.gz
en
0.898618
1,298
2.734375
3
Quality of service (QoS) has been a hot technology since its inception. QoS combines multiple technologies that help build good traffic patterns on a computer network. To deploy a simple QoS policy that prioritizes business-critical applications on your network, follow these three steps:
- Classify network traffic
- Shape or police bandwidth
- Apply the QoS policy to a WAN interface
The example below explains QoS deployment on a network to support VoIP, which is now commonly used on most networks. VoIP packets should receive proper treatment on the network or else users will experience bad call quality across the network.
Classifying network traffic
Classification identifies the type of traffic that you want to prioritize on the network and then marks that traffic as a priority. When the marked traffic travels through networking devices, those devices recognize the prioritized traffic and give it proper treatment. To classify VoIP traffic on the network, you can use the example configuration below. Let's say all the VoIP traffic is coming from a particular subnet (192.168.1.0/24). This traffic passes through your network's Cisco edge router, which connects to the other sites. We need to classify this VoIP traffic on that Cisco router; below is a sample configuration for the router (the object-group, class, and policy names are placeholders).
Create the object group for the VoIP subnet:
object-group network VoIP
 192.168.1.0 255.255.255.0
Create the access list that matches traffic from this object group:
access-list 101 permit udp object-group VoIP any
Create class maps that match the access list and other traffic types:
class-map match-all VoIP
 match access-group 101
class-map match-all Video
 match ip dscp af41
Shaping or policing bandwidth
Policing and shaping limit the bandwidth for a defined traffic type. If an interface is configured to police traffic for a given application type, then traffic will be remarked or dropped when that application tries to use more bandwidth than its specified limit. Shaping also sets limits on bandwidth for classified data; if the bandwidth requirement is higher than the given limit, the router buffers the traffic and uses a queuing mechanism to prioritize the subsequent transmission of the buffered traffic. Below is an example of traffic shaping, arranged as a child queuing policy nested under a parent shaper; the policy names are placeholders, and the Video and class-default percentages show one reasonable way to split the remaining bandwidth:
policy-map QOS-CHILD
 class VoIP
  bandwidth remaining percent 40
 class Video
  bandwidth remaining percent 50
 class class-default
  bandwidth remaining percent 10
policy-map QOS-PARENT
 class class-default
  shape average 20000000
  service-policy QOS-CHILD
Applying the QoS policy to a WAN interface
Attach the policy to the WAN interface that carries the VoIP traffic, so that when VoIP traffic exits the network it is prioritized based on the policy on the router. For example, using a placeholder interface name:
interface GigabitEthernet0/1
 ip address X.X.X.X
 service-policy output QOS-PARENT
Traditional methods of analyzing QoS policy performance
The traditional methods of analyzing the performance of an applied QoS policy include polling the router through SNMP using third-party software and getting data from the QoS policy index MIB, class MIB, and others. Users can also log in to the router directly and execute "show" commands to analyze the policy details. These methods only give stats specific to the policy and classes, such as pre-policy, post-policy, drop, and queue metrics. However, they don't say whether the intended traffic is really getting classified under the QoS policy. To confirm that traffic is being classified according to the policy, ManageEngine NetFlow Analyzer generates reports on CBQoS policies through SNMP. Click here to learn more about this feature.
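The class maps above match on DSCP values such as AF41. As a quick reference before moving on to monitoring, the small Python snippet below converts a DSCP code point to the ToS byte you would see in a packet capture; the name-to-value table covers only a few common code points used for illustration, not a complete marking plan.

# Quick helper for checking DSCP markings in captures; the table lists only a few
# common values (EF for voice, AF41 for video, best effort) as an illustration.
DSCP = {"ef": 46, "af41": 34, "default": 0}

def tos_byte(dscp_value):
    """DSCP occupies the upper six bits of the IP ToS/Traffic Class byte."""
    return dscp_value << 2

for name, value in DSCP.items():
    print(f"{name:>7}: DSCP {value:2d} -> ToS byte 0x{tos_byte(value):02X}")

For example, AF41 (DSCP 34) appears as ToS byte 0x88 in a capture, which is a quick way to confirm that your markings survived the path.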
Advanced QoS hierarchy and drop monitoring using Cisco AVC
ManageEngine NetFlow Analyzer is also capable of monitoring QoS hierarchy and drops. The QoS policy and class details are exported in the NetFlow packets from routers, and you can see the policy and class categorization details for each flow in NetFlow Analyzer reports. To configure the Cisco device for QoS hierarchy and drop export, refer to this document.
<urn:uuid:8ba91d04-ce94-435b-a960-0f444a4b5188>
CC-MAIN-2022-40
https://blogs.manageengine.com/network/netflowanalyzer/2014/03/21/gaining-deeper-visibility-on-qos-hierarchy-with-manageengine-netflow-analyzer.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00464.warc.gz
en
0.839207
1,071
2.59375
3
What Is Network Slicing? Network slicing is a method of creating multiple unique logical and virtualized networks over a common multi-domain infrastructure. Using Software-Defined Networking (SDN), Network Functions Virtualization (NFV), orchestration, analytics, and automation, Mobile Network Operators (MNOs) can quickly create network slices that can support a specific application, service, set of users, or network. Network slices can span multiple network domains, including access, core, and transport, and be deployed across multiple operators. Network slicing, with its myriad use cases, is one of the most important technologies in 5G. It will support new services with vastly different requirements—from a connected vehicle to a voice call, which require different throughput, latency, and reliability. The use cases identified for 5G and network slicing fall into three major categories: - Extreme (or enhanced) Mobile Broadband (eMBB). These applications are very video-centric and consume a lot of bandwidth and will generate the most traffic on the mobile network. - Massive Machine-Type Communications (mMTC). This is more commonly known today as the Internet of Things, but at a much larger scale, with billions of devices being connected to the network. These devices will generate far less traffic than eMBB applications, but there will be many magnitudes more of them. - Ultra-reliable Low-Latency Communications (urLLC). These will allow for things like remote surgery or vehicle-to-X (v2x) communications and require MNOs to have mobile edge computing capacity in place. With network slicing, each slice can have its own architecture, management, and security to support a specific use case. While functional components and resources may be shared across network slices, capabilities such as data speed, capacity, connectivity, quality, latency, reliability, and services can be customized in each slice to conform to a specific Service Level Agreement (SLA). Automation will be a critical component of network slicing, as it is expected that MNOs will have to design and maintain hundreds or thousands of network slices. MNOs cannot manage this volume of slices manually at the speeds required by their customers. Instead, end-to-end automation must be used to perform zero-touch slice lifecycle management dynamically at scale, and in real time, as traffic load, service requirements, and network resources change. Once this ability is in place, however, it will open many new revenue opportunities for MNOs. With 5G, MNOs can now incorporate cloud-native applications into their networks, avoiding vendor lock-in and enabling lower-cost development, improved modification and upgrade abilities, and enhanced vertical or horizontal scalability. MNOs should strongly consider adopting cloud-native slicing applications to take advantage of this benefit and ensure they can support evolving 5G standards. How Blue Planet can help Blue Planet® is a vendor-agnostic intelligent automation software portfolio that helps MNOs transition to 5G. With the Blue Planet 5G Automation solution, MNOs can seamlessly manage the lifecycle of an end-to-end network slice by automating its design, creation, modification, and monitoring, and by provisioning underlying resources to a slice when required. 
The solution supports scaling and orchestrating network resources for 5G core, xHaul (combination of backhaul, midhaul, and fronthaul), and Radio Access Network (RAN), creating and operating network slices, and assuring service performance through closed-loop automation. Learn more about the Blue Planet 5G Automation solution.
<urn:uuid:79a3ee6d-d991-497c-acd4-94370207bec9>
CC-MAIN-2022-40
https://www.blueplanet.com/resources/what-is-network-slicing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00464.warc.gz
en
0.910945
743
2.96875
3
IPv4 is the fourth version in the development of the Internet Protocol (IP), and it routes most traffic on the Internet. However, a successor protocol, IPv6, has been defined and is in various stages of production deployment. IPv4 is a connectionless protocol for use on packet-switched networks. It operates on a best-effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. These aspects, including data integrity, are addressed by an upper-layer transport protocol such as TCP.
Originally, an IP address was divided into two parts: the network identifier was the most significant (highest-order) octet of the address, and the host identifier was the remainder of the address. The latter was therefore also called the rest field. This allowed the creation of a maximum of 256 networks, which was quickly found to be inadequate. To overcome this limit, the high-order octet of the addresses was redefined to create a set of classes of networks, in a system which later became known as classful networking. The system defined five classes: Class A, B, C, D, and E. Classes A, B, and C had different bit lengths for the network identification. The rest of an address was used, as before, to identify a host within a network, which meant that each network class had a different capacity for addressing hosts. Class D was allocated for multicast addressing and Class E was reserved for future applications.
Starting around 1985, methods were devised to subdivide IP networks. One technique that has proved flexible is the use of variable-length subnet masking (VLSM). Based on the IETF standard RFC 1517, published in 1993, this system of classes was officially replaced with Classless Inter-Domain Routing (CIDR), which expresses the number of network bits (counted from the most significant bit) as, for example, /24; the class-based scheme was dubbed classful by contrast. CIDR was designed to permit repartitioning of any address space so that smaller or larger blocks of addresses could be allocated to users. The hierarchical structure created by CIDR is managed by the Internet Assigned Numbers Authority (IANA) and the regional Internet registries (RIRs). Each RIR maintains a publicly searchable WHOIS database that provides information about IP address assignments.
Packets with a private destination address are ignored by all public routers. Two private networks (e.g., two branch offices) cannot communicate over the public Internet unless they use an IP tunnel or a virtual private network (VPN). When one private network needs to send a packet to another private network, the first private network encapsulates the packet in a protocol layer so that the packet can travel through the public network. When the packet reaches the other private network, its outer protocol layer is removed and the packet travels on to its destination. Optionally, encapsulated packets may be encrypted to secure the data while it travels over the public network.
In computer networking, network address translation (NAT) provides a method of modifying network address information in Internet Protocol (IP) datagram packet headers while they are in transit across a traffic routing device, for the purpose of remapping one IP address space into another.
The term NAT44 is sometimes used to indicate, more specifically, a mapping between two IPv4 addresses; this is the usual case while IPv4 carries the greater part of traffic on the Internet. NAT64 refers to the mapping of an IPv4 address to an IPv6 address, or the other way around. Network administrators originally used network address translation to map each address of one address space to a corresponding address in another space, for example when an organization changed Internet service providers without having a facility to announce a public route to the network.
As of 2014, NAT operates most commonly in conjunction with IP masquerading, a technique that hides an entire IP address space, usually consisting of private network IP addresses (RFC 1918), behind a single IP address in another, usually public, address space. Vendors implement this feature in a routing device that uses stateful translation tables to map the "hidden" addresses to a single IP address and that readdresses the outgoing Internet Protocol packets on exit so they appear to originate from the routing device. In the reverse communications path, the router maps responses back to the originating IP addresses using the rules ("state") stored in the translation tables. The translation table entries created in this manner are flushed after a short time unless new traffic refreshes their state.
The method enables communication through the router only when the conversation originates in the masqueraded network, since this is what creates the translation table entries. For example, a web browser in the masqueraded network can browse a website outside, but a web browser outside cannot browse a site hosted inside the masqueraded network. However, most NAT devices today allow the network administrator to configure translation table entries for permanent use. This feature is often referred to as "static NAT" or port forwarding; it allows traffic originating in the "outside" network to reach designated hosts in the masqueraded network. Because of the popularity of this technique for conserving IPv4 address space, the term NAT has become practically synonymous with IP masquerading.
As network address translation modifies the IP address information in packets, it has real consequences for the quality of Internet connectivity and requires careful attention to the details of its implementation. NAT implementations vary widely in their specific behavior in various addressing cases and in their effect on network traffic, and vendors of equipment containing NAT implementations do not ordinarily document the specifics of that behavior.
The simplest type of NAT provides a one-to-one translation of IP addresses. RFC 2663 refers to this type of NAT as basic NAT; it is often also called one-to-one NAT. In this kind of NAT, only the IP addresses, the IP header checksum, and any higher-level checksums that include the IP address are changed. Basic NAT can be used to interconnect two IP networks that have incompatible addressing. It is more common, however, to hide an entire IP address space, typically consisting of private IP addresses, behind a single IP address in another, usually public, address space.
To avoid ambiguity in the handling of returned packets, a one-to-many NAT must alter additional information, such as the TCP/UDP port numbers in outgoing communications, and must maintain a translation table so that return packets can be correctly addressed to the originating host. RFC 2663 uses the term network address and port translation (NAPT) for this type of NAT. Other names include port address translation (PAT), IP masquerading, NAT overload, and many-to-one NAT. This is the most common type of NAT and has become synonymous with the term NAT in common use.
As described above, the mechanism enables communication through the router only when the conversation originates in the masqueraded network, since this creates the translation table entries. For instance, a web browser in the masqueraded network can browse a website outside, but a web browser outside cannot browse a site hosted inside the masqueraded network. However, most NAT devices today allow the network administrator to configure translation table entries for permanent use. This feature is often referred to as "static NAT" or port forwarding and allows traffic originating in the "outside" network to reach designated hosts in the masqueraded network. The main NAT modes are described below.
Static NAT
Static NAT is a type of NAT in which a private IP address is mapped to a public IP address, where the public address is always the same IP address (i.e., it has a static address). This allows an internal host, such as a web server, to have an unregistered (private) IP address and still be reachable over the Internet. Static NAT creates a fixed translation of a real address to a mapped address; with dynamic NAT and PAT, each host uses a different address or port for each subsequent translation. Since the mapped address is the same for every consecutive connection with static NAT, and a persistent translation rule exists, static NAT allows hosts on the destination network to initiate traffic to a translated host (if an access list exists that permits it). The fundamental difference between dynamic NAT and a range of addresses for static NAT is that static NAT allows a remote host to initiate a connection to a translated host (if an access list exists that permits it), while dynamic NAT does not. You also need an equal number of mapped addresses as real addresses with static NAT.
Dynamic NAT
Dynamic NAT is a type of NAT in which a private IP address is mapped to a public IP address drawn from a pool of registered (public) IP addresses. Typically, the NAT router in a network keeps a table of registered IP addresses, and when a private IP address requests access to the Internet, the router picks an IP address from the table that is not currently being used by another private IP address. Dynamic NAT helps to secure a network because it masks the internal configuration of a private network and makes it difficult for someone outside the network to monitor individual usage patterns. Another advantage of dynamic NAT is that it allows a private network to use private IP addresses that are invalid on the Internet but useful as internal addresses. Dynamic NAT is the second NAT mode we are going to discuss. Like static NAT, it is not that common in smaller networks, but you will find it used within larger enterprises with complex networks.
The way dynamic NAT differs from static NAT is that where static NAT provides a one-to-one, inside-to-public static IP mapping, dynamic NAT does the same without making the mapping to the public IP static, and it normally uses a group of available public IPs. When looking at static NAT, we understood that for each private IP address that needs access to the Internet we would require one static public IP address. This public IP address is mapped to our internal host's IP address, which is then able to communicate with the rest of the world. With dynamic NAT, we also map internal IP addresses to real public IP addresses, but the mapping is not static: for each session our internal hosts open to the Internet their public IP address stays the same, but it is subject to change between sessions. These IPs are taken from a pool of public IP addresses that have been reserved by our ISP for our public network. With dynamic NAT, translations do not exist in the NAT table until the router receives traffic that requires translation. Dynamic translations have a timeout period after which they are purged from the translation table, making them available for other inside hosts.
Port Address Translation (PAT)
PAT is an extension to network address translation (NAT) that allows multiple devices on a local area network (LAN) to be mapped to a single public IP address. The goal of PAT is to conserve IP addresses. Home users take advantage of PAT to keep their less-than-secure machines from being completely taken over on a daily basis. When a connection attempt from the outside hits the external interface of a PAT device, it cannot be forwarded unless state exists, and state setup must be carried out from the inside, when an egress attempt is made. If this form of NAT did not exist on such a wide scale, the Internet would be a very different place: no one would ever successfully install and patch a Windows machine before a compromise without the minimal protection provided by PAT.
NAT, PAT, and IPv4 are very important parts of networking, so anyone seeking a good future in networking must understand them well enough to put them to use in their work.
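To make the one-to-many translation described above concrete, here is a small, simplified Python sketch of a PAT-style translation table. It models only outbound flows and the reverse lookup for return traffic; real implementations also handle timeouts, ICMP, checksum rewriting, and port exhaustion, and the public IP and port range used here are illustrative placeholders.

# Simplified model of a PAT (NAT overload) translation table. The public IP and
# starting port are illustrative placeholders, not values from the article.
import itertools

class PatTable:
    def __init__(self, public_ip="203.0.113.1", first_port=20000):
        self.public_ip = public_ip
        self._ports = itertools.count(first_port)   # naive allocator, no reuse
        self._out = {}   # (private_ip, private_port) -> public_port
        self._in = {}    # public_port -> (private_ip, private_port)

    def translate_outbound(self, private_ip, private_port):
        """Return the (public_ip, public_port) used for this inside host/port."""
        key = (private_ip, private_port)
        if key not in self._out:
            public_port = next(self._ports)
            self._out[key] = public_port
            self._in[public_port] = key
        return self.public_ip, self._out[key]

    def translate_inbound(self, public_port):
        """Map a returning packet back to the inside host, if state exists."""
        return self._in.get(public_port)   # None -> drop, as PAT does

pat = PatTable()
print(pat.translate_outbound("192.168.1.10", 51000))   # ('203.0.113.1', 20000)
print(pat.translate_outbound("192.168.1.11", 51000))   # ('203.0.113.1', 20001)
print(pat.translate_inbound(20001))                    # ('192.168.1.11', 51000)
print(pat.translate_inbound(40000))                    # None: no state, so dropped

The last call shows the property discussed above: an unsolicited inbound connection finds no state in the table and is dropped, which is why conversations must originate from the inside unless a static entry (port forward) is configured.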
<urn:uuid:9199d69d-265f-4670-b704-e9ef17b2c417>
CC-MAIN-2022-40
https://www.examcollection.com/certification-training/ccnp-troubleshoot-ipv4-network-address-translation-nat.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00464.warc.gz
en
0.93725
2,578
3.546875
4
Contributor: Val S Symantec Security Response has encountered an unofficial, information-stealing version of the open source Secure Shell (SSH) client PuTTY, which was compiled from source. The Trojanized version of this open source software was not hosted on the official website and instead, the attackers redirected users from a compromised, third-party website to their own site. If the user is connected to other computers or servers through the malicious version of PuTTY, then they could have inadvertently sent sensitive login credentials to the attackers. How the legitimate PuTTY tool works The open source software model allows contributors to collaborate from anywhere in the world to fix and improve projects. This practice provides useful software to users for free, but the model can have its pitfalls. Attackers can use an open source project’s code to create Trojanized versions for their own gain. In this case, the attackers created a Trojanized version of PuTTY, a popular open source SSH/Telnet/Serial console client written by software engineer Simon Tatham. PuTTY is used around the world by many system administrators, web developers, database administrators, and people connecting to a remote server through encrypted means. The most common connections made through PuTTY are from a Windows computer to a Unix/Linux server. Data that is sent through SSH connections may be sensitive and is often considered a gold mine for a malicious actor. Attackers can ultimately use this sensitive information to get the highest level of privileges on a computer or server, (known as “root” access) which can give them complete control over the targeted system. PuTTY is usually whitelisted because it is a commonly used administration tool which is frequently employed to connect system administrators to other computers and servers. It is not seen as a security threat by most firewalls and third-party security products, and the software is being actively maintained so administrators rarely need to recompile the product from its source. Unofficial Trojanized PuTTY Based on the compile date of the malicious version of PuTTY and our own telemetry, this file has been in the wild since late 2013 and it was first seen in Virus Total around the same time. However, we have only seen this sample broadly distributed recently. Distribution in 2013 was minimal, and we saw a gap of a year and a half before it reappeared again. It can be surmised that the author of the malware was performing tests to see which specific scanners would detect this file. Figure 1. The “About” information on the unofficial version of PuTTY Our telemetry reveals that the current distribution of the Trojanized version of PuTTY is not widespread and is not specific to one region or industry. The distribution of this malware appears to occur in the following manner: - The victim performs a search for PuTTY on a search engine. - The search engine provides multiple results for PuTTY. Instead of selecting the official home page for PuTTY, the victim unknowingly selects a compromised website. - The compromised website redirects the user several times, ultimately connecting them to an IP address in the United Arab Emirates. This site provides the user with the fake version of PuTTY to download. There is evidence to show users that the Trojanized version of PuTTY is suspicious, as the file is much larger in size than the latest official release. 
If users are not paying attention to the program’s file size, they may accidentally end up using the malicious version. PuTTY typically uses the standard SSH URL format for a connection: - “ssh://[USER NAME]:[PASSWORD]@[HOST NAME]:[PORT NUMBER]” However, we found that whenever the malicious version of PuTTY successfully connects to a host, it copies the connection SSH URL, encodes the URL with Base64 web safe, and sends a ping containing this string to the attacker’s web server. Figure 2. Original binary file with URL (blurred) in plain text Figure 3. SSH URL being encoded The malicious version of PuTTY also uses a specific HTTP User Agent to filter connection attempts: Figure 4. Malicious version of PuTTY uses HTTP User Agent to filter connection attempts With these credentials, the attackers can make a connection to the server. This particular attack method, using PuTTY as an example, has been blogged about before. This is not the first time that these attackers have made a Trojanized version of an open source program to steal information. Last year, the same attackers created a malicious version of the File Transfer Protocol (FTP) client, FileZilla, in order to steal victims’ information. Symantec and Norton products detect this malicious version of PuTTY as Hacktool, WS.Reputation.1, and Suspicious.Cloud.9. To ensure that you don’t become a victim to malicious versions of legitimate software, always ensure that the page you are downloading from originates from the author or publishers’ official home page. For the best possible protection, Symantec and Norton customers should ensure that they are using the latest Symantec technologies incorporated into our consumer and enterprise solutions. Finally, always keep your computer up to date with the latest virus definitions and patches.
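For analysts who encounter similar beacons, the short Python sketch below shows how a URL-safe Base64 string of the kind described above can be decoded back into the SSH URL it carries. The sample value is fabricated for illustration only; it is not taken from the actual malware traffic.

# Decoding a URL-safe Base64 beacon of the kind described above. The sample
# credentials and IP are fabricated for illustration only.
import base64

def decode_beacon(encoded: str) -> str:
    """Decode a URL-safe Base64 payload, tolerating stripped '=' padding."""
    padded = encoded + "=" * (-len(encoded) % 4)
    return base64.urlsafe_b64decode(padded).decode("utf-8", errors="replace")

sample = base64.urlsafe_b64encode(b"ssh://admin:s3cret@203.0.113.5:22").decode()
print(sample)                 # roughly what a beacon parameter might look like
print(decode_beacon(sample))  # ssh://admin:s3cret@203.0.113.5:22

Decoding suspicious outbound parameters this way can quickly confirm whether credentials were exposed and which hosts need their passwords and keys rotated.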
<urn:uuid:49335804-60f1-4a84-aa24-a1d8e11d6e7d>
CC-MAIN-2022-40
https://community.broadcom.com/symantecenterprise/communities/community-home/librarydocuments/viewdocument?DocumentKey=3def16a0-514d-4fb6-9b6e-220a36456ba7&CommunityKey=1ecf5f55-9545-44d6-b0f4-4e4a7f5f5e68&tab=librarydocuments
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00664.warc.gz
en
0.928462
1,110
2.78125
3
Learn to use 10 Excel functions recommended by the experts
Ask any Excel expert to name their favorite Excel functions, and you'll receive a variety of suggestions. Some you'll recognize, others you won't. This course examines 10 of the functions more commonly listed by the experts and removes the mystery of how and why to use them. With over 450 functions available in Excel, odds are you're not getting the full power of Excel in your workbooks. This course steps you through functions that can increase your productivity and simplify your spreadsheets.
How do I know if this course is for me?
High-quality HD content in the "Uniquely Engaging™" Bigger Brains Teacher-Learner style!
|1||Function Criteria and Syntax||6:04|
|3||EDATE and EOMONTH||7:34|
|5||INDEX and MATCH||6:31|
|6||INDEX MATCH MATCH||5:48|
|7||OFFSET and COUNTA||6:29|
<urn:uuid:6649f35b-c57f-487a-9fff-9ffd48aad667>
CC-MAIN-2022-40
https://getbiggerbrains.com/course/excel-power-functions-bigger-brains/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00664.warc.gz
en
0.836507
308
2.640625
3
vMotion is the VMware feature for moving a running VM from one ESXi host to another ESXi host. There is no service interruption when a VM moves from one host to another, and vMotion also helps to load-balance VM workloads across hosts. For example, if multiple VMs running on an ESXi host are causing a resource crunch on that host, you can take a few VMs from this host and move them to another host so that the resource crunch doesn't occur on any of the hosts.
There are certain prerequisites for a vMotion to occur, as below:
- Shared storage is required so that the VM, once moved from one ESXi host to another, can still access its file system on a shared datastore. A shared datastore is accessible from multiple ESXi hosts.
- The VM is also connected to a vSwitch. The vSwitches on both ESXi hosts participating in vMotion need to have the same configuration to provide VM connectivity, because in vMotion the IP address of the VM stays unchanged.
- vCenter is required to carry out the vMotion from one host to another.
Related – VMware Interview Questions and Answers
Let us take the example below to understand the vMotion concept.
- We have a VM running on ESXi host 1 and we want to vMotion it to ESXi host 2.
- A VMkernel port has been configured on each host to handle the vMotion traffic.
- The datastore is shared storage, so the files on the datastore are accessible from both ESXi hosts.
- The state of the VM on ESXi host 1 is copied to ESXi host 2 via the VMkernel port. An exact copy is now created on ESXi host 2. While the copying occurs, any changes made on the VM on ESXi host 1 are also copied over to the duplicate copy on ESXi host 2.
- Once the vMotion process is over, the VM on ESXi host 1 is halted and the VM on host 2 takes over.
- There won't be any noticeable outage in this process.
In vSphere 6 we can move a VM running on a host managed by one vCenter server to a host managed by another vCenter server. A VM can be migrated from one vSwitch to another using cross-network vMotion. Long-distance vMotion is also now supported for connections under 150 milliseconds of latency.
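As a way of summarizing the prerequisites above, here is a small illustrative Python sketch that checks whether two hosts are candidates for a vMotion of a given VM. The dictionaries and checks are simplified stand-ins and do not call any VMware API.

# Illustrative-only pre-check for the vMotion prerequisites described above.
# The dictionaries are simplified stand-ins, not objects from a VMware SDK.
def can_vmotion(vm, source_host, target_host):
    reasons = []
    # 1. Shared storage: the target must see every datastore the VM's files use.
    missing = set(vm["datastores"]) - set(target_host["datastores"])
    if missing:
        reasons.append(f"target host cannot see datastores: {sorted(missing)}")
    # 2. Matching virtual networking: the VM's port groups must exist on the target.
    absent = set(vm["port_groups"]) - set(target_host["port_groups"])
    if absent:
        reasons.append(f"target host is missing port groups: {sorted(absent)}")
    # 3. A vMotion-enabled VMkernel port is required on both hosts.
    if not (source_host["vmotion_vmk"] and target_host["vmotion_vmk"]):
        reasons.append("vMotion-enabled VMkernel port missing on a host")
    return (len(reasons) == 0), reasons

vm = {"datastores": ["shared-ds01"], "port_groups": ["VM-Network"]}
host1 = {"datastores": ["shared-ds01"], "port_groups": ["VM-Network"], "vmotion_vmk": True}
host2 = {"datastores": ["shared-ds01"], "port_groups": ["VM-Network"], "vmotion_vmk": True}
print(can_vmotion(vm, host1, host2))   # (True, []) when all prerequisites are met

If any check fails, the function returns the reasons, mirroring the compatibility warnings vCenter raises before it allows a migration to start.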
<urn:uuid:ba0d8404-1729-4826-aa3e-56b4ad9da512>
CC-MAIN-2022-40
https://ipwithease.com/vmotion-basics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00664.warc.gz
en
0.92299
484
2.765625
3
ITIL Change Management - Process Overview
A Change is nothing but the shifting, transitioning, or modifying of something from its current state to a desired future state. ITIL Change Management is an IT service management discipline. It is a process used for managing authorized and planned activities, such as the addition, modification, documentation, and removal of any configuration items in the configuration management database that are part of a business's live production and test environments, along with any other environment that the business wants to have under Change Management. Change Management focuses on transitioning new services, or modifications to existing services, into the IT operational environment while ensuring the changes do not create any outages in the IT environment.
The objectives of Change Management are to:
- Respond to the customer's changing business requirements while maximizing value and reducing incidents, disruption, and re-work.
- Respond to business and IT requests for change that will align the services with the business needs.
- Ensure that changes are recorded and evaluated, and that authorized changes are prioritized, planned, tested, implemented, documented, and reviewed in a controlled manner.
- Ensure that failed changes are analyzed and RCAs are done to reduce the recurrence of such incidents. Checkpoints are enforced to understand the progress of a change and to understand failures.
- Ensure that all changes to configuration items are recorded in the CMS.
- Optimize overall business risk.
The scope of Change Management can be defined as:
- IT services
- Configuration Items
Interface with other Processes
The Change Management process interfaces with various other Service Management processes, as shown in the diagram above. The diagram depicts how Change Management operates and the interfaces associated with it.
Change Management Process
Change Management Process flow
In a practical IT environment, change management operations would generally be executed as per the diagram below.
Process Description of Change Management
This process starts with a Request for Change (RFC) due to a major or minor upgrade to an existing service, or a service request requiring a change. Each change ticket or RFC is recorded so that it can be tracked, monitored, and updated throughout its life cycle. The subprocesses involved in change management are defined below.
RFC logging and review
The objective is to filter out Requests for Change which do not contain all information required for assessment or which are deemed impractical. The Change Initiator requests a Change from the Service Desk, who in turn creates a Change record. Where tool access is available, the Change Initiator raises the Change record himself. Based on the initiator's assessment and the Change Management policy/guidelines, the change is classified as an Emergency, Normal, Standard, or Minor change.
Check the RFC for completeness and practicability, and perform an initial assessment: consider the 7 Rs of Change Management.
- Who raised the change?
- What is the reason for the change?
- What is the return required from the change?
- What are the risks involved in the change?
- What resources are required to deliver the change?
- Who is responsible for the build, test, and implementation of the change?
- What is the relationship between this change and other changes?
This process is invoked if normal Change Management procedures cannot be applied because an emergency requires immediate action. ECAB will be responsible for approving any emergency changes without formally going thru the CAB meeting. The verbal or telephonic approval from ECAB will construe the change management approval. Members of CAB/EC are: - Change Initiator - Change Manager - Configuration Manager - Domain Owner(s) (As per Change requirements) - Depending on the nature of the change, the Change manager determines the other members of the ECAB. Categorization determines the required level of authorization for the assessment of a proposed Change. Significant Changes are passed on to the CAB for assessment, while minor Changes are immediately assessed and authorized by the Change Manager Assessment by the CAB Assesses a proposed Change and authorizes the Change planning phase. If required, higher levels of authority (e.g. IT Management) are involved in the authorization process. Change Manager schedules CAB Meeting. The CAB reviews RFC and related documents to understand the requirements of the change. The CAB determines if a Formal Evaluation is necessary for the proposed change. CAB understands the effects of the change and identifies predicted performance. This can be determined from the requirements mentioned in the RFC, acceptance criteria, discussion with relevant stakeholders, etc. CAB assesses risks and conducts feasibility analysis: Feasibility analysis is performed with respect to different aspects to find if the proposed change is a viable option. The analysis could include different factors like: - Cost-benefit (Cost-effectiveness) - Resource availability - Identified Risks - Impact on other services and business impact - Compliance requirements (if any) Based on the assessment findings, CAB either approves the change or rejects it. Scheduling and Building This phase authorizes detailed Change and Release planning, and to assess the resulting Project Plan prior to authorizing the Change Build phase. It involves other tasks like - Preparing the FSC after considering all approved RFCs which are still open for implementation. Also, the ongoing RFC implementations are considered which preparing the schedule of changes. Changes of a similar kind are grouped together to help release planning. The change window is reviewed with the Availability Management and ITSCM process plans for consistency. - Depending on the nature of the RFC, a decision is made on the requirement of a formal evaluation before the approval for the build is provided. - Based on the criteria for evaluation after planning and before the build, the project plan as well as the test plan are reviewed and evaluated. Deployment assesses if all required Change components have been built and properly tested, and to authorize the Change Deployment phase. Deployment determines if a formal evaluation is required before the deployment can begin. Accordingly, provide the related/relevant documents to the Change Evaluation Process and request for a formal evaluation prior to deployment. 
The CAB is convened to:
- Verify that all components required for the change have been built
- Verify that all components required for the change have been successfully tested
- Verify that the test results indicate that the change will meet its objectives
- Assess the Project Plan for conflicts with other planned/ongoing changes and check resource availability
- Review the Evaluation Reports
- Approve/reject the change for deployment
Accordingly, the change record is updated with the assessment findings of the CAB and the status of the change as appropriate. The change schedule is also updated as necessary.
Post Implementation Review and Closure
The PIR assesses the course of the change implementation and the achieved results, in order to verify that a complete history of activities is available for future reference and to make sure that any mistakes are analyzed and lessons are learned. The major activities involved are:
- Determine if a formal evaluation is required post-deployment.
- Determine if the implementation of the change achieved its objectives.
- Analyze and identify lessons learned from the whole lifecycle of the change. Collate all post-implementation analysis and assessment information in the Change Evaluation report.
- Find how the implementation of changes can be improved and update the CSI register to initiate a SIP.
- Determine if such a change is likely to recur in the future. If so, a new change model might be necessary to handle such changes.
- Update the change record with relevant inputs and set the status to "Closed" to formally close the change.
Tasks and Responsibilities
|1||RFC Logging and Review||Change Manager / CAB|
|2||Assessment and Implementation of Emergency Change||Change Manager / Practitioner|
|3||Change Assessment and Categorization||Change Manager / ECAB / Change Coordinator|
|4||Change Assessment by the CAB||Change Manager / Practitioner|
|5||Minor Change Deployment||Change Manager / CAB / Change Practitioner|
|6||Change Scheduling and Build Authorization||Change Manager / Practitioner / Coordinator|
|7||Change Deployment Authorization||Change Manager / CAB / Change Practitioner|
|8||Post Implementation Review and Change Closure||Change Manager / CAB / Change Practitioner|
|9||Change Management Support||Change Manager / Practitioner|
<urn:uuid:864f211e-fa60-435c-b2b0-3009c40136f9>
CC-MAIN-2022-40
https://www.itil-docs.com/blogs/news/itil-change-management-process
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00664.warc.gz
en
0.891985
1,822
2.515625
3
The real costs of cyber attacks are difficult to understand. The impacts of cybersecurity are terribly challenging to measure, which creates significant problems for organizations seeking to optimize their risk posture. To properly prioritize security investments, it is crucial to understand the overall risk of loss. Although managing security is complex, the principles of determining value are relatively straightforward. Every organization, small to large, wants to avoid more loss than the amount of money they spend on security. If, for example, a thief is stealing $10 from you and protection from the theft is $20, then you are left with an economic imbalance in which security costs more than the risk of loss. This is obviously not desirable. If, however, the thief is stealing $100 and the protection still costs only $20, then there is a clear economic benefit, a net gain of $80. The same principle scales to even the most complex organization regardless of the type of loss, whether it be downtime, competitiveness, reputation, or loss of assets. Without knowing the overall impacts, value calculations are near impossible, which leaves the return on investment (ROI) a vague assumption at best. Possessing a better picture of the costs and the risk of loss is key to understanding the value of investments that reduce such unpleasant ambiguity. The bad news: Cybersecurity is complex and the damages and opportunity costs are difficult to quantify. So we do what we can, with what we have, and attempt to apply a common-sense filter as a sanity check. But a lack of proficiency leads to inaccuracy, which can result in unfavorable security investments. For example, in early 2015 the FBI estimated the impact of the CryptoWall ransomware by adding up all the complaints submitted to the Internet Crime Complaint Center (IC3). The complaints and reported losses for CryptoWall totaled more than $18 million. At the time, the estimate seemed reasonable, even sizable, given it was a single piece of malware causing so much damage. The experts, myself included, were wrong. We lacked comprehensive data and similar examples for comparison. In this case, the methodology was not comprehensive and everyone knew it. Not every person being extorted would report their woes to IC3. We all expected an underestimate based upon this model, but we could not do the mental math necessary to generate a more accurate figure. So we held to the data we had. In reality, the estimate was off by more than an order of magnitude. Just a few months later, the Cyber Threat Alliance released a CryptoWall report. The CTA tracked the actual money flowing from the malware to Bitcoin wallets, the payment mechanism used by the criminals for victims to pay the ransom. One benefit of cryptocurrencies is that the transactions are public, even though the identities of the parties are obscured. The CTA’s analysis shows, thanks to the public nature of the blockchain transactions, that CryptoWall was earning $325 million. That is a huge difference! From believing $18 million in damages to having superior data showing $325 million in paid ransoms is a great improvement in understanding. The accurate figure provides a much clearer portrait of the problem and gives people better data to decide the value of security measures. But we must still recognize this is not the full story. Although the CTA did a great job of showing the ill-gotten gains of the ransomware campaign, the report still falls short of the even larger realization of loss and impact. 
The analysis does not capture the harm to those who chose not to pay, the amount of time and frustration every infected person experienced, the costs to recover from the attacks and prevent similar future malware infections, and the loss of business, trust, and productivity due to the operational impairments. There are far more pieces of the puzzle to assemble if we are to comprehend the total loss. It all comes back to value. If a clearer understanding of the total loss and impact were consistently available, would people and organizations invest in more effective security? Perhaps, but maybe not. Regardless, a clearer understanding would give everyone better information to make informed choices. Managing risk is about making good decisions and finding the optimal level of security. Absent a realistic picture of the overall detriments, the community cannot hope to weigh its options in a logical way. The shortfall in measuring CryptoWall's impact is just one droplet in a sea of examples in which analysts struggle to find the hidden costs of cyber attacks. Multiply these accounting misperceptions across the entire cyber ecosystem and we find ourselves standing on a huge iceberg, worried only about what is on the surface. In cybersecurity we must question what we believe. It is almost a certainty that we are severely underestimating the overall impact and costs of cyber attacks at a macro scale. If this is true, then our response and investment are also insufficient at the same scale. The industry must uncover the true hidden costs in order to justify the right level of security and strategic direction. Only then will cybersecurity achieve effectiveness and sustainability.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
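The simple thief example earlier in this piece generalizes to a basic expected-loss calculation. The sketch below works through that arithmetic in Python; apart from the $10/$20/$100 figures and the $18M/$325M estimates quoted above, the numbers (the 10% loss reduction and $5M control cost) are illustrative assumptions, not estimates of any real program.

# Back-of-the-envelope security ROI, following the thief example above.
# The control effectiveness and cost figures below are illustrative only.
def net_benefit(expected_loss_without, expected_loss_with, control_cost):
    """Loss avoided by a control, minus what the control costs."""
    return (expected_loss_without - expected_loss_with) - control_cost

# Thief stealing $10 vs. $100, protection costing $20:
print(net_benefit(10, 0, 20))    # -10: the control costs more than the risk it removes
print(net_benefit(100, 0, 20))   #  80: clear economic benefit

# Scaled up: whether the loss estimate is $18M or $325M changes the justified spend.
for estimated_loss in (18_000_000, 325_000_000):
    # Assume a control that removes 10% of the expected loss and costs $5M.
    print(estimated_loss, "->", net_benefit(estimated_loss, estimated_loss * 0.9, 5_000_000))

The same $5M control that looks like a net loss against an $18M impact estimate looks clearly worthwhile against a $325M one, which is exactly why the accuracy of the underlying loss figures matters.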
<urn:uuid:6f5c10c1-7c50-44ea-9ffb-6e2d410f083e>
CC-MAIN-2022-40
https://www.mcafee.com/blogs/other-blogs/mcafee-labs/hidden-costs-cyber-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00664.warc.gz
en
0.951442
1,035
2.578125
3
Database as a service (DBaaS) is one of the fastest growing cloud services; it's projected to reach $320 billion by 2025. The service allows organizations to take advantage of database solutions without having to manage and maintain the underlying technologies. DBaaS is a cost-efficient solution for organizations looking to set up and scale databases, especially when operating large-scale, complex, and distributed app components. In this article, we will discuss Database as a Service, how it works, and its benefits to your organization from both technology and business perspectives.
What is Database as a Service?
Database as a Service is defined as:
"A paradigm for data management in which a third-party service provider hosts a database and provides the associated software and hardware support."
Database as a Service is a cloud-based software service used to set up and manage databases. A database, remember, is a storage location that houses structured data. The administrative capabilities offered by the service include scaling, securing, monitoring, tuning, and upgrading the database and the underlying technologies, which are managed by the cloud vendor. These administrative tasks are automated, allowing users to focus on optimizing the applications that use database resources. The hardware and IT environment operating the database software are abstracted away, so users don't need to focus their efforts on the database implementation process itself.
The service is suitable for:
- IT shops offering cloud-based services
- End users such as developers, testers, and DevOps personnel
How DBaaS works
Depending on the offering, a DBaaS can be a managed front-end SaaS service or a component of a comprehensive Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) stack. Here's how a typical DBaaS, as part of an IaaS, works:
The first step involves provisioning a virtual machine (VM) as an environment abstracted from the underlying hardware. The database is installed and configured on the VM. Depending on the service, a predefined database system is made available to end users. Users can access this database system using an on-demand querying interface or a software system. Alternatively, developers can use a self-service model to set up and configure databases according to a custom set of parameters.
The DBaaS platform handles the backend infrastructure and operations. Database administrators (DBAs) can use simple click-based functionality to configure the management process. These tasks include, but aren't limited to:
- Upgrades and patches
- Disaster recovery
The DBaaS platform scales the instances according to the configuration and policies associated with the managed database systems. For example, for disaster recovery use cases, the system replicates the data across multiple instances. The building blocks of the underlying components, such as server resources, are controlled by the platform and rapidly provisioned for self-service database deployments.
Without a managed database service or a DBaaS, you'll have to manage and scale hardware components and technology integrations separately. This limits your ability to scale a database system rapidly to meet the technology requirements of a fast-paced business.
This is especially true for self-service use cases in DevOps environments that require rapid and cost-effective operations capabilities for their IT systems. From a business perspective, the DBaaS technology offers these benefits: - High quality of service. Cloud vendors manage database systems as part of a Service Level Agreement (SLA) guarantee to ensure that the systems are running to optimal performance. These guarantees also include compliance to stringent security regulations. The service availability is managed by the cloud vendor to high standards as per the SLA agreement. - Faster deployment. Free your resources from administrative tasks and engage your employees on tasks that lead directly to innovation and business growth—instead of merely keeping the systems running. - Resource elasticity. The technology resources dedicated for database systems can be changed in response to changing usage requirements. This is especially suitable in business use cases where the demand for database workloads is dynamic and not entirely predictable. - Rapid provisioning. Self-service capabilities allow users to provision new database instances as required, often with a few simple clicks. This removes the governance hurdles and administrative responsibilities from IT. - Business agility. Organizations can take advantage of rapid provisioning and deployment to address changing business requirements. In DevOps organizations, this is particularly useful as Devs and Ops both take on collective responsibilities of operations tasks. - Security. The technologies support encryption and multiple layers of security to protect sensitive data at rest, in transit and during processing. Database as a service Database as a service is just one more “as a service” offering that can bring agility, flexibility, and scaling to any business, no matter your size or industry. - BMC Multi-Cloud Blog - Data Storage Explained: Data Lake vs Warehouse vs Database - DBMS: An Intro to Database Management Systems - MongoDB Guide, a tutorials series - Data Visualization Guide, a tutorials series - State of the Cloud in 2021
<urn:uuid:526d9a8a-af79-41a8-96b1-c796d6bc7419>
CC-MAIN-2022-40
https://blogs.bmc.com/dbaas-database-as-a-service/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00664.warc.gz
en
0.916873
1,116
2.9375
3
As more and more schools request funds for faster internet speeds, connectivity concerns will begin to shift. In just one year, schools' requests for high-speed internet of one gigabit per second or faster have nearly doubled. An incredible 90 percent of applicants expect their bandwidth needs to keep growing as technology and the devices that depend on it move forward. The findings come from the 2016 E-Rate Trends Report by the consulting group Funds for Learning, which analyzed applicants' internet requests and surveyed their opinions. The demand for better internet speeds does not come as a shock. Many experts believe in the importance of establishing a strong backbone infrastructure of wireless access to introduce big-tech learning programs and offer students a future-ready education. Funds for Learning also found that 72 percent of all schools say that Wi-Fi is critical to fulfilling their directive, and about 43 percent of districts have an internet connection that is only one to three years old. E-Rate funding is particularly helpful because it is not part of the federal budget and does not depend on the ups and downs of Congress–and with this current administration, public schools need all the help they can get! However, the issue will not be what students can get at school; as technology progresses, demand for broader internet connectivity will only continue to increase. The barrier does not lie at school, but at home. Will students–particularly low-income students–have access to the internet speed required to connect to assignments and other schoolwork? Mobile hotspots may be able to solve this dilemma by giving students access to the required internet using a school's device. This gives them the option to do their homework at home or on the bus (if they forgot to do an assignment the previous night–it happens to the best of us!).
<urn:uuid:df7c428b-db40-40b2-9415-429b01241781>
CC-MAIN-2022-40
https://ddsecurity.com/2017/03/02/future-k-12-connectivity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00664.warc.gz
en
0.948703
384
2.59375
3
Do you know what’s on your network? In this guide, we’ll show you a few simple ways you can find an IP address on your network. We’ll also go over a few great tools that can speed up this process and give you further insight into your network. Whether you’re managing an office network, or just doing some troubleshooting at home, knowing how to find a device’s IP address is critical in solving a number of networking problems. Let’s start with the most basic method of finding your own local IP address in two easy steps. - Open a command line window. In Windows, you can do this by pressing Windows Key + R, and then typing cmd in the Run box and hitting enter. In Linux, this can be done by pressing Ctrl+Alt+T. - Type ipconfig in the command line if you’re on Windows, and ifconfig if you’re on Linux. Press enter to get a list of your PC’s IP configuration. In the command prompt, you’ll find your IPv4 address towards the top. Under it, you’ll see your subnet mask and your default gateway. This information is vital, especially if you’re having issues connecting to the internet. But what about finding other IP addresses that might be on your network? To find other IP addresses that are on your local network, type arp -a in the same command prompt window and press enter. A list of IP addresses will populate on your screen along with additional information you might find helpful. In the far left-hand column you’ll see a list of IP addresses that were discovered on your network. Towards the bottom of the list, you may see some addresses starting with 224, 239, or 255. These addresses are generally reserved by your router for administrative purposes, so these can be looked over. In the second column under Physical Addresses we’ll see each device’s physical address. This is also commonly referred to as a MAC address. A physical address is a unique identifier that every network device comes with. Unlike IP addresses, this number cannot be changed. Knowing a device’s physical address is important, especially if you want to identify exactly what is on your network. The last column displays the address’s type. There are two types of IP addresses, dynamic and static. A dynamic address means that a DHCP server gave that device an IP address. A static address means that the device was configured to use a specific IP address, one that won’t change. Static addresses are great for devices that are permanent, like printers or servers. Most home networks will be fine using DHCP to hand out IP addresses. DHCP servers assign IP addresses that have leases. Once that lease is up, that device might get a different IP address. From your command prompt, you’re a bit limited in how you can interact with devices on the network. You can attempt to ping an IP address on your network by typing ping 192.168.XX.XXX (Replace the X’s with your IP address.) Most devices will answer the ping and reply back. This is a quick and easy way to determine if there are any latency issues between your PC and that device. For further troubleshooting, we’re going to need to use some network analyzer tools. These tools are great for quickly finding devices on your local network and spotting problems fast. They also provide a lot more details than your trusty old command prompt can give you. Below are three of my favorite network scanning programs. If you need more detail and functionality from your Port Scanner then SolarWinds has you covered. 
You can easily scan your network by IP ranges and filter by ports to identify what services a device is running. SolarWinds Port Scanner is currently a Windows tool only. SolarWinds Port Scanner also automatically resolves hostnames to help you identify what devices are on your network faster. The GUI interface is easy to use and boasts a cleaner display than Angry IP Scanner. For those who live in the command line, you’ll be glad to hear this tool comes with a fully functional CLI and support for batch scripting. While these tools are great, they won’t proactively alert you to problems on your network such as duplicate IP addresses, or DHCP exhaustion. If you’re a small business administrator, or just a curious tech looking for a bit more insight into your network, SolarWinds Port Scanner is an excellent tool and is available as a free download. If you’re a network administrator like myself, you’ll find PRTG Network Monitor an extremely valuable tool when it comes to troubleshooting problems across your network. PRTG is really the evolution of a scanning tool and more of a complete network monitor. PRTG first scans the entire network in its network discovery process, listing any devices it can find. Once the scan is complete it keeps a real-time inventory of all devices and records when any are removed or added. PRTG’s sensors are perfect for in-depth testing across your networks. Ping sensors can easily monitor a device’s connectivity over the long term, and alert you to those intermittent connection problems that can be difficult to pin down. The PRTG scanner goes a step further by also incorporating database monitoring into its suite of tools. This sensor will alert you to any outages or long wait times in almost any SQL environment. Database monitoring can help identify small problems such as stalled processes before they cause major downtime. Lastly, PRTG can thoroughly monitor bandwidth and network utilization for your environment. When things slow to a crawl, you’ll be able to quickly identify which IP addresses are using the most bandwidth and pinpoint exactly what that traffic is. Is someone streaming too much Netflix? With the usage monitoring sensor, you’ll never have to guess what is hogging up your bandwidth again. This data is beautifully displayed as a chart, and broken down by IP address, protocol, or top connections. When you have a sample of data you’d like to save, you can easily export it to XML or CSV. You can even tap into the PRTG API and export your data in real-time. PRTG is a powerful on-premise tool and is geared mostly for medium to large businesses. It installs in a Windows server environment and gives you full control of what sensors you’d like to activate. If you’d like to test it out yourself you can download a 30-day free trial. One of my favorite free tools is the Angry IP Scanner. It’s compatible with Mac, Linux, and Windows and allows you to quickly find detailed information about devices that are on your network. Simply select an IP range at the top and let Angry IP Scanner work its magic. Almost instantly Angry IP will begin pulling information about the IP range you specified. At a glance you’ll be able to see what IP addresses are open for assignment, taken by devices, and how many ports each device has open. If you’re having trouble finding a device on your network, Angry IP Scanner makes it simple to track down that device for further troubleshooting. 
Angry IP Scanner has personally helped me find devices that have lost their static IP address without having to physically go to the device. If you're looking to export and save your findings, you can easily download your results in CSV, XML, or text format. It is available as a free download. No matter what size network you're troubleshooting, understanding how to find a device's IP address is essential. Whether you're quickly looking up the ARP table with the arp -a command, or utilizing a network tool like PRTG, having a solid grasp of what's on your network will help keep all of your devices safe, and yourself headache-free.
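For readers who want to script the basics themselves, here is a rough sketch of the ping-sweep idea from the command-line section above, written in Python; the subnet prefix is a placeholder, and only the ping count flag is switched between Windows and Linux.

    import platform
    import subprocess

    SUBNET = "192.168.1."      # placeholder /24 prefix; adjust to your network
    COUNT_FLAG = "-n" if platform.system() == "Windows" else "-c"

    alive = []
    for host in range(1, 255):
        ip = f"{SUBNET}{host}"
        # Send a single echo request and discard the command's own output.
        result = subprocess.run(
            ["ping", COUNT_FLAG, "1", ip],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:   # a zero exit code generally means a reply came back
            alive.append(ip)

    print("Responding hosts:", alive)

After a sweep like this, running arp -a will usually show the MAC addresses of the hosts that answered, which helps when you need to match an IP address to a physical device.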
<urn:uuid:6bd90c31-baf9-47c0-af85-8c9067749913>
CC-MAIN-2022-40
https://www.itprc.com/how-to-find-an-ip-address-on-your-network/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00664.warc.gz
en
0.930586
1,681
3.015625
3
What is spoofing? Spoofing definition Spoofing, as it pertains to cybersecurity, is when someone or something pretends to be something else in an attempt to gain our confidence, get access to our systems, steal data, steal money, or spread malware. Spoofing attacks come in many forms, including: - Email spoofing - Website and/or URL spoofing - Caller ID spoofing - Text message spoofing - GPS spoofing - Man-in-the-middle attacks - Extension spoofing - IP spoofing - Facial spoofing So how do the cybercriminals fool us? Often times, merely invoking the name of a big, trusted organization is enough to get us to give up information or take some kind of action. For example, a spoofed email from PayPal or Amazon might inquire about purchases you never made. Concerned about your account, you might be motivated to click the included link. From that malicious link, scammers will send you to a web page with a malware download or a faked login page—complete with a familiar logo and spoofed URL—for the purpose of harvesting your username and password. There are many more ways a spoofing attack can play out. In all of them, fraudsters rely on victims falling for the fake. If you never doubt the legitimacy of a website and never suspect an email of being faked, then you could become a victim of a spoofing attack at some point. To that end, this page is all about spoofing. We'll educate you on the types of spoofs, how spoofing works, how to discern legitimate emails and websites from fake ones, and how to avoid becoming a target for fraudsters. “Spoofing, as it pertains to cybersecurity, is when someone or something pretends to be something else in an attempt to gain our confidence, get access to our systems, steal data, steal money, or spread malware.” Types of spoofing Email spoofing is the act of sending emails with false sender addresses, usually as part of a phishing attack designed to steal your information, infect your computer with malware or just ask for money. Typical payloads for malicious emails include ransomware, adware, cryptojackers, Trojans (like Emotet), or malware that enslaves your computer in a botnet (see DDoS). But a spoofed email address isn't always enough to fool the average person. Imagine getting a phishing email with what looks like a Facebook address in the sender field, but the body of the email is written in basic text, no design or HTML to speak of—not even a logo. That's not something we're accustomed to receiving from Facebook, and it should raise some red flags. Accordingly, phishing emails will typically include a combination of deceptive features: - False sender address designed to look like it's from someone you know and trust—possibly a friend, coworker, family member, or company you do business with. - In the case of a company or organization, the email may include familiar branding; e.g. logo, colors, font, call to action button, etc. - Spear phishing attacks target an individual or small group within a company and will include personalized language and address the recipient by name. - Typos—lots of them. Try as they might to fool us, email scammers often don't spend much time proofreading their own work. Email spoofs often have typos, or they look like someone translated the text through Google Translate. Be wary of unusual sentence constructions; companies like Facebook or PayPal are unlikely to make such errors in their emails to customers. Email spoofing plays a critical role in sextortion scams. 
These scams trick us into thinking our webcams have been hijacked with spyware and used to record us watching porn. These spoofed emails will say something like "I've been watching you watch porn," which is an incredibly weird thing to say. Who's the real creep in this scenario? The scammers then demand some amount of Bitcoin or other cryptocurrency or else they will send the video to all your contacts. To create the impression of legitimacy the emails may also include an outdated password from some previous data breach. The spoof comes into play when the scammers disguise the email sender field to look as if it's being sent from your supposedly breached email account. Rest assured, chances are no one is actually watching you. Website spoofing is all about making a malicious website look like a legitimate one. The spoofed site will look like the login page for a website you frequent—down to the branding, user interface, and even a spoofed domain name that looks the same at first glance. Cybercriminals use spoofed websites to capture your username and password (aka login spoofing) or drop malware onto your computer (a drive-by download). A spoofed website will generally be used in conjunction with an email spoof, in which the email will link to the website. It's also worth noting that a spoofed website isn't the same as a hacked website. In the case of a website hacking, the real website has been compromised and taken over by cybercriminals—no spoofing or faking involved. Likewise, malvertising is its own brand of malware. In this case, cybercriminals have taken advantage of legitimate advertising channels to display malicious ads on trusted websites. These ads secretly load malware onto the victim's computer. Caller ID spoofing Caller ID spoofing happens when scammers fool your caller ID by making the call appear to be coming from somewhere it isn't. Scammers have learned that you're more likely to answer the phone if the caller ID shows an area code the same or near your own. In some cases, scammers will even spoof the first few digits of your phone number in addition to the area code to create the impression that the call is originating from your neighborhood (aka neighbor spoofing). Text message spoofing Text message spoofing or SMS spoofing is sending a text message with someone else's phone number or sender ID. If you've ever sent a text message from your laptop, you've spoofed your own phone number in order to send the text, because the text did not actually originate from your phone. Companies frequently spoof their own numbers, for the purposes of marketing and convenience to the consumer, by replacing the long number with a short and easy to remember alphanumeric sender ID. Scammers do the same thing—hide their true identity behind an alphanumeric sender ID, often posing as a legitimate company or organization. The spoofed texts will often include links to SMS phishing sites (smishing) or malware downloads. Text message scammers can take advantage of the job market by posing as staffing agencies, sending victims to-good-to-be-true job offers. In one example, a work from home position at Amazon included a "Brand new Toyota Corrola." First of all, why does one need a company car if they're working from home? Second, is a Toyota "Corrola" a generic version of the Toyota Corolla? Nice try, scammers. GPS spoofing occurs when you trick your device's GPS into thinking you're in one location, when you're actually in another location. Why on Earth would anyone want to GPS spoof? 
Two words: Pokémon GO. Using GPS spoofing, Pokémon GO cheaters are able to make the popular mobile game think they're in proximity to an in-game gym and take over that gym (winning in-game currency). In fact, the cheaters are actually in a completely different location—or country. Similarly, videos can be found on YouTube showing Pokémon GO players catching various Pokémon without ever leaving their house. While GPS spoofing may seem like child's play, it's not difficult to imagine that threat actors could use the trick for more nefarious acts than gaining mobile game currency. Man-in-the-Middle (MitM) attack Man-in-the-Middle (MitM) attacks can happen when you use free Wi-Fi at your local coffee shop. Have you considered what would happen if a cybercriminal hacked the Wi-Fi or created another fraudulent Wi-Fi network in the same location? In either case, you have a perfect setup for a man-in-the-middle attack, so named because cybercriminals are able to intercept web traffic between two parties. The spoof comes into play when the criminals alter the communication between the parties to reroute funds or solicit sensitive personal information like credit card numbers or logins. Side note: While MitM attacks usually intercept data in the Wi-Fi network, another form of MitM attack intercepts the data in the browser. This is called a man in the browser (MitB) attack. Extension spoofing occurs when cybercriminals need to disguise executable malware files. One common extension spoofing trick criminals like to use is to name the file something along the lines of "filename.txt.exe." The criminals know file extensions are hidden by default in Windows so to the average Windows user this executable file will appear as "filename.txt." IP spoofing is used when someone wants to hide or disguise the location from which they're sending or requesting data online. As it applies to cyberthreats, IP address spoofing is used in distributed denial of service (DDoS) attacks to prevent malicious traffic from being filtered out and to hide the attacker's location. Facial spoofing might be the most personal, because of the implications it carries for the future of technology and our personal lives. As it stands, facial ID technology is fairly limited. We use our faces to unlock our mobile devices and laptops, and not much else. Soon enough though, we might find ourselves making payments and signing documents with our faces. Imagine the ramifications when you can open up a line of credit with your face. Scary stuff. Researchers have demonstrated how 3D facial models built from your pictures on social media can already be used to hack into a device locked via facial ID. Taking things a step further, Malwarebytes Labs reported on deepfake technology being used to create fake news videos and fake sex tapes, featuring the voices and likenesses of politicians and celebrities, respectively. How does spoofing work? We've explored the various forms of spoofing and glossed over the mechanics of each. In the case of email spoofing, however, there's a bit more worth going over. There are a few ways cybercriminals are able to hide their true identity in an email spoof. The most foolproof option is to hack an unsecure mail server. In this case the email is, from a technical standpoint, coming from the purported sender. The low-tech option is to simply put whatever address in the "From" field. 
The only problem is if the victim replies or the email cannot be sent for some reason, the response will go to whoever is listed in the "From" field—not the attacker. This technique is commonly used by spammers to use legitimate emails to get past spam filters. If you've ever received responses to emails you've never sent this is one possible reason why, other than your email account being hacked. This is called backscatter or collateral spam. Another common way attackers spoof emails is by registering a domain name similar to the one they're trying to spoof in what's called a homograph attack or visual spoofing. For example, "rna1warebytes.com". Note the use of the number "1" instead of the letter "l". Also note the use of the letters "r" and "n" used to fake the letter "m". This has the added benefit of giving the attacker a domain they can use for a creating a spoofed website. Whatever the spoof may be, it's not always enough to just throw a fake website or email out into the world and hope for the best. Successful spoofing requires a combination of the spoof itself and social engineering. Social engineering refers to the methods cybercriminals use to trick us into giving up personal information, clicking a malicious link, or opening a malware-laden attachment. There are many plays in the social engineering playbook. Cybercriminals are counting on the vulnerabilities we all carry as human beings, such as fear, naiveté, greed, and vanity, to convince us to do something we really shouldn't be doing. In the case of a sextortion scam, for instance, you might send the scammer Bitcoin because you fear your proverbial dirty laundry being aired out for everyone to see. Human vulnerabilities aren't always bad either. Curiosity and empathy are generally good qualities to have, but criminals love to target people who exhibit them. Case in point, the stranded grandchildren scam, in which a loved one is allegedly in jail or in the hospital in a foreign country and needs money fast. An email or text might read, "Grandpa Joe, I've been arrested for smuggling drugs in [insert name of country]. Please send funds, oh and btw, don't tell mom and dad. You're the best [three happy face winking emojis]!" Here the scammers are counting on the grandparent's general lack of knowledge about where his grandson is at any given time. “Successful spoofing requires a combination of the spoof itself and social engineering. Social engineering refers to the methods cybercriminals use to trick us into giving up personal information, clicking a malicious link, or opening a malware-laden attachment.” How do I detect spoofing? Here are the signs you're being spoofed. If you see these indicators, hit delete, click the back button, close out your browser, do not pass go. - No lock symbol or green bar. All secure, reputable websites need to have an SSL certificate, which means a third-party certification authority has verified that the web address actually belongs to the organization being verified. One thing to keep in mind, SSL certificates are now free and easy to obtain. While a site may have a padlock, that doesn't mean it's the real deal. Just remember, nothing is 100 percent safe on the Internet. - The website is not using file encryption. HTTP, or Hypertext Transfer Protocol, is as old as the Internet and it refers to the rules used when sharing files across the web. Legitimate websites will almost always use HTTPS, the encrypted version of HTTP, when transferring data back and forth. 
If you're on a login page and you see "http" as opposed to "https" in your browser's address bar, you should be suspicious. - Use a password manager. A password manager like 1Password will autofill your login credentials for any legitimate website you save in your password vault. However, if you navigate to a spoofed website your password manager will not recognize the site and not fill in the username and password fields for you—a good sign you're being spoofed. - Doublecheck the sender's address. As mentioned, scammers will register fake domains that look very similar to legitimate ones. - Google the contents of the email. A quick search might be able to show you if a known phishing email is making its way around the web. - Embedded links have unusual URLs. Check URLs before clicking by hovering over them with your cursor. - Typos, bad grammar, and unusual syntax. Scammers often don't proofread their work. - The contents of the email are too good to be true. - There are attachments. Be wary of attachments—particularly when coming from an unknown sender. Caller ID spoofing - Caller ID is easily spoofed. It's a sad state of affairs when our landlines have become a hotbed of scam calls. It's especially troubling when you consider that the majority of people who still have landlines are the elderly—the group most susceptible to scam calls. Let calls to the landline from unknown callers go to voicemail or the answering machine. How can I protect against spoofing? First and foremost, you should learn how to spot a spoofing attack. In case you skipped over the "How do I detect spoofing?" section you should go back and read it now. Turn on your spam filter. This will stop the majority of spoofed emails from ever making it to your inbox. Don't click on links or open attachments in emails if the email is coming from an unknown sender. If there's a chance the email is legitimate, contact the sender through some other channel and confirm the contents of the email. Log in through a separate tab or window. If you get a suspicious email or text message, requesting that you log in to your account and take some kind of action, e.g., verify your information, don't click the provided link. Instead, open another tab or window and navigate to the site directly. Alternatively, log in through the dedicated app on your phone or tablet. Pick up the phone. If you've received a suspicious email, supposedly from someone you know, don't be afraid to call or text the sender and confirm that they, indeed, sent the email. This advice is especially true if the sender makes an out-of-character request like, "Hey, will you please buy 100 iTunes gift cards and email me the card numbers? Thanks, Your Boss." Show file extensions in Windows. Windows does not show file extensions by default, but you can change that setting by clicking the "View" tab in File Explorer, then checking the box to show file extensions. While this won't stop cybercriminals from spoofing file extensions, at least you'll be able to see the spoofed extensions and avoid opening those malicious files. Invest in a good antivirus program. In the event that you click on a bad link or attachment, don't worry, a good antivirus program will be able to alert you to the threat, stop the download and prevent malware from getting a foothold on your system or network. Malwarebytes, for example, has antivirus/anti-malware products that you can try free before subscribing. 
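As a toy illustration of the "doublecheck the sender's address" advice, the sketch below flags domains that only look like a trusted one after common character swaps; the trusted domain and the tiny substitution table are assumptions for the example, not a complete list of confusable characters.

    # A few visual substitutions attackers use in lookalike (homograph-style) domains.
    CONFUSABLES = {"0": "o", "1": "l", "rn": "m", "vv": "w"}

    def normalize(domain: str) -> str:
        d = domain.lower()
        for fake, real in CONFUSABLES.items():
            d = d.replace(fake, real)
        return d

    def looks_spoofed(sender_domain: str, trusted_domain: str) -> bool:
        # Different as typed, but identical once lookalike characters are folded back,
        # e.g. "rna1warebytes.com" versus "malwarebytes.com".
        return (sender_domain.lower() != trusted_domain.lower()
                and normalize(sender_domain) == normalize(trusted_domain))

    print(looks_spoofed("rna1warebytes.com", "malwarebytes.com"))  # True
    print(looks_spoofed("malwarebytes.com", "malwarebytes.com"))   # False

Production mail filters use far larger confusables tables and edit-distance checks, but the principle is the same: compare what you see against what you trust.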
News on spoofing - Scammers are spoofing bank phone numbers to rob victims - Phishers spoof reliable cybersecurity training company to garner clicks - Spoofed addresses and anonymous sending: new Gmail bugs make for easy pickings - When three isn't a crowd: Man-in-the-Middle (MitM) attacks explained - Lesser known tricks of spoofing extensions For more reading about spoofing and all the latest news on cyberthreats, visit the Malwarebytes Labs blog. History of spoofing There's nothing new about spoofing. In fact, the word "spoof" as a form of trickery goes back over a century. According to the Merriam-Webster online dictionary, the word "spoof" is attributed to the 19th-century English comedian Arthur Roberts, in reference to a game of trickery and deception of Roberts' own creation. The rules of the game have been lost to time. We can only guess the game wasn't very fun, or the Brits of the time didn't like being goofed on. Whatever the case may be, the name stuck, though the game didn't. It wasn't until the early 20th century that "spoof" became synonymous with parody. For several decades, whenever someone mentioned "spoof" or "spoofing" it was in reference to something funny and positive—like the latest film spoof from Mel Brooks or a comedy album from "Weird Al" Yankovic. Today, spoofing is most often used when talking about cybercrime. Whenever a scammer or cyberthreat pretends to be someone or something they're not, it's spoofing.
<urn:uuid:f67f2efe-6070-45f0-bc75-1fd27bbca288>
CC-MAIN-2022-40
https://www.malwarebytes.com/spoofing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00664.warc.gz
en
0.934275
4,107
3.640625
4
By: Microtek Learning Jun. 22, 2021 Last Updated On: Apr. 01, 2022 These days organizations are increasingly focused on the adoption of cloud services, and that focus will only continue in the coming years. According to an IDC report, public cloud adoption will continue to rise, and the public cloud market has reached $273.38 billion. As a result, the demand for professionals with cloud computing skills has increased. The most in-demand cloud platforms are Amazon (Amazon Web Services, AWS), Microsoft Azure, and Google. Research suggests the cloud computing market will grow about 35% in the coming years, and the pandemic's shift to distance learning and remote work has created even more demand for cloud services. Cloud computing is a remote service used to store, manage, and process data in the cloud; users can access their data and applications over the network anytime and anywhere. Cloud computing services fall into three broad categories, and there are some common assumptions about learning cloud computing. Cloud security is the most in-demand skill for cloud computing roles. IT security is a top priority in most organizations, and because cloud computing outsources the storage and retrieval of often-sensitive business data, it requires a heavy focus on security and trust. A minor security breach has the potential to disclose customer data, expose valuable intellectual property to theft, and permanently damage a company's reputation. In general, the demand for cybersecurity is massive and growing every day, particularly in the cloud domain. Machine Learning (ML) and Artificial Intelligence (AI) Machine learning and artificial intelligence are additional cloud computing skills recommended for growth in a cloud career. Cloud vendors offer tools and services that provide access to cloud-based AI and machine learning applications, so familiarity with them has become a vital cloud computing skill. Cloud computing can deliver the computing power and infrastructure demanded by organizations of any size that want to experiment with AI and machine learning. Cloud Deployment and Migration Across Multiple Platforms Organizations need professionals who can deploy and migrate workloads from existing IT systems to a cloud platform, or from one cloud platform to another. It isn't simple; it takes advanced cloud computing skills to protect the integrity and security of data. If you have AWS, Azure, and Google skills, you'll have more career possibilities and more value within an organization. DevOps comes from the term "development operations". It is a favoured software development method that takes the entire software lifecycle into account, from planning to maintenance. This method allows organizations to automate specific updates and get updates out much faster and more efficiently. It is the most in-demand development process in cloud computing, so adding this skill is a smart move. Other Cloud Computing Skills We have covered the most important and in-demand cloud computing skills to pursue, but everyone's needs, career path, and existing skill set are different, so other skills are also crucial in the world of cloud computing. Get these hot cloud computing skills and validate them with a cloud computing certification. Check out our Cloud Computing Certification Training and get ahead for a bright future!
<urn:uuid:20a4eefc-6e35-4f26-9eb1-a40c13b3256f>
CC-MAIN-2022-40
https://www.microteklearning.com/blog/top-cloud-computing-skills/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00664.warc.gz
en
0.929345
632
2.625
3
Replacing a centralized electrical distribution system with a distributed power architecture sets up a facility to handle unexpected demand spikes. The Covid-19 pandemic has radically changed the way people are using the digital infrastructure. The stay-at-home advisories and social distancing measures have increased the demand for a reliable internet speed. Whole industries have moved vast numbers of employees to remote work. Internet providers are now improving network services that allow operations to continue with minimal disruption. According to an IEA report, between February and mid-April 2020, global internet traffic surged by 40 percent. Microsoft Teams saw a 70 percent increase in use for March alone. As the old year fades away, one thing is clear: getting back to the old normal is no longer possible. It has been replaced by a new normal. The cloud is considered essential and digital tools like video conferencing and virtual services like telehealth critical. Mobile internet and social media have empowered people to become creators of data. Today, nearly 500 million photos are uploaded on Facebook and Instagram and roughly 500 thousand hours of video is uploaded to YouTube daily. More videos are uploaded to YouTube in one month than the three major US networks created in over 60 years. These figures give a view of the incredible amount of data that users generate on a regular basis. Transferring all the data generated at the edge to the central cloud, processing and analyzing it on servers, and then transporting it back to edge devices is not feasible. Centralized cloud computing has two significant limitations in meeting the demands of a connected world: bandwidth and latency. Using a central cloud bandwidth will be the choke point for the growth of IoT. Even if the network capacity increases to cope with the data, remote processing of data in the central cloud is inhibited due to latencies in the long-haul transmission of data. It is clear the need for a new computing model to cope with the hyper-connected world. The impact of this digital transformation gives a broader set of challenges for data center operations. - How can data centers maintain resiliency, efficiency, and reliability amid the ever-growing demand for cloud-based, business-critical applications? - How can data centers ensure their facilities are well prepared to scale operations in response to rapidly changing and continuously increasing capacity demands? Facing these challenges requires that data center operators address the critical role power architectures play as enabling or limiting factors for expanding data center capacity. Maximum Capacity: The Limitations of Centralized Power Architecture There are a few main elements that typically limit data center expansion: - physical space - power system capacity - cooling capacity. A more in-depth look reveals that whether or not power acts as a constraint depends on the type of distribution architecture a data center employs. For most data centers, a centralized power architecture is the driving force of the facility. In this situation, a centralized uninterruptible power supply (UPS) system brings power from the AC utility grid to DC. The power is then stepped down during a series of conversions to provide the computing equipment’s power. In the event of a significant outage like utility failure, backup power is distributed from battery banks at the central location to critical equipment across the data center. 
Most data centers utilize such centralized power infrastructures, but they are not reliable. When it comes to scaling to cope with increased processing needs and capabilities, there may not be enough space to add more electrical configuration in the power room. Centralized UPS power and backup capacity support a data center’s added load based on predictions made at the first facility design stages. However, because computing and power equipment are not integrated all at once – especially in collocation data centers – capacity limits may not be as expected. This also presents problems as data centers shift to higher-capacity and higher-density networking equipment to meet the increasing computing needs. With that in mind, data centers that depend solely on a centralized UPS architecture are in danger of exhausting their power capacity. They may be limiting their ability to scale rapidly to meet future demands. Beyond Capacity: The Benefits of a Decentralized Power Architecture How can data centers overcome power capacity limitations? Making sure that their infrastructure can respond to the increasing demand while still maintaining maximum uptime and resiliency? The solution is adopting a decentralized DC power architecture. Compared to a centralized UPS distribution, backup power in a decentralized system is spread evenly throughout the facility. Designed to operate near the critical load equipment. By clustering together loads and placing battery reserves close to the physical computing equipment, decentralized systems minimize potential power outages. This Improves the reliability of power amid increasing demand without compromising the entire site. Instead of connecting numerous AC-to-DC conversions, distributed power places smaller batteries and rectifiers directly inside cabinets. Reducing the number of power conversions required to step down utility power to the servers’ needs improves efficiency, reliability, and scalability. The most critical feature of a decentralized UPS architecture is it allows data centers to meet increased demand by distributing equipment-specific resources that can cope according to needs. New cabinets can expand capacity as necessary, while additional rectifiers and battery modules can increase power for servers added to open racks. By using DC power components that connect directly to the AC utility feed, a decentralized power structure allows facilities to optimize stranded white space and maximize infrastructure without placing any additional burden on their existing UPS system. A distributed DC power structure allows data center operators to add power and load concurrently. Unlike a centralized UPS, which limits how much and where a data center can grow, a decentralized power architecture is designed to scale. This is what data center operators need in a world of rapid, unexpected spikes in customer demand. Preparing for the New Normal With emerging technology, cloud computing becomes a new paradigm for the dynamic provisioning of various enterprises. It provides a way to deliver the infrastructure and software as a service to consumers in a pay-as-you-go method. Such typical commercial service providers include Amazon, Google, and Microsoft. In cloud computing environments, large-scale data centers contain essential computing infrastructure. While it is unclear how long the COVID-19 pandemic will last, it’s clear that there is nothing less than a paradigm shift in how populations worldwide live, work, collaborate, and communicate. 
The digital transformation of the past eight months has only served to jump-start a broader set of long-term trends already underway, including advancements in AI, automation, and data analytics, as well as the evolution of 5G, IoT, and smart cities. Data centers face long-term challenges with a new sense of urgency. Now more than ever, data center operators need to redesign the power architectures at the core of their facilities in the face of what is sure to be increasing demand for capacity. Introducing a distributed power architecture is an essential step towards developing data center infrastructure with the flexibility to adapt to whatever comes next. To fully utilize the underlying cloud resources, the provider has to ensure that services are delivered to meet consumer demands. These demands are usually specified by service level agreements (SLAs), while consumers are kept isolated from the underlying physical infrastructure. The challenge is to balance power and resource allocation and make appropriate decisions, since the workloads of different applications and services fluctuate considerably over time. Shifting toward a decentralized cloud therefore gives service providers an opportunity to address these performance concerns. The pandemic has accelerated the need to turn tens of billions of connected devices from a challenge into an opportunity by unlocking the computing power available at the edge. A practical solution is to build a fully decentralized architecture in which every computing device is a cloud server. Edge devices process data locally, communicate with other devices directly, and share resources with other devices to remove the stress on central cloud computing resources. This architecture is faster, more efficient, and more scalable. A decentralized architecture is also more private, since it minimizes central trust entities, and more cost-efficient, since it leverages unused computing resources at the edge. A decentralized resource management approach can likewise be applied to data centers that use virtual machines to host many third-party applications. Such a decentralized migration approach considers both load balancing and energy savings, for example by turning underutilized nodes into a sleeping state; a simplified sketch of this decision logic follows below. Performance evaluations in simulation suggest that approaches of this kind can achieve a better load-balancing effect and lower power consumption than other strategies.
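Here is that simplified sketch, with made-up utilization numbers and thresholds; it is only meant to illustrate the migrate-then-sleep idea, not any particular published approach.

    # CPU load per node as a fraction of capacity (illustrative values).
    nodes = {"n1": 0.10, "n2": 0.35, "n3": 0.80, "n4": 0.05}

    LOW, HIGH = 0.20, 0.90     # consolidation thresholds, chosen for the example

    def consolidate(nodes):
        migrations, sleeping = [], []
        donors = [n for n, load in nodes.items() if load < LOW]
        for donor in donors:
            load = nodes[donor]
            # Find a receiver that can absorb the load without crossing HIGH.
            for receiver, r_load in nodes.items():
                if receiver != donor and receiver not in sleeping and r_load + load <= HIGH:
                    nodes[receiver] = r_load + load
                    nodes[donor] = 0.0
                    migrations.append((donor, receiver, load))
                    sleeping.append(donor)   # the emptied node can enter a sleep state
                    break
        return migrations, sleeping

    migrations, sleeping = consolidate(nodes)
    print("Migrations:", migrations)
    print("Nodes put to sleep:", sleeping)

A real controller would also watch for overloaded nodes, hysteresis, and SLA limits before moving anything, but even this toy version shows how load balancing and energy saving are two sides of the same placement decision.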
<urn:uuid:da104543-dcb5-4420-bdd3-56fc123ea9bd>
CC-MAIN-2022-40
https://www.akcp.com/blog/decentralizing-data-center-power/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00064.warc.gz
en
0.92406
1,748
2.71875
3
The ecosystem for digital distribution is complex, with unique nuances for each category of content. In a broad sense, the ecosystem includes the creation of content, the availability of services that deliver content and services to consumers, and the deployment of technologies that enable delivery to connected consumer electronics platforms that are capable of receiving and displaying content. Total sales of video- and audio-connected consumer electronics products worldwide will be more than 292 million units in 2012 and will grow to more than 760 million units by 2016. The growing number of connected devices in broadband households heightens the need for dedicated delivery solutions provided by content delivery networks (CDNs). Providers build CDNs with a huge number of connected servers whose main purpose is to bring content closer to end users, thereby reducing latency and packet loss. Most CDNs are also capable of supporting huge demand peaks, such as those generated by events streamed live online. The growth in online traffic that accompanies an ever-rising number of connected devices makes the role of CDNs more important than ever. In the early stages of the Internet, most of the online content was static and could be easily duplicated and cached in multiple CDN servers for better access by end users. Today, simple content caching has rapidly become a commodity, with competition for this “basic” service based mostly on price and geographical reach. Furthermore, dynamic content is a much larger part of Internet traffic. Even content that appears to be the same for all users has dynamic additions, including advanced advertising that changes based on time, location of use, or known user attributes. To differentiate their services, and to build sources for new revenues, CDN providers are offering a growing number of value-added services (VAS) that help manage such complexity of content and create new service options for content providers. CDNs are poised to earn an increasing share of their revenues from value-added services such as: - Content ingestion and management - Encoding and transcoding - Multiscreen delivery support (particularly for mobile platforms) - Ad insertion and analytics - Creation and management of distribution rules (e.g., allowing content access only to certain users or in certain times of the day) A few CDNs have adopted advanced solutions called “transparent caching.” These CDNs cache content based on its popularity: The most popular content is always closer to end users, while the least popular content is cached farther away in the network. The allocation of content changes continuously depending on content popularity and does not require manual intervention from content distributors. There are two main types of CDNs, based on the type of entity that created and controls the network. Telco and third-party CDNs both offer unique advantages to content distribution. Traditional third-party CDNs are essentially application service providers that use the Internet to connect their distributed servers. Third-party CDNs dedicate a significant portion of their operational expense to ongoing software development in order to keep pace with the introduction of new technologies that emerge in the consumer market. Global coverage, achieved through extensive connectivity with ISPs around the globe, is one of the strongest benefits of third-party CDNs. 
Third-party CDNs have distinct advantages: - Existing relationships with content providers - Global coverage - Ongoing software development and solutions that have been tested and improved over time - Connectivity among multiple ISPs - Low investment required for operators to use a third-party CDN - Ability to interoperate with a variety of other networks For example, Akamai has the EdgePlatform that allows 1080p video streaming to different platforms, including Adobe Flash, Microsoft Silverlight, iPhones and iPads. Akamai has 95,000 servers in 1,900 networks across 71 countries. Telco CDNs use their own physical fiber networks to connect their points of presence around the world. Several telcos have undertaken internal CDN development initiatives to protect themselves from both cost and competitive standpoints and to benefit from the growth in digital content traffic. Major broadband providers such as BT, Telia Sonera, and Verizon have taken the aggressive approach of investing in complete in-house CDN solutions. Telcos have to invest considerable money and effort to build strong international capabilities. Thus, telco CDNs are newer systems that are less “road-tested” than the more mature third-party CDNs. However, Telco CDNs have their own inherent advantages: - Highly customized solutions - Greater proximity to end users within the telco footprint - Optimized for the operator’s network - Cost advantages for the operator - Stronger telco control over content going through the network AT&T offers branded AT&T Digital Media Solutions, an end-to-end solution that claims to cover every step of digital media distribution (content ingestion, management, marketing, operations and distribution). This CDN also includes a multiscreen solution and 38 Internet data centers worldwide. Companies behind both types of CDNs will increasingly leverage value-added services in order to earn larger shares of revenues. At the same time, demand for their services will increase as consumers transition from physical to digital content and adopt streaming and cloud-based services, which users will naturally expect to work with their multiple connected devices. These circumstances will lead to a more direct relationship between content producers and content consumers. CDNs, by virtue of their role in enabling dynamic and ostensibly seamless distribution of media content over a wide population of users, will have a significant role in the digital lifestyle ecosystem — and those that can fortify and expand the relationship with the consumer will be the most successful.
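A hypothetical sketch of the popularity-driven ("transparent") caching idea described above: the edge keeps only the most requested objects and quietly fetches everything else from farther upstream. The capacity and the origin fetch function are stand-ins for the example.

    from collections import Counter

    class EdgeCache:
        """Keep the most popular objects at the edge; evict the least popular."""

        def __init__(self, capacity, fetch_from_origin):
            self.capacity = capacity
            self.fetch = fetch_from_origin   # callable: key -> content
            self.store = {}                  # objects currently held at the edge
            self.hits = Counter()            # request counts drive placement

        def get(self, key):
            self.hits[key] += 1
            if key in self.store:            # served close to the end user
                return self.store[key]
            content = self.fetch(key)        # served from a farther tier this time
            if len(self.store) < self.capacity:
                self.store[key] = content
            else:
                # Replace the least popular cached object if this one is hotter.
                coldest = min(self.store, key=lambda k: self.hits[k])
                if self.hits[coldest] < self.hits[key]:
                    del self.store[coldest]
                    self.store[key] = content
            return content

    cache = EdgeCache(capacity=2, fetch_from_origin=lambda k: f"<video {k}>")
    for request in ["a", "b", "a", "c", "a", "c", "c"]:
        cache.get(request)
    print(sorted(cache.store))               # the two most requested objects: ['a', 'c']

Real transparent caches work from continuously updated popularity estimates across many servers, but the placement rule is the same: keep the hottest content closest to end users.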
<urn:uuid:f9015c02-199f-49e6-8a75-61c8694d5e8b>
CC-MAIN-2022-40
https://www.crmbuyer.com/story/cdns-new-channels-new-demands-new-opportunities-74697.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00064.warc.gz
en
0.940981
1,162
2.515625
3
What really is Ransomware? We’re writing this post specifically for the people who have absolutely no idea what ransomware is, or those who have heard the buzz word but don’t really know what it means. It’s easy to read the word, and assume a definition, but can you really sit there and say you know EXACTLY what ransomware is? Don’t fret. We will explain in a practical, not technical language that you can easily understand, we will also help you protect against and prevent this type of attack. What is ransomware? Ransomware is one of the most damaging threats that internet users face. Ever since the infamous CryptoLocker first appeared in 2013, there has been a new era of file-encrypting ransomware delivered through malicious emails or websites, phishing attacks and exploit kits aimed at collecting money from home users and businesses alike. By definition, ransomware is ‘a type of malicious software designed to block access to a computer system until a sum of money is paid.’ Pretty simple, right? Once the files are encrypted, the user will be unable to open or use the files, essentially you will be completely blocked from accessing them anymore. That is until the hacker demands a ransom be paid in exchange for the encryption key. On average, it will likely cost you $300 for this key. Yikes. Nine best security practices to apply now Thanks to Sophos, there’s hope to remain secure and prevent a ransomware attack. Staying secure against ransomware isn’t just about having the latest security solutions. Good IT security practices, including regular training for employees, are essential components of every single security setup. Make sure you’re following these nine best practices: 1. Backup regularly and keep a recent backup copy off-line and off-site There are dozens of ways other than ransomware that files can suddenly vanish, such as fire, flood, theft, a dropped laptop or even an accidental delete. Encrypt your backup and you won’t have to worry about the backup device falling into the wrong hands. 4. Don’t enable macros in document attachments received via email Microsoft deliberately turned off auto-execution of macros by default many years ago as a security measure. A lot of infections rely on persuading you to turn macros back on, so don’t do it! 5. Be cautious about unsolicited attachments The crooks are relying on the dilemma that you shouldn’t open a document until you are sure it’s one you want, but you can’t tell if it’s one you want until you open it. If in doubt leave it out. 6. Don’t give yourself more login power than you need Don’t stay logged in as an administrator any longer than is strictly necessary and avoid browsing, opening documents or other regular work activities while you have administrator rights. 7. Consider installing the Microsoft Office viewers These viewer applications let you see what documents look like without opening them in Word or Excel. In particular, the viewer software doesn’t support macros, so you can’t enable them by mistake! 8. Patch early, patch often Malware that doesn’t come in via a document often relies on security bugs in popular applications, including Microsoft Office, your browser, Flash and more. The sooner you patch, the fewer holes there are to be exploited. 9. 
Stay up-to-date with new security features in your business applications. For example, Office 2016 now includes a control called "Block macros from running in Office files from the internet", which helps protect against external malicious content without stopping you from using macros internally. Connect with Secure Sense to protect data, your network, and systems 24/7, 365 days a year. If you have questions or want to learn more, please contact Secure Sense by calling 866-999-7506. You can find Secure Sense on Facebook, LinkedIn and Twitter. Follow us for current company and industry news.
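To make practice 1 (regular backups) concrete, here is a minimal sketch that writes a timestamped archive of a folder; the paths are placeholders, and encrypting the archive and copying it off-line or off-site are left to whatever tooling you already use.

    import shutil
    from datetime import datetime
    from pathlib import Path

    SOURCE = Path.home() / "Documents"    # placeholder folder to protect
    DEST_DIR = Path.home() / "backups"    # ideally a separate drive or location
    DEST_DIR.mkdir(parents=True, exist_ok=True)

    # One compressed, timestamped archive per run, e.g. documents-20240101-1200.zip
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    archive = shutil.make_archive(str(DEST_DIR / f"documents-{stamp}"), "zip", SOURCE)
    print("Backup written to:", archive)

Schedule something like this with Task Scheduler, cron, or your backup product of choice, and keep at least one recent copy somewhere that ransomware on your PC cannot reach.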
<urn:uuid:5d20edc3-67d2-4331-bdf9-f42ee8179cad>
CC-MAIN-2022-40
https://securesense.ca/what-really-is-ransomware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00064.warc.gz
en
0.926324
930
2.5625
3
I was recently chatting with my father about his life as a young boy growing up in rural Ireland in the middle of the last century, and the conversation moved onto cars and how when he was young cars were a relatively new technology. In the world we know today, road safety is carefully enforced to the point where we take it for granted. But it wasn’t always thus. People simply weren’t aware of the risks. When my father was a boy there were no seatbelts, no airbags, no crumple zones in cars, nor any roll cages to protect passengers. Indeed, there were few comforts such as radios or heaters and, as my father explained, some people kept warm in wintertime by drilling a hole in the floor of the car to let the hot exhaust fumes waft into the cabin. My father’s description of the near-total disregard for road safety in a relatively new mode of transport made me think about cybersecurity and how many of the issues we face today as professionals also relate to the interaction of user behaviour, devices, systems and rules. Inspired by my father’s tales, I searched for more information on the subject and found a fascinating article in the Detroit News, appropriately titled 1900-1930: The years of driving dangerously. “In the first decade of the 20th century there were no stop signs, warning signs, traffic lights, traffic cops, driver’s education, lane lines, street lighting, brake lights, driver’s licenses or posted speed limits. Our current method of making a left turn was not known, and drinking-and-driving was not considered a serious crime. There was little understanding of speed,” the article explained. Road safety was far from uniform. Connecticut was the first State in the USA to introduce traffic laws in 1901. When Britain introduced the Highway Code in 1931, there were just 2.3 million motor vehicles, yet road accidents claimed more than 7,000 lives each year. Today, there are 27 million vehicles on the road, yet the number of road deaths is half of that in 1931. Today, if you want start building cars, you must meet very stringent safety standards and certification schemes before the car is deemed roadworthy. For the safety of drivers and passengers, cars have to undergo numerous crash tests before they get to the road. Even relatively simple safety innovations like headrests to reduce the impact of spinal injuries in crashes only became legally required in all passenger cars in 1969 – almost half a century after their design was first patented! These changes weren’t made by the automakers voluntarily, but because insurance companies and governments began to intervene. Car safety evolved over decades, and it occurs to me that this change points the way for what needs to happen in cybersecurity. The volume and scale of recent data breaches and cyberattacks all suggest we’ve come to the same fork in the road that the auto industry reached in the mid-1900s. We’re facing serious security challenges with the Internet of Things, and the parallels with road safety are even more striking. The number of connected devices offered in the market rises inexorably. The low cost of manufacturing (and consequently lower profit margins) often relegates good security to an afterthought. An Internet-enabled camera must adhere to very stringent electrical and safety regulations, but there’s no similar certification schemes for the software controlling it. 
Makers must make sure that the device doesn’t blow up or electrocute the user, but they still aren’t forced to ensure an attacker can’t access the camera remotely and invade the user’s privacy. Some technologists argue that excessive regulation stifles innovation and development. But the prospect of many rules didn’t deter Google and Apple from getting into the autonomous car business. I would argue that regulation ensures that whatever innovation happens, the consumer is protected (instead of just the manufacturer). We need to foster the same environment in cybersecurity that exists in road safety today. The driving environment is as safe as it is because earlier generations paid a huge cost: not just the car manufacturers and insurance companies but the families devastated by injury, tragedy, and loss of life. Roads have traffic lights, speed limits, overtaking lanes and guard rails. Cars have an array of safety features like airbags, crumple zones, anti-lock braking, seatbelts, parking sensors and child seats. The IoT industry needs to start building in similar cyber safety features by design. Security awareness training, in this analogy, is just the rules of the road – although it’s very important, it’s not enough by itself. There is no good reason why one individual who mistakenly clicks on a malicious link should put an entire organization at risk. The security industry urgently needs to encourage stakeholders to work together on a standardised set of rules for drivers (the users), but also a secure infrastructure that people can trust, along with devices and systems that protect them from their own mistakes. By all means, let’s teach people to drive carefully, but let’s also work to develop better engineered vehicles and infrastructure to let them reach their destinations safely.
A log is a computer-generated file that captures activity within the operating system or software applications. The log file automatically documents any information designated by the system administrators, including: messages, error reports, file requests and file transfers. The activity is also timestamped, which helps IT professionals and developers understand what occurred as well as when it happened.

What is Log Management?
Log management is the practice of continuously gathering, storing, processing, synthesizing and analyzing data from disparate programs and applications in order to optimize system performance, identify technical issues, better manage resources, strengthen security and improve compliance. Log management usually falls into the following main categories:
- Collection: A log management tool that aggregates data from the OS, applications, servers, users, endpoints or any other relevant source within the organization.
- Monitoring: Log monitoring tools track events and activity, as well as when they occurred.
- Analysis: Log analysis tools that review the log collection from the log server to proactively identify bugs, security threats or other issues.
- Retention: A tool that designates how long log data should be retained within the log file.
- Indexing or Search: A log management tool that helps the IT organization filter, sort, analyze or search data across all logs.
- Reporting: Advanced tooling that automates reporting from the audit log as it relates to operational performance, resource allocation, security or regulatory compliance.

How Log Management Systems Can Help
A Log Management System (LMS) is a software solution that gathers, sorts and stores log data and event logs from a variety of sources in one centralized location. Log management software systems allow IT teams, DevOps and SecOps professionals to establish a single point from which to access all relevant network and application data. Typically, this log file is fully indexed and searchable, which means the IT team can easily access the data they need to make decisions about network health, resource allocation or security.
Log management tools are used to help the organization manage the high volume of log data generated across the enterprise. These tools help determine:
- What data and information needs to be logged
- The format in which it should be logged
- The time period for which the log data should be saved
- How data should be disposed of or destroyed when it is no longer needed

The Importance of Log Management
An effective log management system and strategy enables real-time insights into system health and operations. An effective log management solution provides organizations with:
- Unified data storage through centralized log aggregation
- Improved security through a reduced attack surface, real-time monitoring and improved detection and response times
- Improved observability and visibility across the enterprise through a common event log
- Enhanced customer experience through log data analysis and predictive modeling
- Faster and more precise troubleshooting capabilities through advanced network analytics

What is Centralized Log Management?
Centralized log management is the act of aggregating all log data in a single location and common format. Since data comes from a variety of sources, including the OS, applications, servers and hosts, all inputs must be consolidated and standardized before the organization can generate meaningful insights.
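To make the idea of a common format concrete, here is a minimal sketch (not taken from any vendor's product) of how log lines from different sources might be normalized into one record structure before centralized storage. The file names, regular expression, and field names are illustrative assumptions, not a real schema.

import re
from datetime import datetime, timezone

# Hypothetical pattern for syslog-style lines: "2022-09-01T12:00:00 host app: message"
LINE_RE = re.compile(r"^(?P<ts>\S+)\s+(?P<host>\S+)\s+(?P<app>[^:]+):\s*(?P<msg>.*)$")

def normalize(line, source):
    """Parse one raw log line into a common record; fall back to a raw entry."""
    match = LINE_RE.match(line.strip())
    if match:
        return {
            "timestamp": match.group("ts"),
            "host": match.group("host"),
            "app": match.group("app"),
            "message": match.group("msg"),
            "source": source,
        }
    # Unparseable lines are kept, not dropped, so nothing is lost during collection.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "host": "unknown",
        "app": "unknown",
        "message": line.strip(),
        "source": source,
    }

def collect(files):
    """Aggregate records from several log files into one list (the 'central' store)."""
    records = []
    for path in files:
        with open(path, encoding="utf-8", errors="replace") as handle:
            for line in handle:
                if line.strip():
                    records.append(normalize(line, source=path))
    return records

if __name__ == "__main__":
    # "app.log" and "auth.log" are placeholder file names.
    central_store = collect(["app.log", "auth.log"])
    print(f"collected {len(central_store)} records")

In practice the "central store" would be a database or a dedicated log platform rather than an in-memory list, but the normalization step looks much the same.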
Centralization simplifies the analysis process and increases the speed at which data can be applied throughout the business. Log management vs. SIEM Both Security Information and Event Management (SIEM) and log management software use the log file or event log to improve security by reducing the attack surface, identifying threats and improving response time in the event of a security incident. However, the key difference is that the SIEM system is built with security as its primary function, whereas log management systems can be used more broadly to manage resources, troubleshoot network or application outages and maintain compliance. 4 Common Log Management Challenges An explosion of data, as driven by the proliferation of connected devices, as well as the shift to the cloud, has increased the complexity of log management for many organizations. A modern, effective log management solution must address these core challenges: Because log management draws data from many different applications, systems, tools and hosts, all data must be consolidated into a single system that follows the same format. This log file will help IT and information security professionals effectively analyze log data and produce insights used in order to carry out business critical services. Data is produced at an incredible rate. For many organizations the volume of data continuously generated by applications and systems requires a tremendous amount of effort to effectively gather, format, analyze and store. A log management system must be designed to manage the extreme amount of data and provide timely insights. Indexing within the log file can be a very computationally-expensive activity, causing latency between data entering a system and then being included in search results and visualizations. Latency can increase depending on how and if the log management system indexes data. 4. High IT Burden When done manually, log management is incredibly time consuming and expensive. Digital log management tools help to automate some of these activities and alleviate the strain on IT professionals. 4 Log Management Best Practices Given the massive amount of data being created in today’s digital world, it has become impossible for IT professionals to manually manage and analyze logs across a sprawling tech environment. As such, they require an advanced log management system and tools that automate key aspects of the data collection, formatting and analysis processes. Here are some key considerations IT organizations should consider when investing in a log management system: 1. Prioritize automation tools to reduce the IT burden. Log management is a time-consuming process that could drain resources from the IT organization. Many recurring tasks related to data collection and analysis can be automated using advanced tooling. Organizations should prioritize automation capabilities within any new log management tools and consider updating legacy solutions to reduce manual effort during this process. 2. Use a centralized system for better access and improved security. A centralized log management doesn’t just improve data access-it dramatically strengthens the organization’s security capabilities. Storing and connecting data in a centralized location helps organizations more quickly detect anomalies and respond to them. In this way, a centralized log management system can help reduce breakout time-or the critical window wherein hackers can move laterally to other parts of the system. 3. 
Create a bespoke monitoring and retention policy to better manage volume. Given the volume of data being created, organizations must be discerning as to what information is collected and how long it should be retained. Organizations should perform an enterprise-wide analysis to determine what inputs are critical to each function. 4. Leverage the cloud for added scalability and flexibility. Given the ever-growing data landscape, organizations should consider investing in a modern, cloud-based solution for their log management system. Using the cloud provides enhanced flexibility and scalability, easily allowing the organizations to expand or shrink their processing and storage capacity based on variable needs. Log Management Solutions with CrowdStrike Falcon LogScale CrowdStrike Falcon LogScale is purpose-built for the scale of today’s data volumes and unlocks the ability to log limitlessly without adding complexity. While other solutions continue to limit access to data through pre-determined views or limits set to just samples of data, Falcon LogScale enables users to log everything and answer anything, in real time. Log everything and anything you want with CrowdStrike’s Falcon LogScale
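As a companion to best practice 3 above, the following sketch shows one way a bespoke retention policy could be expressed and applied in code. The per-source retention periods are invented for illustration; real values depend on the organization's compliance and storage requirements.

from datetime import datetime, timedelta, timezone

# Hypothetical retention policy: days to keep records, per log source.
RETENTION_DAYS = {
    "auth": 365,        # security-relevant logs kept longest
    "application": 90,
    "debug": 7,
}
DEFAULT_DAYS = 30

def is_expired(record, now=None):
    """Return True if a record is older than its source's retention window."""
    now = now or datetime.now(timezone.utc)
    keep_days = RETENTION_DAYS.get(record["source"], DEFAULT_DAYS)
    return (now - record["timestamp"]) > timedelta(days=keep_days)

def apply_retention(records):
    """Drop expired records and return the ones still within policy."""
    return [r for r in records if not is_expired(r)]

if __name__ == "__main__":
    sample = [
        {"source": "debug", "timestamp": datetime.now(timezone.utc) - timedelta(days=10)},
        {"source": "auth", "timestamp": datetime.now(timezone.utc) - timedelta(days=10)},
    ]
    kept = apply_retention(sample)
    print(f"kept {len(kept)} of {len(sample)} records")  # the debug record expires, the auth record stays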
Have you replaced a 60W traditional tungsten bulb with a 60W-equivalent low energy compact fluorescent and thought it's not as bright as it was? You're not imagining it. I've been doing some tests of my own, and they're not equivalent.

Comparing light sources is a bit of an art as well as a science, and lacking other equipment, I decided to use a simple photographic exposure meter to give me some idea of the real-world performance. I pointed the meter at a wall, floor and table top. I didn't point it at the light itself – that's not what users of light bulbs care about. The results were fairly consistent: low energy light bulbs produce the same amount of light as a standard bulb of three to four times the rating. The older the fluorescent, the dimmer it was, reaching an output of a third at a thousand hours' use. Given that the lamps are rated at two to eight thousand hours, it's reasonable to take the lower output figure as typical, as this is how the lamp will spend the majority of its working life. This gives a more realistic equivalence table:

Table showing true equivalence of Compact Fluorescent (CFL) vs. conventional light bulbs (GLS)

So what's going on here? Is there a conspiracy amongst light-bulb manufacturers to tell fibs about their performance? Well, yes. It turns out that the figures they use are worked out by the Institute of Lighting Engineers, in a lab. They measured the light output of a frosted lamp and compared that to a CFL. The problem is that the frosting on frosted lamps blocks out quite a bit of light, which is why people generally use clear glass bulbs. But if you're trying to make your product look good it pays to compare your best case with the competition's worst case. So they have.

But all good conspiracies involve the government somewhere, and in this case the manufacturers can justify their methods with support from the EU. The regulations allow the manufacturers to do some pretty wild things. If you want to look at the basis, it can be found starting here: For example, after a compact fluorescent has been turned on it only has to reach an unimpressive 60% of its output after a staggering one minute! I've got some lamps that are good starters, others are terrible – and the EU permits them to be sold without warning or differentiation. One good thing the EU is doing, however, is insisting that CFL manufacturers state the light output in lumens in the future, and more prominently than the power consumption in Watts. This takes effect in 2010. Apparently. Hmm. Not on the packages I can see; some don't even mention it in the small print (notably Philips).

However, fluorescent lamps do save energy, even if it's only 65% instead of the claimed 80%. All other things being equal, they're worth it. Unfortunately the other things are not equal, because you have the lifetime of the unit to consider. A standard fluorescent tube (around since the 1930s) is pretty efficient, especially with modern electronics driving it (ballast and starter). When the tube fails the electronics are retained, as they're built into the fitting. The Compact Fluorescent Lamps (CFL) that replace conventional bulbs have the electronics built into the base so they can be used in existing fittings where a conventional bulb is expected. This means the electronics are discarded when the tube fails. The disposable electronics are made as cheaply as possible, so they may fail before the tube. Proponents of CFLs say that it is still worth it, because the CFLs last so much longer than standard bulbs. I'm not convinced.
A conventional bulb is made of glass, steel, copper and tungsten and should be easy enough to recycle – unlike complex electronics. The story gets worse when you consider what goes into the fluorescent tubes – mercury vapour, antimony, rare-earth elements and all sorts of nasty-looking stuff in the various phosphor coatings. It's true that the amount of mercury in a single tube is relatively small, and doesn't create much of a risk in a domestic environment even if the tube cracks, but what about a large pile of broken tubes in a recycling centre?

So, CFLs are under-specified, polluting and wasteful to manufacture, but they do save energy. It'd be better to change light fittings to use proper fluorescent tubes, however. They work better than CFLs, with less waste. I don't see it happening though. At the moment discrete tubes actually cost more because they fit relatively few fittings. People are very protective of their fittings. The snag is that with CFLs you need at least 50% more bulb sockets to get enough light out of them.

Standard bulbs produce less light than they could because a lot of the energy is turned into heat (more so than with a CFL). However, this heat could be useful – if your light bulbs aren't heating the room you'd need something else. This is particularly true of passageways and so on, where there may be no other heating and a little warmth is needed to keep the damp away. The CFL camp rubbishes this idea, pointing out that in summer you don't need heat. Actually, in summer, you don't need much artificial light either, so they'd be off anyway. Take a look at the document "BNXS05 The Heat Replacement Effect" found starting here for an interesting study into the matter – it's from the government's own researchers.

But still, CFLs save energy. Personally, however, I look forward to the day when they're all replaced by LED technology. These should last ten times longer (100,000 hours), be more efficient still, and contain no mercury, nor even any glass to break. The snag is that they run on a low voltage and the world is wired up for mains-voltage light fittings. I envisage whole light fittings, possibly with built-in transformers, pre-wired with fixed LEDs which will last for 50 years – after which you'd probably change the whole fitting anyway. Ah yes, I hear the moaners starting: "but I want to keep my existing light fitting." Okay, sit in the gloom under your compact fluorescents then.
For the first task, in the summation function, why do you (and apparently the solution) use a count instead of count distinct? There are 23 id's/names repeated in the data set - the repeats all have the same years. Why should a particular object be counted as a discovery more than once?
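For readers unfamiliar with the distinction being asked about, here is a small, hypothetical illustration (not the actual challenge data) of how repeated IDs inflate a plain count compared with a distinct count, using pandas.

import pandas as pd

# Toy data: object 2002 appears twice with the same discovery year.
df = pd.DataFrame({
    "id": [2001, 2002, 2002, 2003],
    "year": [1999, 2004, 2004, 2010],
})

plain_count = df["id"].count()       # counts every row, duplicates included -> 4
distinct_count = df["id"].nunique()  # counts each object once -> 3

print(plain_count, distinct_count)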
As we continue to look for ways to curb cyber threats, companies and individuals are facing increasingly numerous and advanced threats. Cybercriminals use various methods to execute their attacks. There is no clear way to deal with or eradicate cybercrime, though there are some ways we can limit the risk and protect ourselves from these criminals. Some cybersecurity challenges are more complex and harder to mitigate than others. In this article, we walk you through some of the threats that are particularly hard to mitigate. Some of these threats come from outside the company or workplace, while others come from within the organization.

1 Ransomware
Ransomware is one of the most aggressive tricks used by black hat hackers. It involves taking a computer or even the whole network hostage. The files or data on the captured computer become inaccessible to the user until the victim pays a ransom fee, typically in the form of cryptocurrency such as Bitcoin. The number of ransomware incidents has increased by around 36%, and the rate at which it is growing is alarming. Unfortunately, these criminals are here to stay. The attackers spread viruses to the company and its customers, then demand fees to clear the infection. The virus is removed after the victim pays the price (hopefully).

2 The Internet of Things (IoT)
Today, most people around the world own at least a smartphone, a television, a tablet, and a computer – more than 80% have smartphones. The Internet of Things connects all the devices that you own. It is fast-tracking fundamental change and is how future economies will work. The prospect of placing a sensor on every object at minimal cost is exciting but could also be very dangerous, because it can pose serious security issues. Cybercriminals can exploit the devices and use them for ransomware or DDoS attacks. The interconnectedness of these devices makes the consumer susceptible to attackers.

3 Information flow among devices
Some employees connect personal devices to those at work, so the same devices double as both personal and work devices. This can compromise the company's data or other confidential information.

4 Cloud-based services and computing
Many companies have embraced cloud computing; it enables companies and organizations to be swifter in their operations. Long gone are the days when companies had to pay large sums of money to purchase expensive software. Today most of them use SaaS solutions: they are cloud-based, readily available, and inexpensive. These solutions are very appealing but might pose serious security threats to the companies that use them.

5 Access to confidential information
Internal threats are more complex to detect and deal with than external attacks, which are usually easier to recognize. An internal attack is especially ambiguous when it involves access control. If an employee downloads a file that is not related to their job duties, it is difficult to discern whether this is an attack or just a mistake.
In a previous article we wrote about the activities and information required to create an effective incident response plan. In this article we will talk about the phases of an incident response plan in order to create a well-structured program. As a review, the incident response plan should define the goals, the various individuals involved, the roles and responsibilities, the communications methods, and the escalation processes throughout the phases of the plan. As we mentioned in the previous article, an incident can be anything from a breach to a systems failure, but our focus for this blog will be cyber incidents. To that end, we will first define the types of cyber incidents that an organization can experience before we discuss the phases of an incident response plan. Types of Cyber Incidents - Exploitation – this type of incident takes advantage of unpatched hardware and software or vulnerabilities in systems to take control of the IT environment. - Ransomware – most of us have heard of ransomware attacks. This type of attack uses malware that locks systems and files until a ransom is paid. However, paying the ransom does not guarantee that the data will be unlocked and accessible. - Data Theft – this type of incident involves cybercriminals stealing information that is stored on an organization’s systems. The cybercriminals usually gain access to the systems and the information within them through stolen user credentials. Often, the cybercriminals quietly watch the traffic on the systems for periods of time to identify the most valuable information that they can steal to inflict the most damage. Phases of an Incident Response Plan - Outline the goals of the incident response strategy including the policies and procedures. The improvement of the organization’s security, visibility of an incident and the recovery from an incident need to be clearly defined. - Implement a reliable backup plan to help restore your data. - Have a comprehensive strategy for patching and updating your hardware, operating systems, and applications. - Test the plan to ensure it meets expectations and make improvements based on the results as required. - Monitor networks, systems, and devices for potential threats. - Document events and potential incidents. - Analyze the occurrences to determine whether the incident response plan needs to be activated. - Understand intrusions to contain and apply the appropriate measures for effective mitigation. This may involve isolating systems or suspending access to systems and other measures. - Eliminate the intrusion by restoring from reliable backups. - Ensure devices and systems are clean by running anti-malware and antivirus software - Preserve evidence and documentation to assist with the analysis of the incident. - Identify the root cause and determine what can be improved to avoid the incident in the future. - Evaluate the incident response procedures and processes to highlight what went well and what needs to be improved. - Document the lessons learned and what needs to be adjusted for future incidents. - Meticulously document the steps that were taken to uncover and resolve the incident that can be reused to resolve future incidents quickly and effectively. One last piece of advice that we can offer when it comes to an incident response plan is having a printed copy on hand. When an incident occurs and the systems are unavailable, having an electronic copy will not be very helpful if the team cannot access it. 
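The backup and evidence-preservation steps above both depend on being able to prove that files have not changed. As an illustration only (not part of the plan described here), below is a minimal sketch of building and later checking a SHA-256 manifest for a backup directory; the paths are placeholders.

import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 so large backups don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(backup_dir, manifest_path):
    """Record a hash for every file in the backup directory."""
    manifest = {
        str(p.relative_to(backup_dir)): sha256_of(p)
        for p in Path(backup_dir).rglob("*") if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest

def verify_manifest(backup_dir, manifest_path):
    """Return the files that are missing or whose hashes no longer match."""
    manifest = json.loads(Path(manifest_path).read_text())
    problems = []
    for rel_path, expected in manifest.items():
        target = Path(backup_dir) / rel_path
        if not target.is_file() or sha256_of(target) != expected:
            problems.append(rel_path)
    return problems

if __name__ == "__main__":
    # "backups/latest" and "manifest.json" are placeholder locations.
    build_manifest("backups/latest", "manifest.json")
    print(verify_manifest("backups/latest", "manifest.json"))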
These topics are an overview of the phases of an incident response plan. In creating your plan an organization needs to determine which of the areas where they will require external expertise. It may be none, all, or some of the areas. However, in our experience, getting assistance from experts, whether legal, insurance, communications, incident response specialists or IT service providers is useful as you create your incident response plan. Call MicroAge today to see how we can help you.
A link is not always what it seems. Hackers have gone to great lengths to create convincing websites that look just like the real deal, often by spoofing a major company such as Microsoft. By convincingly spoofing legitimate websites, bad actors hope to encourage end-users to enter their credentials. Thus, URL phishing is a pretext for credential harvesting attacks. When executed well, URL phishing can lead to usernames, passwords, credit cards, and other personal information being stolen. The most successful attacks often require users to log in to an email or bank account. Without proper defenses, end-users and companies can easily fall prey. Here, we discuss the basics of URL phishing and summarize the best practices for stopping these attacks.

Phishing attacks commonly begin with an email and can be used to deliver a variety of attacks. URL phishing attacks take phishing a step further by creating a malicious website. The link to the site is embedded within a phishing email, and the attacker uses social engineering to try to trick the user into clicking on the link and visiting the malicious site.

URL phishing attacks can use various means to trick a user into clicking on the malicious link. For example, a phishing email may claim to be from a legitimate company asking the user to reset their password due to a potential security incident. Alternatively, the malicious email may claim that the user needs to verify their identity for some reason by clicking on the embedded link. Once the link has been clicked, the user is directed to the malicious phishing page. This page may be designed to harvest a user's credentials or other sensitive information under the guise of updating a password or verifying a user's identity. Alternatively, the site may serve a "software update" for the user to download and execute that is actually malware.

URL phishing attacks use trickery to convince the target that they are legitimate, but they can be detected in a few different ways. Common detection mechanisms can catch the low-hanging fruit. However, phishers are growing more sophisticated and using methods that bypass these common techniques. For example, phishing sites may be hosted on SaaS solutions, which provides them with legitimate domains. Protecting against these more sophisticated attacks requires a more robust approach to URL scanning.

Check Point and Avanan have developed an anti-phishing solution that provides improved URL phishing protection compared to common techniques. This includes post-delivery protection, endpoint protection to defend against zero-day threats, and the use of contextual and business data to identify sophisticated phishing emails. Learn more about how phishing and social engineering attacks have grown more sophisticated over the years with the Social Engineering Ebook. Then sign up for a free demo of Check Point Harmony Email and Office to learn how to block the phishing emails that other solutions miss.
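To make the "low-hanging fruit" point concrete, here is a hedged sketch of the kind of simple heuristics a basic URL scanner might apply. These checks are illustrative only; they are not Check Point's or Avanan's detection logic, and a real product combines far richer signals (reputation feeds, content analysis, machine learning).

import ipaddress
from urllib.parse import urlparse

def suspicious_signals(url):
    """Return a list of simple red flags for a URL; an empty list means none were found."""
    signals = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    if parsed.scheme != "https":
        signals.append("not using HTTPS")
    if "@" in parsed.netloc:
        signals.append("contains '@', which can hide the real destination")
    if host.startswith("xn--") or ".xn--" in host:
        signals.append("punycode hostname, possible lookalike domain")
    if host.count(".") >= 4:
        signals.append("unusually deep subdomain nesting")
    try:
        ipaddress.ip_address(host)
        signals.append("raw IP address instead of a domain name")
    except ValueError:
        pass
    return signals

if __name__ == "__main__":
    # Example URLs are made up for illustration.
    for candidate in ["https://example.com/login",
                      "http://192.168.0.10/reset-password",
                      "https://login.example.com.verify.account.attacker.example/reset"]:
        print(candidate, "->", suspicious_signals(candidate) or "no simple red flags")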
There is no specific "synagogue architecture". house of worship can take any form, and any enclosed space can be used as a synagogue. A custom-built synagogue tends to look like the houses of worship of other faiths in the area. Synagogue is actually a Greek word, συναγωγή, meaning "assembly". In Hebrew it might be called a Bet Kenesset or "House of Assembly", or a Bet Tefila or "House of Prayer." The Ashkenazi Jews from Central and Eastern Europe simply call it by the Yiddish word שול or shul, based on the German word for "school". The Temple Mount in Jerusalem was the focus of Jewish worship. The Hebrew Bible says that King Solomon built the First Temple in 957 BCE. It was built to a design similar to Egyptian temples and housed the Ark of the Covenant. It was sacked by the Egyptians and then the Assyrians. Then Nebuchadnezzar II, king of the Neo-Babylonian Empire, destroyed it along with the rest of the city in 586 BCE. Cyrus the Great, founder of the Achaemenid Empire, authorized the construction of the Second Temple in 538 BCE. It was completed in 515 BCE under his successor, Darius the Great. Alexander the Great came close to destroying it in 332 BCE. Herod the Great renovated and expanded it around 20 BCE. But then the Roman Empire besieged Jerusalem and destroyed the Temple in 70 CE. After a following revolt in the 130s, the Romans banned Jews from Jerusalem. While there was a Temple, it was where daily morning and afternoon offerings and sacrifices were carried out, the Psalms were recited, and there was a daily prayer service. Today's Orthodox Jewish services mention the sacrifices carried out in the Temple and call for its restoration. I think it's right to say that from the Orthodox point of view, there is no longer a Temple and maybe it should be rebuilt. Meanwhile, the Reform movement now refers to their assembly halls as temples. Not The Temple, but a local temple. Religious leaders formalized Jewish prayers during the Babylonian captivity of 586-537 BCE. Any Jewish individual or group can build a synagogue. There were synagogues long before the Second Temple was destroyed. Early synagogues dating to the 3rd century BCE have been found in Egypt. A minyan or מִנְיָן of ten men can assemble for group prayers in any space. A room or small building used for group prayer meetings is called a shtiebel or שטיבל. Synagogues, down to the scale of the shtiebel, supplemented but did not replace the Temple in Jerusalem. As for dedicated synagogue structures, the architecture isn't distinctive or regulated. Synagogues tend to resemble their surroundings. The earliest synagogues looked like the temples of other religions of the eastern Roman Empire. Synagogues built in medieval Europe looked like Gothic churches. Synagogues in China resemble Chinese temples. It's the internal furnishings that make a space a synagogue. There are four things you will always find in a synagogue: the Torah itself, a Torah ark (Aron Kodesh) in which it is stored, a raised platform (bima) with a table where the Torah is read, and a continually lit lamp or lantern (ner tamid). Torah / תּוֺרָה The Torah contains the first five books of the Hebrew Bible — Genesis (Bereshit), Exodus (Shemot), Leviticus (Vayikra), Numbers (Bamidbar), and Deuteronomy (D'varim). Tradition credits Moses with writing all of it, except of course for the last eight verses that describe his death and burial. The majority of scholars believe that it took written form during the Babylonian exile. 
The Torah is hand-written in Hebrew on parchment which is then wound into a scroll. It contains 304,805 stylized letters written without a single error, in 42 lines of text per column. A specified section is read every Sabbath (Saturday) morning so the entire Torah is read every year. Ezra the Scribe established the regular communal reading of the Torah when the Jewish people returned from the Babylonian captivity around 537 BCE. Current tradition maintains that Orthodox Judaism has preserved a Torah-reading procedure unchanged since the destruction of the Second Temple in 70 CE. Aron Kodesh / ארון קודש The Aron Kodesh, ארון קודש in Hebrew, is the Torah Ark, a special cabinet in which the Torah scrolls are stored. Unless the architecture prohibits this somehow, the Torah Ark is placed so that to face it is to face toward Jerusalem, at least symbolically. The Lower East Side is in the Western Hemisphere, so the Torah Ark is usually placed on the east wall. Spherical trigonometry is disregarded. A great circle path from New York to Jerusalem would start on a heading of about 53°, much closer to northeast than due east. The Torah Ark is a reminder of the Ark of the Covenant, which held the tablets with the Ten Commandments. It and the façade of the building often bear a symbol representing the tablets of the Ten Commandments. The Ark is the holiest spot in the synagogue, analogous to the Holy of Holies in the Temple. The parochet or פרוכת is an ornate curtain which typically covers the opening. Bima / בּימה The bima, בּימה in Hebrew, gets its name from the ancient Greek βῆμα, a speaker's platform used in courts of law and to address the public. The Temple in Jerusalem contained a platform elevated by two or three steps, and synagogues recreate this. It is at the center of an Orthodox synagogue. Early Christian church architecture picked up the idea of a ceremonial raised platform. The bema continues to be the name for the sanctuary in Orthodox Christian architecture — the area with the altar behind the iconostasion, the soleas or raised pathway in front of it, and the ambo in front of the Holy Doors and projecting into the nave. See Ιερό Βήμα, σόλιον, and αμβων in Greek; and вима, солея, and амвон in Russian. The deacon leads the litanies and the priest delivers the sermon and distributes Holy Communion from the raised bema. In western Christianity the bema evolved into the πρεσβυτέριος or presbytery, or chancel. Ner Tamid / נר תמיד The Ner Tamid, נר תמיד in Hebrew, is the Eternal Light, a reminder of the miraculously persistent western lamp of the menorah in the Temple in Jerusalem. It is a continuously lit lamp, usually electrical these days. It hangs in front of the Torah Ark. An Orthodox synagogue, like those seen here, often have a mechitzah or partition dividing the men from the women, with the women in back. That is, unless there is a separate balcony for the women. The first picture below is taken from one end of the partition, next to one end of the rear-most men's bench. The German Reform movement beginning in the early 1800s changed the appearance of the synagogue, and later the form of the service itself, in an effort to remain Jewish while joining the host society. For example, the bima in a Reform synagogue is typically at the front, next to the Torah Ark. The yad or יד, literally "hand", is a ritual pointer used while reading the Torah. The Torah is made from parchment, which does not absorb ink and is easily damaged. Also, touching the parchment renders you ritually impure. 
The yad can be made from anything, but silver as seen here is common, especially for the hand-like tip.
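The claim earlier that a great-circle path from New York to Jerusalem starts on a heading of about 53 degrees can be checked with the standard initial-bearing formula. The coordinates below are approximate, and this sketch is only an illustration of the calculation, not part of the original article.

import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees from point 1 to point 2 (0 = north, 90 = east)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

# Approximate coordinates for the Lower East Side, New York and for Jerusalem.
print(round(initial_bearing(40.71, -74.00, 31.78, 35.22)))  # roughly 54 degrees, consistent with the "about 53" figure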
There are plenty of dystopian predictions about the Singularity: one definition of which is the point at which general AI and computer intelligence overtakes and replaces human cognition. However, a new report from telecoms provider Tata Communications paints a more positive picture: a future world of work based on collaboration, in which AI augments and diversifies human thinking, rather than renders it obsolete. The report – based on insights from 120 global business leaders – looks on the bright side, identifying a number of ways in which AI-enabled collaboration could improve our day-to-day working lives through its potential to bring much-needed diversity to different scenarios. As a result it can be seen as challenging the numerous recent reports about bias in AI, and the lack of diversity in some training data. Multiplicity Vs Singularity However, before AI can be embraced en masse it has to be accepted at an individual level. Part of that challenge, argues the report, is getting over the unhelpful notion of the Singularity. Ken Goldberg, professor of engineering at the University of California, Berkeley, and co-author of the report, has suggested a different, more positive concept: Multiplicity. Instead of fearing a point in time that could be years down the line, we need to see AI as just another way to bring diversity to problem-solving in the workplace. “The prevalent narrative around AI has focused on a singularity – a hypothetical time when artificial intelligence will surpass humans. But there is a growing interest in ‘multiplicity’, where AI helps groups of machines and humans collaborate to innovate and solve problems,” he said. “This survey of leading executives reveals that multiplicity, the positive and inclusive vision of AI, is gaining traction.” The Tata Communications study found that business leaders increasingly value diversity in the workplace and see it as vital to performance. Eighty-one percent indicated that demographic diversity in the workplace is important or very important. Meanwhile, 90 percent believe that cognitive diversity – perhaps a rare concept in a world of memes – is important for management. This leads the report to put forward a vision for a future AI system that could provide a constant ‘devil’s advocate’ in the workplace. In line with the concept of multiplicity, an AI could air contrarian perspectives to tackle the perennial workplace challenges of confirmation bias and groupthink. For example, by harnessing improved natural language processing and machine learning to trawl through emails and meeting transcripts for keywords, an AI devil’s advocate could challenge unanimous, and potentially false, assumptions. “The important question is not, “When will machines surpass human intelligence?” but instead, “How can humans work together with machines in new ways?”, suggests the study. AI can complement human strengths The report also highlights the role that AI could play in the future of education, comparing the sweeping potential changes it brings to the US school system. “Much of education today still emphasises conformity, obedience, and uniformity. The important question is not when machines will surpass human intelligence, but how humans can work and learn with computers in new ways. 
This requires combining AI with IA – intelligence augmentation – where computers help humans learn and work.” As a result, the report suggests that education should evolve to emphasise uniquely human skills that AI and robots are not currently able to replicate. These include creativity, curiosity, imagination, empathy, human communication, diversity (which would seem to contradict the report’s own argument), and innovation. And of course, AI itself can play a role in that transformation: “AI systems can provide universal access to sophisticated adaptive testing, and exercises to discover the unique strengths of each student and to help each student amplify his or her strengths,” the report reads. “AI systems could support continuous learning for students of all ages and abilities.” This is already happening to some degree. Away from the report, education provider Pearson is one of several companies already using AI to augment their content for different users’ preferences, effectively turning each online course into a personal tutor. Report co-author Vinod Kumar, CEO and MD at Tata Communications, believes that AI will be a force for positive change in both the corporate and education worlds. “AI is now being viewed as a new category of intelligence that can complement existing categories of emotional, social, spatial, and creative intelligence. What is transformational about multiplicity is that it can enhance cognitive diversity, combining categories of intelligence in new ways to benefit all workers and businesses,” he said. Internet of Business says As previously reported by Internet of Business, while the dominant narrative in the popular press around AI, machine learning, robotics, and related technologies, has focused on Terminators and human replacement, the business narrative has been unhelpful in another way. While many technology providers are apparently sincere in their belief that AI is about augmenting and complementing human ingenuity and skills, many on the buy side of the equation are obsessed with using the technology to slash costs rather than make their businesses smarter. The following reports explore some of these issues:- - Read more: Propaganda chatbots and manipulative AI: Worse to come, says MIT - Read more: Bank of England warns of large-scale job losses from AI - Read more: AI bubble set to burst, says critical analyst report - Read more: Prove that AI works with real examples, say consumers: Industry report Meanwhile, this external report is one of the most recent to explore the problem of bias entering AI systems – in this case, in the world of healthcare.
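As a very rough illustration of the "devil's advocate" idea described above, scanning meeting transcripts for signs of unanimity, the sketch below counts agreement versus dissent phrases and flags possible groupthink. The phrase lists and threshold are invented assumptions; a real system would rely on proper natural language processing rather than keyword matching.

# Hypothetical phrase lists; a production system would use NLP models, not keywords.
AGREEMENT = ["i agree", "sounds good", "no objections", "let's go with", "exactly"]
DISSENT = ["i disagree", "what if", "have we considered", "risk", "concern", "alternative"]

def groupthink_score(transcript):
    """Return (agreement_hits, dissent_hits) for a meeting transcript."""
    text = transcript.lower()
    agree = sum(text.count(p) for p in AGREEMENT)
    dissent = sum(text.count(p) for p in DISSENT)
    return agree, dissent

def devils_advocate(transcript, ratio=3.0):
    """Flag the meeting if agreement dominates dissent by more than `ratio`."""
    agree, dissent = groupthink_score(transcript)
    if agree > ratio * max(dissent, 1):
        return "Possible groupthink: consider asking for a dissenting view."
    return "Discussion shows a healthy mix of agreement and challenge."

if __name__ == "__main__":
    sample = "I agree. Sounds good. No objections here. Exactly what I was thinking."
    print(devils_advocate(sample))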
This mini course will explain what machine learning is and illustrate how machine learning is used in inventory management and predictive analysis; assets management and risk; and information retrieval, advertising targeting, and product pricing. It will also explain machine learning algorithms.

What am I going to get from this course?
- Define Machine Learning and give a brief history of Machine Learning.
- Define Data, including Data Types, Data Dimensions, and Data Classifications in terms of Computer Programming, and apply this knowledge in classifying your own data.
- Define Algorithm, including the various types of Algorithms and the Logic used in Computer Programming, and apply these concepts to writing your own algorithms.
- Understand what a Protocol is for, name the types of Protocols used by computer networks, and describe the different types of networks including the Internet and World Wide Web.
- Understand the underlying numeral systems used in machine languages and the various levels of programming languages.
- Understand how Machine Learning applies to Inventory Management, Assets Management and Risk Analysis, and how Machine Learning applies to your business and personal finance.

Prerequisites and Target Audience
What will students need to know or do before starting this course?
- There are no required materials or software for this course.
Who should take this course? Who should not?
As an introduction to Machine Learning, this course is presented at a level that is readily understood by all individuals interested in Machine Learning. This course provides a history of Machine Learning, defines data and explains what is meant by big data, and classifies data in terms of computer programming. It covers the basic concept of numeral systems and the common numeral systems used by computer hardware to establish programming languages, and it provides practical applications of Machine Learning.

Module 1: Introduction to Machine Learning
An Introduction to Machine Learning
In this presentation we introduce the concepts of computer data, algorithms, protocols, networks, the Internet and the World Wide Web, in addition to various computer languages and coding in terms of two numeral systems (Assembly Language).
Quiz on An Introduction to Machine Learning
Quiz one covers several of the terms and concepts introduced in Lecture 1: An Introduction to Machine Learning.
Quiz on An Introduction to Machine Learning (cont.)
Quiz two covers several of the terms and concepts introduced in Lecture 1: An Introduction to Machine Learning.
A Brief History of Machine Learning
This presentation covers a brief history of Machine Learning.
Quiz on the History of Machine Learning
This quiz covers the topics introduced in the Brief History of Machine Learning.
Data Classifications: Declarations and Types
This presentation outlines the various types and forms of computer data, how data is classified, and introduces big data and the variable types used in machine learning.
Quiz on Data Classifications: Declarations and Types
This quiz covers data dimensions and big data, in addition to data types.
Algorithms
This presentation is on several types of algorithms and the logic used to instruct the computer.
Quiz on Algorithms
This quiz covers the various types of algorithms.
Protocols
In this presentation, we discuss the four main types of protocols in Machine Learning and introduce several protocols used in applications.
Quiz on Protocols
This quiz covers the various types of protocols.
Quiz on Types of Computer Networks
This quiz is on the various types of Computer Networks.
Logic Used in Programming
In this presentation, the logical statements and operators are introduced, including conditional statements, loops and logical connective operators.
Quiz on the Logic Used in Programming
Numeral Systems
This presentation covers the common base ten and the two common computer bases: binary and hexadecimal.
Quiz on Numeral Systems
This quiz covers the numeral systems used in computers and machine languages.

Module 2: Management using Machine Learning
Inventory Management and Predictive Analysis
In this presentation, machine learning is applied to maintaining inventory and using predictive analysis to ensure that we are never left without inventory to sell.
Quiz on Inventory Management and Predictive Analysis
This quiz covers inventory management including predictive analysis.
Assets Management and Risk
This presentation is on assets management in machine learning.
Information Retrieval and Advertising Targeting
In this presentation we discuss information retrieval in terms of personal information gathered by personal computers and data retrieval from a large number of files, in addition to how the results of Internet searches can result in targeted advertising.
Quiz on Information Retrieval and Advertising Targeting
This quiz covers the types of information retrieval and advertising targeting.
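To give a flavour of the Module 2 material on inventory management and predictive analysis, here is a minimal sketch of forecasting next-period demand with a moving average and deciding whether to reorder. The sales numbers, lead time, and safety stock are made-up teaching values, not part of the course.

def moving_average_forecast(history, window=3):
    """Predict next-period demand as the mean of the most recent `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def should_reorder(on_hand, history, lead_time_periods=2, safety_stock=10):
    """Reorder when stock on hand won't cover forecast demand over the lead time plus a buffer."""
    forecast = moving_average_forecast(history)
    needed = forecast * lead_time_periods + safety_stock
    return on_hand < needed, forecast

if __name__ == "__main__":
    weekly_sales = [42, 38, 51, 47, 45, 50]   # invented sales history
    reorder, forecast = should_reorder(on_hand=80, history=weekly_sales)
    print(f"forecast per week: {forecast:.1f}, reorder now: {reorder}")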
Traditions and Customs Quiz Questions & Answers - Why do Christian brides traditionally wear the wedding ring on the fourth finger? - What did handshaking originally symbolize? - Which people traditionally smoke a pipe of peace? - What primitive religion is still practiced in the West Indies, notably in Haiti? - What kind of dance do native women perform for tourists in Hawaii? - (a) According to a superstition, which bird’s feathers should never be in a house as decoration? (b) Why should you never give a knife to a friend, according to custom? - (a) Why did early sailors and pirates wear earrings? (b) Why is it considered unlucky to open umbrellas indoors? - (a) Why is number 13 considered unlucky? (b) How many years of bad luck follow breaking a mirror? - What was the most important symbol of Kingship in the Lake States of Africa? - What two flowers were supposed to have been the colors for Holi in India? - What was the name of the Island in ancient Greece where no births or deaths were allowed to take place? - Match these toasts with their lands: (a) Skal - Japan (b) Viva - England (c) Prosit - Brazil (d) Cheers - Spain (e) Salud - Scandinavia (f) Banzai - Germany - What does “Namak Haram” mean? - Why is it customary to wear blue beads among the nomadic tribes of the Middle East? - What did the ’Pigtail’ symbolize in China? - Name the Polynesian people of New Zealand who are found of tattooing themselves. - Why was lipstick used in ancient Egypt? - What is called “The Japanese custom of committing suicide to save face”? - (a) In which country is it a tradition to drink tea made of salt and yak butter? (b) Over which shoulder should you throw spilled salt? - Which people use the following modes of greeting? (a) rubbing noses (b) folding hands together (c) pressing one’s thumb to that of another person (d) kissing on both cheeks (e) shaking one’s own hand - It was believed that the fourth finger has a vein that is linked directly to the heart. - A Legal act symbolic of the parties joining in compact, peace, or friendship. - The Red Indians - The Hula - (a) The Peacock’s (b) because it will cut your friendship - (a) They believed that ear-rings would keep them from drowning. (b) Umbrellas were associated with the Sun God and it was sacrilege to open them in the shade. - (a) According to superstition, there were 13 people at the Last Supper – Christ and his 12 disciples. So it was ominous. (b) 7 Years - Royal tribal drums. Through the rhythm of the drums, a king communicated with his ancestors, and the larger the drums of the king, the more powerful it was supposed to make him. - The Tesu and the Mohua - (a) Skal - Scandinavia (b) Viva – Brazil (c) Prosit – Germany (d) Cheers – England (e) Salud – Spain (f) Banzai - Japan - Indian believes that a split between 2 friends brings enmity. To renew a broken friendship, they must seal their reunion by eating salt together. Eating a man’s salt is to partake of his hospitality. A “Namak Haram” is one who is not true to his salt as he breaks the covenant. - To ward off the evil eye - The pigtail was a symbol of abject humiliation. - The Maoris - It was believed that a red circle painted around the mouth kept the soul inside the body and the devil outside. - (a) Tibet (b) The left - (a) Eskimoes
With the click of a button, Vivien Coulson-Thomas removes the cadaver's skin. Another click and she removes muscle to reveal her target - blood vessels. All the while the assistant professor of optometry marvels at the one-touch dissection process. "You can control the table to have whatever you touch disappear to reveal the structures of interest," she said. That's pretty much how the 3D Anatomage Table works. It uses digital cadavers to help teach anatomy, and the University of Houston College of Optometry is the first optometry school in the nation to have its own table - and not only one, but two exclusively for students. Many systemic conditions, like hypertension and diabetes, lead to eye disease so in their training, optometry students go through rigorous anatomy courses focusing on not only systemic pathology, but also the head, neck and skull. In recent years the anatomy tables have popped up in medical and nursing schools, including one at the UH College of Nursing on the Sugar Land campus. They seem to solve so many problems. "The Anatomage Table provides an innovative, world class learning opportunity for our students. In a University as diverse as ours, this will give us the opportunity to accommodate students who, because of religious, cultural reasons or past trauma could not work with human cadavers," said UH College of Optometry dean and Greeman-Petty professor, Earl L. Smith. Then there's the practical side of dissection. With a human cadaver only so much cutting can be done, and much of the internal views are blocked. "If you want to know how the esophagus and the trachea are aligned with the vertebral column, with a cadaver you can only dissect from one side and you get a rough idea, but with the Anatomage Table you can actually isolate those three organs, turn them around to get a 3D look and literally see how these organs lie against one another," said Coulson-Thomas. "The 3D view adds a new layer to the learning process." Not meant as a substitute for human cadavers, the digital cadavers are also used as a tool during actual human cadaver prosection (dissection by the professor). Again it has to do with the line of sight. In the normal course of dissection, whether removing a vein or artery, the view of the underside is hidden. By pausing the dissection and calling up the same vein or artery on the table, the user is able to turn it around and look at in 3D. "It completely revolutionized the whole prosection process. We used the Anatomage Table to guide some of our prosections, and it made the whole process a lot easier. When we first bought the table I never thought of it being used in that way." That in-depth view is what she and assistant professor of optometry Lisa Ostrin, were looking for when they took over the anatomy course in the fall. "If I had one of these tables when I was taking anatomy, it would have been incredible, and Dr. Ostrin and I wanted to bring that experience to our students," she said. The pair spearheaded the seemingly overnight effort, and the tables arrived in time for the semester to begin and their own software tutorials to take place. Having the extra table, available to students in off hours, means they can study the digital cadavers for their boards and other important tests on their own schedules. "It's really 21st century teaching and learning," said Coulson-Thomas.
New Delhi, India, January 28, 2021: A research study in Delhi NCR (the national capital region of India) during the pandemic observes an average drop of 30% in air pollution during the lockdown, compared with the previous years for the same period. Commissioned by Manas Fuloria (Custodian of Entrepreneurship, Nagarro) and Namita Gupta (founder, Airveda), the project aims to critique and propose actionable tasks to check pollution levels in the region. The project is titled Analysing Pollution Levels in Delhi NCR During the COVID-19 Lockdown. The team, comprising Anmol Dureha, Bhairevi Aiyer, Gursehaj Aneja and Pranav Suri, analyses five pollutants (Particulate Matter 2.5, Particulate Matter 10, Nitrogen Oxides, Ozone, and Carbon Monoxide) from data collected by the pollution-monitoring stations at different locations across Delhi NCR.

In India, the lockdown was in four phases with different levels of restrictions on economic activities. The air pollution data collected for the corresponding phases present interesting trends and patterns.

Figure: Change in pollutant levels across lockdown phases (2020)

The study summarizes the key takeaways as below:
- The lockdown provided a controlled environment to analyze air pollution in Delhi NCR. Though it was difficult to classify singular sources of pollution, the study establishes that air quality and economic activities are inversely related.
- The drop in air pollution was more than 50% during the first phase of lockdown (compared to the previous years). On average, the drop was about 30% across the four phases. This shows that air quality can improve under strict controls on economic activities.
- The report shows that air pollution is not just a winter problem but persists throughout the year. Domestic activities like biomass burning for cooking and stubble burning add to the problem.
- Even with reduced economic activities, pollution levels took 2-7 days to reach the minimum value of the good range (according to pollution control standards). Given this observation, the report recommends long-term solutions instead of one-off events.
- Delhi NCR did not meet the air pollution threshold set by the WHO (10 μg/m3) or the Indian government (30 μg/m3) during the lockdown.

The bottom line is that we are at an environmental tipping point. The authorities may have to tackle more than the usual suspects like industrial and vehicular emissions and construction dust.

Want to understand more about air pollution in Delhi NCR and actionable steps that can improve air quality? Read the report here.

Nagarro (FRA: NA9) is a global digital engineering leader with a full-service offering, including digital product engineering, digital commerce, customer experience, AI and ML-based solutions, cloud, immersive technologies, IoT solutions, and consulting on next-generation ERP. Nagarro helps clients to become innovative, digital-first companies through an entrepreneurial, agile, and CARING mindset, and delivers on its promise of thinking breakthroughs. The company employs over 8,400 people in 25 countries. For more information, visit www.nagarro.com. ISIN DE000A3H2200, WKN A3H220

Airveda offers app-enabled air quality monitors designed and manufactured in India. The company also collaborates with researchers and urban planners to develop innovative solutions to curb air pollution in the country. For more information, visit www.airveda.com.
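For readers who want to reproduce this kind of comparison on their own monitoring data, the sketch below shows one way the average drop could be computed with pandas. The file name, column names, and phase dates are assumptions for illustration; they do not come from the Nagarro/Airveda study.

import pandas as pd

# Hypothetical lockdown phases of 2020 (approximate dates).
PHASES = {
    "Phase 1": ("2020-03-25", "2020-04-14"),
    "Phase 2": ("2020-04-15", "2020-05-03"),
    "Phase 3": ("2020-05-04", "2020-05-17"),
    "Phase 4": ("2020-05-18", "2020-05-31"),
}

def phase_drop(df, pollutant="pm25"):
    """Percent change in mean pollutant level, 2020 vs the 2019 baseline, per phase."""
    df["date"] = pd.to_datetime(df["date"])
    results = {}
    for phase, (start, end) in PHASES.items():
        current = df[(df["date"] >= start) & (df["date"] <= end)][pollutant].mean()
        base_start, base_end = start.replace("2020", "2019"), end.replace("2020", "2019")
        baseline = df[(df["date"] >= base_start) & (df["date"] <= base_end)][pollutant].mean()
        results[phase] = 100.0 * (current - baseline) / baseline
    return results

if __name__ == "__main__":
    # "readings.csv" is a placeholder: one row per day with "date" and "pm25" columns.
    readings = pd.read_csv("readings.csv")
    for phase, change in phase_drop(readings).items():
        print(f"{phase}: {change:+.1f}% vs 2019")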
<urn:uuid:40411c92-54fd-43d8-afc9-4b07ba9500bc>
CC-MAIN-2022-40
https://www.nagarro.com/en/news-press-release/better-air-quality-in-delhi-ncr-research-study
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00265.warc.gz
en
0.921141
740
2.796875
3
According to Statista, there were a total of 304 million ransomware attacks worldwide in 2020, a 62% increase from the year prior. So far in 2021, the outlook has not improved. With targets ranging from meat processing giants and pipelines to regional victims like the ferry operator for Martha’s Vineyard and Nantucket, this ransomware epidemic doesn’t show signs of slowing anytime soon. Factors like widespread remote working, migration to cloud infrastructure, lax cybersecurity practices, cryptocurrencies, and the booming business of ransomware-as-a-service (RaaS) are all contributing to this problem. So, what can companies do to prevent and/or quickly recover from a ransomware attack? The answer has been around for a while. Businesses must apply a defense in depth approach to their cybersecurity, and the controls found in the NIST Cybersecurity Framework are a perfect model for how to achieve this. The NIST Cybersecurity Framework (CSF) is a voluntary standard put out by the US Federal government that uses business drivers to guide cybersecurity activities as part of an organization’s overall risk management strategy. It consists of three parts: a Framework Core, Implementation Tiers, and a Framework Profile. The Framework Core provides a set of desired cybersecurity activities and outcomes using common language that is easy to understand in the form of five Functions. The Core guides organizations in managing and reducing their cybersecurity risks in a way that complements existing cybersecurity and risk management processes. Although ransomware attacks are happening all around us, you don’t have to panic. If you get serious about applying these defense in depth cybersecurity controls to your critical systems, your organization will be able to not only reduce its risk by about 85%, but will also be able to quickly recover if ransomware should strike. To learn more about applying the NIST Cybersecurity Framework in OT environments, download our NIST CSF implementation guide.
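To make the Framework Core a little more concrete: its five Functions are Identify, Protect, Detect, Respond, and Recover, and a defense-in-depth review is essentially a mapping of your existing controls onto those Functions to find gaps. Below is a minimal Python sketch of such a mapping; the example controls are hypothetical and are not taken from NIST's official category catalog.

# Minimal sketch: tracking control coverage against the five NIST CSF Functions.
# The Function names come from the framework; the example controls are hypothetical.

CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# Hypothetical inventory of controls an organization has in place,
# tagged with the Function they primarily support.
implemented_controls = [
    ("asset inventory", "Identify"),
    ("offline, tested backups", "Recover"),
    ("MFA on remote access", "Protect"),
    ("EDR alerting", "Detect"),
]

def coverage_report(controls):
    """Group controls by Function and flag Functions with no coverage."""
    by_function = {f: [] for f in CSF_FUNCTIONS}
    for name, function in controls:
        by_function[function].append(name)
    for function in CSF_FUNCTIONS:
        items = by_function[function]
        status = ", ".join(items) if items else "GAP - no controls mapped"
        print(f"{function:8s}: {status}")

coverage_report(implemented_controls)
# In this toy example, 'Respond' has no mapped control, which is exactly the kind
# of gap (for instance, no incident response plan) that ransomware exploits.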
<urn:uuid:356c6450-8d23-492c-8042-40edbeb59942>
CC-MAIN-2022-40
https://www.industrialdefender.com/blog/using-nist-csf-prevent-recover-from-ransomware
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00265.warc.gz
en
0.940996
384
2.625
3
The holy grail of manufacturing IoT products: make them interactive and scalable. The production at the back end is enormous, complicated, and highly vulnerable to failures. Therefore, developers should thoroughly establish the feasibility of the idea on paper before taking it to the production floor. Since the ecosystem has to accommodate numerous variables for hardware product development and manufacturing, the foundation must be strong. In the following guide, we walk you through the major manufacturing options you can pick for your IoT product. Although not counted as a mainstream activity in manufacturing, prototyping holds critical importance in IoT projects because it lays the foundation for the IoT product in focus. It is a trial version of the connected components of the hardware such as sensors, circuit boards, microcontrollers, embedded systems, and mechanical parts such as motors, outer casing, buttons, etc. Prototyping aims to evaluate the functional correctness of the product idea in the real world rather than on paper. In IoT, prototyping is an important phase in the development lifecycle because there are multiple electrical, mechanical, firmware, and software components tied together. Traditionally, prototyping was a slow process and took months for a medium-scale build to move from paper to production. However, with the advent of rapid prototyping technologies, IoT project owners now have far greater flexibility and much shorter development cycles. In this process, a scale model of a single mechanical component or the entire product is developed using CAD data. The most common method for rapid prototyping is 3D printing. Not only mechanical parts but also PCBs can be 3D printed nowadays. Here, the construction materials are deposited, connected, and solidified under computer control. The end output is a three-dimensional object built up layer by layer. The CAD models are saved in an STL file format that works as a reference source for the 3D printer to generate the output. Producing a new prototype part can take anywhere from hours to weeks, depending on the size of the part and the capability of the 3D printer. Fun fact – Even SpaceX uses 3D printed parts in their rockets. CNC machining uses rotating computer-controlled tools such as end mills and drills to remove material from a solid block. This is done to impart a particular pattern or shape to the product. Here, the same design file can be used for multiple machines, thereby producing multiple prototypes at once. CNC machining can make multiple cuts at various angles and handles metal fabrication. CNC could be costlier than 3D printing, but it is highly accurate and gives you the confidence to finalize your design. Choosing Prototyping Partners It is recommended to outsource prototyping to experienced agencies. This is because a manufacturing partner will help resolve procurement complexities, thereby ensuring fast-track development. Check for a verified listing of IoT vendors across disciplines. Browse their offerings and cross-check for extremely negative reviews. To cut through the chaos, however, you can partner with marketplaces that provide on-demand resources. AT&T, for example, has started offering on-demand IoT project consultation services for the planning and development of products. Prototyping partners provide expert consulting in manufacturing across different tooling techniques.
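Since the CAD model handed to a 3D printer is typically an STL file, a quick programmatic sanity check of that file can save a round trip with a prototyping partner. The Python sketch below reads a binary STL (the common machine-generated variant; ASCII STL is not handled) and reports the triangle count and bounding box as a rough proxy for part size and complexity. The file name is a made-up placeholder.

# Quick sanity check of a binary STL before sending it to a prototyping partner.
# Binary STL layout: 80-byte header, uint32 triangle count, then 50 bytes per
# triangle (12 little-endian float32 values plus a 2-byte attribute field).
import struct

def stl_summary(path):
    with open(path, "rb") as f:
        f.read(80)  # header (ignored)
        (n_triangles,) = struct.unpack("<I", f.read(4))
        mins = [float("inf")] * 3
        maxs = [float("-inf")] * 3
        for _ in range(n_triangles):
            values = struct.unpack("<12fH", f.read(50))
            # values[0:3] is the facet normal; the three vertices start at index 3
            for v in range(3):
                for axis in range(3):
                    coord = values[3 + 3 * v + axis]
                    mins[axis] = min(mins[axis], coord)
                    maxs[axis] = max(maxs[axis], coord)
        size = [maxs[i] - mins[i] for i in range(3)]
        return n_triangles, size

if __name__ == "__main__":
    tris, bbox = stl_summary("enclosure_v2.stl")  # hypothetical file name
    print(f"{tris} triangles, bounding box (model units): "
          f"{bbox[0]:.1f} x {bbox[1]:.1f} x {bbox[2]:.1f}")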
Low to Mid Volume Manufacturing Soft Tooling is a less costly and simpler method of developing multiple components of an IoT product. Since IoT systems combine many small-scale components, soft tooling helps fill the gaps and produce an airtight build. The materials used are flexible and easily moldable to the specs of the overall system. Silicone is the most popular material for soft tooling because of its ability to withstand extreme temperatures. It is used through injection molding of Liquid Silicone Rubber (LSR) or silicone molding. Alternatively, a vacuum can be used to draw liquid into the mold (vacuum casting). Similarly, the casting process using sand is known as sand casting. Choosing a Low to Mid Volume Manufacturing Partner Since soft tooling is less costly, it is a preferred technique for medium-to-low volume manufacturing requirements for IoT products offering simpler applications. While choosing a partner, check for their efficiency in validating a design concept through functional testing, approvals, visualization, presentations, etc. However, soft tooling has its limitations too. Most materials other than silicone lack wear resistance, which affects the durability of the product. It is also important to know that it is difficult to change the patterns once they are built. Thus, the manufacturing partner should specialize in using different materials appropriately. They should be able to use various materials to ensure quick turnaround time and produce up to a few hundred pieces. The ideal use of soft tooling is to produce 50 to 1000 units for early-stage consumer trials. High Volume Manufacturing High volume manufacturing for IoT products containing metals, alloys, and other ‘hard’ materials is a critical domain. It is expensive, time-consuming, and not suited for small production runs. This is because the product in development has to go through multiple production cycles, heat treatments, stringent tolerance levels, and testing and operational standards that it must adhere to. Therefore, durable, high-precision parts at volume can only be produced through hard tooling. Hard tooling is strictly meant for projects where the volume requirement runs into tens of thousands of units. Otherwise, it will not recover the cost that goes into the tooling. Here, the most commonly used materials are nickel, aluminum, steel alloys, and others that have high tensile strength. Although these molds take more time to produce, the end output is highly durable and lasts longer. The molds have to be crafted very carefully to avoid errors. Given the complexity of IoT products, it is recommended to outsource hard tooling to a partner specializing in precision machining. Choosing a High Volume Manufacturing Partner Hard tooling finds extensive applications in an industrial setup. Thus the partner should have substantial experience in hard tooling of mechanical, electrical, and electronic products. They should help you with high volume production requirements of thousands to millions of units. Hard tooling has high non-recurring engineering (NRE) costs, which is why it is recommended only for high-volume production. Engage In a Partnership Although growing, the IoT industry is facing a shortage of skilled resources. Consequently, project delays and failures are an unresolved issue. Amidst all this, a manufacturing partner could absorb the pressures so that you get to focus on core business areas. As we usher into 2021, there's an ocean of opportunities ahead. Try to make the best of it.
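The soft-versus-hard tooling choice described above is, at its core, a break-even calculation: hard tooling carries a much higher NRE cost but a lower per-unit cost, so it only pays off beyond a certain volume. The Python sketch below illustrates the arithmetic; all cost figures are hypothetical placeholders, not real quotes.

# Break-even sketch for choosing between soft and hard tooling.
# All costs are hypothetical placeholders; substitute quotes from your partner.

def total_cost(nre, unit_cost, volume):
    """One-time tooling cost plus per-unit production cost."""
    return nre + unit_cost * volume

soft = {"nre": 5_000.0, "unit_cost": 40.0}   # e.g., silicone molds / vacuum casting
hard = {"nre": 60_000.0, "unit_cost": 8.0}   # e.g., machined steel injection molds

for volume in (100, 500, 1_000, 5_000, 20_000):
    soft_cost = total_cost(soft["nre"], soft["unit_cost"], volume)
    hard_cost = total_cost(hard["nre"], hard["unit_cost"], volume)
    better = "soft tooling" if soft_cost < hard_cost else "hard tooling"
    print(f"{volume:>6} units: soft ${soft_cost:>10,.0f}  hard ${hard_cost:>10,.0f}  -> {better}")

# Volume at which the two cost curves cross:
break_even = (hard["nre"] - soft["nre"]) / (soft["unit_cost"] - hard["unit_cost"])
print(f"break-even at roughly {break_even:,.0f} units")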
<urn:uuid:0b1cc30a-e6f8-4a1a-87cf-eb72796fa09a>
CC-MAIN-2022-40
https://www.iotforall.com/guide-for-manufacturing-an-iot-product
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00465.warc.gz
en
0.925328
1,362
2.859375
3
Scientists have come up with a new technology that may make dropping your smartphone far less upsetting. Researchers in the United Kingdom have announced that they have developed a technology for an unbreakable phone screen. The tech involves the use of a special form of electrode. It would be possible to use it in mobile devices such as smartphones and tablets, or even in larger electronics such as TVs. The team of scientists has predicted that the unbreakable screens could be available as soon as 2018. The electrode for the unbreakable phone screen technology conducts electricity throughout the glass. A traditional form of electrode is made of indium tin oxide (ITO), an expensive transparent conductive material. In fact, it is prohibitively expensive, which has stopped that method from being used for more durable mobile phone screens until now. However, the UK scientists have made a new type of electrode by combining graphene and silver nanowires. These two materials were the key to being able to create the unbreakable phone screen display. The silver nanowires are exceptionally tiny, at 1/10,000 the width of a human hair. And yet, they're still much thicker than graphene, which is only one atom thick. The thinness of these materials makes for an exceptionally flexible conductor which is far more resistant to breaking and cracking than the current standard glass screens. When taking into consideration the number of people who break their smartphone screens, this is very good news. After all, a cracked screen isn't just an inconvenience. In fact, inconvenience is only the beginning. Because touchscreens are standard on nearly all smartphones, a cracked screen can limit the use of the device or render it unusable. The unbreakable phone screen technology was created by a team of University of Sussex physicists who were working with an Oxford microelectronics firm. They developed these unique hybrid electrodes and published their findings in the Nanoscale journal. According to that publication, this is also an important discovery because the graphene and silver nanowire combination is actually better at conducting electricity than the older electrodes made of the expensive ITO.
<urn:uuid:7aac529c-940e-4ba4-b3f4-cb95d67a9334>
CC-MAIN-2022-40
https://www.mobilecommercepress.com/tag/university-of-sussex/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00465.warc.gz
en
0.969871
433
3.546875
4
March 29, 2022 Hyper-V Replication and Failover Types: In-Depth Overview Hyper-V Replication is a feature that comes at no additional cost with Microsoft Hyper-V. This feature lets users implement a business continuity (BC) and disaster recovery (DR) plan built on replication to a remote host. Introduced with Windows Server 2012, Hyper-V Replication is popular with both SMBs and large enterprises. When it comes to using Hyper-V Replica, there may be some misunderstandings about how best to use this feature and the purposes it serves. Replication can also be a source of confusion when trying to decide on other DR features like checkpoints and clustering. In this post, we cover: - What is Hyper-V Replica? - How does Hyper-V Replication work? - When to use Hyper-V Replication - What is Hyper-V Replica Failover? - Types of Hyper-V Replica failover - Alternative Hyper-V Replication solutions What Is Hyper-V Replica? Hyper-V replication is a disaster recovery feature available as part of Microsoft Hyper-V. The main role of Hyper-V replication is creating replicas of primary virtual machines to be stored on remote hosts for VM recovery when needed. The primary host and the hosts with the replica VMs can reside on the same site or be located on different sites. An organization can set up and maintain its own replica site. Smaller organizations with limited budgets can choose to subscribe to disaster recovery as a service (DRaaS) from a managed service provider (MSP). In this case, disaster recovery using Hyper-V replication can be an affordable option given its low requirements and ease of configuration. How Does Hyper-V Replication Work? By default, Hyper-V replication creates only one recovery point for a VM replica and updates the data of this recovery point at set time intervals. You can set multiple recovery points for a Hyper-V replica if needed. Configuring multiple recovery points does not change the replication interval itself, but it lets you recover data from the recovery point you need. For example, you can keep up to 24 recovery points for VM replicas, created at 1-hour intervals. The available Hyper-V replication frequencies are every 30 seconds, 5 minutes, or 15 minutes. Replication data is transferred via the network from the host running the source VM to the host on which the VM replica is stored. For this reason, you must have sufficient network bandwidth, which can be a challenge when using an internet connection between two geographically distributed sites. To avoid conflicts and split-brain situations, you should not run a source Hyper-V VM and its VM replica simultaneously. VM replicas are usually connected to other networks and have IP addresses different from those used by the original VM. Hyper-V replication process - You can configure Hyper-V replication in Hyper-V Manager or System Center Virtual Machine Manager (SCVMM). - When you enable Hyper-V replication for a VM, a VM replica is created on the secondary host, and a Hyper-V Replica Log (.HRL) file to track changes is created. - When you replicate a VM for the first time, all VM data is copied from the source host to the target host. - The next time the VM is replicated, only changed VHDX (or VHD) virtual disk data (increments) is copied to reduce replication time and the amount of transferred data. - A Hyper-V checkpoint (.AVHDX) is created when replication starts (for subsequent replicas after the initial replication).
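A quick way to reason about these settings is to treat the replication frequency as your worst-case data loss (RPO) and the number of additional hourly recovery points as your retention window. The small Python sketch below makes that relationship explicit; the parameter names are chosen for illustration and do not mirror any Hyper-V cmdlet options.

# Sketch: relating Hyper-V Replica settings to worst-case data loss and retention.
# Parameter names are illustrative; they do not mirror Hyper-V cmdlet options.

SUPPORTED_INTERVALS_SEC = (30, 5 * 60, 15 * 60)  # 30 s, 5 min, 15 min

def replica_profile(interval_sec, hourly_recovery_points):
    if interval_sec not in SUPPORTED_INTERVALS_SEC:
        raise ValueError(f"unsupported replication interval: {interval_sec}s")
    return {
        "worst_case_rpo_minutes": interval_sec / 60,        # data written since the last send is lost
        "retention_hours": hourly_recovery_points,          # additional recovery points are hourly
        "replications_per_day": 24 * 3600 // interval_sec,  # how often deltas cross the wire
    }

print(replica_profile(interval_sec=5 * 60, hourly_recovery_points=24))
# -> {'worst_case_rpo_minutes': 5.0, 'retention_hours': 24, 'replications_per_day': 288}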
- When a new recovery point is created and the oldest recovery point for a VM replica has expired, the oldest one is merged into the main virtual disk. Recovery from a VM replica is performed manually when using native Hyper-V functionality. When Is Hyper-V Replication Used? Unlike Hyper-V clustering, where one running VM is located on shared storage and accessed by two Hyper-V hosts, Hyper-V replication uses two VM instances (a primary running VM and a VM replica that is in a powered-off state during normal operation) located on each host's own storage (local storage, SAN, or NAS). Download the ebook about Hyper-V clustering to learn how clustering works. When not to use Hyper-V replication You may not need to use a Hyper-V replica for VM failover if you run the following services on VMs: - Active Directory Domain Controller. Choose native Active Directory replication options rather than using Hyper-V replication. - MS SQL Server. You can use Hyper-V replicas for SQL Server protection. However, there is a native alternative solution to replicate SQL databases. Read the blog post about MS SQL Server Replication to learn more about the native replication feature. Selecting the right SQL replication method depends on your tasks and requirements. - Microsoft Exchange. You may have problems if you use Hyper-V replication for VMs running Exchange. Choose the native Exchange replication technology. Hyper-V replication flexibility Hyper-V replication is flexible in terms of the multiple deployment variations it supports. It can be deployed between: - Two standalone hosts - A standalone host and a Hyper-V failover cluster - Two Hyper-V failover clusters Hyper-V replication is also flexible in terms of hardware requirements. The primary and secondary hosts do not require matching hardware components. In addition, extended replication is supported. This means that a secondary host can be the source of another replication to a third host, thus forming a daisy chain. Hyper-V replication provides flexible granular protection. You can select specific VMs to be replicated and even select specific VHDX virtual disks within a VM. What Is Hyper-V Replica Failover? Failover is the process of switching workloads from the primary VM to its replica on the secondary host. Hyper-V Replica Failover Types - Test failovers - Planned failovers - Unplanned failovers Each failover type is intended to meet specific needs. Type 1: Test failovers A test failover is used to validate replica VMs and test a disaster recovery plan. It should be carried out regularly. With test failovers, neither the running primary VM nor the ongoing replication process is impacted, so production workloads are not interrupted. A test VM is created to be examined in an isolated environment, including an isolated network. Once the IT admin stops the test failover for a replica VM, the created test VM is cleaned up. Test failovers use the internal VM Export/Import feature of Hyper-V to create a new VM copy and then rename this VM. The test Hyper-V failover includes the following operations: - A VM replica including the VHDX, XML and other files is exported to a temporary location. - The XML file of the exported VM is modified to use a unique GUID. - The host registers the newly created VM with Hyper-V via the VMMS.exe (Virtual Machine Management Service) process. - The VM is renamed. - The VM is imported to the same Hyper-V host. The test VM remains in the powered-off state after the test failover is initiated, and you need to start the test VM manually.
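The payoff of tracking changes in the .HRL log is easiest to see with a little arithmetic: after the initial full copy, only blocks written since the last cycle cross the wire. The Python sketch below simulates this at a very coarse level; the disk size, block size, and write pattern are made-up illustrative numbers, not anything measured from Hyper-V.

# Toy model of incremental replication: only dirty blocks are transferred.
# Block size and write pattern are arbitrary illustrative numbers.
import random

BLOCK_SIZE_MB = 4
DISK_SIZE_GB = 200
TOTAL_BLOCKS = DISK_SIZE_GB * 1024 // BLOCK_SIZE_MB

random.seed(1)

def one_cycle(writes_per_cycle):
    """Record which blocks were written since the last replication cycle."""
    dirty = {random.randrange(TOTAL_BLOCKS) for _ in range(writes_per_cycle)}
    transfer_mb = len(dirty) * BLOCK_SIZE_MB
    return len(dirty), transfer_mb

full_copy_mb = TOTAL_BLOCKS * BLOCK_SIZE_MB
dirty_blocks, delta_mb = one_cycle(writes_per_cycle=3_000)

print(f"initial replication: {full_copy_mb / 1024:.0f} GB (full disk)")
print(f"next cycle: {dirty_blocks} dirty blocks -> {delta_mb / 1024:.1f} GB "
      f"({100 * delta_mb / full_copy_mb:.1f}% of a full copy)")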
Type 2: Planned failovers Planned failovers are used to maintain service availability ahead of a predictable disruption such as an approaching hurricane or a planned power outage, or to smoothly fail over from primary VMs to replicas during maintenance or datacenter migrations. Another possible reason to use a planned failover is related to compliance requirements. During a planned failover, the primary VM is shut down, and the replica VM is forced to boot on the secondary host. The traffic is directed towards the secondary host, and VM workloads are moved to that host. There is no data loss when you use planned failover. Planned failover has a zero RPO, and the RTO is only the time needed to replicate the final changes and then boot the VM. A planned Hyper-V failover consists of the following actions: - A system administrator or user initiates failover. - The VMMS.exe Hyper-V process is notified about this action. - VMMS.exe requests the Hyper-V VSS Writer to create a snapshot of the primary VM. - The VSS Writer creates a standard Hyper-V replica VM. - The Hyper-V Replica server is notified about this event. - The standard replica VM is copied to the Hyper-V replica server via the network. - The replica server registers the received VM replica and starts this VM replica. Type 3: Unplanned failovers An unplanned failover is started on the secondary server or site when an unexpected disaster brings down VMs on your primary server or site (power loss, hardware failure, ransomware attack, etc.). This Hyper-V replica failover type is also used to fail over a single failed VM to a secondary host. As in the case of a planned failover, the RTO is the time it takes to boot the VMs. However, when it comes to the RPO, the data written since the last replication is lost. The maximum RPO is the configured replication interval, ranging from 30 seconds to 15 minutes. After switching to a Hyper-V replica using failover, you have the option of running a failback operation when the primary server is back online. Failback starts reverse replication to copy the latest data from the replica server to the original server and move workloads back to the original server. Alternative Hyper-V Replication Solutions NAKIVO Backup & Replication is a universal data protection solution that supports Hyper-V, VMware vSphere, and Nutanix AHV virtual environments, as well as Amazon EC2 and Linux/Windows physical machines. You can use the NAKIVO solution to manage replication of Hyper-V VMs, automatic failover, and disaster recovery orchestration using Site Recovery. Advanced features allow you to improve replication speed, reduce replication time, and automate data protection operations. Download the white paper about disaster recovery and automation by using VM replicas. Disaster recovery using VM replication and failover is affordable compared to clustering and allows you to recover VMs faster than from backup, but it doesn't provide the zero-downtime high availability of a Hyper-V failover cluster.
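To summarize, the three failover types differ mainly in what triggers them and in how much data they can cost you. The Python sketch below encodes that comparison; the replication interval and boot time are placeholders you would replace with values from your own environment.

# Summary of Hyper-V Replica failover types as a small lookup table, based on the
# behavior described above. The boot time is a hypothetical placeholder.

REPLICATION_INTERVAL_MIN = 5   # configured replication frequency
TYPICAL_BOOT_MIN = 4           # placeholder: measure this in your environment

FAILOVER_TYPES = {
    "test":      {"trigger": "scheduled DR drill",      "worst_case_rpo_min": 0,
                  "rto_min": None,  # production keeps running; nothing to recover
                  "note": "runs an isolated copy of the replica"},
    "planned":   {"trigger": "maintenance / migration", "worst_case_rpo_min": 0,
                  "rto_min": TYPICAL_BOOT_MIN,
                  "note": "primary VM is shut down and final changes replicated first"},
    "unplanned": {"trigger": "primary site failure",    "worst_case_rpo_min": REPLICATION_INTERVAL_MIN,
                  "rto_min": TYPICAL_BOOT_MIN,
                  "note": "changes since the last replication cycle are lost"},
}

for name, info in FAILOVER_TYPES.items():
    rto = "n/a" if info["rto_min"] is None else f"{info['rto_min']} min"
    print(f"{name:9s} RPO {info['worst_case_rpo_min']} min, RTO {rto}: {info['note']}")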
<urn:uuid:af025b3e-0633-4e5f-981a-3dec1face070>
CC-MAIN-2022-40
https://www.nakivo.com/blog/hyper-v-replication-types/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00465.warc.gz
en
0.897088
2,155
2.546875
3
Optical fiber transceivers are also called fiber optic transmitters and receivers, and they are used to send and receive optical information in a variety of different applications. The role of the optical module is photoelectric conversion. These optical modules are scalable and flexible in their use, which is why they are preferred by designers. Here is what you need to know about the basics of fiber optic transceivers. Fiber Optic Transmitters and Receivers A fiber optic transmission system consists of a transmitter on one end of a fiber and a receiver on the other end. The transmitter converts the electrical signal into light; after the light travels through the fiber cable plant, the receiver converts the light signal back into an electrical signal. Both the receiver and the transmitter ends have their own circuitry and can handle transmissions in both directions. Fiber optic cables can both send and receive information. The cables can be made of different fibers, and the information can be transmitted at different times. The following picture shows a fiber optic datalink. Sources of Fiber Optic Transceiver There are four types of fiber transmitters used to convert electrical signals into optical signals. These sources of fiber optic transmitters include distributed feedback (DFB) lasers, Fabry-Perot (FP) lasers, LEDs, and vertical cavity surface-emitting lasers (VCSELs). They are all semiconductor chips. Take the QSFP-40G-UNIV as an example: it is an Arista QSFP-40G-UNIV compatible 40G QSFP+ transceiver. It uses DFB lasers as sources for its fiber optic transmitters, which are used in long-distance and DWDM systems. DFB lasers have the narrowest spectral width, which minimizes chromatic dispersion on the longest links. The choice of the devices is determined mainly by speed and fiber compatibility issues. As many premises systems using multi-mode fiber have exceeded bit rates of 1 Gb/s, lasers (mostly VCSELs) have replaced LEDs. Fiber optic transceivers are reliable, but they may malfunction or become outdated. If an upgrade is necessary, there are hot-swappable fiber optic transceivers. These devices make it easy to replace or repair a module without powering down the device. How Does a Fiber Optic Transceiver Work? Information is sent in the form of pulses of light in the fiber optics. The light pulses have to be converted into electrical ones in order to be utilized by an electronic device, and that conversion is exactly what fiber optic transceivers do. In a fiber optic data link, the transmitter converts an electrical signal into an optical signal, which is coupled with a connector and transmitted through a fiber optic cable. The light from the end of the cable is coupled to a receiver, where a detector converts the light back into an electrical signal. Either a light emitting diode (LED) or a laser diode is used as the light source. Optical fiber transceivers are usually packaged in industry standard packages like SFP, SFP+, XFP, X2, Xenpak, and GBIC. According to the fiber type they connect to, there are MM (multi-mode) and SM (single-mode) modules, as well as WDM modules (CWDM, DWDM). SFP modules support up to 4.25 Gbps with a connector on the optical end and a standard electrical interface on the other end. QSFP+ modules are for 40 Gigabit networks and use an LC duplex connection. Take the compatible Brocade 40G-QSFP-LR4 as an example: it supports link lengths of up to 10 km on single-mode fiber cable at a wavelength of 1310 nm. Keep in mind that a fiber optic transceiver has two ends.
One end has an optical connector for the fiber cable, and the other has an electrical interface for connecting to the host device. Each aspect of the transceiver is necessary to properly deliver a signal to its destination, so be aware of all aspects of fiber optic transceivers to purchase what you need for your application. Fiberstore supplies a wide variety of 40GBASE QSFP+ transceiver modules for you to choose from. For more details, please contact us directly.
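When matching a transceiver to a link, a common back-of-the-envelope check is the optical power budget: transmitter launch power minus receiver sensitivity must exceed the total loss from fiber attenuation, connectors, splices, and a safety margin. The Python sketch below shows the arithmetic with ballpark illustrative figures; they are not datasheet values for any specific module, so always confirm against the vendor's datasheet.

# Rough optical power budget check for a point-to-point fiber link.
# All dBm/dB figures below are illustrative ballpark numbers, not datasheet values.

def link_budget_ok(tx_power_dbm, rx_sensitivity_dbm,
                   fiber_km, attenuation_db_per_km,
                   n_connectors=2, connector_loss_db=0.5,
                   n_splices=0, splice_loss_db=0.1,
                   safety_margin_db=3.0):
    budget = tx_power_dbm - rx_sensitivity_dbm
    loss = (fiber_km * attenuation_db_per_km
            + n_connectors * connector_loss_db
            + n_splices * splice_loss_db
            + safety_margin_db)
    return budget, loss, budget >= loss

# Example: a 10 km single-mode link at 1310 nm (typical attenuation around 0.35 dB/km).
budget, loss, ok = link_budget_ok(tx_power_dbm=-1.0, rx_sensitivity_dbm=-11.0,
                                  fiber_km=10, attenuation_db_per_km=0.35)
print(f"budget {budget:.1f} dB, estimated loss {loss:.1f} dB -> {'OK' if ok else 'insufficient'}")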
<urn:uuid:b099ad0f-9cd1-405a-9ef3-dc1dd96010b5>
CC-MAIN-2022-40
https://www.chinacablesbuy.com/tag/xfp
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00465.warc.gz
en
0.918202
862
3.8125
4
Centralized vs. Decentralized vs. Distributed Networks (the History & Future) Innovation Driven By Cold War Anxiety The original model for the internet was based on a centralized network for maximum efficiency and control. But once the Cold War began, the Department of Defense (DOD) got nervous about what a centralized network meant for cybersecurity. Network engineer Paul Baran created the concept of a decentralized network in a proposal for the US Air Force. The idea was that if the equipment, devices, and switches were distributed around the network, a Russian cyberattack would not take down the entire network. Traffic could be delivered through alternate paths, and communications would not be crippled. Despite this early realization that decentralized networks could offer better redundancy, security, and improved network availability, the broader public felt that they specifically were unlikely to be at the receiving end of a Russian cyberattack. Maintaining the status quo seemed worth the risk when weighed against the inconvenience of changing to a new network model. Still, the more familiar centralized model remains the most widely used network architecture on the internet today. However, current trends and risk assessments, along with an era of indiscriminate and monetized cyber-attacks, are tipping the network popularity contest towards decentralized and distributed models. This post examines why centralized networks have had a decades-long appeal to organizations and what factors and benefits are gaining momentum for decentralized and distributed network transitions. Let's first consider the architecture of a centralized network and its advantages. A centralized network is characterized by one centralized server or node receiving data requests from peripheral nodes and follows the traditional hub-and-spoke architecture. The central node or server node serves the peripheral nodes, sometimes called the client nodes. All transactions pass through a primary server. This server is the central point of connection between the peripheral nodes. There are many reasons that organizations continue to use centralized networks. This type of network is not complex. It is easy to control and manage. This simplicity comes with several advantages. This structure allows for the greatest control. It is more cost-efficient to centralize IT structures. Centralized networks make reporting, securing, monitoring, and management simple. With less complexity, it is easier to act in unison when making changes or updates. It can scale to an extent, but a single lead server's capacity can only go so far. This network model will work for small to mid-size organizations as long as they are not experiencing rapid workforce growth or using bandwidth-intensive applications. Cybersecurity was not top of mind for organizations until recent years, when it became apparent that hackers will target any organization of any size if a known vulnerability can be found on their network. This has realigned priorities for many enterprises, which have to acknowledge that centralized networks are easier to hack and harder to recover when compromised. Scalability is also a concern. Centralized network models do not scale well. Centralized networks struggle to aggregate and manage increasing volumes of data. Every request must go through a single central server. As the volume of requests increases, bottlenecks build up. With an exponential data increase in just the last few years, organizations are finding they have outgrown this model.
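The bottleneck argument can be made concrete with basic queueing arithmetic: when every request funnels through one central server, that server's utilization grows with total load, and response time blows up as utilization approaches 100%. The request rates and service capacity in the Python sketch below are made-up illustrative numbers.

# Toy M/M/1-style illustration of why a single central server becomes a bottleneck.
# Arrival rates and service capacity are arbitrary illustrative numbers.

SERVICE_RATE = 1_000.0  # requests/second that one server can handle

def central_vs_split(total_arrival_rate, n_servers):
    """Compare utilization of one central server vs. load spread over n servers."""
    central_util = total_arrival_rate / SERVICE_RATE
    split_util = total_arrival_rate / (n_servers * SERVICE_RATE)
    return central_util, split_util

def mm1_avg_wait(utilization):
    """Average M/M/1 response time in units of service time; unbounded once saturated."""
    return float("inf") if utilization >= 1.0 else 1.0 / (1.0 - utilization)

for load in (400, 800, 950, 1_200):
    central, split = central_vs_split(load, n_servers=4)
    print(f"{load:>5} req/s: central util {central:5.0%} "
          f"(avg response {mm1_avg_wait(central):.1f}x service time), "
          f"4-way split util {split:5.0%}")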
No Middle Ground Additionally, when a failure does happen, it's a complete outage. When the lead server fails, it brings down every connection in that network. This level of dependence on network connectivity is a significant business continuity risk. Maintenance of the server is also a headache. Any maintenance window is guaranteed downtime for the company, which often results in delayed patches and updates, leaving networks vulnerable to attack and making teams more willing to run end-of-life hardware rather than deal with the pain of replacing it. At the Mercy of One Provider Finally, a centralized network is more likely to rely on a single vendor and end up in a vendor lock-in situation, which leaves the organization helpless against rate hikes, SLA failures, and other service issues. A Decentralized Network, also called a “Data Mesh” architecture, distributes the control and switching equipment throughout the network to various peer connection points. Each chosen connection point has a separate server that manages the data and information storage for that cluster of nodes. Decentralized networks have greater resilience. Any connection point can act as a backup server for another, creating fail-safes and redundancy throughout the organization. Each connection point is responsible for its own information processing. If one node is hacked, it does not bring down the entire system because only that connection point is compromised. Agility is also a hallmark of decentralized networks. Because the architecture is not concentrated, workloads and shared resources can be distributed over multiple computer nodes instead of just one. Instead of traveling to a central node to complete a task, data can be sent locally, creating faster accessibility. Other benefits of a decentralized network include faster mean time to repair (MTTR). With localized troubleshooting, there is less network to wade through before finding the root cause of an issue. Decisions can be reached sooner on the procurement and approval side because contract revisions and change considerations are more limited in scope. This opens the door for greater customization of processes and applications that make sense for each location. Organizations can take advantage of new technology more quickly and test its efficacy through individual nodes instead of waiting in a backlogged queue of approvals and hoping for the best. This model is not bound by the capacity constraints of a single server and is ideal for large enterprises and mid-size enterprises on a growth track. The lack of vertical top-down visibility throughout the network can make scaling up or global changes challenging to pull off in unison. It can also be hard to gain oversight and the analytics needed to meet compliance requirements and measure performance. Distributed networks can refer to both location and architecture. Geographically distributed networks are what you see in digital-first companies, with employees who work remotely across the entire organization. There is no question that geographically distributed networks are increasing. According to a recent study cited by Forbes, 25% of all professional jobs in North America will be remote by the end of 2022, and remote opportunities will continue to increase through 2023. Architecturally distributed networks have a slightly different meaning. These structures distribute control equally across each node. This is different from decentralized networks with clusters of varying connection points throughout the network. Each node is independent and can act as a backup for the others.
If a part of the network goes down, traffic can be redirected through different routes. Like decentralized networks, distributed networks offer more security, stability, and scalability than centralized networks. Still, unlike decentralized networks, which use clusters of smaller “centralized” servers, control is distributed evenly across every node. There is no central router to manage IP addresses, so making any changes in unison can be challenging. It is more expensive and requires more equipment to have distributed power and control at each node. Centralized visibility is also an issue, making audits and compliance reporting a nightmare. Centralized networks have dominated the internet since its inception, but there is no question that the future belongs to decentralized and distributed infrastructures. The modern enterprise cannot afford the risk of non-redundant network structures and the security vulnerability of having a single point of failure. The future network can manage the influx of data without congestion and stay up when a network issue occurs. The secret to orchestrating these moving and multiplying pieces is enhanced network visibility. LiveAction offers the broadest network monitoring and management telemetry in the industry. Whether your network is centralized, decentralized, or distributed, LiveAction can provide enhanced visualization for your network infrastructure from WAN edge to core to cloud. Through LiveNX, LiveWire, and ThreatEyeNV, LiveAction delivers a complete suite of tools that assure your network availability, security, and speed optimization. Request a demo with one of our network experts today.
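A simple way to compare the three topologies is to ask how much of the network can still communicate after a single node dies. The Python sketch below builds toy versions of each topology and measures the largest connected group after removing one node; it illustrates the resilience argument and is not a model of any real deployment.

# Toy resilience comparison: fraction of nodes still connected after one failure.
# The topologies are deliberately simplified illustrations.
from collections import defaultdict

def star(n):            # centralized: node 0 is the hub
    return {(0, i) for i in range(1, n)}

def clustered(n, k):    # decentralized: k regional hubs connected in a ring, leaves attached to hubs
    edges = {(i, (i + 1) % k) for i in range(k)}
    for node in range(k, n):
        edges.add((node % k, node))
    return edges

def mesh_ring(n):       # distributed: each node links to its two nearest neighbors in each direction
    return {(i, (i + d) % n) for i in range(n) for d in (1, 2)}

def largest_component_after_failure(n, edges, failed):
    graph = defaultdict(set)
    for a, b in edges:
        if failed not in (a, b):
            graph[a].add(b)
            graph[b].add(a)
    seen, best = set(), 0
    for start in (v for v in range(n) if v != failed):
        if start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            comp += 1
            for w in graph[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best / (n - 1)

N = 40
for name, edges in (("centralized (star)", star(N)),
                    ("decentralized (4 clusters)", clustered(N, 4)),
                    ("distributed (mesh ring)", mesh_ring(N))):
    frac = largest_component_after_failure(N, edges, failed=0)
    print(f"{name:28s} lose node 0 -> {frac:.0%} of remaining nodes still connected")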
<urn:uuid:9bea6079-3035-47c3-aacc-e2e48c52fc3f>
CC-MAIN-2022-40
https://www.liveaction.com/resources/blog/centralized-vs-decentralized-vs-distributed-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00465.warc.gz
en
0.93492
1,668
2.625
3
Hi folks! Herewith, more tales from the Antarctic side… In this installment I'll be telling you about the third most-important inhabitant of Antarctica – whales. Whales are third-in-line in the Antarctic pecking order after penguins (second-in-line) and Antarctic Krill (top dogs – er, crustaceans). What the Krill? King Krill? Never heard of them, right? Well this lesser-known species is Antarctic city hall since it's first in line in the food chain down here. It's because of the abundance of this crustacean (I'm talking probably megatons thereof in polar seas) that both whales and penguins are able to get more than their fill of animal fat. However, Krill live underwater all the time so you never get to see any – and that went for us too, so I'll not be telling you about them. I've already told you about penguins here. So next up – whales; specifically – humpback whales, which were the ones we saw… The #Antarctica whale story. We had LOTS of whales around (mostly humpback whales). The captain, who has sailed these seas for a dozen years, can't remember so many whales around. It was really oveWHALing! Whales logging, feeding, breeding, any sort of whales. But I never complained :) | And now, about the whales of #Antarctica. It was a real whale soup! Here and there, sleeping and feeding, leaping and slapping their tails, far off and just a meter from the boat. Our captain admitted that in his long polar career he'd never seen anything like it. It was awesome! #ekinantarctica Here in March there are oodles of whales. And, just like penguins, they too are curious beasts; they'd come real close to our dinghy – a little unnerving, it has to be said… Not only are they curious mammals, they're also… curious mammals (curious to us, that is): they always go around in pairs or in threes (so, like, who's the third? Talk about gooseberry?!). Here's a threesome which, after noticing us, sharply turned toward us and swam right at us! Others came even nearer, did a few laps around us, and showed us their faces as they looked us over (alas – didn't manage a snap of that; it didn't last long). They'd also encircle the Vavilov; size doesn't intimidate them :-)… There were so many whales here they were uncountable: a veritable 'whale soup'! The above pic with all the visible whales ringed was taken at siesta time. Many of them are having a nap up by the surface. Our guides told us how this year they'd never seen this many – ever, by far. But still, every March always sees the whales at their most plentiful: it's the peak feeding period. After the springtime feast the whales head off to a secluded part of the seas for… a bit of hanky-panky; shortly after, logically, it's birthing time; and only the following March do they return for another krill binge :). Pods of whales, rafts of penguins, all against the unique Antarctic backdrop. Sublime. "Why nothing about me?"! We saw plenty of these cuties too, but their quantities were nothing compared with whales and penguins. And since I don't want to turn this here blog of mine into a biology class, next up – not about Antarctica's wildlife, but other – non-animal – features of this unique continent.
<urn:uuid:6dde46b4-9b01-4893-8f25-aef59a2ae5ce>
CC-MAIN-2022-40
https://eugene.kaspersky.com/2017/04/12/humpback-whales-having-a-whale-of-a-time/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00465.warc.gz
en
0.890239
1,096
2.734375
3