Updated: Oct 22, 2021

Linux is commonly the preferred operating system of Ethical Hackers and 'Infosec' professionals, due to the following factors.

The open source system enables access to its entire code. A user can manipulate how each component of the operating system works, enabling very granular levels of control and infinite customisation.

Linux can be used without any licensing cost to the individual, so it is very easy to get hold of a Linux distribution and start using it right away.

Bash, C and Python codebase

Using the shell on a Linux distribution makes it very easy to perform repetitive tasks and to handle complex automation relatively easily. The powerful Python language means that creating code that is highly functional and easily portable is more easily achieved than on a proprietary operating system (a short Python illustration follows at the end of this post).

The majority of specialist tools required to perform Ethical Hacking and other security-related functions are created for Linux. This is due to the aforementioned reasons, and it removes the restrictions placed on a developer by a proprietary operating system.

Which Linux is best for ethical hacking?

No particular Linux distribution ('distro') can be considered 'the best for Ethical Hacking', due to the diverse nature of distributions and personal preference. Different flavours of Linux can be used in different scenarios. Some are quite 'heavy', meaning their default toolset and the resources used to perform tasks make them better suited to powerful machines and everyday use. Others are designed to be lightweight in their use of resources, have a restricted toolset, and are used in specific environments (for example, a penetration tester on a 'Red Team' engagement may use a light version of Linux on a 'drop box' placed within a target organisation).

Why do hackers use Kali Linux?

A common Linux distribution used by Ethical Hackers and 'Infosec' professionals is Kali Linux, created by Offensive Security, a highly regarded organisation that provides training and industry-recognised qualifications in Penetration Testing and Ethical Hacking. Kali is a very popular Debian-based distro because it is custom-designed for penetration testing and ethical hacking, and because of the number of commonly used tools that come pre-installed (over 600).

Kali is frequently updated and has multiple versions designed to run on different platforms, such as an ARM variant that runs on Raspberry Pi devices, a variant that runs on Android devices (NetHunter) and versions for most bare-metal installations. Additionally, pre-built virtual images for most hypervisors are available and are a very popular choice for Ethical Hackers due to their portability and isolation from the main operating system. New tools that have been tested and tweaked to run smoothly on the distro are added regularly, and old, unsupported tools are removed from new releases. There are additional tweaks that can help ethical hackers, such as the ability to change the look and feel of the distro, with one quick command, to one that at first glance resembles a Windows environment; this is very useful for covert red team engagements in scenarios where you may be seen.

If you like this blog post, find more content in our Glossary.
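As promised above, here is a small illustration of the automation point: a few lines of standard-library Python, portable across distributions, are enough to check a handful of common ports on a host you are authorised to test. The address and port list are placeholders, not a real target.

```python
#!/usr/bin/env python3
"""Illustration only: sweep a few common ports on a host you are authorised
to test. The address and port list are placeholders, not a real target."""
import socket

TARGET = "192.0.2.10"                 # placeholder address (TEST-NET-1)
COMMON_PORTS = [21, 22, 80, 443, 3389]

for port in COMMON_PORTS:
    try:
        # create_connection performs a full TCP handshake, then we close it
        with socket.create_connection((TARGET, port), timeout=1):
            print(f"{TARGET}:{port} open")
    except OSError:                   # refused, timed out or unreachable
        print(f"{TARGET}:{port} closed or filtered")
```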
As it's most popularly understood, malware protection relies upon signature detection. However, detecting malicious software is only one side of the antivirus coin. In fact, some would say detection based on signatures, essentially a form of denylist, is the less significant side of that coin. On the other side there is the allowlist technique: the pre-approval of harmless software, as opposed to the blocking of harmful software.

What is a denylist?

Allow me to explain this through the prism of specific Kaspersky technology: the Kaspersky Security Network (KSN). When users install certain Kaspersky Lab products, they are offered the opportunity to willingly join the KSN. Should they decide to opt in, they become part of a distributed infrastructure dedicated to processing cybersecurity-related information. If an opted-in user in India becomes infected with a new type of malware, Kaspersky Lab creates a signature to detect that malware, then adds that signature to its database so that no other Kaspersky user will become infected with it. Simply put, this is how denylists work: we make lists of things that are hurtful, and we keep those things off of your computer.

A denylist that is 99.9 percent effective works great when only 10,000 new families of malicious software emerge each year; it is not quite good enough when 10,000,000 new families emerge each year.

What is an allowlist?

Again, I'll use Kaspersky technology and industry terminology to explain how allowlists work. In this case, we are talking about a process called "Default Deny." Under this principle, a security product blocks all applications and software by default unless they are explicitly allowed. Thus, you have an allowlist of pre-approved applications.

Problematically, this sort of default-deny allowlist is primarily used in corporate environments, where a central authority can exhibit more control over what users need. It's relatively easy to say that certain apps are needed for work and all others can be ignored. Furthermore, in a business environment, the list of approved apps is likely to be fairly static over time. On the consumer level, there are some obvious pitfalls; it's hard to know exactly what a consumer will need or want at any given time.

Default Deny Via Trusted Applications

Of course, our researcher friends here at Kaspersky Lab managed to come up with a way to apply the principles of default deny to the consumer crowd, with a technology called "Trusted Applications." In essence, Trusted Applications is a dynamically updated allowlist of applications based on a set of trust criteria tested against various data points acquired from the KSN.

In other words, our consumer-ready, dynamic allowlist is an extensive and constantly updated knowledge base of existing applications. The database contains information on about one billion unique files, covering the overwhelming majority of popular applications, such as office packages, browsers, image viewers and nearly everything else you or I could imagine. Utilizing the input of nearly 450 partners, predominantly organizations that develop software, the database minimizes the occurrence of false positives by knowing about the contents of vendor-implemented updates before they happen.

The Trust Chain

What about the apps we don't know about?
Certain apps and processes spawn new apps, and it would be impossible for our allowlist to have a working knowledge of all of these programs. For example, in order to download an update, a program may have to launch a specialized module, which will connect to the software vendor's server and download a new version of the program. In effect, the update module is a new application created by the original program, and there may be no data on it in the database. However, since this application was created and launched by a trusted program, it is regarded as trusted. This mechanism is called the "Trust Chain."

Similarly, if a new update is downloaded automatically and it is different from the old app in ways the allowlist does not recognize, it can be approved by secondary means, such as verifying its digital signature or certificate. A third failsafe method kicks in if an app unexpectedly changes and is also unsigned: in this case, the trust chain can run the download domain against a list of trusted domains, which generally belong to well-known software vendors. If a domain is trusted, so too is the new app. If a domain is used to distribute malware at any time, it is removed from the trust chain. (A toy model of this decision flow appears at the end of this post.)

Last But Not Least

As you well know, attackers are hip to nearly everything we do on the protection end. In part because of this, they often like to find vulnerabilities in popular programs and exploit them in order to circumvent the very protections described above by having malicious acts originate from trusted programs. To combat that, our researchers have developed a system known as the "Security Corridor." The Security Corridor supplements our dynamic allowlist by making sure that approved software and applications perform only the actions that they are supposed to perform.

"For instance, a browser's working logic is to display webpages and download files," explained Andrey Ladikov of Kaspersky Lab's allowlists and cloud infrastructure research team. "Actions such as changing system files or disk sectors are inherently alien to the browser. A text editor is designed to open and save text documents on a disk, but not to save new applications onto the disk and launch them."

In this way, if your favorite paint application starts using your microphone, the application will be flagged.

Whose Computers are Fortified?

Our researcher friends here have written not one, not two but three more technical articles on the science of allowlists. Follow those links if you're interested in digging deeper.
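Here is that toy model of Default Deny plus the Trust Chain. It is a deliberate simplification for illustration only; the App structure, hashes and domain below are invented and this is not Kaspersky's actual logic.

```python
"""Toy model of 'Default Deny' plus the Trust Chain.
A deliberate simplification for illustration; the App structure, hashes
and domain are invented, not Kaspersky's actual implementation."""
from dataclasses import dataclass
from typing import Optional

ALLOWLIST = {"sha256:aaa"}                      # known-good file hashes
TRUSTED_DOMAINS = {"downloads.vendor.example"}  # hypothetical vendor domain

@dataclass
class App:
    file_hash: str
    parent: Optional["App"] = None      # the program that spawned this one
    signed: bool = False                # has a valid digital signature
    download_domain: str = ""           # where the file came from

def is_trusted(app: App) -> bool:
    if app.file_hash in ALLOWLIST:      # known application in the database
        return True
    if app.parent and is_trusted(app.parent):
        return True                     # the Trust Chain: trusted parent
    if app.signed:                      # secondary check: signature
        return True
    if app.download_domain in TRUSTED_DOMAINS:
        return True                     # third failsafe: trusted domain
    return False                        # otherwise: default deny

# An unknown update module spawned by an allowlisted program is trusted:
updater = App(file_hash="sha256:unknown", parent=App(file_hash="sha256:aaa"))
print(is_trusted(updater))  # True
```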
Document Version Control

Best practices for efficient file version management

What is document version control?

Document version control, sometimes called revision control or source control, refers to the management of changes to documents and other structured and unstructured information. With document version control software, multiple users can work on the same document, eliminating the creation of multiple and differing versions. Document version control allows a user to "check out" and lock a document while it is being worked on, so only one person is editing the document at a time. Once the item is "checked in," it can be edited by someone new. (A minimal sketch of this cycle appears at the end of this page.)

The importance of document version control

Here's why growing businesses make the shift to document management and version control systems. Over time, edits on a single document by many users will start to look confusing. Implementing a document version control system will make it simple for various users to access multiple earlier versions of files and projects, even after sudden data losses or computer failure. Accountability and visibility are also improved during collaboration, and organisations can track which team members have edited specific documents.

Audit trails for regulatory compliance

Organisations with governance, risk and compliance requirements can use document version control software to verify and track data with thorough audit trails and log history. This is done by accurately monitoring the entire document lifecycle from creation to final disposition, ensuring that they meet fast-changing regulatory compliance standards.

Permission controls to regulate confidential information

When it comes to who can access and modify your organisation's information, all your files should have clear, regulated controls. With the right document version control software, permissions can be tailored according to document types, and accesses can be granted or revoked at any point.

Examples of good document version control

Each document version control strategy is tailored to meet certain business goals, from achieving document standardisation to increasing accountability or streamlining collaboration. Good document version control happens when:

Administrators can easily share files with controlled permissions

In cases that require collaboration between many users, not implementing proper document version control is risky. Employees could be forced to use external programs and other file editors to view, annotate and share, which makes tracking versions and user edits a difficult task. Deploying a centralised platform allows administrators to control the permissions that determine who is allowed to view and edit files, while also 'locking' and protecting them from unauthorised modifications.

Work is retrievable even after software crashes and manual errors

Employees with specific permissions should be able to track all previous versions of a document and revert to them if need be. In the event of computer malfunctions or information losses, data is still preserved without having to slow down work processes by manually re-entering data from memory.

Everyone has access to one controlled document when collaborating

Depending on multiple editing applications leads to duplicate versions of the same document. Using a document version control platform ensures that users do not confuse 'old' files for their updated versions and work on the wrong documents.
Controlled documentation, or the practice of using one master document to track changes, keeps all collaborators on the same page and does away with documents of the same nature and content taking up unnecessary storage in your system.

Guidelines for greater document version control efficiency

Good document version control strategies include:
- Deploying systematic and understandable file naming standards to enable faster document searches
- Using workflows to notify relevant users of pending actions
- Opting for a cloud-based document control solutions provider to protect and back up all data
- Locking finalised documents or setting "read-only" accesses in place to avoid unauthorised modifications
- Allowing for documents to be accessible to external users (customers, patients, students, etc.)

How the OnBase document version control software helps businesses

OnBase gives organisations the ability to manage the creation, revision and distribution of critical business documents. It centralises the management of changing business documents and provides offline synchronisation for remote users, ensuring that everyone accesses the most up-to-date version of the document. No more concerns that:
- Employees or customers are working from old versions
- Important notations disappear with the paper they were written on
- There are document or keyword inconsistencies
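As promised above, the check-out/check-in cycle reduces to a lock plus a version counter. The sketch below is an in-memory toy for illustration; it mirrors no particular product's API, and the document name is a placeholder.

```python
"""Minimal sketch of the check-out/check-in cycle described above.
An in-memory toy for illustration; it mirrors no particular product's API."""

class VersionedDocument:
    def __init__(self, name: str):
        self.name = name
        self.version = 1
        self.locked_by = None              # None means available for editing

    def check_out(self, user: str) -> None:
        if self.locked_by is not None:
            raise PermissionError(f"{self.name} is locked by {self.locked_by}")
        self.locked_by = user              # only one editor at a time

    def check_in(self, user: str) -> int:
        if self.locked_by != user:
            raise PermissionError(f"{user} does not hold the lock")
        self.locked_by = None              # release the lock for the next editor
        self.version += 1                  # every check-in creates a new version
        return self.version

doc = VersionedDocument("policy.docx")     # hypothetical document name
doc.check_out("alice")
# doc.check_out("bob") would raise PermissionError until Alice checks in
print(doc.check_in("alice"))               # -> 2
```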
In many ways, Beyond Identity has been decades in the making, from back when we started Netscape, the first browser through which most people accessed the Internet and the first platform through which most people entered a password. But the realization that it was now possible to effectively eliminate passwords as a method of authentication didn't come until much more recently, when we were developing a more secure way for residents to access home automation software.

When we built Netscape, we knew it was essential to enable secure transactions online. Our chief scientist, Taher Elgamal, along with Paul Kocher, developed a solution to this problem with the Secure Sockets Layer protocol, or SSL (the lock in the browser). This protocol is still in use today, in the form of TLS, for the same purpose, a testament to its design.

The same principles of asymmetric cryptography and certificates used to secure online transactions for the past 25 years are now at the heart of Beyond Identity's passwordless identity management solution. But at the time, it would not have been possible to implement them to identify individuals. This is because the protocol relies on signed certificates issued by trusted certificate authorities (CAs), which were not scalable to the magnitude necessary to issue certificates to hundreds of millions or billions of individual users. Other issues at the time included device memory limitations and the question of how individual consumers would securely store the crucial private key. Businesses used expensive hardware security modules (HSMs) for this purpose, but this was not an option for consumers. These issues would not be solved until the introduction and proliferation of secure enclaves in computing devices: the Trusted Platform Module (TPM) chips that were introduced in the past decade on a range of devices where the secure storage of biometric data was needed.

We would not revisit the password issue until long after Netscape. In the meantime, various password workarounds emerged from a few schools of thought:

- Passwords need to be more complex. Proponents of this idea attempted to bolster security by enforcing strong password rules, such as character minimums and requirements for special characters and numbers. These "high-entropy" passwords were harder for adversaries to guess and a bit more difficult for machines to crack. But the result was that nearly everyone used the same hard-to-guess password or a slight variation, so if hackers stole one password, it was likely usable across multiple sites.
- Passwords need to be stored somewhere more secure. Password databases are a virtual goldmine for hackers. To protect them, organizations realized that security measures need to be implemented to defend where the passwords are stored. Encryption and firewalls have become standard for all databases storing credentials, yet leaks still happen. Often. As long as passwords are stored in password "safes" or back-end databases, bad actors will find a way to access them.
- Passwords are not secure enough on their own. This gave rise to multi-factor authentication: various second-factor codes, often sent over insecure channels such as email, text (SMS), or apps.

None of these "solutions" fixed the root of the problem, which is that there is a shared secret stored somewhere on a server. The solution, of course, is to eliminate the password entirely. It wasn't until 2018 that we decided to finally tackle the password issue, and it was almost inadvertent.
While working on home automation technology, we needed to find a secure method for homeowners to verify their identity and then control their smart home devices. It was immediately clear that a password would not be secure enough given that their entire home was at stake. A leak could lead to a stranger controlling the customer's physical environment, affecting the lives of family members in a very immediate and frightening way. Oh, and who wants to enter a password to turn on the lights or change the thermostat? Exactly: nobody.

Verifying identity became a fundamental goal, and our engineers began working on a solution. One of our engineers, Nelson Melo, came up with the idea of using the same digital certificates and asymmetric cryptography from SSL/TLS to verify individual identities and secure logins. The idea was to essentially extend the chain of trust that existed between websites by one link, to include the user who is accessing the server. The barriers that stood in the way before, limited memory and the lack of a secure place on devices to store private keys, had both been solved. With the advent of strong biometric authentication for endpoints and an enclave to protect biometric data and private keys, we had what we needed. Finally, it seemed that we could enable users to access applications as securely as the TLS transactions that flowed between two websites, or between your browser and your favorite e-commerce site.

It was at that point that we decided this technology was bigger than the home automation purpose for which it was originally designed. This solved a much bigger problem. Seeing the potential in eliminating passwords for the masses, we formed Beyond Identity. We built an elegantly simple solution using the same fundamental technology that has been trusted to secure trillions of dollars' worth of transactions on the web for 25 years (SSL/TLS, certificates). Our foundational idea: use "self-signed certificates" to replace the insecure password. With our solution, each end user becomes their own certificate authority, eliminating passwords and extending a chain of trust all the way to the user and their endpoint device.

But we didn't stop there. As we continued to develop the technology further, we found many ways our certificate-based approach could be used to provide significant value beyond passwordless authentication. For example, with our solution, we can collect, digitally sign, and send information about the security posture of the endpoint device, at the time of the request, along with the signed certificate proving the end user's identity. This enables organizations to have unprecedented insight, and to make risk-based authentication and authorization decisions, leveraging the identity, the risk of the accessing device, and the value of the service being accessed. Unlike off-device authenticators, we produce a full risk picture at each login for better auth decisions, and an immutable record for audit/compliance and security operations center (SOC) teams.

We also developed integrations with SSO providers, starting with Okta, Ping, and ForgeRock, so that companies could rapidly turn on our cloud-native solution for their workforces and leverage existing IAM infrastructure investments. It takes only a few configuration changes in the SSO to make us a "delegate identity provider" and enable secure, passwordless authentication and adaptive risk-based authorization. The solution also supports customer-facing applications with API/SDK-based integration.
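A minimal sketch of the primitive described above, assuming the third-party Python 'cryptography' package: a per-device private key signs a login challenge together with device-posture data, and the holder of the public key verifies it. In the real design the private key never leaves the secure enclave; here it is generated in software, and all payload fields are invented placeholders.

```python
"""Sketch of challenge signing with an asymmetric device key, assuming the
third-party 'cryptography' package. In practice the private key would be
enclave/TPM-backed; here it is generated in software for illustration, and
the payload fields are placeholders."""
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # enclave/TPM-backed in practice

payload = json.dumps({
    "challenge": "server-issued-nonce",     # placeholder nonce
    "posture": {"disk_encrypted": True, "os_patched": True},
}).encode()

signature = device_key.sign(payload)        # proves possession of the key

# The verifier holds only the public key; verify() raises on any tampering.
device_key.public_key().verify(signature, payload)
print("signature and posture data verified")
```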
Finally, the issues with building Internet security around passwords can be put to bed, and we can enter a new, much-overdue era of fundamentally secure authentication.
Understanding 4G antennas is the first step toward choosing the right antenna for your device and wireless service. Although 3G and 4G antennas follow the same basic principles, there are many other factors that come into play when using a 4G antenna. Like anything else, it is important to have a basic understanding of specific concepts and details when it comes to 4G antennas, and although there are many different types, there are factors you must take into consideration before choosing one.

4G antennas follow the same basic principle as 3G antennas in terms of installation and certain functions. However, 4G connectivity is fairly new, and the specifications have yet to be precisely defined by the ITU-R (the radiocommunication sector of the International Telecommunication Union). This leaves some gray areas when determining 4G network performance and antenna availability, despite the fact that more and more 4G networks are gradually being implemented and made available through wireless carriers.

The main problem with 4G antennas is determining exactly what makes up 4G and whether or not there is a real 4G network you can use to integrate a 4G antenna. The second problem lies in determining whether or not a 4G antenna actually offers authentic 4G connectivity. As the ITU-R continues to work towards actual specifications for 4G, WiMAX and LTE are currently the closest examples of a 4G network, and it is these networks that are commonly referred to as 4G. This means that a 4G antenna is synonymous with an LTE or WiMAX antenna. So, when you are looking for a 4G antenna, you can also consider antennas from manufacturers that use the terms LTE or WiMAX to describe the product.

There are many manufacturers of 4G antennas, which creates a challenge when choosing the best one. The antenna must also be compatible with your wireless carrier's service and the frequency they use to deliver it. For example, if your wireless carrier uses WiMAX, then the standard for delivery is IEEE 802.16, the wireless broadband standard set forth by the Institute of Electrical and Electronics Engineers. On the other hand, if your wireless carrier is using LTE, the frequency used to deliver the service is 700MHz. This means the required 4G antenna must operate in sync with the type of frequency your wireless carrier uses to deliver their services.

One aspect that does remain consistent when choosing a 4G, LTE or WiMAX antenna is the type of accessories and cables used to configure a Wi-Fi and 4G antenna, as they are pretty much consistent across the board.

Many mobile users commonly ask what they should expect from a fully functional 4G network. This is a legitimate question and one that should be understood in order to know what to expect from a 4G antenna in terms of performance. Since 4G has not been completely defined with specifications in exact detail, many companies are still promoting 4G accessories such as antennas and other devices as "4G." Some of the major telecom providers such as AT&T and Verizon also promote their mobile devices and smartphones as "4G."

You will also find that the speed of 4G networks will vary according to your wireless carrier. For instance, AT&T may have a different concept of 4G performance than T-Mobile. Typically, you will find a significant difference in performance when comparing wireless carrier services side by side.
Below is an interesting comparison of 4G connectivity tests among multiple wireless carriers using the same type of mobile device.

To help you understand this aspect in terms of performance: when it comes to choosing a 4G antenna, LTE or WiMAX 4G connectivity should be able to provide you with an exceptionally fast connection, capable of handling streaming media without any interruption or hiccups in the service. This means that the speeds for data transmission over an Internet connection are extremely fast, which is what can commonly be expected from 4G connectivity. It also means that a 4G antenna must be capable of handling extremely fast data speeds, which can potentially consume a lot of energy from your mobile device. This requires you to recharge the device more often than you would using a slower Internet connection. For this reason, a high-quality 4G antenna is your best investment, since it is equipped with the technologies you need to handle faster data speeds.

This is a general background on what you can expect in terms of performance from 4G networks. If you have an idea of what type of performance to expect, this will help you to choose a 4G antenna that meets your expectations. This knowledge, combined with what to look for in a 4G antenna, should provide you with the best connection possible from your wireless carrier.

4G Antenna Authenticity and Compatibility

Many mobile users have asked questions in online forums regarding the authenticity of 4G antennas. Others expressed concern over antennas that are promoted as 4G but are actually standard 3G antennas. If you watched the video posted above, you can see how 4G connectivity can vary; the concerns that consumers have expressed in online forums regarding 4G antennas are therefore legitimate ones. If a 4G connection can be ambiguous among wireless carriers, why wouldn't there be inconsistencies with 4G antennas? What's more, how can a 4G antenna be identified when you never actually see an LTE or 4G label on some of the brands?

Like anything else, there are antenna manufacturers that promote their product as a 4G antenna when, in reality, it is simply a 3G antenna that has been repackaged and promoted as a 4G product. This means it is necessary to comparison shop and do your homework on the reputation of the company that provides the antenna.

- Read the Reviews: Invest the time to read reviews written by consumers who have actually used the 4G antenna you are considering. First-hand experiences are the best way to get a read on product quality and performance. Try to avoid review sites that pair a product review with an affiliate link where you can purchase the antenna; this can sometimes mean the review is not sincere, since affiliates receive a commission for every sale they generate.
- Ask Yourself Key Questions: As you are reading through reviews, ask yourself some key questions, such as: How technically sound is the information? Does the reviewer appear to be tech savvy? Do they mention anything that may prevent the antenna from providing maximum performance? Is the 4G wireless carrier named in the review? What type of configuration did the reviewer use? Any other information that provides clues can be helpful when reading customer reviews.
- Investigate the Company: Find out if the company manufactures their own product or works with a 4G antenna supplier.
If they are working with a supplier, try to find out who the actual manufacturer is, to ensure the product is not just a knockoff of the real deal.

Additionally, there are ways to conduct your own speed test on a 4G antenna using products like the one described in the video below. Finally, make certain the 4G antenna is compatible with the service you receive from your wireless carrier. This may require you to call them to find out what type of frequency they use to deliver their services. You may also want to ask them for their advice on 4G antennas and then compare their suggestions with some of the reviews you read online.

As you are choosing a 4G antenna, it is important to keep your purpose in mind as you shop around. In addition to compatibility with your wireless carrier and achieving the best performance, different types of 4G antennas serve different purposes. For example, a booster antenna is designed for interior use and is used to boost a 4G signal while you are traveling, staying in a hotel room, or simply driving your car. A magnetic-mount antenna can be used to improve a 4G signal with easy attachment to your vehicle or RV; this type of antenna is designed to be used outdoors and can withstand the elements. A full-band antenna is designed for users who lack a direct line of sight to the nearest cell tower and do not want to worry about positioning an antenna; it is typically designed for outdoor use and covers a wide variety of frequencies.

These are a few examples of different types of 4G antennas. As you browse the market, you will find others that serve a wide variety of purposes and environments. However, one thing remains consistent: all 4G antennas should provide a way to boost a 4G signal and, in most cases, help you to reach 4G connections that would otherwise be unavailable without an antenna.
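The "match the antenna to your carrier's frequency" advice above boils down to a simple range check. The figures below are invented for illustration and are not any real product's specification.

```python
"""Illustrative range check: does an antenna's rated frequency coverage
include the band your carrier transmits on? The ranges below are invented
examples, not any real product's specification."""

ANTENNA_RANGES_MHZ = [(698, 960), (1710, 2700)]   # hypothetical full-band antenna

def covers(freq_mhz: float) -> bool:
    # The antenna suits the service if the carrier's band falls in any range
    return any(lo <= freq_mhz <= hi for lo, hi in ANTENNA_RANGES_MHZ)

print(covers(700))    # True  -> fine for a 700MHz LTE carrier
print(covers(3500))   # False -> this antenna would not suit a 3.5GHz band
```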
A product's quality is amongst the main factors that drive its commercial success. Consumers always prefer products with superior quality, which makes quality management a top priority for product manufacturers, who are striving to optimize their production processes in order to minimize the issues and inefficiencies that lead to defects. In this context, the term Zero Defect Manufacturing (ZDM) has been introduced as a popular and highly regarded concept in quality management. ZDM's popularity is reflected in the fact that the standard Six Sigma methodology for quality management embraces it as one of its core concepts.

The literal meaning of ZDM is the complete elimination of defects. However, having zero defects in a realistic manufacturing context is practically impossible. Even in the scope of the Six Sigma standard, the notion of "zero defects" is defined as 3.4 defects per million opportunities (DPMO); a quick calculation appears at the end of this article.

When applied in real-life projects, ZDM is driven by four main principles. First, ZDM is defined against requirements in a specific production context and at a given point in time. This specification of time helps predict probable future defects and helps in avoiding them. Second, it must be integrated in the production process right from the beginning, rather than being an outcome of solving problems at a later stage; that is, ZDM is about a proactive approach to quality. Third, ZDM is driven by monetary benefits and financial targets, which means that the elimination of defects must be reflected in reduction of waste, productivity improvements and ultimately in revenues. Last, performance should be continually audited and improved against standardized benchmarks.

The actual implementation of a ZDM discipline in production hinges on many different quality control applications, which can be combined towards delivering production quality of the highest standards. As a prominent example, a production line can be continually monitored for production errors or patterns that indicate the possibility of defects. Also, the maintenance processes of machines and equipment are an important element of quality, since poorly maintained equipment can lead to malfunctions and quality problems (e.g., wear). Likewise, industrial automation applications can greatly boost quality, based on production systems (e.g., robots, smart machines) that eliminate human errors and deliver exceptional production quality. Finally, supply chain processes are extremely important for production quality, as these are in charge of the procurement of high-quality parts that meet production specifications and are also responsible for the distribution and delivery of the finished products.

Nowadays, based on one or more of the above applications and their combination, manufacturers are purchasing systems and devices that are destined to deliver excellent quality in their production lines. Despite significant investments in ZDM processes and applications, manufacturers are still challenged when it comes to deploying ZDM, due to a number of different factors.

The advent of digital manufacturing as part of Industry 4.0 (I4.0) is currently revolutionizing production processes, as it enables their control from an IT-based digital layer, rather than from the field. Quality management and ZDM are no exception to this rule and will directly benefit from the flexibility and deployment efficiency of IT processes, which are gradually replacing legacy OT (Operational Technology) based processes.
In particular, the use of leading-edge digital technologies as part of I4.0 will alleviate these barriers to ZDM adoption. During the past two years, we have witnessed the rise of the first digital systems for ZDM, which leverage one or more of these technologies. Nevertheless, there is still much to be done prior to reaching the "cognitive manufacturing" vision, especially given that some of the technologies (e.g., blockchains, 5G) are still in their infancy. The implementation of this vision will not only improve the competitiveness of manufacturers, but will also enable consumers to redefine the quality of end products, through the addition of entirely new dimensions (e.g., usability, ergonomics, environmental friendliness, product longevity).
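As promised earlier, here is the arithmetic behind the 3.4 DPMO "zero defects" threshold. The production figures below are invented for illustration.

```python
"""The arithmetic behind the 3.4 DPMO 'zero defects' threshold quoted
earlier. The production figures below are invented for illustration."""

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    # Defects Per Million Opportunities
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 17 defects across 10,000 units with 500 defect opportunities each:
print(dpmo(17, 10_000, 500))   # 3.4 -> meets the Six Sigma definition
```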
Jane Waterfall, Content Manager at IASME Consortium, explains how systems such as heating, air-conditioning, smoke detectors, and smoke alarms can connect to generate, collect, and analyze data to monitor the environment in order to improve the effectiveness of service.

The connected, embedded sensors and devices that make up the Internet of Things (IoT) contain software that provides these systems with their "intelligence." All software contains millions of lines of code, and these inevitably contain some mistakes. In the world of cybersecurity, mistakes are called vulnerabilities and can be the equivalent of a window left open for cybercriminals to gain access. Herein lies the paradox: the hundreds of IoT devices brought in to help make a building more secure can create open gateways for hackers to access not only the device with the vulnerability, but the whole IT network that the device is connected to.

Cybersecurity is concerned with preventing unauthorized access to a building or a company's network and data. Many physical security systems now include numerous connected devices with remote access from the cloud, closely resembling an IT architecture. Cybersecurity is viewed as essential for technology that connects to the Internet. Yet if you consider the fact that many features in smart buildings still contain critical defects and overlook best practices, from a security point of view, many smart systems are far from smart.

Essential Cybersecurity for IoT

IoT is a very attractive target for hackers, not least because numerous IoT devices make it simple for attackers to steal valuable data, take control of or disrupt a system, or access bigger prizes within a network. Attacking the physical is often part of a larger attack, where its role is to act as an easier gateway to another system. IoT systems security is somewhat behind the security level of most business computers, with some security experts estimating it is at the stage in its journey where information security was 15 years ago. Consumer IoT devices and those found in many smart buildings frequently do not have even the basics in place, leaving the devices and the networks vulnerable to cyberattacks.

The ETSI EN 303 645 standard was created by a team of experts from across the European Union, drawn from industry, academia, and government, to prevent large-scale, prevalent attacks against smart devices. The standard, released in 2020, describes 13 requirements to establish a security baseline for connected consumer products and provides a basis for future IoT certification schemes. New legislation coming into law in the United Kingdom in the near future will bring some much-needed improvement to consumer IoT device security. The new legislation will specify three mandated security features that are aligned with the top three requirements of the European Technical Standard for IoT Security (ETSI).

Physical Security to Protect IT

In the same way that cybersecurity is needed to protect physical security technology, physical security practices are essential in helping to protect information technology. Access control is one of the key principles of cybersecurity, covering the essential precaution of controlling who can access your devices, accounts, and data. The technical control includes creating user accounts for everyday use and limiting access to administrative accounts to those people who need them for their roles. Access control also includes physical access to equipment and premises.
This would include, for example, protection from unauthorized people walking unchecked into an office or server room, or even just looking through a window. The rule of "least privilege" is a secure way to work. This simply means staff are given all the resources and data necessary to perform their roles, but no more. The same rule can be applied to accessing different parts of the business premises. Physical access control measures can include using a key card or biometric scan to enter the building and further access control for different offices, ensuring that computer screens are not visible from the window, and ensuring that devices used to access organizational data automatically lock after a period of inactivity.

Physical security and cybersecurity have long been seen as separate sectors, but with the rise of smart buildings and the interdependence of physical systems with Web-based or cloud-based networks, the boundaries between the two are becoming less visible. Organizations, facilities managers, and those in the security industry need to find ways to better identify, mitigate, and respond to risks across multiple security operations when the surface area of those risks is larger and continuously expanding.

Security convergence is the practice of integrating physical security and information security within projects and organizations. The idea is to manage the total risk to assets, property, systems, and networks in a holistic security strategy, anchored by shared practices and goals. Effective security convergence has needed a culture shift from that of siloed departments with separate funding sources and strategies to one of inclusion and collaboration.

The security sector knows that it needs to build more awareness of IoT breaches, provide education, share best practices, and accelerate the development and adoption of cybersecurity standards. Good security strategies focus on people, processes, and technology; encourage training and education for their teams; and prioritize working with trusted providers who use assured products and technology to connect their building assets.

IASME developed the IoT Security Assured certification scheme to provide an accessible and achievable way for manufacturers to demonstrate the security of their Internet-connected devices and to show they are compliant with best-practice security. When the IoT Security Assured scheme badge is displayed on the device, it will reassure end users that their devices include the most important security features. The scheme is aligned with the leading global technical standard in IoT security, ETSI's EN 303 645, and with imminent UK IoT security legislation and guidance.

Within the IoT Security Assured scheme, there are three levels of security that a device can be certified to:
- The Basic level: This level is aligned with proposed UK legislation and covers the top three requirements of the ETSI standard.
- The Silver level: This is aligned with the 13 ETSI mandatory requirements and data protection provisions.
- The Gold level: This is aligned with the 13 ETSI mandatory requirements, as well as all the additional ETSI recommended requirements and data protection provisions.

An information security management system (ISMS) such as the IASME Governance standard is a documented, systematic approach that addresses people, processes, and technology.
The Governance standard integrates both cybersecurity and physical security, helping organizations embed good security awareness, knowledge, and behavior into their practices as business as usual.

This story first appeared on IFSEC Global, part of the Informa Network, and a leading provider of news, features, videos, and white papers for the security and fire industry. IFSEC Global covers developments in long-established physical technologies, like video surveillance, access control, intruder/fire alarms, and guarding, and emerging innovations in cybersecurity, drones, smart buildings, home automation, the Internet of Things, and more.
In the quickly evolving world of the IoT, multiple standards have developed in a short span of time, each with the goal of allowing smart home devices to communicate with each other and with multiple online services. One solution to this issue is the use of Dotdot, an application layer developed for IoT devices to easily join networks of other similar devices and to communicate their status and capabilities in a standardized way, combined with Thread. Thread, an IP mesh network implementation designed for IoT device communication, promises to be a widely implemented standard in IoT device manufacture and development.

What is Thread?

Thread is an IPv6-based mesh network implementation based on the IEEE 802.15.4 specification for Low-Rate Wireless Personal Area Networks. What this means is that Thread is designed to connect hundreds of devices together in a short-range network, using as little power as possible to transmit and receive messages. Because Thread is based on IPv6 networking, all Thread-enabled devices are capable of communicating directly with each other through their local network, and can potentially be accessed directly over the wider internet.

To illustrate the usefulness of this configuration, imagine a typical home with 10 or 15 smart devices (light switches, alarms, thermostats, etc.). Within this group there may be devices from several manufacturers, some of which might connect to the home's network through WiFi, indirectly via Bluetooth, or through a custom hardware gateway using proprietary networking technology. If one device (like a smart speaker) wants to request action on the part of another device (perhaps asking a light bulb to turn off), typically it will send an outbound request over the home's network (presumably WiFi) to a cloud service hosted by the device's developer. That service will then make a request back over the internet, through the home's WiFi network, perhaps through a special bridge device, which then relays the request to the target smart device. The entire system is complex, with multiple points of failure, and this is assuming that the original request was a simple one. Imagine the added complexity when the original request involves targeting actions on multiple devices, each from a different provider.

Conversely, in a home that features devices on a Thread network using Dotdot, the smart speaker could communicate directly with the smart bulb using their built-in Thread radios, and each would know how to address the other using a common Dotdot vocabulary. Of course, connection to the broader internet for compatibility with cloud services is also possible through the use of what is called a border router: a device with a radio built for communicating on the local Thread network and some other method for connecting directly to the internet, be it a WiFi radio or an ethernet connection. This makes it possible for a cloud service to make a request through the border router, which then forwards it along to the ultimate target device.

The fact that Thread is a form of mesh network makes it incredibly stable, given that many of the smart devices on the network can themselves serve as routers to pass messages along to other devices. This way, if a single device fails, the messages that would be routed through it can instantly be rerouted through other devices. And if a new device joins the network, its range and routing efficiency increase.
What is Dotdot?

As mentioned above, Dotdot is an application layer that defines a common language IoT devices can use to talk with each other, to communicate status information and execute requests. It is based on the ZigBee Cluster Library, the application layer of the ZigBee networking standard, but it is generalized to work on any type of IoT device network.

Dotdot allows devices to join a local network of other devices and to communicate information about their abilities to each other. For instance, a light may request to join the local home network. The security of that connection is negotiated using the Dotdot specification, and the device communicates its abilities to the rest of the network. In the case of a light, the device might broadcast that it has the ability to turn on and off, to be dimmed from 0 to 100 percent, or to change color (a sketch of such an announcement appears at the end of this article).

Having multiple devices from different manufacturers on one home network without something like Dotdot would make device-to-device communication impossible. This leads to the scenario described above, where each manufacturer must maintain a separate method of communication for interacting with its devices, and requests must come through separate cloud services. With Dotdot, even if the internet connection to a home is out, a user could send a request over their home WiFi network, through a border router, directly to a Dotdot device.

The Future of Smart Home IoT

While both Thread and Dotdot are promising new technologies for smart home system control, the specifications for both are still relatively new. Dotdot in particular only had its specification finalized last year, and it is still in the process of ratification and release to members of the Zigbee Alliance. On paper, Thread and Dotdot seem like excellent choices for home and commercial IoT systems, but time will tell whether these technologies achieve the adoption rate it will take to be a serious player in the market.
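Here is that illustrative sketch of a device announcing its abilities to the local network, loosely modelled on ZCL-style clusters. This is a readable stand-in, not the actual Dotdot wire format, and the device and cluster names are invented.

```python
"""Illustrative shape of a device announcing its abilities to the local
network, loosely modelled on ZCL-style clusters. A readable stand-in, not
the actual Dotdot wire format; all names are invented."""
import json

announcement = {
    "device": "smart-bulb-01",            # hypothetical device identifier
    "clusters": {
        "on_off": {"commands": ["on", "off", "toggle"]},
        "level":  {"attributes": {"min": 0, "max": 100}},   # dimmable 0-100%
        "color":  {"attributes": {"modes": ["rgb", "temperature"]}},
    },
}

# Any other node (a smart speaker, say) can now discover locally, without a
# vendor cloud round trip, that this bulb supports on/off, dimming and color.
print(json.dumps(announcement, indent=2))
```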
Cybersecurity measures are an integral part of every modern business. With high-profile data breaches in the news every other week and reputations at stake, there has never been a better time to protect your network. Digital security requires robust architecture, effective management, and regular testing. While setting up a secure network is a great first step, ongoing tests are needed to mitigate risk and ensure compliance.

What Is Penetration Testing?

Also known as pen testing or ethical hacking, penetration testing is an essential component of every security solution. Penetration testing looks for specific vulnerabilities in computer networks to address the overall security posture of an organization. From security services and policies to staff awareness and disaster response plans, it's important to understand why penetration testing matters to your business.

What Does Security Penetration Testing Do?

Highlights Specific Risk Factors

Cybersecurity is a large and growing field that includes everything from antivirus applications to firewalls, authentication standards, and data encryption. The sheer size and scope of the data security industry make it difficult to identify specific risk factors and potential exploitation points. Penetration testing is the perfect way to find the gaps in your current system, with testers trained to think like hackers and stay ahead of the game.

Enables a Continuous Security Stance

Computer networks are always changing, with services and locations added, new applications introduced, and updates applied as an organization grows and evolves. While it's important to set up a secure network architecture from the outset, it's equally important to approach security as an ongoing process that adapts and responds as needed. Penetration testing is not a one-time solution; rather, annual testing is advised, and additional tests are needed whenever significant changes are made.

Ensures a Good Reputation

Maintaining the integrity of your data is the responsibility of every modern business. Whether you run a small family business or a large multinational corporation, you are both ethically and legally responsible for the privacy and security of sensitive customer and employee information. Data breaches have an undeniable influence on professional reputation, with large companies often suffering for years after a major breach and many smaller organizations having to fall on their swords. According to research conducted by the National Cyber Security Alliance, 60 percent of hacked SMBs go out of business within six months.

Provides You with Valuable Insights

All professional testing procedures conclude with a report, including details on possible and potential breaches, specific vulnerabilities, and recommended remediation actions. The end goal of every penetration test is to improve your security posture by updating services and applications, investing in new hardware solutions, and training your staff to manage risk. Uncovering potential exploits in critical business systems is the best way to gain actionable insights and implement efficient security strategies. For example, according to the 2019 Internet Security Threat Report from Symantec, Living off the Land (LotL) attacks increased by 78 percent in 2018. These attacks allow hackers to hide inside legitimate processes, and this insight helps security teams to develop effective new strategies.
Ensures Regulatory Compliance

While being compliant doesn't make an organization secure, carrying out regular pen tests is a necessary part of many compliance standards. Data protection standards are often determined by external regulatory bodies, with pen testing mandated in many technical, financial, and healthcare industries, just to name a few. Penetration testing can also help with general policies regarding information security, many of which are prescribed in industry regulatory standards.

Helps with Training Implementations

Penetration testing is concerned with what can be compromised, not just what is already vulnerable. Along with testing systems and applications, pen testing is also interested in the people who control the technology. Comprehensive testing includes both external and internal procedures, with external threats coming through the Internet and internal threats coming from employees. In both cases, people are usually the path of least resistance for attackers, which is why training is such an important tool. Pen testing uncovers gaps in communication and training systems and helps to implement appropriate support and education strategies.
Is This An Issue?

2020 started off with a bang in the world of energy-related cybersecurity: a pipeline in the United States was shut down as a result of a ransomware incident.[1] In this particular case, the natural gas supplier saw an attack that began as a spear-phishing email. Eventually, it compromised human-machine interfaces, data historians and polling servers on the OT network, having come across via the IT network.[2]

In the wake of the incident, which was one of many that have occurred over time, the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) issued an alert that showed some of the flaws in the natural gas provider's security. The vulnerabilities included human factors, a failure to include cybersecurity and related training in the emergency response plan, and too much focus on physical security alone.[3] Also exposed was a lack of segmentation between the IT and OT networks.[4] Overall, a lack of knowledge seemed to be the biggest factor, and CISA made the following recommendations: network segmentation, multi-factor authentication, regular data backups, least-privilege access policies, anti-phishing filters, AV, whitelisting, traffic filtering and regular patching.[5] CISA also suggested that any critical infrastructure provider should add cyber-knowledge to its training.

But most organizations are okay, right?

This is a critical issue for several reasons. The first, as reported in a study conducted by Allianz, is that 54% of critical infrastructure suppliers had reported attempts by hackers to control systems, and 40% had experienced attempts to shut down systems.[6] This represented a 20% increase at the time, and numbers suggest that the increases have continued. In other words, these are real threats that are happening on a regular basis. Today's threats can turn into attacks very quickly.

Secondly, confidence in critical infrastructure is essential, especially for the public who rely upon it. While the US to date has largely been spared from these kinds of attacks, Polish airline LOT had to ground planes after a DDoS attack, and national power grids in Israel and Ukraine were the victims of major cyber-attacks that required the grids to be shut down to prevent the spread of a virus.[7] One attack in the United States occurred on the Bowman Avenue Dam in New York, in which Iranian hackers took control of the floodgates.[8] Nation-state attackers are very real threats.

The write-up with the study went so far as to include this 'nightmare scenario' of what an attack would look like: "During a particularly harsh winter, a group of hacktivists spreads panic by bringing down the US power grid. Millions of homes and businesses are plunged into darkness, communications are cut, banks go offline, hospitals close and air traffic is grounded."

Anyone could easily imagine what such a scenario would mean, how quickly confidence in a local utility would be shaken, and how an even wider-scale attack would alter reality.

The reality is that critical infrastructure information systems are complex. On a simple level, there are traditional information technology systems and industrial control systems, like Supervisory Control And Data Acquisition (SCADA) systems and IoT. The traditional information technology systems are often better protected, in part because they are easier to protect and often viewed as easier to breach than control systems. That said, many of the hacking attempts have been seeking not data, but control of those key systems.
A more recent report, compiled by Utility Dive, revealed that 37% of the 566 utilities surveyed had not fully implemented their plans around cybersecurity.9 At the same time, 84% of the utilities felt well prepared for a cyber attack and 78% had instituted organization-wide digital hygiene programs.10 Optimism was high, yet the respondents ranked cybersecurity only the 4th most important topic on their plates, and 38% admitted to a failure in implementing cybersecurity programs to deal with third-party vendors.11 Which leads to our last section...

So what about NERC CIP?

The North American Electric Reliability Corporation's Critical Infrastructure Protection standard (NERC CIP) is the standard most utilities strive to achieve with regard to their cybersecurity, among other topics. 2020 is a big year, as there have been several recent updates to NERC CIP, with more coming. The key deadline to remember is July, when several updates come into effect.

For those that aren't familiar with NERC, it's a non-profit quasi-governmental agency that sets the CIP standards. CIP is a general protection standard, meaning it spans topics beyond just cybersecurity. Companies covered include utilities and other companies involved with critical infrastructure, including the vendors that support utilities and operations like ports.

Most recently there was a deadline on January 1st of 2020 pertaining to CIP-003-7 on Security Management Controls. Next on deck are updates to two existing categories and a de novo category around supply chain cybersecurity risk. First off is CIP-005-6, pertaining to the Electronic Security Perimeter. Then it's CIP-010-3, regarding Configuration Change Management and Vulnerability Assessments. These both represent changes to existing policies. Lastly is the new policy, CIP-013-1, which implements Supply Chain Risk Management.

CIP-005-6 Cybersecurity: Electronic Security Perimeter

This section defines fairly detailed rules for firewalls, DMZs, and network segmentation requirements for protected assets. The added requirements center around the implementation of CIP-005-6 parts R2.2.4 and R2.2.5, which stipulate that covered entities must have methods for determining how many active vendor remote access sessions they have at any given time, and a way to disable those sessions. Many general remote access solutions don't differentiate between internal and vendor sessions and don't allow granular management and control over individual sessions. If you have one of these systems, or no system at all and are just using VPN connections, you will have to develop some custom controls to monitor this activity and manually pull the reports you need to show compliance. Implementing a vendor management system that focuses on third-party access can help you isolate and track vendor sessions separately from internal sessions and make this job a whole lot easier.12

CIP-010-3 Cybersecurity: Configuration Change Management and Vulnerability Assessments

These controls are designed to prevent unauthorized changes to systems and also stipulate regular vulnerability assessments and tests to make sure systems are not susceptible to such modifications. There are a number of elements to this section, but the only changes taking effect for the July 2020 implementation are parts R1.1.6, R1.6.1, and R1.6.2, which require you to verify the identity of any software you use in your supply chain, along with its integrity.
This can be done by checking hashes (a short sketch of such a check appears at the end of this article), having processes for software downloads that stipulate known sites, checking certificates, and more. Most of this is fairly easy to implement unless you have a large software development operation. Some software development tools will do some of this for you as well.13

CIP-013-1 Cybersecurity: Supply Chain Risk Management

This adds a new section to the CIP standards and probably represents the area that is least fully implemented by covered entities. It details the development and deployment of a formal supply chain risk management program. An astonishingly large number of organizations don't have a written program to track third-party risk, even those managing a large population of vendors doing critical tasks. Section 1.2 describes the various requirements you must have for vendors and supply chain partners, including notifications of breaches on their end, onboarding and offboarding of their users in your systems, and software integrity verification. Finally, it all has to be reviewed and signed off on by the enterprise's CIP Senior Manager at least every 15 months, with documentation of compliance per the R2 and R3 rules.

While this may seem like a lot to get done, there are many technology solutions out there that can help put the technical controls in place, such as Vendor Privileged Access Management (VPAM) systems and various vendor risk assessment platforms and exchanges. The key is getting started with your program policy and procedure documents, for which there are many templates available on the Internet and consultants willing to put them together for you.14

These updates are the last for the next two years, with another batch expected in July of 2022. Implementing them will be a big step, especially the new requirements around vendor supply chain and cybersecurity.
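As a concrete illustration of the hash-based software integrity verification mentioned above, here is a minimal Python sketch that compares a downloaded file's SHA-256 digest against a vendor-published value. The file name and expected hash are placeholders, not values from any real vendor.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large installers don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: substitute the vendor-published hash and the real file name.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of("vendor-update.bin")

if actual != expected:
    raise SystemExit(f"Integrity check FAILED: got {actual}")
print("Integrity check passed")
```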
In the IBM i 6.1 release, the ability to encrypt data on disk was introduced. This support allows you to encrypt your data on user and independent disk pools ("disk pools" is the term used in the graphical user interface; if you typically use green screen interfaces, auxiliary storage pools (ASPs) may be more familiar terminology).

There are several reasons you may want to consider encrypting your data on disk:
- If you are using external storage, you may want to have the data encrypted so it cannot be viewed over the network when it is being written to or read from the external storage system.
- Similarly, if you are using data replication technologies to mirror your data for disaster recovery or high availability purposes, you may want your data to be encrypted to protect it during transmission in the cross-site mirroring environment.
- If you had a disk drive stolen, encryption would protect the data on that disk from being read.

However, disk encryption just ensures the data on the disk is encrypted. Disk encryption will NOT protect your data from misuse within your organization. Once the data is read from disk and loaded into memory, the data is available in the clear to the programs (and thus users) accessing that data. (The short sketch at the end of this post illustrates this at-rest versus in-memory distinction.)

To use disk encryption, you need to purchase and install Option 45, Encrypted ASP Enablement. When this option has been installed, it will make available to you the ability to encrypt data on your disk pools. The user interface to specify that disk pools are to be encrypted is available through the IBM Navigator for i graphical user interface or through the Disk Management tasks within System Service Tools (SST); note, however, that encrypted independent ASPs (IASPs) can only be configured through the graphical user interfaces.

In the 6.1 release, you specify that a disk pool is to be encrypted when you configure it, and encryption cannot be turned off without recreating the disk pool (which means deleting it and creating it again). In the 7.1 release, IBM enhanced the disk encryption support so you can dynamically turn encryption on and off, but this can be a lengthy operation; turning encryption on means all the data in the disk pool must be encrypted. Likewise, turning encryption off means all data in the disk pool must be unencrypted.

Once the disk pool is set up to use encryption, you can expect an increase in CPU consumption and additional memory requirements, but with proper planning, you should be able to achieve the same performance when encrypting your data as you had without encryption. The Performance Capabilities Reference reviews the performance characteristics of disk encryption.

This blog was just a high-level review of IBM i disk encryption. If you choose to learn more about encrypting your data on disk, be sure to read about master keys, understand key management, and be aware of important backup and recovery information, all of which is covered in the IBM i Information Center. Another useful reference is the Security Guide for IBM i, which also covers IBM i disk encryption.

This blog post was originally published on IBMSystemsMag.com and is reproduced here by permission of IBM Systems Media.
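The at-rest versus in-memory distinction above can be shown with a short sketch. This is not IBM i specific; it uses the third-party Python cryptography package purely as a stand-in to show that encrypted data "on disk" is unreadable without the key, while a running program sees cleartext.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()      # stands in for the disk pool's data key
f = Fernet(key)

record = b"customer: 4111-1111-1111-1111"
on_disk = f.encrypt(record)      # what a stolen drive would contain
print(on_disk[:24], b"...")      # ciphertext: useless without the key

in_memory = f.decrypt(on_disk)   # what a running program (and its users) see
print(in_memory)                 # cleartext: disk encryption cannot help here
```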
Is Blockchain the Right Technology for My Application?

This is an attempt at guidelines to help in technology decisions about when and where to use blockchain technology.

Blockchain in a Nutshell

A blockchain is a distributed list to which more than one entity can add new elements. Each list element cryptographically validates its predecessor. Combined with the fact that adding new elements is computationally expensive, this protects the list's integrity. (A minimal code sketch of the hash-linking idea appears at the end of this post.)

Pros and Cons of the Blockchain Architecture

Let's start with what blockchain is good at:
- Due to the distributed nature, there is no dependency on any individual entity for the management of the data
- Tampering with the data is uneconomical because of the huge resources that would be required

And here are the caveats:
- Maintaining the list is highly inefficient due to the tamper-proof architecture
- The throughput in transactions per second is very low compared to a database
- Storage requirements can be high because verification of the list's integrity requires access to all blocks

When to Use Blockchain Technology

From the above, we can deduce where blockchain is a good technology choice.

Trust and Accountability

Legal contracts allow for pretty efficient management of accountability and, indirectly, trust. Making a system's technology tamper-proof is not necessary when the risks of tampering are too high for the parties involved. Takeaway: blockchain technology only makes sense where traditional legal instruments are insufficient or not applicable.

Dependencies on Elements Outside the Blockchain

If you cannot move the entire data set including all dependencies to the blockchain, the mechanisms that protect the blockchain's integrity are wasted. People will simply cheat elsewhere in the process. Takeaway: a blockchain must be self-sufficient and not rely on external data.

Processing Speed and Efficiency

Blockchain technology incurs a high computational overhead. Compared to databases, blockchains are inefficient and slow. Takeaway: blockchains are not replacements for databases.

Putting the hype aside, there seem to be very few use cases where blockchains would genuinely be the best technology choice. Bitcoin seems like a good fit but is hampered by its low transaction speed and huge (energy) inefficiencies.
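To make the hash-linking idea concrete, here is a minimal Python sketch of a list in which each new element cryptographically validates its predecessor. It deliberately leaves out the computationally expensive part (proof of work) and all distribution across entities; the block payloads are made up for illustration.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over a block's full contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that stores the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Recompute every predecessor hash; any tampering breaks a link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
for payload in ("genesis", "alice->bob: 5", "bob->carol: 2"):
    add_block(chain, payload)
print(is_valid(chain))                  # True

chain[1]["data"] = "alice->bob: 500"    # tamper with history
print(is_valid(chain))                  # False: block 2's link no longer matches
```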
Based on current projections, up to 50 million COVID-19 vaccine doses will be distributed globally in 2020 and up to 1.3 billion doses by the end of 2021. Some vaccines in development must maintain a consistent temperature of approximately -70°C ±10°C (-94°F ±18°F) during storage and shipment to remain stable and viable, which presents a logistical challenge in global distribution. This is where Cold Jet® is making a difference.

Cold Jet – the worldwide leader in the manufacturing of dry ice production and blasting equipment – is helping industry and governments to safely distribute billions of COVID-19 vaccine doses around the world. Pharmaceutical and logistics companies as well as governmental agencies need to create an ultra-low temperature cold chain in order to keep vaccines at these sub-zero temperatures. Currently, the existing refrigerated transport infrastructure and supply chain is not prepared to handle these types of shipments.

Dry ice, made of recycled CO2, will be essential in the safe distribution of every single one of these doses. It sublimates without creating any waste or residue, making it the perfect refrigerating medium during the shipment of vaccines. "Dosing dry ice directly into temperature-controlled thermal shippers is the only way to maintain that temperature level during transit and storage," said Wim Eeckelaers, Managing Director, EMEA, Cold Jet. "At all points in the vaccine transportation and distribution cold chain, dry ice is needed to maintain temperature."

Dry ice has long been an essential contributor to the global economy. However, it has taken on a new responsibility in humanity's fight against COVID-19. Once the vaccine reaches local communities, it will be further divided up and sent to vaccination centres, public and mobile clinics, hospitals, and pharmacies. Cold Jet is working with local distributors around the world to ensure that they are able to meet the needs of all communities, whether they are in the inner city or in hard-to-reach rural areas.

The World Health Organization cites inadequate cold chain capacity as a major issue for vaccine distribution in developing economies. In recent years, millions of doses of vaccines were lost due to cold chain failures. Cold Jet's Dry Ice Production Hub helps solve this issue. "We want to guarantee that any person, no matter where they live in the world, is able to obtain a safe and viable vaccine," said Dennis Hjort, Vice President – Global Dry Ice Manufacturing Systems, Cold Jet.

Cold Jet's dry ice production machines are capable of producing up to 1,600 pounds (750 kilograms) per hour and are engineered to run 24 hours a day, seven days a week. From the packaging lines at a multinational pharmaceutical company to distribution centres at global logistics companies and locally within hundreds of communities around the world, Cold Jet machines are producing dry ice at all points in the vaccine distribution cold chain.

"The impact of COVID-19 on global health and the global economy has been painful, but Cold Jet is extremely proud to play such a vital part in the distribution process of a vaccine," said Eeckelaers.
Stakeholders include all members of the project team as well as all interested entities that are internal or external to the organization. The project team identifies internal and external, positive and negative, and performing and advising stakeholders in order to determine the project requirements and the expectations of all parties involved. The project manager should manage the influences of these various stakeholders in relation to the project requirements to ensure a successful outcome. Stakeholders have varying levels of responsibility and authority when participating on a project. This level can change over the course of the project’s life cycle. Their involvement may range from occasional contributions in surveys and focus groups to full project sponsorship which includes providing financial, political, or other support. Some stakeholders may also detract from the success of the project, either passively or actively. These stakeholders require the project manager’s attention throughout the project’s life cycle, as well as planning to address any issues they may raise. Stakeholder identification is a continuous process throughout the entire project life cycle. Identifying stakeholders, understanding their relative degree of influence on a project, and balancing their demands, needs, and expectations are critical to the success of the project. Failure to do so can lead to delays, cost increases, unexpected issues, and other negative consequences including project cancellation. An example is late recognition that the legal department is a significant stakeholder, which results in delays and increased expenses due to legal requirements that are required to be met before the project can be completed or the product scope is delivered. The following are some examples of project stakeholders: A sponsor is the person or group who provides resources and support for the project and is accountable for enabling success. The sponsor may be external or internal to the project manager’s organization. From initial conception through project closure, the sponsor promotes the project. This includes serving as spokesperson to higher levels of management to gather support throughout the organization and promoting the benefits the project brings. The sponsor leads the project through the initiating processes until formally authorized, and plays a significant role in the development of the initial scope and charter. For issues that are beyond the control of the project manager, the sponsor serves as an escalation path. The sponsor may also be involved in other important issues such as authorizing changes in scope, phase-end reviews, and go/no-go decisions when risks are particularly high. The sponsor also ensures a smooth transfer of the project’s deliverables into the business of the requesting organization after project closure. Customers and users: Customers are the persons or organizations who will approve and manage the project’s product, service, or result. Users are the persons or organizations who will use the project’s product, service, or result. Customers and users may be internal or external to the performing organization and may also exist in multiple layers. For example, the customers for a new pharmaceutical product could include the doctors who prescribe it, the patients who use it and the insurers who pay for it. 
In some application areas, customers and users are synonymous, while in others, customers refer to the entity acquiring the project’s product, and users refer to those who will directly utilize the project’s product. Sellers, also called vendors, suppliers, or contractors, are external companies that enter into a contractual agreement to provide components or services necessary for the project. Business partners are external organizations that have a special relationship with the enterprise, sometimes attained through a certification process. Business partners provide specialized expertise or fill a specified role such as installation, customization, training, or support. Author : Mahmoud Qeshreh
- What is the definition of a computer network?
- Basic Definition of a Computer Network
- Define Computer Network
- What are the advantages of a computer network?
- What are some of the disadvantages of a computer data network?
- What are the components of a computer network?
- What are the uses of a Computer Network?
- What are the types of computer networks, their definition, and computer network examples?
- What are the types of computer network designs or architecture & their definition?
- What are the types of computer network topologies & their definition?
- How does a computer network work?

What is the definition of a computer network?

Computer networks are the foundation of the modern business world. They're the connections that carry network traffic and link together the hardware and software of every digital device. Without a well-designed and functioning network, businesses and individuals wouldn't be able to collaborate, share, or make their daily lives more efficient. This article will help you understand the definition of a computer network, its advantages, components, types, and uses.

Basic Definition of a Computer Network

The simplest and most basic definition of a computer network is "two or more computers linked together in some way so that they can share information."

A computer network, or computer networking, is the interconnection of computers to share data and resources. Networked devices use communications protocols over digital interconnections to transfer data. These interconnections are built on telecommunication network technologies such as wired, optical, and radio-frequency links. Computers, servers, networking hardware, and other special-purpose hosts are all nodes that can be part of a computer network. They all have network addresses, which protocols such as the Internet Protocol use to identify nodes.

Define Computer Network

This blog post defines computer networks in terms of their advantages and disadvantages, their use cases, types, and examples.

What are the advantages of a computer network?

These are the main advantages of computer networks:

- Better and Faster Collaboration – The ability to work with coworkers virtually from anywhere is now possible because of advances in collaboration tools like Microsoft Teams, Cisco Webex, and Zoom, and new network technologies like 5G and 100G fiber.
- Central Storage of Data – Data can be centrally stored and made available to all users in an organization via a file server (the central node).
- Resource Sharing is Easy – Resources such as Internet links, printers, scanners, and copiers can be easily shared using computer networks.
- Cost-Efficient – By sharing storage and the resources mentioned above, you can efficiently reduce costs.
- Easy to Access and Learn – You only need a small set of skills to connect to a modern computer network.
- Power to Choose Location – With the growth of VPN technologies, you can work from virtually anywhere while still sitting at home.
- Reliable – Computer networks provide ways to back up your data, which is essential for reliability. For example, when a piece of equipment fails or a part of the data is corrupted, a backup copy of the same information is available on another workstation, allowing for uninterrupted work and management of the data.
- Flexible – Computer networks allow you to share information in a variety of ways, for example via email, Teams, Skype, Webex, or WhatsApp, so users have choices.
- Provide Data Security – Access control protocols (AAA: Authentication, Authorization, and Accounting) and encryption certificates give computer networks a high level of security. Further, you can incorporate third-party security solutions like antivirus, firewalls, and antimalware to make networks even more secure.
- Problem-Solving in Less Time – You can resolve a specific issue quickly because a lengthy procedure can be broken down into several smaller tasks and sub-tasks, each handled by the various tools involved. In addition, computer networks respond quickly to changing conditions, enhancing application availability.
- Operate Virtually – Multiple "overlay" networks can be created by logically partitioning the underlying physical infrastructure. Data can travel between nodes in an overlay network via numerous physical pathways. Many enterprise networks, for example, overlay the Internet.
- Integrate on a Large Scale – Modern networking services link geographically dispersed networks. These services can automate and monitor network functions to create one large-scale, high-performance network, and network services can be flexibly scaled.

What are some of the disadvantages of a computer data network?

The use of computer networks brings a fair number of advantages with it, but networks also have a few disadvantages. Many of these are less outright disadvantages than precautions and improvements to keep in mind while using computer networks.

- Lacks Robustness – In a network, all connected systems depend on the main server. If the central server or a bridging device fails, the network as a whole fails. To reduce this risk and simplify implementation and maintenance, most major firms run their main servers on powerful, well-maintained computers.
- Lacks Independence – Because networks are centralized, most decisions are made by the server, which restricts the user's freedom to use the computer as they like. Also, because modern networking makes operations so simple, people increasingly rely on computers even for basic tasks that could be done by hand.
- Spread of Malware and Viruses – A virus can readily spread among computers in a network because of their interactions. If one of the machines has malware, it is likely to spread to the others, and infection of the central server can cause similar issues. All of these can corrupt files, so the network administrator should scan for malware regularly.
- High Deployment Costs – Even though computer networks are considered cost-effective to operate, they are not cheap to deploy. The cost grows with the number of systems to be linked, and separate connections and equipment such as switches, routers, and hubs are also required.
- Higher Possibility of Hacking – A networked computer, unlike a standalone one, poses various security hazards. Since a network has many users, hackers can quickly exploit massive networks like WANs using specialized tools built for this purpose. Large firms therefore use security systems like firewalls to prevent theft and other illicit actions.
- Decreased Productivity – One of the major issues with computer networking is reduced company productivity: employees can use internet connections for purposes other than office work. While this can help employees relax, it can also lead to a loss of productivity.
So management must decide how much computer or Internet use is acceptable.

- Regular Maintenance Required – A computer network requires frequent maintenance to function correctly, and that maintenance requires real skill: networks involve complex setups and configurations, so experienced network engineers are needed.
- Health Issues – Gaming is one kind of entertainment that computer networks enable. While gaming can be a stress reliever for some, it can also become addictive, and gaming addiction can lead to physical and mental health difficulties such as insomnia and obesity.
- Not Available Everywhere – Despite how widespread the Internet has become, connectivity is still not universal. People in impoverished countries, in particular, struggle with access. A truly worldwide network cannot be guaranteed until these issues are rectified.

| Advantages of Computer Networks | Disadvantages of Computer Networks |
| --- | --- |
| Better and faster collaboration | Lacks robustness |
| Central storage of data | Lacks independence |
| Resource sharing is easy | Spread of malware and viruses is easy |
| Cost-efficient | High deployment costs |
| Easy to access and learn | Higher possibility of hacking |
| Power to choose data access location | Decreases productivity if misused |
| Reliable | Regular maintenance is required |
| Flexible | Can cause health issues |
| Provides data security | Not available everywhere |
| Problem-solving in less time | |
| Integrates on a large scale | |

What are the components of a computer network?

A computer network can be wired or wireless. Computer networks are made up of hardware and software components that together let people set up networks at work and at home. The hardware parts are the servers, clients, transmission media, NICs (network interface cards), and connecting devices. The software parts are the network operating system and the protocols.

- A server is a computer used centrally to manage programs and data files, allowing those files to be shared and accessed by several users. A server is a device that provides service to other devices (the clients); servers can be said to be the backbone of the Internet. A web server (serving web pages to other computers), an email server (receiving and sending email), and a file server (holding and distributing files) are all examples of servers.
- A client is a computer that connects to a server, then requests and receives service from it to access and use programs and data files. For example, when you send a print job, your computer is the client, and the computer or printer you are printing to is the server.
- Transmission media are the media through which computers send information from one place to another. Examples include wired media like copper wire and optical fiber, and wireless media like radio waves, microwaves, 5G, and infrared.
- A network interface card, or NIC for short, is a hardware component inside a computer that provides the communication interface necessary to send and receive data over a network. The NIC allows the computer to communicate over various types of networks, such as Ethernet LANs and WiFi WLANs, depending on which type of NIC is installed. While it is essential for a computer network, the NIC is the least visible component: it fits into a slot on the computer's motherboard.
- Connecting devices are middlemen between networks or computers, tying the various network media components together.
Some of the standard connecting devices are:

- A hub is a not-so-intelligent network device used to interconnect network nodes. A hub contains multiple ports, each connecting to a single network node. All ports are connected internally, forming a single shared communications medium, so every port receives the signals transmitted between ports. Each port thus sees all the traffic, and there is no concept of one node communicating directly with another.
- A repeater is a network device that repeats a digital signal so it can cover more distance and, if necessary, reach more nodes or subnets. It is most often used to expand the reach of an Ethernet LAN, WiFi, or other local area network (LAN).
- A bridge can be considered a hub with more intelligence. The difference between hubs and bridges is the bridge's ability to learn the MAC addresses of hosts on each network segment and then forward traffic only to the segment that contains the destination host.
- A switch is a Layer 2 device that can segregate a LAN into multiple segments called VLANs. It connects multiple LAN segments, or two communication networks with similar requirements. A switch's main job is to forward and filter data between the computer network and its users. A switch is a multiport network bridge that operates at the data link layer (Layer 2) of the OSI model.
- A router is a device that forwards data packets between computer networks. A router is generally connected to two or more networks and forwards packets between them. For example, you could have a router in your home that connects your desktop computer to the Internet.
- Modem stands for MOdulator/DEModulator. Modulation converts a digital signal into an analog signal, and demodulation converts an analog signal into a digital signal. For example, a modem converts the analog signal from telephone lines into a digital signal that a router or switch can understand, and vice versa. A typical home Internet device contains both a modem and a router.
- A gateway is a piece of hardware that connects two different networks. When data enters or leaves a network, it passes through this gateway node, which sits at the edge of the network. It acts as a "gate" between networks and could be a router, firewall, server, or any other device that lets traffic move in and out. Unlike bridges or switches, which connect two networks of the same type, a gateway connects two different networks.

Technologies Used as Software Components

Networking Operating System
- Network operating systems are often deployed on servers and allow workstations in a network to exchange files, databases, programs, printers, and other resources as required.

Protocols
- Each machine in a network follows a set of rules and guidelines known as a protocol. A protocol suite is a collection of interrelated protocols for computer networks. The most often used protocol suites are:
- the OSI (Open Systems Interconnection) model
- the TCP/IP (Transmission Control Protocol / Internet Protocol) model

What are the uses of a Computer Network?

Computer networks have become very important in our businesses and our personal lives. Modern network solutions do more than connect people; they are vital to the digital transformation and success of companies today. Below are some of the common uses of a computer network.
- Faster Personal Communication – Computer networks have enabled unprecedented increases in communication speed and volume. People use applications like email, WhatsApp, Webex, and Zoom to send text messages, documents, photos, and videos around the world in the blink of an eye. In addition, social networking sites like Facebook, Instagram, and TikTok have multiplied online communication.
- Information and Resource Sharing – Computer networks enable organizations with dispersed departments to share information efficiently. Any program, file, or piece of software running on one computer can be shared with and accessed by other computers connected to the same network. Networks also enable sharing of hardware devices such as file servers, printers, and scanners among multiple users.
- Getting Information Remotely – Using computer networks, people can get information about many things that aren't physically near them. Information systems like the World Wide Web give users access to the databases that store this information.
- E-Commerce – A wide range of business and commercial transactions can now be carried out online, known as e-commerce. Individuals and groups can pool funds, buy or sell goods, pay bills, manage bank accounts, pay taxes, transfer payments, and handle investments electronically, all thanks to computer networks.
- Highly Reliable Systems – Distributed computing is made possible by computer networks, allowing data to be kept in different places simultaneously, so the system can be relied upon. Even if one of the data sources fails, the system will still function, and the data will still be available from the other sources.
- Cost-Effective Systems – Computer networks have cut the cost of setting up computer systems in businesses. In the past, companies needed to buy and set up expensive mainframes for computation and storage; with the rise of networks, it is enough to set up a group of connected personal computers (PCs) for the same purpose.
- VoIP – VoIP, or Voice over Internet Protocol, has changed the way people communicate. Phone calls are now carried digitally using Internet protocols instead of analog phone lines.

What are the types of computer networks, their definition, and computer network examples?

Below are the computer network types and their examples.

Local Area Network (LAN)

A LAN is a network used only in a small area, like a building or a floor. It uses short-range technologies like Ethernet and Token Ring to connect devices. Most of the time, a LAN is under the control of the company or organization that needs it.

Some examples of LANs are:
- Network connectivity in a home or office, or between two computers.
- A home or office wireless LAN (WiFi).

Wide Area Network (WAN)

A WAN is a network that connects two or more LANs through a third-party service provider. An example is an MPLS cloud that connects corporate offices in Toronto, London, New York, and New Delhi. This MPLS cloud is provided by a telecom company and not owned by the company that uses it.

Example of a wide area network (WAN):
- The Internet is an example of a WAN.

Campus Area Network (CAN)

This type of network connects LANs and/or buildings owned or operated by the same person or organization in a specific area. Because a single company controls the environment, underground conduits may allow fiber connections between the buildings.
College campuses and industrial parks are two good examples.

Metropolitan Area Network (MAN)

A MAN is a network that connects LANs and buildings in an area larger than a campus. To connect a company's offices across a city, for example, a MAN might use the services of a telecom company.

Example of a metropolitan area network (MAN):
- Banks usually use MANs to connect their offices within a city.

Enterprise Networks

Enterprise computer networks are special networks built for large corporations, usually called enterprises, with specific requirements that must be met. Since networking is crucial for any modern company to function, enterprise networks must be highly available, scalable, and robust. Enterprise networks are equipped with tools that allow network engineers and operators to design, deploy, debug, and remediate them. An enterprise may use LANs and WANs across its campus, branches, and data centers.

Example of an enterprise network:
- The networks of Amazon, Google, or any other big organization serving their users or clients.

Service Provider Networks

Service providers use WANs to connect individuals and companies. In addition to leased lines, they may also offer more advanced, managed services to businesses. Customers connect to the Internet and mobile services through service providers.

Examples of service provider networks:
- Large telecom providers like AT&T, Bell, and Rogers.

What are the types of computer network designs or architecture & their definition?

You can break computer network design down into two main categories.

Client/Server Architecture
- In a client/server network, a central server or collection of servers manages resources and services.
- Clients in a client/server architecture don't share resources with each other the way P2P clients do.
- The server connects the clients in a network.
- Client nodes rely on server nodes for memory, computing power, and data.
- A server can control client node behavior.
- Clients can interact with each other but not share resources.
- Most applications work on the client-server model, including HTTP, FTP, and SMTP. (A minimal socket sketch of this model appears at the end of this article.)

Peer-to-Peer (P2P) Architecture

A peer-to-peer (P2P) network architecture is frequently employed to transfer digital media assets. Rather than having one central server, a peer-to-peer network distributes the processing and bandwidth among all computers on the network. A network with this level of decentralization is less prone to systemic failure and makes better use of available resources. Aside from online file sharing, P2P networks have been used by Bluetooth-powered electronic devices and Internet-based communication services.
- P2P architecture, for example, is used by certain firms to host memory-intensive programs, such as 3-D visual processing, across several digital devices.

What are the types of computer network topologies & their definition?

A network topology is an arrangement of nodes and links, which can be assembled in different ways to achieve different outcomes. Examples of network topologies include:

- In a star topology, all the computers are connected to a central node via cable or wireless links.
- There is no daisy-chaining on a star network; each computer is linked to the network core separately.
- You can manage the entire network from a single location with a star topology.
- In a bus topology, all computers are connected by a single cable, and any data destined for the network's final node must pass through all of the machines along the way.
- If the cable is damaged, all computers connected down the line cannot communicate with the network.
- The advantage of a bus topology is that it requires the least amount of cabling.
- A collapsed ring topology has a hub, router, or switch as the central node, with the rest of the nodes connected as branches.
- The ring runs internally within the central device, and there are plug-ins for cable management.
- An individual cable attaches each computer to the central node.
- In a mesh topology, the network nodes are connected to as many other nodes as possible, creating a richly connected architecture, or mesh.
- Nodes in this architecture work together to ensure that data reaches its destination as quickly and efficiently as possible.
- Many alternative nodes can transfer data if one node fails, making this design more resilient.

How does a computer network work?

Every device in the network works together. For example, a typical office network connected to a WAN works like this:

- End devices like computers, laptops, and printers are connected to switch ports through cables.
- These end devices have an IP address, subnet mask, and gateway configured; the gateway is usually the router.
- Multiple switches on different floors are connected through fiber.
- Users are placed into multiple VLANs to segregate traffic on the LAN; each VLAN has a distinct set of IP subnets.
- Wireless access points connect to switches to provide WiFi access to users.
- All the switches then connect to a single Layer 3 switch, or to two Layer 3 switches or routers for redundancy.
- That router connects to the WAN through a telecom provider's device.
- Routers connect the LAN to a WAN to reach other offices or the Internet.
- On the other side of the network is usually a data center where firewalls, routers, switches, and servers are placed; users access those servers to transfer files, browse the Internet, and so on.

This article has discussed the basic definition of a computer network, its advantages, disadvantages, and uses. We then looked at different computer network types and their examples, and ended the article with a sample computer network. Let me know if you want me to add anything else to this article. Please share this article so that it reaches the maximum number of people. You can also take one of the many good computer networking courses online to jump-start your networking career.
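As a small illustration of the client/server model described in the architecture section, the sketch below runs a TCP server and client on one machine using Python's standard socket module. The port number is arbitrary, and a real service would loop to handle many clients.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050      # loopback address; port chosen arbitrarily

def server() -> None:
    """Accept one client and echo its message back with a prefix."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()  # block until a client requests service
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"server got: " + data)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                     # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))       # the client initiates the request
    cli.sendall(b"hello")
    print(cli.recv(1024).decode())  # -> "server got: hello"
```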
Approach To The "Things" In The Internet Of Things Is Crucial For Designing An IoT Product When designing products and systems for the Internet of Things (IoT), manufacturers often tend to focus on only a single aspect of the process: embedding wireless communications, or establishing a cloud connection, or writing web- or mobile-based software to control an IoT device. Taking an isolated approach to the IoT, however, is a huge mistake. Enabling security, performance, reliability, in-field updates, and long-term maintenance requires a cohesive, end-to-end approach—both as a design paradigm and when planning to extract the real value of the IoT, through analysis of the data generated by connected devices. The Importance of ‘Things’ in the IoT Most often, the focus in IoT is on the “Internet” part of the phrase. But it’s the approach to the “Things” that is crucial for designing an IoT product or system, industrial or otherwise. Effective implementations require expertise and experience in the full IoT spectrum—from the communications chips embedded in the connected device, through cloud computing and network security, to control via a web-based or mobile application. Plus, every piece of IoT technology needs to work seamlessly with all the other pieces to create an integrated, secure, end-to-end system. Parsing responsibility for different pieces to different entities, and assuming that the correct integrations and handoffs will occur, is a recipe for disaster. Most often, the focus in IoT is on the ‘Internet’ part of the phrase. But it’s the approach to the ‘Things’ that is crucial for designing an IoT product It’s also important to have a clear idea of the ultimate purpose of any IoT implementation, and to design all aspects of the product or environment so they work together to achieve that purpose. For example, what is the mix of legacy and new devices that need to be taken into consideration? What kinds of users will be interacting with the system, and how should the devices and applications be controlled? What level of responsiveness and availability is required? Design Considerations Across the Full IoT Spectrum IoT solutions must encompass connected device, cloud and control application technologies. 
Here are a few of the capabilities and kinds of technical expertise required for each of these areas:

For connected devices:
♦ Internet connection providing reliable, secure cloud connectivity
♦ A tested, provisioned, up-and-running IP stack
♦ A scheduler that is debugged, tested and production-ready
♦ Wireless connectivity (in industrial settings where wireless connectivity is both physically possible and advisable) that can handle addressing, connectivity to the network and to other devices, and security
♦ Device addressability, identification and authentication
♦ Methods for distributing and managing software updates, upgrades and bug fixes remotely and over time

For cloud connectivity:
♦ Tight network security, including access control; device and cloud authentication; data privacy; data security both at rest and in motion; firewalls; wireless encryption; and continual updating to the latest security standards
♦ Networking protocol support; in the case of IoT environments with legacy equipment, this means supporting both old, technically obsolete protocols as well as future protocols that have not yet been designed
♦ Reliability and resilience, including scheduling, automatic responses, and what happens in an IoT environment if the cloud connection is lost
♦ Scalability; it's one thing to offer a given level of security, performance, reliability or responsiveness for a handful of connected devices, but what happens when it's time to scale to thousands or millions of devices?
♦ Cloud service architecture optimized for the chosen cloud service providers, each of which offers different sets of services
♦ A data infrastructure able to handle and process the constant stream of data generated by always-on devices

For application control:
♦ The type of application control: web-based, mobile app or other
♦ Secure connectivity of the control application with the cloud
♦ Identifying and authenticating authorized users
♦ Discovery, selection, authentication and management of wireless LAN access points
♦ Management of user interactions with schedules, sensor input, actuators, data generation and analysis, and other automated activities

Taking a Platform Approach

It is highly unlikely that the owners of an IoT implementation are also experts in all aspects of IoT connectivity, especially given the recent emergence of IoT technologies and the long histories of most industrial and commercial equipment now being connected to the IoT. Expecting in-house teams to build their own IoT solutions is unreasonable and unwise. In most cases, it is just too difficult, expensive, time-consuming and risky.

A smarter approach is to start with an IoT platform that already integrates all the technologies, across devices, cloud and application control, required for a fully functional, secure, reliable IoT solution. An effective IoT platform should offer all of the functionality needed to deliver this top-notch level of IoT connectivity, including end-to-end standards-based security. Through data analytics, it should make it easy to track IoT products' real-world performance; improve everything from time to market to maintenance costs; and even deliver insights into how to improve the design and functionality of future IoT products and systems. And because the very concept of IoT connectivity is at odds with the existence of silos, an IoT platform should be able to operate as part of a robust ecosystem, integrating with third-party clouds and keeping pace with evolving standards, protocols and security updates.
No one can see into the future to know what features, technologies or intangibles will be important for the next generations of IoT solutions. For that reason, any IoT platform chosen today must adapt easily to changing innovations, standards, components and demands. Flexibility enables future-proofing—which allows IoT environments to remain viable without the expense and hassle of ripping and replacing each time a technology advances at the device, cloud or control application level.
Data encryption of a table in SQL Server is done at the column level, column by column, and uses symmetric encryption. The following steps detail how this process occurs within SQL Server:

- A database master key is created
- A self-signed certificate is created, which is protected by the database master key
- A symmetric encryption key to be used for the column-level encryption is created from (and protected by) the certificate
- The column is then encrypted with the EncryptByKey function, referencing the symmetric key by name once the key has been opened using the certificate

To decrypt data, the DecryptByKey function is called; it likewise requires the symmetric key to have been opened using the certificate.
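Here is a hedged, end-to-end sketch of those steps, driven from Python via pyodbc so the T-SQL statements stay visible. The connection string, key, certificate, table, and column names are all illustrative, and the encrypted column is assumed to be varbinary.

```python
import pyodbc

# Illustrative connection string; adjust driver/server/database for your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=SalesDb;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# 1. Create a database master key.
cur.execute("CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd';")
# 2. Create a self-signed certificate protected by the master key.
cur.execute("CREATE CERTIFICATE CardCert WITH SUBJECT = 'Card number protection';")
# 3. Create a symmetric key protected by the certificate.
cur.execute(
    "CREATE SYMMETRIC KEY CardKey WITH ALGORITHM = AES_256 "
    "ENCRYPTION BY CERTIFICATE CardCert;"
)

# 4. Encrypt the column; the key must be opened via the certificate first.
cur.execute("OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;")
cur.execute(
    "UPDATE dbo.Customers "
    "SET CardNumberEnc = EncryptByKey(Key_GUID('CardKey'), CardNumber);"
)

# Decrypt: DecryptByKey reads the key GUID embedded in the ciphertext,
# so it only needs the key to be open.
cur.execute(
    "SELECT CONVERT(varchar(32), DecryptByKey(CardNumberEnc)) FROM dbo.Customers;"
)
print([row[0] for row in cur.fetchall()])
cur.execute("CLOSE SYMMETRIC KEY CardKey;")
```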
Back to School Cyber Safety Tips from Century

The end of summer marks the return of students to school. As part of our commitment to helping schools and students, we would like to offer a few tips to help with computer security. These tips will help students who communicate with family and friends and complete school assignments using smart phones, tablets, and laptops. Based on an article recently released by the US Department of Homeland Security, there are simple steps that can help students stay safe while using their internet-connected devices.

How can I improve my computer's security?

The following are important steps you should consider to make your computer more secure. While no individual step will eliminate all risk, when used together, these defense-in-depth practices will strengthen your computer's security and help minimize threats.

Secure your router. When you connect a computer to the internet, it's also connected to millions of other computers, a connection that could allow attackers access to your computer. Although cable modems, digital subscriber lines (DSLs), and internet service providers (ISPs) have some level of security monitoring, it's crucial to secure your router.

Enable and configure your firewall. A firewall is a device that controls the flow of information between your computer and the internet. Most modern operating systems (OSs) include a software firewall, and the majority of home routers also have a built-in firewall. Refer to your router's user guide for instructions on how to enable your firewall and configure the security settings. Set a strong password to protect your firewall against unwanted changes.

Install and use antivirus software. Installing an antivirus software program and keeping it up to date is a critical step in protecting your computer. Many types of antivirus software can detect the presence of malware by searching for patterns in your computer's files or memory. Vendors frequently create new signatures to ensure their software is effective against newly discovered malware. If your program has automatic updates, enable them so your software always has the most current signatures.

Remove unnecessary software. Intruders can attack your computer by exploiting software vulnerabilities, so the fewer software programs you have installed, the fewer avenues there are for potential attack. If you don't know what a software program does, research the program to determine whether or not it is necessary.

Modify unnecessary default features. Like removing unnecessary software, modifying or deleting unnecessary default features reduces attackers' opportunities. Review the features that are enabled by default on your computer, and disable or customize those you don't need or don't plan on using.

Operate under the principle of least privilege. In most instances of malware infection, the malware can operate only with the privileges of the logged-in user. To minimize the impact of a malware infection, consider using a standard or restricted user account for day-to-day activities. Only log in with an administrator account, which has full operating privileges on the system, when you need to install software or change your computer's system settings.

Secure your web browser. When you first install a web browser on a new computer, it will not usually have secure settings by default, so you will need to adjust your browser's security settings manually.
Securing your browser is another critical step in improving your computer's security because it reduces attacks that take advantage of unsecured web browsers.

Apply software updates and enable automatic updates. Most software vendors release updates to patch or fix vulnerabilities, flaws, and weaknesses (bugs) in their software. Intruders can exploit these vulnerabilities to attack your computer, so keeping your software updated helps prevent these types of infections. When setting up a new computer, go to your software vendors' websites to check for and install all available updates.

What are some additional best practices I can follow?

Use caution with email attachments and untrusted links. Malware is commonly spread by users clicking on a malicious email attachment or a link. Don't open attachments or click on links unless you're certain they're safe, even if they come from a person you know.

Use caution when providing your information. Emails that appear to come from a legitimate source and websites that appear to be legitimate may be malicious. An example is an email claiming to be sent from a system administrator requesting your password or other sensitive information, or directing you to a website that requests your information. Online services (e.g., banking, ISPs, retailers) may request that you change your password, but they will never specify what you should change it to or ask you what it is.

Create strong passwords. Use the strongest, longest password or passphrase permitted. Don't use passwords that attackers can easily guess, like your birthday or your child's name. Attackers can use software to conduct dictionary attacks, which try common words that may be used as passwords. (A small sketch of generating a strong password follows at the end of this post.)

Century Business Products handles nearly forty percent of the school districts in South Dakota and many more in Iowa, Minnesota and Nebraska. Helping schools control costs, manage documents, provide tracking solutions, connect into Google accounts, and automate test scoring are some of the capabilities being provided with the Kyocera copier and printer line. Providing ways to be more aware of cyber security is another way Century is happy to give back to the communities it serves. For more information about Century Business Products and the many offerings available, contact us at CBPNow.com or call 800-529-1950.
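For the "create strong passwords" tip, here is a small sketch using Python's standard secrets module, which is designed for security-sensitive randomness. The demo word list is far too short for real use; a genuine passphrase needs a list of thousands of words.

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Password drawn uniformly from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(words, count: int = 5) -> str:
    """Multi-word passphrase; length, not cleverness, defeats dictionary attacks."""
    return "-".join(secrets.choice(words) for _ in range(count))

demo_words = ["river", "basalt", "quiet", "lantern", "orbit", "maple"]  # toy list
print(random_password())
print(random_passphrase(demo_words))
```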
About subnets and subnet masks

A subnet is a block of IP addresses in your network. In IPv4, an end point such as a mobile phone, desktop, or laptop is typically connected to only one subnet and has only one IP address. Network devices like routers and firewalls, however, can have multiple IP addresses, each in a different subnet.

A device knows another device is on the same subnet by comparing IP addresses against the network address (defined by the "subnet mask"): if all the IP address bits selected by the subnet mask match, the other device is on the same network. If a device needs anything not on its subnet, it forwards the data to a router, which serves as the external boundary of the subnet.

Subnet sizes were originally determined by a number of different variables, such as how many IP addresses the subnet needs (its "network class"). Today, rather than seeing a subnet mask written out in its full form, you'll typically see CIDR notation.

Using the subnet calculator

Calculating subnet ranges can be a tricky and time-consuming process, which is why most network professionals keep a subnet mask calculator bookmarked. Auvik's IP subnet calculator is designed to quickly calculate all the information you might need for both classful and classless addressing. You provide the calculator with an IP address and subnet mask, and it returns a number of details about that network. Outputs of the subnet calculator include the number of usable addresses and their IP ranges, as well as the network address and subnet mask. You can also see the outputs in two common notations: dot-decimal netmask and binary. We hope you get a lot of use out of our online subnet calculator!
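If you would rather script these calculations, Python's standard-library ipaddress module reproduces the calculator's main outputs; the network below is just an example.

```python
import ipaddress

net = ipaddress.ip_network("192.168.10.0/26")   # CIDR notation
print(net.netmask)             # 255.255.255.192 (dot-decimal mask)
print(net.network_address)     # 192.168.10.0
print(net.broadcast_address)   # 192.168.10.63
print(net.num_addresses - 2)   # 62 usable host addresses

hosts = list(net.hosts())      # excludes network and broadcast addresses
print(hosts[0], "-", hosts[-1])  # usable range: 192.168.10.1 - 192.168.10.62
```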
The Ultimate Guide to Virtual Reality

Also known as VR. Strap on a headset and enter a computer-generated world where interactions and movement bring person and machine closer together. As a simulation, experiences can vary from closely matching the real world through to fantastical environments limited only by the imagination of their developers.

VR is typically broken down into three categories: non-immersive, semi-immersive, and fully immersive. A non-immersive example would appear on a normal screen, semi-immersive might add a headset, while fully immersive combines a headset with motion-sensitive controls.

VR applications are extensive and include video gaming; training simulations in the military, healthcare, and business; and travel and leisure.
Although there are many different types and sizes of motors, most electric motors used in commercial and industrial applications can be classified as single-phase (1φ) or three-phase (3φ). Single-phase motors are commonly used in some commercial applications such as garage door openers, sump pumps, automatic doors, and ceiling fans. Three-phase motors are commonly used in commercial and industrial applications such as freight elevators, mixing equipment, conveying equipment, HVAC equipment, and material processing.

A single-phase (1φ) motor is a motor that operates on single-phase electricity. Single-phase motors have two main parts, the stator and the rotor. A stator is the stationary part of an AC motor that produces a rotating magnetic field. A rotor is the rotating part of an AC motor. The rotating magnetic field created by the stator rotates the rotor and shaft to deliver work. Other parts of a motor are the frame, end bells, bearings, and a fan connected to the motor shaft. See Figure 1. Single-phase motors are commonly used in commercial and industrial applications for smaller loads, such as automatic garage doors, small drill presses, industrial vacuum systems, sump pumps, and electric conduit benders.

Figure 1. Single-phase motors are commonly used in applications such as automatic doors, machine and maintenance tools, and electric conduit benders.

A three-phase (3φ) motor is a motor that operates on three-phase electricity. Three-phase motors are used in many commercial and most industrial applications because 3φ power is typically available in the facility and because 3φ motors are smaller, lighter, and less likely to malfunction than 1φ motors of similar horsepower. Three-phase motors are also more efficient at energy conversion than 1φ motors. Three-phase motors are similar in construction to 1φ motors. See Figure 2.

Figure 2. Three-phase motors are typically used in many commercial and most industrial applications because they are more energy efficient.

Dual-Voltage, 3φ Motors

Most 3φ motors are made so that they can be connected for either of two voltages, which enables the same motor to be used with two different power line voltages. A dual-voltage motor is a three-phase electric motor that is capable of operating on more than one system voltage. A dual-voltage, 3φ motor is the most commonly installed motor. A common dual-voltage rating is 208-230/460 V. This rating indicates that the motor is actually rated to operate at three different voltages: 208 V, 230 V, and 460 V. The 208 V and 230 V ratings are considered "low voltage" on the motor's nameplate wiring diagram; the 460 V rating is the "high voltage" rating. The higher voltage is preferred when a choice between voltages is available. The motor uses the same amount of power and gives the same horsepower output at either voltage, but as the voltage is doubled (230 V to 460 V), the current is cut in half. The reduced current enables the use of a smaller conductor size, which reduces the cost of motor installation.

A wiring diagram is used to show the terminal numbering system for a dual-voltage, 3φ motor. See Figure 3. Nine leads are brought out of the motor, marked T1 through T9, and can be externally connected for either of the two voltages. The terminal connections for high and low voltage are normally provided on the motor nameplate.

Figure 3. A dual-voltage motor is a 3φ electric motor that is capable of operating on more than one system voltage. The nine leads are connected in either series (high voltage) or parallel (low voltage).

To connect a motor for high voltage, L1 is connected to T1, L2 to T2, and L3 to T3. Lead T4 is connected to T7, T5 to T8, and T6 to T9. By connecting the leads in this configuration, the individual coils in phases A, B, and C are connected in series, with each coil receiving 50% of the line-to-neutral point voltage. The neutral point is the internal connecting point of all three phases.
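The voltage-doubled, current-halved relationship follows directly from the standard three-phase power equation, I = P / (√3 × V × PF × efficiency). The sketch below works the numbers for an assumed nameplate of 10 hp, 0.85 power factor, and 90% efficiency; those figures are our illustrative assumptions, not values from the text.

```python
import math

HP_TO_WATTS = 746.0

def line_current(hp: float, volts: float, pf: float = 0.85, eff: float = 0.90) -> float:
    """Three-phase line current: I = P_in / (sqrt(3) * V * PF)."""
    p_in = hp * HP_TO_WATTS / eff  # electrical input power in watts
    return p_in / (math.sqrt(3) * volts * pf)

for v in (230.0, 460.0):
    print(f"{v:.0f} V -> {line_current(10, v):.1f} A")
# 230 V -> ~24.5 A, 460 V -> ~12.2 A: doubling the voltage halves the current,
# which is why the high-voltage connection allows smaller conductors.
```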
Vulnerability assessment consists of defining, identifying, classifying, and prioritizing weak points in applications to provide an assessment of predictable threats and enable appropriate responses. Organizations of any size, all of which face an increased risk of cyber-attacks, can benefit from vulnerability assessments to make their environments more secure. However, large companies and other organizations that are subject to continuous attacks are forced to develop more robust analysis routines to protect their structures and applications. In companies where data is the main business asset, avoiding loss or leakage of information is a critical success factor.

In this blog post, we gathered some general information about vulnerability analysis in corporate structures, explaining a little more about the importance of this type of analysis and passing on tips to professionals who intend to take their first steps in the search for safer environments. Check it out!

Importance of vulnerability assessment

Security vulnerabilities in corporate environments are often used by crackers to gain harmful access to the company. It is therefore essential that technology professionals make efforts to identify weaknesses before they are exploited by malicious users.

A more comprehensive vulnerability assessment should take into account the applications used in the company's day-to-day activities: operating systems, software used for daily tasks (CRM, ERPs, file repositories, etc.), software aimed at corporate digital security (UTM firewall, NGFW, etc.), and software and applications developed by the company itself, whether to meet internal needs or for commercial purposes.

A vulnerability assessment identifies security weaknesses in the environment and in specific applications, serving as a parameter to assess risks and promote changes in the environment in search of safer structures. The analysis also helps the company understand its technological structure and gain maturity in terms of information security. All of this reduces the likelihood that virtual attacks will be successful.

How do vulnerability assessments work?

There are two main steps in a vulnerability analysis, depending on the assessment model adopted:

- Create profiles to locate possible weaknesses, which can range from incorrect configurations to complex defects with the ability to dramatically put an application at risk;
- Produce detailed reports with records of the vulnerabilities found, to allow immediate correction and learning for future occasions.

The vulnerability assessment can take on several profiles, depending on the type of application and the needs of the developer. One of the most widely used approaches is Dynamic Application Security Testing (DAST). This technique identifies security defects by feeding in fault conditions to find vulnerabilities in real time, and it is performed by running web applications under conditions of computational stress. Another common vulnerability analysis is Static Application Security Testing (SAST), an in-depth scan of an application's code that identifies vulnerabilities without running the program. DAST and SAST take different courses through a vulnerability analysis: while SAST can identify serious vulnerabilities in the source code, such as malicious scripts and SQL injection, DAST identifies critical flaws through external intrusion tests that occur while web applications are running.

Finally, one of the most widely used vulnerability assessment procedures is the penetration test, which involves security checks with specific objectives, adopting an aggressive approach that simulates an invasion. The penetration test can, for example, seek to discover information about a user or make an application unavailable, in addition to other objectives common to malicious users.

Conducting constant vulnerability analysis is the only way to ensure the highest security for your network and applications, and OSTEC can help you reinforce the integrity of your network and applications. Talk to one of our experts and find out how we can help you!
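To make the DAST idea concrete, here is a deliberately minimal sketch of a dynamic probe: it feeds a classic fault-inducing input to a running web application and looks for database error signatures in the response. It is a toy illustration only (the target URL and error strings are placeholders, nothing like a full scanner), and it should only ever be pointed at systems you are authorized to test.

```python
import requests  # third-party HTTP client: pip install requests

# Placeholder target; substitute a test instance you are authorized to probe.
TARGET = "http://localhost:8080/search"
PAYLOAD = "'"  # a lone single quote often surfaces unhandled SQL errors
ERROR_SIGNATURES = ["SQL syntax", "sqlite3.OperationalError", "ODBC", "ORA-"]

def probe(url: str, param: str = "q") -> None:
    try:
        resp = requests.get(url, params={param: PAYLOAD}, timeout=5)
    except requests.RequestException as exc:
        print(f"request failed: {exc}")
        return
    hits = [sig for sig in ERROR_SIGNATURES if sig in resp.text]
    if hits:
        print(f"Possible injection point: {url}?{param}= (matched {hits})")
    else:
        print("No obvious error signature; this alone proves nothing.")

probe(TARGET)
```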
Automation is the key to success; every company is expanding its expertise in this domain as organizations take on a more global approach. Given the problems of decision making, learning, and the need for adaptability when understanding data, data scientists introduced the concept of Machine Learning within the realm of Artificial Intelligence. These practices have brought about a radical change in modern business efficiency.

Artificial Intelligence is commonly understood as a platform that performs tasks intelligently, without the need for human intervention. Machine Learning, on the other hand, is a distinct part of the Artificial Intelligence world, encapsulating the know-how and the logic behind making the concept of Artificial Intelligence a real success story. Through the use of Machine Learning, machines can be taught to work more sensibly, allowing them to recognize different patterns and understand new circumstances with ease. Machine Learning has come to be used extensively, especially when it comes to providing analytical solutions to the world of consumers and technology. Through large systems of data, Machine Learning has been able to drive solutions that help create a more data-driven approach to solving problems.

How Artificial Intelligence is Changing Enterprise Applications

Corporate enterprises are showing a growing interest in the field of Artificial Intelligence and Machine Learning. From IBM's Watson to Google's DeepMind to AWS's multiple Artificial Intelligence services, there is a lot of activity happening in the market these days. Related areas of Machine Learning include Deep Learning, computer vision, and natural language processing (NLP). With all these innovations in place, computers can enhance their functionality, including pattern recognition, forecasting, and analytical decision-making. By incorporating Artificial Intelligence and Machine Learning techniques into day-to-day functions, large enterprises can automate everyday tasks and enhance their overall efficiency in the long run.

Here are some ways in which Machine Learning techniques are helping enterprises enhance their efficiency:

Improving Fraud Detection: Fraud detection has become the need of the hour, as more and more companies invest heavily in these new capabilities. With more companies falling prey to fraudulent practices, there is an imminent need to stay ahead in the game of fraud detection. With Artificial Intelligence and Machine Learning in place, companies and organizations can direct their resources toward enriching their fraud prevention activities and isolating potentially fraudulent activity.

Loss Prediction and Profit Maximization: When it comes to deriving insights from heaps of data, there is nothing better than Machine Learning for predicting losses and maximizing profits. The stronger the techniques, the more foolproof the loss prediction methodologies become in the long run.

Personalized Banking: In this era of digitization, everything is automated. For this reason, banks often seek to deliver customized, top-notch, personalized experiences to their customers to keep loyalty intact. By leveraging their data, banks can aim to unearth customer needs and fulfill them with the utmost precision and dedication.

Robotic Financial Advisors: Portfolio management has become the talk of the town these days, especially since robotic financial advisors stepped into the game. Clients can benefit immensely from this advancement, since the right opportunities are mapped to their portfolio needs and demands. Robotic applications are easy to merge with services such as Alexa and Cortana, allowing banks to provide exceptional service to their customers. Through this integration, financial institutions can hope to acquire new customers and also offer more individualized services to existing customers.

Next-Era Digital Traveling: Through the use of recommendation engines, travelers can receive new recommendations for their travel aspirations. Organizations can play a role by allowing customers to converse with chatbots, which are created through the use of Artificial Intelligence and Machine Learning. As predicted by Gartner, by the year 2020, 25% of all customer service operations will rely on virtual assistant technology.

Detailed Maintenance: With the help of predictive maintenance, industries like aviation, transportation, and manufacturing expect to be able to provide the best customer service in the market. Through the use of predictive models, such industries can accurately forecast prices and predict their losses, thereby reducing redundancies in the future.

With digitization paving the path to the future, there is bright scope for companies and organizations that invest heavily in these new-age technologies of Machine Learning and Artificial Intelligence. Third-party consulting services such as Idexcel are ready to help companies looking to take their first step with industry-leading consulting and cloud-advisory services. As we progress through the years, what will be interesting to note are the changes we will see in the various industries, as every sector aims to provide exceptional customer service in multiple ways.
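As a concrete taste of the fraud-detection use case described above, the sketch below trains an unsupervised anomaly detector on transaction amounts and flags outliers. It is a minimal illustration with synthetic data and scikit-learn's IsolationForest, not a production fraud system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" transactions around $60, plus a few anomalous spikes.
normal = rng.normal(loc=60, scale=15, size=(500, 1))
fraudulent = np.array([[950.0], [1200.0], [875.0]])
X = np.vstack([normal, fraudulent])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # +1 = inlier, -1 = flagged anomaly

flagged = X[labels == -1].ravel()
print(f"Flagged {len(flagged)} transactions; largest: {np.sort(flagged)[-3:]}")
# The three injected spikes land among the flagged amounts.
```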
Energy consumption in individual data centers is increasing rapidly, by 8% to 12% per year. The energy is used for powering IT systems (for example, servers, storage, and networking equipment) and the facility's components (for example, air-conditioning systems, power distribution units, and uninterruptible power supply systems). The increase in energy consumption is driven by users installing more equipment and by the increasing power requirements of high-density server architectures.

While data center infrastructure management (DCIM) tools monitor and model energy use across the data center, server-based energy management software tools are specifically designed to measure the energy use within server units. They are normally an enhancement to existing server management tools, such as HP Systems Insight Manager (HP SIM) or IBM Systems Director. These software tools are critical to gaining accurate, real-time measurements of the amount of energy a particular server is using. This information can then be fed into a reporting tool or into a broader DCIM toolset. The information will also be an important trigger for the real-time changes that drive real-time infrastructure: for example, a change in energy consumption may drive a process to move an application from one server to another.
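The "energy reading as a trigger" idea in the last sentence can be sketched in a few lines. Everything below is hypothetical: read_server_power_watts() and migrate_workload() are placeholders for whatever your monitoring agent (IPMI, Redfish, or a vendor API) and orchestration layer actually provide, and the power values are simulated so the sketch runs standalone.

```python
import random
import time

POWER_LIMIT_WATTS = 450.0  # assumed per-server threshold

def read_server_power_watts(server: str) -> float:
    """Placeholder for a real sensor read; simulated with random values."""
    return random.uniform(300.0, 500.0)

def migrate_workload(src: str, dst: str) -> None:
    """Placeholder for a real orchestration call; just logs the decision."""
    print(f"trigger: moving an application from {src} to {dst}")

def energy_watch(servers, spare, polls=3, interval_s=0.1):
    for _ in range(polls):
        for server in servers:
            if read_server_power_watts(server) > POWER_LIMIT_WATTS:
                # The energy measurement drives the placement change.
                migrate_workload(server, spare)
        time.sleep(interval_s)

energy_watch(["srv-01", "srv-02"], spare="srv-99")
```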
Cloud infrastructure refers to the hardware and software components that support the delivery of a cloud-based service. It differs from a traditional on-premise data center in its system architecture and service delivery model: a typical cloud infrastructure is located off-premise and accessed via the internet, and its hardware resources are virtualized and abstracted to allow for resource scaling, sharing, and provisioning among end users located at disparate geographic locations. In this article we describe the components, characteristics, and service and deployment models of a cloud infrastructure.

Components of cloud infrastructure

A cloud service consists of client-side systems such as PCs, tablets, and other devices that are connected with the backend data center components over the network. The components that constitute a cloud infrastructure include:

Network. The network is the communications channel that enables information to travel between backend cloud systems and front-end client devices. The computing process takes place at the off-premise cloud data center; users access and interact with these components over private or public networks that communicate data between the two ends of a cloud service. The data is typically the visual information, logs, or control functionality communicated across the network. The network consists of physical electrical components such as routers, wires, and switches, as well as software apps and hardware firmware that enable data communication as per the OSI data communications model.

Hardware. Cloud computing is accessed through a set of virtual hosts that represent a preconfigured set of physical hardware components. While end users don't control, manage, or operate hardware at the physical layer, underlying the layers of abstraction and software-defined infrastructure is a range of hardware assets common to any data center, whether cloud or on-premise. These hardware components include servers, processing units, GPUs, power supplies, memory, and other components. Allocation of these hardware resources can be scaled across users and IT workloads through virtualization and layers of abstraction, depending on the model of the cloud service. Redundancy and flexibility are built into the hardware systems to ensure that performance, security, and availability issues in the cloud infrastructure hardware do not impact end users.

Storage. The platform and storage system is a critical component of the cloud infrastructure stack. Cloud data centers store data across a variety of storage types and devices, keep backups, and scale storage allocation among users. The underlying hardware stack that supports the storage infrastructure is abstracted through virtualization or a software-defined architecture. This allows users to consume storage as a cloud service that can be added or removed without manually provisioning the hardware at every server. Common cloud storage formats include:

- Block storage. This approach splits data into blocks that are stored across different storage systems in multiple server arrays. The data is decoupled from the underlying hardware environment, and an individual storage volume can be split into multiple instances called blocks. Block storage is well suited to data that changes frequently, such as transactional workloads.
- Object storage. Data files are broken down into pieces, each provided with a unique metadata identifier, and stored as uncompressed, unencrypted data objects. The metadata information can be customized (unlike in block storage, which only allows a limited set of metadata attributes as identifiers). Object storage is well suited to largely static, unstructured data assets.
- File storage. This is associated with Network Attached Storage (NAS) and works similarly to the local storage on your PC. It is easily configurable within a single data path.

Virtualization. The cloud service is decoupled from its hardware resources, such as computing power and storage, using virtualization or another software-defined computing architecture. The hardware functionality is emulated within a software system: users get access to a virtual version of hardware resources such as platform, processing, storage, and networking. The hardware resources that enable a cloud service are operated and managed by cloud vendors. Users only pay for the services they consume, which means that issues in the hardware underlying a cloud service must not impact the Service Level Agreement (SLA). With virtualization, these limitations are masked from users of a cloud service, as IT workloads can be dynamically moved and allocated across a pool of hardware resources in virtualized, reconfigurable IT environments.

Characteristics of cloud infrastructure

The characteristics of cloud infrastructure are different from those of on-site data centers, thanks especially to the operating model of cloud services and the architecture cloud computing requires. These characteristics include:

- High scalability
- Flexible resource pooling
- On-demand self-service provisioning
- Multiple layers of security against cyber-attacks
- As-a-service delivery model charged on a consumption basis
- Highly available access to IT resources and services
- Managed by the cloud vendor

Deployment models for cloud infrastructure

Cloud infrastructure can be dedicated to individual users with isolated access, shared among multiple users, or a combination of both. The basic infrastructure resources are the same regardless of the deployment model but differ in how they are allocated between users. The three most common cloud deployment models are:

- Public cloud. A pool of virtualized resources shared among multiple users outside of the vendor's firewall. The service is distributed on an as-needed consumption basis and charged with a pay-as-you-go model. The vendor is responsible for managing and operating the public cloud.
- Private cloud. A cloud environment dedicated to an individual user and accessible via their own firewall. Private cloud environments are typically deployed as an on-premise but virtualized data center; an added layer of automation allows users to leverage the virtualized infrastructure as a private cloud service.
- Hybrid cloud. The integration of public and private cloud creates a hybrid cloud model. Workloads are portable across the hybrid cloud, allowing organizations to use the public cloud for cost-sensitive workloads and the private cloud for security-sensitive workloads.
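To ground the block/object/file distinction, here is a hedged sketch of the object-storage model using the AWS SDK for Python (boto3): each object is written whole under a key, with caller-defined metadata attached, exactly as the object-storage description above suggests. The bucket and key names are placeholders, and credentials are assumed to come from the environment.

```python
import boto3  # AWS SDK for Python: pip install boto3

s3 = boto3.client("s3")  # credentials from environment or instance profile

# Write one object: the value is stored whole, addressed by its key,
# with custom metadata attached (the object-storage hallmark).
s3.put_object(
    Bucket="example-bucket",          # placeholder bucket name
    Key="reports/2022/q3.json",       # object key, not a filesystem path
    Body=b'{"revenue": 1215}',
    Metadata={"owner": "analytics", "retention": "7y"},
)

# Reading returns the whole object; there is no in-place partial update.
# To change it, you upload a new version of the entire object.
obj = s3.get_object(Bucket="example-bucket", Key="reports/2022/q3.json")
print(obj["Body"].read(), obj["Metadata"])
```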
The research, conducted by the University of Sheffield, also strengthens evidence that smoking and sexual behaviour are risk factors for oral HPV infection, which can lead to oropharyngeal (throat) cancer.

This timely study, published in BMJ Open today, was led by Professor Hilary Powers, Dr. Vanessa Hearnden and Dr. Craig Murdoch and funded by the World Cancer Research Fund UK. It coincides with the announcement of a new UK HPV vaccine programme for boys, which will reduce the risk of HR-HPV related cancers.

Rates of oropharyngeal cancers are increasing worldwide, attributable to an increase in the rate of oral infection with HR-HPV. This new study of 700 men and women in Sheffield, the largest of its kind in England, looked for HR-HPV infection and also asked participants lifestyle questions relating to their sexual history and tobacco use.

A total of 2.2 per cent of people were infected with oral HR-HPV, with 0.7 per cent positive for HPV16 or HPV18. There are large variations in oral HR-HPV prevalence globally; this study showed lower rates than previous Scottish and US studies, which both found 3.7 per cent of individuals positive for oral HR-HPV.

Former smokers were significantly more likely to be HR-HPV positive compared with those who had never smoked. The study also found that participants with a greater number of sexual or oral sexual partners were more likely to be HR-HPV positive.

Dr. Vanessa Hearnden, from the Department of Materials Science and Engineering at the University of Sheffield, said: "Previous studies have been US-focused or in smaller UK studies in London or Scotland. This is the first study in the North of England and found lower rates of oral high-risk human papillomavirus infection.

"We fully support the newly announced HPV vaccination programme for boys which will reduce the risk of HPV related cancers including throat cancer in men and will also provide further prevention of cervical cancers through herd immunity.

"However, we found the majority of individuals testing positive for high risk strains of HPV were actually positive for strains other than those covered by the current vaccine (HPV 16 and HPV 18). This shows the need to consider newer vaccines which protect against more HPV strains in the future and for individuals to be aware of lifestyle risk factors such as number of sexual partners and tobacco use."

Dr. Craig Murdoch, from the University of Sheffield's School of Clinical Dentistry, said: "Many people associate the HPV virus with cervical cancer but there is less recognition of the fact that HPV causes oropharyngeal cancer, and unfortunately, the prevalence of this cancer has increased dramatically in the past few years.

"The Sheffield Head and Neck Oncology Research Team are conducting research into HPV-related oral cancer in order to find better ways to treat this disease and improve quality of life."

Dr. Kate Allen, Executive Director of Science & Public Affairs for World Cancer Research Fund International, said: "This study confirms the importance of lifestyle risk factors in prevention of the disease and sheds new light on the rates of oral HR-HPV infection in England."

More information: Vanessa Hearnden et al, Oral human papillomavirus infection in England and associated risk factors: a case-control study, BMJ Open (2018). DOI: 10.1136/bmjopen-2018-022497. Journal reference: BMJ Open. Provided by: University of Sheffield.
Artificial intelligence (AI) is playing an ever greater role in our technocratic society. The automated collection and processing of data is making it possible to reproduce intelligent, or even human-like, behavior. Because the method has taken on such a central role, it is creating some major risks. This article discusses the extent to which AI can be hacked.

The discussion presented here applies to both strong and weak AI applications in equal measure. In both cases, input is collected and processed before an appropriate response is produced. It doesn't matter whether the system is designed for classic image recognition, a voice assistant on a smartphone, or a fully automated combat robot. The goal is the same: to interfere with the intended process. This kind of disruption is any deviation from the ideal behavior, or, in other words, from the goal, expected outcome, or normally observed outcome during the development, implementation, and use of an AI application. Image recognition might be tricked into returning incorrect results; illogical dialogs might be prompted with a voice assistant; or the fundamental behaviors of a combat robot might be deliberately overridden.

INPUT AND SENSOR TECHNOLOGY

The purpose of any AI application is to produce a reaction, which requires a certain initial state. With traditional computer systems, the available data provides the initial state. This data can be obtained using an input device (e.g. keyboard or mouse), data media (e.g. hard drive, external disk), or via network communication (e.g. Bluetooth, Internet). AI solutions increasingly rely on sensors that are replacing, or at least complementing, traditional input devices: for example, microphones for voice input and cameras for image input. Other kinds of sensors, for measuring temperatures, speeds, or centrifugal forces, may be used as well.

If a person interferes with the intended type of input, this can affect the subsequent data processing. The frequencies of voice commands must first be analyzed to recognize voice input as relevant; background noise can be just as disruptive as unexpected linguistic constructs. In the case of imaging analysis, invisible noise can be used to trick the recognition process. This type of manipulation cannot be seen by the naked eye and requires complex technical analyses to detect. Here, too, there are methods designed to foil this sort of spoofing.

Data is processed using algorithms coded specifically for this purpose, and logical errors can result in undesirable states. Identifying and avoiding logical errors, at least when compared to traditional vulnerabilities such as buffer overflows and cross-site request forgery, involve an excessive amount of work. Usually, only a formal data flow and logical analysis can help in such cases. To minimize the effort required here, code should ideally be kept as simple as possible. Complexity is security's biggest enemy, and this is true both for traditional software and for AI developments. Small, modular routines can help make testing manageable. This also requires doing away with the integration of complex external libraries, which of course creates an obstacle for many projects; after all, not every voice or image recognition application can be developed from the ground up and in the expected level of quality. If external components are involved, their security must be carefully tested during an evaluation process. This also entails discussing future viability, which requires examining what is needed to maintain the components. Unmaintained components, or ones that are difficult to maintain independently, can become a functional and security liability.

AI applications depend on technology stacks that use a range of technologies and their corresponding implementations. They are normal pieces of hardware and software, with the usual shortcomings one might expect. Security holes occur in AI systems just as frequently as they do in operating systems or web applications. However, they are often more difficult to exploit and require unorthodox methods. Targeted attacks can be used to identify and exploit vulnerabilities in the individual components. After all, it is not always possible to simply enter an SQL command from a keyboard which can then be reused directly somewhere else. Instead, the individual input mechanisms, and the transformations that occur in the subsequent processing steps, must be taken into consideration. It is just as likely, however, that the database connection is exposing a voice assistant to an SQL injection vulnerability. Whether or not this can be exploited using voice input depends on whether and how the necessary special characters can be introduced. From the hacker's perspective, voice input can hopefully be used for the text input of a single quotation mark, for example. Popular voice assistants (Siri, Alexa, Cortana) support communication on the device with both voice and text input, which can greatly simplify traditional hacking techniques. This approach makes it possible to exploit all kinds of vulnerabilities, although hacking techniques that work with complex meta-structures, such as cross-site scripting, tend to be more complex due to the unconventional communication methods involved.

The capacity to learn is an important aspect of AI. Productive systems usually require an initial training phase, during which they are trained in a distinctly isolated context that creates a foundation for understanding the problem and developing initial solution approaches. During this phase, the primary hazard of malicious manipulation comes from insiders who provide the initial training data or carry out the process directly. An obvious example of such manipulation would be blatant tampering with content (e.g. photos of dogs instead of cats), but there are also more subtle methods, which may only be discovered later, if at all. Depending on where the initial training data comes from, it can also be manipulated by third parties first. If, for example, training data uses images taken from the Internet or social media, certain undesirable effects can creep in.

However, a more likely and harder-to-manage target is learning mechanisms that work during live operation: for example, self-teaching systems that develop their understanding based on current data or the results generated in the process. If a hacker has control over this process, they can "train the system to death". Voice recognition that uses dynamic learning to better understand social and personal idiosyncrasies (e.g. accents) can, for example, be tricked into adopting precisely the indistinct language variation as the standard for its comprehension, reducing the quality of recognition in the long term. Many chatbots with self-learning, context-neutral algorithms fall prey to these types of attacks. Microsoft's Twitter bot Tay is one particularly egregious example: within a few hours, it was flooded with racist and sexist content, transforming it into an absurd and offensive parody of its original family-friendly character. Microsoft was forced to take down and reset the bot to minimize the damage.

The development of sound and secure AI requires meeting the same criteria required of traditional hardware and software development. Defensive programming aims to prevent external factors from creating states that are undesirable, counterproductive, or even harmful. Input validation is just as important as it is with a conventional web application; perhaps the only difference is that the original input is in a form that only uses conventional string or byte structures after transformation. The validity of the input, calculations, and output needs to be fully validated. This applies to the form (structure) as well as the content (data). Unexpected deviations must either be sanitized or rejected. This is the only way to directly or indirectly prevent manipulation of the processing.

Self-learning systems always call for a certain measure of skepticism. Input must be verified to ensure it is trustworthy and of high quality, so that potentially poor or malicious input is excluded or only has a marginal effect. Only input that is classified as legitimate must and should be allowed to affect the outcome. This can, of course, compromise flexibility and adaptability.

Artificial intelligence is one of the main factors driving modern data processing. Due to all the hype, people often forget that they are dealing with highly complex software constructs. This complexity should not be underestimated because, most importantly, it can lead to security problems. For this reason, an extra defensive development strategy should be put into practice. The approaches traditionally used in hardware and software development need to be applied, even if the possibilities for attack initially appear non-existent or are assumed to be quite difficult to exploit. Just because attack scenarios are thought to be unorthodox or unpopular certainly does not mean that they won't happen sooner or later.
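As a closing illustration of the input-validation advice above, consider a voice assistant whose transcribed command eventually reaches a database. The sketch below is our own example, not from the article; it shows the two defensive moves that neutralize the spoken-single-quote trick described earlier: an allowlist check on the transcribed text, and a parameterized query so the transcript is never spliced into SQL.

```python
import re
import sqlite3

ALLOWED = re.compile(r"^[a-z0-9 ]{1,64}$")  # allowlist: reject the unexpected

def lookup_contact(transcript: str) -> list:
    text = transcript.strip().lower()
    if not ALLOWED.fullmatch(text):
        raise ValueError("rejected: transcript contains unexpected characters")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")
    conn.execute("INSERT INTO contacts VALUES ('alice', '555-0100')")

    # Parameterized query: the transcript is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name, phone FROM contacts WHERE name = ?", (text,)
    ).fetchall()

print(lookup_contact("Alice"))          # [('alice', '555-0100')]
# lookup_contact("alice' OR '1'='1")    # raises ValueError instead of injecting
```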
India's Smart Cities Plan Could Impact the Environment

The Indian government's vision to create 100 new smart cities to support the rapidly growing urban population could have a significant detrimental impact on the environment, a study has warned. Researchers led by Hugh Byrd, professor and specialist in urban planning at the University of Lincoln, UK, said the detrimental environmental impact would increase at a greater rate than the population.

"The pursuit of cities to become 'smart', 'world-class', 'liveable', 'green' or 'eco' has been promoted alongside increased population densities and urban compaction. This planning goal will reach a point where resources are inadequate for the fully functioning metabolism of a city," said Byrd.
First published May 2005 by Brian Carrier; reproduced with permission from The Sleuth Kit Informer, Issue 18.

The output of many TSK tools is relatively easy to understand because each tool has a specific focus. For example, the output of fls is a list of file names and corresponding inode addresses. There are two tools, fsstat and istat, which contain a lot of information, and the type of information varies by file system. These tools provide the details of a file system or metadata structure, respectively. This article covers the output of the fsstat command when run on a FAT file system.

The fsstat command gives the general information about a file system. This information is typically located in the boot sector or superblock of a file system and does not apply to any specific file or directory. Examples of the information in these data structures are the size of the data units, the number of data units in the file system, and the number of metadata structures. When using Autopsy, this information is shown under the Image Details tab. The fsstat output is broken up into sections, and each file system type has a different number of sections. The FAT output has four sections; the first three are based on the file system, content, and metadata categories of the basic file system model that I use, and the last section contains a graphical representation of the file allocation table.

Before we look at the fsstat output, we will briefly cover some of the basics of the FAT file system. This is not an extensive description of FAT and is intended only as a review. For more details, refer to the FAT specification (or wait until File System Forensic Analysis is released in March 🙂 ).

The first sector of a FAT file system contains the boot sector data structure, where the basic administrative information can be found. This data structure describes the layout of the file system. Following the boot sector is the first file allocation table structure (FAT). The FAT is used to determine the next cluster in a file and to determine which clusters are not being used. In FAT12/16 the FAT immediately follows the boot sector, but in FAT32 there are reserved sectors in between. A backup FAT typically follows the first FAT. After the last FAT is the start of the Data Area, which is where the directory and file contents are stored.

The layout of the Data Area is different for FAT12/16 and FAT32. With FAT12/16, the sector after the last FAT is the beginning of the root directory, which has a fixed size. After the root directory is the first cluster, which is given an address of 2 (there are no clusters 0 and 1). With FAT32, cluster 2 starts in the sector following the last FAT, and the FAT32 root directory can be located anywhere in the Data Area. The Data Area extends until the end of the file system.

File and directory content are stored in clusters, which are groups of consecutive sectors. As previously stated, the first cluster is located dozens or hundreds of sectors into the file system, after the boot sector and FATs. TSK does not use cluster addresses in its output because it is too confusing. If TSK were to use clusters, it would need two different addressing schemes: to examine the data in the FAT you would need a sector address, while to examine a file's contents you would need a cluster address. TSK simplifies this by showing and using only sector addresses (even if the file system stores the address as a cluster address).

I will now go through the output from an example FAT32 image. The output for a FAT12/16 file system is slightly different from a FAT32 image because it has a different layout. The output shown is from the new TSK version 2.00, which is slightly different from version 1.73.

```
# fsstat -o 63 disk-1.dd
FILE SYSTEM INFORMATION
File System Type: FAT32
```

The above line represents what file system type TSK thinks the image is. It determines this using the algorithm given in the specification, which is based on the total number of clusters.

```
OEM Name: MSDOS5.0
Volume ID: 0x6c2e5cb8
```

The boot sector contains several labels that were created when the file system was formatted. The OEM name is typically based on which OS or application formatted the file system; another common name is "MSWIN..". The Volume ID is assigned by the creating application and should be based on the time of creation, but I have not always found this to be the case.

```
Volume Label (Boot Sector): NO NAME
Volume Label (Root Directory): FAT-VOLUME
```

A user can assign a volume label to a file system, and there are two locations where it can be stored: one in the boot sector and the other in a special entry in the root directory. I have found that Windows XP will store the label only in the root directory, as can be seen above. The boot sector has the name "NO NAME" and the root directory value contains the actual name "FAT-VOLUME". Refer to DFTT Test #9 for an example of how a file was hidden using the special volume label.

```
File System Type Label: FAT32
```

The boot sector contains a label for the file system type, but it is not really used and does not need to be correct. In this case it is correct, but the FAT spec gives an algorithm that should be used to determine the actual type (which was given in the first line of the output).

```
Next Free Sector (FS Info): 173952
Free Sector Count (FS Info): 61258528
```

The two values shown above exist only in a FAT32 file system. They give the next cluster that can be allocated and how many free clusters exist (TSK has converted them to sector values).

```
Sectors before file system: 63
```

The above value shows how many sectors may exist before the file system. This typically corresponds to the offset of the partition in which the file system is located. In this case, the file system is in the first partition of the disk, in sector 63.

The next set of information contains the layout of the file system. The stars on the left describe the hierarchy: lines with '**' are located inside the previous range marked with a '*'. Because the Data Area layout is different for FAT12/16 versus FAT32, the output is slightly different for a FAT12/16 file system.

```
File System Layout (in sectors)
Total Range: 0 - 61432496
* Reserved: 0 - 33
** Boot Sector: 0
** FS Info Sector: 1
** Backup Boot Sector: 6
```

The above lines show us that there are 61,432,497 sectors in the file system and that sectors 0 to 33 are in the reserved area (the area before the start of the first FAT). The original boot sector is in sector 0 and the backup is in sector 6. The FS Info data structure is unique to FAT32 and contains information about the next available cluster and the number of free clusters.

```
* FAT 0: 34 - 15024
* FAT 1: 15025 - 30015
```

This file system has a primary and a backup FAT, and their sector ranges are given in the above output.

```
* Data Area: 30016 - 61432496
** Cluster Area: 30016 - 61432479
*** Root Directory: 30016 - 30047
** Non-clustered: 61432480 - 61432496
```

After the second FAT is the start of the Data Area. This is FAT32, so the first cluster starts in the first sector of the Data Area, which is sector 30,016. The size of the Data Area is not a multiple of the cluster size, and therefore there are 17 sectors at the end of the Data Area that are not allocated to a cluster (because we will later see that each cluster is 32 sectors). The FAT32 root directory can be located anywhere in the file system, and its location is given in the above output.

```
Range: 2 - 982439426
Root Directory: 2
```

The above output is the second major section in the fsstat output, and it contains the metadata-related information. The FAT file system does not assign addresses to its metadata structures, which it calls directory entries, so TSK must create its own addressing scheme. The above output shows the valid range of addresses; these are the addresses you would use with the icat or istat tools. The maximum address is based on the total number of sectors in the file system. In this case, the valid range is 2 to 982,439,426, and the root directory has been assigned an address of 2.

```
Sector Size: 512
Cluster Size: 16384
Total Cluster Range: 2 - 1918828
```

The above output is the third major section in the fsstat output, and it contains general content-related information. We see that the sector size is 512 bytes and each cluster is 16KB. The total cluster range is also given, even though TSK shows all addresses in sectors.

```
FAT CONTENTS (in sectors)
30016-30047 (32) -> EOF
[...]
30176-30303 (128) -> EOF
30304-30335 (32) -> EOF
30336-30367 (32) -> 85984
30368-30399 (32) -> EOF
[...]
85984-86015 (32) -> 133920
[...]
133920-133951 (32) -> 146304
```

The above output is the fourth major section of the fsstat output, and it continues for many more pages. It is a graphical representation of the primary FAT structure. Each line corresponds to a "cluster run"; the FAT structure contains a pointer to the next cluster in the file. The first line in the output is for cluster 2. We know this because there is no cluster 0 or 1, and we saw in the layout that the first sector with a cluster is sector 30,016. We also know that each cluster is 32 sectors in size (which is the length of the first run). Therefore, cluster 2 has been allocated to a file and does not point to any other clusters, which means it is the last cluster in the file.

The third line shows that a file has allocated four consecutive clusters, so we see a run of 128 sectors. There should be four separate entries, but TSK groups them together when an entry points to the next consecutive cluster. The fifth line shows a run for a file that was not able to allocate consecutive clusters: after it allocated the cluster in sectors 30,336 to 30,367, it allocated the cluster that starts in sector 85,984. If we jump to the FAT entry for that cluster, we see that it again jumps to the cluster in sector 133,920; this repeats for several more jumps. If a cluster is not allocated, it will not have an entry in the fsstat output.

For basic investigations, only some of the information in the fsstat output may be needed. But it provides a wealth of information if you are looking for hidden data, recovering deleted files, or verifying your results. Future issues will cover the fsstat output for other file systems.
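The layout arithmetic above is simple enough to script. The following sketch is my illustration (not part of TSK); it recomputes the example image's layout from the boot-sector values implied by the output: 34 reserved sectors, two FATs of 14,991 sectors each, 32-sector clusters, and 61,432,497 total sectors.

```python
def fat32_layout(total_sectors: int, reserved: int, num_fats: int,
                 sectors_per_fat: int, sectors_per_cluster: int) -> None:
    for i in range(num_fats):
        lo = reserved + i * sectors_per_fat
        print(f"FAT {i}: {lo} - {lo + sectors_per_fat - 1}")

    data_start = reserved + num_fats * sectors_per_fat
    data_sectors = total_sectors - data_start
    clusters = data_sectors // sectors_per_cluster   # whole clusters only
    leftover = data_sectors % sectors_per_cluster    # non-clustered tail
    cluster_end = data_start + clusters * sectors_per_cluster - 1

    print(f"Data Area: {data_start} - {total_sectors - 1}")
    print(f"Cluster Area: {data_start} - {cluster_end} "
          f"({leftover} non-clustered sectors)")
    print(f"Cluster range: 2 - {2 + clusters - 1}")

# Values implied by the example fsstat output in this article.
fat32_layout(61432497, reserved=34, num_fats=2,
             sectors_per_fat=14991, sectors_per_cluster=32)
# FAT 0: 34 - 15024, FAT 1: 15025 - 30015, Data Area: 30016 - 61432496,
# Cluster Area: 30016 - 61432479 (17 non-clustered), clusters 2 - 1918828.
```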
References

Microsoft FAT32 File System Specification. Available at: http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx
Brian Carrier. File System Forensic Analysis. Available at: http://www.digital-evidence.org/fsfa/
Craig Wilson. Volume Serial Numbers & Format Verification Date/Time. Digital Detective White Paper. Available at: http://www.digital-detective.co.uk/documents/Volume%20Serial%20Numbers.pdf
Brian Carrier. Digital Forensic Tool Testing Image #9 - FAT Volume Label Test #1. Available at: http://dftt.sourceforge.net/test9/index.html

Reproduced with permission from The Sleuth Kit Informer, Issue 18.
Physical servers were usually under-utilized and took time and effort to deploy. These servers also consumed data center space, power and cooling. Virtualization reduced hardware costs, reduced the environmental requirements by saving on power and cooling and improved the utilization of physical hardware in comparison to dedicated server environments. Of course there are tradeoffs in using virtual infrastructure. Operating system licenses are not free; there are additional management costs to consider and staff have to be trained and gain experience in the technology. However, the net effect of virtualization has been to allow many companies to reduce their overall computing costs. Virtualization, as it continues to gain adoption, has a very close relationship with storage. Virtual machines are just data and have to be stored somewhere. On standalone virtual servers, this can be achieved simply by using DAS (locally attached) disks. But if more advanced virtualization features (and increased availability) are required then virtual machines will normally be stored on SAN or NAS arrays. Using a dedicated storage array provides a number of significant benefits to the user, namely: • Increased resilience – storage arrays operate with a high degree of redundancy with multiple power supplies, fans and other internal components. • Scalability– storage arrays are highly scalable devices and can be extended dynamically without outages. • Shared access– storage arrays allow shared access to data, which isn’t generally practical or possible with DAS. Shared access is essential in increasing resiliency. • Increased functionality– storage arrays offer advanced features such as replication and snapshots that can offload processing power from the virtual server. Virtual guests all contained on a single LUR; LUN failover keeps replicated LUNS together. Storage Features Key to Virtualization As virtual servers increase in levels of adoption, they will encompass more and more mission critical systems. It will be essential, therefore, for virtual guests to be replicated between locations to improve availability and protect in the event of a disaster scenario at the local data center. Replication can be performed synchronously (where I/O is confirmed as written to both source and target arrays before being confirmed to the host), or asynchronously(where the source array confirms I/O complete to the host without waiting to confirm the target array has received the data). Storage arrays are highly efficient at replicating data, which has been a key feature of these devices for 15-20 years. However, for SAN arrays, the way in which data is presented to the host can cause an issue with replication. Storage is typically presented to virtual servers as large LUNs (Logical Unit Number). This is a single unit of storage as far as the array is concerned, but from the virtual server perspective it will be used to hold many virtual guests. For example, a 500GB LUN could hold 25 virtual guests of 20GB per guest. Each of these virtual guests will have their own service levels and DR requirements. The storage array will replicate the entire LUN and in the event of a “failover” scenario where operation moves to the remote array, it is expected that the primary LUN will not be accessed, and all host I/O will occur on the remote LUN. 
This may be fine for a complete DR failover but doesn’t address other operational requirements, for example, where a single VM guest is moved to another location for operational reasons rather than a DR outage. In this instance, the VM guest may have to be manually moved to another data store, removing the benefit of using replication within the array. We will see later that this issue is being addressed by new functionality in the array. The replication issue touches on another closely related subject, that of data mobility. Where replication provides benefits in increasing levels of availability, the wider subject of data mobility becomes more important in virtual environments. By mobility, we refer to the ability to move data (and by definition, virtual guests) around a storage enterprise, both within the data center and between separate data centers. Mobility within the data center is an essential feature for a number of reasons: • Balance – It enables workloads to be balanced across multiple storage arrays. • Change configuration – It enables storage devices to be added into and taken out of a configuration as required, for instance, when arrays are being replaced. • DR scenario – It enables data to be moved to other locations for pre-emptive DR planning or in the event of an actual DR scenario. The ability to move data around the infrastructure is becoming more important in delivering today’s virtual environment and will be even more important in the future as workload moves into “the Cloud.” Without advanced storage array functionality, data movement would have to be performed by the virtual server, consuming CPU and network resources. We will see later that storage vendors are working toward solutions that will enable arrays to move large amounts of data independent of the virtual server itself. Virtualizaton and Backup The move to virtualized environments meant a new approach to backup. Although possible, it is impractical to backup each virtual guest individually. Instead, functionality within the virtual server enables backup images of each virtual guest to be taken and accessed by a separate backup server. To improve the performance of this feature, the storage array can perform the snapshot process, offloading CPU and I/O resources from the virtual server. Performance is clearly an issue in backup, however, performance in general is a key storage array feature for virtualization. Storage arrays have been developed to process large volumes of I/O, which can be either sequential or random. In virtual environments the I/O is typically random and this doesn’t work well with DAS storage, which would require more expensive, high-speed drives. Storage arrays can benefit from large numbers of disks (as it is a shared environment), dedicated cache and multiple I/O connectivity, all of which both improves performance and delivers a more consistent I/O response time. As more workload is virtualized, the hypervisor itself becomes a significant part of the support effort, because it is the platform that is tied to the hardware itself. Boot from SAN enables the hypervisor to be disconnected from the hardware and allows a single hypervisor instance to be booted on any server; it also allows the hypervisor to be replaced with another instance that could be (for example) an upgraded version. By removing the boot device from the server and placing it on the SAN, the server holds no state information and so becomes a commodity. 
This is most easily demonstrated with the use of blade servers, where multiple physical servers exist in a single chassis; they can be added or removed from the blade infrastructure at any time. Ultimately, blade flexibility is served best with shared SAN storage.

Both hypervisor and boot disk stored on SAN; hypervisor can be booted from any physical server.

Why VAAI Is So Important

Although storage arrays already offer many important features to virtual environments, there are additional requirements not met by today's hardware. That is why VAAI (vStorage APIs for Array Integration) was developed for VMware vSphere. VAAI defines a set of API calls that are implemented within the storage array through amendments to the SCSI protocol. The most notable of these are the following:

• Block Zero – implemented as Write Same in SCSI, this pushes the task of zeroing out large blocks of data down to the array. In fact, Write Same could be used to write any values over a large range of data; however, it is most useful to vSphere for writing zeroed-out data when creating new virtual disks (VMDKs).
• Full Copy – implemented within the array as SCSI EXTENDED COPY, this feature allows bulk movement of data both within and between storage arrays, taking the load off the vSphere hypervisor when performing Storage vMotion or guest cloning functions.
• Hardware Assisted Locking (HAL) – this moves the SCSI hardware lock from the LUN to the block level, improving performance on certain vSphere operations that require locking for data integrity. HAL may also potentially resolve the issue of LUN replication, allowing I/O on both sides of a replicated LUN pair.

Virtualization and Storage Vendor Solutions

Storage vendors are starting to offer new features and products that specifically meet the needs of virtual environments.

Example 1: EMC VPLEX
EMC's VPLEX product virtualizes the storage LUN and permits I/O to either side of a replicated LUN pair. This enables virtual guests to be moved between storage arrays (typically in geographically distant locations) with no outage and without waiting for data to be replicated.

Example 2: Compellent Live Volume
Compellent's newly announced Live Volume feature enables a single logical LUN to be spread across multiple storage arrays. The LUN can be associated with one array and dynamically moved to another in order to meet workload balancing or DR requirements.

Virtualization and Storage: Summary

It's clear that as virtualization continues to have a greater importance in the enterprise, storage will form a critical part in delivering that infrastructure. The features of the storage array will continue to evolve and deliver better performance, availability and resilience than could be achieved using directly attached storage (DAS) alone. Storage and virtualization are, and will continue to remain, closely linked.
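To make the Full Copy offload described above concrete, here is a minimal Python sketch contrasting a host-driven copy with an array-offloaded one. It is a toy model, not a real SCSI or vSphere stack: the Array class, its extended_copy method, and the block counts are all invented for illustration.

```python
# Toy model of the VAAI Full Copy offload. Names are invented for
# illustration; this is not a real SCSI or vSphere API.

class Array:
    """A storage array holding named LUNs as lists of blocks."""
    def __init__(self):
        self.luns = {}

    def extended_copy(self, src, dst, start, count):
        # The array copies the extent internally; no data crosses
        # the host's storage adapters.
        self.luns[dst][start:start + count] = \
            self.luns[src][start:start + count]

def host_driven_copy(array, src, dst, start, count):
    """Pre-VAAI behaviour: every block is read into host memory and
    written back out, consuming host CPU and fabric bandwidth twice."""
    for i in range(start, start + count):
        block = array.luns[src][i]   # READ travels array -> host
        array.luns[dst][i] = block   # WRITE travels host -> array

def offloaded_copy(array, src, dst, start, count):
    """VAAI Full Copy: one command, no data movement through the host."""
    array.extended_copy(src, dst, start, count)

array = Array()
array.luns["gold-template"] = list(range(1024))  # source guest's blocks
array.luns["new-guest"] = [0] * 1024             # freshly created LUN

offloaded_copy(array, "gold-template", "new-guest", 0, 1024)
assert array.luns["new-guest"] == array.luns["gold-template"]
print("clone complete without host I/O")
```

The contrast is the point: host_driven_copy pushes every block through the server's CPU and storage fabric twice, while the offloaded version issues a single command and lets the array do the work, which is exactly the saving VAAI's Full Copy primitive brings to Storage vMotion and cloning.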
What is Clickjacking?

A common clickjacking definition is a type of attack in which the victim clicks on links on a website they believe to be a known, trusted website. However, unbeknown to the victim, they are actually clicking on a malicious, hidden website overlaid onto the known website.

Cursorjacking is another version of clickjacking. In cursorjacking, attackers trick users by adding a custom cursor image that confuses victims into clicking on parts of the page they have no intention of clicking.

In more advanced clickjacking scenarios, victims do more than just click. They might even enter usernames, passwords, credit card numbers, and other personal information into what they believe to be common sites they use frequently. But instead, their information is being scraped by a malicious, hidden website.

Also known as a user redress interface attack, the term clickjacking was coined by Jeremiah Grossman and Robert Hansen in 2008. While clickjacking might seem like spoofing (in which the cyberattacker recreates websites or landing pages in an effort to trick users into thinking the fake pages are the original, legitimate pages), it is much more sophisticated. The website the victim is looking at in a clickjacking scheme is the real website of a known, trusted entity. However, the attacker has added an invisible overlay over its content using various HTML technologies, including custom cascading style sheets (CSS) and iframes, which allow content from one website to be ported onto another.

Types of Clickjacking Attacks

There are several different types of clickjacking attacks. Due to the open nature of the internet and the continued advances in web frameworks and CSS, clickjacking attacks can become quite complex.

Complete Transparent Overlay

Perhaps the most common clickjacking strategy, this method overlays a legitimate webpage over a malicious page. The legitimate page is loaded into an invisible iframe, and the user has no idea that a malicious page is underneath.

Cropping

Cropping, which is trickier to program, occurs when the cyberattacker overlays only selected controls from the malicious page onto the legitimate page. The attacker could replace hyperlinks on the legitimate page with redirects, replace the text of buttons on the legitimate page with other language (thereby confusing the victim), or change the content in a way that misleads the user. This could be many things, but cursorjacking, mentioned above, is an example.

Hidden Overlay

In this strategy, the cyberattacker creates a tiny iframe, perhaps as small as a 1x1 pixel, that can be positioned under the mouse cursor and is undetectable to the victim. As such, any click will go to the underlying malicious page.

Click Event Dropping

Click event dropping might be a more obvious attack to a user. In this strategy, the attacker sets the CSS pointer-events property to none, which means clicking will seem to do nothing on the page. But in reality, the clicks are working on the malicious page underneath. Users should alert the webmaster when their continued clicking on the website's buttons or links does not work.

Rapid Content Replacement

For more sophisticated cyberattackers with significant know-how in user experience and behavior, rapid content replacement can be an effective strategy. In this scheme, overlays are covered up, removed for a fraction of a second to register a click, and then immediately replaced.
With this scenario, the user might not notice that they are clicking on a possibly malicious button or link because the object disappears so quickly.

Apart from using invisible overlays, there are other ways attackers can trick users into clicking unexpectedly malicious content.

Scrolling

In this scenario, the cyberattacker creates a legitimate dialog box or pop-up with a button partially off the screen. The buttons go to the malicious webpage underneath, but the box appears as a harmless prompt. The challenge for attackers in using this strategy is that the victim may have an ad blocker or pop-up blocker installed on their browser. The attacker will need to find a way to circumvent this. (Bogus ad-blocker extensions are yet another type of cyberattack.)

Repositioning

This is a type of rapid content replacement attack, in which the cyberattacker quickly moves a trusted user interface (UI) element while the user is focused on another portion of the webpage. The idea is to have the victim inadvertently click the moved element instead of focusing on reading, scrolling, or clicking something else on the page. Quick jumps or movements should be obvious to most users, and when this occurs, the employee should notify the webmaster and security team.

Drag and Drop

This is a clickjacking strategy that requires the user to do more than just click. The victim will need to fill out forms or perform another action. The web forms might look like those of the legitimate page, but when users fill out the fields, the data is captured by the cyberattacker via the malicious page underneath. The goal, as with any cyberattack, is to obtain personal or sensitive information without the victim's knowledge.

How to Prevent Clickjacking?

Luckily, there are several steps that an organization can take to protect its employees, customers, and other stakeholders from a clickjacking attack. These protections are typically undertaken by the web development team, as they are server-driven and require some coding and knowledge of the functionality of the web.

Move the Current Frame to the Top

Also known as X-Frame-Options, this strategy relies on the response header (code used to indicate whether a browser should be allowed to render a page in a frame, as an embed, or as an object) when webpages are pushed through the browser. The header provides the webmaster with control over the use of iframes or objects. With this extra code in the header of a webpage, the webmaster can decide whether the inclusion of that webpage within a frame is prohibited. X-Frame-Options was first implemented in Internet Explorer 8, and support is not consistent across all browsers. The web development team will need to take this into consideration when implementing X-Frame-Options.

Have the Right Content Security Policy (CSP)

A CSP lets the webmaster declare, through the frame-ancestors directive, which origins (if any) are permitted to frame a page's content. When used together, a CSP and X-Frame-Options can serve as a strong defense against a clickjacking attack.

Consider Browser Add-ons

Some web browsers have add-ons that halt scripts from running once there is a Hypertext Transfer Protocol (HTTP) request. With the scripts stopped in their tracks, the cyberattacker's code cannot be executed. This is a client-side strategy and requires employees to install an add-on on their browser. For added protection, they should install the add-on on all of their devices.

Add a Framekiller to the Website

A framekiller is a short piece of JavaScript that stops a page from being displayed inside a frame, typically by forcing the browser to reload the page as the top-level window.

Use a Strong Cybersecurity Solution

A robust platform such as the Fortinet next-generation firewall (NGFW) can protect a network from multiple threats and attack vectors.
A security platform can recognize suspicious behavior and block threats like clickjacking in real time.

Employee education is imperative, as employees or other users can provide another way to notify the security team of a clickjacking attack that is underway. As part of overall cybersecurity training, employees need to be on alert if they suspect that clicks, or parts of what they believe to be the normal interface of the website, seem suspicious.

How Fortinet Can Help

An end-to-end security solution is necessary to thwart cyberattacks. Clickjacking schemes target the security vulnerabilities of an organization's website, taking portions of legitimate webpages and overlaying them over a malicious site intent on exploiting user trust. As threat vectors multiply and increase in sophistication, the Fortinet NGFW can serve as organizations' first line of defense. It filters all traffic and provides intrusion protection for an organization's network across the entire threat landscape.

Clickjacking FAQs

What is clickjacking?
Clickjacking is a type of attack in which the victim clicks on links on a website the victim believes to be a known, trusted website. However, they are actually clicking on a hidden website that has been overlaid onto the known website.

How dangerous is clickjacking?
Clickjacking is another threat vector and has the potential to enable a security breach.

Is XSS clickjacking?
XSS, or cross-site scripting, is a related attack but can be much broader in scope. In XSS, cyberattackers exploit vulnerabilities in web servers and inject malicious client-side scripts without users' knowledge.

What is used to prevent clickjacking?
A range of strategies can be used to prevent clickjacking, including implementing a Content Security Policy (CSP), coding for X-Frame-Options, adding browser add-ons, using an advanced firewall system, and educating employees.
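As a concrete illustration of the X-Frame-Options and CSP defenses described above, here is a minimal sketch using Python's Flask framework. Any web framework with a response-header hook works the same way; the route and policy values are illustrative choices, not Fortinet recommendations.

```python
# Minimal sketch of server-side anti-clickjacking headers with Flask.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_anti_clickjacking_headers(response):
    # Legacy header: refuse to be rendered inside any frame or iframe.
    response.headers["X-Frame-Options"] = "DENY"
    # Modern equivalent: the CSP frame-ancestors directive.
    # 'none' forbids all framing; a trusted origin could be listed instead.
    response.headers["Content-Security-Policy"] = "frame-ancestors 'none'"
    return response

@app.route("/")
def index():
    return "This page refuses to be framed by another site."

if __name__ == "__main__":
    app.run()
```

Sending both headers covers older browsers that only understand X-Frame-Options as well as modern ones that honor the CSP frame-ancestors directive.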
British police have expressed concern that using AI in their operations may lead to increased bias and an over-reliance on automation. A study commissioned by UK government advisory body the Centre for Data Ethics and Innovation warned that police felt AI may "amplify" prejudices. Fifty experts were interviewed by the Royal United Services Institute (RUSI) for the research, including senior police officers.

Racial profiling continues to be a huge problem: more young black men are stopped than young white men. The experts interviewed by the RUSI are worried these human prejudices could make their way into algorithms if they're trained on existing police data.

It's also noted that individuals from disadvantaged backgrounds tend to use more public transport. With data likely to be collected from the use of public transport, this increases the likelihood of those individuals being flagged.

The accuracy of facial recognition algorithms has often been questioned. Earlier this year, the Algorithmic Justice League tested all the major technologies and found that the algorithms particularly struggled with darker-skinned females. A similar report published by the American Civil Liberties Union focused on Amazon's so-called Rekognition facial recognition system. When tested against members of Congress, it incorrectly flagged those with darker skin more often. Both findings show the potentially devastating societal impact if such technology was rolled out publicly today.

It's good to hear British authorities are at least aware of the potential complications. The RUSI reports that experts in the study want to see clearer guidelines established for acceptable use of the technology. They hope this will provide confidence to police forces to adopt such potentially beneficial new technologies, but in a safe and responsible way.

"For many years police forces have looked to be innovative in their use of technology to protect the public and prevent harm and we continue to explore new approaches to achieve these aims," Assistant Chief Constable Jonathan Drake told BBC News.

"But our values mean we police by consent, so anytime we use new technology we consult with interested parties to ensure any new tactics are fair, ethical and producing the best results for the public."

You can find the full results of the RUSI's study here.
Many industries depend on their remote water tanks. Ensuring that they remain full and free from leaks requires continual visits. But visits to these distant locations cost money related to employees, fuel, and vehicle wear and tear. If visits are scheduled around average estimates of water use, they can fail to respond to actual conditions. This results in wasted drive time, without eliminating the chance of tanks going dry or springing leaks. Without eyes on the site, leaks can worsen, and tanks can end up dry for long periods.

The solution is remote water tank monitoring. By installing remote terminal units (RTUs) on-site, companies can monitor water levels from their central offices. This allows them to respond to problems as soon as they arise, instead of waiting for the next scheduled visit. When they do respond, they'll know what they're dealing with ahead of time, enabling technicians to travel equipped with the right tools for the job.

Water is heavy and difficult to transport. Often, water needs for populations or various industries require more water than is available from the local ecosystem. In other cases, water tanks are necessary to hold contaminated water until it can be treated and safely returned to circulation. As such, water tanks are essential equipment for several critical industries in the U.S. and abroad, including water and wastewater utilities, agriculture, oil and gas, and mining.

Both drinking water and wastewater systems rely on water tanks to fulfill their duties. Public water systems use water tanks to hold water in reserve before it is used or to preposition water for easy use, such as on the top of skyscrapers. Wastewater systems rely on holding tanks and ponds to treat large amounts of water before releasing it or returning it to circulation.

Agriculture, particularly open-range ranching, often occurs in remote areas. Some, such as the deserts of Arizona, New Mexico, and West Texas, have limited access to water and rely on water tanks to keep livestock adequately hydrated. Water tanks are also used to irrigate crops in dry seasons. Advanced monitoring functions can be used for digital farming, controlling irrigation at the touch of a button.

Among other uses, gas companies rely on holding tanks and ponds to store fracking fluid before and after use. Keeping a close eye on holding tanks and ponds prevents environmental contamination, as well as ensuring regulatory compliance.

Mining operations can use large amounts of water to unearth or process minerals. As in oil and gas operations, this water must be kept in holding tanks or ponds before and after it has been used operationally. Monitoring water prevents leaks and contamination while demonstrating compliance.

Remote monitoring solutions for each industry follow similar patterns, helping protect valuable assets and the environment while preventing fines from regulators. The tools employed are often the same as well: remote terminal units and master stations.

RTUs are multi-capable devices, able to monitor several conditions at once. In addition to watching water levels, RTU sensors can detect environmental and equipment conditions around the tank site. Monitoring these conditions provides tank owners with substantial insight into the minute-to-minute conditions of their expensive equipment far in the field.

Companies and utilities with more than ten tanks to monitor will find that individual alerts coming from RTUs for each condition at each tank will become more of a nuisance than a benefit. To monitor large numbers of tanks, companies employ master stations, otherwise known as alarm masters.
Master stations provide several benefits, including displaying all tank, environmental, and equipment conditions on a central computer screen. Master stations enable operations to go from single-instance monitoring to monitoring entire networks. On top of aiding water tank monitoring, they can also receive reports from other important equipment and assets used by utilities, agriculture, oil and gas, and mining companies. This cohesive, encompassing network coverage provides significant benefits, preventing breakdowns and improving maintenance results.

Remote water tank monitoring is an indispensable ability for any industry which requires large amounts of water. Combined with remote monitoring of other large assets and infrastructure, it is a critical ability for companies seeking to minimize their maintenance costs and maximize their reliability.

DPS Telecom provides reliable remote monitoring equipment for water tanks and other important assets. Our experts can help you develop your remote monitoring system and provide important installation insights. Reach out and get a quote today!
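As a rough sketch of the monitoring loop described in this article, the following Python fragment shows an RTU-style poller that reads tank levels and raises alarms on low water or a sudden drop that could indicate a leak. The thresholds, tank names, and the random stand-in "sensor" are invented for illustration; a real RTU would read physical sensors and report to a master station over a protocol such as SNMP, Modbus, or DNP3.

```python
# Simplified sketch of an RTU polling tanks and raising alarms.
import random
import time

LOW_WATER_FT = 4.0   # hypothetical refill threshold
LEAK_DROP_FT = 1.5   # hypothetical drop-per-poll that suggests a leak

def read_level_ft(tank_id):
    # Stand-in for a real level-sensor reading.
    return round(random.uniform(2.0, 12.0), 2)

def poll_tanks(tank_ids, last_levels, alert):
    for tank in tank_ids:
        level = read_level_ft(tank)
        if level < LOW_WATER_FT:
            alert(tank, f"low water: {level} ft")
        previous = last_levels.get(tank)
        if previous is not None and previous - level > LEAK_DROP_FT:
            alert(tank, f"possible leak: dropped {previous - level:.2f} ft")
        last_levels[tank] = level

def alert(tank, message):
    # A real master station would page a technician or raise an alarm.
    print(f"[ALARM] tank {tank}: {message}")

if __name__ == "__main__":
    levels = {}
    for _ in range(3):  # three polling cycles
        poll_tanks(["T-101", "T-102"], levels, alert)
        time.sleep(1)
```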
The Secure Connector includes a basic firewall to allow users to create policies to handle traffic passing through the device. Traffic using protocols other than TCP or UDP is blocked.

The Secure Connector firewall uses a zone concept: the interfaces on the Secure Connector are assigned to a firewall zone, such as LAN, WAN, VPN, or Wi-Fi. Depending on the configuration, a firewall zone may contain zero, one, or multiple interfaces. Interfaces with a dual purpose, such as the Wi-Fi interface, are assigned to the firewall zone reflecting the current configuration. For example, if configured as a Wi-Fi client, the Wi-Fi interface is part of the WAN firewall zone. When configured as an access point, the interface is placed into the Wi-Fi firewall zone. The source and destination IP addresses are translated into source and destination firewall zones and then matched to the firewall rules. When a firewall rule matches, the action set is applied.

Secure Connector Firewall Rules

Firewall rules allow you to block or allow traffic between two firewall zones. Traffic must match both the source zone and destination zone for the policy of the rule to be applied. You can exempt a list of IP addresses from the source firewall zone by adding them to the exception list of the rule. For more information, see How to Create Secure Connector Firewall Rules.

Firewall management rules control access to the web interface and to the command line via SSH, and can also block or allow ICMP traffic. For SSH access to be granted, you must also enable SSH in the Secure Connector Editor. For more information, see How to Create Secure Connector Firewall Management Rules.

Source NAT rules rewrite the source IP address of a connection with the IP address used by the interface associated with the destination zone. You can create source NAT rules for the following zones: WAN, LAN, and Wi-Fi. For more information, see How to Create Secure Connector Source NAT Firewall Rules.

Destination NAT rules allow you to redirect traffic from a source zone that is addressed to a specific destination IP address and port to another IP address. For more information, see How to Create Secure Connector Destination NAT Firewall Rules.
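The zone-matching logic described above can be sketched in a few lines of Python. This is an illustrative model only: the zone names, rule fields, and the default-deny fallback are assumptions made for the example, not Barracuda's actual configuration format or documented default behavior.

```python
# Illustrative model of zone-based firewall rule matching.
from dataclasses import dataclass, field

@dataclass
class Rule:
    src_zone: str
    dst_zone: str
    action: str                                       # "allow" or "block"
    src_exceptions: set = field(default_factory=set)  # exempt source IPs

# Map each interface to its firewall zone.
INTERFACE_ZONES = {"eth0": "WAN", "eth1": "LAN", "wlan0": "WIFI"}

RULES = [
    Rule("LAN", "WAN", "allow"),
    Rule("WIFI", "LAN", "allow", src_exceptions={"10.0.0.99"}),
]

def evaluate(src_if, dst_if, src_ip, proto):
    if proto not in ("tcp", "udp"):   # non-TCP/UDP is always blocked
        return "block"
    src_zone = INTERFACE_ZONES[src_if]
    dst_zone = INTERFACE_ZONES[dst_if]
    for rule in RULES:
        if rule.src_zone == src_zone and rule.dst_zone == dst_zone:
            if src_ip in rule.src_exceptions:
                continue              # exempt IPs are not matched by the rule
            return rule.action
    return "block"                    # assumed default deny

print(evaluate("wlan0", "eth1", "192.168.2.4", "tcp"))  # allow
print(evaluate("wlan0", "eth1", "10.0.0.99", "tcp"))    # block (exempted)
print(evaluate("eth1", "eth0", "10.0.0.7", "icmp"))     # block (protocol)
```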
Smart transportation is closely tied with smart city initiatives. What happens when a vehicle travels from a smart city to one that might not be so smart? Will autonomous vehicles bridge the intelligent transportation divide, or will autonomous vehicles rely on a smart infrastructure to support functions critical to improving public safety, traffic efficiency, and sustainability?

- Why does transportation need to be smarter? – Bill and Leonard discuss the motivations and the objectives that the auto industry and municipalities have for smart transportation. What is the carrot that is getting parties excited, or is the carrot not big enough to bring about the digital transformation we have all been expecting in some form?
- What are the challenges facing smart transportation? – Smart transportation has taken a patently vehicle-centric focus as of late, with excitement about self-driving cars that will function as robotaxis chauffeuring us around. But progress on smart transportation solutions that could benefit the broader driving community has been nothing short of stagnant, with only a few pilot projects of any note. What are the persistent challenges that get in the way of the smart future of transportation?
- What kind of mindset change is needed to make progress? – Do we have the right attitude about smart transportation? Could it be that we don't think broadly enough about the benefits of modernizing our transportation system and how we will make that modernization happen?
- What can the tech and "IoT" industry do to deliver the promise? – Bill and Leonard brainstorm what the tech community and industry can do to make IoT a thing for transportation. We discuss the approaches that can be considered to get investments and commitments to stick, given the short-term political and administrative agendas of our cities, counties, states, and our nation.

Our reThink Media Center features the YouTube video podcast of this episode.
Tens of thousands of papers involving A.I. are published each year, but it will take some time before many of them make their potential real-world impact clear. Meanwhile, the top funders of A.I. — the Alphabets, Apples, Facebooks, Baidus, and other unicorns of this world — continue to hone much of their most exciting technology behind closed doors.

Copyright by www.digitaltrends.com

In other words, when it comes to artificial intelligence, it's impossible to do a rundown of the year's most important developments in the way that, say, you might list the 10 most listened-to tracks on Spotify. But A.I. has undoubtedly played an enormous role in 2020 in all sorts of ways. Here are six of the main developments and emerging themes seen in artificial intelligence during 2020.

It's all about language understanding

In an average year, a text-generating tool probably wouldn't rank as one of the most exciting new A.I. developments. But 2020 hasn't been an average year, and GPT-3 isn't an average text-generating tool. The sequel to GPT-2, which was labeled the world's most "dangerous" algorithm, GPT-3 is a cutting-edge autoregressive natural-language-processing neural network created by the research lab OpenAI. Seeded with a few sentences, like the beginning of a news story, GPT-3 can generate impressively accurate text matching the style and content of the initial few lines — even down to making up fabricated quotes. GPT-3 boasts an astonishing 175 billion parameters — the weights of the connections that are tuned in order to achieve performance — and reportedly cost around $12 million to train.

GPT-3 isn't alone in being an impressive A.I. language model spawned in 2020. While it was quickly overtaken in the hype cycle by GPT-3, Microsoft's Turing Natural Language Generation (T-NLG) made waves in February 2020. At 17 billion parameters, it was, upon release, the largest language model yet published. A Transformer-based generative language model, T-NLG is able to generate the necessary words to complete unfinished sentences, as well as generate direct answers to questions and summarize documents.

First introduced by Google in 2017, Transformers — a new type of deep learning model — have helped revolutionize natural language processing. A.I. has been focused on language at least as far back as Alan Turing's famous hypothetical test of machine intelligence. But thanks to some of these recent advances, machines are only now getting astonishingly good at understanding language. This will have some profound impacts and applications as the decade continues.

Models are getting bigger

GPT-3 and T-NLG represented another milestone, or at least significant trend, in A.I. While there's no shortage of startups, small university labs, and individuals using A.I. tools, the presence of major players on the scene means some serious resources are being thrown around. Increasingly, enormous models with huge training costs are dominating the cutting edge of A.I. research. Neural networks with upward of a billion parameters are fast becoming the norm.

[…] Read more: www.digitaltrends.com
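To see the seed-then-continue pattern these autoregressive models use, here is a minimal sketch with GPT-2, the freely downloadable predecessor (GPT-3 itself is only reachable through OpenAI's hosted API), via the Hugging Face transformers library. The prompt and generation length are arbitrary choices for the example.

```python
# Minimal sketch of autoregressive text generation with an open model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

seed = "Scientists announced today that"
result = generator(seed, max_new_tokens=40, num_return_sequences=1)

# The model extends the prompt one token at a time, each choice conditioned
# on everything generated so far: the "autoregressive" part.
print(result[0]["generated_text"])
```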
Millions of workplace accidents happen each year, many of which are within the power of employers to prevent. According to Getac's research, while overall worker safety has improved over the last decade, some industries seem to have plateaued or even reverted in recent years.

According to the research, the situations considered most hazardous by the surveyed environment, health, and safety (EHS) professionals are what you might expect. These include working at heights, working with electricity, and operating machinery. However, these factors alone often don't cause accidents. Secondary hazards tend to factor in. Common factors listed include human error or fatigue, neglect of safety procedures, and inadequate training.

EHS professionals see three technologies with the most potential for addressing these secondary hazards: AR/VR, AI and big data, and wearables. All three of these technologies either require, or can be made more effective or cost-effective by being coupled with, a rugged tablet for the near future.

AR/VR

Augmented reality (AR) does not require a head-mounted device (HMD). It can be done via an overlay on a tablet screen at less cost. If you do use an HMD for either augmented or virtual reality (VR), having at least some of the computing work done in a rugged tablet instead of the HMD, or some other piece of specialized wearable tech, is likely to be more cost-effective. It is less likely to inspire apprehension in those facing the prospect of having a computer on their head or their body.

AI & Big Data

Advanced robotics enabled by artificial intelligence (AI) and big data has tremendous potential to improve worker safety. However, the move from Industry 4.0 to Industry 5.0 will see the widespread deployment of collaborative robots (cobots). These are more functionally flexible than the typical industrial automatons in use today. A rugged tablet will prove an effective way to work with them.

Wearables

Wearables can monitor a worker's health and vital signs, including signs of fatigue or waning attention. However, if a worker is in a remote location, a wearable might not have the long-range transmission capabilities needed to stay connected. Such capabilities would drain a wearable-sized battery, as would the broadband capabilities required to enhance its functionality and security with edge computing.

However, many industries that might use such worker-safety technologies, coupled with rugged tablets, might also require these technologies to be intrinsically safe.

What Is Intrinsic Safety?

Intrinsic safety is a protection technique applied to electrical and wiring products to ensure safe operation in hazardous areas by limiting the energy available for ignition. Potential ignition sources include electrostatic discharge, hot surfaces, heat, friction, and sparks. A product rated as "intrinsically safe" is designed and certified to be incapable of producing heat or spark sufficient to ignite an explosive atmosphere.

When a Getac tablet is rated as intrinsically safe, it is rated to ATEX/IECEx standards (depending on the geographic region) for intrinsic safety in Zone 2/22 conditions under regular operation. Zone 2 means the atmosphere is not typically explosive, but there is a risk if fumes (normally petrochemical fumes) escape. In Zone 22, the atmosphere is also not ordinarily explosive. However, there is potential if an excess of powder or dust enters the atmosphere due to some problem.
With the digital transformation of industries that operate with such hazards increasingly taking on a mobile component, the need for intrinsically safe technologies is growing. But beyond the transformation drive, sectors that use intrinsically safe tablets, such as oil & gas, mining, and certain utilities, face other pressures beyond worker safety driving adoption.

During the pandemic, performing specific tasks in pairs or teams has become problematic. It is becoming increasingly necessary to do these tasks alone, at least physically. But with a ruggedized tablet, a solo worker can receive spoken or visual instructions from HQ, or even an AR overlay. And as an added benefit, that expert at HQ can lend their expertise to many more tasks this way than they ever could in the field.

One type of remote guidance that warrants special attention is isolated inspection. Facilities such as mines and oil rigs must be periodically inspected for safety. They can be challenging to examine under normal circumstances if they are in remote locations, and the pandemic has made things harder, with no guarantees as to when it will end. But remote guided inspection, where a worker onsite carries around a tablet with a camera, remotely piloted by an offsite inspector, has filled this gap in many instances.

An Aging Workforce

Many hazardous industries will see a large portion of their workforce retire over the next few years. And employers don't want to train their replacements in classrooms. They want to train them in the field, in context, using remote guidance or video instruction. It is best to use an intrinsically safe device with long battery life and a bright screen that can be viewed and read under any conditions. This is all the more true for a new employee heavily dependent on a tablet for instruction, guidance, and reference.

There Are Other Features You'll Want

Intrinsic safety relates to explosion risk, but there are other features you'll want in an intrinsically safe tablet, because you may be using it for many different things over a long work shift.

Advanced Connectivity Capability

Intrinsically safe Wi-Fi access points are costly, and hazardous areas like oil refineries might have more ground to cover than is ideal for a Wi-Fi network. Another dangerous industry where Wi-Fi can be problematic is mining, with its irregular, obstructive, shape-changing landscape. For both industries, and many others, private mobile networks can offer the range and comparable network capacity you need for mobile video streaming and augmented reality overlay.

Full Work Shift Operation with Powerful Batteries

In an area where sparks are a concern, you can't just plug your device into a local wall socket when it runs low on power. You need long battery life, preferably with the ability to swap out a drained battery without tools and without interrupting operation (hot-swapping), so you can stay in the field until the job is done.

There may be very little shade if you're in an open-pit mine, an oil field, or at a remote utility site. So, if you are receiving AR overlay instructions on your tablet's screen, direct sunlight can make them impossible to see if the screen is dim. At least 800 nits of brightness is recommended to guarantee viewability out in the open on a bright sunny day. A screen that bright, in turn, creates an even greater need for the long battery life mentioned earlier.
What Getac Offers

Getac offers various intrinsically safe tablets in different screen sizes, purpose-built and customizable for multiple industries and use cases. All are easily identifiable by the suffix "EX" (as in ATEX/IECEx) attached to their model names (e.g., F110-EX). Getac tablets are fully rugged devices built to withstand impact, extreme temperatures, water, dust, vibration, and many other forms of stress. They are built to the usual Getac standards and are available in a diverse variety of sizes and use cases.

Getac offers a pair of highly mobile intrinsically safe models running either Android (the 7" ZX70-EX) or Windows (the 8" T800-EX). Getac optimized these devices for portability and one-handed use, which makes them well suited to being your all-day mobile communications companion.

Getac built the UX10-EX with Windows for those who need a little bit of everything. Its 10" screen is large enough for comfortable media viewing and multitasking, especially when you attach its optional keyboard and/or office dock, yet not so large as to sacrifice the mobility you want in a device you may be carrying all day.

Media & Office Productivity

Getac's 11" and 12.5" models (F110-EX and K120-EX) are our workhorse models. These products offer screens sufficiently large for group presentation (made better with an optional kickstand), in-motion vehicular use (via optional mount), and a comfortable all-day typing experience (via optional attachable keyboard) if necessary.

To learn more about intrinsic safety (ATEX/IECEx), click here.
BILLINGS, Mont. (AP) — U.S. officials on Thursday solicited outside help as they craft definitions of old growth and mature forests under an executive order from President Joe Biden. The U.S. Forest Service and Bureau of Land Management issued a notice seeking public input for a "universal definition framework" to identify older forests needing protection.

Biden in April directed his administration to devise ways to preserve older forests as part of the government's efforts to combat climate change. Older trees release large volumes of climate-warming carbon when they burn.

Biden's order called for the Forest Service and Bureau of Land Management over the next year to define and inventory all mature and old growth forests on federal land. After that, the agencies must identify the biggest threats those forests face and come up with ways to save them.

There's disagreement over which trees to count. Environmentalists have said millions of acres of public lands should qualify. The timber industry and its allies have cautioned against a broad definition over concerns that could put new areas off limits to logging.

The Forest Service manages 209,000 square miles (541,000 square kilometers) of forested land, including about 87,500 square miles (226,000 square kilometers) where trees are older than 100 years. The Bureau of Land Management oversees about 90,600 square miles (233,000 square kilometers) of forests.
Countries are paying increasing attention to the ethics of artificial intelligence (AI). The UK has appointed Roger Taylor, co-founder of healthcare data provider Dr Foster, as the first chair of its new Centre for Data Ethics and Innovation, and started a consultation on the centre's remit. Singapore is establishing an advisory council on the ethical use of AI and data, chaired by the city-state's former attorney-general VK Rajah, with representatives of companies and consumers. Australia's chief scientist has called for more regulation of AI, and the National Institution for Transforming India, a government think tank, has proposed a consortium of ethics councils.

Some companies are doing likewise. Google's chief executive, Sundar Pichai, recently published a list of AI principles, including the ideas that the technology should be socially beneficial, should avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable, incorporate privacy by design and uphold high scientific standards.

Earlier this year, more than 3,000 of Google's staff signed a letter to Pichai arguing against the company's involvement in Project Maven, a US military programme using AI to target drone strikes. In his list of principles, Pichai responded that Google would not design or deploy AI in weapons, other technologies likely to cause overall harm, "surveillance violating internationally accepted norms" or anything which contravenes international law and human rights. "We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," he added.

Google's principles are worthy of consideration by IT professionals elsewhere, says Brhmie Balaram, a senior researcher for the RSA (the Royal Society for the encouragement of Arts, Manufactures and Commerce). "Developers have a lot of power in this field to act ethically, to ensure their technology is being used to benefit society rather than hinder or harm," she says.

Research for a recent RSA report on ethical AI, carried out by YouGov and involving 2,000 Britons, found that just 32% were aware that AI is used in decision-making, with only 14% familiar with the use of such systems in recruitment and promotion and 9% in criminal justice, such as whether to grant bail. For both areas, 60% of respondents opposed such usage, with a lack of empathy and compassion the top reason. The report recommended that organisations should discuss the use of AI with the public.

Balaram says this has a number of potential benefits, not least that the General Data Protection Regulation (GDPR) gives data subjects the right to an explanation of automated decision-making. Working with members of the public can help to develop explanations that are meaningful to them. "Citizens are really interested in the rationale for a decision," she says, rather than the exact algorithm. "They want to understand why they've received that decision and the processes behind it."

More generally, Balaram says the public can give technologists advance warning of what kinds of automated decision-making might cause an outcry. Groups of experts can suffer from groupthink, so the RSA has used panels which are both demographically representative of the population and also represent those who tend to oppose as well as support technology. "Being exposed to that diversity can change thinking sometimes," she says.

Ethics as oil?
A major ethical problem is capable of derailing an entire project, as happened to the English National Health Service's patient data-sharing Care.data project. "Ethics used to be perceived as something that was an obstacle," says Mariarosaria Taddeo, a research fellow at the Oxford Internet Institute and deputy director of its digital ethics laboratory. "We have moved from a moment when ethics was considered grit in the engine, to a moment where ethics is oil in the engine."

She says IT professionals can implement high ethical standards through being mindful of unintended consequences, including testing and examining possible deviations in a system's design. An important task is to examine training data for bias, as this is likely to result in similarly biased outcomes. But as it is impossible to predict all unintended consequences, AI should also be supervised when it is in operation. "It's very difficult to predict AI, very difficult to explain how AI works," Taddeo says. "We have to have a human on the loop, supervising and possibly intervening when things go wrong."

Such supervision should also include broader auditing, monitoring and stress-testing of AI systems already in operation, by users and regulators. The biased judgements generated by US criminal justice IT company Northpointe's Compas algorithm could have been picked up by law enforcement users, rather than eventually exposed by public interest publisher ProPublica. "It's stopping to trust blindly in technologies," says Taddeo, adding that the fact that many AI systems work as "black boxes", with no way to see how they are reaching decisions, strengthens the case for users to be critical and questioning.

Taddeo adds that it is often hard to identify who is legally responsible when something goes wrong in an AI system. But this means the ethical burden is widened: "It means everyone involved in the design and development shares some of the responsibility."

Joanna Bryson, a cognitive scientist at the universities of Bath and Princeton, adds that trying to shift responsibility for decisions to algorithms and software is an untenable position. "People mistake computers for maths. There's nothing in the physical world that's perfect," she says, and that includes AI systems built by humans. "Someone is responsible."

There are pragmatic reasons for assuming responsibility for AI decisions, including that regulators and courts may allocate this anyway, with Germany's Federal Cartel Office criticising Lufthansa for trying to "hide behind algorithms" when the airline's fares rose sharply after rival Air Berlin went out of business.

What's an IT professional to do?

At a basic level, Bryson says programmers and others making decisions on setting up systems should work carefully – such as by writing clean code, ensuring personal data is stored securely and examining training data for biases – and document what they do. But IT professionals should also consider the ethics of potential employers. Bryson tells the story of someone who considered a job with Cambridge Analytica, the data analytics company which collapsed after exposure of its use of data gathered through Facebook. Employees at the company warned the person they would hate themselves and would be helping people they would despise. Bryson recommends discussing potential employers with peers, which was what helped the person decide not to take the Cambridge Analytica job.
Specific things to investigate include whether the organisation has clear pathways of accountability, including a manager who will listen and accept information from you; good practice on security, including board-level responsibility and an executive tasked with listening to employees' concerns; and good general practice, including programmers having access to the code base.

Ethics advisory boards

A recent report from the House of Lords select committee on artificial intelligence recommended that organisations should have an ethics advisory board or group. Its chair, Timothy Francis Clement-Jones, a Liberal Democrat peer, says that such boards could work in a similar way to ethics committees within healthcare providers, which decide on whether research projects go ahead. Some leading AI companies have set up such committees, including Alphabet-owned DeepMind, although its membership is not disclosed.

Clement-Jones adds that a diverse workforce – diverse in demographics, but also in educational background, as those with a humanities background will consider things differently to scientists – should help. There is also potential in using AI techniques that require less data, and so have less need for vast archives reaching back many years that are more likely to contain biases.

Clement-Jones, who is also London managing partner of law firm DLA Piper, says that despite the legal sector's early use of AI, it is not necessarily a model for other sectors. "Trust is already there between client and lawyer," he says. "We tell our clients we are using AI for a project upfront." Furthermore, AI is typically used for low-level work, such as looking for useful material in a mass of documents.

Significant ethical problems arise when major decisions on people are made by AI, such as considering disease symptoms or deciding whether or not to offer a service. Clement-Jones says this is more likely in mass-market service industries such as healthcare and finance. He believes that industry regulators such as the UK's Financial Conduct Authority are best placed to examine this. "We're not keen on regulating AI as such," he says as a parliamentarian; industry-specific legislation makes more sense, such as the automated and electric vehicles bill currently going through Parliament.

Aside from laws and regulations, academics and companies are collaborating to establish ethical frameworks for AI. The Leverhulme Centre for the Future of Intelligence, which involves several universities and societies, is one of the partners in Trustfactory.ai, an initiative set up at the International Telecommunication Union's AI for Good summit held in Geneva in May. Huw Price, the academic director of the centre, says the aims of Trustfactory include broadening trust in AI by users – including disadvantaged ones – and building trust across international, organisational and academic disciplinary borders.

Price says there are numerous discussions on ethics in AI, and professionals working in the field can take advantage of these. "There are lots of people starting to think about these things," he says. "Don't try to tackle it on your own, connect."

Several companies and not-for-profit organisations are connecting through the Partnership on AI, which brings together Amazon, DeepMind, Google, IBM, Microsoft and SAP with several universities, and campaigners including Amnesty and the Electronic Frontier Foundation.
Price says it is important that such partnerships do not just include Silicon Valley organisations, or those from the English-speaking world.

Price, who is also Bertrand Russell professor of philosophy at the University of Cambridge, says AI's ethical questions will get increasingly serious. It may be possible to build a drone the size of a bee, equipped with an AI-based facial recognition system and packed full of enough explosive to kill the person the AI system identifies. "It's the kind of issue where an individual might want to say 'no'," he says of someone asked to build one.

Some particularly dangerous weapons, such as landmines and biological weapons, have been banned by international agreement; similar lines could be drawn on what AI should do, perhaps outlawing interference in elections as well as more deadly applications.

Whatever choices are made, building an ethical framework for AI will need the involvement of IT professionals – and is likely to help them in their work, as well as with their consciences.
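One concrete form of the training-data bias check recommended earlier in this article is to compare favourable-outcome rates across demographic groups before a model is trained. The sketch below flags groups using the "four-fifths" ratio; that threshold is a convention borrowed from US employment-selection guidance, not a statistical law, and the records are invented for the example.

```python
# Minimal sketch of a pre-training bias check on labelled data.
from collections import defaultdict

records = [
    {"group": "A", "favourable": True},
    {"group": "A", "favourable": True},
    {"group": "A", "favourable": False},
    {"group": "B", "favourable": True},
    {"group": "B", "favourable": False},
    {"group": "B", "favourable": False},
]

totals = defaultdict(int)
favourable = defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    favourable[r["group"]] += r["favourable"]

# Favourable-outcome rate per group, compared against the best-off group.
rates = {g: favourable[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "  <- review for bias" if ratio < 0.8 else ""
    print(f"group {group}: rate {rate:.2f}, ratio {ratio:.2f}{flag}")
```

A check like this does not prove or disprove discrimination on its own, but it gives users and auditors a concrete, repeatable starting point for the kind of monitoring and stress-testing the article describes.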
The CDESS® course is aimed at providing knowledge of the standards and guidelines related to environmental sustainability, and how to move your data centre (existing or new) to a more environmentally sustainable design and operations.

The primary audience for this course is any IT, facilities or data centre professional who works in and around the data centre and has the responsibility to achieve and improve efficiency and environmental sustainability, whilst maintaining the availability and manageability of the data centre.

Participants must have at least one to two years' experience in a data centre or facilities environment. The CDCP® is highly recommended. The CDESS® will discuss data centre facility aspects, and without the CDCP® or equivalent knowledge, the participant may not be able to gain the full benefits of the CDESS® training.

After completion of the course the participant will be able to:
Understand the impact of data centres on the environment
Describe the various environmental/energy management standards
Understand the purpose and goals of the legally binding international treaties on climate change
Implement various sustainable performance metrics and how to use them in the data centre environment
Manage data centre environmental sustainability using international standards
Set up the measurement, monitoring and reporting of energy usage
Use power efficiency indicators in a variety of data centre designs
Use best practices for energy savings in the electrical infrastructure and in the mechanical (cooling) infrastructure
Use best practices for energy savings for the ICT equipment and data storage
Understand the importance of water management and waste management
Understand the different ways to use sustainable energy in the data centre
Get practical tips and innovative ideas to make a data centre more sustainable

Module 1 – Impact of Data Centres on the Environment
Module 2 – What is Environmental Sustainability
Module 3 – Environmental Management
Module 4 – Power Efficiency Indicators
Module 5 – Electrical Energy Savings (Electrical)
Module 6 – Electrical Energy Savings (Mechanical)
Module 7 – Electrical Energy Savings (ICT)
Module 8 – Electrical Energy Savings (Data Storage)
Module 9 – Water Management
Module 10 – Waste Management
Module 11 – Sustainable Energy Usage
Module 12 – Automated Environmental Management Systems

The CDESS® course is lectured by an EPI Certified Instructor using a combination of lectures and question-and-answer sessions to discuss participants' specific needs and issues experienced in their own environment. Participants are able to tap into the trainer's extensive experience to enable them to solve practical problems in their current environment, thus adding tremendous value.

The CDESS® course is available in multiple delivery methods.

Exam: Certified Data Centre Environmental Sustainability Specialist (CDESS®)
Attendees will take a 1-hour CDESS® exam of 40 multiple-choice, closed-book questions. The passing mark is 27 out of 40. Attendees passing the exam will be awarded the internationally accredited and recognized "Certified Data Centre Environmental Sustainability Specialist" (CDESS®) certificate. CDESS® is globally accredited by EXIN, a fully independent exam and certification institute. The CDESS® certificate is valid for 3 years, after which recertification is required. Please see the EPI Recertification Program for available options.
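As a small illustration of the power efficiency indicators covered in Module 4, the sketch below computes PUE (Power Usage Effectiveness, total facility energy divided by IT equipment energy) and its reciprocal DCiE expressed as a percentage. The energy figures are invented for the example.

```python
# Two common data centre power efficiency indicators.

def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: lower is better, 1.0 is the ideal."""
    return total_facility_kwh / it_equipment_kwh

def dcie(total_facility_kwh, it_equipment_kwh):
    """Data Centre infrastructure Efficiency: higher is better."""
    return it_equipment_kwh / total_facility_kwh * 100

total_kwh = 1_800_000   # power, cooling, lighting and IT combined
it_kwh = 1_000_000      # servers, storage and network only

print(f"PUE:  {pue(total_kwh, it_kwh):.2f}")    # 1.80
print(f"DCiE: {dcie(total_kwh, it_kwh):.1f}%")  # 55.6%
```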
Create your own personalised data centre career plan

If you are wondering what the next step in your career is and how to prepare for it, use the DCPT (Data Centre Career Planning Tool) to help you create a data centre career plan.
What is the SYNCHRONIZE File Access Right? When dealing with Windows NTFS file system permissions, one quickly encounters the SYNCHRONIZE access right, the purpose of which may not be obvious. SYNCHRONIZE belongs to the standard access rights, just like DELETE, READ_CONTROL, WRITE_DAC and WRITE_OWNER. Here is the definition of SYNCHRONIZE from MSDN (2010): The SYNCHRONIZE access right is defined within the standard access rights list as the right to specify a file handle in one of the wait functions. Here is the newer definition of SYNCHRONIZE from Microsoft Docs (2019): The right to use the object for synchronization. This enables a thread to wait until the object is in the signaled state. Some object types do not support this access right.
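A small Windows-only Python sketch makes the definition tangible: a file handle opened without SYNCHRONIZE cannot be passed to WaitForSingleObject, while one opened with it can. The constants are taken from the Windows SDK headers; treat the snippet as illustrative rather than production code.

```python
# Windows-only sketch: wait functions require SYNCHRONIZE on the handle.
# FILE_READ_ATTRIBUTES is used for the first open (rather than GENERIC_READ,
# which maps to a rights set that already includes SYNCHRONIZE) so that the
# handle really lacks the right.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.WaitForSingleObject.restype = wintypes.DWORD

FILE_READ_ATTRIBUTES  = 0x00000080
SYNCHRONIZE           = 0x00100000
FILE_SHARE_READ       = 0x00000001
OPEN_EXISTING         = 3
FILE_ATTRIBUTE_NORMAL = 0x00000080
INVALID_HANDLE_VALUE  = wintypes.HANDLE(-1).value
WAIT_FAILED           = 0xFFFFFFFF

def open_and_wait(path, access):
    handle = kernel32.CreateFileW(path, access, FILE_SHARE_READ, None,
                                  OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, None)
    if handle == INVALID_HANDLE_VALUE:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        # Zero-millisecond wait: we only care whether waiting is permitted.
        result = kernel32.WaitForSingleObject(handle, 0)
        if result == WAIT_FAILED:
            return "wait refused (handle lacks SYNCHRONIZE)"
        return f"wait permitted (result {result:#x})"
    finally:
        kernel32.CloseHandle(handle)

path = "C:\\Windows\\win.ini"
print(open_and_wait(path, FILE_READ_ATTRIBUTES))                # refused
print(open_and_wait(path, FILE_READ_ATTRIBUTES | SYNCHRONIZE))  # permitted
```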
I’ve often talked about “trial and error” hacking tactics and how organizations frequently build “rat maze” defenses in response to them. Each time they learn about a new attacker, they add or update a wall. However, a persistent rat can get through a maze, exploring different paths and gradually learning which ones are successful. Similarly, digital attackers are free to try again and again, with few consequences from a failed attempt. And unlike the human body, your enterprise is under constant attack from digital threats designed, shared, and constantly modified to damage or profit from your digital assets. Humans are exposed to a wide variety of risks to health and personal security. We can erect barriers against some of these risks, with hand washing, surgical masks, protective clothing, or vaccines. Other risks, such as cuts, burns, or infections, are handled with education, teaching children what is hot or sharp, and with rapid response when necessary. Building barriers that protect us from all risks may be used temporarily or for the very vulnerable, but they are impractical as a permanent solution. The first step in developing a digital immune system for a line of business is to get blunt, even amoral, answers to three key questions: How would attackers get rich off us? How would they ruin us? What regulations affect us? Armed with this information, you can design the appropriate security system, defend your plans, and put resources in the right place. If we just use digital barriers for protection, our systems are not learning how to respond to attacks more effectively. Sure, after an attack we analyze log files and quarantined files or packets for clues, but the delay between an attack and adding a new defense leaves the system vulnerable. Meanwhile, the attacker has learned about our defenses and is adapting and probing again. According to a recent Verizon report on data breaches, the time between an attack, its discovery, and containment is growing, not shrinking. Luckily for most of us, our personal health and safety is not subject to anywhere near the range and frequency of attacks that target our digital assets. But the body’s security system is constantly watching for internal and external threats, using our nerves, organs, and bloodstream. Conscious and subconscious processes choose the appropriate action, whether it is avoidance, prevention, or cure. New situations are added to the rule set, continuously improving our health and safety. Today, the security central nervous system is a piecemeal integration of security components using proprietary APIs. This organism is very slow and constrains innovation. We need to open ourselves up, so that we can quickly learn from every attack and every time we defend ourselves. We need a data exchange layer that enables our sensors and processes to publish and use information, not just with each other, but with the information that provides context for real-time protection decisions. For example, from the sea of computers, who is communicating and has found a new service, a new process, or a new download? At this point, we don’t know if this is good or bad. But our digital immune system can move at the speed of the attacker. What is the context of the internal connection point? Have other devices followed a similar pattern? Has the status of the employee recently changed? Then, in context, a decision can be made to kill it, approve it, or investigate further. 
Our attackers operate in real time; we cannot operate with only a historical view.
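One way to picture the data exchange layer described above is a publish/subscribe bus: sensors publish observations, and decision logic subscribes and combines them with context before acting. The sketch below is a minimal in-process illustration in Python; the topic name, event fields, and decision rule are invented for the example and do not reflect any vendor's actual API.

```python
from collections import defaultdict

class ExchangeLayer:
    """Toy pub/sub bus: sensors publish events, decision logic subscribes."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

def decide(event):
    # Combine the raw observation with context (here, employee status)
    # before choosing to kill, approve, or investigate.
    if event.get("new_process") and event.get("employee_status") == "recently_changed":
        print("investigate:", event["host"])
    elif event.get("new_process"):
        print("approve:", event["host"])

bus = ExchangeLayer()
bus.subscribe("endpoint.activity", decide)
bus.publish("endpoint.activity",
            {"host": "laptop-042", "new_process": True,
             "employee_status": "recently_changed"})
```

The point of the design is that the sensor does not need to know who consumes its event, so new context sources and new responses can be added without rewiring every component.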
Investing in robotic technology will lead to job creation in the manufacturing industry rather than job cuts, according to the latest research by Barclays. The Future-proofing UK Manufacturing report claims that far from having a negative impact on the workforce, automation will safeguard many jobs and greatly increase productivity.

The report indicates that although the manufacturing sector does utilise robots, it could be embracing the technology in a bigger way. The UK currently uses around half as many robots as Germany, for example, and the Barclays report says that increased robotics funding would lead to significant benefits. Additional investment of £1.2 billion in automation could increase the value of the manufacturing sector to the UK economy by £60.5 billion over the next decade. It would also encourage growth of £38 billion within the manufacturing industry itself and help protect in excess of 100,000 jobs.

This is in stark contrast to the current momentum of the UK manufacturing sector, where companies like Rolls-Royce have recently announced job cuts. It also goes against the commonly held view that robots will soon cause many people to be out of work.

"Put simply, increased automation could allow UK manufacturers, and their supply chains, to more effectively compete domestically with importers of manufactured goods from both lower-cost and higher-productivity international markets, as well as helping them to succeed in growing export destinations," explained report author Mike Rigby. "In contrast, our research shows that continued investment at the anticipated business-as-usual level is likely to result in a greater reduction in employment in the UK manufacturing sector over the next five to 10 years."

As well as greater investment in robotics, the report also stresses that businesses require more support and assistance from suppliers, the government and other interested parties when it comes to automation, if the UK is to compete on a level playing field with its international competitors.

Image Credit: Wikipedia
April 12, 2022

Not all hackers have bad intentions! In cybersecurity, ethical hackers—known as white hat hackers—use modern hacking techniques for positive purposes to help secure businesses. Read on to learn more about white hat hacking, how these "good" hackers are able to help businesses, and the differences between the various types of hackers.

A white hat hacker is an ethical hacker who uses modern techniques, technology, and strategies to hack into business systems in the name of cybersecurity. Unlike the classic depiction of evil hackers, white hats use their power for good, helping businesses identify their weaknesses, strengthen their defenses, and learn more about the vectors that bad actors will use to steal data and plant malware.

An ethical hacker often works with managed security service providers (MSSP) like DOT Security to play a part in building cybersecurity strategies, performing gap analyses, penetration testing, and awareness training. Having an experienced hacker on the team gives businesses a look inside the mind of those with malicious intent to see how they think, what they're looking for, and what can be done to stop them.

Using the latest hacking techniques, white hat hackers will attempt to infiltrate your system, identify where your biggest vulnerabilities are and how they can be abused to access your system, and help your cybersecurity team to build a strategy that addresses them. One part of your cyber defenses that a white hat hacker will test is your resilience to social engineering, probing how aware your teams are of potential malware disguised as harmless emails.

Along with identifying potential weaknesses, white hat hackers will also assess the strength of your established defenses like honeypots, firewalls, password security, credential security, and more. These are your first lines of defense, and if an ethical hacker can break them down, a black hat hacker will have no trouble getting through.

While a white hat hacker's goal is to help people improve their cybersecurity systems by finding vulnerabilities and strengthening defenses, black hat hackers use those same skills and tools to do harm. A black hat hacker creates malware with the intent of spying on people and businesses, stealing information, locking out users, and gaining access to networks. Their motivations are always malicious and focused on personal gain, financial gain, or revenge. They can modify data, destroy it, and steal it with the intent to sell it to third parties or hold it for a large ransom (ransomware).

The hacking world is not always black and white. In fact, there are a few other kinds of hackers with different motives than black or white hats, but with the same hacking skill and potential for danger.

Grey Hat Hackers: These hackers live on the line between bad and good because, while they don't typically hack with the intent to steal data or hold businesses hostage, they do demand money. The way they operate is by hacking businesses, alerting them of the hack, then asking for money to fix what they broke and seal up the hole they used to get in.

Red Hat Hackers: These hackers are the chaotic good of the hacking space. Though they use nefarious means like viruses, malware, and other attacks, they use them against black hat hackers by destroying their systems.

With white hat hackers on your side, you're setting your business up to be more protected and better prepared to face the challenges presented by bad actors and malware.
Working with an MSSP like DOT Security gives you access to a team of ethical hackers, cybersecurity specialists, a vCISO, technicians, engineers, and more to help you build the strongest possible defense for your business. Wondering how well covered your business currently is against modern cyberthreats? Explore our new checklist to help you determine what you have, what you need, and how vulnerable you are to an attack.
The past 12 months have been filled with stories about cyber security attacks, election meddling, state surveillance, and massive corporate data breaches. In other words, the 10th anniversary of national Data Privacy Day, set to be celebrated in the United States, Canada, India and 47 European countries on January 28, couldn't come at a better time for Internet users and anyone who carries around a smartphone every day.

Data Privacy Day officially recognizes the 1981 signing of Convention 108, the first legally binding international treaty on privacy and data protection. The official goal of this year's event is to raise awareness about the value of personal information and educate consumers on how to manage their data privacy.

Data Privacy Day was officially recognized for the first time as a truly transatlantic event in 2008. At the time, Europe was celebrating the event as Data Protection Day, and the consensus was growing in the U.S. that something more had to be done to protect the data and security of private citizens and businesses on both sides of the Atlantic. As a result, momentum built within the U.S. government to promote Data Privacy Day as a day of education and awareness about consumer data privacy.

Given the growing debate over privacy and security issues, the theme of this year's 10th anniversary Data Privacy Day event is the following: "Respecting Privacy, Safeguarding Data and Enabling Trust." This theme is a call to action for individuals, families, businesses and nonprofit organizations such as the National Cyber Security Alliance (NCSA), the official U.S. sponsor of the event.

Looking back at 10 years of Data Privacy Day

What has changed more than anything else since the inception of the Data Privacy Day event back in 2008 is how much data privacy and data protection is now viewed as an international effort. The rise of 24/7 global data flows means that data or personal information about U.S. or European citizens could be at risk almost anywhere in the world.

When the Data Privacy Day event originally launched in 2008, the focus was primarily on social networking, and how personal information being shared online could be used without the knowledge of these social media users. For example, information and data you unknowingly shared on Facebook might be used by advertisers to construct sophisticated profiles of your buying, shopping or browsing habits.

In the following year, the scope of the Data Privacy Day event was expanded to include much more of a focus on data security. In many ways, this was at the urging of Europe, which was celebrating the event as Data Protection Day. But it also reflected the new wave of data breaches and data security problems afflicting major U.S. corporations. Thus, the day became unofficially known as "Data Privacy and Protection Day."

Over the past ten years, the focus of the Data Privacy Day has steadily widened. Notably, more corporate sponsors have signed on as participants. This year's event, for example, has several high-profile sponsors, such as Cisco and Intel, which are trying to play a role in making data privacy and data security a reality for today's Internet users. In addition, the educational focus of the event has broadened to include more of an emphasis on what families should be doing to protect the data privacy of their children. This comes at a time when more children and teens carry around smartphones, and may be unwittingly sharing their personal information and location with total strangers.
Data privacy in an era of the "Internet of Me"

Clearly, a lot has changed in the past 10 years. One major development, of course, was the rise of the new mobile-first reality, in which smartphones now act as daily, 24/7 generators of data and personal information. Another development cited by the NCSA is the development of the "Internet of Me." This is a conceptual framework to describe how many people now carry around several digital devices at one time, each one of them communicating information about their habits, preferences, and location.

And there's another factor to keep in mind as well, and that's the growing consumer acceptance of wearables that track our physical fitness and health. Thus, the "Internet of Me" can be visualized as a cloud of data and personal information that includes not just our demographic information (age, gender, national origin), but also our health data, our personal financial information, and insights into our shopping preferences, and possibly even our political inclinations.

So what can be done to encourage more privacy for the "Internet of Me"? That's another of the focal points of this year's 10th anniversary event. There needs to be more education on respecting privacy, safeguarding data and enabling trust. And more needs to be done to safeguard the Internet of Things (IoT). According to the NCSA, tens of billions of devices will be connected to the Internet by 2020, raising the risk for IoT users.

Growing groundswell of support for data privacy and data protection worldwide

Ultimately, Data Privacy Day is all about making people and businesses smarter about data privacy and data protection. As part of the educational materials prepared for the event, the NCSA outlines a wide range of steps that families and individual users can take to better protect themselves, such as knowing exactly what types of data they are transmitting every time they use an app, and what types of information corporations are collecting about them every time they visit a website.

As a result of previous outreach efforts, it's perhaps no surprise that consumers are embracing web browsers that focus on personal privacy, and are becoming more circumspect every time they are asked to check a box on a "terms of service" page for websites and apps. That's a positive step, and certainly a good sign that the educational mission of Data Privacy Day is succeeding as planned.

That being said, even the most hopeful optimist would agree that data breaches will still occur. In just the past 12 months, we've seen breaches at major companies that consumers once trusted with their data, including Equifax, Verizon and Uber. So, obviously, more can and should be done. All companies should be taking data privacy and data protection as seriously as the corporations that have signed on as sponsors of this annual event.

By taking time out every January to celebrate Data Privacy Day, individuals on both sides of the Atlantic, and indeed, around the world, can take the right steps to ensure the privacy and protection of their data and the data of their families. Individuals can take better action to protect and manage their privacy, while businesses can take steps to respect user privacy and safeguard customer data.
The last 12 months have brought radical changes to the field of weather satellite image reception and processing. Starting with the launch of GOES-R on November 19th, 2016 atop an Atlas V rocket, NOAA started what is to be one of its most groundbreaking eras. The current families of both geostationary and polar-orbiting satellites that NOAA is putting into service now represent a huge leap forward in the field of weather forecasting.

GOES-R, which has since been officially named GOES-16 and has become the new GOES-East geostationary satellite, contains so many technological innovations that it would take many pages to describe them properly. Thanks to the use of a dual signal (two circularly polarized streams), the data rate of the new GOES (31 Mbps) is roughly 14 times that of the older generation GOES. Images have a resolution of 0.5 km in the visual spectrum and 1 km in the infra-red spectrum.

The new GOES uses GOES Rebroadcast (GRB) as the primary relay of full resolution, calibrated, near-real-time direct broadcast of Level 1b data from each instrument and Level 2 data from the Geostationary Lightning Mapper (GLM). GRB is the service that has replaced the GOES VARiable (GVAR) service from the previous generation GOES. GRB contains data from the Advanced Baseline Imager (ABI), which is the primary instrument on the GOES-R Series for imaging Earth's weather, oceans and environment. ABI views the Earth with 16 different spectral bands (compared to five on the previous generation of GOES), including two visible channels, four near-infrared channels, and ten infrared channels.

Better instruments result in better services such as:
- Improved storm and hurricane forecasting
- Increased time to issue storm and bad weather warnings
- Better detection of intense rainfall and flood prevention
- Improvements in flight route planning
- Improvements in the issuance of air quality alerts
- Improvements in the estimation and detection of forest fires
- Improvements in the detection of solar flares

But geostationary satellites are not the only ones getting an upgrade. NOAA and NASA cooperated in the development of a new series of advanced polar-orbiting satellites. The Joint Polar Satellite System (JPSS) is the new family of polar-orbiting satellites. JPSS has two primary meteorological instruments installed, VIIRS (Visible Infrared Imager Radiometer Suite) and ATMS (Advanced Technology Microwave Sounder). These instruments will be able to provide the user with images and information such as:
- Snow/ice cover
- Cloud cover
- Fog and aerosols
- Fire/smoke plumes
- Vegetation health
- Ozone monitoring
- Vertical temperature and water vapor profiles

JPSS will transmit its data in the X-Band at 7,812 MHz through its HRD downlink. NOAA just launched its first new polar bird (JPSS-1) on a United Launch Alliance Delta II rocket from Vandenberg Air Force Base, California, on November 18th, 2017. The new satellite has since been renamed NOAA-20. Another three satellites of the same type will be launched over the coming years.

But how do these developments affect the end-user? How will meteorological departments around the world adapt to these changes? For starters, legacy ground receiving systems will need to be replaced. Very few of the existing old GVAR or HRPT systems can be upgraded. Differences in the data rate, polarization, and data stream structure mean that existing geostationary satellite receiving systems will need to be replaced in order to receive GRB transmissions.
In the case of HRPT, the new JPSS transmits in the X-Band, so your old system will have to be replaced. Morcom offers several cost-effective alternatives to users that want to be able to receive and use the wealth of data available in the new transmissions. Our new GRB and JPSS receiving systems are described in the appropriate section of this website.

In addition, we offer a very affordable alternative with our new Geonetcast Americas receiving system. Geonetcast is a low-cost information dissemination service which aims to provide global information as a basis for sound decision-making in a number of critical areas, including biodiversity and ecosystem sustainability; disaster resilience; energy and mineral resources management; food security and sustainable agriculture; infrastructure and transportation management; public health surveillance; sustainable urban development and water resources management.

Call us for more information (1-800-683-4101) or send us an e-mail to [email protected].
How to talk to your kids about security online
June 20, 2019 - Posted by: Kerry Tomlinson, Archer News

You may have trouble keeping up with the latest tech, apps and trends, but your kids probably don't. They go headfirst into the digital world, often without looking both ways before they cross. We spoke with the head of the National Cyber Security Alliance to learn how he talks to his own kids about cybersecurity.

Kelvin Coleman's two children may sigh a bit when he starts talking about cybersecurity. "They always tell you 'too much,' that I talk to them way too often about it. But it's too important," Coleman said. "I want them to understand that what they're doing now is going to have consequences in the future."

He is the executive director of the National Cyber Security Alliance, with a mission of keeping all of us smarter and safer online. And that starts with his own kids. Here are some of his tips and strategies for working on cybersecurity with your children.

Kids & Cybersecurity
Talk to your kids about doing the right things online:
- Use long passwords
- Use different passwords for each account

Two-factor authentication gives you a second step to verify who you are before you sign in to accounts — like a code sent to your phone or a special key — so if crooks steal your password, they have trouble getting in.

Be a Role Model
Coleman said he taught his kids to use two-factor authentication for every account. "I think it's one of those things where you have no choice," he said with a chuckle. "You're in a car, you have to put on your seatbelt, right? When you're crossing the street, you have to look both ways. That's not an option."

"For my kids, where two-factor authentication is available, you absolutely use it," he added. Your kids are more likely to do all this good stuff if you do it, too, he advised. If you don't do it, why should they?

Hang Out, Check In
Check in on what they're doing online. Hang out with them to see where they go and what they do. "They're using technology all the time," Coleman said. "So, why wouldn't we be more curious about how they're using it and if they're going to be more secure on it?"

He described a typical interaction with his children. "Sit down with them and say, 'Hey, let me see your two-factor,'" he said. "'Oh, Dad! Okay, now can I go back to whatever.'" "Okay, great, that was just a little drill, a fire drill if you will," he said.

More Good Stuff
The National Cyber Security Alliance has more ideas on kids & cybersecurity:
- Help your kids come up with their own long passwords, or passphrases. They like being creative with a string of unique words.
- Go over examples of shady sites and posts so they can learn critical thinking skills.
- Talk about the possible problems with clicking on or downloading things, and how you can download malware and allow crooks to invade your entire network.
- Discuss how posting images and info can reveal too much information and allow people to hurt you now and in the future.

It can be hard for kids to think about filtering their info online, especially when the current currency is public attention. "Young people may not get it right away," Coleman said. "But I tell you what, that's no reason to not continue to talk to them about it." Let them know that cyber crooks can use your information against you.
For example, if you give your pet's name online and then use the same name for the answer to a security question, you're making it easy for people to break into your accounts. Criminals can steal your money, your points, or the account itself. They can use your accounts to pretend to be you and harass other people. And things you post now can make it hard for you to get a job, a scholarship or respect later on in your life.

"You absolutely have to protect your information, because it may not be as important to you right now, but it's going to be important in a few years down the road," Coleman explained.

Your kids' friends may care more about attention than privacy. "Still have to talk about it," Coleman said. "Their friends may not care about a lot of things. We still have to talk about these various things in terms of staying safe online."
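The NCSA tip above about building long passphrases from a string of unique words is easy to demonstrate. The sketch below is illustrative only: the word list is a small stand-in for a much larger dictionary, and a real generator should draw from thousands of words.

```python
import secrets

# Stand-in word list; a real generator would use a dictionary of thousands of words.
WORDS = ["purple", "walrus", "thunder", "biscuit", "rocket",
         "meadow", "glacier", "pickle", "lantern", "comet"]

def passphrase(n_words=4):
    # secrets (rather than random) gives cryptographically strong choices.
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "rocket-meadow-walrus-comet"
```

Four random words from a large dictionary are both easier for a child to remember and harder to guess than a short string of symbols.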
#devops #cloud #ado

Understanding the difference between concurrency and connections is an important step in capacity planning in public cloud environments.

Choosing the right size for an instance in the cloud is – by design – like choosing the right mix of entrée and sides off the menu. Do I want a double burger and a small fry? Or a single burger and a super-size fry? All small? All large? Kiddie menu? It all depends on how hungry you are, on your capacity to consume.

Choosing the right instance in cloud computing environments also depends on how hungry your application is, but it also depends heavily on what the application is hungry for. In order to choose the right "size" server in the cloud, you've got to understand the consumption patterns of your application. This is vital to being able to understand when you want a second, third or more instance launched to ensure availability whilst simultaneously maintaining acceptable performance. To that end, it is important to understand the difference between "concurrency" and "connections" in terms of what they measure and, in turn, how they impact resource consumption.

Connections is the measure of new connections, i.e. requests, that can be handled (generally per second) by a "thing" (device, server, application, etc…). When we're talking applications, we're talking HTTP (which implies TCP). In order to establish a new TCP connection, a three-way handshake must be completed. That means three exchanges of data between the client and the "thing" over the network. Each exchange requires a minimal amount of processing. Thus, the constraining factors that may limit connections are network and CPU speed. Network speed impacts the exchange, and CPU speed impacts the processing required. Degradation in either one impacts the time it takes for a handshake to complete, thus limiting the number of connections per second that can be established by the "thing."

Once a connection is established, it gets counted as part of concurrency. Concurrency is the measure of how many connections can be simultaneously maintained by a thing (device, server, application, etc…). To be maintained, a connection must be stored in a state table on the "thing", which requires memory. Concurrency, therefore, is highly dependent on, and constrained by, the amount of RAM available on a "thing".

In a nutshell: Concurrency requires memory. New connections per second require CPU and network speed.

Now, you may be wondering what good it does you to know the difference. First, you should be (if you aren't) aware of the usage patterns of the application you're deploying in the cloud (or anywhere, really). Choosing the right instance based on the usage pattern (connections-heavy versus concurrency-heavy) of the application can result in spending less money over time by choosing the right instance such that the least amount of resources is wasted. In other words, you're making resource use more efficient by pairing the instance correctly to the right application.

Choosing a high-memory, low-CPU instance for an application that is connection-oriented can lead to underutilization and wasted investment, as it will need to be scaled out sooner to maintain performance. Conversely, choosing high-CPU, low-memory instances for applications dependent on concurrency will see performance degrade quickly unless additional instances are added, which wastes resources (and money).
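To make the connection/concurrency distinction concrete, here is a rough back-of-the-envelope sizing sketch in Python. The per-connection memory figure and per-core handshake rate are illustrative assumptions, not measured values; plug in numbers from your own load tests.

```python
def required_ram_gb(concurrent_connections, kb_per_connection=64):
    """Memory bound: concurrency is constrained by RAM for connection state."""
    return concurrent_connections * kb_per_connection / (1024 * 1024)

def required_cpu_cores(new_conns_per_sec, handshakes_per_core_sec=20_000):
    """CPU bound: connection rate is constrained by handshake processing."""
    return new_conns_per_sec / handshakes_per_core_sec

# A concurrency-heavy app: few new connections, many long-lived ones.
print(required_ram_gb(500_000))    # ~30.5 GB -> pick a high-memory instance
print(required_cpu_cores(2_000))   # ~0.1 cores

# A connection-heavy app: high churn, few simultaneous connections.
print(required_ram_gb(20_000))     # ~1.2 GB
print(required_cpu_cores(150_000)) # ~7.5 cores -> pick a high-CPU instance
```

Even with rough inputs, the exercise makes the mismatch obvious: the first workload wastes every CPU-optimized instance you give it, and the second wastes every memory-optimized one.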
Thus, choosing the right instance type for the application is paramount to achieving the economy of scale promised by cloud computing and virtualization. This is a truism whether you're choosing from a cloud provider's menu or your own. It's inevitable that if you're going to scale the application (and you probably are, if for no other reason than to provide for reliability) you're going to use a load balancing service. There are two ways to leverage the capabilities of such a service when delivered by an application delivery controller, and they depend, again, upon the application.

STATEFUL APPLICATION ARCHITECTURE
If the application you are deploying is stateful (and it probably is) then you'll not really be able to take advantage of page routing and scalability domain design patterns. What you can take advantage of, however, is the combined memory, network, and processing speed capabilities of the application delivery controller. By their nature, application delivery controllers generally aggregate more network bandwidth, use memory more efficiently, and are imbued with purpose-built protocol handling functions. This makes them ideal for managing connections at very high rates per second. An application delivery controller based on a full-proxy architecture, furthermore, shields the application services themselves from the demands associated with high connection rates, i.e. network speed and CPU. By offloading the connection-oriented demands to the application delivery service, the application instances can be chosen with the appropriate resources so as to maximize concurrency and/or performance.

STATELESS or SHARED STATE APPLICATION ARCHITECTURE
If it's the case that the application is stateless or the application shares state (usually via session data stored in a database), you can pare off those functions that are connection-oriented from those that are dependent upon concurrency. RESTful or SOA-based application architectures will also be able to benefit from the implementation of scalability domains, as this allows each "service" to be deployed to an appropriately sized instance based on the usage type: connection or concurrency. An application delivery service capable of performing layer 7 (page) routing can efficiently sub-divide an application and send all connection-heavy requests to one domain (pool of resources/servers) and all concurrency-heavy requests to yet another. Each pool of resources can then be comprised of instances sized appropriately: more CPU for the connection-oriented, more memory for the concurrency-oriented.

In either scenario, the use of TCP multiplexing on the application delivery controller can further mitigate the impact of concurrency on the consumption of instance resources, making the resources provisioned for the application more efficient and able to serve more users and more requests without increasing memory or CPU.
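A hedged sketch of the layer 7 page-routing idea described above: requests are sub-divided by URL path and sent to pools sized for their resource profile. The pool names, paths, and addresses are hypothetical, and a real application delivery controller would do this in its own configuration language rather than application code.

```python
import itertools

# Hypothetical pools: one sized for CPU (connection churn), one for RAM (concurrency).
POOLS = {
    "connection_heavy":  ["10.0.1.10", "10.0.1.11"],  # high-CPU instances
    "concurrency_heavy": ["10.0.2.10", "10.0.2.11"],  # high-memory instances
}

ROUTES = {
    "/api/":    "connection_heavy",    # short, stateless requests
    "/stream/": "concurrency_heavy",   # long-lived sessions
}

# Simple round-robin iterator per pool.
_round_robin = {name: itertools.cycle(servers) for name, servers in POOLS.items()}

def route(path):
    for prefix, pool_name in ROUTES.items():
        if path.startswith(prefix):
            return next(_round_robin[pool_name])
    return next(_round_robin["connection_heavy"])  # default pool

print(route("/api/login"))     # served from the high-CPU pool
print(route("/stream/video"))  # served from the high-memory pool
```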
A study published in the Journal of Psychosocial Research on Cyberspace has highlighted the cost of cyberloafing to businesses. Cyberloafing has a massive impact on productivity, yet it is all too common. The cyberloafing costs for businesses are considerable, and employees who partake in cyberloafing can seriously damage their career trajectory. Employers are paying their employees to carry out work duties, yet a huge amount of time is lost to cyberloafing. Cyberloafing dramatically cuts productivity and gobbles up company profits.

The study was carried out on 273 employees, and cyberloafing was measured along with the characteristics that led to the behavior. The study indicated a correlation exists between cyberloafing and dark personality traits such as psychopathy, Machiavellianism and narcissism, but it also suggested that employees are wasting huge amounts of time simply because they can do so. The sites most commonly viewed were not social media sites, but news websites and retail sites for online shopping.

In a perfect world, employees would be able to complete their duties and allocate some time each day to personal Internet use without any reduction in productivity. Some employees do just that, curbing personal Internet use and not letting it impact their work duties. However, for many employees, cyberloafing is an issue and huge losses are suffered by employers.

A report on cyberloafing published by Salary.com indicated 69% of employees waste time at work every day, with 64% visiting non-work-related web pages. Out of those workers, 39% said they wasted up to an hour on the Internet at work, 29% wasted 1-2 hours, and 32% wasted over two hours a day.

Cyberloafing can have a huge impact on company profits. A company with 100 workers, each of whom spends an hour daily on personal Internet use, would see productivity losses in excess of 25,000 man-hours annually. Productivity losses caused by cyberloafing are not the only problem – or cost. When employees use the Internet for personal reasons, their actions slow down the network, resulting in slower Internet speeds for all. Personal Internet use also increases the chance of malware and viruses being introduced, which can cause further productivity losses. The cost of addressing those infections can be huge.

What Can Employers do to Reduce Cyberloafing Costs?
First of all, it is vital that the workforce is educated on company policies relating to personal Internet use. Advising the staff about what is an acceptable level of personal Internet use and what is considered unacceptable behavior ensures everyone is aware of the rules. They must also be told about the personal consequences of cyberloafing.

The Journal of Psychosocial Research on Cyberspace study says, "a worker's perceived ability to take advantage of an employer is a key part of cyberloafing." Improving monitoring, and making it clear that personal Internet use is being recorded, acts as a good deterrent.

When personal Internet use reaches problem levels, there should be repercussions for the employees involved. If there are no sanctions for employees that break the rules and company policies are not enforced, little is likely to change. Action could be taken against the workers concerned through standard disciplinary procedures such as verbal and written warnings. Controls could be implemented to curb Internet activity – such as blocks applied to certain websites, social media sites or news sites for example – when employees are wasting too much time online.
Those blocks could be temporary or even time-based, only permitting personal Internet use during breaks or at times when workloads are usually low.

WebTitan – An Easy Solution to Cut Productivity Losses and Curb Cyberloafing
Such controls are simple to apply using WebTitan. WebTitan is an Internet filter for SMBs and enterprises that can be deployed in order to reclaim lost productivity and block access to web content that is unacceptable in the workplace. WebTitan allows administrators to apply Internet controls for individual employees, user groups, or the entire company, with the ability to apply time-based web filtering controls as appropriate.

Stopping all employees from logging onto the Internet for personal reasons may not be the best way forward, as that could have a negative impact on morale, which can similarly impact productivity. However, some controls can certainly help employers reduce productivity losses. Internet filtering can also reduce the risk of lawsuits resulting from illegal activity on the network, and blocking adult content in the workplace can help to stop the development of a hostile work environment.

If you would like to increase productivity and start enforcing Internet usage policies in your company, contact TitanHQ today. WebTitan is available on a free trial to test the solution in your own environment before making a decision about a purchase.
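The time-based control described above boils down to a simple decision rule: is this URL category allowed for this user at this time of day? The sketch below is generic pseudologic in Python, not WebTitan's actual configuration interface; the categories and break windows are assumptions.

```python
from datetime import time

BLOCKED_CATEGORIES = {"social_media", "news", "shopping"}
# Assumed break windows: lunch hour and after the working day.
BREAK_WINDOWS = [(time(12, 0), time(13, 0)), (time(17, 0), time(23, 59))]

def allow(category, now):
    if category not in BLOCKED_CATEGORIES:
        return True
    # Personal-browsing categories are only allowed during break windows.
    return any(start <= now <= end for start, end in BREAK_WINDOWS)

print(allow("news", time(10, 30)))  # False: working hours
print(allow("news", time(12, 30)))  # True: lunch break
```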
The prospect of a "ransomware" attack, where your device is infected with malware that locks you out of your personal data and files and demands money to potentially unlock them, is a frightening thought to most people. And these attacks have been on the rise, with nearly 9 million incidents detected by McAfee by the end of 2016 alone.

For instance, you probably heard about the "WannaCry" worm that wreaked havoc earlier this year. Major enterprises such as FedEx, Deutsche Bank, and the U.K.'s National Health Service were all crippled when their Windows computers were infected and payment was demanded in Bitcoin to unlock their data.

Making matters worse, paying the ransom to regain access to files (and figuring out how to pay in Bitcoin, a hard-to-trace digital currency) is no guarantee that you will get your information back. The attackers could continue to ask for more money, or never unlock your files under any circumstances.

Given the prevalence of ransomware and how difficult it can be to deal with, the smart move is to try to avoid it altogether. Here are 5 important tips to keep your devices and information safe from ransomware:

1) Backup Your Data: Since paying a ransom to get your data back is often ineffective (and just encourages the attackers), the best preventative measure you can take is to back up your data on a regular basis, just in case you need to wipe your device clean after an attack. Use a backup drive, or back up to the cloud. This way you can easily retrieve all of your important information without paying a ransom.

2) Use strong security: Antivirus software can now block some ransomware attacks by detecting variants of known viruses. Make sure to run regular scans to prevent ransomware and other common threats. And, in the case that you do fall victim, security software can be especially important in making sure your system is clean after the attack, and before you reinstall your data from backup.

3) Keep your software updated: You may remember that WannaCry only targeted outdated Windows software that wasn't patched with the latest security fixes. This is why it's imperative that you keep all your software, including mobile apps, up to date. This way, attackers have a harder time taking advantage of known vulnerabilities.

4) Be careful where you click: Sometimes it doesn't matter how many technical tools you have at your disposal; whether attackers are successful or not can depend on your own online behavior. Ransomware attacks can be distributed in phony online ads, email links, social media messages and even via text message. Be skeptical. Don't respond to messages from strangers or click on links in spam emails.

5) Stay Aware: Cybercrooks are always looking for new ways to trick us out of money and information. Stay informed about the latest ransomware attacks and how to avoid them. Know that businesses are also commonly targeted and that the precautions you take at home should also be applied to your work devices and data.

Although ransomware is a concerning trend, the good news is we can do a lot to counteract these sorts of attacks. In fact, over the last year the No More Ransom project, which McAfee is part of, has helped thousands of people recover files that have been encrypted, or locked, by cybercrooks. Of course, your best shot at beating the cybercriminals is to avoid an attack in the first place by following the tips above.

Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
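Tip 1 above, regular backups, can be as simple as a scheduled copy to a separate drive. Below is a minimal sketch; the source folder and mount point are assumptions, and real setups should also keep offline or cloud copies, since ransomware can encrypt any drive that stays attached.

```python
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path.home() / "Documents"   # assumed folder to protect
BACKUP_ROOT = Path("/mnt/backup")    # assumed mounted backup drive

def backup():
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = BACKUP_ROOT / f"documents-{stamp}"
    # Timestamped copies are never overwritten, so an infected sync
    # cannot silently replace your last good backup.
    shutil.copytree(SOURCE, dest)
    return dest

if __name__ == "__main__":
    print("backed up to", backup())
```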
Have you ever had a user upload a corporate file to a consumer cloud storage service like Google Drive? Has a user ever tried to forward a work email outside the corporate domain? Has a user ever lost a USB drive with data on it? Has an abnormal amount of data ever been transferred outside your network? These are all examples of the intentional or accidental data loss that data loss prevention (DLP) aims to stop within your organization.

Who Could Benefit From DLP?
DLP is most often used in industries that face regulation. However, as SMBs may be less likely to have a formal program, they have become a prime target for hackers.

How Does DLP Work?
DLP typically is a software product that network administrators use to control the transfer of data among users in your organization. This product would aid in denying users the ability to commit any of the actions listed above.

Data exists in three states:
- Data in use: Data is being processed by an app or endpoint. DLP can authenticate users and control their access.
- Data in motion: Data is being transferred across a network. DLP mitigates the risk that it will be transferred outside via FTP, email, or a number of other methods.
- Data at rest: Data is in storage. DLP ensures that only authorized users are able to access it, and tracks the data if it is leaked or stolen.

A DLP program typically works in four steps:
- Identify sensitive data.
- Scan data in motion, in use, and at rest.
- Remediate with actions such as alerting, prompting, quarantining, blocking, and encrypting.
- Report for compliance, auditing, forensics, and incident response purposes.

What Happens If I Don't Have A DLP Strategy?
About 34% of companies experience a data breach because of an accident, according to Breach Level Index. That's because employees often aren't aware of best practices for cybersecurity. A security awareness program is a crucial factor in helping, but having a DLP strategy is another way to make sure that only the people who should be accessing and transferring data are the ones doing so.

Companies that lose data could see:
- Their brand, goodwill, and reputation diminished.
- Their value reduced.
- Loss of customers.

Accidents happen, but often, they can be prevented. Mitigate data loss in your organization today by adding a DLP strategy to your cybersecurity plan.
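The "identify sensitive data" step above often starts with simple pattern matching. Here is a minimal sketch using regular expressions for SSN-like and credit-card-like strings; real DLP products layer fingerprinting, classification, and context on top of this kind of check.

```python
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan(text):
    """Return the pattern names found in a chunk of data
    (whether in use, in motion, or at rest)."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

message = "Please wire funds, my SSN is 123-45-6789."
hits = scan(message)
if hits:
    # Remediation step: alert, quarantine, or block the transfer.
    print("block and alert:", hits)
```

The same scan function can sit in an email gateway (data in motion), an endpoint agent (data in use), or a storage crawler (data at rest), which is why the three-state model is useful.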
How does network-attached storage work?

Three general components are utilized in NAS configurations: standard hardware, specialized software, and data transfer protocols.

Dedicated NAS hardware, often called a NAS box, NAS unit, NAS server, or NAS head, contains storage drives and disks, processors, a network adapter, and memory (RAM). Together, these form the hardware component of a NAS unit, which is where data will be stored and accessed via network-based communications.

Software is the key difference between a NAS unit and simply attaching a general server to a network to house and share files. First, NAS units use a pared-down OS, which runs the NAS software that is typically embedded on the hardware to improve performance and security. A general-purpose server operates standard OSes, with all their overhead, whereas a NAS unit only handles data storage and file-sharing requests, greatly improving efficiency and performance.

The final component is the set of protocols that the NAS unit will be configured to use in connecting and transferring data across the network. Because NAS units attach to TCP/IP networks, TCP/IP is the fundamental protocol suite for transferring data. TCP/IP rounds up packets of data and then packages them with an address to be sent over the network.

Because NAS units use a file-sharing protocol when connected to a network, they appear as file shares on workstations. File sharing allows multiple users access to information and files as if they were on their own system. Windows, Linux, and Apple operating systems each use a separate file-sharing protocol:

- Network File System (NFS): Typical of Linux- and Unix-based systems, NFS is a vendor-agnostic protocol, compatible with most hardware, operating systems, and networks.
- Common Internet File Sharing (CIFS)/Server Message Block (SMB): CIFS/SMB are Windows-specific protocols, which offer other features such as sharing printers.
- Apple Filing Protocol (AFP): AFP is Apple's proprietary file-sharing protocol, formerly known as AppleTalk.
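Because each of the protocols listed above listens on a well-known TCP port, a quick reachability check illustrates the network-based access a NAS provides. This sketch uses only the Python standard library; the host address is hypothetical, and the standard ports (NFS 2049, SMB 445, AFP 548) are the commonly documented defaults.

```python
import socket

NAS_HOST = "192.168.1.50"  # hypothetical NAS unit on the LAN
PORTS = {2049: "NFS", 445: "CIFS/SMB", 548: "AFP"}

for port, proto in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        # connect_ex returns 0 when the TCP three-way handshake succeeds.
        reachable = sock.connect_ex((NAS_HOST, port)) == 0
        print(f"{proto}: port {port} is {'open' if reachable else 'closed/filtered'}")
```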
Denial of Service (DoS) attacks continue to be on the rise, which is no surprise given our ever-growing dependency on Web-based services, coupled with the fact that these attacks are relatively cheap and easy to carry out. In this article, we'll discuss what DoS attacks are, some various types of DoS attacks, tips to keep them at bay, and references to security tools to help you mitigate vulnerabilities.

DoS attacks and their impact
A DoS attack is an explicit attempt to prevent legitimate users from accessing information or services on a host system. It does this by overloading the targeted machine or service with requests, thus making the resource unreachable or unresponsive to its intended users. DoS attacks exploit known weaknesses and vulnerabilities in systems and applications. These attacks aim to consume valuable resources to disrupt a service. Resources targeted include:
- Network connectivity
- Data structures
- CPU usage
- Disk space
- Application exception handling
- Database connections

Unfortunately, DoS attacks are becoming more sophisticated and getting better at evading detection. They can wreak havoc on organizations by bringing down business-critical services and inhibiting Web access to users, which can result in thousands to hundreds of thousands of dollars per day in lost revenue!

Hackers use several methods to deploy DoS attacks. These attacks come in all different shapes and sizes. Let's take a quick look at some of them:

1. SYN attacks
In a SYN (synchronize) attack, the networking capability of the targeted system can be knocked out by overloading its network protocol stack with information requests or connection attempts. A SYN attack exploits known weaknesses in the TCP protocol and can impact any system providing TCP-based services, including Web, email, FTP, print servers, etc.

In a normal TCP connection, the client and server exchange a series of messages to establish the connection, known as the three-way handshake. First, the client sends a SYN message to the server. The server acknowledges the receipt of this message with a SYN-ACK (synchronize-acknowledgement) back to the client. Lastly, the client responds with an ACK (acknowledge) and the connection is established. Taking advantage of this process, an attacker sends multiple SYN packet requests continuously, but then doesn't return a response. This means the targeted host just sits and waits for acknowledgement for each request, which ties up the number of available connections. In turn, connection attempts from legitimate users get ignored.

Tips to stay secure: Make sure you have a firewall/security device in place that is capable of detecting the characteristics of this type of attack. Also, be certain that you have the appropriate filters configured, including one that restricts input to your external interface by denying packets that have a source address from your internal network. You should also filter outgoing packets that have a source address different than your internal address scheme. Additionally, ensure you have the latest security patches in place, including operating system and application updates, as well as firmware updates for your network and security devices.

2. Poisoning of DNS cache
DNS cache poisoning exploits vulnerabilities in the domain name system (DNS). In this case, the attacker attempts to insert a fake address entry into the DNS server's cache database in order to divert Internet traffic from legitimate sites to "rogue" sites.
The goal is to lure unsuspecting users to download malicious programs, which can then be exploited by the attacker.

Tips to stay secure: First, ensure you're running the latest release of your DNS software. You should also configure your firewall to drop packets having an internal source address on the external interface, as these are in most cases "cooked-up" addresses. Another important step is to collect and analyze log files from your DNS servers to identify anomalies and suspicious patterns, such as multiple queries from the same IP within a short amount of time.

3. ICMP/Ping flood
In this case, the attacker sends a continuous stream of ICMP echo requests to the victim as fast as possible without waiting for a reply—in other words, "floods" it with ping packets. This barrage of data packets consumes the victim's outgoing and incoming bandwidth, preventing legitimate packets from reaching their destination.

Tips to stay secure: Filter ICMP traffic appropriately. Block inbound ICMP traffic unless you specifically need it, such as for tools used for normal administration and troubleshooting. For ICMP traffic you do allow, do so only to those specific hosts that require it. Also, configure appropriate parameters and rate limits on firewalls and routers, such as setting a threshold for the maximum allowed number of packets per second for each source IP address. Additionally, make sure you're monitoring those device logs in real time to immediately detect patterns of high ICMP volume.

4. E-mail bombs
This type of attack involves sending huge volumes of bogus emails simultaneously, in most cases containing very large attachments. E-mail bombs consume large amounts of bandwidth, as well as valuable server resources and storage space. An attack of this kind can quickly bring your mail service to a crawl or crash the system altogether.

Tips to stay secure: In addition to firewalls, you can put other perimeter protection in place, such as content filtering devices. It's also wise to limit the size of emails and attachments, as well as limiting the number of inbound connections to the mail server.

5. Application-level floods
Application denial-of-service attacks target Web servers and take advantage of software code flaws and exception handling. These types of attacks are common and difficult to defend against, since most firewalls leave port 80 open and allow traffic to hit the backend Web applications.

Tips to stay secure: Make sure servers and applications stay up-to-date with security patches. Also, educate developers on the risks of sloppy code and leverage a Web Application Firewall (WAF) to protect against bad code and software vulnerabilities. In addition, you should be logging relevant data from all your business-critical applications.

Security tools to mitigate vulnerabilities
As long as there are vulnerable systems on the Web, there are going to be denial-of-service attacks. And, though some DoS attacks can be difficult to defend against, there are ways to mitigate your risks to these types of cyberattacks.

First and foremost, ensure your systems are up-to-date with the latest patches. Patch management is one of the most critical processes in vulnerability management. You need to apply the latest security patches and updates to operating systems and applications, as well as firmware updates for your network devices, including routers and firewalls.

Next, continuously monitor your systems and devices. Start by creating a baseline and then monitor how the network is behaving to identify anomalies. Doing this successfully requires a solution that is capable of monitoring and correlating log event data throughout your environment and, very importantly, reacting in real time. This is where Security Information and Event Management (SIEM) solutions come into play. Log management solutions centrally collect and correlate logs from network and security devices, application servers, databases, etc., to provide actionable intelligence and a holistic view of your IT infrastructure's security.

Another important step is to ensure your firewalls and network devices are configured properly and that you have the appropriate rules and filters in place. Configuration and change management plays a vital role in protecting your network from unauthorized and erroneous changes that could leave your critical devices vulnerable.

Following these guidelines can go a long way in protecting your IT infrastructure and services. It's much better to implement precautionary measures up front to prevent an attack than to try and recover after one has occurred.
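The per-source rate limits recommended above reduce to counting events per IP per time window. A simplified sketch of threshold-based detection over parsed log records; the record format and threshold are assumptions, and production systems would do this streaming rather than in a batch.

```python
from collections import Counter

# Assumed shape: (timestamp_seconds, source_ip) tuples parsed from device logs.
events = [(0, "203.0.113.9")] * 1500 + [(0, "198.51.100.7")] * 40

MAX_PER_WINDOW = 1000  # illustrative packets-per-window threshold

def flag_floods(events):
    counts = Counter(ip for _, ip in events)
    return [ip for ip, n in counts.items() if n > MAX_PER_WINDOW]

# ['203.0.113.9'] -> candidate for rate limiting or blocking at the firewall.
print(flag_floods(events))
```

The same counting logic, applied to DNS query logs, also catches the "multiple queries from the same IP within a short amount of time" pattern mentioned in the DNS section.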
Artificial Intelligence (AI) has found its way into numerous industrial processes. Companies implement AI not just to accelerate production, but also to increase precision and efficiency. AI was only a buzzword a few decades ago, when the developments were not as far-reaching as they have become now, and it needed particular minds to both develop and operate it. Now, however, our modern IT industry is endowed with talented thinkers that can turn impossibility into reality.

With an increasing emphasis on DevOps, organizations are focusing on efficiency and better reliability. Multi-leveled and interwoven IT strategies require equally sharp eyes and a keen mind to notice and trace the critical events that trigger a specific function, and this is where real-time, centralized log analytics plays a vital role. AI helps to troubleshoot the main issues quickly and efficiently, while also predicting future problems.

AI has gone from being a buzzing luxury to becoming a necessity for industries today; AI is redefining the entire system of proceedings itself. It is being combined with human knowledge to create breakthroughs and opportunities that would have been impossible without its intervention. Even in IT, where the environment has become increasingly agile and dynamic due to DevOps, complex methodologies are being simplified through AI implementation.

Apart from procedural ease, AI enables IT professionals to gain insights into problems that are otherwise hard to trace. The immensely complicated DevOps process often falls outside the reach of the human mind. The operations involved need precision, pace, and big-data streaming, which are possible only with AI intervention. Thus, AI has become a powerful and essential tool for efficiently analyzing and taking over decision-making processes for better results. AI fills the gaps between human capability and big data through applications of operational intelligence. Additionally, AI speeds up troubleshooting and real-time decision-making.

AI's Cognitive Insights
One of the most groundbreaking pieces of AI technology applied in IT operations is Cognitive Insights (CI), which utilizes machine-learning algorithms to match human domain knowledge with log data, open source repositories, discussion forums, and social threads. Through this informational repertoire, CI forms relevant insights that contain solutions to a wide range of critical issues faced by DevOps teams on a daily basis. DevOps engineers face numerous challenges, which can be effectively attenuated by integrating AI into log analysis and other related operations.

There are several applications of Cognitive Insights. One is DDoS mitigation. Frequent attacks such as Distributed Denial of Service (DDoS) have become all the more prevalent. Threats which used to be limited to high-profile public websites and multinational organizations are now targeting small-scale servers, SMBs, and mid-sized enterprises. Having a centralized logging architecture to identify and pinpoint potential threats from numerous entries is essential for warding off such attacks. For this purpose, the application of anti-DDoS mitigation through Cognitive Insights has been highly effective. Leading organizations such as Dyn and British Airways sustained damage from DDoS attacks in the past and subsequently installed a full-fledged ELK-based anti-DDoS mitigation strategy to restrict hackers and secure their operations against future attacks.
Cognitive Insights can compile logs at a centralized point, with each entry carefully monitored and registered. It also provides the luxury of viewing the process flow clearly and executing queries on records from various applications, which increases overall efficiency. With Cognitive Insights it is becoming straightforward to pinpoint the small, yet potentially harmful, issues in vast streams of log data. The core of this program is based on the ELK stack, and it makes it easier to have a clear view of DevOps processes through data simplification and assortment.

Besides these cases, AI integration in DevOps can yield several other useful outcomes, including:
- AI-driven log analytics systems efficiently identify and resolve critical issues, which subsequently amplifies management and overall operational pace
- Improved customer success due to better results
- Easier monitoring and customer support
- Risk reduction and resource optimization
- Maximized efficiency by making logging data easily accessible

In other words, Cognitive Insights and other such artificial intelligence integrations can be of great help in log management and troubleshooting. They can quickly pinpoint issues from thousands of log entries, a task that is often time-consuming and error-prone when a human mind handles it.
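At its simplest, the log-to-knowledge matching described above pairs incoming log lines with known-issue signatures. The toy sketch below illustrates the idea; the signatures and advice strings are invented stand-ins for the mined repositories, forums, and threads a system like Cognitive Insights actually draws on.

```python
# Hypothetical knowledge base mapping log signatures to remediation advice.
KNOWN_ISSUES = {
    "connection refused": "Check that the upstream service is running and the port is open.",
    "out of memory": "Raise the container memory limit or fix the leak in the worker.",
}

def insights(log_lines):
    """Yield (log line, advice) pairs for lines matching a known signature."""
    for line in log_lines:
        for signature, advice in KNOWN_ISSUES.items():
            if signature in line.lower():
                yield line, advice

logs = ["ERROR 2024-01-02 worker: Out of memory while indexing batch 17"]
for line, advice in insights(logs):
    print(line, "->", advice)
```

Real systems replace the substring match with machine-learned clustering and ranking, but the input/output contract is the same: raw log lines in, contextualized insights out.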
What is Keystroke Logging and Keyloggers?

Keyloggers are built for the act of keystroke logging – creating records of everything you type on a computer or mobile keyboard. They are used to quietly monitor your computer activity while you use your devices as normal. Keyloggers have legitimate purposes, such as gathering feedback for software development, but they can be misused by criminals to steal your data.

Keystroke Logging Definition

The concept of a keylogger breaks down into two definitions:

- Keystroke logging: record-keeping for every key pressed on your keyboard.
- Keylogger tools: devices or programs used to log your keystrokes.

You'll find keyloggers in everything from Microsoft products to your own employer's computers and servers. In some cases, a spouse may have put a keylogger on a phone or laptop to confirm suspicions of infidelity. Worse cases have shown criminals implanting keylogger malware in legitimate websites, apps, and even USB drives. Whether used maliciously or legitimately, you should be aware of how keyloggers can affect you. First, we'll further define keystroke logging before diving into how keyloggers work; then you'll be better able to understand how to secure yourself from unwanted eyes.

How Keystroke Logging Works

Keystroke logging is the act of tracking and recording every keystroke entry made on a computer, often without the permission or knowledge of the user. A "keystroke" is any interaction you make with a button on your keyboard. Keystrokes are how you "speak" to your computers: each keystroke transmits a signal that tells your programs what you want them to do. The logged data may include:

- Length of the keypress
- Time of the keypress
- Velocity of the keypress
- Name of the key used

When logged, all this information is like listening to a private conversation. You believe you're only "talking" with your device, but another person listened in and wrote down everything you said. With our increasingly digital lives, we share a lot of highly sensitive information on our devices, and user behaviors and private data can easily be assembled from logged keystrokes. Everything from online banking access to social security numbers is entered into computers; social media, email, websites visited, and even text messages sent can all be highly revealing.

Now that we've established a keystroke logging definition, we can explain how it is tracked through keyloggers.

What Does a Keylogger Do?

Keylogger tools can be either hardware or software meant to automate the process of keystroke logging. These tools record the data sent by every keystroke into a text file to be retrieved at a later time. Some tools can also record everything on your copy-cut-paste clipboard, calls, GPS data, and even microphone or camera footage.

Keyloggers are a surveillance tool with legitimate uses for personal and professional IT monitoring. Some of these uses enter an ethically questionable grey area, while other keylogger uses are explicitly criminal. Regardless of the use, keyloggers are often deployed without the user's fully aware consent, with the expectation that users will behave as normal.

Types of Keyloggers

Keylogger tools are mostly built for the same purpose, but they have important distinctions in the methods they use and their form factor. Keyloggers come in two forms:

- Software keyloggers
- Hardware keyloggers

Software keyloggers are computer programs that install onto your device's hard drive.
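To show how little machinery a software keylogger actually needs – and therefore why an unattended or compromised machine is so exposed – here is a minimal, hedged Python sketch using the third-party pynput library (not mentioned in the article), which subscribes to keyboard events through the operating system's input APIs. It is for defensive education on hardware you own; running anything like it on someone else's device without consent may be illegal.

```python
# Educational sketch only: run solely on devices you own, with consent.
from pynput import keyboard  # pip install pynput

def on_press(key):
    # Each callback receives one keystroke event from the OS keyboard API.
    try:
        print(f"key pressed: {key.char}")     # printable characters
    except AttributeError:
        print(f"special key pressed: {key}")  # e.g. Key.enter, Key.shift
    if key == keyboard.Key.esc:
        return False  # returning False stops the listener

# The listener receives keyboard events system-wide until Esc is pressed.
with keyboard.Listener(on_press=on_press) as listener:
    listener.join()
```

A malicious version would differ mainly in writing the events to a hidden file or network socket instead of printing them, which is why the detection and prevention advice later in this article matters.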
Common software keylogger types include the following.

API-based keyloggers eavesdrop on the signals sent from each keypress to the program you're typing into. Application programming interfaces (APIs) allow software developers and hardware manufacturers to speak the same "language" and integrate with each other; API keyloggers quietly intercept keyboard APIs, logging each keystroke in a system file.

"Form grabbing"-based keyloggers capture all text entered into website forms once you submit it. The data is recorded locally before it is transmitted to the web server.

Kernel-based keyloggers work their way into the system's core to gain admin-level permissions. These loggers can bypass restrictions and get unrestricted access to everything entered into your system.

Hardware keyloggers are physical components built into or connected to your device. Some hardware methods can even track keystrokes without being connected to your device at all. For brevity, we'll cover the keyloggers you are most likely to have to fend off:

- Keyboard hardware keyloggers can be placed in line with your keyboard's connection cable or built into the keyboard itself. This is the most direct form of interception of your typing signals.
- Hidden camera keyloggers may be placed in public spaces like libraries to visually track keystrokes.
- USB disk-loaded keyloggers act as a physical Trojan horse, delivering keystroke-logging malware once connected to your device.

Uses for Keyloggers

To explain the uses of keyloggers, consider what keylogger activity is legally limited to. Four factors outline whether keylogger use is legally acceptable, morally questionable, or criminal:

- Degree of consent – is the keylogger used with 1) clear and direct consent, 2) permission hidden in obscure language in terms of service, or 3) no permission at all?
- Goals of the keystroke logging – is the keylogger being used to steal a user's data for criminal purposes, such as identity theft or stalking?
- Ownership of the product being monitored – is the keylogger being used by the device owner or product manufacturer to monitor its use?
- Location-based laws on keylogger use – is the keylogger being used with intent and consent in accordance with all governing laws?

Legal Consensual Keylogger Uses

Legal keylogger use requires the person or organization implementing it to:

- Involve no criminal use of the data.
- Be the product owner, manufacturer, or legal guardian of a child owning the product.
- Use it in accordance with their location's governing laws.

Consent is notably absent from this list: keylogger users don't have to obtain consent unless the laws of the area of use require it. Obviously, this is ethically questionable in cases where people are not made aware that they are being watched. In consensual cases, you may allow keystroke logging under clear language in terms of service or a contract – for example, any time you click "accept" to use public Wi-Fi or sign an employer's contract.

Here are some common legitimate uses for keyloggers:

- IT troubleshooting – to collect details on user problems and resolve them accurately.
- Computer product development – to gather user feedback and improve products.
- Business server monitoring – to watch for unauthorized user activity on web servers.
- Employee surveillance – to supervise safe use of company property on the clock.

You might find legal keyloggers in your daily life more often than you realized.
Fortunately, the power to control your data is often in your hands if the monitoring party has asked for access. Outside of employment, you can simply decline permission to the keyloggers if you so choose.

Legal Ethically Ambiguous Keylogger Uses

Non-consensual legal keylogger use is more questionable. While it violates the trust and privacy of those being watched, this type of use likely operates within the bounds of the laws in your area. In other words, a keylogger user can monitor computer products they own or made, and can even legally monitor their children's devices – but they cannot surveil devices outside of their ownership. This leaves a grey area that can cause problems for all involved. Without consent, people and organizations can use keyloggers for:

- Parental supervision of kids – to protect their children in their online and social activities.
- Tracking of a spouse – to collect activity on a device the user owns as proof of cheating.
- Employee productivity monitoring – to watch over employees' use of company time.

Even consent that has been buried under legal jargon in a contract or terms of service can be questionable, although it does not explicitly cross the line of legality either.

Criminal Keylogger Uses

Illegal keylogger use completely disregards consent, laws, and product ownership in favor of nefarious uses. Cybersecurity experts usually refer to this use case when discussing keyloggers. When used for criminal purposes, keyloggers serve as malicious spyware meant to capture your sensitive information. Keyloggers record data like passwords or financial information, which is then sent to third parties for criminal exploitation. Criminal intent can apply in cases where keyloggers are used to:

- Stalk a non-consenting person – such as an ex-partner, friend, or other individual.
- Steal a spouse's online account info – to spy on social media activity or emails.
- Intercept and steal personal info – such as credit card numbers and more.

Once the line has been crossed into criminal territory, keyloggers are regarded as malware. Security products account for the entire use-case spectrum, so they may not label discovered keyloggers as immediate threats; as with adware, the intent can be completely ambiguous.

Why Keystroke Logging is a Threat

The threat of keyloggers stems from the collection of sensitive data. When you are unaware that everything you type onto your computer keyboard is being recorded, you may inadvertently expose your:

- Credit card numbers.
- Financial account numbers.

Sensitive information like this is highly valuable to third parties, including advertisers and criminals. Once collected and stored, this data becomes an easy target for theft. Data breaches can expose saved keystroke logs even in legitimate use cases: the data can be leaked inadvertently via an unsecured or unsupervised device, through a phishing attack, or – more commonly – by a direct criminal attack with malware or other means. Organizations collecting mass keylogging data are prime targets for a breach.

Criminal use of keyloggers can collect and exploit your information just as easily. Once attackers have infected you with malware via a drive-by download or other means, time is of the essence: they can access your accounts before you even know that your sensitive data has been compromised.
How to Detect Keylogger Infections

At this point, you're probably wondering, "How do you know if you have a keylogger?" – especially since fighting keyloggers is a challenge in itself. If you end up with unwanted keystroke-logging software or hardware, you might not have an easy time discovering it on your device.

Keyloggers can be hard to detect without software assistance. Malware and various potentially unwanted applications (PUAs) can consume a lot of your system's resources: power use, data traffic, and processor usage can skyrocket, leading you to suspect an infection. However, keyloggers don't always cause noticeable computer problems like slow processes or glitches, and software keyloggers can be hard to detect and remove even by some antivirus programs. Spyware is good at hiding itself: it often appears as normal files or traffic, and it can potentially reinstall itself. Keylogger malware may reside in the computer operating system, at the keyboard API level, in memory, or deep at the kernel level itself.

Hardware keyloggers will likely be impossible to detect without physical inspection – it is very likely that your security software won't even be able to discover a hardware keylogging tool. And if your device manufacturer has built a hardware keylogger into the machine, you may need an entirely new device just to get rid of it.

Fortunately, there are ways to protect your computer from keyloggers:

- Detecting software keyloggers: whether you choose a free or a more comprehensive total-security package, run a full scan of your system and devices (a toy process-scan sketch follows after this list).
- Detecting hardware keyloggers: you might be lucky and merely have a USB drive or external hard drive with malicious material on it, in which case you simply remove the device by hand. An internal hardware keylogger would require a device teardown to discover; you may want to research your devices before buying and ask whether the manufacturer has included anything suspicious.
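As a small illustration of the scanning idea, the sketch below uses the third-party psutil library to enumerate running processes and flag any whose names match a watchlist of suspicious keywords. Real security suites do far more (signature databases, heuristics, kernel inspection); the keyword list here is a made-up example, not a reliable detector.

```python
import psutil  # pip install psutil

# Hypothetical watchlist for demonstration only; real products use
# signature databases and behavioral heuristics, not name matching.
SUSPICIOUS_KEYWORDS = ("keylog", "hook", "spy")

def scan_processes():
    """Yield (pid, name, exe) for processes whose name matches the watchlist."""
    for proc in psutil.process_iter(attrs=["pid", "name", "exe"]):
        name = (proc.info["name"] or "").lower()
        if any(word in name for word in SUSPICIOUS_KEYWORDS):
            yield proc.info["pid"], proc.info["name"], proc.info["exe"]

if __name__ == "__main__":
    hits = list(scan_processes())
    if hits:
        for pid, name, exe in hits:
            print(f"suspicious process {pid}: {name} ({exe})")
    else:
        print("no watchlist matches (this does not prove a clean system)")
```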
How to Prevent Keystroke Logging

Knowing how to detect a keylogger is only the first step towards safety. Proactive protection is critical to keeping your devices keylogger-free:

- Always read your terms of service or any contracts before accepting. You should know what you're agreeing to before you sign up, and researching user feedback on software you plan to install can provide helpful guidance as well.
- Install internet security software on all your devices. Malicious keyloggers generally make their way onto devices in software form; a security suite like Kaspersky Anti-Virus gives you an active shield against infections.
- Make sure your security programs are updated against the latest threats. Your security software needs every known keylogger definition to detect them properly; many modern products update automatically to protect against keylogger malware and other threats.
- Don't leave your mobile and computer devices unsupervised. If a criminal can steal your device, or even get their hands on it for a moment, that may be all they need. Hold on to your devices to help prevent keyloggers from being implanted.
- Keep all other device software updated. Your operating system, software products, and web browsers should all be up to date with the latest security patches. When an update is offered, download and install it as soon as possible.
- Do not use unfamiliar USB drives or external hard drives. Many criminals leave these devices in public places to entice you to take and use them. Once plugged into your computer or mobile device, they can infiltrate the system and begin logging.

No matter how you approach anti-keylogger protection, the best defense is to install a good anti-spyware product that protects against keylogging malware. Using a complete internet security solution with strong features to defeat keylogging is a reliable route to safety. Kaspersky Internet Security received two AV-TEST awards for the best performance and protection for an internet security product in 2021, showing outstanding performance and protection against cyberthreats in all tests.
The most successful and dangerous of all cyber-attacks is phishing. Research has found that 91% of all cyber-attacks start with a phishing email, and phishing remains the most common form of attack due to its simplicity, effectiveness, and high return on investment.

Phishing is a type of online scam in which criminals send fraudulent email messages that appear to come from a legitimate source. The email is designed to trick the recipient into entering confidential information (e.g., account numbers, passwords, PINs, birthdays) into a fake website by clicking on a link. The email may also include a link or attachment which, once clicked, steals sensitive information or infects the computer with malware. Cybercriminals use this information to commit identity fraud or sell it on to another criminal third party.

Traditionally, phishing attacks were launched through massive spam campaigns that indiscriminately targeted large groups of people. The aim was to trick as many people as possible into clicking a link or downloading a malicious attachment. As the general public has become more knowledgeable about these types of scams, however, attackers have become more sophisticated and targeted in their approach.

A successful phishing attack can result in:

- Identity theft
- Theft of sensitive data
- Theft of client information
- Loss of usernames and passwords
- Loss of intellectual property
- Theft of funds from business and client accounts
- Reputational damage
- Unauthorised transactions
- Credit card fraud
- Installation of malware and ransomware
- Access to systems to launch future attacks

How to Prevent Phishing

Our MetaPhish product has been designed to provide customers with a powerful defence against phishing attacks by training employees how to identify and respond appropriately to these threats. The software contains a library of smart learning experiences such as infographics, notices and training videos, and unlike other phishing solutions, it allows the user to communicate back to the administrator. MetaPhish enables organisations to find out just how susceptible their company is to fraudulent phishing emails and helps identify the users who require additional training.
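As an illustration of the kind of checks an email gateway or awareness tool might automate, here is a hedged Python sketch that applies a few simple heuristics to a URL found in an email: credentials embedded before an '@', a raw IP address instead of a domain, a lookalike of a known brand domain, or a non-HTTPS link. The brand list and example URLs are invented for the sketch, and these heuristics would produce false positives and negatives in practice.

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical brand list for the lookalike check; not from the article.
KNOWN_BRANDS = {"paypal.com", "google.com", "microsoft.com"}

def looks_suspicious(url):
    """Return a list of heuristic red flags for a URL found in an email."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    if parsed.username:
        # e.g. http://paypal.com@evil.example/ - the browser visits evil.example
        flags.append("credentials embedded before '@' in the URL")
    try:
        ipaddress.ip_address(host)
        flags.append("raw IP address instead of a domain name")
    except ValueError:
        pass
    for brand in KNOWN_BRANDS:
        name = brand.split(".")[0]
        if name in host and not host.endswith(brand):
            flags.append(f"possible lookalike of {brand}")
    if parsed.scheme != "https":
        flags.append("link is not HTTPS")
    return flags

if __name__ == "__main__":
    for url in ("http://paypal-secure.example/login",
                "https://www.paypal.com/signin"):
        print(url, "->", looks_suspicious(url) or "no flags")
```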
MIT Names Top 10 Breakthrough Technologies For 2018

MIT Technology Review unveils its breakthrough technology list for 2018 – a rundown of 10 awe-inspiring scientific and technological advances that have the potential to change our lives in dramatic ways. I spoke to editor David Rotman about why these particular breakthroughs made the cut, what makes them exciting – and why some of them raise important ethical concerns that will need to be addressed in the near future. He told me: "We select the list by asking each of our journalists: what are the most important new technologies they wrote about this year, and which will have a long-term impact? We're looking for fundamentally new advances in technology that will have widespread consequences."

1. 3D Metal Printing

We've all become used to 3D plastic printing over the last few years, and the ease it has brought to design and prototyping. Advances in the technology mean that instant metal fabrication is quickly becoming a reality, which clearly opens a new world of possibilities: the ability to create large, intricate metal structures on demand could revolutionize manufacturing. "3D metal printing gives manufacturers the ability to make a single or small number of metal parts much more cheaply than using existing mass-production techniques," Rotman says. "Instead of keeping a large inventory of parts, the company can simply print a part when the customer needs it. Additionally, it can make complex shapes not possible with any other method. That can mean lighter or higher-performance parts."

2. Artificial Embryos

For the first time, researchers have made embryo-like structures from stem cells alone, without using egg or sperm cells. This opens new possibilities for understanding how life comes into existence – but it clearly also raises vital ethical and even philosophical problems. Rotman told me: "Artificial embryos could provide an invaluable scientific tool in understanding how life develops. But they could eventually make it possible to create life simply from a stem cell taken from another embryo. No sperm, no eggs. It would be an unnatural creation of life placed in the hands of laboratory researchers."

3. Sensing City

At Toronto's Waterfront district, Google's parent company, Alphabet, is implementing sensors and analytics in order to rethink how cities are built, run, and lived in. The aim is to integrate urban design with cutting-edge technology in order to make "smart cities" more affordable, liveable and environmentally sustainable. Rotman says: "Although it won't be completed for a few years, it could be the start of smart cities that are cleaner and safer."

4. Cloud-Based AI Services

Key players here include Amazon, Google, IBM and Microsoft, which are all working on increasing access to machine learning and artificial neural network technology in order to make it more affordable and easier to use. Rotman told me: "The availability of artificial intelligence tools in the cloud will mean that advanced machine learning is widely accessible to many different businesses. That will change everything from manufacturing to logistics, making AI far cheaper and easier for businesses to deploy."

5. Duelling Neural Networks

This breakthrough promises to bestow AI systems with "imagination" by allowing them to essentially "spar" with each other. Work at Google Brain, DeepMind and Nvidia is focused on enabling systems that can create ultra-realistic, computer-generated images or sounds, beyond what is currently possible.
"Dueling neural networks describes a breakthrough in artificial intelligence that allows AI to create images of things it has never seen. It gives AI a sense of imagination," says Rotman. However, he also urges caution, as it raises the possibility of computers becoming alarmingly capable tools for digital fakery and fraud.

6. Babel-Fish Earbuds

Named for the science-fiction comedy concept introduced by Douglas Adams in The Hitchhiker's Guide to the Galaxy, these are earbuds utilizing instant online translation technology, effectively letting humans understand each other while communicating in different languages, in near real time. Rotman says: "Google's Pixel Buds mean that people can easily carry out a natural conversation with someone speaking a different language." Although the earbuds themselves are still at an early stage and reportedly do not yet function too well, anyone can access the underlying technology today through Google's voice-activated translation services on computers and mobile devices.

7. Zero-Carbon Natural Gas

New engineering methods make it possible to capture carbon released during the burning of natural gas, avoiding greenhouse emissions and opening up new possibilities for creating clean energy. Currently, 32% of electricity used in the US is produced by burning natural gas – a process which accounts for around 30% of carbon emissions from the power sector. 8 Rivers Capital, Exelon Generation and CB&I are highlighted as key players here. "The clean natural gas technology holds the promise of generating electricity from a cheap and readily available fossil fuel in a way that doesn't generate carbon emissions," Rotman says.

8. Perfecting Online Privacy

Blockchain-based privacy systems make it possible for digital transactions to be recorded and validated while protecting the privacy of the information and identities underlying the exchange. This makes it easier to disclose information without risking privacy or exposure to threats such as fraud or identity theft.

9. Genetic Fortune Telling

Huge advances are being made in predictive analytics using genomic data by players including Helix, 23andMe, Myriad Genetics, UK Biobank and the Broad Institute. This is making it possible to predict the chances of diseases such as cancer, or even IQ, by analyzing genetic data. It promises to be the next quantum leap in public health protection, but it also raises huge ethical concerns, including the risk of genetic discrimination. "Nothing like this has been possible before," says Rotman. "Genetic fortune telling will make it possible to predict the chances that you'll be smart or below average in intelligence. It will also make it possible to predict behavior traits. But how will we use that information? Will it change how we educate children and judge their potential?"

10. Materials' Quantum Leap

Using a seven-qubit quantum computer designed by IBM, researchers at Harvard have created the most complete simulation yet of a simple molecule. The molecule – beryllium hydride – is the biggest yet simulated by quantum computing. Rotman says: "The promise is that scientists could use quantum computers to design new types of materials and precisely tailor their properties. This could make it possible to design all sorts of miracle materials, such as more efficient solar cells, better catalysts to make clean fuels, and proteins that act as far more effective drugs."

MIT Technology Review's full report on the list of breakthrough advances can be seen here.
A complete guide to understanding, monitoring and fixing network packet loss

Unified Communications and Collaboration (UCC) has always been important to the working world, but the global adoption of hybrid working – and the monumental changes in culture and dynamics that came with it – has highlighted how vital it is that our UCC systems function properly. Organizations no longer work from a fixed address or rely on a wired connection to keep the lines of communication open. The worldwide implementation of VoIP and video as major communication solutions is making these changes possible. But all new technologies come with challenges, and one of the major hurdles IT teams face is network packet loss.

Packet loss describes packets of data failing to reach their destination after being transmitted across a network. It occurs when network congestion, hardware issues, software bugs, and a number of other factors cause packets to be dropped during data transmission. This comprehensive guide explains everything you need to know about the causes of packet loss in communication networks: we'll take an in-depth look at packet loss issues, the reasons for packet loss in networking, and how to fix it.

What is internet packet loss?

In any network environment, data is sent and received across the network in small units called packets. This applies to everything you do on the internet – emailing, uploading or downloading images or files, browsing, streaming, gaming, and voice and video communication. When one or more of these packets is interrupted in its journey, this is known as packet loss.

The Transmission Control Protocol (TCP) divides a file into efficiently sized packets for routing. Each packet is separately numbered and includes the destination's internet address. Individual packets may travel different routes, and when they arrive, the TCP at the receiving end reassembles them into the original file.

What causes packet loss?

Network congestion - The primary cause of network packet loss is congestion. All networks have space limitations, so in simple terms, network congestion is very much like peak-hour road traffic: think of the queues at certain times of day, such as early mornings and the end of the working day. Too much traffic crowding onto the same road becomes bottlenecked when it tries to merge, and the result is that it doesn't reach its destination on time. At peak times, when network traffic hits its maximum limit, packets are discarded and must wait to be delivered. Fortunately, most software is designed to either automatically retrieve and resend discarded packets or slow down the transfer speed.

Network hardware problems - The speed with which hardware becomes outdated or redundant these days is another major problem for your network. Hardware such as firewalls, routers, and network switches consumes a lot of power and can considerably weaken network signals. Organizations sometimes overlook the need to update hardware during expansions or mergers, and this can contribute to packet loss or connectivity outages.

Software bugs - Closely related to faulty hardware is buggy software running on a network device. Bugs or glitches in your system can disrupt network performance and prevent the delivery of packets. Hardware reboots and patches may fix bugs.
Overtaxed devices - When a network operates at a higher capacity than it was designed to handle, it weakens and becomes unable to process packets, and drops them. Most devices have built-in buffers that assign packets to holding patterns until they can be sent.

Wireless packet loss vs wired packet loss - As a rule, wireless networks experience more issues with packet loss than wired networks. Radio-frequency interference, weaker signals, distance and physical barriers like walls can all cause wireless networks to drop packets. With wired networks, faulty cables can be the culprit, impeding signal flow through the cable.

Security threats - If you're noticing unusually high rates of packet drop, the problem could be a security breach. Cybercriminals can hack into your router and instruct it to drop packets. Another way hackers can cause packet loss is to execute a denial-of-service (DoS) attack, preventing legitimate users from accessing files, emails, or online accounts by flooding the network with more traffic than it can handle. Packet loss can be difficult to fix in the middle of a full-blown security breach.

Deficient infrastructure - This highlights the importance of a comprehensive network monitoring solution. Some out-of-the-box monitoring tools were not engineered for the job they've been assigned and have limited functionality. The only way to deal effectively with packet loss is to deploy a seamless network monitoring and troubleshooting platform that can view your entire system from a single window. In a nutshell: a comprehensive network monitoring solution is the packet loss fix.

Ping and packet loss

When it comes to determining what constitutes a strong internet connection, and reducing random packet loss, there are three factors to consider: upload speed, download speed and ping.

Upload speed is how fast you can send data packets to others. Uploading is used when sending large files through email or when using video to chat with others. Upload speed is measured in megabits per second (Mbps).

Download speed is how fast you can pull data packets from the server to you. By default, connections are designed to download more quickly than they upload. Download speed is also measured in Mbps.

Ping is the reaction time of your connection – how quickly you get a response after sending out a request. A fast ping means a more responsive connection, which is especially important in real-time applications like gaming and voice and video calls. Ping is measured in milliseconds (ms). Anything below 20 ms is considered ideal, while anything over 150 ms results in noticeable lag. Note that even if your ping is good, you may still have issues with packet loss: although the data that does arrive is sent and received quickly, some of it may not be getting there at all.

The effects of packet loss

For users, packet loss can be more than annoying, particularly in real-time processes like VoIP and video conferencing. According to a QoS tutorial by Cisco, packet loss on VoIP traffic should be kept below 1%, and between 0.05% and 5% for video, depending on the type of video. Different applications are affected by packet loss in different ways. For example, when downloading data files, a 10% packet loss might add only one second to a ten-second download; with a higher loss rate or high latency, the delays get worse. Real-time applications like voice and video are affected far more severely: a loss as small as 2% is usually noticeable to a listener or viewer and can make a conversation stilted and unintelligible. The effects also differ by protocol: if a packet is dropped or not acknowledged, TCP is designed to retransmit it, whereas UDP has no retransmission capability and therefore doesn't handle packet loss as well.
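Because UDP does not retransmit dropped datagrams, a simple way to observe loss directly is to send numbered UDP packets and count the gaps at the receiver. The sketch below is a minimal, illustrative pair of roles in one script; the host, port, and packet count are arbitrary choices for the example.

```python
import socket
import sys
import time

HOST, PORT, COUNT = "127.0.0.1", 9999, 1000  # example values

def sender():
    """Send COUNT numbered datagrams; UDP gives no delivery guarantee."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(COUNT):
        sock.sendto(str(seq).encode(), (HOST, PORT))
        time.sleep(0.001)  # pace the packets slightly

def receiver():
    """Receive datagrams until a 5-second quiet period, then report loss."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(5.0)
    seen = set()
    try:
        while True:
            data, _ = sock.recvfrom(64)
            seen.add(int(data))
    except socket.timeout:
        pass
    lost = COUNT - len(seen)
    print(f"received {len(seen)}/{COUNT}; loss = {100 * lost / COUNT:.1f}%")

if __name__ == "__main__":
    # Start "recv" in one terminal, then run the sender in another.
    receiver() if sys.argv[1:] == ["recv"] else sender()
```

Over loopback you should see near-zero loss; across a congested or wireless path, the gap count makes the loss rate visible directly.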
Diagnosing and fixing packet loss

Everyone has experienced packet loss in voice calls, and this is where comprehensive network monitoring and troubleshooting comes into its own: it can quickly and accurately diagnose and identify the root causes of lost data packets, as in the following examples.

During a Teams call, the quality deteriorates, becomes distorted and patchy, or eventually drops out completely. Yet even though Teams may be having issues, you might still be able to communicate successfully using Zoom, Webex or WhatsApp. This is because of differences in the way each program transmits over the internet and in the routes the packets take.

You may be on a call with a perfect connection to a server in Springfield, IL, but find you're experiencing exceptionally high packet loss when connecting to a server in Richmond, VA. This would indicate problems with the pipeline between your location and the server in Richmond.

Do a ping test

A ping test is a diagnostic tool that provides data on how well an internet-enabled device communicates with another endpoint. A ping test can assess network delays or issues by sending an Internet Control Message Protocol (ICMP) packet – or ping – to a specific destination. ICMP packets contain only a tiny amount of information, so they don't use much bandwidth. When the ping reaches the device, that device recognizes it and replies to the originating device. The total time taken for the ping to arrive and return is recorded as the 'ping time' or 'round-trip time'. If the number of packets sent and received are not equal, some packets never arrived to or from your device. This inevitably leads to call quality issues such as choppy voices, extended silences and jumbled audio; a sketch of an automated ping-based loss check follows below.

Deep packet inspection

Any organization with a private network will have hundreds or even thousands of unique connections and data transfers every day. Deep Packet Inspection (DPI) is an in-depth way of examining and managing network traffic, and one of the most important tasks that network administrators need to carry out. DPI locates, identifies, blocks or re-routes packets containing specific data or code: it examines the contents of packets passing through a given point and determines what each packet contains. Most network packets are split into three parts:

- Header – contains instructions about the data carried by the packet, such as length, synchronization, packet number, protocol, and the originating and destination addresses.
- Payload – the actual data contents, or body, of the packet.
- Trailer – also referred to as the footer; tells the receiving device that it has reached the end of the packet.

Restart the router

The classic first aid for any connectivity problem is to restart the router – and the vast majority of the time, it works to your benefit.
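As a minimal illustration of automating the ping test described above, this Python sketch shells out to the system ping command and extracts the reported loss percentage. The flags shown are for Linux/macOS (-c for count); Windows uses -n, and the summary-line format varies by platform, so treat the regex as an assumption to adapt.

```python
import re
import subprocess

def ping_loss(host, count=20):
    """Run the system ping and return the reported packet-loss percentage."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],  # Linux/macOS; Windows uses -n
        capture_output=True, text=True,
    )
    # Typical summary: "20 packets transmitted, 19 received, 5% packet loss"
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    loss = ping_loss("example.com")
    if loss is None:
        print("could not parse ping output on this platform")
    elif loss > 1.0:
        print(f"warning: {loss}% loss (the Cisco VoIP guideline is below 1%)")
    else:
        print(f"packet loss {loss}% - within the VoIP guideline")
```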
What is User Datagram Protocol (UDP)?

User Datagram Protocol (UDP) is a communications protocol primarily used to establish low-latency, loss-tolerating connections between applications on the internet. UDP speeds up transmissions by enabling the transfer of data before any agreement is provided by the receiving party. As a result, UDP is beneficial in time-sensitive communications, including voice over IP (VoIP), domain name system (DNS) lookups, and video or audio playback.

Traceroute, packet loss and high latency

Traceroute is a command-line tool that comes with Windows and other operating systems. Along with the ping command, it's an important tool for understanding internet connection problems, including packet loss and high latency. If you're having trouble connecting to a website, traceroute can tell you where the problem is, and it can help visualize the path traffic takes between your computer and a web server.

Monitoring packet loss

Every network experiences some degree of packet loss, but what is acceptable? Prevention is better than cure, so the ability to easily monitor and measure packet loss is essential when implementing solutions. Network monitoring should be the first strategy you use to preserve and uphold the integrity of your network environment. Regularly scanning your devices will ensure that your routers can handle the load and that your system is equipped to prevent data loss.

Summary – addressing network packet loss

This guide was created to define network packet loss and to help identify, understand and troubleshoot the most common problems related to packet loss in computer networks. Network packet loss, jitter and latency have always been major obstacles to clear communication, but with the global shift to hybrid working, it's even more essential that your user experience is the best it can be. Today's UCC ecosystem is far more complex than it's ever been, and even a small amount of downtime, outages, or poor video and audio quality can severely affect an organization's bottom line.

IR's Collaborate suite of hybrid-cloud performance management tools brings together reliability, agility, and innovation to solve the complexities of managing the critical technologies that keep you in business. With the rapid shift to hybrid working, organizations all over the globe are tasked with managing increasingly complex unified communications environments to ensure the lines of communication are always open. In a complex, multi-vendor unified communications ecosystem, we help you avoid – and quickly find and resolve – performance issues in real time, across your on-premises, cloud or hybrid environments.

Prognosis UC Assessor is a 100% software-based solution that can find and fix problems before migration without the need for network probes. Ensure a positive end-user experience with one-click troubleshooting for all network issues affecting UC performance. Deployment is quick, generating insights within minutes of installation across multiple sites in your environment. You can improve IT efficiency with the ability to operate and troubleshoot your entire multi-vendor UC environment from a single viewing point, reduce costly outages and service interruptions with automated, intelligent alerts, and plan, deploy and migrate new technologies with confidence.
How does email get hacked?

There are several techniques used to gain access to an email account, whether through a stolen password or a backdoor. With the rate of technological advancement, new technologies such as deep machine learning and strong artificial intelligence have led to more sophisticated ways of hacking email. No email account is immune to hacking, so every company must educate its workforce on common hacking techniques and how to prevent them. In this article, I'll walk you through the main techniques hackers use to access your email. By the end, you will be well informed of the hackers' techniques, as well as the different tools and mechanisms you can use to prevent infiltration of your account.

1. How does email get hacked? By keylogging

Keylogging is a simple way to hack email passwords or accounts. It involves monitoring a user's activity and recording every keystroke typed on the computer keyboard, in most cases with the help of a spying tool known as a keylogger. No special skills are required to install such a program on a computer or network. Keyloggers operate in stealth mode: they are challenging to detect and can stay in a system for long periods without being identified. These spying programs can also be installed remotely, so the attacker does not have to gain physical access to the target's computer. Keylogging is arguably the most straightforward breaching technique hackers use to steal sensitive information, and apart from hacking email, it can also be used to spy on a target's phone calls, messages, and other valuable credentials.

Methods used by hackers to send keyloggers to computers

Recently, hackers have developed the tendency to embed keyloggers and other backdoors in software. At face value, it may seem like a legitimate mobile application, a PDF file, or a Flash Player update, but when the software is installed, the embedded keylogger installs as part of the application. Since the emergence of the corona outbreak, hackers have infiltrated more than 10 million emails, embedding keyloggers and remote-access Trojans in software that claims to track COVID spread. That's how hackers trick users into downloading malicious software.

Phishing emails are fake emails sent to target computers to lure users into a malicious course of action. The mail contains corrupted files with malware that promptly installs in the background when downloaded. This is the primary method used by hackers to spread Trojans and malware. Hackers also target work-from-home employees with phishing emails in an attempt to hack a corporate network. Most phishing emails prompt you to act immediately – a tactic you can use to identify them.

Hackers also use vulnerabilities and loopholes within a computer system or network infrastructure to inject a keylogger. Vulnerabilities, in most cases, are the result of running outdated software, add-ons, or plug-ins. Black hats identify vulnerabilities in web browsers and computers.

Phishing URLs may sit at the bottom of an article, in an app description, or behind fake software. These links redirect users to illicit websites – pornographic sites, sites that ask for donations, or malware-infected sites – which then install a keylogger on your system without your knowledge. Hackers also use malicious ads to send keyloggers to computers.
Malicious ads can also be found on legitimate websites that sell space to advertisers. In some cases, the ads install a keylogger when you click on them, while others install the keylogger when you close them. That's how easily hackers can send keyloggers to your phone and computer. After learning how hackers use these techniques to hack your email account, you should have a better understanding of how to prevent a keylogger infection:

- Avoid opening emails from unknown or malicious sources.
- Download and install applications and extensions from trusted publishers.
- Be cautious about the advertisements you click on.
- Always scan a URL before clicking to verify whether it's safe.
- Install software updates regularly.

All in all, it's your responsibility as a user to develop good browsing habits. However, there are also user-friendly tools that can help you avoid becoming the victim of a keylogger attack.

Tools to prevent a keylogger attack

Patch management automatically looks online for software updates for your computer. Vulnerabilities are one of the major gateways through which keyloggers are introduced into a system, and a patch management tool ensures that you always have the latest updates, with all security fixes, for your operating system.

A URL scanner employs AI to deep-scan websites and determine whether they are safe or malicious. All you have to do is highlight, copy, and paste the link into the space provided. It's one of the most reliable ways to avoid being redirected to malware-infected websites. Free online URL scanners include VirusTotal and Comodo Web Inspector.

Key encryption software can be used as an extra layer of protection by concealing the characters you type on the keyboard. It works by encrypting the keys with random numbers as they travel through the operating system; the disoriented characters make it difficult for keyloggers to capture the exact keys. This type of software protects you from a variety of malware.

Anti-malware software scans the files you download to prevent infiltration by malware. This is one of the key tools that can protect you against malware attacks. With rapid technological advancement, you should always go for the latest and most advanced anti-malware software, because sophisticated malware can get past traditional products.

2. How does email get hacked? By phishing

Compared to keylogging, phishing is a more complicated method of hacking email. Phishing involves the use of spoofed web pages designed to be identical to those of legitimate websites. When executing this malicious social engineering activity, hackers create fake login pages that resemble Gmail, Yahoo, or other service providers. If you key in your credentials on the fake login pages, black hats capture and steal them.

Phishers are smart enough to send you an email that looks just like one Gmail or Yahoo could have sent. These emails contain links asking you to update your email account information or change your password. In some cases, the online persona of someone you know closely is used to hoodwink you into providing your email login credentials. To successfully execute a phishing attack, an attacker will likely need considerable hacking knowledge and prior experience with scripting languages such as CSS and JSP/PHP. Phishing is considered a criminal offense in most jurisdictions.
Enabling two-factor authentication for your email is not, by itself, sufficient protection against phishing attacks. You need to be very vigilant before giving out your email credentials, no matter how convincing the situation might seem. Always double-check the web address an email originates from before dishing out your details. If you have never requested a password change, ignore any message prompting you to change, update or confirm your security details – these are scammers waiting to exploit you.

Warning signs of phishing attacks

Email from an unfamiliar sender: before opening a message you have just received, there are several details you can check to verify whether you are the target of a phishing attack or the email is legitimate. First, scrutinize the sender's details. If the message is from a source you have never interacted with before, check its legitimacy on the various online platforms.

The sender's email seems off: for instance, you may receive an email from [email protected], which resembles the address of Joseph Goast, who works at Logo Inc. Joseph might be a real person who works at Logo as stated, but his account details may have been manipulated by a hacker who aims to get your credentials and hack your email account. The company name might be misspelled, or the email could have the wrong ending, such as logo.cn instead of logo.com.

Other signs to look out for include:

- The style of the opening statement – if it seems oddly generic, take caution before clicking any link or downloading an attachment, as they may be corrupted.

3. How does email get hacked? By password guessing and resetting

Email accounts can also be hacked through password guessing, a social engineering technique exploited by many hackers. Password-guessing techniques work best against people you know or are close to. In this type of attack, an attacker aims to manipulate the target in an attempt to figure out their personal information. Password guessing and resetting require a witty person with impeccable thinking power who can almost read the victim's mind. For the attack to succeed, the attacker needs to know the target considerably well, which calls for A-class social skills. Black hats who use this technique tend to be colleagues, friends, or even family members. Such people may have in-depth knowledge about you – hobbies, lifestyle, habits, and even personal information such as your birthdate – which makes it easier to figure out your email password. They may also be able to answer the security questions used to reset your email's password.

4. How does email get hacked? By not logging out of the account

Always make sure to log out of your email after using a public device or PC. It's advisable to develop the habit of logging out every time you sign in on a shared device or public computer – or better, avoid signing in to your accounts on public PCs altogether. Avoid using computers at internet cafes and libraries to access personal accounts or corporate websites, as it's not easy to tell whether they are infected with keylogging spyware or malware.

5. How does email get hacked? By using easy passwords

Do not use the same password across multiple platforms. If you have been doing so, it's time to change and get unique login credentials for every website or service you use.
A good rule of thumb is to make the password no fewer than 16 characters, at least one of which should be a number or a special character (a simple checker for these rules is sketched at the end of this article). For the sake of memorability, you can base passwords on a complex sentence, with the first letter of each word serving as a character in the credentials. Hackers find it easy to crack email accounts with weak passwords through trial-and-error techniques. Several tools are available that use artificial intelligence and machine learning to monitor your activities and match your web activity; from such data, black hats can analyze and predict what you are likely to use as a password – so up your game.

6. How does email get hacked? By using an insecure Wi-Fi network to access your email account

Hackers easily bypass unsecured Wi-Fi network infrastructure and eavesdrop on or intercept the connection to get passwords and other valuable information. To avoid such incidents, only connect your devices to reputable networks that are password-protected and can be trusted. You can also use VPN services such as HMA! or AVG Secure VPN to secure and encrypt your connection.

7. Spammers harvested your email

Your email can be harvested by scammers if you list it publicly online in places such as blogs, online forums, and online ads. For the sake of your security, don't list your email address on such platforms – avoid such acts like the plague!

There you have it: the seven common ways in which your email can be hacked. Follow the advice above, and it will go a long way in preventing an email hack from befalling you. So stay alert!
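To make the 16-character guidance from point 5 concrete, here is a small, illustrative Python checker. The rules encoded (length of at least 16, plus at least one digit or special character) follow the article's rule of thumb, together with the sentence-based memorization trick; real password policies should also consult breach lists and a proper strength estimator.

```python
import string

def check_password(password):
    """Check the article's rule of thumb: 16+ chars, a digit or special char."""
    problems = []
    if len(password) < 16:
        problems.append("shorter than 16 characters")
    if not any(c.isdigit() or c in string.punctuation for c in password):
        problems.append("contains no digit or special character")
    return problems

def from_sentence(sentence):
    """Build a password from the first character of each word in a sentence."""
    return "".join(word[0] for word in sentence.split())

if __name__ == "__main__":
    # Sentence-derived example: memorable source, hard-to-guess result.
    candidate = from_sentence(
        "My first car cost 900 dollars & broke down in just 14 weeks!")
    print(candidate)                   # "Mfcc9d&bdij1w" - only 13 characters
    print(check_password(candidate))   # flags the length rule
    print(check_password("correct-horse-battery-staple-42"))  # passes: []
```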
The Basics of DNS

DNS is the acronym for Domain Name System. A primary purpose of DNS is to translate between hostnames (alphabetic names) and IP addresses inside a network (Kralicek, 2016). DNS is an essential component of the Internet because this conversion creates a much more user-friendly experience: without DNS, users would have to navigate the Internet using numeric (IPv4) or hexadecimal (IPv6) IP addresses. It is much easier to remember hostnames, which usually consist of easily remembered words. An example of a hostname is Amazon.com. One of the IPv4 addresses associated with Amazon.com is 126.96.36.199; for humans, the hostname is easier to remember than the IPv4 address. There is often a need to remember dozens of web addresses, so DNS is essential. DNS has evolved into a worldwide network of databases that resolves IP addresses to support internet traffic, and it works with both IPv4 and IPv6.

IPv4 was invented in the 1970s. IPv4 addresses are 32-bit numbers, providing about 4.3 billion different address combinations. A 32-bit address is written as four numbers separated by periods, as shown in the Amazon.com example above, and each of the four numbers can have a value from 0 to 255. IPv4 is a classful network architecture: there are five classes, but only three are commonly used by hosts on networks. Large organizations such as governments, large universities, large businesses, and large Internet Service Providers use Class A network addresses; mid-sized companies and organizations use Class B; and small organizations, businesses, and home offices use Class C (Panek, 2020).

IPv6 was developed in the 1990s. The need for IPv6 was driven by the expectation that the roughly 4.3-billion-address capacity of IPv4 would be exhausted by the ever-increasing number of devices that require addresses. IPv6, which replaces IPv4, solves the address exhaustion problem by using a 128-bit address space instead of IPv4's 32 bits. This larger address space gives IPv6 the capability of providing exponentially more addresses than IPv4 – about 3.4 undecillion (Kralicek, 2016). IPv6 addresses are divided into eight groups that each contain four hexadecimal digits; every hexadecimal digit represents four bits. The preferred form is x:x:x:x:x:x:x:x, where each x is a 16-bit section represented by up to four hexadecimal digits, with the sections separated by colons (Cisco Press, 2017).
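To see both address families in practice, the following Python sketch uses the standard library's socket.getaddrinfo to resolve a hostname's IPv4 and IPv6 addresses through the system resolver. The hostname is just an example; the results depend on your resolver and on whether the site publishes IPv6 records.

```python
import socket

def resolve(hostname):
    """Resolve a hostname to its IPv4 (A) and IPv6 (AAAA) addresses."""
    families = {"IPv4 (A)": socket.AF_INET, "IPv6 (AAAA)": socket.AF_INET6}
    for label, family in families.items():
        try:
            infos = socket.getaddrinfo(hostname, None, family)
            addresses = sorted({info[4][0] for info in infos})
            print(f"{label}: {', '.join(addresses)}")
        except socket.gaierror as err:
            # e.g. no AAAA record published, or no IPv6 connectivity
            print(f"{label}: lookup failed ({err})")

if __name__ == "__main__":
    resolve("example.com")
```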
Some Advantages of IPv6 over IPv4

Beyond the increase in available address space, IPv6 has some additional advantages over IPv4. In the 1970s, when IPv4 was created, there was less focus on security than there is today: security had to be retrofitted onto IPv4, while IPv6 was designed with native security baked in. IPv6 utilizes IPsec to provide end-to-end packet encryption that ensures data is transmitted across the network securely.

Another advantage of IPv6 is that it eliminates the need for Network Address Translation (NAT). NAT for IPv4 is a method of dealing with the limited number of available IP addresses. NAT works on routers that sit between two networks: it translates private addresses used on a local network into globally unique addresses that can be forwarded to other networks, so only a single address is advertised by the router connecting the network to the outside world. When incoming packets are received, NAT translates again to ensure that each packet is delivered to the correct device within the network. Since IPv6 eliminates the problem of limited address space, it removes the need for NAT. This is an advantage because it removes a point of failure, and because less processing is needed, yielding more efficiency and potentially higher data transmission speeds.

IPv6 also has configuration advantages over IPv4. In IPv4, network administrators assign IP addresses manually or use the Dynamic Host Configuration Protocol (DHCP), which assigns temporary IP addresses automatically from a pool; the addresses return to the pool for reassignment after the "IP lease" expires. IPv6 allows IP addresses to be assigned automatically using Stateless Address Autoconfiguration (SLAAC) (Hagen, 2014). With SLAAC, a new device added to a network can obtain its own IP address without the need for DHCP.

IPv4 supports broadcast transmissions, while IPv6 supports multicast. Broadcast is the sending of data packets to all users on a network, without individually addressing the packets and without requiring a response from the users; in IPv4, a broadcast is sent using a broadcast address. IPv6, by contrast, was designed for multicast, which sends data to a set of hosts predetermined by adding the host addresses to multicast groups (Juniper, 2021). Multicast is more efficient than broadcast because the sender selects who receives the transmission. This improves efficiency within the network, since nodes do not need to continuously listen for and receive broadcast traffic that might not be relevant to them.

Quality of Service (QoS) is another differentiator between IPv4 and IPv6. QoS is used to control traffic so that performance is guaranteed for specific applications. QoS is applied to bandwidth-intensive applications like Voice over Internet Protocol (VoIP), a protocol that allows phones to work over the network, replacing traditional Plain Old Telephone Service (POTS) phones. If data transmission performance is low (i.e., there is latency or jitter), VoIP voice quality suffers. With IPv4, QoS data is included in the packet and routers are configured to prioritize critical traffic (like VoIP traffic); IPv6 has QoS built in.

Differences between IPv4 DNS and IPv6 DNS

The shift from IPv4 to IPv6 does not change the user experience when it comes to DNS. With IPv6, the user still enters the same hostnames, and the IP address is resolved in the background, just as with IPv4. Configuring IPv6 DNS is also very similar to configuring IPv4 DNS. There are two types of lookup zones in DNS: forward zones and reverse zones. Forward lookup zones translate hostnames to IP addresses, while reverse lookup zones translate IP addresses to hostnames. In IPv4, forward lookups are served by 'A' records, which are only designed to hold 32-bit IP addresses. Since IPv6 addresses are 128 bits, DNS needed a record type that would accommodate the larger addresses; the answer came with the introduction of the 'AAAA' (quad-A) record (Liu, 2011). Berkeley Internet Name Domain (BIND), open-source software commonly used for DNS servers, currently supports IPv6 and 'AAAA' records. The reverse-zone naming used for IPv6, described next, is illustrated in the sketch below.
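The reverse-zone naming scheme can be generated directly with Python's standard ipaddress module, whose reverse_pointer attribute builds the in-addr.arpa (IPv4) or ip6.arpa (IPv6) name used for reverse lookups. The addresses below are documentation-range examples, not addresses from the paper.

```python
import ipaddress

# Documentation-range example addresses (RFC 5737 / RFC 3849).
for text in ("192.0.2.1", "2001:db8::1"):
    addr = ipaddress.ip_address(text)
    # IPv4 -> "1.2.0.192.in-addr.arpa" (octets reversed)
    # IPv6 -> every hex nibble reversed, ending in ".ip6.arpa"
    print(f"{text} -> {addr.reverse_pointer}")
```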
Reverse zone lookups translate IP addresses to hostnames. IPv6 uses the IP6.ARPA domain to accomplish reverse zone lookups (Pete, 2004). ARPA is the acronym for Address and Routing Parameters Area. Similarly, IPv4 uses the IN-ADDR.ARPA domain for this reverse lookup function.

Advantages of IPv6 DNS

The primary advantage of IPv6 DNS is that it enables the benefits that IPv6 has over IPv4. These include the ample address space, the elimination of NAT, configuration advantages, multicast enablement, QoS, and so on. Another advantage of IPv6 DNS is that it is more secure than IPv4 DNS.

Disadvantages of IPv6 DNS

A disadvantage of IPv6 DNS is that it is not backward compatible with IPv4. Since the IPv6 rollout is a slow process, lasting many years, DNS servers need to respond to both IPv6 and IPv4 requests. This requirement results in less efficiency until the IPv6 conversion is complete.

IPv6 may reduce the practice of subnetting. Subnetting is often used in IPv4 to segment networks to increase the efficiency of the available IP space. Since IPv6 has an exponentially higher number of IP addresses available, system administrators may reduce this practice. Subnetting has the side effect of reducing unnecessary web traffic, so less subnetting would result in the disadvantage of an increased traffic load on DNS servers.

Since IPv6 does not need or allow for NAT, a security feature existing in NAT does not apply to IPv6. NAT hides internal network IP addresses and port numbers so they are not visible to the outside world. The fact that IPv6 does not allow for this could be considered a disadvantage, although this is arguable, since hiding internal network IP addresses is not regarded as a robust security feature.

As mentioned, IPv6 uses SLAAC to assign IP addresses automatically. Using SLAAC, the IPv6 end nodes choose their own IP addresses. An issue arises because the DNS servers still need reverse DNS records for the IPs selected using SLAAC, but these records are not available to the DNS servers (Internet Society, 2014). Several options have been recommended and implemented for overcoming this issue, so this disadvantage is no longer relevant.

How IPv6 May Change the Way Networks Use DNS

The IPv6 advantages of eliminating NAT and increased IP space, along with the proliferation of new connected IoT devices, will lead to massively increased traffic to DNS servers. This increase will likely require the DNS server infrastructure to scale up to meet the demand; more processing power and storage will be required. The DNS hierarchy is a tree that consists of managed zones with root servers at the top. Due to limitations in IPv4, there are only 13 root server addresses, but there are over 600 different root servers distributed across the world. The increase in internet traffic and the removal of the limitations of IPv4 may also lead to the implementation of additional root server addresses.

References

Hagen, S. (2014). IPv6 Essentials (3rd ed.). O'Reilly.
Kralicek, E. (2016). Accidental SysAdmin Handbook. Sybex.
Liu, C. (2011). DNS and BIND on IPv6. O'Reilly.
Panek, C. (2020). Networking Fundamentals. Springer Nature.
Pete, L. (2004). IPv6: Theory, Protocol, and Practice (2nd ed.). Morgan Kaufmann.
DNS considerations for IPv6. (2014, June 14). Internet Society. https://www.internetsociety.org/resources/deploy360/2014/dns-considerations-for-ipv6/
IPv6 address representation and address types. (2017, October 3). Cisco Press.
https://www.ciscopress.com/articles/article.asp?p=2803866
Multicast protocols user guide. (2021, January 13). Juniper. https://www.juniper.net/documentation/us/en/software/junos/multicast/topics/concept/multicast-ip-overview.html
<urn:uuid:fc1a5751-f6e4-4145-b6c3-275e17b772dc>
CC-MAIN-2022-40
https://cyberexperts.com/ipv4-dns-vs-ipv6-dns/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00236.warc.gz
en
0.904825
2,398
4.09375
4
Doxxing is the act of revealing identifying information about someone online, such as their real name, home address, workplace, phone, financial, and other personal information. That information is then circulated to the public without the victim's permission.

While the practice of revealing personal information without one's consent predates the internet, the term doxxing first emerged in the world of online hackers in the 1990s, where anonymity was considered sacred. Feuds between rival hackers would sometimes lead to someone deciding to "drop docs" on somebody else, who had previously only been known as a username or alias. "Docs" became "dox" and eventually became a verb by itself (without the prefix "drop").

The definition of doxxing has grown beyond the hacker community and now refers to personal information exposure more generally. While the term is still used to describe the unmasking of anonymous users, that aspect has become less relevant today, when most of us use our real names on social media.

Doxxing attacks can range from the relatively trivial, such as fake email sign-ups or pizza deliveries, to the far more dangerous ones, like harassing a person's family or employer, identity theft, threats, other forms of cyberbullying, or even in-person harassment. Worse still, when someone is doxxed, they can become the target of a swatting attack. This is where a malicious person reports a bomb threat at the doxxed location or, worse, calls police to report an active shooter, sending a SWAT team onsite with guns drawn. This can have tragic consequences, as seen in this article here.

What does this mean for an SMB or MSP?

CyberHoot's Minimum Essential Cybersecurity Recommendations

The following recommendations will help you and your business stay secure against the various threats you may face on a day-to-day basis. All of the suggestions listed below can be gained by hiring CyberHoot's vCISO Program development services.

- Govern employees with policies and procedures. You need a password policy, an acceptable use policy, an information handling policy, and a written information security program (WISP) at a minimum.
- Train employees on how to spot and avoid phishing attacks. Adopt a Learning Management System like CyberHoot to teach employees the skills they need to be more confident, productive, and secure.
- Test employees with phishing attacks to practice. CyberHoot's phish testing allows businesses to test employees with believable phishing attacks and put those that fail into remedial phish training.
- Deploy critical cybersecurity technology, including two-factor authentication on all critical accounts. Enable email SPAM filtering, validate backups, and deploy DNS protection, antivirus, and anti-malware on all your endpoints.
- In the modern work-from-home era, make sure you're managing personal devices connecting to your network by validating their security (patching, antivirus, DNS protections) or prohibiting their use entirely.
- If you haven't had a risk assessment by a 3rd party in the last 2 years, you should have one now. Establishing a risk management framework in your organization is critical to addressing your most egregious risks with your finite time and money.
- Buy cyber-insurance to protect you in a catastrophic failure situation. Cyber-insurance is no different than car, fire, flood, or life insurance. It's there when you need it most.
Each of these recommendations, except cyber-insurance, is built into CyberHoot's product and virtual Chief Information Security Officer services. With CyberHoot you can govern, train, assess, and test your employees. Visit CyberHoot.com and sign up for our services today. At the very least, continue to learn by enrolling in our monthly cybersecurity newsletters to stay on top of current cybersecurity updates.

For more info, watch this X min video on this Cybrary term.

CyberHoot has some other resources available for your use. Below are links to all of our resources; feel free to check them out whenever you like:

- Cybrary (Cyber Library)
- Press Releases
- Instructional Videos (HowTo) – very helpful for our SuperUsers!

Note: If you'd like to subscribe to our newsletter, visit any link above (besides infographics), enter your email address on the right-hand side of the page, and click 'Send Me Newsletters'.
<urn:uuid:fb8fac00-bea1-4a86-8c16-d63839096bfe>
CC-MAIN-2022-40
https://cyberhoot.com/cybrary/doxxing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00236.warc.gz
en
0.929045
993
2.96875
3
PCI DSS is short for Payment Card Industry Data Security Standard. Every party involved in accepting credit card payments is expected to comply with the PCI DSS. The PCI Standard is mandated by the card brands but administered by the Payment Card Industry Security Standards Council (PCI SSC). The standard was created to increase controls around cardholder data to reduce credit card fraud.

The PCI Security Standards Council's mission is to enhance global payment account data security by developing standards and supporting services that drive education, awareness, and effective implementation by stakeholders. Compliance helps a company uphold a positive image and build consumer trust. This in turn builds consumer loyalty, since customers are more likely to return to a service or product from a company they consider trustworthy.

What exactly is PCI DSS?

PCI DSS is an international security standard that was developed in cooperation between several credit card companies. The PCI DSS tells companies how to keep their card and transaction data safe. When the PCI DSS was published in 2004, it was expected that organizations would achieve effective and sustainable compliance within about five years. Some 15 years later, less than half of organizations maintain programs that prevent PCI DSS security controls from falling out of place within a few months after formal compliance validation. According to the 2019 Verizon Payment Security Report, PCI sustainability has been trending downward since 2017.

An increase in online transactions

One of the side effects of the COVID-19 pandemic has been an increase in online transactions. As more people worldwide have started to work from home and practice social distancing to combat the spread of COVID-19, businesses must prepare to handle a higher percentage of online transactions. After all, it is likely that these online customers will continue to shop online once they learn to appreciate the ease of use, especially if they are confident about the security of their online transactions. However, with this rise in the frequency of digital payments comes the increased threat of data breaches and digital fraud.

The elements of compliance

A recent Bank of America report states that small businesses are protecting themselves by implementing industry security standards, like PCI compliance. Specifically, PCI Compliance Requirement 5 indicates that you must protect all systems against malware and regularly update anti-malware software. PCI DSS Requirement 5 has four distinct elements that imply they need to be addressed daily:

- 5.1: For a sample of system components, including all operating system types commonly affected by malicious software, verify that anti-malware software is deployed.
- 5.2.b: Examine anti-malware configurations, including the master installation of the software, to verify anti-malware mechanisms are configured to perform automatic updates and periodic scans.
- 5.2.d: Examine anti-malware configurations, including the master installation of the software and a sample of system components, to verify that the anti-malware software's log generation is enabled and logs are retained per PCI DSS Requirement 10.7.
- 5.3.b: Examine anti-malware configurations, including the master installation of the software and a sample of system components, to verify that the anti-malware software cannot be disabled or altered by users.
Basically, this boils down to our regular advice pillars:

- Make sure software (including anti-malware) is updated.
- Perform automatic and/or periodic scans for malware.
- Log and retain the results of those scans.
- Make sure protection software (especially anti-malware) can't be disabled.

Common problems and objections

The first requirement (5.1) requires an organization to maintain an accurate inventory of its devices and the operating systems on those devices. However, configuration management database (CMDB) solutions are notorious for not being completely implemented. As a result, it can be quite an exercise to determine whether anti-malware software is installed on every system that needs it. If that is the case, look for a solution that provides an inventory of protected endpoints for you. You can use such an inventory to audit your CMDB and verify compliance.

The next hurdle with requirement 5.1 is that we still run into pushback from macOS and Linux users and administrators over their need to run an antivirus solution. Yet a review of the CVE database debunks those claims. Yes, these OSes have fewer vulnerabilities than Windows. However, they would still be "commonly affected," given the number of vulnerabilities and the frequency with which those vulnerabilities are published. And as we have reported in the past, Mac threat detections are on the rise and actually outpace Windows in sheer volume. Using a solution that can cover all the operating systems in use in your organization can help you organize and control all your devices without adding extra software.

Sometimes you will get pushback from server administrators who swear that any antivirus solution takes too much CPU to run and adversely affects server performance. While the situation is getting better, we still regularly encounter people who make this claim but then fail to provide documented proof. (Not that we don't believe them, as there are several legacy antivirus programs that can adversely affect performance.) However, in most cases, the person is making these claims based on past experiences and not on trials of a more contemporary solution. No matter how you look at this, you will have to deploy anti-malware on Windows, macOS, and Linux server endpoints to meet the PCI DSS.

Why compliance matters

Data from the Verizon Threat Research Advisory Center (VTRAC) demonstrates that a compliance program without the proper controls to protect data has a more than 95 percent probability of not being sustainable and is more likely to be the potential target of a cyberattack. The costs of a successful cyberattack are not limited to liabilities and loss of reputation. There are also repairs to be made, and reorganizations may be necessary, especially when you are dealing with ransomware or a data breach. A data breach also involves lost opportunities and competitive disadvantages that are near impossible to quantify. The 2019 IBM/Ponemon Institute study calculated the cost of a data breach at $242 per stolen record, and more than $8 million for an average breach in the US. Ransomware is the biggest financial threat of all cyberattacks, causing an estimated $7.5 billion in damage in 2019 in the US alone.

For those companies engaged in online transactions, reputational damage can be fatal. Imagine customers shying away from the payment portal as soon as they spot your logo. PCI compliance, then, is not just a regulation: it could quite literally save your company's bacon. So stay safe (which in this case means staying compliant)!
<urn:uuid:fa318d86-8136-459d-9c3c-1a125c6c179b>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2020/09/pci-dss-compliance-why-its-important-and-how-to-adhere
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00236.warc.gz
en
0.946638
1,373
2.53125
3
What is Cardinality?

The broad definition of cardinality is the number of elements in a set.

What is cardinality in a database?

In a database context, cardinality refers to the number of unique values in a relational table column relative to the total number of rows in the table. The cardinality of a column is assessed and stored in system tables for optimizer use when the database administrator (DBA) runs statistics.

Why is it important in databases?

The cardinality of a column is very important to database designers and the database query optimizer. For the designer or DBA, knowing that a column mainly contains repeating values indicates that it is a poor candidate for an index, as an index on it will not be very selective. For a cost-based query optimizer, the selectivity of a potential index dictates whether it will be used or ignored. Creating and maintaining indexes is expensive in terms of CPU and IO resource usage, so designers and developers need to ensure they create ones that will get used.

Types of cardinality in databases

Database designers map the degree of relationship between entities. An entity can have a one-to-many or one-to-one relationship with another entity. For example, one storage container may have one lid, making a one-to-one relationship. One doctor might have many patients, forming a one-to-many relationship. This is known as relationship cardinality.

Data cardinality refers to the uniqueness of the values contained in a database column. If most of the values are distinct, the column is considered to have high cardinality. If the column contains mostly repeated values, it is a low cardinality column. When partitioning a table based on ranges of data values, low cardinality can lead to data skew, resulting in uneven data distribution across partitions. This isn't good, because you want to balance resource usage across all the available processors, not just a subset.

High and low cardinality

A column that is populated with mostly distinct values is known as a high cardinality column. A low number of distinct values in a column makes it a low cardinality column. When selecting a column to index or to use as the basis for a partitioning key, you are looking for high cardinality candidates. Similarly, a database query plan will use an available index if a column contains distinct values. In terms of database performance tuning, a low cardinality column can result in a full table scan operation, which is the most expensive (in terms of resource usage) way to query a table.

Cardinality and modality

When measuring the number of associations between two or more table columns or rows, we use the term cardinality; the focus is on the maximum number of associations. Modality focuses on the minimum number of relationships between entities or table rows. The modality of a relationship is 0 if the relationship is optional, while the modality is 1 if an occurrence of the relationship is mandatory.
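To make cardinality and selectivity concrete, here is a minimal Python sketch (the column names and sample data are invented for illustration) that computes the distinct-value ratio an optimizer reasons about when deciding whether an index is worth using:

def column_stats(values):
    # Cardinality is the number of distinct values; selectivity is that
    # count divided by the total number of rows (1.0 = every value unique).
    total = len(values)
    distinct = len(set(values))
    return distinct, distinct / total if total else 0.0

# Hypothetical columns from a 'users' table.
user_ids = list(range(10_000))                 # every value unique
countries = ["US", "DE", "IN", "BR"] * 2_500   # only four repeated values

for name, col in [("user_id", user_ids), ("country", countries)]:
    card, sel = column_stats(col)
    print(f"{name}: cardinality={card}, selectivity={sel:.4f}")

# user_id: cardinality=10000, selectivity=1.0000 -> good index candidate
# country: cardinality=4,     selectivity=0.0004 -> poor index candidate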
<urn:uuid:62192564-7222-416a-b401-699ad4e66c50>
CC-MAIN-2022-40
https://www.actian.com/what-is-cardinality/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00236.warc.gz
en
0.897815
604
3.65625
4
In an earlier blog post about DNS hijacks, we briefly touched on the hosts file. The hosts file is like your speed dial directory for the internet. Some systems only have a few numbers stored and others have lots of entries. What if someone was able to change that directory, and you end up calling a one dollar per second number when you wanted to call a relative? Basically, that is what we will discuss here.

Where is the hosts file located?

The actual location of the hosts file is stored in the registry under the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, in the value DataBasePath. By default, this file's folder location is (and has been since Windows NT/2000) %systemroot%\SYSTEM32\DRIVERS\ETC, where %systemroot% is usually the C:\Windows directory.

What kind of file is it?

The hosts file does not have an extension, but it can be viewed by opening it with Notepad (or something similar). To replace or alter the hosts file, you will need Administrator privileges, but every user has "Read" permissions. Before resolving an internet request (to look up the IP that belongs to a domain name), Windows looks in the hosts file to see if there is a predefined entry for that domain name (the speed dial, remember?).

Possible reasons to change the hosts file

These predefined entries in the hosts file can exist for several reasons:

- Blocking: some people (who are oftentimes unaware that hosts files can be installed by their security programs) use them to block unwanted sites by connecting malicious or otherwise unwanted domains to the IPs 127.0.0.1 or 0.0.0.0, which both point at the requesting system itself, so in effect there will be no outgoing traffic for these requests.
- Pointing: for example, system administrators use the hosts file to map intranet addresses.
- To block detection by security software: for example, by blocking the traffic to all the download or update servers of the most well-known security vendors.
- To redirect traffic to servers of their choice: for example, by intercepting traffic to advertisement servers and replacing the advertisements with their own.

Recent examples

One of the more blatant and ruthless methods to abuse someone else's hard work is used by an adware that steals the hosts file arguably used most for ad blocking purposes and changes it to redirect all the traffic to its own server. The hosts file in question is the MVPS hosts file, and it is altered by an adware calling itself "Pakistani Girls Mobile Data". In this screenshot, you can see the original on the left and the altered copy on the right:

[Screenshot: the original MVPS hosts file beside the altered copy]

The malware authors didn't even bother to remove the header. They replaced the IP 0.0.0.0 with their own 188[DOT]138[DOT]17[DOT]135 and left it at that. Please note that the system on which the malware planted this changed hosts file did not have the MVPS hosts file before the infection; it was equipped with the default Windows hosts file. So the malware did not alter a hosts file that existed on the system, but planted a hosts file that it downloaded and altered first.

Another one that caught my attention is one that we have discussed before for another reason. At that point, I dubbed it Dotdo audio. This browser hijacker uses a lot of tricks, and one of them is semi-randomized file and folder names. And, in what is most likely an attempt to stop people from checking their files in an online virus scan, they have decided to reroute the traffic to Virustotal.com.
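If you want to verify where your own system is actually looking, the check is easy to script. The following sketch (Windows-only, using Python's standard winreg module; the "suspicious" heuristic is just an example) reads the DataBasePath value described above and prints any hosts entries that map a name to something other than localhost:

import os
import winreg  # Windows-only standard library module

KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

# Read the folder that Windows actually uses for the hosts file.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    db_path, _ = winreg.QueryValueEx(key, "DataBasePath")

hosts_path = os.path.join(os.path.expandvars(db_path), "hosts")
print(f"Active hosts file: {hosts_path}")

LOCAL = {"127.0.0.1", "0.0.0.0", "::1"}

with open(hosts_path, encoding="ascii", errors="replace") as fh:
    for line in fh:
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        ip, *names = line.split()
        if ip not in LOCAL:
            # Anything mapped to a remote IP deserves a closer look.
            print(f"Suspicious entry: {ip} -> {', '.join(names)}")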
Special mention

One hosts hijack deserves some extra attention, simply because of the complexity of the method that is used. Some variants of Shopperz have patched the Microsoft dnsapi.dll file in such a way that it points to a different hosts file. So if you looked at your hosts file, you would see nothing wrong, but the system would be looking at a completely different file when it does the lookups.

Summary

The hosts file is the internet variant of a personal phonebook. We discussed a few malware variants that replace or change that phonebook, so you end up calling the wrong sites. The ones they want you to call.

Pakistani-Girls-Mobile-Data.exe
SHA256: 1058e4f356af5e2673bf44d2310f1901d305ae01d08aa530bc56c4dc2aecb04c

Malwarebytes Anti-Malware detects this file as Trojan.HostsHijack. As always, stay safe out there and make sure you are protected.
<urn:uuid:c38c115e-45b0-4302-b73d-81461a108f5f>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2016/09/hosts-file-hijacks
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00236.warc.gz
en
0.939482
1,017
2.53125
3
As the marketing of almost every advanced cybersecurity product will tell you, artificial intelligence is already being used in many products and services that secure computing infrastructure. But you probably haven't heard much about the need to secure the machine learning applications that are becoming increasingly widespread in the services you use day-to-day.

Whether we recognize it or not, AI applications are already shaping our consciousness. Machine learning-based recommendation mechanisms on platforms like YouTube, Facebook, TikTok, Netflix, Twitter, and Spotify are designed to keep users hooked to their platforms and engaged with content and ads. These systems are also vulnerable to abuse via attacks known as data poisoning.

Manipulation of these mechanisms is commonplace, and a plethora of services exist online to facilitate these actions. No technical skills are required to do this – simply get out your credit card and pay for likes, subscribes, followers, views, retweets, reviews, or whatever you need. Because the damage from these attacks remains tricky to quantify in dollars – and the costs are generally absorbed by users or society itself – most platforms only address the potential corruption of their models when forced to by lawmakers or regulators.

However, data poisoning attacks are possible against any model that is trained on untrusted data. In this article, we'll show how this works against fraud detection algorithms designed for an e-commerce site. If this sort of attack turns out to be easy, that's not the kind of thing online retailers can afford to ignore.

What is data poisoning?

A machine learning model is only as good as the quality and quantity of data used to train it. Training an accurate machine learning model often requires large amounts of data. To meet that need, developers may turn to potentially untrusted sources, which can open the door to data poisoning.

A data poisoning attack aims to modify a model's training set by inserting incorrectly labelled data with the goal of tricking it into making incorrect predictions. Successful attacks compromise the integrity of the model, generating consistent errors in the model's predictions. Once a model has been poisoned, recovering from the attack is so difficult that some developers may not even attempt the fix.

Data poisoning attacks have one of two goals:

- Denial-of-service attack (DoS), the goal of which is to decrease the performance of the model as a whole.
- Backdoor/Trojan attack, the goal of which is to decrease performance or force specific, incorrect predictions for an input or set of inputs selected by the attacker.

What a successful attack on a fraud detection model might look like

We decided to study data poisoning attacks against example scenarios similar to those that might be used in a fraud detection system on an e-commerce website. The trained models should be able to predict whether an order is legitimate (will be paid for) or fraudulent (will not be paid for) based on information in that order. Such models would be trained using the best data the retailer has available, which usually comes from orders previously placed on the site.

An attacker targeting this sort of model might want to degrade the performance of the fraud detection system as a whole (so it would be generically bad at spotting fraudulent activity) or launch a pinpoint attack that would enable the attacker to carry out fraudulent activity without being noticed.
To mount an attack against this system, an attacker can either inject new data points into, or modify labels on, existing data points in the training set. This can be done by posing as a user or multiple users and making orders. The attacker pays for some orders and doesn't pay for others. The goal is to diminish the predictive accuracy of the model the next time it's trained so fraud becomes much more difficult to detect. In our e-commerce case, label flipping can be achieved by paying for orders after a delay to change their status from fraudulent to legitimate. Labels can also be changed through interactions with customer support mechanisms.

With enough knowledge about a model and its training data, an attacker can generate data points optimized to degrade the accuracy of the model, either through a DoS attack or backdooring.

The art of generating data poison

For our experiment, we generated a small dataset to illustrate how an e-commerce fraud detection model works. With that data, we trained algorithms to classify the data points in that set. Linear regression and Support Vector Machine (SVM) models were chosen since these models are commonly used to perform these types of classification operations.

We used a gradient ascent approach to optimally generate one or more poisoned data points based on either a denial-of-service or backdooring attack strategy, and then studied what happened to the model's accuracy and decision boundaries after it was trained on new data that included the poisoned data points. Naturally, in order to achieve each of the attack goals, multiple poisoned data points were required.

Running successful poisoning attacks to commit e-commerce fraud

The results of the experiment we ran found that we needed to introduce far fewer poisoned data points to achieve the backdoor poisoning attack (21 for linear regression, 12 for SVM) than for the denial-of-service poisoning attack (100 for both).

The linear regression model was more susceptible to the denial-of-service attack than the SVM model. With the same number of poisoned data points, the accuracy of the linear regression model was reduced from 91.5% to 56%, while the accuracy of the SVM model was reduced from 95% to 81.5%. Note that a 50% accuracy in this scenario is the same as just flipping a coin.

The SVM model was more susceptible to the backdoor poisoning attack. Since SVM models have a higher capacity than linear regression models, their decision boundary can better fit anomalies in the training set and create "exceptions" in their predictions. On the other hand, it requires more poisoned data points to move the linear decision boundary of the linear regression model to fit these anomalies.

What we learned by testing poisoned data

This experiment found that poisoning attacks can be executed with some ease by attackers, as long as they have enough knowledge about machine learning and optimization techniques. Several publicly available libraries also exist to assist with the creation of poisoning attacks. In general, any machine learning model that sources third party data for training is vulnerable to similar attacks. Our fraud detection example simply illustrates the ease with which an attacker might use a poisoning attack for potential financial gain.

In our experimental setup we observed that complex models were more vulnerable to backdoor attacks while simple ones were more prone to DoS strategies, indicating that there is no silver bullet to protect against all attack techniques by design.
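As a rough illustration of the label-flipping idea (not the gradient-ascent optimal poisoning used in the study above), the following sketch, which assumes scikit-learn is available, trains an SVM on synthetic data, flips a fraction of the training labels, and compares test accuracy before and after:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for order data: legitimate (0) vs. fraudulent (1).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def train_and_score(labels):
    model = SVC(kernel="linear").fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# Poison the training set: flip the labels of 20% of the points, e.g.
# fraudulent orders the attacker paid for after a delay.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")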
Given the extraordinary difficulty of retraining models that are used in the real world and, in the case of our example scenario, the potential costs of automated fraud, additional layers of defense are needed to secure these applications. In order to have trustworthy AI, it needs to be secure. But the machine learning algorithms that are already in use present security challenges that machines cannot solve on their own. At least, not yet.
<urn:uuid:b80b8d23-f45d-48c2-b20f-58c0f5860768>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2021/05/24/fraud-detection-algorithms/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00236.warc.gz
en
0.950128
1,444
2.796875
3
The CDC's National Center for Health Statistics is testing how blockchain can be used to provide access to data in electronic health records.

Researchers with the Centers for Disease Control and Prevention's National Center for Health Statistics are looking to use distributed ledger technology to securely collect data from electronic health records (EHRs) to use in the hundreds of reports and surveys they publish each year tracking the state of public health.

"If you look at EHRs and medical data records as a supply chain management problem, blockchain is an ideal fit," Christopher Lynberg, lead research and development scientist at the CDC, told GCN. "We decided to create a single, isolated, cloud-only prototype using synthetic data to educate ourselves on how to use blockchain technology."

The proof of concept uses IBM's blockchain platform. The EHR data itself is stored off the chain in the IBM cloud, and references to the data from the distributed ledger are accessible only to select CDC employees. The synthetic environment allowed researchers to experiment. They used the blockchain to change ownership of EHR data, access the information, and authorize others with access to use the data for their NCHS reports.

"We used best practices to create off-the-chain data storage," with a cryptographic hash or pointer to reference where the data resides in the distributed ledger, IBM Blockchain Tech Lead David McElroy said. "The hashes provide a consistency between systems so researchers can have access to the information that they need."

Due to HIPAA regulations, no personal public health information is kept on the blockchain. The only data found in the EHRs is anonymized medical histories, treatments, and doctor visit summaries. However, the information used in this proof of concept is synthetic – not actual CDC data.

"We've set up this prototype as a separate entity from the currently operating data collection system," Askari Rizvi, chief of the CDC's technical services branch, said. "We see this project giving us the ability to provide another layer of consent, traceability, and reportability into the EHR collection process."

The blockchain test also helped CDC address questions related to identity and access management. "We get additional control through creating permissions and found the trackability and immutable record of transactions worked as expected," said Tom Savage, a blockchain researcher at the CDC.

After the test with the synthetic data is complete, a larger test of an EHR system could occur before turning the system into a real-time production environment. Besides helping CDC better manage and secure its data, the distributed ledger technology will also allow outside researchers to get access to CDC data.

"We want to engage with proofs of concept outside the CDC that will help research institutions find solutions for things like food traceability," Lynberg said. "In order to do this, we must first understand this technology and find ways to match the right blockchain platform to the right issue to solve supply management problems."

"The CDC is responsible for about 1,500 different science topics, and every one of them has a database behind them," Savage said. "In the future, it is not hard to imagine that there could be blockchain technology addressing every one of those scientific topics."

Editor's note: This article was changed June 21 to correct the spelling of Askari Rizvi's last name.
<urn:uuid:52219a98-c264-4670-a6b7-d143c69c0e69>
CC-MAIN-2022-40
https://gcn.com/emerging-tech/2018/06/harnessing-blockchain-for-electronic-health-records/298495/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00236.warc.gz
en
0.925098
702
2.640625
3
How To Secure Web Servers: Best Practices In Brief

These are the best web server security practices to follow to keep your web server secure:

- Use strong passwords and change them regularly.
- Enable HTTPS protocol (SSL/TLS) to encrypt the information.
- Keep all software up to date.

Web server security refers to the tools, technologies, and processes that enable information security (IS) on a Web server. There are three main types of Web server security: physical, network, and host. All network connections are protected by a firewall, a hardware or software component that prevents unauthorized access to or from a network.

Web Server Security encompasses two major areas:

- The security of the data on the web server
- The security of the services running on the web server

The data on a web server is protected by operating system security and access controls. Firewalls and anti-virus software protect the services running on a web server. The data on the server may be the most valuable asset and hence is the target of most attacks. Data protection is achieved by encrypting the information on the disk and using intrusion detection software to detect and respond to intrusion attempts.

When a user is surfing the internet, he's not just interested in getting to his destination quickly. He also wants to know that he can get there safely. This is why Web server security is so important. Information technology (IT) professionals can use several methods to protect a Web server from malicious attacks. One of the most basic methods is to use a firewall, a program that checks all Internet traffic coming into and going out of the Web server, blocking any traffic that seems suspicious or otherwise dangerous.

Importance of Web Server Security

Security is an integral part of your website, especially when it comes to your web server. Unsecured servers can be easily attacked, and their information can be stolen. That is why Web server security is critical.

Web servers store, process, and deliver Web pages and other online content. Web servers can also host and serve different data types, such as audio and video files, database records, and executable programs. To ensure the confidentiality, integrity, and availability of information, Web servers must be protected from unwanted access, improper use, modification, destruction, and disclosure.

Common Vulnerabilities in Web Server

Web servers are the backbone of the internet, but there are still a lot of vulnerabilities that plague these servers and affect their users. Common web server vulnerabilities include SQL Injection, Command Injection, DoS Attacks, and Cross-Site Scripting (XSS). Some of these vulnerabilities can be easily exploited, while others require additional details to be exploited. Let's understand these security risks in depth.

1. SQL Injection Attacks

SQL injection is a prevalent and dangerous attack used to take over a database. It occurs when an attacker enters malicious payloads into user inputs and the application does not sanitize them. Thus the name SQL injection: you are injecting an SQL statement (a malicious payload) into the database.

Why is this so dangerous? Imagine an application with a database table called 'users' that builds its login query directly from the form input, such as: SELECT * FROM users WHERE username = '<input>'; If the attacker enters ' OR '1'='1 in the username field, the query becomes SELECT * FROM users WHERE username = '' OR '1'='1'; which is true for every row and can return the entire table. A defensive sketch follows below.
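The standard defense against the injection shown above is to never concatenate user input into SQL text. Here is a minimal sketch using Python's built-in sqlite3 module (the table and values are invented for illustration); every database driver offers an equivalent placeholder mechanism:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x9f0...')")

attacker_input = "' OR '1'='1"

# VULNERABLE: the input is pasted straight into the SQL text.
query = f"SELECT * FROM users WHERE username = '{attacker_input}'"
print(conn.execute(query).fetchall())  # returns every row!

# SAFE: the driver passes the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE username = ?", (attacker_input,)
).fetchall()
print(rows)  # [] - no user is literally named "' OR '1'='1"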
2. DoS Attacks

A denial of service (DoS) attack attempts to make a server or network resource unavailable to its intended users. It floods the target with traffic until it becomes unreachable to legitimate users, causing a DoS condition. The attack is often made via malicious tools such as bots or viruses that consume the victim's network bandwidth or CPU resources. The attack may also be made using a computer or network that a virus or other malicious software has infected.

3. Cross-Site Scripting

Cross-site scripting (XSS) is a type of vulnerability used to attack a user's interaction with a website by injecting code that is executed by the user's browser. This code runs in the user's session, which is usually hijacked by sending the user's cookies to an attacker-controlled server. Attackers generally use XSS to perform actions on the user's behalf, such as gaining access to the user's session.

Web Server Security best practices

Web server security is an important topic, without which no business can survive these days. With the increase in cybercriminal activities, the importance of keeping your web server secure has grown more than ever. You must keep your web server secure from cybercriminals, who can do a lot of damage to your business. So let's discuss some of the most common best practices for web server security.

1. Use Strong Passwords

The first thing you need to do is make sure that you choose strong passwords. If you are using your default password, change it immediately. Likewise, if you are using a password that is easy to guess or publicly available, change it. Also, make sure you change your password regularly, at least once a quarter.

2. Use secure protocols and ciphers

Always use at least TLS v1.2 and AES ciphers to encrypt communication with web servers. Enable the HTTPS protocol (SSL/TLS) to encrypt the information that users send to your website, and make sure that the certificate you use is valid.

3. Keep Software Updated

The most important thing you can do to secure your web server is to keep all software up to date. This includes both the operating system and the web server software. If you manage your own web server, you should regularly check the manufacturer's website to apply security patches, especially if you are running a web server that is several years old.

5 Open Source Web Server Security Tools

The best way to secure your web server is to make sure that you are aware of all the possible dangers and prevent them from happening. The following are considered the best open source web server security tools to help you secure your server.

1. Snort: Snort is an open-source network intrusion prevention system that helps in real-time traffic analysis. The software uses a combination of protocol analysis and pattern matching to detect anomalies, misuse, and attacks in network traffic.

2. Nmap: Network Mapper, or Nmap, is an open-source utility for network exploration, security auditing, and network discovery. It was designed to rapidly scan large networks, although it works fine against single hosts.

3. OpenVAS: OpenVAS is a vulnerability scanner that can perform a complete vulnerability scan of the network infrastructure. OpenVAS is an international project that is used by many organizations all over the world. It is available for free and can be used with commercial products.

4. Metasploit: The Metasploit Project is a computer security project that provides information about security vulnerabilities and aids in penetration testing and IDS signature development. It is open source and freely available to the public.
5. Sqlmap: Sqlmap is an open-source automated security testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over database servers. It is often compared with sqlninja, another SQL injection tool, but its feature set and syntax differ.

Why Choose Astra for Server Security Software?

We at Astra help businesses and organizations secure and protect their web servers against cyber attacks. Our web server security solutions include:

- Website firewall
- Malware protection
- Botnet protection
- Vulnerability assessment
- Security audits

We are a global leader in safeguarding online businesses and organizations from cyber attacks. At Astra, we understand the sensitive nature of your web server. Our engineers possess the required expertise to test your web server security. We are a leading web server security firm with more than 10 years of experience in the field.

The web server is the central component of any website. It hosts the main website files and provides them to the users who visit the site. Keeping the web server secure is essential to prevent unauthorized access and data loss.

We hope you've found this blog post helpful in learning more about web server security. If you have any questions or want to learn more about web server security, don't hesitate to contact us anytime. We're happy to help!

1. What is web server security?

Web server security refers to the protective measures applied to safeguard information assets accessible from a web server.

2. Which Web server is most secure?

There is no definitive answer to this. Some options known for solid security include SiteGround, Apache, and Cloudflare.

3. Can a Web server be hacked?

Yes, web servers are vulnerable to network-level attacks and operating system attacks.

4. How do I secure my web server?

Run vulnerability scans to identify existing loopholes, install a firewall, keep patches updated, and remove unnecessary elements.
<urn:uuid:78aa147a-0616-4512-9308-0308a9b7caab>
CC-MAIN-2022-40
https://www.getastra.com/blog/security-audit/web-server-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00236.warc.gz
en
0.902744
1,886
2.921875
3
SSL and TLS are not monolithic encryption entities that you use or do not use to securely connect to email servers, websites, and other systems. SSL and TLS are evolving protocols with many nuances to how they may be configured. The "version" of the protocol and the ciphers used directly impact the level of security achievable through your connections.

Some people use the terms SSL and TLS interchangeably, but TLS (version 1.0 and beyond) is the successor of SSL (version 3.0). See SSL versus TLS – what is the difference? In 2014 we saw that SSL v3 was very weak and should not be used going forward by anyone; TLS v1.0 or higher must be used.

Among the many configuration nuances of TLS, the protocol versions supported (e.g., 1.0, 1.1, 1.2, and 1.3) and which "ciphers" are permitted significantly impact security. A "cipher" specifies the encryption algorithm, the secure hashing (message fingerprinting/authentication) algorithm to be used, and other related things such as how encryption keys are negotiated. Some ciphers that have long been used, such as RC4, have weakened over time and should never be used in secure environments. Other ciphers protect people who record a secure conversation from having it decrypted in the future if somehow the server's private keys are compromised (perfect forward secrecy).

Given the many choices of ciphers and TLS protocol versions, people are often at a loss as to what is specifically needed for HIPAA compliance. Simply "turning on TLS" without configuring it appropriately is likely to leave your transmission encryption non-compliant.
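What an appropriately configured endpoint looks like varies by server software, but the knobs are the same everywhere: a protocol-version floor and a cipher policy. As a minimal sketch, Python's standard ssl module can express a TLS 1.2-or-higher client policy in a few lines (the host name is just an example):

import socket
import ssl

context = ssl.create_default_context()            # secure defaults
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3/TLS 1.0/1.1

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # version() and cipher() report what was actually negotiated.
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)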
<urn:uuid:4e153d5f-7959-405a-bd82-f86f67d113b1>
CC-MAIN-2022-40
https://luxsci.com/blog/tag/ssl
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00236.warc.gz
en
0.937568
365
2.640625
3
What is a SOC?

SOC is the abbreviation for Security Operations Centre. As the word 'centre' implies, it is the physical location of an information security team. The people who work in the SOC constantly monitor and improve the security posture of an organisation or enterprise while preventing, detecting, analysing, and responding to cyber security incidents.

In a SOC, the security team uses a combination of technology solutions and a strong set of processes. This skilled team usually consists of security analysts, engineers, and managers who oversee the security operations. The team works closely with the incident response team, who ensure that security issues are acted upon quickly after discovery.

Not all organisations are able to set up a Security Operations Centre. This has several reasons but is often related to a lack of resources, a lack of in-house expertise, and the time and funding needed to set it up. For this reason, many organisations choose to outsource SOC services to an external trusted IT partner. In that case, we speak of a managed SOC service.

The technology used in a SOC

To set up effective security operations, you'll need the right tools. Without them, you'll be overwhelmed by a large number of security events. Below we selected the most important security solutions that will help you automate many processes, deal with these events, and make sure you find the significant threats.

SIEM - Security Information and Event Management

A SIEM can offer full visibility into activities within your network by collecting, parsing, and categorising machine data from a wide range of sources. It analyses this data as well to make sure that you can act on possible threats in time. The key to a successful SIEM deployment is its usability and the reports and events that it generates. In short, this comes down to correctly defining use cases – that is to say, situations or conditions that are considered abnormal or bad. Without these definitions the SIEM will either "over-report" on issues that are not relevant or potentially miss serious issues.

EDR - Endpoint Detection and Response

All devices that are connected to your network are vulnerable to a cyber attack. An EDR focuses on the detection of malicious activities and software installed on endpoints. It investigates the entire life-cycle of a threat, providing insights into what happened, how it got in, where it has been, what it's doing now, and how to stop it. By containing the threat at the endpoint, EDR solutions help eliminate it and prevent it from spreading.

NGFW - Next-Generation Firewall

A firewall monitors incoming and outgoing network traffic and automatically blocks traffic based on established security rules. With an NGFW you have complete visibility, control, and prevention at your network edge.

Automated application security

With application security, you automate the testing process across all software and provide the SOC security team with real-time feedback about vulnerabilities. Unprotected applications are vulnerable to a number of cyber attacks such as the OWASP Top 10, sophisticated SQL injections, malicious sources, and DDoS attacks. This makes them an easy entry point for hackers.

Your security analysts are searching for vulnerabilities and weaknesses in your network 24/7. But it is always smart to have a second pair of eyes go through your network looking for vulnerabilities and weaknesses.
The key to successful security assessments and data breach prevention is achieving and maintaining the right security level.
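To make the use-case idea from the SIEM section concrete, here is a toy sketch of one classic detection rule – flagging brute-force login attempts – in plain Python (the log format and threshold are invented for illustration; real SIEMs express such rules in their own correlation languages):

from collections import Counter

# Hypothetical pre-parsed auth events: (timestamp, source_ip, outcome).
events = [
    ("2021-06-01T10:00:01", "10.0.0.5", "failure"),
    ("2021-06-01T10:00:02", "10.0.0.5", "failure"),
    ("2021-06-01T10:00:03", "10.0.0.5", "failure"),
    ("2021-06-01T10:00:04", "10.0.0.5", "failure"),
    ("2021-06-01T10:00:05", "10.0.0.5", "failure"),
    ("2021-06-01T10:00:09", "10.0.0.9", "success"),
]

THRESHOLD = 5  # failures from one source before an alert is raised

failures = Counter(ip for _, ip, outcome in events if outcome == "failure")
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: possible brute force from {ip} ({count} failed logins)")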
<urn:uuid:e13e9645-f784-4349-94bb-a02a580b6f7b>
CC-MAIN-2022-40
https://www.nomios.com/resources/what-is-a-soc/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00236.warc.gz
en
0.94845
708
2.578125
3
Server load balancing is a technology that enables your websites and applications to keep up their performance despite a high volume of traffic or sudden spikes. It does so by distributing the traffic over various servers. When this process is carried out globally, it's called global server load balancing (GSLB). Let's dig deeper into the concept.

What Is Server Load Balancing?

The technique by which an appliance – the load balancer – splits up incoming traffic is called server load balancing. Placed in your data centers, server load balancers can be software-defined appliances or hardware machines.

Server load balancers sit between the client and the backend machines to divide the traffic your website receives, distributing it to different servers in the backend. This ensures each server performs up to its optimal capacity without getting exhausted. Furthermore, server load balancing ensures the scalability and availability of the application. This essentially means that, with no interruption to your client, the load balancer will flawlessly distribute traffic in the backend, ensuring a seamless experience. The client receives the requested content almost instantly, while the traffic distribution in the backend remains invisible to the client. Finally, to ensure each server is working efficiently, server load balancers also check the health of the servers to avoid directing traffic to an overwhelmed or dead server.

Here are two ways server load balancing works:

- Transport layer load balancing: a technique that works at the TCP/IP level or uses a DNS-based approach. Transport-level load balancing does not depend on the content of the application; in other words, it works regardless of what the application sends.
- Application-level load balancing: application-level load balancing inspects the content of the traffic (in addition to its volume) to decide how to divide it.

Let's now look at some of the benefits of the server load balancer.

Benefits of Server Load Balancers

Server load balancers enable your enterprise to serve thousands or more simultaneous requests with impressive response times. When there's a sudden spike in traffic, they can increase throughput, ensuring the client faces no downtime.

Load balancers uplift the availability of your servers by shifting the load evenly between the existing servers, maintaining uptime. Availability is essentially redundancy: if one server is exhausted, the others can take up the load, masking hardware failures and preventing downtime.

Server load balancing allows your web application to handle traffic flawlessly, even in the face of high volume. The process distributes and redirects incoming requests from clients so that the servers are always available for use without breaking or dropping connections. It also means that in any complex infrastructure, you can conduct maintenance at any point without disturbing the flow of incoming traffic. Load balancers will distribute the load to the other healthy servers, ensuring business continuity.

Array's Server Load Balancer

Server load balancers are a crucial component of a data center. They help your applications and systems accommodate more traffic without compromising performance. As a result, server load balancers increase the capacity of services and improve performance by reducing bottlenecks. They are also essential because, without proper redundancy in place at all times, your company could experience an outage if any single component fails.
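As a toy model of the two core behaviours described above – splitting traffic across servers and skipping unhealthy ones – here is a minimal round-robin balancer sketch in Python (the backend names and the way health is marked are invented placeholders; production balancers are far more sophisticated):

import itertools

class RoundRobinBalancer:
    """Cycle through backends, skipping any marked unhealthy."""

    def __init__(self, backends):
        self.health = {b: True for b in backends}
        self._cycle = itertools.cycle(backends)

    def mark_down(self, backend):
        # In practice this would be driven by periodic health checks.
        self.health[backend] = False

    def next_backend(self):
        for _ in range(len(self.health)):
            backend = next(self._cycle)
            if self.health[backend]:
                return backend
        raise RuntimeError("no healthy backends available")

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
lb.mark_down("app-2")  # pretend a health check failed

for request_id in range(5):
    print(f"request {request_id} -> {lb.next_backend()}")
# Requests alternate between app-1 and app-3; app-2 receives none.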
To learn more about server load balancers and their usage for your enterprise, reach out to our sales team. Learn more about what load balancers are, along with their types and benefits, here.
<urn:uuid:0b618f21-dcf9-4732-b1fc-a5ebd2bc4933>
CC-MAIN-2022-40
https://arraynetworks.com/tutorials/server-load-balancer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00436.warc.gz
en
0.89692
725
3.109375
3
The problem with Mission Critical emergency systems is that failures only occur when the systems are called upon to operate. Comprehensive electrical maintenance does not preclude a failure; however, it dramatically increases the odds that a problem can be detected and corrected in advance. The problem with electrical safety is that folks rarely realize the potential consequences until after an incident occurs. Don't allow your employees, contractors, or business to become another statistic.

Electrical shock and arc-flash are the two primary types of electrical safety hazard in your workplace. Electrical shock occurs when the human body becomes part of an energized electrical circuit. The degree of injury is directly related to the path the current takes through the body. As little as one milliamp is enough to cause death. Arc-flash is literally a fireball that occurs when an energized conductor is unintentionally connected to another energized conductor or to ground. The air within the sphere of the established arc becomes conductive, and the arc grows exponentially until such time as current is interrupted.

Question: What is an arc-flash hazard warning label?

Answer: A label containing all necessary information about the arc flash hazard faced at a specific location, affixed to each piece of electrical equipment with a removable cover or door providing access to current-carrying conductors when energized (see the figure).

Question: What information is contained on the arc-flash hazard warning label?

Answer: All pertinent information necessary so personnel understand the degree of hazard faced and the protective measures required.

- Hazard Class. The level of hazard exposure.
- Incident Energy. The amount of energy generated during an electrical arc impressed on a surface 18 inches (the length of the average forearm) away from the source of the arc, expressed in calories per centimeter squared (cal/cm²). This is the worst case, as if you were standing directly in front of the energized conductor. The farther you are from the source, the lower the cal/cm².
- PPE Required. The specific PPE required for the class of hazard faced.
- Voltage Hazard. The voltage level one would be exposed to at the point of access.
- Equipment Identification. The equipment the information refers to.
- Arc Flash Protection Boundary. The distance from the access point at which the incident energy from an arcing fault would equal 1.2 cal/cm² (equivalent to a mild sunburn).
- Limited Approach Boundary. The line that may not be crossed by unqualified persons unless accompanied by qualified persons, both wearing appropriate PPE.
- Restricted Approach Boundary. The boundary that only qualified persons are permitted to cross to approach exposed, energized conductors, wearing appropriate PPE and with a written and approved work plan.
- Prohibited Approach Boundary. The line beyond which approach is considered the same as actually contacting the exposed part. A risk assessment must be completed prior to crossing this line.

Question: How do manufacturers deal with this hazard when designing electrical gear?

Answer: Manufacturers are promoting a variety of design features, generally divided into active and passive solutions. Active protection seeks to prevent the arc from happening and to mitigate the event to a high degree. Examples include:

- Arc flash detection. Since an arc flash will continue until current is interrupted, early detection is a huge advantage.
Question: How do manufacturers deal with this hazard when designing electrical gear?

Answer: Manufacturers are promoting a variety of design features, generally divided into active and passive solutions. Active protection seeks to prevent the arc from happening and to mitigate the event to a high degree. Examples include:
- Arc-flash detection. Since an arc flash will continue until current is interrupted, early detection is a huge advantage. One such detector incorporates an unclad fiber-optic loop routed around the inside of the gear to detect a sudden change in the intensity of the ambient light over a very brief duration, coupled with current transformers to detect the current spike associated with an arc-flash incident. The output of this detector is designed to trip the upstream overcurrent device very quickly, thereby minimizing the duration of the event.
- Direct physical intervention. Some manufacturers couple rapid detection of an arc-flash event with the deliberate creation of a secondary fault, which is designed to safely deplete the energy from the original fault and trip the upstream overcurrent protective device. One such device is GE's Arc Vault. It is connected directly to the bus and, upon detection of an arc flash, strikes a plasma arc inside a robust container, sapping the energy from the uncontrolled arc and effectively extinguishing the destructive event.

Passive protection is mainly provided by what is being termed "fault tolerant" design, meaning that gear is designed to minimize damage and physically withstand an arc flash. Common mechanical design features include louvers in the top to relieve the tremendous pressure created by an arc-flash event, ducts or chutes to direct the arc up and out, and reinforced covers and doors. While this approach is desirable, it is reactive. Regardless of these features, an arc flash creates real direct and collateral damage that must be repaired. It is a bit like insurance: the building burned and we lost everything, but we got a check. Arc-flash incidents still result in damage and interruption of business operations. Examples of passive solutions are:
- Remote-controlled circuit breaker draw-out machines, designed primarily to protect the operator in case of a malfunction during service work.
- Service settings on controls and overcurrent protective devices, which temporarily set relays and trip devices to minimum levels during service and repair activities so that fault current is interrupted rapidly and the arc flash is extinguished.

Electrical safety is not an option. This topic is broad and complex and requires the allocation of significant resources to establish a comprehensive program. Four to five injuries or deaths occur each day in the US as a result of electrical shock or arc flash. You can debate the difference between standards and statutes; however, standards are the basis for statutes and codes. One industry study concludes the minimum cost of an arc-flash event is $750,000, and it is likely to be a lot higher when you consider the direct damage to the equipment and facility, the liability resulting from injury or death, and the business disruption. As a facility manager, you could be held personally liable in the event of an incident if you fail to enforce safe work practices for your employees and contractors. In a court of law or the court of public opinion, you'll fare much better having done the right thing. It's time to get serious about electrical safety in every facility. Protect your employees, your contractors, and your company.
Docker is a container-based open platform for developing, deploying, and running applications. Containerization enables you to create a container that includes your application, any binaries or libraries that your application depends on, and any configuration details. A Docker container therefore packages an application together with everything the application requires in order to run. The container runs under the control of Docker, which in turn runs on top of the operating system (which can be Windows 10, Windows Server 2016, or Linux). Compare this with similar applications running in virtual machines rather than containers: a virtual machine includes a guest operating system, whereas the corresponding container does not.
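As a minimal illustration of that packaging idea, the following sketch uses the Docker SDK for Python (the docker-py package) to run a throwaway container. It assumes the SDK is installed and a Docker daemon is reachable; the image name is just an example.

```python
# Minimal sketch using the Docker SDK for Python (docker-py).
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway container: the image bundles the runtime and libraries,
# so the application travels with everything it needs.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,  # clean up the container when the process exits
)
print(output.decode())
```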
In accounting, a list of current assets is a list of all assets that are owned by a company and have not yet been utilized, or are unlikely to be used, during the course of the year. In tax planning, a list of current assets is used to calculate the gross value of the company's assets and liabilities at the end of the year. The types of assets a company holds matter in determining the deductions or benefits it will be entitled to under the laws of various countries.

An important consideration is the type of investment used to acquire the assets of the company. A company can use cash, property, machinery, and intangible assets. The most common and useful asset types are property, equipment, inventory, and goodwill. The following are some general considerations that must be made when creating a list of current assets.

Current assets are an important asset category, but a current assets list does not account for the company's current liabilities, which must be tracked separately. A company's cash balance accounts for the largest amount of cash in the business at the end of the year and is therefore a significant determinant of current assets. It is important to determine the percentage of cash available for investment in the business; this percentage is usually called net worth. Other items on a list of current assets include accounts receivable, prepaid expenses, current investments, and current tangible assets.

The amount of cash equivalents held by the business also affects the amount of the company's current assets. This category includes money market funds, bank accounts, investment securities, government bonds, and stocks.

The next category, tangible assets, includes inventory, vehicles, buildings, and other non-intangible assets that have definite uses. Examples of tangible assets include furniture, machinery, tools, supplies, and raw materials. Inventory accounts for the value of the goods and services that the company purchases in its regular operations. Assets such as office supplies, machinery, and computers may be depreciated over time. Buildings and land account for the cost of new construction and repair expenses that are not amortized over time.

Intangible assets include licenses and patents. These cover the rights to manufacture, produce, sell, or distribute products or information, as well as rights granted to third parties for the use of particular forms of technology.

The cost of current assets should be determined based on fair value, using one of several methods: selling price, purchase price, market value, or the cost method. When creating a list of current assets, the following steps should be taken: determine the amount and types of assets to include, list the current and long-term liabilities, calculate a discount rate, determine an appropriate level of net worth for the business, and prepare a statement of earnings and/or income.

Building and maintaining a current assets list is one way to keep financial records consistent. The first step in creating the list is to identify the amount and types of assets to include. Allocating a dollar amount is based upon an estimate of the expected value of these assets. The inventory value will need to be determined, with the most recent inventory value being used to set the inventory level and the discount rate.
The next step in preparing a current assets list is to list the current and long-term liabilities and the current and future debt requirements for each of the assets being considered. This includes consideration of the value of any fixed assets and liabilities. Other items to list include estimated future sales; expenses for production and marketing; possible capital investments; future purchases; and returns of interest, tax, and lease obligations.

The current and long-term debts of the company should then be determined. This includes the current balance sheet and the value of any outstanding commercial mortgages and promissory notes. The present value of the business's cash flows is also considered in order to determine the present value of future cash flows.

The final step is to list the total assets and total liabilities in descending order of current value. The current value of the assets and liabilities must be compared with the net worth of the business to determine which asset or liability belongs on the current assets list. A total assets list provides a list of assets and liabilities to be accounted for in the income statement, and current assets lists are required for tax purposes.
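To make the totalling steps concrete, here is a small Python sketch that sums a current assets list, compares it against current liabilities, and prints the items in descending order of value. All figures are hypothetical.

```python
# Hypothetical figures: totalling a current assets list and comparing it
# with current liabilities (the difference is working capital).
current_assets = {
    "cash_and_equivalents":   120_000,
    "accounts_receivable":     45_000,
    "inventory":               60_000,
    "prepaid_expenses":         5_000,
    "short_term_investments":  30_000,
}
current_liabilities = {
    "accounts_payable":        40_000,
    "accrued_expenses":        15_000,
    "short_term_debt":         25_000,
}

total_assets = sum(current_assets.values())
total_liabilities = sum(current_liabilities.values())
working_capital = total_assets - total_liabilities

# List items in descending order of current value, as described above.
for name, value in sorted(current_assets.items(), key=lambda kv: -kv[1]):
    print(f"{name:<24} {value:>10,}")
print(f"{'TOTAL CURRENT ASSETS':<24} {total_assets:>10,}")
print(f"{'TOTAL CURRENT LIABILITIES':<24} {total_liabilities:>8,}")
print(f"Working capital: {working_capital:,}")
```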
Protocol Basics, Formatting for Transmission, Ethernet

Many companies claim to be the power behind the Internet. Even ex-Vice Presidents claim to have masterminded this phenomenon that is in so many people's lives and is fueling the entire optical networking industry. But the main power source of the Internet is not down to Al Gore or even Al's Optical Inc. The Internet is primarily powered by the Internet Protocol (IP).

IP is the protocol that controls all traffic on the Internet – from that last-minute birthday email to that multimedia package from naughtybits.com. All of this data is transferred according to the rules laid down by IP. You will no doubt be reading this page on a PC in which the "TCP/IP Protocol Suite" is installed. It is this set of protocols that has enabled you to request and then receive this Web page on your screen. Included in TCP/IP are the HyperText Transfer Protocol (HTTP) for Web browsing, the File Transfer Protocol (FTP), the Transmission Control Protocol (TCP), and the Internet Protocol itself, along with many others.

Your request for this page will have been bundled down through the layers of the OSI (Open System Interconnection) model within your PC until it reached Layer 4, the transport layer. At the transport layer, the TCP protocol will have encapsulated the Web request with a TCP "header" – around 20 bytes of extra information added to your data in order to track it and make sure that it does indeed arrive where it should.

Layer 3, the network layer, is where IP kicks in. It provides its own header, again around 20 bytes in length, which can be thought of as an envelope for the TCP header and the data "payload" that helps it reach its destination. This forms what is known as an IP "packet," which may be a maximum of 64 kilobytes in length.

The IP header contains a variety of information to help the packet along its journey. Identifiers give a unique reference to the packet and help it be pieced together, in the correct order, with any other related packets at the receiving end. There is also a value known as the Time To Live (TTL) field: a number that is decreased by one every time the packet passes through an IP router in the network, until it reaches zero and the packet is dumped. The idea is that the packet will reach its destination before the TTL field reaches zero; but if the packet were to get caught in a never-ending loop, it would not be allowed to stay there forever and degrade network performance.

A variety of other useful fields are also contained within the IP header, but perhaps the most important is the addressing information. Like a well-written envelope, the IP header contains the address of its destination as well as the address of its sender. The destination address is naturally crucial in determining the packet's route through the network. Before describing how this routing takes place, it is worth taking a look at the format of IP addresses.

Each device attached to the Internet has an associated IP address, which is given in a so-called "dotted decimal" notation (for example: 188.8.131.52). In data terms, this address has a length of four bytes (32 bits), with each byte being a binary representation of one term of the address. It is worth noting that this address can be changed at will and is not hard-wired into machines. However, problems will obviously arise should two devices somehow have the same address.
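To see the four-byte format concretely, the following Python sketch (standard library only) converts a dotted-decimal address to its raw bytes and to a single 32-bit number, then simulates the TTL countdown described above. The starting TTL is an arbitrary illustration value.

```python
# A hedged sketch of the four-byte IPv4 address format and TTL handling.
import socket
import struct

addr = "188.8.131.52"                    # dotted-decimal, as written on the "envelope"
packed = socket.inet_aton(addr)          # the same address as 4 raw bytes
value = struct.unpack("!I", packed)[0]   # and as one 32-bit unsigned integer

print(list(packed))      # [188, 8, 131, 52] -- one byte per term of the address
print(f"{value:#010x}")  # the whole address as a single 32-bit number

# Toy TTL handling: each router hop decrements TTL; at zero the packet is dumped.
ttl = 3
hop = 0
while ttl > 0:
    hop += 1
    ttl -= 1
    print(f"hop {hop}: TTL now {ttl}")
print("TTL reached zero -- packet dumped, so a routing loop cannot last forever")
```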
The number of bytes in each address determines how many different IP addresses are possible. When the Internet was young, four-byte addresses seemed large enough to account for all the computers that might ever be connected. But with the Internet continuing to metastasize, four-byte addresses are beginning to seem a bit limiting. With everything from battleships to Barcaloungers™ coming to be hooked up to the ’Net, there will be ever-increasing IP addressing problems. Therefore, there is a new version of IP in the pipeline (version 6 – IPv6), which includes 16-byte addresses as one of its improvements.

So your PC has cleverly bundled up your Web request into a TCP packet and then an IP packet, resplendent with its addressing information. The packet is now ready to squirt out of your PC to travel around your office network in search of its ultimate destination. It gets onto the cable connected to your PC via, usually, the Ethernet protocol, which provides Layers 2 and 1 of the OSI model via an Ethernet card in your PC. The IP packet is then routed through the network according to its destination address by so-called "IP routers." Each router it encounters strips off the Ethernet information now attached to it and looks at the IP destination address. It then consults its own table of destinations to determine out of which port to pass on your request. Should the request be destined for somewhere outside of your office building, the router simply forwards it towards the external gateway from your building to the outside world. Ethernet again packages the packet up before it is sent to the next router on its journey.

There are many error-detecting mechanisms built into this TCP/IP system, but they serve only to guarantee that your information will arrive in the correct format. They give no guarantee as to how long it will take to get it right, so TCP/IP is often referred to as a "best effort" service with no quality of service (QOS) guarantees. A final word on routers: it should be pointed out that in American English "router" sounds like "chowder," whereas in Britain, curiously enough, it rhymes with "hooter."

Key points:
- TCP/IP is the protocol suite in computers supporting all Internet-related traffic
- Transmission Control Protocol (TCP) adds around 20 bytes of header information to data to ensure its safe delivery
- IP adds a further 20 bytes to provide other functions, most notably source and destination addresses for routing purposes
- Data + TCP header + IP header = IP packet
- IP routers direct packets through the network based upon four-byte IP addresses
- IP provides a "best effort" service
What is a zero trust network?

Traditionally, networks were built for communication and collaboration. Because of the increasing cyber threat and the ability of criminals to exploit this openness, it has become necessary to restrict networks to only the communications required for business to occur. This has led to the idea of "zero trust" networks.

New networks and cloud networks (especially regulated clouds like the Azure and AWS government clouds) are built on the idea of zero trust, meaning that by default no communications are allowed; every communication must be allowed by exception. If you think about it, why should PC or laptop 1 have unfettered access to PC or laptop 2 by default? Unless there is truly a need for access, the only use it has is for a cybercriminal (or pen tester) to move laterally around the network. Zero trust networks prevent that sort of lateral communication by default. They also do not allow broadcasts, which means the administrator has to define all communication paths. This makes it difficult for a bad actor to perform enumeration or reconnaissance.

Are there any downsides?

One issue that can arise with the zero trust model comes from compliance. Many tools required for meeting compliance mandates need the ability to scan or enumerate the network. In most cases this can be allowed, but of course it then becomes an attack vector that needs to be dealt with and mitigated in some way. There is more overhead to managing a zero trust network, but you can look at it this way: it is harder to unlock a door and turn off an alarm than to walk through an open door. It's a worthwhile venture.

Zero trust networks are the future of cybersecurity, so if you are an IT professional, a compliance officer, or otherwise involved in the day-to-day operation of networks, it would help you and your organization to start accepting and embracing that future now.
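As a toy illustration of the default-deny idea, here is a hedged Python sketch in which every flow is refused unless an administrator has allowed it by exception. The host names and flows are hypothetical.

```python
# Toy sketch of default-deny: nothing is reachable unless an administrator
# has allowed it by exception. Names and tuples are hypothetical.
ALLOWED_FLOWS = {
    ("app-server", "db-server", 5432),   # the app may query the database
    ("admin-jump", "app-server", 22),    # admins may SSH to the app host
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Zero trust default: deny, unless the flow was explicitly allowed."""
    return (src, dst, port) in ALLOWED_FLOWS

print(is_allowed("app-server", "db-server", 5432))  # True  - allowed by exception
print(is_allowed("laptop-1", "laptop-2", 445))      # False - lateral move blocked
```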
There are two primary approaches to analyzing the security of web applications: dynamic program analysis (dynamic application security testing – DAST), also known as black-box testing, and static code analysis (static application security testing – SAST), also known as white-box testing. Both approaches have their advantages and disadvantages, and one cannot be seen as a replacement for the other. Therefore, we recommend that you employ both techniques to fortify your web applications. However, if you have limited resources and cannot afford both, dynamic program analysis is often perceived as the better solution.

Static Code Analysis

Static code analyzers scan the source code of the web application and are used as part of the code review process. They do not take into account the operating environment, the web server, or the database content. On the other hand, static analysis tools have full access to the code, so they cover hidden or unlinked code fragments (for example, new code that is being developed but not yet used) and they can pinpoint the exact line of code. They also cover all possible execution paths at once.

For example, if you used a static code analysis solution to analyze code for SQL injections, the tool would scan the source code for all functions that query the database. It would then analyze whether these functions access the database safely. If not, it would provide the exact location of the error – for example, the place where the developer used user input directly in the query. On the other hand, if you wanted to test your web application for weak passwords, a static analyzer would be useless. It would also be unable to cover security issues caused, for example, by an Apache misconfiguration, or issues related to the data flow.

Dynamic Program Analysis

In the case of dynamic analysis, the tool does not need access to the source code at all. A DAST tool simulates an end-user and has access to exactly the same resources as the end-user. It analyzes runtime web application security using HTTP requests, links, forms, and so on. This means that a DAST tool is completely independent of the programming languages that your applications use and only needs to support client-side technologies. However, it can only analyze the parts that are accessible to the user.

For example, when you scan a web application for SQL injections using dynamic analysis, the tool behaves like an automated penetration tester. It enters data in web forms and creates malformed requests to try to exploit your application. When it succeeds, it shows you how it was done. The downside is that it cannot show you the exact line of code that caused the security vulnerability, so a developer may need more time to find the error. If a dynamic analysis tool is not built using efficient technologies, it may also take quite some time to work, because it needs to analyze multiple execution paths. And it will not help you in any way with coding standards or code quality in general.
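To ground the SQL injection example used in both sections, here is a hedged Python sketch using the standard library's sqlite3 module. It shows the concatenation pattern a SAST tool would flag in source code, and a DAST tool would discover by firing payloads at a form, alongside the parameterized fix.

```python
# A hedged sketch of the SQL injection pattern both approaches hunt for.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # the kind of payload a DAST scanner submits

# VULNERABLE: user input concatenated straight into the query.
# A SAST tool pinpoints this exact line in the source code.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("injected query returned:", rows)        # every row leaks

# SAFE: a parameterized query keeps the data out of the SQL grammar.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)   # no rows match
```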
The Nightmare of False Positives

Leading-edge dynamic analysis tools have managed to find ways to greatly reduce their false positives. Unfortunately, static code analysis tools still have this problem. Some of the leading SAST tools state that their false positive rate is around 5 percent – a very high rate compared to the best DAST tools. On the surface, false positives may not seem like a major headache. However, they introduce two big issues.

First of all, the analysis of false positives requires much more development team time than the analysis of real errors. This is because the developer must find ways to prove that the code is secure and must take responsibility for deciding against the tool. Some developers even resort to changing their code just for the sake of avoiding false positives; after such changes, the code may become less efficient or less readable.

The second major issue is that a tool that reports a lot of false positives causes its users to lose trust in it. With time, users start ignoring vulnerability warnings or simply stop using the faulty scanner completely.

Analysis in the Software Development Lifecycle

In the past, static analyzers were praised for being made to be used as part of the software development lifecycle (SDLC). As such, they could find security flaws earlier than dynamic analysis tools. However, leading-edge dynamic analyzers can now be used at exactly the same stage, because they can be easily integrated with CI/CD tools. If you want thorough security analysis, your DevOps team can design CI/CD flows so that applicable source code (in supported languages) is first scanned using a static analyzer, then compiled, then scanned using a dynamic analyzer (for all languages). Such an approach guarantees top software quality. The use of static analysis in the SDLC also makes it possible to discover errors in pieces of code that are not yet linked to the main application.

Who Is the Tool For?

Another aspect to consider when choosing between a SAST scanner and a DAST scanner is: who do you want to use the tool? Static analyzers are tools primarily for developers and QA/testers. Developers can check whether their code is safe, and QA/testers can double-check it. In both cases, this can be automated by including it in the SDLC. Manual dynamic analyzers are usually tools for dedicated penetration testers: they help them perform black-box attacks on applications. However, business-class DAST vulnerability scanners are no longer tools only for developers or penetration testers. They are designed for automation and integration and therefore are not used manually. Such tools have a great advantage: they reduce the security team's workload.

If You Can't Have Both

If you don't want to invest in SAST tools for all your languages and you decide to go for a DAST tool only, you have another option to consider. Business-class dynamic scanners employ additional mechanisms that are not exactly static code analysis but bring you closer to it. This technology is often called interactive application security testing (IAST), or grey-box testing. For example, Acunetix uses AcuSensor technology, which intercepts calls to the source code or bytecode (depending on the language). This gives it access to hidden functions and provides more details about the location of the vulnerability in the source code.
The first article in this series provided a clear definition of what Artificial Intelligence (AI) is and an overview of its use in cybersecurity. This article will explain the scientific origins of AI and unpack the various types and subspecialties of AI systems and their respective functions. Not all AI systems are created equal!

AI draws from several scientific fields

Believe it or not, the field of AI is actually a derivative of other fields, namely the following:
- Logic and Mathematics
- Computational Algorithms
- Psychology and the Cognitive Sciences

For example, it is the logical structure of mathematical algorithms that is used to "learn" from previous examples and to apply those rules when making future decisions. AI also borrows from the neurosciences: AI tools try to imitate, or mimic, the neuronal activity of the brain, especially the way that neurons either fire or do not fire.

AI systems vary in level of intelligence and functionality

Artificial Intelligence (AI) can be divided into two main classifications, which are as follows:

Type 1: This consists of the following:
- Weak, or "narrow," Artificial Intelligence: In this scenario, the AI system is focused on accomplishing just one specific task or goal. This can be considered a very primitive form of AI, as it can only do very basic tasks, such as playing against a competitor in a basic chess game. In this situation, all the rules and probable scenarios must be fed manually into the AI system before it can engage a true competitor. In other words, it cannot learn on its own; every time a new game is played, all the rules and outcomes must be fed into the AI system again.
- Strong Artificial Intelligence: This is the kind of AI most typically used in cybersecurity today. Based upon the data and intelligence that are fed into it, this kind of system can literally learn on its own from past observations and use that knowledge to make informed decisions for the future. In other words, every effort is made to emulate human thought and decision-making processes. The key differentiator here is that almost no human intervention is needed with this type of AI system; the only time human input is needed is when new information and data feeds must be inserted.

Type 2: These kinds of AI systems are classified by the functionalities they possess, or are anticipated to possess, in the future. These systems include:
- Reactive Machines: These are considered the most rudimentary or basic form of AI system – in fact, more like Type 1, as described above. This type of AI system has a very limited memory and can only store very limited amounts of information and data. Such systems cannot make future decisions or predictions on their own; some degree of human intervention is required.
- Limited Memory: These are the AI systems that can "learn" from past examples and use those lessons to make decisions about future events. In fact, this is the classification most representative of the AI systems that exist today and that are currently used in the cybersecurity industry.
- Theory of Mind: The main purpose of this kind of AI system is to embody "… emotion, belief, thoughts, expectations and be able to interact socially." (Source 1). Although Theory of Mind AI has a very long way to go until it can come even remotely close to achieving the above, we are already seeing some very basic forms of it starting to take place.
The best examples of this are virtual personal assistants such as Alexa, Siri, and Cortana. These kinds of applications try to learn our thought and decision-making profiles so that recommendations and directions can be provided based upon previous actions.
- Self-Awareness: This is the ultimate goal of any AI system. Such a system would be very much like a living being and would even act like one. The best, most illustrative example of this is the character Data from the TV series Star Trek: The Next Generation.

Subspecialties of AI: Delving deeper

There are also many subspecialties within AI, which are as follows:

Data Science: Today's cybersecurity industry is experiencing a huge influx of information and data. It can take an entire IT security staff days or even weeks to comb through all of it – an almost impossible task. In this respect, AI can be viewed as the ultimate "savior" when it comes to analyzing these huge datasets, also known as "Big Data." Within a matter of seconds, such an AI system can discover hard-to-detect and unseen trends and even recommend how they should be applied when modeling the cyber threat landscape.

Machine Learning (ML): This subspecialty can be considered an extension of the above, with the main difference being that sophisticated mathematical algorithms are used to help the IT security team classify, categorize, and even predict extrapolations from any given dataset. These mathematical algorithms are coded in a specific programming language (such as Python) to create and build an entire machine learning system. This is the one area of AI that can be used to help filter through false positives and determine which are real or have enough merit to warrant further action.

Neural Networks (NN): This is the area of AI that tries to mimic the central nervous system and the neurological functions of the human brain. The neuron forms the basis of both, and it can be defined as follows: "The neuron is the basic working unit of the brain, a specialized cell designed to transmit information to other nerve cells, muscle, or gland cells. Neurons are cells within the nervous system that transmit information to other nerve cells, muscle, or gland cells. Most neurons have a cell body, an axon, and dendrites." (Source 3). In fact, it is estimated that the human brain has on the order of 100 million to 100 billion neurons, all connected to one another. The primary goal of a neural network system is to map these interactions and hard-code them so they can be used for predictive behaviors, such as modeling the cyber threat landscape.
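As a toy illustration of that fire-or-don't-fire behavior, here is a minimal Python sketch of a single artificial neuron; the input signals, weights, and thresholds are made-up numbers chosen purely for demonstration.

```python
# Minimal sketch of the fire/don't-fire behavior an artificial neuron
# borrows from biology: weighted inputs compared against a threshold.
def neuron(inputs, weights, threshold):
    """Fires (returns 1) only if the weighted sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Hypothetical numbers: three input signals with learned weights.
signals = [1.0, 0.0, 1.0]
weights = [0.6, 0.9, 0.4]

print(neuron(signals, weights, threshold=0.9))  # 1 -> the unit "fires"
print(neuron(signals, weights, threshold=1.5))  # 0 -> it stays silent
```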
Image Processing: Without a doubt, every waking moment of our lives is spent seeing objects. This visual information is captured by the eye and then transmitted to the brain via the optic nerve (the bundle of nerve fibers at the back of the eye), making vision possible. Image processing in AI (also known as "computer vision") attempts to mimic visual processing in human beings. One of the best examples of how this application is used is facial recognition, a biometric technology often used in conjunction with CCTV to confirm the identity of a particular individual. Incorporating computer vision into such a system greatly enhances its degree of accuracy and reliability.

Robotics and Embedded Systems: Simply put, this is where AI tools are created and deployed in robots. This kind of technology is most widely used in manufacturing, where it can perform very mundane and routine tasks on an automated basis with a high level of accuracy and, of course, at faster speeds.

Up Next: The role of AI in cybersecurity today

The use of AI in cybersecurity is ever evolving, with future applications not yet tapped into. The next article in this series will provide an overview of the many ways AI is being used in cybersecurity today.

Ravi Das is a Cybersecurity Consultant and Business Development Specialist. He also does cybersecurity consulting through his private practice, RaviDas Tech, Inc. He is also studying for his Certificate in Cybersecurity through ISC2.
Cyberbullying is on the Rise

Bullies have moved off the playground and onto the internet. As technology usage among teens continues to grow, malicious teens have moved their bullying activities to the web. As of 2013:
- 93% of teens are online
- More than 78% of U.S. teens own a smartphone or their own computer
- Of those teens, 80% are active on one or more social media sites
- 88% of social media-using teens have witnessed other people be mean or cruel on social network sites
- 15% of social media-using teens say they have been the target of online meanness
- 13% have felt nervous about going to school the next day

Where can this type of harassment lead?
- 25% of social media-using teens have had an experience on a social network site that resulted in a face-to-face argument or confrontation with someone
- 90% of social media-using teens who have witnessed online cruelty say they have ignored mean behavior on social media, and more than a third (35%) have done this frequently
- Two-thirds of teens who have witnessed online cruelty have also witnessed others joining in – and 21% say they have joined in the harassment themselves

Even though internet, cell phone, and social media access will continue to grow among teens, bullying does not have to follow that increasing trend. Cyberbullying and its results can be monitored and prevented.

How to Detect and Prevent Cyberbullying in Schools: Proactive and Reactive Solutions

There are two key approaches to preventing cyberbullying: one is proactive; the other is reactive. By implementing both, your organization will have a dual-layer approach to ensure that cyberbullying is stopped in its tracks. Both are system-level applications that schools, districts, and administrators should consider implementing.

Proactive – Social Media Monitoring and Keyword Scanning

Social media monitoring helps prevent inappropriate social media posts to student, teacher, staff, or school social media accounts. With social media monitoring, you can set up alerts that notify administrators about keywords, questions, or personal information. This monitoring allows you to identify potentially harmful posts and address them as needed, giving you the ability to prevent cyberbullying and develop strategies for dealing with harmful situations.

Email keyword scanning helps you curb cyberbullying, sexual harassment, and other inappropriate communication by flagging associated keywords. This enables you to identify and scan for keywords such as gun, kill, maim, destroy, shoot, plot, and other illicit or harmful terms. Flagged messages can then be reviewed so that a potentially harmful situation is defused before it creates an incident, disaster, or tragedy.
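As a toy sketch of what keyword flagging looks like in principle – not a description of any vendor's actual implementation – the following Python snippet checks messages against a watch list and flags hits for human review.

```python
# Toy keyword scanner: flag messages containing watch-list terms so a
# person can review them. Terms and messages are hypothetical.
WATCH_LIST = {"gun", "kill", "maim", "destroy", "shoot", "plot"}

def flag_message(text: str) -> set[str]:
    """Return the watch-list terms found in a message (case-insensitive)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return WATCH_LIST & words

for msg in ["See you at practice tonight",
            "I'm going to destroy you in the match"]:
    hits = flag_message(msg)
    if hits:
        # Real systems route flagged messages to a human reviewer,
        # since context decides whether a hit is a genuine threat.
        print(f"FLAGGED {sorted(hits)}: {msg}")
```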
Reactive – Social Media and Mobile Device Archiving

Social media archiving provides the ability to archive social media communications within your network. When students post to social media, you need the ability to archive those communications in a central database. If anything inappropriate is posted by students, teachers, or staff, you need to be able to find out about it.

Mobile device archiving provides the ability to archive mobile communications on any device. By archiving this data, you have a record of texts and phone call logs that gives you a full picture of what was said and to whom, protecting you from potential litigation.

Start protecting your school and students today! GWAVA provides a complete solution to protect your students, school, and staff from potentially harmful situations by providing detection and prevention tools, including keyword scanning, social media monitoring and archiving, and mobile archiving. For more information about how to prevent cyberbullying and how to protect your students and school with GWAVA, visit www.GWAVA.com.
In the IT world, Kubernetes is everywhere. Supported by more than 43,000 contributors, this open-source system has become the default way to deploy modern applications in the cloud. Why? Simply put, Kubernetes makes life much easier for developers, speeding up the rollout of apps and adding value to the underlying platform for end-users.

Kubernetes, also known as k8s, has recently turned heads in the telecoms industry as operators look to make the transition from walled gardens to open platforms. The system is set to be an intrinsic part of the flexible cloud-native architecture required to bring out the best in standalone 5G networks. A true cloud-native network offers a myriad of benefits, from rapid deployment of services and automation to much greater resiliency and efficiency. As a case in point, Google deploys more than two billion containers a week using its internal platform Borg, the predecessor to Kubernetes.

This kind of flexibility and scalability is fundamental to the telco cloud vision – the idea that 5G networks will become a versatile platform for a wide range of services and apps developed by third parties. Because it enables telcos to deploy apps in portable containers, Kubernetes can, for example, bring services to the edge of the network, closer to end-users, reducing latency.

So, what does Kubernetes actually do? A modern application is typically built out of different microservices, each handling a specific function, such as order management, reporting, or payments. Once the microservices are packaged in containers, Kubernetes automates their deployment, scaling, and management. Kubernetes can also support auto-healing in the case of a failure in a cluster of microservices. Crucially, Kubernetes abstracts away some of the underlying internal networking complexities, which makes life much easier for the typical app developer. In essence, Kubernetes enables apps to be made accessible to the outside world in a simple and straightforward way. A Kubernetes ingress controller is the conduit through which an end-user interacts with a web application via the HTTP protocol. The ingress controller also provides traffic management, ensuring that user requests get routed to the right microservice within the Kubernetes cluster.

All these benefits can be harnessed by 5G networks' service-based architecture, but only if telcos adopt a new mindset. Although Kubernetes delivers many benefits, the system can seem somewhat orthogonal to what telecoms engineers are accustomed to. Because they live and breathe networking, telcos are comfortable manually configuring the IP addresses of networking equipment and setting their own rules for routing, load balancing, and other parameters that Kubernetes is designed to abstract. However, this kind of manual configuration would undermine the whole point of Kubernetes: it would prevent the rapid deployment and automated scaling that are hallmarks of the major cloud platforms, such as AWS and Azure. The first network functions virtualization (NFV) solutions deployed by telcos in 4G networks have tended to rely on manual scripting and, as a result, lack the dynamism and automation of a modern IT architecture. With the rollout of 5G core networks, telcos have a blank slate. Unfortunately, they can't simply transplant a standard Kubernetes system.
The vast majority of telcos are deploying 5G networks alongside 4G networks, so they will need a Kubernetes ingress that can also support standard telco protocols – SCTP, Diameter, and GTP – as well as HTTP. That is because the Kubernetes ingress that front-ends the 5G services won't interface directly with a user via HTTP, but will instead connect to other 4G and 5G core elements. In some cases, an interworking function is required to translate HTTP/2 messages into Diameter messages and vice versa.

Another complication is how to support internal communication within the application cluster. In a standard Kubernetes deployment, a service mesh is typically used to securely manage and track the communication between different microservices. While such service meshes support IT-centric tracing capabilities, telcos are discovering that the associated functionality is not optimally adapted to their requirements.

The third issue is how the 5G functions within the Kubernetes cluster will talk to the outside world. Opening up the dynamic internal IP addresses assigned by Kubernetes is not a good idea: the addresses change over time, and giving this level of visibility to the outside world would constitute a major security risk. Telcos want full control over the assignment of IP addresses to certain 5G functions, and these addresses should be independent of the IP addresses used by the underlying containers that make up a given 5G function. A smart Kubernetes egress function is required to achieve this.

One option would be to dispense with Kubernetes and just deploy a 5G function in a container with a static IP address that is accessible to the outside world. But cutting corners in this way carries a big price in scalability and flexibility: you would not, for example, be able to deploy 5G functions anywhere in the network simply by pressing a button. If you want that level of automation – and that is the future – you can't cut corners with Kubernetes.

F5 has long straddled the telecoms and IT worlds, and we have figured out how to help telcos harness the extensive benefits of Kubernetes. This includes our BIG-IP SPK solution, which enables a Kubernetes ingress to support telco protocols as well as HTTP. It also uses network address translation (NAT) and routing to enable a Kubernetes egress to present a static, predefined IP address to the outside world without affecting the internal dynamism of the cluster: regardless of what happens inside the cluster, external entities always see the same IP address. Further, our Aspen Mesh service mesh supports "telco-grade" observability and tracing. It can give telcos full visibility and tracing for the traffic flowing between the 5G microservices, thereby bolstering security.

If approached correctly, Kubernetes can be truly transformative for telcos: it can unlock the many benefits of a cloud-native architecture and make it much easier for operators to interact with the outside world. Once it is fully tailored to a telecoms environment, this open-source system will surely be an integral part of the 5G future.

This article is the first in a two-part series. Next time, we'll explore how F5 technology can solve the operational headache of managing IT and telco workloads across large numbers of Kubernetes clusters running on different platforms.
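As a closing footnote to the egress discussion above, here is a toy Python sketch of the idea: external peers always see one stable, operator-assigned address per 5G function while the pod addresses behind it churn. It is a conceptual illustration only, with hypothetical addresses, and not a model of any particular product.

```python
# Toy sketch of a "smart egress": a stable, operator-chosen external IP
# per 5G function, decoupled from the dynamic pod IPs behind it.
EGRESS_MAP = {"amf": "203.0.113.10", "smf": "203.0.113.11"}  # fixed, operator-assigned

pod_ips = {"amf": ["10.42.0.7", "10.42.1.3"],   # dynamic, assigned by Kubernetes
           "smf": ["10.42.2.9"]}

def source_nat(function: str, internal_ip: str) -> str:
    """Rewrite a pod's dynamic source IP to the function's stable egress IP."""
    assert internal_ip in pod_ips[function], "unknown pod for this function"
    return EGRESS_MAP[function]

# Pods can be rescheduled and renumbered; the outside world never notices.
print(source_nat("amf", "10.42.0.7"))    # 203.0.113.10
pod_ips["amf"] = ["10.42.3.14"]          # the cluster reschedules the pod
print(source_nat("amf", "10.42.3.14"))   # still 203.0.113.10
```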
Landsat 8, loaded with several technological advancements for better data-gathering, blasted off Monday from Vandenberg Air Force Base in California atop an Atlas V rocket. The latest satellite in the 41-year-old Landsat program has enhanced capabilities for recording the changes happening on the planet.

Landsat 8 "very greatly boosts, let alone continues, the single most important record of changes in Earth's ecosystems," Gregory Asner of the Carnegie Institution for Science's Department of Global Ecology told TechNewsWorld. "No other series of satellites or any combination of all other satellites from all countries, including the United States, can match what Landsat has provided humankind — a record of change at a spatial resolution and temporal frequency that scientists and citizens can understand," Asner continued.

What Landsat 8 Offers

One of Landsat 8's technological advancements is in the way it records images. Previous Landsat satellites used what Landsat Data Continuity Mission (LDCM) scientist Jim Irons called "whisk-broom sensing." They had "a few detectors with a mirror that directed the field of view across a 185-km swath," he told TechNewsWorld. The mirror oscillated back and forth seven times a second. The detectors viewed each parcel of land in the image for only microseconds, "so we didn't have time to get a lot of signal as the field of view moved back and forth to build up the image," Irons said.

Landsat 8 uses push-broom imaging instead. Each of the focal planes on its sensors uses "thousands of detectors" across the plane, he explained. Landsat 8 pushes the field of view along its flight path. This "lets the detectors dwell on a parcel of land for much longer than we were able to with the whisk-broom approach, so we get a much better signal-to-noise ratio," Irons said. The detectors are read out every 30 meters along the ground track. "Previous sensors were like having a very accurate yardstick marked off in quarter-inch lengths," he elaborated. "This new sensor is like having an equally accurate yardstick marked every 1/64 or 1/128 of an inch."

The Landsat Mission's Task

Landsat satellites have taken millions of images over the years. These are used in agriculture, cartography, geology, forestry, regional planning, surveillance, and education applications. They also help in researching global change. "Our global landscape is changing at rates unprecedented in human history because of the rising population, climate change and other factors, so observation of the planet from this series of satellites is critical in understanding the gamut of our land cover and land use," Irons said. "We want to maintain a consistent data record in order to continue to observe these changes to know where we're going in the future."

A principal driver of climate change is land-cover change, Asner said. "Cutting down forests, creating fires, logging, agricultural practices, road and city building, changes in sea and land ice, and much more all contribute to greenhouse gas emissions, which absolutely drive important aspects of our climate." At the Carnegie Institution for Science's Department of Global Ecology, Asner runs one of the largest Landsat-based monitoring systems in the world. It focuses on tropical regions and provides high-accuracy assessments of tropical deforestation, forest degradation through logging and fires, and carbon emissions.

Filling in the Gap

In addition to enabling better pictures, Landsat 8 makes up for a gap in the program's ability to monitor the planet.
Landsat 7 suffered a severe mechanical failure in 2003 on a subsystem called the “Scan Line Corrector,” Asner said. This rendered about two-thirds of each image unusable. “When you put together the one-third of each image that is good, you get a very chaotic map that few can use,” Asner said, “so we have been working with at least one arm tied behind our backs for many years.” That led most of the global land-mapping and monitoring community to rely on Landsat 5 since 2003. However, Landsat 5 was decommissioned earlier this year, LDCM’s Irons noted, “so it was past time to get another Landsat satellite in orbit.”
Too many hours of Internet use might actually change your brain. Researchers in China have concluded that those who are addicted to the Internet may experience changes in the brain similar to those seen in individuals hooked on drugs or alcohol.

A research team led by Hao Lei of the Chinese Academy of Sciences used magnetic resonance imaging (MRI) to scan the brains of 35 male and female adolescents. Seventeen members of the group were classified as having Internet addiction disorder (IAD), based on interviews about their behavior. In the brain scans of those adolescents with IAD, there were changes in the white matter of the brain, the area that contains nerve fibers. There was evidence of disruption to the connections in the nerve fibers linking brain areas involved in emotions, decision making, and self-control. The changes appeared similar to those seen in brain scans of individuals addicted to alcohol, cocaine, heroin, and other drugs, the researchers noted. The study's findings were published in the scientific journal PLoS ONE.

Addiction of the Young

Earlier studies have revealed a direct correlation between age and Internet addiction. Young adults are more likely to be addicted to the Internet than any other age group, according to SafetyWeb, an Internet monitoring service for parents. It has not been determined whether there is an intrinsic vulnerability among young people or whether it's simply that young adults are early technology adopters and thus have been affected sooner than other age groups, SafetyWeb noted.

At reStart, an Internet addiction recovery program, the vast majority of patients are adolescents. "They fit into the category of failure to launch," Hilarie Cash, PhD, LMHC, executive director of reStart, told TechNewsWorld. They haven't figured out how to assume adult responsibilities, she said. "They don't have basic knowledge of how the world works and how to function in it."

Anyone Can Get Addicted

While some may be more susceptible, it's possible that too much Internet use could result in addiction in any user. "Overexposure can trigger this in any brain," said Cash. "We all are vulnerable." Device addiction isn't limited to the Internet, she added, noting that "many of us are mildly addicted to our phones."

One danger sign could be as obvious as overuse. This may apply to surfing the Internet just as easily as to other behaviors. "Overexposure of any substance or device can trigger adverse reactions such as IAD," Laura DiDio, principal analyst at ITIC, told TechNewsWorld. There is no longer any argument that addiction to the Internet, television, or video games is real, she maintained. "The real and present danger is that there is not enough information available to predict who might be affected and under what circumstances," said DiDio. "So the fact that one can't predict when IAD might strike makes it all the more frightening."

Limit Internet Time and Get Outside

A number of therapies can help Internet addicts recover, Cash said. "We do a combination of psychotherapy and helping these people figure out the skills they need to function in the world. We work on both physical and emotional health." The road to recovery could include plenty of hiking and backpacking, she observed. "That's a way to get them both physically fit and reconnected to the world." Perhaps the most basic step in breaking the addiction is to deny access to the Internet.
A doctor might advise such people to go cold turkey and quit the Internet altogether until they can get the addiction under control, noted DiDio. “Bottom line, it’s still very early stages in the research of the triggers and impact of IAD both physically and mentally.”
After two Israeli researchers published a paper earlier this month explaining how the security mechanisms in short-range wireless Bluetooth technology could be quickly undermined, members of the Bluetooth Special Interest Group (SIG) are urging users to take several precautions.

Bluetooth, a radio technology that allows users to exchange data over the airwaves at a distance of around 10 meters, has been a target of intrusion attacks in the past. Bluetooth security is essentially based on devices generating a secure connection through a pairing process. During this process, the user of one of the devices needs to enter a PIN code, which is used by internal algorithms to generate a secure key. This key is then used to authenticate the devices whenever they connect in the future.

But the findings of the Israeli researchers suggest the technology may be even more susceptible to attack than previously known. The academic paper puts forward a theoretical process that could potentially "guess" the security settings on a pair of Bluetooth devices, according to the Bluetooth Web site. To do so, the attacking device needs to listen in on the initial one-time pairing process. From this point, it can use an algorithm to guess the security key and masquerade as the other Bluetooth device. What is new in this paper, according to the Bluetooth SIG, is an approach that forces a new pairing sequence to be conducted between the two devices, together with an improved method of performing the guessing process, which brings the time required down significantly from previous attacks.

Even though this is an academic analysis of Bluetooth security and not a reported, real-life intrusion, SIG members, which include IBM Corp., Intel Corp., Nokia Corp., Microsoft Corp. and Motorola Inc., want to quickly address any concerns users may have. On the official Bluetooth Web site (www.bluetooth.com), the group offers three basic elements of good practice to help safeguard against attack:
- When pairing devices for the first time, do so in private at home or in the office, and avoid public places.
- Always use an eight-character alphanumeric PIN code as the minimum. The more characters in the code, the more difficult it is to crack.
- If your devices become unpaired in a public place, wait until you are in a private, secure location before re-pairing them.

Additional tips on how to use Bluetooth wireless technology securely are available at: www.bluetooth.com/help/security.asp.
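Some back-of-the-envelope arithmetic shows why the eight-character alphanumeric advice matters. The Python sketch below compares PIN keyspaces; the attacker guess rate is an arbitrary assumption chosen purely for illustration.

```python
# Back-of-the-envelope arithmetic behind the "eight alphanumeric
# characters minimum" advice. The guess rate is an assumed figure.
GUESSES_PER_SECOND = 1_000_000  # hypothetical attacker speed

for label, alphabet, length in [
    ("4-digit numeric PIN", 10, 4),
    ("8-char alphanumeric PIN", 36, 8),
]:
    keyspace = alphabet ** length
    worst_case_s = keyspace / GUESSES_PER_SECOND
    print(f"{label}: {keyspace:,} codes, "
          f"~{worst_case_s:,.2f} s to exhaust")
    # 10^4 falls in a fraction of a second; 36^8 takes roughly a month
    # at this rate, and every extra character multiplies the work by 36.
```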
Cybercriminals are always on the hunt for users' information online. Adversaries often exploit users' data to launch various kinds of cyberattacks and scams. Officials at the FBI are warning U.S. citizens to be vigilant when posting personal information online. The federal agency stated that Google Voice authentication scams target people who share their contact details. Fraudsters reportedly target users who post their phone numbers while selling goods on online marketplaces or social media platforms.

"You post your real phone number on some online platform. It's common for scammers to target victims who use popular marketplace apps or websites to post items for sale. Want to get rid of that old couch? Post it on one of those popular re-sale sites, and hope someone likes your taste in style. Recently, we have also been getting reports of people who are getting targeted in other locations, including sites where you post about lost pets," the FBI said in a statement.

Misuse of Google Voice

The Google Voice authentication service allows users to set up a virtual phone number, which can then be used to make domestic and international calls or to send and receive text messages. Threat actors often exploit these virtual numbers to launch various scams and frauds. Scammers can use compromised virtual phone numbers in fraudulent ads or other malicious activities to hide their real identities.

How the Google Voice Scam Works

Fraudsters contact the posted numbers via text or call, showing false interest in buying the products advertised by the user. The attacker triggers an authentication code from Google to be sent to the victim, claiming it is to confirm the victim's authenticity. The attacker then asks the victim to read back the authentication code received. In reality, the attacker is setting up a Google Voice account in the victim's name, using the victim's contact number for verification. Once it is set up, scammers use that Google Voice account to perform various frauds against other victims, and they can even leverage the authentication code to compromise the victim's Gmail account.

The FBI recommends that victims of the Google Voice authentication scam visit Google's support website to learn how to regain control of their Google Voice account and voice number. The agency also shared security measures to prevent such attacks from happening in the first place. These include:
- Never share a Google verification code with others.
- Only deal with buyers, sellers, and Fluffy-finders in person. If money is to exchange hands, make sure you use legitimate payment processors.
- Do not give out your email address to buyers/sellers conducting business via phone.
- Do not let someone rush you into a sale. If they press you to respond, they are likely trying to manipulate you into acting without thinking.
RSA CONFERENCE 2019 – San Francisco – Vulnerabilities in connected medical devices could have massive implications for patients and the healthcare industry as a whole.

The Internet of Medical Things (IoMT) is poised to broaden the attack surface for healthcare organizations, according to Check Point experts. Eighty-seven percent of healthcare institutions are expected to use IoT technologies by the end of 2019, with nearly 650 million IoMT devices in use by 2020, states a new Check Point study. The study underscores the danger of what could happen if these devices are poorly secured. IoT devices collect vast stores of data and are commonly built on outdated software and legacy operating systems. This makes them a simple gateway for cybercriminals, who could break in and move laterally across the target network.

Consider ultrasound technology. Researchers explain how "huge advancements" have been made to provide detailed health data to doctors and patients. Unfortunately, they report, this innovation hasn't made its way to the security of the IT environments where ultrasound machines sit. To prove this point, they went "under the hood" of a real ultrasound device. What they found was a machine running Windows 2000. Like many IoMT devices, it no longer receives updates or patches, which leaves both the machine and its data exposed to intruders. It wasn't hard to exploit vulnerabilities and access its database of ultrasound images, they explain. An attacker with this access could launch a ransomware campaign on the hospital system or swap patients' images.

"Think how much chaos that can do in the hospital," said Oded Vanunu, head of product vulnerability research at Check Point, in an interview with Dark Reading here at the RSA Conference.

Cybercriminals may use health records to get pricey medical services and prescription medications; they may also gain access to government health benefits. Or they could sell the records: the Ponemon Institute found healthcare breaches are the most expensive, at $408 per record. Healthcare organizations often don't have the budget for strong IT and security, Vanunu explained. "Hospitals are flat networks – from our perspective ... we think cybercrime will start to move to the weakest networks." It's happening already, he noted.

IoMT devices are in mass production, Vanunu continued, but nothing is being done to secure them. Because the device Check Point analyzed was running Windows 2000, exploiting it was simple. "We didn't use any sophisticated tools," Vanunu said. "No zero-day, no reverse-engineering vulnerability. Any beginner can exploit it."
Malvertising, or malicious advertising, is getting a bit more attention as of late. In essence, it's just another method by which criminals attempt to infect user PCs with some form of malware, albeit a very scary one, as it can reach so many users so easily. The important point is that criminals will continue to exploit new methods to infect users with malware. Regardless of the method (e.g., malvertising, spear-phishing, infected websites, drive-by downloads, etc.), the objective remains the same: criminals want to obtain control over online identities.

So, what do you do to help protect against malvertising? As an end-user? As an organization seeking to protect employee information and identities? As a service protecting online customers?

Unfortunately, regardless of how careful we are as end-users, enterprises, customers or governments, the malware will get through. Again, even if we:

- Avoid certain websites
- Adhere to strict online practices
- Protect corporate networks with firewalls and intrusion detection
- Secure access to online customer accounts

The malware will infiltrate the perimeter, and it's best to assume this has already taken place. And the more sensitive the transaction or information at risk, the more sophisticated the attack. Here are some best practices to help protect against malvertising and any other online threat.

End-Users & Online Customers

- Be safe. Practice safe browsing and always keep all your software up to date. Be educated and share good practices with others.
- Use suspicion. Don't assume SMS, email and social networking messages are necessarily from legitimate acquaintances or businesses. Be suspicious and never reveal account or personally identifiable information.
- Switch it up. Where passwords are your only choice, use a passphrase technique such as taking the first letter of an easy-to-remember phrase, and use different ones for different sites (a short sketch of this technique follows below).
- Take advantage. Always take advantage of advanced security controls offered by online providers. So many online thefts can be avoided.
- Go mobile. To access online services, consider downloading and using mobile applications from legitimate app stores (i.e., no jailbreaking) rather than traditional PC browsers.

Employers & Service Providers

- Secure in layers. Implement layered security controls for networks, employees and online customers. Perimeter security is just step No. 1.
- Protect identities. Ensure identities are well protected with controls beyond usernames and passwords, using some form of two-factor authentication that is dynamic in nature.
- Go OOB. For higher-risk transactions, make sure they are confirmed on an out-of-band (OOB) channel to defeat malware that has initiated or modified transactions.
- Be smart. Consider both security and usability when introducing controls: the technology exists.
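As a small illustration of the "Switch it up" tip above, the sketch below derives a site-specific password from the first letters of a memorable phrase. The phrase, separator, and site tag are illustrative; real passwords should also satisfy each site's complexity rules.

```python
# Minimal sketch of the first-letter passphrase technique described above.
# The phrase and site tag are examples only.

def phrase_to_password(phrase: str, site_tag: str) -> str:
    """Take the first character of each word, then append a per-site tag."""
    initials = "".join(word[0] for word in phrase.split())
    return f"{initials}!{site_tag}"

print(phrase_to_password("my dog Rex eats 2 bowls of food daily", "bank"))
# -> mdRe2bofd!bank
```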
Defence in depth is an approach to information assurance (IA). It derives its inspiration from the military strategy of the same name. In this article, we explain what defence in depth involves and why it is useful for your organization.

Defence in depth (also referred to as the Castle Approach) is an approach to information assurance whose most prominent feature is its multiple layers of defence. The concept involves setting numerous security controls throughout the systems of your organization, with the aim of providing extra layers of protection. With this approach, security vulnerabilities across the various elements of the system are addressed individually. In other words, all aspects of your organization are protected, including but not limited to personnel, operations, and technology.

Deemed a 'comprehensive approach' by the National Security Agency, defence in depth is designed to provide strong protection against elaborate attacks that use more than one technique to penetrate your security measures. The idea behind defence in depth is a military tactic that mainly aims to delay enemy forces instead of stopping them altogether. In the military world, this strategy is used to buy time while waiting for support. Within the framework of information assurance, defence in depth keeps attackers busy long enough for your IT team to notice them. Additionally, it may be beneficial to review the components of information security in order to take a closer look at the field.

What are the three areas of defence in depth?

Defence in depth can be split into three different areas, also known as 'controls'. Below you can find detailed information regarding each of them.

As the name suggests, physical controls refer to almost anything that prevents intruders from physically accessing your IT systems. Fences, password-protected doors, security guards, and CCTV systems can all be considered physical controls.

Technical controls consist of various software and hardware designed and/or adopted to protect your systems and networks. There are many alternatives: data encryption, password protection, fingerprint or retina readers, and so on. Hardware technical controls can be confused with physical controls. The difference is that physical controls block access to the systems themselves, such as servers, whereas hardware technical controls aim to hinder access to the information within the systems.

The purpose of administrative controls is making sure that regulations are met and that all staff are informed about the security measures and their role within them. Administrative controls include policies, protocols, and procedures. They can cover and/or govern procedures regarding hiring new employees, data storage, and the like.

Why is defence in depth useful?

Having more than one layer of security makes your valuable data safer. If one or more of your defence mechanisms are breached, there are several more to slow the attackers down. Hindering the attackers buys more time for your IT team to contain and stop the ongoing threat. Moreover, it is no secret that hackers and cyber attackers develop ever more elaborate techniques to sneak into your systems, often using more than one strategy to find their way in.

Since more layers of defence require more strategies and/or tools, adopting defence in depth enhances the security posture of your organization significantly. AI-supported security measures such as SOAR and SIEM allow cybersecurity professionals to take a holistic approach to cyber security.
When people think of software architecture, they often picture layers of code. But in recent years, there has been a shift from this model, known as the monolithic approach, toward a more modular development style. This new approach, known as microservices, has given rise to a phenomenon known as API sprawl. This blog post will explain what API sprawl is, the factors causing its growth, its consequences, how to identify it, and the benefits of a well-managed policy.

What is API Sprawl?

In a microservices architecture, each system component is broken down into its own independent service. These services communicate with each other via application programming interfaces (APIs). The move away from monolithic programming means that instead of one giant codebase, there are now many small codebases, each with its own set of APIs, and every team is responsible for its own service.

API sprawl is many APIs of many different types, spread over many locations and managed by many different teams. This sprawl leads to zombie APIs: those that are no longer used but are still lurking around, taking up space. It also leads to inconsistency in design and functionality, as different teams develop their APIs according to their own preferences.

Factors Driving API Sprawl

The first factor driving API sprawl is the rise of microservices architecture. There are many benefits to using microservices over monolithic technology, so simply returning to monoliths is not a good answer to the problem. Unfortunately, microservices teams tend to get siloed off from each other. When siloed, they are less likely to share best practices or collaborate on API design. This breakdown in communication and sharing leads to each team developing its own APIs, which adds to the overall sprawl.

The second factor is the rise of the cloud. The cloud makes it easy to provision new servers and services on demand. Unfortunately, many servers hosting many APIs can lead to a situation where there are many unused or underused APIs, which adds to the clutter.

The third factor is the rise of DevOps and continuous delivery/integration. In this model, software is released frequently, in small increments. These small increments can produce instability in the APIs, and some developers may find it easier to develop their own stable API instead of dealing with constant breakage. Furthermore, it introduces yet another team to the development and deployment process, and the more teams there are, the more difficult it is to manage communication between them.

The fourth factor is the size and complexity of businesses and their software systems. Gone are the days when an enterprise could have a single software system doing one thing. In addition, companies are composed of many different teams, each with its own services, goals, and business priorities.

The fifth factor is the need for speed. In today's fast-paced business world, companies face pressure to release new features quickly. That pressure can lead to development teams focusing on getting something done instead of doing it right. In the case of API sprawl, developers do not search for an API that already exists or is further along in development. This need for speed leads to:

- Lack of governance: When there is no central authority governing the development of APIs, the result is a Wild West situation where everyone does their own thing.
- Lack of documentation: If there is no clear documentation for how developers should use an API, different teams will develop different conventions and standards. This lack of standardization leads to inconsistency and confusion.
- Lack of tooling: Without the proper tooling, it can be challenging to keep track of all the different APIs and their dependencies. Without a clear picture, it is hard to make changes or update APIs without breaking things.

Consequences of API Sprawl

API sprawl can lead to significant problems, including security vulnerabilities, decreased productivity, and increased complexity. With so many different APIs, it becomes more challenging to keep track of who has access to which API and what they can do with it. This lack of awareness of access and authority can lead to an increased risk of data breaches and other security issues.

Decreased productivity is one of the most common consequences of API sprawl. With too many APIs, it can be difficult for developers to find the right one. The search wastes time and, eventually, causes developer frustration. In addition, when too many APIs exist, it is more likely that some will be duplicates of others.

Increased complexity is another common consequence. When there are too many APIs, it can be challenging to keep track of them all. The lack of clarity about what is available and where it lives can lead to confusion and errors. Furthermore, when multiple teams are working on different services, coordinating between them can be difficult, which can lead to delays in releases and increased costs.

With so many different APIs to keep track of, it can be challenging to make changes quickly. The inability to react rapidly to a changing business landscape can lead to missed opportunities and a competitive disadvantage. It also becomes more challenging to make changes and update systems. This is the most significant issue with monolithic applications, and microservices with API sprawl lead to the same issue; you have rebuilt a system with the same problems. The inability to change can lead to frustration and stagnation.

More APIs mean more development and maintenance costs. They can also lead to wasted effort as teams duplicate work already done by other teams. And when users see many different APIs with no practical or logical way to manage or search them, they may not trust that the system is well designed or well maintained. That lack of trust can lead them to use other systems or build their own, contributing further to the problem.

How to Identify API Sprawl

Identifying API sprawl is conceptually straightforward, but not trivial in practice. Consider the following:

- Do all of your APIs perform a unique job, calculation, or service?
- Do you have clear and consistent documentation for all of your APIs?
- Do you have effective tooling in place to manage all of your APIs?
- Are there clear governance procedures for new API development and deployment?
- Can developers easily find the correct API to use in their development?
- Do security audits find no vulnerabilities in your APIs?

If you answered no to any of these questions, you either have a problem with API sprawl or are at risk of developing it in the future. A crude automated check for one of these symptoms, duplicated endpoints, is sketched below.
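The sketch below implements that duplicate-endpoint check. It is a hedged example, not a complete inventory tool: it assumes your teams publish OpenAPI specs as local JSON files (the file names here are hypothetical) and flags paths that appear in more than one spec.

```python
# Crude API sprawl signal: find endpoint paths defined in multiple
# OpenAPI specs. File names below are hypothetical placeholders.
import json
from collections import defaultdict

SPEC_FILES = ["billing.json", "payments.json", "invoicing.json"]

paths_seen = defaultdict(list)
for spec_file in SPEC_FILES:
    with open(spec_file) as f:
        spec = json.load(f)
    for path in spec.get("paths", {}):
        # Normalize path parameters so /users/{id} and /users/{uid} collide.
        normalized = "/".join(
            "{param}" if part.startswith("{") else part
            for part in path.split("/")
        )
        paths_seen[normalized].append(spec_file)

for path, owners in sorted(paths_seen.items()):
    if len(owners) > 1:
        print(f"possible duplicate endpoint: {path} defined in {owners}")
```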
Benefits of a Well-Managed API Ecosystem without API Sprawl

The benefits of a well-managed API ecosystem include:

- reduced risk of data breaches, hacks, and slowdowns
- the ability to deliver new and improved functionality to the business faster
- reduced cost of providing business functionality
- improved developer satisfaction and morale
- increased trust in the system by users and developers

API sprawl is a real problem, but with tools like Traceable AI, you can manage your organization's APIs and prevent sprawl from happening in the first place. It can also help organizations with existing API sprawl get it, and keep it, under control.

Depending on your role and the needs at your organization, we offer multiple options to get started with Traceable AI:

- If you're a CISO or DevSecOps security leader and want to evaluate your API security risks, try our API Security Posture Assessment.
- To start your journey, sign up for our Free Tier and learn all about your APIs: internal, external, third-party, and even the shadow or rogue APIs you may not even be aware of.
- If you want to compare different API security solutions in the market, check out our API Security Tools comparison guide.
- You can also view a demo or book a meeting to learn more and ask your questions on how Traceable can meet your API security requirements.

This post was written by Steven Lohrenz. Steven is an IT professional with 25-plus years of experience as a programmer, software engineer, technical team lead, and software and integrations architect. They blog at StevenLohrenz.com about things that interest them.
The Remote Desktop Protocol (RDP) is commonly used by many different Windows software solutions to provide users with access to remote services. Depending on your IT environment, there's a good chance that RDP is being used this very minute by one or more of those solutions.

RDP was developed by Microsoft as a proprietary technology and has been built into every version of Windows since Windows XP in 2001. And, yes, that does include more recent versions of the operating system like Windows 10 and 11. As its name indicates, the Remote Desktop Protocol was intended to make remote desktops more user friendly by facilitating communication between Microsoft's Terminal Server and the Terminal Server Client.

Part of that ease of use derived from the standardization that RDP provides. Windows servers and clients know that RDP port number 3389 is the default listening port for computers to establish a remote desktop connection, so they keep this port open automatically. That way, users are less likely to encounter the kinds of connection errors or Windows Firewall issues that will send them to IT in search of help.

Unfortunately, the use of 3389 as a standard port didn't escape the attention of malicious actors. They quickly realized that they could exploit RDP's open port as a way to deliver a ransomware payload or a DDOS attack. A popular method is simple brute force attacks: Hackers will try a relentless series of authentications in the hope of gaining illicit access to the remote desktop server on that port. This has turned the default RDP port into a major liability. Cybercrime experts currently estimate that RDP is the initial attack vector for half of all ransomware attacks.

Naturally, the number of ransomware attacks rose during the pandemic, when the world shifted quickly to providing remote desktop access to users who were now working outside of the office. But with a 2021 PWC survey revealing that 83% of companies anticipate continuing remote or hybrid work going forward, remote desktop services and the software that leverages them will remain in demand. Consequently, RDP will remain a point of vulnerability for IT and organizations as a whole.

The not-so-quick (or effective) fix: Manually configure your RDP port

There's a widespread assumption that simply changing the default port for RDP to something other than 3389 will thwart hackers. And if you have no other options, it's true that assigning a new RDP port is a better defensive maneuver than not changing it at all. Here's a quick tutorial on how to do it:

- Double-click on the Windows Start button. Type regedit and then press Enter. This will launch the Registry Editor. In newer versions of Windows, you can do this directly from the Windows Search feature.
- In the Registry Editor, look for HKEY_LOCAL_MACHINE in the sidebar. Extend the drop-down list and navigate to HKEY_LOCAL_MACHINE\SYSTEM. Keep extending the drop-downs next to CurrentControlSet > Control > Terminal Server > WinStations > RDP-Tcp.
- Click on RDP-Tcp. That will open up a list of items in the main window.
- Locate the dword file named "PortNumber". Right-click on the PortNumber dword file and select "Modify…"
- This results in a dialog with three fields: Value name, Value data and Base. Change the base to Decimal. In the Value data field, enter a new port number between 1025 and 65535. Make sure that the new remote desktop port number you choose is not already in use by another application or service.
- Click OK, then reboot the computer.
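For administrators who prefer scripting, here is a hedged sketch of the same registry edit using Python's standard winreg module (Windows only). The port number is an arbitrary example, and the usual caveats apply: run it with administrative privileges, back up the Registry first, and reboot afterwards.

```python
# Scripted equivalent of the Registry steps above (Windows only).
# Run with administrative privileges; editing the Registry is risky.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp"
NEW_PORT = 3390  # arbitrary example; pick an unused port between 1025 and 65535

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "PortNumber", 0, winreg.REG_DWORD, NEW_PORT)

print(f"RDP PortNumber set to {NEW_PORT}; reboot and update firewall rules to match.")
```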
This general procedure should change the default RDP port on your Windows machine. But bear in mind that the Windows Registry contains sensitive, system-level data that is not supposed to be altered in most circumstances. Any changes you make could cause instability.

Another important thing to remember is that this only changes the local ports on the current machine. If you have multiple clients using Windows Remote Desktop or other RDP-based software, you will need to make the exact same changes to the default RDP port on those machines as well. On top of this, you'll also need to update your Windows firewall rules. This is done by creating a new rule or set of inbound rules that account for the new RDP port. If you're using Windows Server to provide remote desktop services, these changes to the Windows Registry and Windows Firewall will likely need to be replicated there too. Double check with your software solution provider to determine whether it's okay to do this without breaking functionality.

The next time the user connects to these RDP-based services using a Remote Desktop client, they will have to manually update the local port. They can do this by adding a colon and the new RDP port number after the machine's hostname or IP address (e.g., "hostname:1234") in the connection field.

However, just changing the RDP port number doesn't mean that the security problem is solved. It isn't hard for someone with basic technical knowledge to determine the new port number, especially if they gain access to a remote computer. This method is also insufficient if your organization practices or plans to implement a zero trust policy. Zero trust assumes that every device is potentially compromised, so any open port, even if it's not the default, is treated like an attack vector. In a zero trust environment, the only acceptable course of action is to lock down vulnerabilities, restrict user access to essential functionality and minimize all exposure of the internal network to remote entities.

Practice zero trust with Cameyo cloud desktops

Cameyo's Virtual App Delivery (VAD) platform enables organizations to maintain strict zero trust IT policies while providing their work-from-home (WFH) and hybrid users with effortless cloud desktop access. We're able to achieve this mix of uncompromising security and incredible ease of use thanks to a suite of innovative technologies and practices. These include:

- Non-persistent servers: Every time the user logs out, all of their user data is fully wiped from the Cameyo server.
- Cameyo NoVPN: As a rule, virtual private networks (VPNs) grant users access to the corporate network. Cameyo keeps clients off the corporate network, yet it's also far easier for users to connect than with a VPN.
- Secure Cloud Tunneling: With Cameyo, IT can deliver applications to remote & hybrid users outside of the VPN and without opening any ports in their firewall. It's the best of both worlds: flexibility and security.
- User segmentation: Cameyo's virtual app delivery (VAD) isolates sessions and ensures constant separation of resources, so users and their devices never come into contact with networks or data beyond that.
- No lateral movement: In the event that a user's device is infected with malware, by design Cameyo prevents that malware from ever reaching your internal network and data. Nor can it reach the Cameyo system.
- Least privilege: Cameyo delivers all apps via a secure HTML5 browser and encrypts all traffic with HTTPS. Cameyo also leverages Windows Terminal Services and temporary user profiles, so admin privileges, settings and files remain off-limits.
- Identity and access control: Cameyo integrates with your single sign-on (SSO) provider of choice. Any multi-factor authentication (MFA) you have set up with your SSO carries over to Cameyo.
- Port Shield: Rather than leaving the RDP port open, Cameyo opens and closes both the HTTP and the RDP ports dynamically in response to authenticated user activity and whitelisted IP addresses.

This is how Cameyo delivers an ultra-secure, user-friendly cloud desktop even as it eliminates the need to tinker with Windows Registry settings and firewall rules. Better still, Cameyo's VAD solution is Windows-independent. What this means is that Cameyo doesn't force users to interact with an entire Windows-based desktop environment or use a Windows-based client to stay productive. They can selectively access the apps they want, and they can do so on any device, regardless of its operating system. That stands in stark contrast to Windows Remote Desktop and other legacy remote desktop access solutions, which are often built around providing a full Windows desktop experience.

If zero-trust security coupled with industry-leading ease of use for your remote workforce sounds like an ideal combo, simply sign up for your free trial of Cameyo's VAD platform to experience it for yourself. And if you've got technical questions about how Cameyo is able to provide greater flexibility while hardening security, all you have to do is request a demo. Our engineers will gladly talk you through the features and practices described above in more detail.
It seems that just about every website you visit has some type of notification asking you to accept cookies. The name "cookies" sounds friendly and innocent, but these tracking files can be anything but. Cookies are text files that are used by websites to track certain activities on that website and others. They have both positive and negative purposes, and it's important to understand what you're agreeing to when you click to "accept all cookies," otherwise your online privacy could be at risk.

How Cookies Work

Cookies, also known as HTTP cookies, are created when you connect to the web server of a site. Some sites will store cookies without your permission; others will ask. (We'll get into why most need to ask shortly.) The cookie is stored in your web browser, and it identifies you with a unique ID to the site that planted the cookie. Your ID stores "session" data, which are the activities you take on a website during that connection session.

The types of information that cookies store will differ according to how a website developer sets them up. Here are a few examples of data a site can save in a cookie during your session:

- What product pages you looked at on a website
- Any products that you added to a shopping cart
- How long you looked at a specific page
- Where else you travel online
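For the curious, here is what a cookie actually looks like on the wire: a named value plus a few attributes in an HTTP header. This minimal sketch uses Python's standard http.cookies module; the cookie name and value are illustrative.

```python
# A cookie is just a named value plus attributes in a Set-Cookie header.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"           # the unique ID discussed above
cookie["session_id"]["max-age"] = 3600    # expire after one hour
cookie["session_id"]["httponly"] = True   # hide the value from page JavaScript
cookie["session_id"]["samesite"] = "Lax"  # limit third-party (cross-site) use

# The header a web server would send to your browser:
print(cookie.output())
# Set-Cookie: session_id=abc123; HttpOnly; Max-Age=3600; SameSite=Lax
```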
Positive Use of a Cookie

A positive use of a cookie would be to save preferences you've put in place on a site, like a "wish list" on a shopping website like Amazon. Most people have experienced what happens when you click "clear all cookies" in your browser. It's like the websites you visit don't even know you anymore! That's because you've cleared the tracking cookie that stored all your personal session data, the data the site reads to know who you are.

Negative Use of a Cookie

A negative use of a cookie would be ads that drop third-party cookies that follow you around online and track your every move so more ads can be served to you. "Retargeting" is a term that is now common in online advertising. These cookies aren't contained in a single website. Instead, they follow your online journey and spy on everything you do. They're the reason that if you visit a site about running shoes, all of a sudden you'll start seeing running shoe ads when you're on Facebook or Instagram.

Why Do I Get All These Requests to Accept Cookies?

As online privacy regulations began to be put in place, especially the European Union's GDPR (General Data Protection Regulation), companies were forced to provide more information to website users about the personally identifiable data they were tracking. If a website owner doesn't alert users that they are using cookies and collecting session data, their company can be subject to fines and other penalties. Despite the threat of non-compliance, it's estimated that 53% of these notices are hidden near the bottom of the website, and 93% of those that do prompt the user give them no option but to accept the cookies.

What Should You Do?

Cookies are a part of life on the internet, but it's important to know that you don't have to accept them if you don't want to. What happens if you don't accept cookies? The website owner could make it impossible for you to use their site. But in most cases, you would simply not have as personalized an experience. For example, if you saved items to your shopping cart, they would not be there if you left the site and then came back.

Some sites will allow you to save limited cookies, which they might term "required cookies only." In this case, you can opt for a lower level of tracking that is most likely associated with saving important site preferences.

It's best to keep cookies to a minimum and to clear out cookies regularly in your browser settings for sites that you don't frequently visit or where you don't care about having a personalized experience or having preferences saved.

Need Help With Online Security Safeguards?

Online privacy can be tricky because you really have to be your own advocate on most websites. C Solutions can help your Orlando area business optimize your network for security and keep your staff up to date on security awareness. Schedule a free consultation today! Call 407-536-8381 or reach us online.
In today's world, one of the most vital skills is the ability to write a program, because computer technology is everywhere: in vehicles, in factories, and in household appliances. Programming has simplified our lives. In this post we will look at the most popular programming languages for students, beginners, and those who already know how to program. Have you decided which programming language you will learn? If you need help deciding, this blog is especially for you. Choosing which programming language suits you from a career point of view is not an easy task, so we will try to help you choose the best programming languages of the future.

Python

Our first choice is Python. Fast, user-friendly, and easy to deploy and use, the Python programming language has no doubt earned first place on our list. It is a powerful scripting language with a dizzying number of modules and libraries; it seems it can do almost everything we need. Python is free and open source, and we can also use it for commercial purposes. Python has a very simple, elegant syntax: compared with other languages like C++, Java, or C#, Python is much easier to read and write. We can also move Python programs from one platform to another and run them without any changes or issues.

Why learn the Python programming language?

- Python is easy to understand and learn, with simple, readable syntax.
- Python can be used on many platforms and for many purposes, for example developing web applications, data science, and more.
- It lets you write programs in fewer lines than most other programming languages.
- Python's popularity is growing rapidly; it is now one of the most popular programming languages.

Java

Java is one of the most popular programming languages. Created in 1995 and now owned by Oracle, it runs on more than 3 billion devices. Java is used for mobile applications, desktop applications, web applications, games, database connections, and much more.

Why use the Java programming language?

- Easy to use and easy to learn
- Supports multiple platforms (Windows, Mac, Linux, Raspberry Pi, etc.)
- Open source and free
- One of the most popular programming languages in the programming world
- A secure, fast, and powerful language
- An object-oriented language, which helps provide clear program structure and allows code to be reused, lowering development costs
- Close to C++ and C#, which makes it easy to switch to Java or vice versa

C++

C++ is a cross-platform programming language used to create high-performance applications, developed by Bjarne Stroustrup. It provides a high level of control over memory and system resources.

Why use C++?

C++ is one of the most popular programming languages and can be found in today's operating systems, graphical user interfaces, and embedded systems. It is an object-oriented language that provides clear program structure and allows code to be reused for other tasks, lowering development costs. C++ is portable and can be adapted to multiple platforms as required, and it is approachable to learn. C++ is close to C# and Java, which makes it easy for programmers to switch between them.

Go

Go, or Golang, is an open-source programming language that makes it easy to build simple, reliable, and efficient software. Go is a newer programming language with a full set of solutions.
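To close with a small, hedged taste of the readable syntax praised in the Python section above: counting word frequencies, a chore in many languages, takes only a few lines.

```python
# Word-frequency counting in a few readable lines of Python.
from collections import Counter

text = "python is easy and python is popular"
counts = Counter(text.split())

for word, n in counts.most_common(2):
    print(word, n)
# python 2
# is 2
```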
If one organization detects a threat, another can learn from it and prevent it from entering its network. However, this is only possible with information sharing. Cybersecurity experts are making continuous efforts to defend against agile and persistent cyber adversaries who find new attack vectors and vulnerabilities every day. In such conditions, a reactive approach to dealing with threats is not sufficient; information sharing is needed. For effective incident detection and response, improved and proactive information sharing is essential, and it can be carried out in a cyber fusion-based environment.

What is Information Sharing?

Information sharing in cybersecurity refers to the exchange of threat information among different organizations. In order to smoothen their information sharing processes, organizations are building virtual cyber fusion centers (vCFCs) that leverage end-to-end, bidirectional threat intelligence platforms (TIPs) for automated sharing of strategic and technical threat information. In a vCFC, sharing allows security teams to communicate, receive, and access threat information in real time, which enhances their ability to understand and respond to cyber threats.

Essentially, this information is analyzed and enriched threat intelligence, derived from the resources and knowledge of many organizations and technologies. Sharing makes the information accessible and operational, boosting every participating organization's knowledge of adversaries, assets, tactics, techniques, and procedures (TTPs), indicators of compromise (IOCs), and much more. It raises awareness about lurking cyber threats as they happen, and it also helps in reducing response time to incidents and implementing security measures. Cyber fusion strengthens information sharing, providing exposure to resources and additional insights that add value to security operations.

The idea behind threat intelligence sharing is to gain contextual awareness of threats and toughen security readiness against cyberattacks, enabling organizations to understand attack patterns and define the necessary defense mechanisms. By fostering collaboration between security teams from different organizations, cyber fusion empowers them to derive and employ intelligence on a greater level to address all kinds of threats. This builds collective defense, allowing security teams to come together and mitigate cyberattacks.

An end-to-end, bidirectional sharing TIP, one of the core components of vCFCs, allows organizations to both share and receive threat intelligence with information sharing communities such as Information Sharing and Analysis Centers (ISACs) and Information Sharing and Analysis Organizations (ISAOs), commercial feed providers, national CERTs, and peer collaborators such as vendors, clients, and others. More organizations are now engaging in real-time bidirectional threat intelligence sharing with their industry peers, vendors, clients, and sharing communities.

The Role of the Threat Intelligence Platform (TIP) in Information Sharing

By using an advanced TIP, security teams can ingest strategic and technical threat information from all kinds of human- and machine-readable sources. The advanced enrichment capabilities of a TIP allow security teams to enrich and contextualize threat data from several trusted sources to perform correlation, analysis, deduplication, and indicator deprecation in real time.
Such platforms also leverage advanced frameworks like MITRE ATT&CK to correlate information on threat actors' TTPs, identify trends across the cyber kill chain, and map attacker footprints based on historical or contemporary incidents and threat data. Moreover, cyber fusion features allow threat data to be shared with other security tools for real-time actioning. A cyber fusion-based TIP can ingest tactical and technical intelligence from several external sources such as threat intel providers, peer organizations, ISACs, regulatory bodies, the dark web, and more. It automatically converts, organizes, and stores threat data from multiple formats such as STIX, JSON, XML, MAEC, CybOX, and others. Advanced TIPs also support algorithms that boost confidence in IOCs through scoring and use the validated intelligence to perform actions such as automated dissemination to preventive and response technologies. Because of these unique features, organizations are realizing the need for a TIP and embracing it.

Standardization Before Sharing

Organizations must define what they want to share. Describing the content, topic fields, and aspects they want to share only when an incident takes place can lead to challenges; therefore, threat information needs to be standardized. In order to make intelligence valuable, every organization needs to understand what it is receiving and be able to use it to gain a better understanding of the threats and make informed decisions. This requires the intelligence to be standardized, converting it into a shared language and format for ease of use.

An advanced TIP supports sharing standards such as Structured Threat Information Expression (STIX), Trusted Automated eXchange of Indicator Information (TAXII), and Cyber Observable Expression (CybOX). These are open, community-driven efforts and a set of free specifications that represent threat information in a standardized format for threat intel sharing. A state-of-the-art TIP leverages STIX/TAXII server-based feeds to collect threat data and automate real-time information sharing and intelligence submission with different industries, government bodies, and other organizations. This makes threat intelligence sharing more automatable, flexible, extensible, and easily readable. By leveraging these standards in a cyber fusion-based environment, organizations collect and share threat intelligence feeds in a structured format, reducing the manual effort required for the normalization, enrichment, correlation, and analysis of threats.

Standardization of threat intelligence sharing allows organizations to exchange and deliver crucial threat warnings and incident-related information in real time. It also enables organizations to automate and orchestrate threat intelligence workflows effectively. Furthermore, if a large number of organizations start ingesting and sharing threat intelligence in standard formats such as STIX, it would also increase the overall threat intel participation rate and eventually help in creating large threat data repositories that could be used for advanced processes such as confidence scoring.
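To make the standardization point concrete, here is a hedged sketch of a minimal STIX 2.1 indicator, the kind of object a TIP would exchange over TAXII. The id, timestamps, and IP address are illustrative placeholders, not real campaign data.

```python
# Minimal STIX 2.1 indicator, expressed as plain JSON for sharing.
# All values below are illustrative placeholders.
import json

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-0000-0000-000000000000",
    "created": "2022-01-01T00:00:00.000Z",
    "modified": "2022-01-01T00:00:00.000Z",
    "name": "Known C2 address",
    "indicator_types": ["malicious-activity"],
    "pattern": "[ipv4-addr:value = '203.0.113.10']",
    "pattern_type": "stix",
    "valid_from": "2022-01-01T00:00:00Z",
}

print(json.dumps(indicator, indent=2))
```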
Threat Intelligence Platforms (TIPs) for Information Sharing Communities (ISACs/ISAOs)

With the rising use and importance of threat intelligence, information sharing among organizations has become paramount. Industry-centric sharing initiatives, such as ISAOs and ISACs, have led to a significant increase in threat intelligence sharing. Moreover, government-led initiatives such as the Cyber Information Sharing and Collaboration Program (CISCP) and the Cybersecurity Information Sharing Partnership (CiSP) are promoting threat intel sharing collaborations between governments and private institutions.

The modern-day cybersecurity landscape has transformed the way organizations respond to threats. Many organizations are now becoming part of information-sharing communities such as ISACs and ISAOs to engage in bidirectional threat intelligence sharing in real time. Outdated threat information sharing solutions prove inept when it comes to gaining insights into the attacks faced by member organizations of these sharing communities. More and more ISACs and ISAOs are now leveraging advanced TIPs to counter the threats targeting their industries by sharing appropriate and actionable threat intelligence in real time with their member organizations.

A TIP provides avant-garde capabilities that facilitate real-time alerting and automated threat intelligence sharing. It helps ISACs and ISAOs reduce the time taken to ingest, enrich, and disseminate threat intel while also boosting member collaboration. Changes in any kind of intelligence-related information can easily be communicated with the help of threat intelligence sharing. This allows member organizations to pass information more quickly, make informed decisions, and deliver better insights to their stakeholders and consumers.

By sharing threat intelligence, member organizations gain access to knowledge and information beyond their own network and can leverage it for a higher level of visibility and awareness. Threat intelligence sharing helps organizations detect threats in real time and protect their users from malicious encounters. Further, sharing threat intel within an industry significantly minimizes the risk of cyberattacks by providing an organization with increased awareness and predictive knowledge of impending attacks. Threat intelligence sharing gives early warnings, which enable security teams to use the right tools and save time in looking for the root causes of attacks. The costs involved in responding to an attack can be huge, so by minimizing risks security teams can reduce potential expenditures.

The Bottom Line

Today, every industry is looking to embrace robust tools, technologies, and processes as part of its cybersecurity roadmap. Information sharing has become a crucial aspect of driving cybersecurity initiatives. By leveraging the cyber fusion capabilities of a TIP, customers can share threat data freely and take relevant actions.
Network File System (NFS)

What is NFS?

A network file system (NFS) is a mechanism that enables storage and retrieval of data from multiple hard drives and directories across a shared network, enabling local users to access remote data as if it were on their own computer.

What is the NFS protocol?

The NFS protocol is one of several distributed file system standards for network-attached storage (NAS). It was originally developed in the 1980s by Sun Microsystems and is now managed by the Internet Engineering Task Force (IETF). NFS is generally implemented in computing environments where centralized management of data and resources is critical. The network file system works on all IP-based networks. Depending on the version in use, TCP and UDP are used for data access and delivery. The NFS protocol is independent of the computer, operating system, network architecture, and transport protocol, which means systems using the NFS service may be manufactured by different vendors, use different operating systems, and be connected to networks with different architectures. These differences are transparent to the NFS application and the user.
HDFS is the Hadoop Distributed File System. The HDFS design aims to achieve reliability and high-throughput access to application data. One of the most important design goals is minimizing the cost of handling system faults caused by planned or unplanned outages. HDFS has a master/slave architecture, where the master node exposes the NameNode service that manages files and their metadata, and the slave nodes expose DataNode services that manage block storage.

- HDFS is built to work on low-cost hardware
- HDFS has a simple design, so it's easy for developers to code against and reason about
- HDFS has built-in replication, so it's very reliable. By default it replicates every file 3 times across different slaves, so even if a slave node fails the data can be recovered
- HDFS is high-throughput
- HDFS provides good availability of data (if the DataNode is available)
- HDFS provides block-level storage (i.e., storing data in large blocks that can be spread over different nodes)
- HDFS has good performance
- HDFS is highly configurable and can be used for various applications
You can use Access Control Using SQL (DCL) to control the security of the database and access to it. You can manage users and roles to specify who is allowed to perform actions in the database. The following SQL statements are the components of the DCL:

- CREATE USER: Creates a user.
- ALTER USER: Changes the password of a user.
- DROP USER: Deletes a user.
- CREATE ROLE: Creates a role.
- DROP ROLE: Deletes a role.
- GRANT: Gives roles, system privileges, and object privileges to users or roles.
- REVOKE: Withdraws roles, system privileges, and object privileges from users or roles.
- ALTER SCHEMA: Changes the owner of a schema (and all its schema objects) or sets schema quotas.

An administrator can create an account for users who want to connect to the database with the CREATE USER SQL statement. New users get either an LDAP configuration, a Kerberos principal, or a password (which can be changed later). The password security policy defines how complex users' passwords must be. It is listed in the system table EXA_PARAMETERS (entry PASSWORD_SECURITY_POLICY), and you can change it through ALTER SYSTEM.

The naming conventions for user names and passwords are the same as for SQL identifiers (identifiers for database objects such as table names; for more information, see SQL Identifier). However, case sensitivity does not apply here: user names and roles are not case sensitive.

Appropriate privileges are required for a user to perform an action in the database. These privileges are given or withdrawn by an administrator or other users with administrative privileges. For example, a user needs the system privilege CREATE SESSION to connect to the database. If you want to disable a user temporarily, this system privilege can be withdrawn. For changing the user identity after login, you can use the IMPERSONATE statement.

The database has a special user SYS that cannot be deleted. The SYS user has universal privileges. The default password of the SYS user is exasol. You should change this password at first login to prevent any security risk.

To enforce security rules for user accounts, you can set policies according to your requirements. By default, the policies are deactivated and must be enabled through system parameters or user-specific settings. The following mechanisms apply only to password-authorized users. In the case of LDAP-authorized users, such rules are configured in the external LDAP service.

Password Security Policy

You can specify rules for passwords by adjusting the system parameter PASSWORD_SECURITY_POLICY. You can find the current policy as a string value in the system table EXA_PARAMETERS and change it through the ALTER SYSTEM command. The value is either OFF (completely deactivated) or a list of rules separated by colons. Here is an example of a password security policy (the value shown is illustrative):

ALTER SYSTEM SET PASSWORD_SECURITY_POLICY='MIN_LENGTH=8:MIN_NUMERIC_CHARS=1:MIN_SPECIAL_CHARS=1';

The parameters for password security are:

- MIN_LENGTH: Minimum length for passwords.
- MAX_LENGTH: Maximum length for passwords (128 characters at maximum).
- MIN_LOWER_CASE: Minimum number of lower-case characters.
- MIN_UPPER_CASE: Minimum number of upper-case characters.
- MIN_NUMERIC_CHARS: Minimum number of numeric characters.
- MIN_SPECIAL_CHARS: Minimum number of special characters (all UTF-8 non-numerical characters that don't have any lower/uppercase spelling).
- REUSABLE_AFTER_CHANGES: Number of password changes after which an old password may be reused.
- REUSABLE_AFTER_DAYS: Number of days after which an old password may be reused.
- MAX_FAILED_LOGIN_ATTEMPTS: Maximum number of failed login attempts, after which the user will be locked out.

Note the maximum number of failed login attempts (MAX_FAILED_LOGIN_ATTEMPTS), which helps to fend off brute-force attacks. The actual number of failed attempts since the last successful one is displayed in the system tables EXA_USER_USERS and EXA_DBA_USERS. If the limit is reached, a warning is issued. After a database restart, the number of failed login attempts is reset to 0 for all users.

Password Expiry Policy

The system parameter PASSWORD_EXPIRY_POLICY defines when a user password expires and how much time there is to change it. You can find the policy as a string value in the system table EXA_PARAMETERS and change it through ALTER SYSTEM. Its value is either OFF (passwords never expire) or consists of two parameters separated by a colon. Here is an example of a password expiry policy (the value shown is illustrative):

ALTER SYSTEM SET PASSWORD_EXPIRY_POLICY='EXPIRY_DAYS=180:GRACE_DAYS=7';

The parameters for password expiry are:

- EXPIRY_DAYS: Number of days after which a password expires.
- GRACE_DAYS: Number of days within which the user must change the password before being locked out.

After the password expires, the user has a grace period (GRACE_DAYS) to log in and change it. Users must change their passwords before being allowed to execute any SQL command or query. If the password is not changed within the grace period, an administrator can unlock the account by setting a new password.

- The SYS user password never expires.
- Already expired passwords keep that status even if you change the expiry policy.

It is recommended to let a (temporary) password expire instantly by using the EXPIRE clause of the ALTER USER command, to ensure that it will be changed by the user. You can overwrite the system-wide password expiry policy for specific users through the ALTER USER statement, as shown in the following example:

ALTER SYSTEM SET PASSWORD_EXPIRY_POLICY='EXPIRY_DAYS=180:GRACE_DAYS=7';
ALTER USER u1 SET PASSWORD_EXPIRY_POLICY='OFF';
ALTER USER u2 SET PASSWORD_EXPIRY_POLICY='EXPIRY_DAYS=180:GRACE_DAYS=30';

If the value is not set (i.e., it equals NULL), the system-wide setting (EXA_PARAMETERS) is used. In the example above, that setting is overwritten: for u1 the expiry is explicitly deactivated, while for u2 the grace period is extended to 30 days.

If you want to lock certain users out of the system temporarily, you can revoke the CREATE SESSION privilege. This does not work if you have granted that privilege to the role PUBLIC (or another role of the user) instead of to the users directly.

Roles

Roles facilitate the grouping of users and simplify rights management. You can use the CREATE ROLE statement to create roles and assign multiple roles to the same user with the GRANT SQL statement. If you are giving similar privileges to many users, you can create a new role with those privileges and assign that role to the users. A hierarchical structure of privileges is also possible by assigning roles to roles. Roles cannot be disabled; if you want to reverse the assignment of a role, you can withdraw it by using the REVOKE SQL statement.

There are two predefined roles:

- PUBLIC: Every user receives this role automatically. It simplifies granting or withdrawing privileges to or from all users of the database. However, this should only be done if you are sure that it is safe to grant the respective rights and that the shared data should be publicly accessible. The PUBLIC role cannot be deleted.
- DBA: This role is intended for the database administrator and has all possible privileges. It should only be assigned to very few users because it provides full access to the database. The DBA role cannot be deleted.
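To show the DCL statements above in context, here is a hedged sketch that creates a user and a role from Python. It assumes the pyexasol driver is installed (pip install pyexasol); the DSN, credentials, and object names are placeholders.

```python
# Hedged sketch: basic Exasol user/role administration from Python.
# DSN, credentials, and object names are placeholders.
import pyexasol

conn = pyexasol.connect(dsn="exasol-host:8563", user="sys", password="********")

conn.execute('CREATE USER report_user IDENTIFIED BY "Str0ng!Passw0rd"')
conn.execute("CREATE ROLE reporting")
conn.execute("GRANT CREATE SESSION TO reporting")  # members of the role may log in
conn.execute("GRANT reporting TO report_user")     # assign the role to the user

conn.close()
```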
This year, news of ransomware attacks has been coming in nonstop, like dispatches from a battlefield. Every day, researchers find new strains of ransomware and discover new and unconventional ways criminals use it to steal money directly from consumers and businesses. And as soon as security experts make some progress, the crooks come up with new ransomware approaches and techniques.

Recently, another sophisticated sample of ransomware was discovered. The malware is dubbed Satana ("Satan"), a name that might imply Russian-speaking origins. The Trojan does two things: it encrypts files and corrupts the Windows Master Boot Record (MBR), thus blocking the Windows boot process. We have already discussed Trojans that mess with the MBR; the notorious Petya ransomware is one such malware. In some ways Satana behaves similarly, for example by injecting its own code into the MBR. However, whereas Petya encrypts the Master File Table (MFT), Satana encrypts the MBR. And while Petya relied on the help of a tagalong Trojan called Mischa to encrypt PC files, Satana manages both tasks on its own.

For those who aren't familiar with the inner workings of computers, we'll try to shed some more light. The MBR is a part of the hard drive. It contains information on the file system used by different disk partitions, as well as which partition the operating system is stored on. If the MBR becomes corrupted, or gets encrypted, the computer loses access to a critical piece of information: which partition contains the operating system. If the computer can't find the operating system, it can't boot.

The malefactors behind ransomware like Satana took advantage of this arrangement and enhanced their cryptolocker with bootlocker capabilities. The attackers swap out the MBR, replacing it with the code of the ransom note, then encrypt the original MBR and move it somewhere else. The ransomware demands about 0.5 bitcoins (approximately $340) to decrypt the MBR and provide the key to decrypt the affected files. Once the ransom is paid, Satana's creators say, they will restore access to the operating system and make things look just as they did before. At least, that's what they say.

Once it's inside the system, Satana scans all drives and network shares, looking for .bak, .doc, .jpg, .jpe, .txt, .tex, .dbf, .db, .xls, .cry, .xml, .vsd, .pdf, .csv, .bmp, .tif, .1cd, .tax, .gif, .gbr, .png, .mdb, .mdf, .sdf, .dwg, .dxf, .dgn, .stl, .gho, .v2i, .3ds, .ma, .ppt, .acc, .vpd, .odt, .ods, .rar, .zip, .7z, .cpp, .pas, and .asm files, and starts encrypting them. It also adds an e-mail address and three underscore symbols to the beginning of each file name (for example, test.jpg would become [email protected]___test.jpg). The e-mail addresses serve as contact information for the victims, who are supposed to write to the address to get payment instructions and then retrieve the decryption key. So far, researchers have seen six e-mail addresses used in this campaign.

The good news is that it is possible to partially bypass the lock: with certain skills, the MBR can be fixed. Experts at The Windows Club blog produced detailed instructions on how to fix the MBR using the OS restore feature in Windows. However, the method is intended for experienced users who are comfortable working with the command prompt and the bootrec.exe utility; an ordinary user is not likely to pull off this cumbersome procedure straight away and may not feel comfortable trying.
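For the curious, the core of such a repair (a sketch only; the exact sequence in The Windows Club guide may differ) is to boot the PC from Windows installation media, open the command prompt in the recovery environment, and run the standard bootrec.exe commands:

bootrec /FixMbr
bootrec /FixBoot
bootrec /RebuildBcd

The first command writes a fresh master boot record without touching the partition table. Keep in mind that this restores the ability to boot, but it does not decrypt any files.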
The bad news is that even with Windows successfully unlocked, the other half of the problem remains: the encrypted files. No cure is available for that part yet.

At this point, Satana seems to have just started its ransomware career: it's not widespread, and researchers have spotted some flaws in its code. However, there is a good chance that it will improve over time and evolve into a very serious threat.

Our primary advice to users for now is to practice constant vigilance. Our simple recommendations will help to lower your risk of infection and keep you away from trouble as much as possible:

1. Back up your data regularly. This is your insurance policy: in the case of a successful ransomware attack, you can simply reinstall the operating system and retrieve your files from the backup copies.

2. Don't visit suspicious websites and don't open suspicious e-mail attachments, even if you got the link or e-mail from a person you know. Be very cautious: little is known about Satana's propagation techniques.

3. Make sure to use a reliable antivirus solution. Kaspersky Internet Security detects Satana as Trojan-Ransom.Win32.Satan and prevents it from encrypting files or locking the system.

4. And, of course, follow our news! We'll always try to tell you about the newest threats as soon as possible, so malware doesn't catch you unawares.
If you are the kind of person that uses different browsers or different devices to access websites, you may have noticed that many sites can look quite different depending on which browser you are using. When your browser sends a request to a website, it identifies itself with the user agent string before it retrieves the content you've requested. The data in the user agent string helps the website deliver the content in a format that suits your browser. Even though relying on user agents alone is no longer enough to optimize a website, they are still an important source of information.

How can I find mine?

If you want to check the user agent you are broadcasting to websites you visit, have a look here: http://ip.it-mate.co.uk/. Along with the user agent identification, the browser sends information about the device and the network that the user is on, like the IP address. That information is responsible for the first three lines of information on that site, but the fourth line is the one showing your user agent string. The strings can be confusing if you try to read them yourself; for example, for historical reasons, almost every web browser identifies itself as a Mozilla browser.

Breakdown

Browsers are not the only software that uses a user agent. The same is true for email clients and other programs that display website content. Crawlers use a very different type of user agent string; on some sites a crawler's user agent grants access to parts that are restricted for regular users, while on other sites the same crawler may be blocked entirely. For this breakdown we will concentrate on user agents that can be expected to be web browsers operated by humans. For these browsers the format of the user agent string is:

Mozilla/[version] ([system and browser information]) [platform] ([platform details]) [extensions]

Since Opera, the last to adapt to this standard, also started using the Mozilla user agent string, every popular browser starts its user agent string with Mozilla and a version number, of which Mozilla/5.0 is the latest. The platform and platform details are where you can tell the difference between browsers. Some browser extensions are noted in the user agent string if they need certain content to be rendered in a specific way.

Is it a problem to give out this information?

To be honest, most of the time it's a bigger problem not to give it out. Of course, sites with malicious intentions can use this information to deliver specific exploits that have a bigger chance of working on your system, but there are more refined ways to do this that yield far more useful information. Also, it is not that hard to change your user agent string, so if you want to mislead the web server, that is not very hard either.

More information about the breakdown

Chrome User Agent explained breaks down your user agent string and explains all the elements. It is intended for Chrome, but it does explain big parts of other user agents as well.
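To make the breakdown above concrete, here is a minimal Python sketch that splits a user agent string into the documented parts. The regex and the sample string are illustrative only; real-world user agents are messier, which is why production systems usually rely on a dedicated parsing library rather than a hand-rolled pattern like this one:

import re

# Split a UA string into the parts described above:
# Mozilla/[version] ([system and browser info]) [remaining tokens]
UA_PATTERN = re.compile(
    r"^Mozilla/(?P<version>[\d.]+)\s+"   # Mozilla token and version
    r"\((?P<system_info>[^)]*)\)\s*"     # system and browser information
    r"(?P<rest>.*)$"                     # platform, details, extensions
)

sample = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
          "AppleWebKit/537.36 (KHTML, like Gecko) "
          "Chrome/104.0.0.0 Safari/537.36")

match = UA_PATTERN.match(sample)
if match:
    print("Mozilla version:", match.group("version"))
    print("System info:    ", match.group("system_info"))
    print("Platform etc.:  ", match.group("rest"))

Running this prints the Mozilla version (5.0), the system information block, and the trailing platform and extension tokens, mirroring the three parts of the format described above.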
This year has seen a rise in cyber attacks on government agencies, prompting official warnings. Notably, a recent joint statement from the FBI and CISA warned schools about probable attacks. And a data breach of federal court records further spotlighted the need for improved municipal data governance and security.

A perfect storm of cyber security risks makes municipal agencies particularly vulnerable to attack. In the first place, schools, courts, utility departments and other government entities store a treasure trove of sensitive information. At the same time, those agencies often use legacy systems and lack critical cyber security infrastructure and data governance resources.

To counter the threat of cyber attacks and maintain public trust, agencies must implement municipal data governance and cyber security best practices. In its detailed joint statement to schools, the FBI and CISA highlighted several key actions to take, including those listed below.

Review and Update Incident Response Plans

A detailed incident response plan forms a critical component of the organization's ability to minimize exposure and risk. It includes immediate steps to take to contain the spread of infection, eradicate malicious code and ensure business continuity. The response plan also outlines a specific plan for internal and external communications.

Strengthen Backup Policies

Regular, reliable data backups provide an essential component of cyber security readiness. CISA emphasizes the need for agencies to ensure that regular backups cover the data infrastructure of the entire organization. Backups should be tested regularly and encrypted, and the organization should maintain a copy of the backup offline.

Monitor Supply Chain Security

Time and time again, attackers have gained access to lucrative targets by first infiltrating a third party. For example, the California Department of Motor Vehicles suffered a ransomware attack last year that exposed thousands of driver and vehicle records. The attack began when hackers first breached a billing services company that contracted with the DMV.

Agencies should make it a policy to regularly review the security practices of third-party vendors. They should also monitor all external remote connections, including those with vendors, and address any suspicious activity.

Strengthen Authentication and Access Management Practices

Bad actors commonly infiltrate their targets by compromising credentials to gain access to the network. Consequently, a critical element of cyber security involves addressing identity and access management. Begin by strengthening password policies and adopting multi-factor authentication (MFA) where possible. Additionally, pay special attention to accounts with administrative privileges. Implementing risk-based authentication and zero trust policies helps to ensure that hackers cannot easily gain access to sensitive data and services.

Improve Patch Management

A critical, and too often overlooked, element of an effective cyber security strategy involves patch management. Implement a plan to ensure that all software, firmware and operating systems stay up to date. This includes installing and updating antivirus programs on all devices.

Improve Email Security

Because email remains a top attack vector, organizations must make email security a priority. This includes implementing high-quality email filters and properly configuring email services.
CISA also recommends disabling hyperlinks in incoming emails and adding an email banner to alert users to external emails.

Cyber Security Grant Program to Assist Agencies with Municipal Data Governance and Security

Responding to increased cyber threats against government agencies, the recent Bipartisan Infrastructure Law includes a cyber security grant program for state and local governments. The program provides $1 billion to help governments develop and implement cyber security strategies.

To qualify for the grant program, state and local governments must produce comprehensive cyber security plans. For agencies that lack sufficient security expertise, this requirement can prove daunting. The municipal data governance and security experts at Messaging Architects can help.
When it comes to fixing a root cause, there are two questions. The first is "Who is able to apply the fix?", and the second is "Who is responsible for applying the fix?"

This article explains what we get wrong about cybersecurity, how and why we get it wrong, and what it's going to take to fix it. Fair warning: it's going to be a long and bumpy ride. Those bumps include a healthy dose of counterintuitive assertions, cybersecurity heresy, and no mincing of words.

Last year, the Biden administration issued an executive order, and later additional guidance, aimed at improving the nation's cybersecurity. Agencies are now required to deploy Zero Trust architectures by 2024. As things go in government, so they tend to go in the private sector. Zero Trust is, therefore, the cybersecurity buzzword of the day.

We're pretty mission-driven here at Absio. We believe there is a real problem (or problems) in cybersecurity that reaches back to the first computers. We're eager to help organizations resolve the issues that arise when sensitive data created or processed by software doesn't enjoy full-lifecycle protection. A big part of the solution to today's seemingly endless cybersecurity breaches and privacy infringements is to reengineer applications to adequately, reliably, and automatically protect data, by default and by design.

A recent Associated Press poll indicates that most Americans think their personal information is vulnerable online. What's more, 71% of Americans believe that individuals' data privacy should be treated as a national security issue. In other words, the American people get it: data privacy and security are sadly lacking across the digital ecosystem, and consumers are suffering the consequences.

As digital solutions have become nearly ubiquitous, few terms have taken a more central place in our conversations than data privacy and data security. Consumers, businesses, and organizations of various types are tiring of the barrage of data breaches and process failures resulting in unauthorized distribution of their sensitive information.

Two different classes of identifiers must be tested to reliably authenticate things and people: assigned identifiers, such as names, addresses and social security numbers, and some number of physical characteristics. For example, driver's licenses list assigned identifiers (name, address and driver's license number) and physical characteristics (picture, age, height, eye and hair color and digitized fingerprints). Authentication requires examining both the license and the person to verify the match. Identical things are distinguished by unique assigned identifiers, such as a serial number. For especially hazardous or valuable things, we supplement authentication by checking provenance: proof of origin and proof that tampering hasn't occurred.

In every field of engineering, there is a grace period when the engineers doing the heroic work of making a complex and highly valuable new technology work can escape liability for poor performance, failures, or damages caused by what they build. That grace erodes as the technology becomes commonplace. Eventually, usually through a combination of litigation, legislation, regulation, and evolving insurance requirements, liability and responsibility for failure start being pinned to the engineers who designed and built the failed system.

Our current concept of cybersecurity is to defend against attacks and remedy failure by erecting more and better defenses.
That's a fundamental mistake in thinking, and one that guarantees failure. Why? Because it's mathematically impossible for a defensive strategy to fully succeed, as explained in the previous installment of this article series. Another, even more fundamental, mistake in thinking is that cyberattackers are the cause of our woes. They aren't. They're the effect.

This article is the second in a series on the physicality of data. Cybersecurity failures have been trending sharply upwards in number and severity for the past 25 years. The target of every cyberattack is data, i.e., digitized information that is created, processed, stored and distributed by computers. Cyberattackers seek to steal, corrupt, impede or destroy data. Users, software, hardware and networks aren't the target; they're vectors (pathways) to the target. To protect data, the current strategy, "defense in depth," seeks to shut off every possible vector to data by erecting layered defenses. Bad news: that's mathematically impossible.
I wrote this article to analyze the problem of COVID-19 contagion and its potential evolutions and mutations in relation to a vector common to many diseases: insects. Specialized research in the field of epidemiological diffusion has for decades highlighted the ability of mosquitoes and related insects to act as a widespread propagation vehicle for viruses and bacteria. Although the material is expressed very technically, I have tried to represent this reality from different angles. There is also clear evidence of "evolutions" in the viral families related to the famous coronavirus, which can herald epidemics of an even more devastating scale if they are not taken seriously.

Let's start by understanding that framing this as the single claim "COVID-19 is transmitted by mosquitoes" is very reductive. We must instead ask ourselves: can mosquitoes transmit this class of virus? Are there evolutions? How can we deal with the problem? Where can we start from?

Coronaviridae, along with Arteriviridae and Roniviridae, belong to the order Nidovirales. Viruses belonging to these families are large positive-strand RNA viruses and are known to infect mammals, birds, fish and arthropods. Entry into a host cell is usually mediated by an interaction between the virus spike glycoprotein and a cellular receptor. After entry, the virus disassembles and a replication/transcription complex forms on double-membraned vesicles. New subgenomic RNA is produced by a mechanism known as discontinuous transcription. Coronavirus replication requires the production of negative-strand RNA, from which positive-strand RNA is produced. Viral proteins are produced from the positive-strand subgenomic RNAs and from the positive-strand full-length RNA. The two largest open reading frames, ORF1a and ORF1a/b, are translated from the full-length RNA. These open reading frames (ORFs) encode the polyproteins pp1a and pp1ab, which are cleaved by self-encoded proteases. The proteins encoded in ORF1a and ORF1a/b function as the replicase, making subgenomic RNAs and new copies of the genomic RNA. Production of the pp1ab polyprotein requires the translating ribosome to change reading frame at the frameshift signal that bridges ORF1a and ORF1a/b. As with most viral frameshift signals, frameshifting at the coronavirus signal leads to expression of an RNA-dependent RNA polymerase (RdRP), a protein essential for viral replication. The proteins upstream of the frameshift signal include the predicted proteases and other uncharacterized proteins. We have previously suggested that the ratio of the pp1a and pp1ab proteins might affect the regulation and production of genomic and subgenomic RNA.

The SARS coronavirus frameshift signal has a seven-nucleotide 'slippery sequence' and a stimulatory pseudoknot separated by a spacer region. During programmed -1 ribosomal frameshifting (-1 PRF), the tRNAs positioned on the slippery site uncouple from the mRNA and reconnect in the new reading frame. The second stem of the stimulatory pseudoknot is formed by the distal 3' sequence base-pairing with residues in the loop region of the first stem loop. Unlike other frameshift-stimulating pseudoknots, the SARS pseudoknot contains an additional internal stem loop [8,9,10]. The function of this structure, called stem 3, is unknown. We have shown that alterations to the SARS coronavirus frameshift signal affect frameshifting efficiency [9,11].
Reduction in frameshifting efficiency is expected to result in decreased expression of the frameshift proteins, including the RdRP. Some mutations that reduced frameshifting were associated with a several-fold reduction in the amount of genomic RNA.

The order Nidovirales

The order Nidovirales includes positive-sense single-stranded RNA (ssRNA+) viruses of three families: Arteriviridae (12.7-15.7-kb genomes; "small-sized nidoviruses"), Coronaviridae and Roniviridae (26.3-31.7 kb; the last two families are jointly referred to as "large-sized nidoviruses"). All other known ssRNA+ viruses have genome sizes below 20 kb. Recently, two closely related viruses, Cavally virus (CAVV) and Nam Dinh virus (NDiV), were discovered by two independent groups of researchers in Côte d'Ivoire in 2004 and in Vietnam in 2002, respectively [26, 27]. CAVV was isolated from various mosquito species belonging to the genera Culex, Aedes, Anopheles and Uranotaenia. It was most frequently found in Culex species, especially Culex nebulosus. Except for Culex quinquefasciatus, which circulates worldwide, the other mosquito species are endemic to Africa. NDiV was isolated from Culex vishnui, which is endemic to Asia, and Culex tritaeniorhynchus, which circulates in Asia and Africa, and there are indications that it may infect more mosquito species (Nga, unpublished data). Analysis of abundance patterns of 39 CAVV isolates in different habitat types along an anthropogenic disturbance gradient has indicated an increase in virus prevalence from natural to modified habitat types. A significantly higher prevalence was found especially in human settlements. Analysis of habitat-specific virus diversity and ancestral state reconstruction demonstrated an origin of CAVV in a pristine rainforest with subsequent spread into agriculture and human settlements. Notably, it was shown for the first time that virus diversity decreased and prevalence increased during the process of emergence from a pristine rainforest habitat into surrounding areas of less host biodiversity due to anthropogenic modification.

Both viruses were propagated in Aedes albopictus cells and characterized using different techniques. A number of common properties place CAVV and NDiV in the order Nidovirales. These properties include (i) the genome organization with multiple open reading frames (ORFs), (ii) the predicted proteomes (Fig. 1), (iii) the production of enveloped, spherical virions, and (iv) the synthesis of genome-length and subgenome-length viral RNAs in infected cells [6, 7]. In particular, the two viruses were found to encode key molecular markers characteristic of all nidoviruses: a 3C-like main protease (3CLpro, also known as Mpro) flanked by two transmembrane (tM) domains encoded in replicase ORF1a, as well as an RNA-dependent RNA polymerase (RdRp) and a combination of a Zn-binding module (Zm) fused with a superfamily 1 helicase (HEL1) encoded in ORF1b. As in other nidovirus genomes, ORFs 1a and 1b were found to overlap by a few nucleotides in both CAVV and NDiV. The ORF1a/1b overlap region includes a putative -1 ribosomal frameshift site (RFS) that is expected to direct the translation of ORF1b by a fraction of the ribosomes that start translation at the ORF1a initiation codon. Thus, a frameshift just upstream of the ORF1a termination codon mediates the production of a C-terminally extended polyprotein jointly encoded by ORF1a and ORF1b.
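To make the -1 frameshift mechanism concrete, here is a minimal Python sketch. The sequence and coordinates are a made-up toy, not real genome positions; it only illustrates how slipping back a single nucleotide at a slippery sequence changes every downstream codon and lets the ribosome read past a frame-0 stop codon, as happens at the ORF1a/ORF1b junction:

def codons(rna, start):
    # Return successive codons of `rna` read from position `start`.
    return [rna[i:i + 3] for i in range(start, len(rna) - 2, 3)]

# Toy message: a GGAUUUU slippery sequence sits at positions 9-15 and a
# frame-0 stop codon (UAA) lies at the end, mimicking the ORF1a terminator.
rna = "AUGAAACCCGGAUUUUCGGCAUGCUAA"

print("frame 0 :", codons(rna, 0))   # ends in the UAA stop codon
print("after -1:", codons(rna, 14))  # slip back one base: new codons, no stop

Run as-is, the first line prints codons ending in the UAA stop, while the second prints an entirely different downstream codon series with no stop, which is why a fraction of ribosomes produce the C-terminally extended pp1ab instead of terminating with pp1a.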
Combined, these markers form the characteristic nidovirus constellation: tM-3CLpro-tM_RFS_RdRp_Zm-HEL1 (Fig. 1) [21, 25]. Likewise, virion proteins are encoded in ORFs that are located downstream of ORF1b and expressed from a set of subgenomic mRNAs. No similarities were found between the (putative) structural proteins of CAVV and NDiV and those of other nidoviruses [26, 27]. The most distinctive molecular characteristic of CAVV and NDiV, however, is the ~20-kb genome size, which is intermediate between the size ranges of small-sized and large-sized nidovirus genomes. Consequently, each of the two viruses has been proposed to prototype a new nidovirus family [26, 27].

In this study, we compared the genomes of CAVV (GenBank accession number HM746600) and NDiV (GenBank accession number DQ458789) to assess their relationship and use this insight for taxonomic classification of these viruses. To date, only very limited biological information is available for CAVV and NDiV (see above), and in general, biological properties may be affected profoundly by a few changes in the genome. In view of these considerations, and in line with the accepted taxonomic approach to viruses of the family Coronaviridae, comparative sequence analysis was considered the most reliable basis for classification. The overall similarity between the CAVV and NDiV genomes was found to be strikingly high: nearly identical sizes (20,187 and 20,192 nt, respectively) and conservation of ORFs with sequence identities ranging from 87.8 to 96.1% at the amino acid level and from 88.3 to 93.7% at the nucleotide level (Table 1). Given this high similarity, prior assignments of domains and genetic signals were cross-checked to produce a unified description. There was complete agreement between the two studies [26, 27] on the mapping of all nidovirus-wide conserved domains in CAVV and NDiV, as well as on the identification of GGAUUUU as a plausible slippery sequence in the RFS (see above). Additionally, our analysis showed that the NDiV-based assignment of the 3'-to-5' exoribonuclease (ExoN) and 2'-O-methyltransferase (OMT), two replicative domains characteristic of large-sized nidoviruses, and the N7-methyltransferase (NMT) in ORF1b extends to CAVV. Likewise, CAVV may lack a uridylate-specific endonuclease (NendoU), as has previously been observed for NDiV.

The synthesis of subgenomic RNAs from which ORFs 2a to 4 are predicted to be expressed appears to be controlled by transcription-regulating sequences (TRSs) [30-32] identified upstream of ORF2a/2b, ORF3a and ORF4 (collectively designated as body TRSs). Other putative TRSs were identified downstream of the leader region located at the 5'-end of the viral genome [26, 27]. Unique among nidoviruses, NDiV and CAVV may use different leader TRSs during the synthesis of different subgenomic RNAs, although further analysis is required to clarify the basis for some discrepancies between the TRS assignments in NDiV and CAVV. Also, it remains to be shown why the high sequence conservation of the virion proteins of the two viruses (Table 1) was not manifested in the morphology observed upon EM analysis of virus particles [26, 27]. In this respect, it may be relevant that Zirkel et al. noticed two types of particles in CAVV-infected cells, one of which carried club-shaped surface projections compatible with viral glycoproteins. This latter type of particle was also observed in infected cell culture supernatant.
Ultimately, the origin of the particles of both types, and their relationship to the particles isolated from the medium of NDiV-infected C6/36 cells by Nga et al., should be revealed by future research efforts.

Furthermore, we evaluated the phylogenetic position of CAVV and NDiV in relation to other nidoviruses. We conducted a phylogenetic analysis as previously described. The study indicates that CAVV and NDiV consistently, albeit very distantly, cluster with viruses of the family Roniviridae, the only other known nidoviruses infecting invertebrates (Fig. 2). Quantitatively, this Bayesian posterior probability phylogeny illustrates that CAVV and NDiV form a deeply rooted lineage in the nidovirus tree, with an evolutionary divergence from other nidoviruses comparable to that separating viruses of the families Coronaviridae and Roniviridae (Fig. 2). Together, these characteristics of CAVV and NDiV (insect host, intermediate genome size, deeply rooted phylogenetic lineage) provide a compelling basis for the creation of a new nidovirus family. We propose to name this new family Mesoniviridae, where meso is derived from the Greek word "mesos" (in English "middle" or "in the middle") and refers to a key distinctive characteristic of these viruses, namely their intermediate-sized genomes. The second component of the acronym, ni, refers to nidoviruses, as has been done previously for roniviruses and bafiniviruses.

Next, we sought to establish species demarcation criteria to decide whether CAVV and NDiV prototype separate species or belong to a single species. Commonly, this question cannot be answered reliably on the basis of only two full genome sequences and otherwise very limited biological data. To solve this dilemma, we exploited information available for other nidoviruses in our analysis. In order to evaluate the genetic similarity between CAVV and NDiV in the context of sequence divergence of lineages representing previously established nidovirus species, we applied a state-of-the-art framework for genetics-based classification. This recently introduced classification approach has been shown to recover and refine the taxonomy of picornaviruses, and it was also used to revise the taxonomy of coronaviruses extensively (Lauber & Gorbalenya, in preparation). In addition to CAVV and NDiV, a representative set of 152 large-sized nidoviruses was included in the analysis. Two sets of proteins were used: the first included proteins conserved in all nidoviruses (3CLpro, RdRp, HEL1) (dataset D1), while the second set additionally included ExoN and OMT, which are conserved in large-sized nidoviruses and CAVV/NDiV (dataset D2). For both datasets a concatenated multiple amino acid alignment was produced, which formed the basis for compiling pairwise evolutionary distances (PEDs) between all pairs of viruses (Fig. 3ab). It was found that the PED separating CAVV and NDiV is within the range of intra-species virus divergence in the families Coronaviridae and Roniviridae for both datasets (Fig. 3cd). Specifically, CAVV and NDiV show a distance (0.016 and 0.029 for D1 and D2, respectively) that is below the genetic divergence of members of several established nidovirus species (maximum of 0.032 and 0.37 for D1 and D2, respectively).
For both datasets, these viruses include gill-associated virus and yellow head virus (species Gill-associated virus, family Roniviridae) and the coronaviruses feline coronavirus, transmissible gastroenteritis virus and porcine respiratory coronavirus (species Alphacoronavirus 1), IBV (species Avian coronavirus), murine hepatitis virus (species Murine coronavirus), and Rousettus bat coronavirus HKU9 (species Rousettus bat coronavirus HKU9). For the dataset comprising the three nidovirus-wide conserved proteins (Fig. 3ac), Miniopterus bat coronavirus 1 also showed a maximum genetic divergence exceeding that of the CAVV-NDiV pair. Together, these observations show that CAVV and NDiV belong to the same species, representing a single genus in the family. We propose to name this genus Alphamesonivirus and the species Alphamesonivirus 1, thereby following a naming convention recently applied to the subfamily Coronavirinae, which is expected to facilitate the accommodation of future expansions of the family. A taxonomic proposal for family, genus and species recognition has been available online at the ICTV website (http://talk.ictvonline.org/files/proposals/taxonomy_proposals_invertebrate1/m/default.aspx) since August 2011. It has been approved by the chairs of the ICTV Arteriviridae, Coronaviridae and Roniviridae Study Groups and the Executive Committee of the ICTV, and will be considered again at the next EC-ICTV meeting, to be held in Leuven, Belgium, in July 2012.

The recognition of CAVV and NDiV as a single virus species can be contrasted with the detection of these viruses in many mosquito host species and their spread across different continents (Africa and Asia, respectively) [26, 27]. The underlying mechanisms of this broad dispersal are unknown but might include the crossing of the host species barrier rather than virus-host cospeciation. Further research, including the characterization of the biological properties of CAVV and NDiV and the extension of surveillance studies to other regions of the world, is needed to understand the ecology, host tropism and medical and/or economic relevance of mesoniviruses.

Zoonosis (zoo-e-no-sis) is an infectious disease that may be transmitted from animals (wild and domestic) to humans or from humans to animals. The word zoonosis is derived from the Greek zoon (animal, pronounced zoo-on) and nosos (disease). Of the 1,415 microbial diseases affecting humans, 61% are zoonotic (Taylor et al., 2001), and among emerging infectious diseases, 75% are zoonotic, with wildlife being one of the major sources of infection (Daszak et al., 2001). A new virus has emerged almost every year over the last two decades (Woolhouse and Gowtage-Sequeria, 2005). Of the 534 zoonotic viruses (belonging to 8 families) identified, 120 cause human illnesses with or without the involvement of intermediate hosts/vectors. In the past 15 years, many zoonotic viral infections have been emerging or re-emerging in nature (Wilke and Haas, 1999), and haemorrhagic fever-causing viruses are transmitted by insect vectors (arboviruses, e.g., yellow fever virus; Khan et al., 1988), by rodents (hantaviruses; Peters and Khan, 2002) and by direct contact (filoviruses; Payling, 1996). Thus, they pose a great challenge to both veterinary and public health professionals. It is essential to investigate the complex interactions between pathogens, hosts, vectors and the environment to curtail these infections.
This review focuses on describing the important zoonotic viral infections, especially the recently emerging and re-emerging diseases, and their causes, transmission, clinical manifestations, distribution and preventive measures, to keep knowledge of zoonoses up to date.

Zoonotic viruses are transmitted to humans either directly or indirectly. Direct transmission involves contact between infected and susceptible individuals (orf), bites (rabies) and handling of the affected animal's tissues or materials (orf). Indirect transmission involves transmission through the bite of a hematophagous (blood-sucking) arthropod after replication in the reservoir animal host (Japanese encephalitis, yellow fever). Most viral zoonoses require blood-sucking arthropods for their transmission to humans. Among them, mosquitoes (equine encephalitis complex) are the most common, followed by ticks (Powassan virus), sand flies (vesicular stomatitis) and midges (bluetongue). The arthropod vector becomes infected when it feeds on the blood of a viraemic animal. In most cases, the virus replicates in the arthropod tissues and reaches the salivary glands. The arthropod then transmits the virus to a new susceptible host when it injects infective salivary fluid while taking a blood meal. The extrinsic incubation period (the time between ingestion and transmission of the virus) is usually 8 to 12 days. This period depends on the virus, the environment and the vector species involved (Hubalek and Halouzka, 1999). Arthropod-borne viruses generally remain undetected until humans encroach on the natural enzootic focus or until the virus escapes the primary cycle via a secondary vector or vertebrate host. Wild birds are important to public health as they carry various zoonotic pathogens and either act as reservoir hosts or help disseminate infected arthropod vectors (Reed et al., 2003). In addition, bird migration provides a mechanism for the establishment of new endemic foci of disease at great distances from where an infection was acquired (avian influenza). There has been a change in transmission patterns, especially in the occurrence and incidence of diseases, due to the broadening of host ranges (monkeypox and Nipah viruses), high mutation rates (avian influenza, FMD) and anthropogenic environmental changes, viz., ecological imbalance and changes in agricultural practices (Wilke and Haas, 1999).

Role of Wildlife in Zoonosis

The significance of wildlife as an animal reservoir for zoonotic viruses can be traced far back through two important ancient diseases, rabies and West Nile virus, which together represent a large spectrum of transmission modes (Marr and Calisher, 2003). Of the total emerging diseases, 75% are considered zoonotic, with wildlife as a major reservoir. Recent emerging viral diseases that have moved into new species, such as AIDS, SARS and avian influenza, show strong evidence of a wildlife origin, driven by human encroachment and changed international trade and travel patterns. Commonly, viral agents move from wild animal species to humans in one of two ways: through rare actual transmission events after which the virus is maintained in humans and has the potential for man-to-man transmission (HIV, influenza A, Ebola and SARS), or in a direct/indirect manner through animal bites and arthropod vectors (rabies, Nipah, West Nile virus and hantavirus) (Bengis et al., 2004).
Many zoonoses with a wildlife origin are spread through insect vectors (Rift Valley fever, equine encephalitis and Japanese encephalitis), whereas rabies is spread by animal bite and hantaviruses by contact with rodent excreta. The clinical manifestation in humans depends on the transmission pattern of the agent causing the disease. Direct contact and vector bites lead to the formation of rashes and ulcers, whereas intake of contaminated meat/water leads to digestive tract problems, and diseases transmitted by inhalation of infected foci of dust cause pneumonia-like illness (Kruse et al., 2004). Wildlife is centrally involved in the epidemiology of these diseases, which is influenced by other factors such as changes in agro-climatic conditions, host abundance, movement of pathogens/vectors/animal hosts including migratory birds, and anthropogenic factors. For example, the increase in transmission and subsequent spread to humans of Sin Nombre hantavirus, which causes hantavirus pulmonary syndrome (HPS), is due to increased heavy rainfall and host abundance in the USA. The increased emergence of some wildlife diseases also results in a high potential for the emergence of human pathogens, as in the case of the spread of West Nile virus in the USA. The emergence of human and wildlife pathogens thus presents a potential threat to human health, animal welfare and the conservation of both domesticated and wild species.

Emerging and Reemerging Zoonoses

The complex interaction between environment/ecology, social factors, health care, human demographics and behavior influences the emergence and re-emergence of zoonotic viral diseases. The periodic discovery of new zoonoses suggests that the known viruses are only a fraction of the total number that exist in nature. RNA viruses are capable of adapting rapidly to changing environmental conditions and are among the most prominent emerging pathogens (Ludwig et al., 2003). Mutations are more common in RNA viruses (influenza) than in DNA viruses (pox). The common mutations are point mutations (insertions/deletions), drift (minor) and shift (major). In addition to these, the movement of populations, birds, vectors and pathogens, as well as trade, contributes to the global spread of emerging infectious diseases (influenza, severe acute respiratory syndrome - SARS). Other factors, viz., human migration, changes in land use patterns, mining (disturbance of ecosystems), coastal land degradation, wetland modification, construction of buildings, habitat fragmentation, deforestation, expansion of an agent's host range and human intervention in wildlife resources, such as hiking, camping and hunting, also influence the acquisition of zoonotic infections from wildlife (Daszak et al., 2001; Bengis et al., 2004; Patz et al., 2004). Since the cessation of vaccination against smallpox in the 1980s, the emergence of several genetically related orthopoxviruses has been reported throughout the world, i.e., monkeypox (Nalca et al., 2005), buffalopox (Singh et al., 2007) and bovine vaccinia (BV) infections (Fernandes et al., 2009). Despite the successful eradication of some viral diseases (smallpox and, almost, polio in humans and rinderpest in cattle) through intensive research and dedicated, coordinated efforts, modern medicine has failed to control many infectious diseases resulting from emerging and re-emerging viruses (Table 4). Some infectious agents already known to be pathogenic have gained increasing importance in recent decades due to changes in disease patterns.
Several previously unknown infectious agents with a high pathogenic potential have also been identified (Manojkumar and Mrudula, 2006). Several infectious viral agents (of DNA and RNA viral families) have emerged as zoonotic agents (Table 4). They are associated with conditions ranging from flu-like signs (Alkhumra virus infection, influenza A) and respiratory disease (SARS) to pox lesions, mostly localized on hairless parts of the body, namely the udder, teats, ears and tail (in buffaloes) and the fingers and hands (in humans), due to buffalopox and orf virus infections (the latter contracted from affected goats), as well as hepatitis (hepatitis E virus), haemorrhagic fevers (Ebola, Marburg and hantavirus infections) and encephalitis (henipavirus complex). Treatment or prophylaxis is not available for many of these infections, although some antiviral compounds under trial have been found to be effective.

Table 3: Viral zoonotic infections causing rashes and arthralgia

Table 4: Emerging and re-emerging zoonotic infections

- Anonymous, 1993. Inactivated Japanese encephalitis virus vaccine: recommendations of the advisory committee on immunization practices (ACIP). MMWR Recomm. Rep., 42: 1-15.
- Anonymous, 1999. Human rabies prevention-United States: recommendations of the advisory committee on immunization practices (ACIP). MMWR Recomm. Rep., 48: 1-21.
- Areechokchai, D., C. Jiraphongsa, Y. Laosiritaworn, W. Hanshaoworakul and M.O. Reilly, Centers for Disease Control and Prevention (CDC), 2006. Investigation of avian influenza (H5N1) outbreak in humans-Thailand, 2004. Morb. Mortal. Wkly. Rep., 55: 3-6.
- Barclay, A.J. and D.J. Paton, 2000. Hendra (Equine morbillivirus). Vet. J., 160: 169-176.
- Bauer, K., 1997. Foot-and-mouth disease as zoonosis. Arch. Virol. Suppl., 13: 95-97.
- Baxby, D., D.G. Ashton, D. Jones, L.R. Thomsett and E.M. Denham, 1979. Cowpox virus infection in unusual hosts. Vet. Rec., 104: 175.
- Bengis, R.G., F.A. Leighton, J.R. Fischer, M. Artois, T. Morner and C.M. Tate, 2004. The role of wildlife in emerging and re-emerging zoonoses. Rev. Sci. Tech., 23: 497-511.
- Berrios, E.P., 2007. Foot-and-mouth disease in human beings: a human case in Chile. Rev. Chilena Infectol., 24: 160-163.
- CDC, 2001. Outbreak of Powassan encephalitis-Maine and Vermont, 1999-2001. MMWR Morb. Mortal. Wkly. Rep., 50: 761-764.
- Capner, P.M. and A.S. Bryden, 1998. Newcastle Disease. In: Zoonoses, Palmer, S.R., L. Soulsey and D.I.H. Simpson (Eds.). Oxford University Press, Bath Press, Oxford, pp: 323-326.
- Carey, D.E., 1971. Chikungunya and dengue: a case of mistaken identity. J. Hist. Med. Allied Sci., 26: 243-262.
- Centers for Disease Control and Prevention (CDC), 1997. Human monkeypox-Kasai Oriental, Democratic Republic of Congo, February 1996-October 1997. Morb. Mortal. Wkly. Rep., 46: 1168-1171.
- Chalmers, R.M., D.R. Thomas and R.L. Salmon, 2005. Borna disease virus and the evidence for human pathogenicity: a systematic review. Q. J. Med., 98: 255-274.
- Chan, R.C., D.J. Penney, D. Little, I.D. Carter, J.R. Roberts and W.D. Rawlinson, 2001. Hepatitis and death following vaccination with 17D-204 yellow fever vaccine. Lancet, 358: 121-122.
- Charrel, R.N., A.M. Zaki, S. Faqbo and X. de Lamballerie, 2006. Alkhumra hemorrhagic fever virus is an emerging tick-borne flavivirus. J. Infect., 52: 463-464.
- Chua, K.B., 2003. Nipah virus outbreak in Malaysia. J. Clin. Virol., 26: 265-275.
- Daszak, P., A.A. Cunningham and A.D. Hyatt, 2001. Anthropogenic environmental change and the emergence of infectious diseases in wildlife. Acta Trop., 78: 103-116.
- Dumpis, U., D. Crook and J. Oksi, 1999. Tick-borne encephalitis. Clin. Infect. Dis., 28: 882-890.
- Fabiansen, C., G. Kronborg, S. Thybo and J.O. Nielsen, 2008. Ebola haemorrhagic fever. Ugeskr. Laeger., 170: 3949-3952.
- Fauquet, C.M., M.A. Mayo, J. Maniloff, U. Desselberger and L.A. Ball, 2005. Virus Taxonomy: Eighth Report of the International Committee on Taxonomy of Viruses. Academic Press, San Diego, CA., USA., ISBN-13: 9780080575483, Pages: 1162.
- Feldmann, H. and H.D. Klenk, 1996. Marburg and Ebola viruses. Adv. Virus Res., 47: 1-52.
- Fernandes, A.T.S., C.E. Travassos, J.M. Ferreira, J.S. Abrahao and E.S. Rocha et al., 2009. Natural human infections with Vaccinia virus during bovine vaccinia outbreaks. J. Clin. Virol., 44: 308-313.
- Fields, B.N. and K. Hawkins, 1967. Human infection with the virus of vesicular stomatitis during an epizootic. N. Engl. J. Med., 277: 989-994.
- Fields, B.N., D.M. Knipe, P.M. Howley, R.M. Chanock, J.L. Melnick and T.P. Monath, 1995. Pox Viruses. In: Fields Virology. 3rd Edn., Lippincott-Raven Publishers, Philadelphia, pp: 2673-2702.
- Georges, A.J., S. Baize, E.M. Leroy and M.C.G. Courbot, 1998. Ebola virus: what the practitioners need to know. Med. Trop. (Mars), 58: 177-186.
- Giulio, D.B.D. and P.B. Eckburg, 2004. Human monkeypox: an emerging zoonosis. Lancet Infect. Dis., 4: 15-25.
- Goens, S.D. and M.L. Perdue, 2004. Hepatitis E viruses in humans and animals. Anim. Health Res. Rev., 5: 145-156.
- Gould, E.A. and S. Higgs, 2009. Impact of climate change and other factors on emerging arbovirus diseases. Trans. R. Soc. Trop. Med. Hyg., 103: 109-121.
- Gubler, D.J., 1981. Transmission of Ross River virus by Aedes polynesiensis and Aedes aegypti. Am. J. Trop. Med. Hyg., 30: 1303-1306.
- Gubler, D.J., 1998. Dengue and dengue hemorrhagic fever. Clin. Microbiol. Rev., 11: 480-496.
- Hayden, F.G., W.A. Howard, L. Palkonyay and M.P. Kieny, 2009. Report of the 5th meeting on the evaluation of pandemic influenza prototype vaccines in clinical trials: World Health Organization, Geneva, Switzerland, 12-13 February 2009. Vaccine, 27: 4079-4089.
- Hoch, S.P.F., J.K. Khan, S. Rehman, S. Mirza, M. Khurshid and J.B. McCormick, 1995. Crimean-Congo hemorrhagic fever treated with oral ribavirin. Lancet, 346: 472-475.
- Hooper, J.W. and D. Li, 2001. Vaccines against hantaviruses. Curr. Top. Microbiol. Immunol., 256: 171-191.
- Hubalek, Z. and J. Halouzka, 1999. West Nile fever: a reemerging mosquito-borne viral disease in Europe. Emerg. Infect. Dis., 5: 643-650.
- Huggins, J., Z.X. Zhang and M. Bray, 1999. Antiviral drug therapy of filovirus infection: S-adenosylhomocysteine hydrolase inhibitors inhibit Ebola virus in vitro and in a lethal mouse model. J. Infect. Dis., 179: 240-247.
- Khan, A.S., A. Sanchez and A.K. Pfieger, 1988. Filoviral hemorrhagic fevers. Br. Med. Bull., 54: 675-692.
- Kinney, R.M. and C.Y. Huang, 2001. Development of new vaccines against dengue fever and Japanese encephalitis. Intervirology, 44: 176-196.
- Kiwanuka, N., E.J. Sanders, E.B. Rwaguma, J. Kawamata and F.P. Ssengooba et al., 1999. O'nyong-nyong fever in south-central Uganda, 1996-1997: clinical features and validation of a clinical case definition for surveillance purposes. Clin. Infect. Dis., 29: 1243-1250.
- Kolhapure, R.M., R.P. Deolankar, C.D. Tupe, C.G. Raut and A. Basu et al., 1997. Investigation of buffalopox outbreaks in Maharashtra state during 1992-1996. Ind. J. Med. Res., 106: 441-446.
- Krebs, J.W., J.S. Smith, C.E. Rupprecht and J.E. Childs, 2000.
Mammalian reservoirs and epidemiology of rabies diagnosed in human beings in the United States, 1981-1998. Ann. N. Y. Acad. Sci., 916: 345-353.
- Kruse, H., A.M. Kirkemo and K. Handeland, 2004. Wildlife as a source of zoonotic infections. Emerg. Infect. Dis., 10: 2067-2072.
- Lacy, M.D. and R.A. Smego, 1996. Viral hemorrhagic fevers. Adv. Pediatr. Infect. Dis., 12: 21-53.
- Ludwig, B., F.B. Kraus, R. Allwinn, H.W. Doerr and W. Preiser, 2003. Viral zoonoses: a threat under control. Intervirology, 46: 71-78.
- Mackenzie, J.S., A.K. Broom, R.A. Hall, C.A. Johansen and M.D. Lindsay et al., 1998. Arboviruses in the Australian region, 1990-1998. Commun. Dis. Intell., 22: 93-100.
- Madani, T.A., 2005. Alkhumra virus infection: a new viral hemorrhagic fever in Saudi Arabia. J. Infect., 51: 91-97.
- Maiztegui, J.I., K.T.Jr. McKee, J.G.B. Oro, L.H. Harrison and P.H. Gibbs et al., 1998. Protective efficacy of a live attenuated vaccine against Argentine hemorrhagic fever. J. Infect. Dis., 177: 277-283.
- Maki, A.Jr., A. Hinsberg, P. Percheson and D.G. Marshall, 1988. Orf: contagious pustular dermatitis. CMAJ, 139: 971-972.
- Manojkumar and Mrudula, 2006. Emerging viral diseases of zoonotic importance: review. Int. J. Trop. Med., 1: 162-166.
- Marr, J.S. and C.H. Calisher, 2003. Alexander the Great and West Nile virus encephalitis. Emerg. Infect. Dis., 9: 1599-1603.
- Martin, M., T.F. Tsai, B. Cropp, G.J. Chang and D.A. Holmes et al., 2001. Fever and multisystem organ failure associated with 17D-204 yellow fever vaccination: a report of four cases. Lancet, 358: 98-104.
- Marx, P.A., C. Apetrei and E. Drucker, 2004. AIDS as a zoonosis: confusion over the origin of the virus and origin of the epidemics. J. Med. Primatol., 33: 220-226.
- McCaughey, C. and C.A. Hart, 2000. Hantaviruses. J. Med. Microbiol., 49: 587-599.
- McJunkin, J.E., E.C.L. de Reyes, J.E. Irazuzta, M.J. Caceres and R.R. Khan et al., 2001. La Crosse encephalitis in children. N. Engl. J. Med., 344: 801-807.
- Miranda, M.E., T.G. Ksiazek, T.J. Retuya, A.S. Khan and A. Sanchez et al., 1999. Epidemiology of Ebola (subtype Reston) virus in the Philippines, 1996. J. Infect. Dis., 179: 115-119.
- Mumford, E.L., B.J. McCluskey, J.L.T. Dargatz, B.J. Schmitt and M.D. Salman, 1998. Public veterinary medicine, public health: serologic evaluation of vesicular stomatitis virus exposure in horses and cattle in 1996. J. Am. Vet. Med. Assoc., 213: 1265-1269.
- Murphy, F.A., 1998. Emerging zoonoses. Emerg. Infect. Dis., 4: 429-435.
- Murphy, F.A., E.P.J. Gibbs, M.C. Horzinek and M.J. Studdert, 1999. Veterinary Virology. 3rd Edn., Academic Press, San Diego, CA., USA., ISBN-13: 9780080552033, pp: 423-425.
- Nalca, A., A.W. Rimoin, S. Bavari and C.A. Whitehouse, 2005. Reemergence of monkeypox: prevalence, diagnostics and countermeasures. Clin. Infect. Dis., 41: 1765-1771.
- Nunes, M.R., L.C. Martins, S.G. Rodrigues, J.O. Chiang, S.A. Rdo, A.P. da Rosa and P.F. Vasconcelos, 2005. Oropouche virus isolation, southeast Brazil. Emerg. Infect. Dis., 11: 1610-1613.
- Ostrowski, S.R., M.J. Leslie, T. Parrott, S. Abelt and P.E. Piercy, 1998. B-virus from pet macaque monkeys: an emerging threat in the United States. Emerg. Infect. Dis., 4: 117-121.
- Parker, S., A. Nuara, R.M. Buller and D.A. Schultz, 2007. Human monkeypox: an emerging zoonotic disease. Future Microbiol., 2: 17-34.
- Pattnaik, P., 2006. Kyasanur forest disease: an epidemiological view in India. Rev. Med. Virol., 16: 151-165.
- Patz, J.A., P. Daszak, G.M. Tabor, A.A. Aguirre and M. Pearl et al., 2004.
Unhealthy landscapes: policy recommendations on land use change and infectious disease emergence. Environ. Health Perspect., 112: 1092-1098.
- Payling, K.J., 1996. Ebola fever. Prof. Nurse, 11: 798-799.
- Peiris, J.S. and L.L. Poon, 2008. Detection of SARS coronavirus in humans and animals by conventional and quantitative (real time) reverse transcription polymerase chain reactions. Methods Mol. Biol., 454: 61-72.
- Perez, J.G.R., A.V. Vorndam and G.G. Clark, 2001. The dengue and dengue-hemorrhagic fever epidemic in Puerto Rico, 1994-1995. Am. J. Trop. Med. Hyg., 64: 67-74.
- Peters, C.J. and A.S. Khan, 2002. Hantavirus pulmonary syndrome: the new American haemorrhagic fever. Clin. Infect. Dis., 34: 1224-1231.
- Peters, C.J. and J.W.L. Duc, 1999. An introduction to Ebola: the virus and the disease. J. Infect. Dis., 179: 9-16.
- Petersen, L.R. and J.T. Roehrig, 2001. West Nile virus: a reemerging global pathogen. Emerg. Infect. Dis., 7: 611-614.
- Reed, K.D., J.K. Meece, J.S. Henkel and S.K. Shukla, 2003. Birds, migration and emerging zoonoses: West Nile virus, Lyme disease, influenza A and enteropathogens. Clin. Med. Res., 1: 5-12.
- Richt, J., I. Pfeuffer, M. Christ, K. Frese, K. Bechter and S. Herzog, 1997. Borna disease virus infection in animals and humans. Emerg. Infect. Dis., 3: 343-352.
- Rott, R., S. Herzog, B. Fleischer, A. Winokur, J. Amsterdam, W. Dyson and H. Koprowski, 1985. Detection of serum antibodies to Borna disease virus in patients with psychiatric disorders. Science, 228: 755-756.
- Saijo, M., Y. Ami, Y. Suzaki, N. Nagata and N. Iwata et al., 2006. LC16m8, a highly attenuated vaccinia virus vaccine lacking expression of the membrane protein B5R, protects monkeys from monkeypox. J. Virol., 80: 5179-5188.
- Schuffenecker, I., I. Iteman, A. Michault, S. Murri and L. Frangeul et al., 2006. Genome microevolution of chikungunya viruses causing the Indian Ocean outbreak. PLoS Med., 3: e263.
- Singh, R.K., M. Hosamani, V. Balamurugan, C.C. Satheesh and K.R. Shingal et al., 2006. An outbreak of buffalopox in buffalo (Bubalus bubalis) dairy herds at Aurangabad, India. Rev. Sci. Tech., 25: 981-987.
- Singh, R.K., M. Hosamani, V. Balamurugan, V. Bhanuprakash, T.J. Rasool and M.P. Yadav, 2007. Buffalopox: an emerging and re-emerging zoonosis. Anim. Health Res. Rev., 8: 105-114.
- Swayne, D.E. and D.J. King, 2003. Zoonosis update: avian influenza and Newcastle disease. J. Am. Vet. Med. Assoc., 222: 1534-1540.
- Switzer, W.M., V. Bhullar, V. Shanmugam, M.E. Cong and B. Parekh et al., 2004. Frequent simian foamy virus infections in persons occupationally exposed to nonhuman primates. J. Virol., 78: 2780-2789.
- Tai, D.Y., 2006. SARS: how to manage future outbreaks. Ann. Acad. Med. Singapore, 35: 368-373.
- Takeuchi, Y. and R. Weiss, 2000. Xenotransplantation: reappraising the risk of retroviral zoonosis. Curr. Opin. Immunol., 12: 504-507.
- Taylor, L.H., S.M. Latham and M.E.J. Woolhouse, 2001. Risk factors for human disease emergence. Philos. Trans. R. Soc. London B: Biol. Sci., 356: 983-989.
- Tesh, R.B., D.M. Watts, K.L. Russell, C. Damodaran and C. Calampa et al., 1999. Mayaro virus disease: an emerging mosquito-borne zoonosis in tropical South America. Clin. Infect. Dis., 28: 67-73.
- Thiel, H.J. and M. Konig, 1999. Caliciviruses: an overview. Vet. Microbiol., 69: 55-62.
- Wilke, I.G. and L. Haas, 1999. Emergence of new viral zoonoses. Dtsch. Tierarztl. Wochenschr., 106: 332-338.
- Will, R.G., 2003. Acquired prion disease: iatrogenic CJD, variant CJD, kuru. Br. Med. Bull., 66: 255-265.
- Willems, W.R., G. Kaluza, C.B. Boschek, H. Bauer, H. Hager, H.J. Schultz and H. Feistner, 1979. Semliki Forest virus: cause of a fatal case of human encephalitis. Science, 203: 1127-1129.
- Winkler, W.G. and D.C. Blenden, 1995. Transmission and Control of Viral Zoonoses in the Laboratory. In: Laboratory Safety: Principles and Practices, Fleming, D.O., J.H. Richardson, J.L. Tulis and D. Vesley (Eds.). 2nd Edn., American Society for Microbiology, Washington, DC.
- Winter, A. and J. Charmley, 1999. The Sheep Keeper's Veterinary Handbook. Crowood Press Ltd., Marlborough, UK., ISBN: 1-86126-235-3.
- Wolfe, N.D., W.M. Switzer, J.K. Carr, V.B. Bhullar and V. Shanmugam et al., 2004. Naturally acquired simian retrovirus infections in central African hunters. Lancet, 363: 932-937.
- Woolhouse, M.E. and S. Gowtage-Sequeria, 2005. Host range and emerging and reemerging pathogens. Emerg. Infect. Dis., 11: 1842-1847.

1. Nga P.T., Parquet M.C., Lauber C., Parida M., Nabeshima T., Yu F., Thuy N.T., Inoue S., Ito T., Okamoto K., et al. Discovery of the first insect nidovirus, a missing evolutionary link in the emergence of the largest RNA virus genomes. PLoS Pathog. 2011;7:e1002215. doi: 10.1371/journal.ppat.1002215.
3. de Haan C.A., Rottier P.J. Hosting the severe acute respiratory syndrome coronavirus: specific cell factors required for infection. Cell Microbiol. 2006;8:1211-1218. doi: 10.1111/j.1462-5822.2006.00744.x.
4. Sawicki S.G., Sawicki D.L., Siddell S.G. A contemporary view of coronavirus transcription. J. Virol. 2007;81:20-29. doi: 10.1128/JVI.01358-06.
6. Plant E.P. Ribosomal frameshift signals in viral genomes. In: Garcia M.L., Romanowski V., editors. Viral Genomes: Molecular Structure, Diversity, Gene Expression Mechanisms and Host-Virus Interactions. InTech; Rijeka, Croatia: 2012. pp. 91-122.
7. Plant E.P., Rakauskaite R., Taylor D.R., Dinman J.D. Achieving a golden mean: mechanisms by which coronaviruses ensure synthesis of the correct stoichiometric ratios of viral proteins. J. Virol. 2010;84:4330-4340. doi: 10.1128/JVI.02480-09.
8. Baranov P.V., Henderson C.M., Anderson C.B., Gesteland R.F., Atkins J.F., Howard M.T. Programmed ribosomal frameshifting in decoding the SARS-CoV genome. Virology. 2005;332:498-510. doi: 10.1016/j.virol.2004.11.038.
9. Plant E.P., Perez-Alvarado G.C., Jacobs J.L., Mukhopadhyay B., Hennig M., Dinman J.D. A three-stemmed mRNA pseudoknot in the SARS coronavirus frameshift signal. PLoS Biol. 2005;3:e172. doi: 10.1371/journal.pbio.0030172.
10. Su M.C., Chang C.T., Chu C.H., Tsai C.H., Chang K.Y. An atypical RNA pseudoknot stimulator and an upstream attenuation signal for -1 ribosomal frameshifting of SARS coronavirus. Nucleic Acids Res. 2005;33:4265-4275. doi: 10.1093/nar/gki731.
11. Plant E.P., Dinman J.D. Comparative study of the effects of heptameric slippery site composition on -1 frameshifting among different eukaryotic systems. RNA. 2006;12:666-673. doi: 10.1261/rna.2225206.
21. de Groot RJ, Cowley JA, Enjuanes L, Faaberg KS, Perlman S, Rottier PJM, Snijder EJ, Ziebuhr J, Gorbalenya AE (2012) Order Nidovirales.
In: King AMQ, Adams MJ, Carstens EB, Lefkowitz EJ (eds) Virus taxonomy. Ninth report of the international com- mittee on taxonomy of viruses. Elsevier Academic Press, Amsterdam, pp 785–795 22. Faaberg KS, Balasuriya UB, Brinton MA, Gorbalenya AE, Leung FC-C, Nauwynck H, Snijder EJ, Stadejek T, Yang H, Yoo D (2012) Family Arteriviridae. In: King AMQ, Adams MJ, Carstens EB, Lefkowitz EJ (eds) Virus taxonomy. Ninth report of the international committee on taxonomy of viruses. Elsevier Aca- demic Press, Amsterdam, pp 796–805 23. de Groot RJ, Baker SC, Baric R, Enjuanes L, Gorbalenya AE, Holmes KV, Perlman S, Poon LL, Rottier PJM, Talbot PJ, Woo PCY, Ziebuhr J (2012) Family Coronaviridae. In: King AMQ, Adams MJ, Carstens EB, Lefkowitz EJ (eds) Virus taxonomy. Ninth report of the international committee on taxonomy of viruses. Elsevier Academic Press, Amsterdam, pp 806–828 24. Cowley JA, Walker PJ, Flegel TW, Lightner DV, Bonami JR, Snijder EJ, de Groot RJ (2012) Family Roniviridae. In: King AMQ, Adams MJ, Carstens EB, Lefkowitz EJ (eds) Virus tax- onomy. Ninth report of the international committee on taxonomy of viruses. Elsevier Academic Press, Amsterdam, pp 829–834 25. Gorbalenya AE, Enjuanes L, Ziebuhr J, Snijder EJ (2006) Nid- ovirales: evolving the largest RNA virus genome. Virus Res 117:17–37 26. Nga PT, Parquet MD, Lauber C, Parida M, Nabeshima T, Yu FX, Thuy NT, Inoue S, Ito T, Okamoto K, Ichinose A, Snijder EJ, Morita K, Gorbalenya AE (2011) Discovery of the first insect nidovirus, a missing evolutionary link in the emergence of the largest RNA virus genomes. PLoS Pathog 7:e1002215 27. Zirkel F, Kurth A, Quan PL, Briese T, Ellerbrok H, Pauli G, Leendertz FH, Lipkin WI, Ziebuhr J, Drosten C, Junglen S (2011) An insect nidovirus emerging from a primary tropical rainforest. MBio 2:e00077-11 28. Junglen S, Kurth A, Kuehl H, Quan PL, Ellerbrok H, Pauli G, Nitsche A, Nunn C, Rich SM, Lipkin WI, Briese T, Leendertz FH (2009) Examining landscape factors influencing relative distri- bution of mosquito genera and frequency of virus infection. EcoHealth 6:239–249 29. Chen Y, Cai H, Pan J, Xiang N, Tien P, Ahola T, Guo DY (2009) Functional screen reveals SARS coronavirus nonstructural pro- tein nsp14 as a novel cap N7 methyltransferase. Proc Natl Acad Sci USA 106:3484–3489 30. Enjuanes L, Almazan F, Sola I, Zuniga S (2006) Biochemical aspects of coronavirus replication and virus-host interaction. Ann Rev Microbiol 60:211–230 31. Pasternak AO, Spaan WJM, Snijder EJ (2006) Nidovirus tran- scription: how to make sense…? J Gen Virol 87:1403–1421 32. Sawicki SG, Sawicki DL, Siddell SG (2007) A contemporary view of coronavirus transcription. J Virol 81:20–29 33. Cowley JA, Walker PJ (2002) The complete genome sequence of gill-associated virus of Penaeus monodon prawns indicates a gene organization unique among nidoviruses. Arch Virol 147:1977–1987 34. Schutze H, Ulferts R, Schelle B, Bayer S, Granzow H, Hoffmann B, Mettenleiter TC, Ziebuhr J (2006) Characterization of White bream virus reveals a novel genetic cluster of nidoviruses. J Virol 80:11598–11609 35. Lauber C, Gorbalenya AE (2012) Partitioning the genetic diver- sity of a virus family: approach and evaluation through a case study of Picornaviruses. J Virol 86:3890–3904 36. Lauber C, Gorbalenya AE (2012) Toward genetics-based virus taxonomy: comparative analysis of a genetics-based classification and the taxonomy of picornaviruses. J Virol 86:3905–3915 37. 
Edgar RC (2004) MUSCLE: a multiple sequence alignment method with reduced time and space complexity. BMC Bioin- formatics 5:113 38. Gorbalenya AE, Lieutaud P, Harris MR, Coutard B, Canard B, Kleywegt GJ, Kravchenko AA, Samborskiy DV, Sidorov IA, Leontovich AM, Jones TA (2010) Practical application of bio- informatics by the multidisciplinary VIZIER consortium. Anti- viral Res 87:95–110 39. Drummond AJ, Rambaut A (2007) BEAST: Bayesian evolu- tionary analysis by sampling trees. BMC Evol Biol 7:214
Whenever agencies like the National Institute of Standards and Technology, or NIST, publish new recommendations and guidelines, it's worth taking note. In a world where software comes in novel forms and fulfills diverse applications, relying on these industry standards can give computer systems users and government contractors effective game plans for making the most of their tools.

NIST's September 2017 Special Publication 800-190, the Application Container Security Guide, is no less worthy of attention than other special publications. Here's what it entails.

What Is an Application Container?

Containerization is a virtualization practice that takes place at the operating-system level. Rather than emulating whole machines, the host OS runs isolated instances, known as containers, each of which bundles a target application with the user-space libraries and settings it needs to run. To a user examining a container from the outside, it looks like one OS running inside another. To a program running inside the container, the virtualized environment behaves like a completely independent computer, even though every container on a host shares that host's kernel.

Why Use Containers?

Containerization strategies provide their users with many advantages. For instance, a software developer might use containers to package applications and distribute them in stable forms that have a higher likelihood of running as expected. Containerization is also an effective means of running software in sandboxed environments that limit its access to system resources, and it facilitates testing software on different platforms and OSes without requiring individual machines for each target environment. Since a container's entire filesystem can be distributed as a file, commonly known as an image, containers are highly portable and shareable.

Potential Container Concerns: Understanding the Security Implications

Containerization isn't perfect. Here are some of the risks that you might face at various stages of the process.

An image containing an OS and application is a snapshot in time. A user who runs it later may be exposed to vulnerabilities that weren't known at the time of the image's creation. Images produced from badly configured OSes pose similar hazards, and those obtained from untrusted sources may harbor malware or spyware.

Registries that store and distribute images, such as commercial and self-hosted download services, must be managed with care. If they serve stale images, they may expose users to significant risks. As with other remote transactions, downloads from registries are vulnerable to attacks such as man-in-the-middle interception, and some may disclose sensitive organizational data to bad actors.

Dangers for Orchestrators

Orchestrators, or the administrators who run containerized applications on their servers, may allow their app users to compromise other containers by forgetting to limit individual containers' access rights. They might also become overly reliant on authentication directory tools that place servers at risk when stale accounts are managed improperly.

Many containers depend on virtualized overlay networks that encrypt their traffic, which can make it harder for external security tools to inspect what's happening on the wire. Ill-advised configurations might also place orchestrators at risk of unauthorized hosts gaining access to containers. This becomes even more dangerous when publicly accessible containers run on the same hosts as containers holding sensitive private data.
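Many of these risks come down to launch configuration. As a concrete illustration, here is a minimal sketch using the Docker SDK for Python (the docker package) that starts a container with deliberately restricted rights; the image tag, user ID, and resource limits are illustrative assumptions, not values prescribed by the guide.

```python
# A minimal least-privilege container launch using the Docker SDK for Python
# (pip install docker). The image tag and limits below are illustrative.
import docker

client = docker.from_env()

output = client.containers.run(
    "alpine:3.18",        # better: pin by digest, e.g. alpine@sha256:<digest>
    "echo hello-from-a-sandbox",
    user="1000:1000",     # don't run as root inside the container
    cap_drop=["ALL"],     # drop every Linux capability the app doesn't need
    network_mode="none",  # no network access unless the workload requires it
    read_only=True,       # an immutable root filesystem resists tampering
    mem_limit="64m",      # cap resources so one container can't starve others
    pids_limit=64,
    remove=True,          # clean up the container when it exits
)
print(output.decode())
```

Defaults are the danger here: a plain docker run grants root inside the container, full networking, and a writable filesystem, so restricting each of these explicitly is what keeps one container from becoming a path into its neighbors.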
Container and Host Hazards

Some containers expose their hosts to vulnerabilities in the software they carry. Since most runtimes permit container-to-container access, malicious actors or rogue containers can use one virtualized environment to corrupt another.

The host OSes that run containers also need to be secure. Their many potential vulnerabilities might place multiple containers at risk via

- Vulnerable system components,
- File system tampering,
- Poorly managed user access rights, or
- Shared OS kernels.

Smart Strategies for Safe Application Container Usage

NIST recommends a variety of practices to help combat the vulnerabilities associated with containers, such as

- Enforcing compliance with secure image configuration and lifecycle management practices,
- Maintaining trusted image standards and credentials,
- Rigorous oversight of registry connections, contents, and accounts,
- Stringent orchestrator access control, inter-container communication, workload sensitivity management, and node trust practices,
- Container runtime vulnerability monitoring, network access limitation, app vulnerability monitoring and management, and runtime access controls,
- Environmental segregation of activities like development, testing, and production, and
- Hardware-based host cybersecurity measures, OS access controls, limited per-container file system permissions, and containerized workload isolation.

A short audit script, like the sketch at the end of this article, offers one way to start checking several of these runtime controls automatically.

Complying With Special Publication 800-190

Containerization offers unique enterprise advantages, but guarding against its dangers demands extensive oversight. Does your organization have the time, resources, and operational expertise to institute better practices? Working with a data specialist may be the easiest way to get compliant as efficiently as possible.
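As a starting point for that oversight, here is a minimal sketch, assuming the Docker SDK for Python and access to a local Docker daemon, that flags running containers violating a few of the per-container controls described above. The field names come from the Docker Engine inspect API; the specific checks are illustrative, not an official SP 800-190 checklist.

```python
# Flag running containers that violate a few basic per-container controls.
# Requires the Docker SDK for Python (pip install docker) and access to a
# local Docker daemon. The checks shown here are illustrative, not exhaustive.
import docker

client = docker.from_env()

for container in client.containers.list():
    cfg = container.attrs["Config"]
    host_cfg = container.attrs["HostConfig"]
    findings = []

    if host_cfg.get("Privileged"):
        findings.append("runs in privileged mode")
    if not cfg.get("User"):
        findings.append("runs as root (no User set)")
    if host_cfg.get("CapAdd"):
        findings.append(f"added capabilities: {host_cfg['CapAdd']}")
    if not host_cfg.get("ReadonlyRootfs"):
        findings.append("root filesystem is writable")
    if host_cfg.get("NetworkMode") == "host":
        findings.append("shares the host network namespace")

    if findings:
        print(f"{container.name} ({container.short_id}):")
        for finding in findings:
            print(f"  - {finding}")
```

Run on a schedule, a report like this provides a lightweight first pass at the continuous runtime monitoring the guide recommends; a full program would also need image scanning, registry auditing, and orchestrator-level controls.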