In Ecuador, sustainable shrimp for a sustainable future

"It's not just farm to fork. This is before the farm."

Ecuadorian shrimp photos via the Sustainable Shrimp Partnership

Pamela Nath has just gotten off the phone with a Manhattan chef when we connect on a snowy New York morning. The chef's fresh shrimp delivery is being delayed because of the weather. That delivery is one tiny part of the 1.5 million pounds of shrimp exported from Ecuador in 2019. Shrimp are sold by their count per pound; Ecuadorian shrimp average from 20-30 a pound to 50-60 a pound. "It's a lot of shrimp," Nath said.

Nath is the director of the Sustainable Shrimp Partnership (SSP), a sustainability initiative led by Ecuadorian shrimp producers. SSP's goal: the highest quality premium shrimp, produced to the highest social and environmental standards.

The ever-expanding global population, estimated by the UN to hit 9 billion by 2050, means a growing need for protein, including seafood—and shrimp. Around 3 million tons of farmed shrimp are produced worldwide annually. Shrimp are farmed via aquaculture, in which fresh- and saltwater populations are cultivated under controlled conditions. Aquaculture uses less land and fresh water than meat production, according to Nath, with a better feed conversion ratio and a higher rate of protein retention. That means more food and more protein with fewer resources, and a smaller environmental impact. The UN expects aquaculture to contribute more than half of global fish consumption by 2025.

Aquaculture methods can vary widely from one farm to the next, let alone across the globe. That is why, Nath said, one of SSP's founding tenets is a race to the top. She explains: farmed shrimp, a commodity market, has traditionally rewarded those with the lowest prices. Those low prices can also mean that best farming practices get sacrificed, which can mean a lower quality product as well as higher antibiotic use because of greater risk of disease.

Ecuador is the world's second largest shrimp producer, after India. "SSP was born because a group of Ecuadorian enterprises got together a few years ago and said, we see many regions are looking to lower shrimp prices," Nath said. Their concern was that lower prices would come at the cost of responsible practices. The group wanted to highlight that Ecuador was committed to producing shrimp with sustainable practices. SSP members are Aquaculture Stewardship Council certified, with a particular focus on zero antibiotic use, neutral water impact and full traceability.

"A growing race to the bottom in the shrimp industry is harmful to the shrimp, and to the environment," Nath said. "It also limits consumer choice and their ability to buy healthy and sustainable farmed shrimp."

Consumer awareness is critical, said Jose Antonio Camposano, president of Ecuador's Aquaculture Chamber. He works with SSP to educate consumers and retailers alike on why origin adds to the value of the end product. "Most consumers, especially in the U.S., don't really know where the shrimp they eat comes from," Camposano said in a phone interview. Distributors, supermarkets, importers and wholesalers too may not know, or want to know, the origin of the shrimp, especially if it may be associated with bad practices, or environmental or labor issues.

SSP criteria 1: zero antibiotics

In the 1990s, white spot disease decimated entire shrimp farms across Asia.
To combat and prevent disease, much of the global shrimp industry began putting antibiotics into the water the shrimp were farmed in. The Ecuadorian shrimp industry took a different tack. "We helped the animal develop its own resistance," Camposano said. Decades later, the resistance in Ecuadorian shrimp is natural, a result of the animals' own genetic capacity to resist and tolerate disease. Part of that is the shrimp feed, which is crucial to keeping the animals' immune systems healthy, he added.

Antibiotic levels in animal-based food production have long been a concern for researchers. Studies show that even small levels can lead to the development of antibiotic resistance in humans.

SSP criteria 2: traceability to demonstrate all value to the end consumer

For this, SSP turned to IBM. "We're working under the IBM Food Trust system," Camposano said, "to provide all the information to consumers so they can better understand how the shrimp was produced."

IBM Food Trust was created specifically for the food ecosystem. It's a blockchain solution different from any other blockchain product, IBM Food Trust Business Development Executive Vanessa Barbery said: "it's been created and tailor-made for the industry." Clients don't need knowledge of blockchain to use it, Barbery said. "We integrate into their data and their supply chain. For the client, it's really simple."

Each shrimp has an identifier, which is tracked on the blockchain, so anyone can follow the full journey of the life of the shrimp through the supply chain. "It stops at X processor, Y distributor, Z retailer," IBM Food Trust Global Sales Leader Luis Izquierdo said. "There's one simple version of the truth that can be followed on the blockchain." Sharing that information, that truth, can help drive trust for the brand. Which can then help drive sales.

"It's not just farm to fork," Izquierdo said. "This is before the farm. What feed goes into the shrimp? What's the shrimp larva information? There's lots of information that can be shared."

In their work with SSP, Barbery and Izquierdo see substantial interest from the farmers in the technology. "They need it to stand out from the competition," Barbery said. Because their product is premium, its prices may also be premium. The farmers may struggle with explaining the price: "it's because I don't use antibiotics, and I don't use children in my production line" is true, but doesn't necessarily add value on its own. Traceability provides that value. "Our shrimp has many certifications," Barbery said. "But instead of just saying that, we can share the data that validates that information," which includes care at every stage of the production cycle to avoid antibiotic use. That care extends to employees and the environment as well.

SSP criteria 3: neutral impact on water

Water used to produce the shrimp is the same quality when it goes out as when it came in. That means aquaculture farms must have effective waste management strategies in place.

What's next for SSP and other industries

"Since the launch of SSP, we've seen industry colleagues and countries announcing efforts to improve their practices," Camposano said. "That's very good. We want to make sure everyone is racing with us." SSP has had early success with early adopters, and is now working to educate the mainstream market that there's space for a new category of shrimp produced with the highest environmental and social standards. "We think everyone deserves a better product," Camposano said.
That might extend beyond shrimp in the not-so-distant future. Other Ecuadorian industries, such as banana, cocoa and coffee, are looking to SSP, he said, and asking how it's done.

"Let's say you're a pineapple producer," Barbery said. "You're rainforest certified. You put that badge on your product. As a consumer, how do I know what you're saying is true?" With traceability technology, the pineapple producer could share how they've planted 15,000 trees. That information validates what you're saying about your product in general, Barbery said. "That's our next goal," Barbery said. "How do we face the consumer with transparency, and show them why what you're saying is true."

Do you eat shrimp? What's your favorite shrimp recipe?

Barbery and Izquierdo both love ceviche. So does Camposano: "I eat a lot of shrimp. I love Ecuadorian ceviche." Nath describes SSP's 180-second challenge: "You put the shrimp in boiling water for 180 seconds. Then you take it out. You don't need anything on it." That's right: no sauce, no condiments. Camposano also loves the 180-second boiled shrimp. "It's salty but sweet. The crunch! The texture!" he said. "You can taste the difference with a shrimp that's been taken care of."
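The traceability idea at the heart of SSP's work with IBM Food Trust can be sketched in a few lines. The snippet below is a generic illustration of a hash-chained trace record, the basic mechanism behind blockchain ledgers of this kind; it is not IBM's actual API, and the lot identifier and stages are invented for the example.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a trace event together with the previous hash, chaining records."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical journey of one shrimp lot, from larva to retailer.
events = [
    {"stage": "hatchery",  "lot": "EC-001", "detail": "larva batch 42"},
    {"stage": "farm",      "lot": "EC-001", "detail": "certified feed, no antibiotics"},
    {"stage": "processor", "lot": "EC-001", "detail": "processed and frozen"},
    {"stage": "retailer",  "lot": "EC-001", "detail": "received in New York"},
]

prev = "GENESIS"
for event in events:
    prev = record_hash(event, prev)
    print(f"{event['stage']:>9}: {prev[:16]}...")

# Altering any earlier event changes every later hash, so the chain exposes
# tampering: the "one simple version of the truth" Izquierdo describes.
```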
What is 5G?

5G is the fifth generation of cellular technology, set to be deployed soon. What sets 5G apart from previous generations is data speed. Current LTE or 4G speeds top out at about 50 Mbps, while 3G topped out at about 144 Kbps. With 5G, users are expected to enjoy speeds as high as 20 Gbps, which is faster than many wired connections. 5G is also designed to provide higher capacity, which should put less strain on the network as more and more people stream high definition content to their mobile devices. Lastly, 5G improves latency, meaning less delay or lag when you're connecting to the network. Right now, latency with LTE hovers at around 40 milliseconds. With 5G, that latency drops to about 1 millisecond.

Right now, both AT&T and Verizon are working on their 5G networks and hope to begin deploying them in the next few years. T-Mobile unveiled its 5G network plans back in February of 2018 at Mobile World Congress, and it plans to build out 5G in 30 cities this year.

Why It's Important For Business

While some people say 5G is just a gimmick, many recognize it for the potential it offers, not only for consumers but for businesses as well. The faster speeds mean companies will be able to deploy new and better services, and their customers will have better experiences using those services thanks to more rapid data transfer. The ability to send large files quickly while on the go opens up new opportunities for companies to do business.

Since the 5G network is designed to have higher capacity, the system won't slow to a crawl under heavy loads. Imagine being in a connected car and having to rely on a steady stream of information, only to find that the network slowed down because of high usage. That's not just inconvenient; it can be dangerous in certain situations. With a more robust and reliable network, companies can serve their customers with applications and services that they couldn't on a slower, less reliable network.

Having a fast, reliable network like 5G will enable companies to hire workers in more remote locations or deploy workers to those locations. Imagine having all the benefits of an ultra-fast fiber connection, but on a phone or tethered laptop virtually anywhere. Imagine being able to stream live video without worrying about dropped frames or being booted off the network because of overcapacity. For a business that specializes in news gathering and reporting, this technology will be a game changer.

And speaking of games, companies will be able to develop more realistic games, especially VR, thanks to a fast network. VR also has implications for the medical, manufacturing, retail, and entertainment industries. Once 5G is deployed and running smoothly, companies will take advantage of the new technology by developing new apps and services that make our lives better.

A huge benefit to companies with the advent of 5G will be in communications. With more and more communications happening wirelessly, it's vital that companies have reliable networks to communicate with customers and other branches within the company. Thanks to 5G, companies will be able to video conference more reliably, and workers will be able to collaborate on data-heavy projects in real time.

Increases in Efficiency

The advent of faster wireless networks means that companies can become more efficient than before.
Imagine having a smart building that monitors electric consumption so that managers can make more informed decisions about how best to manage their energy use and save money. Also, consider banks or hospitals having to approve and process transactions quickly, or send highly detailed medical files to clinics or other hospitals to help save lives and improve care.

Once 5G is fully deployed, some companies can become leaner by allowing more of their workforce to work from home or while traveling. This not only makes the company more efficient, but it saves money in salaries and benefits. If you or your business is looking to increase your team's efficiency, consider finding telecommunications consulting services to help!

One industry likely to see an immediate benefit from faster 5G is the auto industry. With fast, reliable wireless networks fully deployed, cars will not only be connected to the internet but will be connected to one another too. The hope is that having automobiles that talk to each other will lessen the likelihood of a crash or minimize the damage if an accident does occur. The trucking industry, too, is likely to be affected. With a robust and fast network, self-driving trucks will become more and more ubiquitous, and hopefully safer, as human error will no longer lead to accidents.

The effects of 5G won't be fully felt for some time. The transition from current technology to the new one is going to take years, and it's going to be expensive. And, to be fair, not all companies are happy about what's coming down the road. Companies that provide wired DSL or internet through cable aren't pleased about an ultra-fast wireless network. It's an easy bet that once 5G is available around town, many people will opt to cut the cord and ditch their wired service provider for good.

Still, if you're the owner of a medium to large-sized company, the coming of 5G should be exciting, and you should start planning now on how best to take advantage of that technology.
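To put the headline numbers above in perspective, here is a quick back-of-the-envelope comparison of transfer times at the quoted rates. These are theoretical peaks; real-world throughput is far lower and varies with coverage and load.

```python
# Transfer time for a 2 GB file at the article's headline rates.
FILE_BITS = 2 * 8 * 10**9  # 2 GB expressed in bits

rates_bps = {
    "3G (144 Kbps)":    144 * 10**3,
    "4G/LTE (50 Mbps)":  50 * 10**6,
    "5G (20 Gbps)":      20 * 10**9,
}

for name, rate in rates_bps.items():
    seconds = FILE_BITS / rate
    print(f"{name:>17}: {seconds:,.1f} s")

# 3G: ~111,111 s (about 31 hours); LTE: 320 s; 5G: 0.8 s.
```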
Movement patterns of people you know contain 95% of the information needed to predict your location. Turning off your data tracking doesn't mean you're untraceable, a new study warns.

Data about our habits and movements are constantly collected via mobile phone apps, fitness trackers, credit card logs, websites visited, and other means. But even with tracking turned off, data collected from acquaintances and even strangers can predict your location.

"Switching off your location data is not going to entirely help," says Gourab Ghoshal, an associate professor of physics, mathematics, and computer science at the University of Rochester.

Ghoshal and colleagues applied techniques from information theory and network science to find out just how far-reaching a person's data might be. The researchers discovered that even if individual users turned off data tracking and didn't share their own information, their mobility patterns could still be predicted with surprising accuracy based on data collected from their acquaintances. "Worse," says Ghoshal, "almost as much latent information can be extracted from perfect strangers that the individual tends to co-locate with."

FRIENDS AND STRANGERS

The researchers analyzed four datasets: three location-based social network datasets composed of millions of check-ins on apps such as Brightkite, Facebook, and Foursquare, and one call-data record containing more than 22 million calls by nearly 36,000 anonymous users. They developed a "colocation" network to distinguish between the mobility patterns of two sets of people:

- People who are socially tied to an individual, such as family members, friends, or coworkers.
- People who are not socially tied to an individual, but who are at a location at a similar time as the individual. They might include people working in the same building but for different companies, parents whose children attend the same schools but who are unknown to each other, or people who shop at the same grocery store.

By applying information theory and measures of entropy—the degree of randomness or structure in a sequence of location visits—the researchers learned that the movement patterns of people who are socially tied to an individual contain up to 95% of the information needed to predict that individual's mobility patterns. Even more surprisingly, they found that strangers not socially tied to an individual could also provide significant information, predicting up to 85% of an individual's movement.

DATA TRACKING'S SLIPPERY SLOPE

The ability to predict the locations of individuals or groups can be beneficial in areas such as urban planning and pandemic control, where contact tracing based on mobility patterns is a key tool for stopping the spread of disease. In addition, many consumers appreciate the ability of data mining to offer tailored recommendations for restaurants, TV shows, and advertisements.

However, Ghoshal says, data mining is a slippery slope, especially because, as the research shows, individuals sharing data via mobile apps may be unwittingly providing information about others. "We're offering a cautionary tale that people should be aware of how far-reaching their data can be," he says. "This research has a lot of implications for surveillance and privacy issues, especially with the rise of authoritarian impulses. We can't just tell people to switch off their phones or go off the grid.
We need to have dialogues to put in place laws and guidelines that regulate how people collecting your data use it."

Additional coauthors of the paper are from the University of Exeter, the Federal University of Rio de Janeiro, Northeastern University, and the University of Vermont.
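The entropy measure mentioned above is easy to illustrate. The sketch below computes plain Shannon entropy over a sequence of location visits; the study itself uses more sophisticated (cross-)entropy estimators applied to a person's and their contacts' visit sequences, so treat this as a simplified illustration with invented location names.

```python
from collections import Counter
from math import log2

def location_entropy(visits: list[str]) -> float:
    """Shannon entropy (bits) of a sequence of visited locations.
    Low entropy = regular, predictable movement; high = closer to random."""
    counts = Counter(visits)
    n = len(visits)
    return -sum((c / n) * log2(c / n) for c in counts.values())

routine = ["home", "work", "home", "work", "gym", "home", "work"]
erratic = ["cafe", "park", "mall", "home", "gym", "work", "pier"]

print(f"routine: {location_entropy(routine):.2f} bits")  # ~1.45
print(f"erratic: {location_entropy(erratic):.2f} bits")  # ~2.81
```

The lower the entropy of your visit sequence, the more predictable you are; the study's point is that much of this predictability can be recovered from other people's sequences even if yours is never collected.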
In fiber-optic communications, wavelength-division multiplexing (WDM) is a technology that multiplexes a number of optical carrier signals onto a single optical fiber by using different wavelengths of laser light. This technique enables bidirectional communications over one strand of fiber, as well as multiplication of capacity.

A WDM system (Figure 1) uses a multiplexer at the transmitter to join the signals together, and a demultiplexer at the receiver to split them apart. With the right type of fiber it is possible to have a device that does both simultaneously and can function as an optical add-drop multiplexer. The concept was first published in 1978, and by 1980 WDM systems were being realized in the laboratory. As a system concept, WDM comes in two forms: coarse wavelength-division multiplexing (CWDM) and dense wavelength-division multiplexing (DWDM).

Figure 1: The WDM system

The CWDM System

In simple terms, CWDM equipment performs two functions: segregating the light to ensure only the desired combination of wavelengths is used, and multiplexing and demultiplexing the signal across a single fiber link. Typically, CWDM solutions provide 8 wavelengths, separated by 20nm, from 1470nm to 1610nm, enabling the transport of 8 client interfaces over the same fiber, as shown in Figure 2. What's more, CWDM can transport up to 16 channels (wavelengths) in the spectrum grid from 1270nm to 1610nm with 20nm channel spacing. Each channel can operate at 2.5, 4 or 10Gbit/s. CWDM cannot be amplified, as most of the channels are outside the operating window of the erbium doped fiber amplifier (EDFA) used in dense wavelength-division multiplexing (DWDM) systems. This results in a shorter overall system reach of approximately 100 kilometers. However, because of the broader channel spacing in CWDM, cheaper uncooled lasers can be used, giving it a cost advantage over DWDM systems.

Figure 2: The CWDM system

CWDM proves to be the initial entry point for many organizations due to its lower cost. Each CWDM wavelength typically supports up to 2.5Gbps and can be expanded to 10Gbps. This transfer rate is sufficient to support GbE, Fast Ethernet or 1/2/4/8/10G FC, STM-1/STM-4/STM-16/OC3/OC12/OC48, as well as other protocols. CWDM is the technology of choice for cost-efficiently transporting large amounts of data traffic in telecom or enterprise networks, and optical networking with CWDM has proven to be the most cost-efficient way of addressing this requirement.

In CWDM applications, a fiber pair (separate transmit and receive) is typically used to serve multiple users by assigning a specific wavelength to each subscriber. The process begins at the head end (HE), hub, or central office (CO), where individual signals at discrete wavelengths are multiplexed, or combined, onto one fiber for downstream transmission. The multiplexing function is accomplished by means of a passive CWDM multiplexer (Mux) module employing a sequence of wavelength-specific filters. The filters are connected in series to combine the various wavelengths onto a single fiber for transmission to the field. In the outside plant, a CWDM demultiplexer (Demux) module, essentially a mirror of the Mux, is employed to pull off each specific wavelength from the feeder fiber for distribution to individual FTTX applications.
CWDM is suitable for use in metropolitan applications, and is also used in cable television networks, where different wavelengths are used for the downstream and upstream signals. In these systems, the wavelengths used are often widely separated; for example, the downstream signal might be at 1310nm while the upstream signal is at 1550nm. CWDM can also be used in conjunction with a fiber switch and network interface device to combine multiple fiber lines from the switch over one fiber. CWDM is optimized with cost-conscious budgets in mind, with low-cost, low-power laser transmitters enabling deployments to closely match guaranteed revenue streams.

The DWDM System

DWDM stands for dense wavelength-division multiplexing. Here "dense" means the wavelength channels are very narrow and close to each other. DWDM uses the same transmission window but with denser channel spacing. Channel plans vary, but a typical system would use 40 channels at 100 GHz spacing or 80 channels at 50 GHz spacing.

DWDM works by combining and transmitting multiple signals simultaneously at different wavelengths on the same fiber, as shown in Figure 3. In effect, one fiber is transformed into multiple virtual fibers. So, if you were to multiplex eight OC-48 signals onto one fiber, you would increase the carrying capacity of that fiber from 2.5 Gb/s to 20 Gb/s. Currently, thanks to DWDM, single fibers have been able to transmit data at speeds up to 400Gb/s.

Figure 3: The DWDM system

A basic DWDM system contains five main components: a DWDM terminal multiplexer, an intermediate line repeater, an optical add-drop multiplexer (OADM), a DWDM terminal demultiplexer and an optical supervisory channel (OSC). A DWDM terminal multiplexer contains a wavelength-converting transponder for each data signal, an optical multiplexer and an optical amplifier (EDFA). An intermediate line repeater is placed approximately every 80–100 km to compensate for the loss of optical power as the signal travels along the fiber. An optical add-drop multiplexer is a remote amplification site that amplifies the multi-wavelength signal, which may have traversed up to 140 km or more before reaching the remote site. A DWDM terminal demultiplexer, consisting of an optical demultiplexer and one or more wavelength-converting transponders, separates the multi-wavelength optical signal back into individual data signals and outputs them on separate fibers for client-layer systems (such as SONET/SDH). An optical supervisory channel (OSC) is a data channel which uses an additional wavelength, usually outside the EDFA amplification band (at 1,510nm, 1,620nm, 1,310nm or another proprietary wavelength).

DWDM is designed for long-haul transmission, where wavelengths are packed tightly together and do not suffer the effects of dispersion and attenuation. When boosted by erbium doped fiber amplifiers (EDFAs)—a sort of performance enhancer for high-speed communications—these systems can work over thousands of kilometers. DWDM is widely used in the 1550nm band so as to leverage the capabilities of EDFAs, which are commonly used for the 1525nm–1565nm (C band) and 1570nm–1610nm (L band) ranges.

A key advantage of DWDM is its protocol and bit-rate independence. DWDM-based networks can transmit data in IP, ATM, SONET/SDH, and Ethernet, and handle bit rates between 100Mb/s and 2.5Gb/s. Therefore, DWDM-based networks can carry different types of traffic at different speeds over an optical channel.
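To make the two channel plans above concrete, the short sketch below generates both grids. The 193.1 THz anchor frequency follows the usual ITU convention for the DWDM grid and is not taken from this article; treat the snippet as an illustration rather than a vendor channel plan.

```python
C = 299_792_458  # speed of light in vacuum, m/s

# CWDM: the 1270-1610 nm grid in 20 nm steps (8 wavelengths typically deployed).
cwdm_nm = list(range(1270, 1611, 20))
print("CWDM channels (nm):", cwdm_nm)

# DWDM: 40 channels at 100 GHz spacing, centered on the 193.1 THz anchor.
dwdm_thz = [193.1 + 0.1 * (i - 20) for i in range(40)]
dwdm_nm = [C / (f * 1e12) * 1e9 for f in dwdm_thz]
print(f"DWDM span: {dwdm_nm[-1]:.2f}-{dwdm_nm[0]:.2f} nm "
      f"({dwdm_thz[0]:.1f}-{dwdm_thz[-1]:.1f} THz)")

# Adjacent DWDM channels sit only ~0.8 nm apart, versus CWDM's 20 nm -- hence
# "dense", and hence DWDM's need for cooled, wavelength-stable lasers.
```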
From a QoS standpoint, DWDM-based networks give operators a lower-cost way to respond quickly to customers' bandwidth demands and protocol changes.

WDM, as a multiplexing technology in the optical domain, can form an optical-layer network, the so-called all-optical network, which will be the most advanced level of optical communications. The future trend in optical communications is to build an optical network layer based on WDM and optical cross-connects (OXCs) to eliminate the bottleneck of optical-electrical conversion with a pure all-optical network. As the first and most important step toward all-optical networking, the application and practice of WDM is a major advantage in developing the all-optical network and pushing optical communications forward.
Businesses around the globe rely on their IT equipment to enable processes and manage data — and since electronics are sensitive to overheating, it's imperative to keep this IT equipment cool with proper airflow management. Data center design needs to take the production and movement of cold and warm air into account for an energy-efficient environment that enables the proper functioning of all relevant equipment.

Maintaining the Optimum Intake Temperature of the Equipment

Along with advances in technology, IT experts have learned more about how correct data center airflow management can protect servers and other sensitive equipment. The objective, of course, is to maintain a cool optimum intake temperature for the air drawn in and passing over IT equipment. It's also important for the expelled hot air to rise and be efficiently removed via the computer room air conditioning — or CRAC — unit's return plenums.

For example, a proven solution in data center airflow management is the installation of a raised floor. In this design, data center airflow can be better managed because cool air is distributed through perforated floor tiles. Because the floor is raised, less air is needed to provide the servers and other equipment with the cool intake air they require. This setup, in turn, reduces energy consumption and promotes the correct management of air temperatures across the data center at a reduced cost.

Short Airflow Paths

Data center airflow management doesn't need to be overcomplicated. One way to keep it energy efficient and simple is to maintain short airflow paths for chilled air to reach equipment. Along with keeping energy costs low, short airflow paths are also a good way to help reduce or even eliminate leaks and the unwanted mixing of hot and cold air.

One way to shorten airflow paths is the popular data center design method of creating hot and cold aisles for the appropriate air exchange. In this arrangement, cold aisles are set up to directly take in freshly chilled air before it has the chance to dissipate and/or mix with the general air across the surrounding facility. In contrast, when racks of servers are arranged to expel used air directly into a hot aisle, it travels the shortest distance to the return plenums. For hot and cold aisle airflow management to work effectively, every precaution needs to be taken to ensure proper air pressure and prevent airflow leaks.

As businesses expand, many data centers experience growth, along with the installation of new equipment. And when there are changes to the amount and arrangement of equipment, there can be challenges to data center airflow management. This is especially true for top-of-rack, horizontally mounted servers that are sometimes difficult to supply with sufficient cool air from perforated floor tiles below. For this reason, enclosed cabinets with airflow containment systems — which allow new equipment to be added — are a solution that permits flexibility without undue disruption to airflow management.

DataSpan — Helping Data Centers Manage Airflow Issues With Energy Efficiency

Since 1974, our team of experts at DataSpan has been providing innovative solutions to data centers of all sizes. That's why more than half of the Fortune 1000 relies on us to advise and serve them. For all your airflow management issues, you can trust DataSpan. Contact us today, or find a representative in your area.
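As a rough illustration of the sizing arithmetic behind airflow planning, the sketch below applies a common HVAC rule of thumb relating heat load, airflow, and the temperature rise across equipment. The rack size and temperature delta are invented, illustrative numbers, not figures from the article.

```python
# Rule of thumb: BTU/hr = 1.085 * CFM * delta_T(F), and 1 W = 3.412 BTU/hr,
# so CFM ~= 3.412 * watts / (1.085 * delta_T), roughly 3.1 * W / dT.

def required_cfm(heat_load_watts: float, delta_t_f: float) -> float:
    """Cubic feet per minute of chilled air needed to carry away a heat
    load, given the allowed temperature rise across the equipment (F)."""
    btu_per_hr = heat_load_watts * 3.412
    return btu_per_hr / (1.085 * delta_t_f)

rack_kw = 5.0   # hypothetical 5 kW rack
delta_t = 20.0  # 20 F rise from cold-aisle intake to hot-aisle exhaust

print(f"{required_cfm(rack_kw * 1000, delta_t):.0f} CFM per rack")  # ~786 CFM
```

The same arithmetic explains why short airflow paths and containment matter: any chilled air that leaks or mixes before reaching the intake is capacity you paid for but cannot count toward that CFM figure.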
Sources of Interference and When You Should Be Concerned

What is interference?

Wi-Fi operates in the license-exempt frequency bands. The main advantage of operating in these bands is that the network administrator does not need to pay a licensing fee to operate their equipment. The primary disadvantage is that other equipment may also be transmitting in these bands and could interfere with the operation of your Wi-Fi system. Equipment that operates in the license-exempt frequency bands includes Wi-Fi, Bluetooth radios, sensor networks such as ZigBee, microwave ovens, cordless phones, wireless surveillance cameras, and satellites. Surveillance cameras can be particularly disruptive to Wi-Fi networks, as they may be transmitting 100% of the time.

Interference negatively impacts performance

Before a Wi-Fi device transmits, it listens to the frequency channel. If it detects noise on this channel above a threshold defined in the 802.11 standards, it will not transmit. When the wireless medium is free, the Wi-Fi device will then start its transmission procedures. Interference from devices operating on the same channel can therefore cause wireless transmissions to be delayed, resulting in a significant degradation of throughput and Quality of Service (QoS). If the interference is bursty in nature, a Wi-Fi device could transmit thinking that the medium is free, and the signal could be corrupted while being transmitted over the air. This can lead to an increase in retransmissions of lost packets, which in turn also hurts throughput and QoS.

Interference reduces coverage

Most IT professionals understand that interference impacts throughput and QoS; however, they often do not realize its impact on Wi-Fi coverage. A good analogy for understanding the impact on coverage is a conversation between two people in a restaurant. If the restaurant is quiet, with little noise and interference, you can easily have a clear and error-free conversation. If the restaurant is noisy, you have to shout to be heard over the noise and you have to repeat your words more often. It also helps to get closer to the person you are trying to talk with. In other words, in a noisy, high-interference environment, cell coverage is reduced.

Detecting sources of interference

The best way to detect interference sources is to use a spectrum analyzer. Spectrum analyzers display all received signals in a selected frequency range. Transmissions from Wi-Fi devices, microwave ovens, cordless phones and cameras have different transmission profiles. Some of these transmission profiles are illustrated in Figure 3. By looking at the shape of the signals on the spectrum analyzer, you can identify the probable type of interfering source. A spectrum analyzer can range in price from a few hundred dollars to thousands of dollars depending on the functionality; indeed, many access points actually include spectrum analyzer capabilities. A simple low-cost spectrum analyzer should enable you both to measure the received signal strength and to analyze how it changes over a period of time. This is important because it will tell you how the received signal strength fluctuates during the day. Widely fluctuating signals warrant further investigation, as they indicate that something is changing in the environment.
What is an acceptable level of interference?

In order to successfully receive a signal, the transmitted signal must be heard above the interference. One of the key measurements IT professionals managing Wi-Fi networks need to understand is the Signal to Interference Ratio (SIR). The SIR is a measure of the received signal power level over the interference at the receiver, and it indicates whether the received signal can be recovered. Typically, for data applications you will want to make sure your received signal strength does not drop below -70 dBm. If you are implementing wireless phones, a higher threshold such as -67 dBm will be needed. In most enterprise environments, interference is typically below -90 dBm. In other words, an SIR of 20 dB or 23 dB is generally considered good for data or voice applications, respectively.

Want to learn more? Interface is offering a new course for IT professionals, to equip them with the skills necessary to plan, configure, troubleshoot and optimize Wi-Fi networks. Register for this course today.

WIRE400: Wireless Networking for the IT Professional

5-day course at Interface Technical Training in Phoenix, Arizona. Learn how to take control of your enterprise's Wi-Fi network. In this 5-day hands-on course, you will learn how to plan, configure, deploy and manage a wireless network, secure and troubleshoot interference sources, and identify and analyze Wi-Fi packets in Wireshark. The course finishes with an overview of the next generation of Wi-Fi product enhancements for enterprises and end users.
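The SIR arithmetic above is simple enough to script. The sketch below subtracts an assumed interference floor from a few hypothetical received-signal readings; because both values are in dBm (a logarithmic unit), the ratio in dB is a plain subtraction, and anything under the 20 dB data threshold gets flagged.

```python
def sir_db(signal_dbm: float, interference_dbm: float) -> float:
    """Signal-to-interference ratio in dB (a subtraction, since both
    inputs are already on a logarithmic scale)."""
    return signal_dbm - interference_dbm

# Hypothetical survey readings, not measurements from the article.
readings = {"data client": -70.0, "voice client": -67.0, "weak corner": -82.0}
noise_floor = -90.0  # typical enterprise interference floor per the text

for name, rssi in readings.items():
    sir = sir_db(rssi, noise_floor)
    verdict = "OK" if sir >= 20 else "investigate"
    print(f"{name:>12}: SIR {sir:.0f} dB -> {verdict}")
```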
A data breach is a security violation in which sensitive, confidential or otherwise protected data is copied, transmitted, exposed or stolen by an unauthorized individual, whether for personal gain or with malicious intent. Data breaches affect everyone, from individuals to giant corporations and governments. With users increasingly dependent on the Internet of Things and technology evolving rapidly, it is easier than ever to collect and process data. However, ineffective information security, or weak mechanisms for protecting information, leaves that data vulnerable to breaches.

"ISO/IEC 27040 defines a data breach as: compromise of security that leads to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to protected data transmitted, stored or otherwise processed"

Data breaches expose millions of personal details or billions of dollars' worth of corporate assets, such as intellectual property and government data. A breach can originate internally or externally, and it directly or indirectly imposes great expense on organizations dealing with high volumes of data. Many breaches have little effect or can be mitigated with limited damage, but some impose a huge burden. To date, the Yahoo data breach disclosed in 2016 has been the most expensive, costing the company nearly $1 billion.

"The cost of cybercrime continues to climb; it's expected to double from $3 trillion in 2015 to $6 trillion by the end of 2021 and grow to $10.5 trillion by 2025. The average cost of a single data breach in 2021 was $4.24 million, a 10% jump from 2019, according to Deloitte"

With technology advancing and user behavior shifting alongside the evolving IoT, information security has become a substantial concern. Data breaches can be classified by the amount of user information leaked, the value of the information leaked, and so on. Healthcare, energy, banking and utilities are among the industries most affected by data compromises.

Top 10 Data Breaches of 2021

1. LinkedIn: Around 700 million LinkedIn users' data was compromised in June 2021. This was the second data breach at LinkedIn, after one in 2012 in which 200 million users' data was leaked.

2. Facebook: In April 2021, nearly 533 million Facebook users' data was compromised, including usernames, passwords, locations and more.

3. A Chinese social media agency: In January 2021, around 200 million users' data was breached through the company's unsecured Elasticsearch database. The scraped data was mostly unencrypted and not password protected.

4. A men's clothing brand: This retailer suffered a data breach in January 2021 compromising 12.3 million users' data. The company claims cybercriminals targeted backup servers containing customer data.

5. Twitch: 125GB of sensitive data, potentially covering 7 million users, was leaked from this company owned by Amazon. Unlike other data breaches, the leaked data included almost the entire Twitch source code, so it may have affected all of its users.

6. A US-based retailer: This company lost nearly 4.8 million users' data, much of it customers' banking details.

7. A dating app: The app lost nearly 2.28 million users' data. Most of the data posted on the dark web was users' private information.

8. Pixlr: Nearly 1.9 million user records from Pixlr were breached in January 2021.

9. Four sports warehouse brands: The most recent data breach reported in 2021.
About 1.8 million users' data from four sports stores, namely Tackle Warehouse LLC, Running Warehouse LLC, Tennis Warehouse LLC, and Skate Warehouse LLC, was breached. Most of the compromised data was customers' credit card details.

10. A UK-based jewellery store: About 1.1 million users' data was breached. Data on high-end customers such as Donald Trump and the Saudi crown prince was leaked.

Data breaches expose various kinds of data: personal information such as names, Social Security numbers, addresses, email addresses, phone numbers, financial information and biometrics; corporate companies' protected and confidential revenue details, sales reports, user details and trade secrets; and government data such as defense secrets and details of state beneficiaries. Most data breaches happen because of ineffective cybersecurity practices at the organizations handling the data. In the past, most data breaches went unreported or were concealed by the data fiduciaries. However, with the evolution of strict data protection laws, it has become mandatory to notify regulators of any data breach and of the measures the company has taken to mitigate the damage.
What is an IVR?

An IVR, or Interactive Voice Response system, is a form of self-service that allows customers to accomplish tasks on their own by interacting with a telephone keypad (DTMF) or speech recognition. IVRs register the DTMF signals emitted by the keypad of each phone to identify the number being pressed. Based on the number pressed, the IVR directs an incoming call down a tree of predetermined steps until the caller gets the end result they are looking for, whether that's accomplishing a task entirely by self-service (e.g., looking up an account balance) or connecting with an agent (e.g., getting to the loans department).

More simply: an IVR is what directs your call when you call your bank, get annoyed, and desperately press 0 to be directed to a live person. In addition to self-service tasks, IVRs also perform the basic task of routing inbound calls to the correct departments based on responses to predetermined prompts that are either spoken or pressed on the keypad of a telephone.

How Does an IVR Work?

A basic IVR system only needs a computer hooked up to a phone line, a telephony board or telephony card to detect the DTMF signals produced by the phone keypad, and IVR software that allows for the setup of basic pre-recorded responses. A caller encounters an IVR the moment they call a number associated with it. The IVR answers the call and either directs the caller to press a number for a different department ("press 1 for customer support, press 2 for account information"... and so on) or asks the caller to say specific words, such as "yes," "no," or different numbers, which it detects in order to properly direct the request. Once a caller inputs a response, the IVR works down a tree of predetermined scripts based on the response to the question asked.

What Are the Most Common Uses of an IVR?

Many large organizations, especially in healthcare, insurance, banking and retail, use IVRs to help handle their incoming calls. Today, the most common self-service tasks handled by IVRs include:

- Filling prescriptions
- Calling different departments to schedule appointments for procedures
- Finding out your latest test results
- Reporting a claim or finding out your claim status
- Finding out your current coverage
- Billing and processing of personal information
- Account balance lookups
- Making transfers between accounts
- Disputing charges made on your card to report fraud
- Warranty claims or replacement parts for products
- Customer support when transactions online are failing

What Happens When an IVR Doesn't Work?

There is nothing more frustrating than calling into an IVR and finding the system does not work the way it is supposed to. Sometimes the number that correlates with a response is incorrect, the data lookup fails, the IVR prompt times out and drops the call, or the IVR does not respond properly to what the caller is saying and sends them to the wrong department. This usually results in the caller pressing 0 for an agent and then yelling at the agent about the broken IVR, which the agent has no control over. To help prevent agent abuse from customers calling your IVR, as well as a host of other potential issues, check out Cyara's CX Assurance Platform, which will help ensure your IVR is working properly before it goes into production.
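The "tree of predetermined steps" described above maps naturally onto a nested lookup structure. The sketch below is a toy DTMF menu with invented labels; a production IVR platform would layer prompts, timeouts, and speech input on top of the same idea.

```python
# Each keypress walks one level down the tree until a leaf (a department
# queue or self-service action) is reached.
MENU = {
    "prompt": "Press 1 for support, 2 for account info, 0 for an agent.",
    "1": {
        "prompt": "Press 1 for billing, 2 for technical support.",
        "1": "billing queue",
        "2": "tech support queue",
    },
    "2": "account balance lookup",  # self-service leaf
    "0": "live agent",
}

def route(keys: str):
    """Follow a sequence of DTMF digits down the menu tree."""
    node = MENU
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            return "error prompt (replay menu)"  # bad input: re-prompt
        node = node[key]
    return node

print(route("11"))  # -> billing queue
print(route("0"))   # -> live agent
print(route("9"))   # -> error prompt (replay menu)
```

Most of the failure modes described below, such as a wrong number-to-response mapping or a misrouted department, amount to errors in exactly this kind of tree, which is why testing it before production matters.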
Telehealth and COVID-19: Protecting ePHI

What is Telehealth?

The Health Resources Services Administration (HRSA) defines telehealth as "the use of electronic information and telecommunications technologies to support long-distance clinical health care, patient and professional health-related education, public health, and health administration. Technologies include videoconferencing, the internet, store-and-forward imaging, streaming media, and terrestrial and wireless communications." In today's healthcare landscape, telehealth technologies help bridge the gap between patients and providers, ensuring patients can continue to receive the highest level of care even when they're unable to physically visit a physician.

How has COVID-19 Impacted the Use of Telehealth?

Staying Safe and Cyber-Secure Through Telehealth

Using a secure platform: To "empower medical providers to serve patients wherever they are during this national public health emergency," Health and Human Services (HHS) issued guidance and enforcement discretion allowing organizations to implement tools to provide routine care for patients with chronic diseases and high risk factors. It is a necessity that telehealth be operated on a secure platform. Of the available communication platforms, only a select few are regarded as secure and appropriate for telehealth use. The list below includes some non-public-facing communication platforms whose vendors represent that they provide HIPAA-compliant video communication products and will enter into a HIPAA BAA:

- Skype for Business / Microsoft Teams
- Zoom for Healthcare
- Google G Suite Hangouts Meet
- Cisco Webex Meetings / Webex Teams
- Amazon Chime
- Spruce Health Care Messenger

*The OCR has not reviewed the BAAs offered by these vendors, and the list does not constitute an endorsement or recommendation of the technology.

Knowing that hospitals need to adapt quickly to remote options for healthcare, the OCR will not impose penalties for non-compliance with the requirements under the HIPAA Rules against covered health care providers in connection with the provision of telehealth during the COVID-19 public health emergency. This means that telehealth can also be handled through the following platforms until the pandemic is over:

- Apple FaceTime
- Facebook Messenger video chat
- Google Hangouts video

Platforms that are considered public-facing, lack the appropriate privacy protections, and are regarded as inappropriate include, but are not limited to:

- Facebook Live
- Chat rooms

Business Associate Agreements

How to achieve compliance?

The security, policies, procedures, and enforcement required to adhere to HIPAA regulations while correctly implementing a telehealth solution can seem complex. That's why at Intraprise Health, we've chosen to simplify these procedures and ensure complete compliance is easily attainable. Intraprise Health offers various training courses created to address the proper use of information and how to prevent theft. Our HIPAA workforce training includes details on the safety measures providers are recommended to apply. Disregarding HIPAA compliance may result in hefty fines because of PHI breaches. HIPAA One® is here to help, so you can easily achieve compliance and handle audits together. Taking all these precautions will allow practitioners to stay safe while maintaining PHI security. For more information, view our recorded webinars page.
A career in ethical hacking requires a proactive stance, because the ethical hacker works to prevent cybercrime and protect cyberspace from untoward intrusion. The term "ethical" here means good and legal, and it covers security professionals who provide offensive services with authorization. Ethical hacking is a profitable career option, with lots of fun and challenges along the way. If you want to learn hacking for ethical work, we suggest you go through the Ethical Hacking Course - Certified Ethical Hacker (CEH) v11. Below is a brief overview of how you can pursue an ethical hacking career and the path to becoming an ethical hacker.

In this article, we'll cover the following topics:

- What is Ethical Hacking?
- Ethical Hacker Definition
- What does an Ethical Hacker do?
- Popular tools used by Ethical Hackers
- How to become an ethical hacker?
- Skills required
- A career in Ethical Hacking
- Advantages and Disadvantages of a career in ethical hacking

Ethical Hacking Meaning

Ethical hacking is hacking computers or systems, by a company or individual, to help identify potential threats on a computer or network, with the aim of closing security loopholes and flaws. It is the act of probing computers and devices legally to test a company's defenses. In other words, ethical hacking refers to the technique of identifying flaws in a network and gaining access to the devices connected to the system, where the information gained is used to narrow the gaps and make the network more secure.

Who is an Ethical Hacker?

In layman's terms, an ethical hacker is someone who discovers flaws and security loopholes in products or code and works to fix them, specifically in a network, software, or computer system.

What does an Ethical Hacker do?

- Find the flaws and vulnerabilities in the computer system.
- Monitor networks in accordance with the security policy and take appropriate action.
- Analyze data from logs and network packets.
- Perform application and infrastructure penetration and intrusion testing.
- Perform root cause analysis.
- Investigate security incidents to determine the source of a policy violation.
- Ascertain that assets are not jeopardized.

Popular Tools Used by Ethical Hackers

There are numerous tools available; some of them are mentioned below:

- Burp Suite
- Hashcat
- and many more...

How to Become an Ethical Hacker?

Building a career in ethical hacking needs a plan. Many companies and organizations require a bachelor's degree in computer science or a cybersecurity-related field, though exceptions are made for people with sound technical knowledge of operating systems, databases, networking, and other technical fields. So, if you are aiming for this career, then after 10th grade, go for the science stream with physics, chemistry, mathematics, and computing as your main subjects.

Procure a degree: Most ethical hackers have a computer science degree. Because the majority of hackers have extensive network security experience, ethical hacking is a more advanced career path. This means that on-the-job training and certifications constitute a significant portion of job training. To start a career in ethical hacking, you must first understand computer components and their functions. Many people learn these skills while pursuing a degree. Look for professors or classes that specialize in ethical hacking and security at universities with strong computer science programs.
Gain experience: You can start working in network and IT support once you have a working knowledge of computer networking and the seven layers of the OSI model. This can help you become familiar with security programs, including how to update, install, and monitor them for flaws or malfunctions. Here are some jobs that will give you the necessary experience:

- Security Specialist
- Network Security Engineer
- Security Administrator

Acquire certifications: A variety of certifications are available to help you work as an ethical hacker. Although these certifications are not always required, they can help your CV stand out to security teams. The Certified Ethical Hacker exam is a global certification.

Skills Required To Become An Ethical Hacker

To become an ethical hacker, you want to possess the following skills:

- Proficiency in handling databases and computer networking.
- Knowledge of hash algorithms and password cracking.
- Familiarity with networking and its traffic.
- Knowledge of the OWASP Top 10 vulnerabilities, which are:
  - Broken access control
  - Cryptographic failures
  - Injection
  - Insecure design
  - Security misconfigurations
  - Vulnerable and outdated components
  - Identification and authentication failures
  - Software and data integrity failures
  - Security logging and monitoring failures
  - Server-side request forgery (SSRF)

Why Choose a Career in Ethical Hacking

Creative profession: A significant advantage of this career path is that it is a creative profession requiring you to use your creativity and problem-solving skills to investigate the various methods and loopholes through which hackers can gain access to a system; you must discover all of those loopholes while putting yourself in the hacker's shoes.

Adaptability in the workplace: Another significant benefit of ethical hacking as a career option is the ability to work from any location on the planet. The work does not necessitate moving or remaining physically present in an office. All you need is a computer with an internet connection, access to the company's system, and some online tools.

Threats will never go away: Cybercriminals will never stop. Businesses will always require protection to stay ahead of the game and maintain customer trust, whether that means updating old strategies to keep classic threats at bay or developing new methods of preventing criminals.

A constantly changing industry: Our cyber landscape is rapidly changing, and new technology brings with it new threats. As more businesses adopt cutting-edge technology like cloud computing and the Internet of Things, these specialized technologies will face unique security challenges. With more sophisticated defense technology in place, cybercriminals will have to constantly devise new strategies to try to breach a company's defenses.
Advantages and Disadvantages of a Career in Ethical Hacking

| Advantages | Disadvantages |
| --- | --- |
| There is a vast market demand for ethical hackers. | Even though demand is high, the method of selection and hiring is quite inconsistent. |
| There is a good chance that this unconventional career choice will be financially rewarding. | The certification and course completion need to be from a recognized university or institute, or a career in ethical hacking can become troublesome. |
| Identifying the weak points in the IT environment and aiding in their prevention. | The work may mostly be part-time. |
| Creating an IT infrastructure that is secure from external threats. | Many people use hacking for the wrong purposes, which leads plenty of companies to be unable to build trust in ethical hackers. |

How Much Can an Ethical Hacker Earn?

Because of the high demand for ethical hackers in organizations, as well as the scarcity of professional, certified ethical hackers who can perform the job with high proficiency, an ethical hacker's average pay is significantly higher than that of other professionals in the same field. According to Salary.com, an ethical hacker's salary typically falls between $92,400 and $118,169, with an average base salary of $103,583. In terms of pay, ethical hacking is one of the highest-paying career options.

Note that pay depends on several factors, including years of experience, skills, other certifications, education, job level, and location. For example, a certified ethical hacker's pay varies by skill set:

- Network security management skills — avg. pay $89,300/year
- IT security and infrastructure skills — avg. pay $85,000/year
- Penetration testing skills — avg. pay $75,076/year

With the necessity for cybersecurity increasing by the minute, job prospects for ethical hackers are sure to grow. All you need are the qualifications and skills to land a lucrative ethical hacking role with an established organization.
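One of the skills listed above, understanding hash algorithms and password cracking, is easy to demonstrate. The sketch below shows why a fast, unsalted hash falls to a simple dictionary attack; the "leaked" password and wordlist are invented for the example.

```python
import hashlib

# A SHA-256 digest as it might appear in a breach dump (invented example).
leaked_hash = hashlib.sha256(b"dragon123").hexdigest()

# Dictionary attack: hash each candidate and compare against the dump.
wordlist = ["password", "letmein", "dragon123", "qwerty"]
for guess in wordlist:
    if hashlib.sha256(guess.encode()).hexdigest() == leaked_hash:
        print(f"cracked: {guess}")
        break

# Tools like Hashcat do exactly this at billions of guesses per second on a
# GPU, which is why slow, salted schemes (bcrypt, Argon2) are preferred for
# storing passwords.
```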
Q: What's the difference between "storage virtualization" and "file virtualization"?

A: At one level, pretty much everything in storage these days is virtualized to some degree. Low-level standards like SCSI and RAID are forms of virtualization that abstract, or hide, the underlying physical attributes of disk drives from operating systems and applications. But that's not the usual meaning of virtualization as applied to storage today. In the modern sense there are really two major kinds of storage virtualization: one at the block level and one at the file level.

Block-level virtualization is usually just called storage virtualization, and it serves applications, such as database software, that need block-level access to data. The disks will typically (but not always) reside in Storage Area Network (SAN) arrays.

File virtualization is something completely different. It serves applications that need to access data as entire files rather than block by block; these files typically reside in file systems located on Network Attached Storage (NAS) devices.

Q: Why is block-level storage virtualization needed?

A: The need for this kind of virtualization arose because SAN users found that many important storage management services were restricted to the disks in a particular array and couldn't be expanded beyond it. Once you filled up all the disks in that array, you had to get another array, and that meant a new thing to manage. If the new array was from a different vendor, you ran into the fact that each vendor has a closed architecture: you can't manage both arrays from the same console, it's hard to replicate data between them, and so forth.

Storage virtualization tries to remedy these problems by moving key management functions off the storage array's internal controller and out into the network. Once you have done this, it no longer matters what the maximum capacity of a given array is or what type of array is your replication target. You can manage a bunch of discrete physical arrays as if they were one big virtual array. The first vendors to try this approach were companies like DataCore and FalconStor, starting in the late 1990s and early 2000s, but today plenty of vendors offer it.
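To make the "one big virtual array" idea concrete, here is a minimal, hypothetical sketch of the core job a block-level virtualization layer performs: presenting one linear range of virtual blocks and mapping each onto whichever physical array actually holds it. Real products layer caching, replication, and failover on top of such a mapping; the names here are invented for illustration.

```python
class VirtualArray:
    """Presents several physical arrays as one linear block address space."""

    def __init__(self, arrays):
        # arrays: list of (name, capacity_in_blocks) tuples.
        self.extents = []
        start = 0
        for name, capacity in arrays:
            self.extents.append((start, start + capacity, name))
            start += capacity
        self.total_blocks = start

    def resolve(self, virtual_block):
        """Map a virtual block number to (physical array, local block)."""
        for start, end, name in self.extents:
            if start <= virtual_block < end:
                return name, virtual_block - start
        raise ValueError("block out of range")

# Two discrete arrays appear to the host as one 3,000-block device.
va = VirtualArray([("array_a", 1000), ("array_b", 2000)])
print(va.resolve(1500))  # -> ('array_b', 500)
```

The host sees a single device of 3,000 blocks; the layer quietly translates each request, which is why the capacity or vendor of any single physical array stops mattering.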
It's tempting to imagine the cloud as an actual cloud full of ones and zeros, but it's really a broad range of computing services, such as virtualized IT infrastructure, storage, data processing, and apps, offered over a network. A public cloud is one where these services are available to anyone (either free or paid) and accessed over the internet. The best-known examples of public clouds are Amazon Web Services (AWS) and Microsoft Azure. A private cloud differs in that it's designed for the use of only one organization. A hybrid cloud, as its name suggests, combines elements of both. Many businesses also use multi-cloud strategies, meaning they employ more than one public cloud platform and may also use a hybrid cloud.

What sort of cloud is right for my business?
There are advantages and disadvantages to all types of cloud. These tend to center on cost and security, a private cloud being both more expensive and, theoretically, more secure for protecting your business data. A company that wants quick access to its data from anywhere might use a public cloud; the market-leading suppliers of cloud storage focus strongly on providing the highest level of security. A business that holds large quantities of sensitive data, such as a legal or financial firm, or some tech companies, might choose a private cloud. Increasingly, however, many companies go hybrid: critical data may be stored privately, while less important information and resources such as apps are held in the public cloud.

We're all heading for the clouds
Cloud adoption is growing strongly. According to research by Kaspersky, 37 percent of small- to medium-sized businesses and 50 percent of all enterprises are either using the cloud or planning to increase their use of it. There are good reasons for this. Maintaining IT infrastructure and the associated expertise in-house is expensive, and keeping up to date with new technologies can mean running just to stand still. For these reasons, even large multinationals often buy in their cloud services. Another great driver is changing working habits: increasingly, staff are either distributed or working remotely. If everything they do is in the cloud, it can be accessed anywhere and from any device. Even so, Kaspersky also found that the speed of cloud adoption varies considerably by industry sector and geographic location.

For small- and medium-sized businesses (SMBs), the arguments in favor of cloud adoption are perhaps the most compelling: the cost of maintaining IT infrastructure in-house is often prohibitive, and as a result, many have "made do" with technology that is far from cutting edge. Now they can buy everything in and have access to the same kind of IT and processing power as large companies. SMBs can use the latest productivity software, and they can mine data to gain greater insights. What's more, cloud computing is highly scalable: if you need more cloud services, you just buy them. It's like buying electricity; you pay for what you use.

Traditionally, SMBs have lacked the specialist skills and capabilities to set up infrastructure-as-a-service (IaaS), so they tend to use software-as-a-service (SaaS) solutions instead, which are far simpler to implement and remove the need to maintain the underlying software.

Security in the cloud
When the cloud was very new, security was a concern for businesses of all sizes.
Much of this was down to the regular news of data breaches and anxiety around the idea of sensitive data being stored "off-site." In practice, however, cloud services are generally highly secure: data is encrypted, which reduces the chances of a breach, and a private cloud can be used for sensitive data. The cloud also allows you to store data within a particular country, which may be necessary to comply with data protection laws. For companies with distributed workforces using multiple devices, the cloud can bring significant security benefits. Finally, cloud security offerings are now fully mature and offer a balance of peace of mind and ease of use. Businesses are starting to acknowledge that building cloud usage into their business processes can result in better security than they had when running everything on their own premises.

In some ways, what we're seeing with the cloud is the next stage of the internet's development: greater speeds, better connectivity, and improved security are making it possible for ever more functionality to reside on the network so it can be accessed from anywhere. Some companies, such as tech start-ups, have jumped right in, but many are still getting used to this brave new world of location-independent productivity. In five years' time, however, it will be the norm for everyone. Is now the right time for your business to transition to the cloud?
Not all hacks are created equal, but many have familiar markings: often, hacks start with a reconnaissance mission and end with stolen data or malware that infiltrates company networks. Known as the Cyber Kill Chain, many cyberattacks can be broken down into seven steps or fewer. The more your company knows about the ins and outs of a hacker's objective, the easier it is to take precautionary measures. And while cleaning up after a hack can be complex and costly, many preventative solutions are simple. Here we break down the anatomy of breaches and hacks and compare scenarios between (hypothetical) companies with and without security solutions in place. Plus, we shed light on major cyberattacks across industries, from healthcare to tech, proving that a cyberattack can happen to the best of us.

The anatomy of a cyberattack
When examining an attack, cybersecurity professionals often reference the seven-step Cyber Kill Chain, first introduced by Lockheed Martin Corp.:

1. Reconnaissance
2. Weaponization
3. Delivery
4. Exploitation
5. Installation
6. Command and control (C2)
7. Actions on objectives

Not all attacks contain all seven steps. For example, they may not use an exploit (step 4) or malware (step 5).

A typical cyberattack scenario
Here's a simplified example of one of the many potential scenarios in a multiphase attack that starts with phishing. You'll notice that it follows the general outline of the seven steps above, but not necessarily in the same order.

Infrastructure: Cybercriminals establish the infrastructure needed to carry out the attack. They may use a phishing kit that includes everything necessary for launch, from images and source code to a website. Some kits come complete with a massive set of email addresses, as well as an email template. In a targeted attack, the cybercriminals first conduct reconnaissance to probe systems for vulnerabilities, find high-value targets within the business, gather employee information from social networks, assess what tools the company uses, and so on. This enables them to zero in on the best targets when sending the phish.

Phishing emails: Impersonating a legitimate company, the attackers send phishing emails, attempting to lure the recipients into clicking on a link. The link redirects to a fake login page or a site that contains malware.

Logins: The site either captures the credentials when the user tries to authenticate or downloads malware that steals the credentials, for example, a username and password stored in the web browser.

Additional payloads: With the credentials compromised, the attackers launch the next phase. This typically involves exploiting system vulnerabilities, creating backdoors, and deploying new malware payloads to escalate privileges and move laterally. The attackers continue to map the infrastructure and compromise more systems connected to the network.

Data exfiltration: If the attackers succeed in establishing a command-and-control channel to manipulate the IT systems remotely, they move on to their actual objective. If that objective is to steal information, they may use malware and a staging server to collect the data, then exfiltrate it off the network. At this point, the hack becomes a data breach.

A tale of two hacks
A simple security measure can keep your company protected during a cyberattack. Let's look at this hypothetical example of cybercriminals targeting two fake companies: A Corp. and B, Inc. Bad actors acquire a list of corporate email accounts from both companies on the dark web.
After getting employee names off social media, the cybercriminals use each company's email conventions to generate a long list of addresses. Armed with a list of common passwords (e.g., qwerty, password, 123456), they launch a password spray attack: trying one password at a time against every email address on the targeted app or system, then waiting 30 minutes between rounds to avoid triggering a red flag.

Cyberattack Scenario 1: A Corp.
After working at it for several hours, the attackers find a successful email and password combination. All they need is one. The attackers use the compromised credentials to conduct network reconnaissance, elevate privileges, and eventually go for the win: stealing A Corp.'s customer data. By the time A Corp. discovers the data breach, the bad actors are long gone, and the company is looking at months of costly recourse.

Cyberattack Scenario 2: B, Inc.
The attackers targeting B, Inc. have exhausted their initial email list with no success. They decide to change tactics, conducting more reconnaissance to identify high-value targets within B, Inc. With a new list of select names, they proceed with a brute-force attack, using "dictionary" passwords or guessing passwords based on information from social media posts (e.g., a favorite sports team or a family pet's name). Still no success.

What the bad actors don't know is that B, Inc. uses a password manager. Once a quarter, IT admins even consult their password health dashboard to see their overall company security score and work with employees who may need to update passwords that could compromise the company. And since the password manager makes it easy for employees to use strong passwords, they don't have to resort to easily guessable ones, like the names of pets, which hackers could quickly figure out.
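The spray pattern described above, one password tried against many accounts with long pauses in between, is exactly what defenders hunt for in authentication logs. Here is a minimal, hypothetical detection sketch; the log format and threshold are invented for illustration, and a production system would also correlate by time window and known-good IP ranges.

```python
from collections import defaultdict

# Each failed login is (timestamp, source_ip, username).
failed_logins = [
    (1000, "203.0.113.7", "alice"),
    (1002, "203.0.113.7", "bob"),
    (1005, "203.0.113.7", "carol"),
    (1100, "198.51.100.2", "alice"),
]

SPRAY_THRESHOLD = 3  # failures across this many distinct accounts = suspicious

def detect_spray(events):
    targets = defaultdict(set)
    for _, source_ip, username in events:
        targets[source_ip].add(username)
    return [ip for ip, users in targets.items() if len(users) >= SPRAY_THRESHOLD]

print(detect_spray(failed_logins))  # -> ['203.0.113.7']
```

The key signal is many accounts per source rather than many attempts per account, which is what lets spraying slip past simple per-account lockout rules.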
Recent cyberattacks at Zoom, Nintendo & more

Finance: Travelex
The London-based currency-exchange company's online operations were crippled for several weeks in January 2020 after a ransomware attack. The attackers demanded a ransom of several million dollars and threatened to publish exfiltrated customer data if Travelex didn't pay. The losses from the double extortion contributed to a financial crisis at the company, and Travelex entered into administration, the U.K. equivalent of bankruptcy, later in the year.

Healthcare: Universal Health Services
Universal Health Services (UHS), which serves 3.5 million patients at 400 U.S. and U.K. locations, suffered an estimated loss of $67 million in a ransomware attack in September 2020. Many UHS hospitals around the U.S. had to redirect patients elsewhere for treatment and cancel appointments, and staff had to revert to all-paper methods. Restoring the IT systems took close to a month.

Technology: Zoom
A cybersecurity company discovered half a million Zoom accounts for sale on the dark web in April 2020, available at a bulk price of $0.002 per account. The compromised data included email addresses, passwords, and personal meeting URLs and host keys. Zoom itself wasn't breached; the exposed accounts appeared to be a case of credential stuffing, with attackers using previously stolen credentials in a large-scale, automated attempt to gain access to Zoom accounts.

Public sector: Veterans Administration
The U.S. Veterans Administration (VA) suffered a data breach that exposed the sensitive data of 46,000 military veterans in September 2020. The attackers targeted a third-party vendor, a payment-processing system provider, with the goal of stealing money that the VA sent to healthcare providers. The attackers used social engineering to compromise access authentication protocols.

Travel and hospitality: Marriott International
In March 2020, the global hotel company notified more than 5 million guests that their personal information had been exposed due to a vulnerability in the company's app. An unauthorized party accessed the data from mid-January through the end of February using two employees' login credentials, and the company disabled both accounts upon discovering the incident.

Gaming: Nintendo
The Japanese gaming giant suffered a data breach that exposed the accounts of 300,000 customers in spring 2020. The cybercriminals then took over numerous accounts, and gamers reported financial losses. Exposed data included names, birth dates, emails, and countries of residence. Some security researchers believe the attackers used credential stuffing and credentials compromised in previous data breaches.

Information technology: SolarWinds
An unprecedented-scale attack on U.S. IT company SolarWinds, reported in December 2020, put numerous high-profile companies, government agencies, and other organizations at risk of hacking and data breaches. The attackers gained access to the SolarWinds software and added malicious code, which was then sent to customers during routine software updates. The hack wasn't discovered for months. Reports later identified a weak password (solarwinds123) created by an intern as the catalyst for the attack.

Don't wait for a cyberattack. Take action now.
Download our free ebook, A Business Guide to Data Breaches and Hacks, to get a 360° perspective that includes causes, consequences, and prevention techniques.
The Heartbleed bug probably has you shaking your head and wondering when, why, and how to change your password, just when you'd actually memorized it! There are plenty of resources that cover the when and why; here's the how, or rather, some basics of maintaining secure passwords beyond this particular incident, because the aftermath of Heartbleed shouldn't be the last time you change your password!

Some of the most common calls our Network Operations Center gets from users are password related. Someone forgot their password. Another entered the wrong one too many times and got locked out. Yet another might need to change someone else's password. You get the picture: this seemingly simple stuff quickly feels complicated.

Protect Your Data By Regularly Updating Your Password
Passwords are a frustrating part of information technology because, by nature, they are a hurdle the user must overcome to access data. Even more vexing, your passwords may have to be changed on a regular basis. Why do we need to keep changing our passwords? What should I do, and what should I avoid, when choosing passwords? How am I supposed to remember them all? Before you throw up your hands in despair or, worse, start creating weak passwords, here's some advice.

Why are password policies so complicated?
Two aspects of password-based authentication frustrate users: password complexity and password aging (having to change it every so often). However, both characteristics are necessary for passwords to be effective. Information systems require passwords in order to prevent unauthorized parties from accessing sensitive data. One type of unauthorized party is a hacker repeatedly attempting to guess your password; for this reason, systems require a complex password (upper- and lowercase letters, special characters, and so forth). Another type is someone who already knows your password: maybe an assistant, a former employee, or someone who gained access to another account of yours that uses the same password (how many of us use the same password for our Windows logon and our Amazon logon?). For this reason, systems may require you to change your password on a regular basis.

How should I pick a password?
DO: Make it something hard to guess. That doesn't necessarily mean hard to remember, but make it something that would be difficult for a hacker, or even someone who knows you, to deduce. You could make your password a sentence, for example "I have 8 amazing cats!" (complete with spaces, exclamation point, and number).

DO: Use a different password for each and every service. Don't make your work password the same as your personal email password. Don't make your personal email password the same as your cable company password. Why? Because you don't know who on the other end can see your password, or what they might do with that information. Some companies (unfortunately) store your password in plain text, meaning someone working for the company can see it. Maybe that's fine in a particular case, since I would expect my cable rep to have full access to my account, but what's to stop them from trying that password on another one of my accounts, like my personal email? Another risk: if you give a coworker your work password ("I'm out sick, can you send an email from my computer?"), they might deduce that the same password would work for another account of yours.

DO: Use two-factor authentication where available.
This increasingly popular method sends a text message to your phone for additional verification. It combines the security of something you know (your password) with something you have (your phone), making it much harder for someone to trick or guess their way into your account.

DON'T: Share passwords. Even if two people require access to the same data (for example, a shared mailbox), you should create two separate accounts with two separate passwords. As my colleague Tobin would say, "Passwords are like toothbrushes." Sharing is icky.

How do I keep track of my passwords?
Everyone has their own system. The bottom line is to use one that works for you and keeps your information secure. Here are two recommendations:

DON'T: Save all your passwords in an Excel or Word document. Such files are easy to open and compromise, even if you put a password on the document itself.

Looking for more information about information security? We've got you covered. Check out our infographic Is Your Organization Protected from Cyberattacks? for more information on threats to your organization's security and how you can prevent your data from being compromised.
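As a closing illustration of the "hard to guess but easy to remember" advice above, here is a minimal sketch that generates a random passphrase in the spirit of the "I have 8 amazing cats!" example. The ten-word list is a stand-in; a real generator would draw from thousands of words, such as the EFF Diceware list.

```python
import secrets  # cryptographically strong randomness, unlike random

# Stand-in wordlist; real generators use thousands of words.
WORDS = ["amazing", "cats", "harbor", "violet", "pickle",
         "rocket", "maple", "signal", "copper", "lantern"]

def make_passphrase(num_words=4):
    words = [secrets.choice(WORDS) for _ in range(num_words)]
    digit = secrets.randbelow(10)
    return f"{' '.join(words)} {digit}!"

print(make_passphrase())  # e.g., "maple copper cats signal 7!"
```

With a realistic wordlist of about 7,776 words, four randomly chosen words alone give roughly 51 bits of entropy, far stronger than a typical eight-character password.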
By splitting up workloads and giving more than one person control over processes, separation of duties (SoD) ensures that multiple people share responsibilities in a series of checks and balances, reducing the chances of errors or fraud. Implementing separation of duties for a security team takes some time, but it makes the process of managing the organization's network safer.

What is Separation of Duties, Anyway?
In the world of finance and accounting, separation of duties is a common practice. By separating those in the organization who handle receipts from those who make the bank deposits from those who pay the bills, for instance, the organization reduces the chances of fraud. If one person controlled all of those aspects of the organization's finances, they could misappropriate or steal funds and alter the accounting records to hide their actions, and it would be difficult for someone outside the accounting department to catch the fraud.

SoD in Security
Within an IT department, separation of duties (also called segregation of duties) refers to the network security of the organization. Implementing SoD for security addresses both fraud and hacker attacks.

- Fraud: Detecting fraud through SoD involves enacting policies that surface conflicts of interest or unchecked errors before those problems can place the network at risk.
- Attacks: By implementing SoD controls, the organization's security department should be able to detect potential theft of information or security breaches before they happen.

In basic terms, SoD means implementing a series of checks and balances that ensure the security team can keep the organization's network and data safe from both internal and external attacks.

How Separation of Duties Works
When developing a separation of duties plan, the organization's security team comes up with a list of tasks and duties the plan should address. The team then determines how to split up the duties to create the desired level of checks and balances. Some of the most important aspects of creating an SoD plan include the following (a minimal sketch of one such control appears after the list):

- Data Management: Security team members should rank data by sensitivity level. Data deemed most sensitive should receive the greatest protection under the SoD plan; commonly available data doesn't need protection.
- Data Ownership: If some of the organization's members only need access to sensitive data occasionally, the SoD plan should spell out who can grant them access and how at least two members of the security team will track the use of that data.
- Modifying the Network: Any modification of network processes should require approval from at least two team members, whether that involves changing network permissions, downloading software, or adjusting the network firewall.
- Monitoring System Logs: Under the SoD plan, no single person should be able to review and edit system logs without oversight from another team member.
- New Accounts: When new employees join the organization, checks and balances in account creation will catch errors or fraudulent activity that would give a new employee unnecessary network permissions.
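Here is a minimal, hypothetical sketch of the "two approvals for any network change" control described above. The class and names are invented for illustration; a real system would persist approvals, authenticate approvers, and log every action.

```python
class ChangeRequest:
    """A network change that can only apply after two distinct approvals."""

    REQUIRED_APPROVALS = 2

    def __init__(self, description, requester):
        self.description = description
        self.requester = requester
        self.approvers = set()

    def approve(self, approver):
        # The requester approving their own change would defeat SoD.
        if approver == self.requester:
            raise PermissionError("requester cannot approve their own change")
        self.approvers.add(approver)

    def can_apply(self):
        return len(self.approvers) >= self.REQUIRED_APPROVALS

req = ChangeRequest("open firewall port 8443", requester="dana")
req.approve("erin")
req.approve("frank")
print(req.can_apply())  # True only once two people other than dana sign off
```

The essential property is that no single identity, including the requester's, can push a change through alone.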
Common Mistakes in Creating an SoD Plan
The biggest mistake organizations make when creating a separation of duties plan is not discussing the roles and oversight thoroughly enough as a team. Without clear role assignment and oversight, some aspects of network security may lack the proper checks and balances, creating exploitable holes in the system.

Additionally, it's important for the security team to run detailed tests on the plan before implementing it. The testing process should catch areas where only one person controls a certain aspect of the network, as well as places where the definition of members' roles lacks necessary detail. Even if your organization hires a third-party group to oversee your separation of duties plan, a few people should review the work of that third party on a regular basis.

Example 1: Defeating Inside Attacks
Team members have access to the network every day, allowing them to do their jobs and keep the organization working smoothly. Although you trust your team members, a cyberattack that originates from inside the organization is always a possibility. Implementing a proper SoD plan greatly reduces the chance of this type of attack: with checks and balances in place, other team members should be able to spot clues that reveal the possibility of an inside attack.

Thwarting Planned Inside Attacks
Within an SoD plan, at least two team members should always have oversight of the system. Should an employee attempt to upload malicious software to harm or bring down the network, for example, having more than one person in charge of searching for malware increases the chances of catching it before it does damage. Should one of the security team members be the one initiating the attack, having a second person involved in checking for attacks will thwart the first. As an added benefit, an employee considering an inside attack may abandon the idea because of the strong separation of duties plan in place: knowing they cannot avoid detection, they won't try the attack.

Thwarting Inadvertent Inside Attacks
Some inside attacks are inadvertent. The right SoD plan should catch these potential errors before they happen, too. Suppose a team member with oversight for downloading apps and software unknowingly downloads malware. Under the SoD plan, a second team member also oversees software downloads and can hopefully spot the dangerous software if the first person misses it. Without an SoD plan, if only the first person oversees software downloads, the malware may reach the network undetected. The team member probably didn't mean to download the dangerous software, but the result is the same as if the attack had occurred on purpose.

Example 2: Protection of Data
Preventing data breaches is a key role of any organization's security team. A data breach could lead to significant financial loss, either through the loss of customer trust or through financial judgments against the organization. Government regulations require protecting sensitive customer and employee data; if hackers steal credit card information or Social Security numbers, your organization could face significant financial penalties. Ensuring data protection through a separation of duties plan involves a few different practices.

Viewing Sensitive Data
The security team needs to ensure that only those members of the organization who need to see sensitive personal data for customer relations have access to this data.
At least two members of the security team need to be able to review which people have access to sensitive data and to adjust permissions as needed. Without an SoD plan in place, and without clear oversight from security team members over who receives access to this data, the likelihood of an employee having unnecessary access to sensitive data goes up. That error increases the potential for the data to fall into the wrong hands through fraud or hacking.

Tracking Those Who Access Sensitive Data
The separation of duties plan should set up a tracking process for monitoring which employees gain access to which types of sensitive customer data. Should a data breach occur, being able to track who accessed the sensitive data helps the security team discover the source of the breach. The SoD plan should implement checks and balances so that at least two people work together to track down the information. With oversight from multiple people, there is no chance of the same person who caused the data breach also being the only person investigating it.

Example 3: Splitting Network Oversight Duties
If an attack occurs in which hackers steal the login credentials of a member of the security team, your organization's SoD plan can limit the damage the hackers can do before you discover the intrusion. Without an SoD plan, members of the security team may have access to the network without any checks and balances from other team members. If hackers manage to steal the credentials of one of these powerful team members, they can take full advantage of their newfound access, doing significant damage to the network and stealing significant amounts of sensitive data. By splitting the security team's duties among several people, no single team member has unchecked power over the network and the organization's data. Consequently, the hackers cannot gain unchecked power either, unless they manage to steal credentials from multiple people.

Example 4: Implementing Technical Safeguards
A significant part of any organization's security plan involves setting up firewalls, intrusion detection systems, and vulnerability scanning. If one team member controls all of these aspects of network protection, it's possible that this person will miss something, leading to a breach. A separation of duties plan that splits control and management of these safeguards ensures that multiple team members have a role in managing and monitoring the network's protective measures. Whether a team member makes an error with the network management software on purpose or inadvertently, having checks and balances in place should allow the security team to catch the error before significant damage occurs.

How to Get Started With Separation of Duties
Here are some ideas to help an organization begin implementing an SoD plan.

Perform an Assessment of Risk
Before assigning responsibilities for security measures within the organization, it's important to understand exactly where your organization's security vulnerabilities lie. You can then build the SoD plan around those risks. The risk assessment should also define the potential security risks for each position on the security team: define the duties for each position, then determine what kinds of security risks those duties carry. Undertaking a detailed risk assessment of your organization's security measures should be a regular process.
Depending on how quickly the organization is growing and changing, you may need a reassessment every three to six months. For an organization growing at a slower pace, an annual reassessment will probably be sufficient.

Hire a Third-Party Security Service
Third-party companies are available to handle an organization's security concerns. They can perform a variety of security functions, including:

- Performing a risk assessment
- Developing a security plan
- Running a security audit
- Testing the security measures
- Creating reports for the organization to review
- Monitoring the safety of sensitive data
- Watching for security breaches

Hiring an outside team to implement security measures for the organization fits the definition of separation of duties: no employee has control over the security processes, reducing the chances of fraud. The organization may want the third-party security service to report to multiple people in the organization, or to an audit committee, in keeping with SoD practices. You don't want one person in your organization to have access to all of the reports and to decide which information to share with the rest of the team. Another reason to consider a third-party service is to support a small security team: if the team members can handle certain aspects of the separation of duties plan, the third-party organization can cover any remaining areas outside your team's bandwidth.

Determining the Success of an SoD Implementation
After creating a separation of duties plan, the security team can ask itself a few questions to test the plan's usefulness. If the answer to any of these questions is yes, the plan will need some tweaking before the organization can implement it.

- Does one person control sensitive data? Any sensitive customer or organizational data should have at least two people with the ability to track its usage. This ensures one person cannot move, delete, or copy the data without the knowledge of at least one other member of the security team.
- Does one person handle security monitoring? At least two members of the security team should be able to monitor warnings about security breaches or digital attacks. If only one person monitors this information, they could allow hackers to go unchecked, either on purpose or inadvertently.
- Does any employee have conflicting responsibilities? If one member of the security team is responsible for both implementing the security plan and testing it, the result can be holes in the plan. The person implementing the plan may not see the need to run extensive tests, for example, or may manipulate the plan or its testing to their own benefit.
- Are subordinates monitoring the actions of supervisors? Unless your security department is extremely small, consisting of only a few people, subordinates should not receive responsibilities that require them to monitor the actions of a supervisor. This could cause significant conflicts of interest.
- Are non-security personnel involved? Again, unless you have a small security department, you may not want employees who rarely deal with security to have responsibilities under separation of duties. Non-security personnel may not understand exactly what kinds of problems to watch for, and they may not have the time to dig into security issues properly, allowing potential breaches to occur.
Another option is hiring a third-party auditor to study your organization's SoD setup and determine whether it is as safe as it could be.
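The checklist above lends itself to automation. Here is a minimal, hypothetical sketch that flags two of the conditions it describes: a duty controlled by a single person, and one person holding conflicting duties. The role data and conflict pairs are invented for illustration.

```python
# Map each duty to the people assigned to it (illustrative data).
assignments = {
    "sensitive_data_access_review": ["gina"],
    "security_monitoring": ["hank", "ivy"],
    "plan_implementation": ["hank"],
    "plan_testing": ["hank"],
}

# Duty pairs that should never belong to the same person.
CONFLICTS = [("plan_implementation", "plan_testing")]

def audit(duties):
    findings = []
    for duty, people in duties.items():
        if len(people) < 2:
            findings.append(f"single point of control: {duty} ({people[0]})")
    for a, b in CONFLICTS:
        overlap = set(duties.get(a, [])) & set(duties.get(b, []))
        for person in overlap:
            findings.append(f"conflicting duties: {person} holds {a} and {b}")
    return findings

for finding in audit(assignments):
    print(finding)
```

Run against the sample data, the audit flags the lone reviewer of sensitive-data access and the fact that one person both implements and tests the plan, exactly the "yes" answers the checklist warns about.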
Networks today are more complex than those of the past, but the rationale remains the same: to provide connectivity so that users can share resources and information. When physical connectivity is damaged, users hit a dead end and productivity instantly declines. Some problems are complex and demand real knowledge of configuration and software; others are as simple as having the right physical item connected in the right way.

Whether you run a small network or a large enterprise, troubleshooting physical connectivity can be challenging, and the complexity varies with the number of connected devices. Rather than troubleshooting blindly, you need some knowledge of the infrastructure to grasp the problem area quickly. Anyone can start troubleshooting by learning the infrastructure and some common network tools, and there is plenty more to learn to excel at it. It is best to troubleshoot from layer 1 up to the higher layers to avoid confusion and complexity: faulty cables or faulty connectors will prevent data transmission at the first layer, and the quality of the fiber and copper infrastructure affects nearly everything that traverses the network. Troubleshooting the most common physical connectivity issues without proper steps is nearly impossible, and in large environments a visual network topology is essential for localizing a problem to a specific router or link. Common physical connectivity problems include the following.

Connectors
Over time, most connectors suffer physical damage and fatigue. This is especially true in fragile cases such as fiber-optic cables. If you experience an intermittent or complete loss of connectivity, inspect the connectors to ensure they are not broken or cracked and that the fiber or wire is seated properly and securely.

Cables
Faulty cables and faulty connectors will prevent successful layer 1 data transmission. A bad cable may simply be the wrong type of cable for the job, for example, two 1000Base-TX devices interconnected with a Cat 5 cable instead of the Cat 6 cable the standard requires. One obvious sign of a bad connector is a network problem isolated to one particular location. Depending on the state of the connector, you may have a spotty connection, or a connection that comes and goes at odd intervals. Check whether the connectors have broken tabs, loose wiring, or other physical signs of trouble, and if a connector looks questionable, replace it as soon as possible. If the connectors look fine but the problem remains isolated to one run, suspect bad wiring.

Opens and shorts
An open is a broken copper strand that prevents current from flowing through the circuit. A short occurs when two copper conductors touch each other, so that current flows through the contact point instead of through the attached electrical circuit, because the short offers less resistance.
In addition to miswiring, other cable faults, such as opens and shorts, should be checked with a multifunction cable tester. An open fault means the cable does not complete the full circuit, usually because some or all of the wires in the cable have been cut. A short fault means data is traveling on wires other than those intended, often because a kink or miswiring in the cable has allowed bare wires to touch.

Split pairs
An unshielded twisted pair (UTP) cable contains eight individual copper leads, but only four of them carry data: two transmit leads and two receive leads. That leaves four unused leads, and some installers use those extra leads to carry a second Ethernet connection over a single UTP cable. While this approach works, the second connection uses nonstandard wiring, so you must be aware of any nonstandard pinouts in a network you troubleshoot.

Most splits in cabling are intentional, letting you run wiring in several directions by using a splitter. Depending on the cable type in question, it is not uncommon for each split to reduce signal strength, so a cable can only be split so many times; if a normally working run develops a problem, check its splitters. If a split is unintentional, you are usually dealing with an open or a short.

Collisions
Collisions are part of the arbitration method in half-duplex Ethernet. They are not really physical problems, but physical problems can inadvertently cause Ethernet collisions. Full-duplex Ethernet communications never collide because they use separate channels for sending and receiving.

dB loss
The signal power of a transmission can degrade to the point where the receiving device can no longer interpret it correctly. This loss of signal power is called decibel (dB) loss, and it can result from runs that exceed the distance limits of the copper or fiber cable. Without getting into complex math, dB loss is a calculation of the difference between the signal at the source and the signal at the destination. A dB loss of 0 would be perfect, which is hard to achieve: every network medium has at least a little loss, and the goal is to keep the number as low as possible. Tables of acceptable dB loss per 100 feet exist for each media type, and following best practices for every category of connector keeps dB loss at an acceptable level. The exact measurement procedures for each medium are beyond the scope of this discussion.

TX/RX reversal
TX stands for transmit and RX for receive. In network cables such as patch cables, TX must connect to RX for each pair of wires. Using an ordinary straight-through patch cable to connect two similar devices can connect transmit to transmit and receive to receive, which will not work. This kind of reversal can also be caused by improperly connected wires on a wall jack or patch panel. Some devices can auto-sense the reversal and correct it; others cannot.

Cable placement
Correct cable placement in the network and data center, throughout the building and wiring closets, is necessary for reliable and effective communications.
Cables should run either below the data center's raised floor or above the ceiling, where they are safe but can still be accessed if necessary. Take care to keep them away from power cables whenever possible; if a run must cross power cables, cross at an exact 90-degree angle to minimize interference. Because a wire can pick up stray current when placed near a magnetic source, be careful where communication cables run. This susceptibility to external magnetism is known as electromagnetic interference (EMI). Keep copper cabling away from all powerful magnetic sources, including speakers, mobile telephones, wireless transmitters, power facilities, fluorescent lights and their ballasts, electric motors, copy machines, refrigerators, amplifiers, and microwave ovens. Anything that produces a magnetic field should be avoided when routing cable.

Cable type and distance
It is essential to choose cable based on the network topology and the distance between components, because some network technologies can run farther than others without communication errors. Most network communication technologies suffer from attenuation, the degradation of a signal as it travels through its medium, and attenuation is more pronounced in some cable types than in others.

Impedance is opposition to the flow of a signal at many points along the way. Electrical impedance is measured in ohms, and differences in impedance result in signal reflection. A common issue on traditional POTS networks occurs when moving from four wires to two: the impedance mismatch leaks transmit audio onto the receive side as echo.

Crosstalk
Crosstalk is the bleeding of a signal between two current-carrying wires that are adjacent to each other. It can slow network communications or stop them almost entirely. Network cable designers minimize crosstalk inside the cable by twisting the paired wires together, which positions the pairs at angles to each other. To keep crosstalk down, use a cable category rated for the speed of your network; to eliminate it entirely, the recommended choice is fiber-optic cable, which carries light rather than electricity and is therefore immune to crosstalk. Crosstalk is measured in decibels as a negative number, with the minus sign implied. The twists are what reduce crosstalk, so more twists per unit length are better.

An important measurement is near-end crosstalk (NEXT), taken where signals are strongest, at the source end. Measured with specialized equipment, a signal is transmitted down one pair, and anything heard on another pair is crosstalk. Less crosstalk is better, so a higher figure is better: 40 dB is better than 30 dB (again, the negative sign is implied).

Attenuation
Attenuation is the power lost by an electrical signal in transit; it too is expressed as a loss with the minus sign implied. The TIA/EIA-568 standard limits attenuation to 24 dB for a 100 MHz signal, which means the received signal may be less than 1/100th of the original.
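To make these dB figures concrete, here is a minimal sketch of how decibel loss relates power at the source to power at the destination. The sample power values are invented for illustration.

```python
import math

def db_loss(power_in_mw, power_out_mw):
    """Decibel loss between transmitted and received power."""
    return 10 * math.log10(power_in_mw / power_out_mw)

def fraction_remaining(loss_db):
    """Fraction of the original power left after a given dB loss."""
    return 10 ** (-loss_db / 10)

# A link that receives 0.004 mW of an original 1 mW signal:
print(round(db_loss(1.0, 0.004), 1))  # ~24.0 dB of loss
print(fraction_remaining(24))         # ~0.004, under 1/100th of the original
```

This is why a 20 dB loss already leaves only 1 percent of the signal, and every additional 10 dB cuts what remains by another factor of ten.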
Drive connections
To perform properly, a hard drive needs an adequate power supply and a connection to the PC through a ribbon cable. Plugs can jiggle loose when the PC is moved, and it is easy to dislodge a ribbon cable connection when pulling circuit boards or performing maintenance inside the case. The hard drive also relies on a Molex connector from the PC power supply; the plug must be fully inserted, and this connector needs firm pressure to insert fully and even more pressure to remove.

Link lights
On a cabled network, the easiest way to confirm a physical connection is to check that the link lights on the devices at each end of the cable are lit. Review the plugs at each end to ensure they are pressed securely into the network sockets. Link lights are most often found on network sockets or on the front of devices such as LAN-capable routers and switches.

Troubleshooting common physical connectivity issues doesn't have to be difficult if you understand the various techniques that make it all work. A strong understanding of physical connectivity goes a long way toward resolving problems. To troubleshoot any network, start with physical connectivity and then move on to IP address configuration. With the required skills and knowledge, you can resolve these issues quickly and easily.
This article was previously published in Cybersecurity Trends, authored by Aron Seader, Eclypses Sr. Director of Core Engineering.

Society has already benefited from quantum computing, and we continue to witness how quickly this powerful technology can simulate and test theories. But not everyone uses this technology for good: with more powerful technology come more advanced bad actors who will use it maliciously. We cannot afford to wait for these attackers to use quantum computing against us, so preparation now is critical. Combating these threats will take proactive, thoughtful approaches to cybersecurity and a new understanding of best practices that may seem foreign today but will be instrumental in protecting our data and companies going forward.

How does quantum computing work, and why does it threaten today's defenses?
Quantum computing works on the same principle as modern computing, representing the world through complex mathematical equations. The difference is that instead of being restricted to a binary answer (yes/no, on/off, 0/1), quantum computing uses qubits and superposition to represent more complex answers. Because they are quantum particles, qubits can exist in two states at once, and it is the superposition, the degree to which a qubit is in one state versus the other, that can be used to answer more complex questions. Classical computers can only compute one answer at a time, systematically working through many individual equations to reach an overall conclusion. Quantum computers, by contrast, can perform many related calculations simultaneously and arrive at an overall conclusion in one operation. This ability is deeply concerning for cybersecurity and encryption because it exponentially speeds up public-key and brute-force attacks.

Data encryption and its vulnerabilities
Quantum algorithms, specifically Shor's and Grover's, will supply a way to break both asymmetric and symmetric styles of encryption. Asymmetric encryption, in this case, refers to the key agreement used to set up the encryption, not the encryption algorithm itself. These key agreements allow endpoints to share a public key insecurely and generate private keys to encrypt and decrypt data. This method is widely depended upon because it lets endpoints communicate with no previous knowledge of each other, making it flexible and easy to set up. Note that these agreements are used in various other contexts (e.g., digital signatures), and the following points apply to all of them.

Today, these asymmetric algorithms are safe because the calculations needed to break them take so long to execute that they are not a practical attack vector for cybercriminals. Quantum computers, however, can perform the factorization and discrete-logarithm calculations needed to break asymmetric algorithms at an alarming rate, making them an efficient attack vector. Cybercriminals using quantum computing will target asymmetric algorithms first, not only because they are so widely used but also because a public key can be manipulated into revealing the private key.

Symmetric encryption, by contrast, is widely accepted as more resistant to quantum attacks because there is no public key to turn into a private key. Only a private key exists, and it must be securely stored on the encrypting and decrypting devices or securely provided to the encryption algorithm when needed. This style of encryption is typically compromised by brute-force methods instead.
Even with key chaining, once one key is exposed, the same chaining can be applied to that key to recover the others. Today's brute-force methods are linear operations: conventional computers guess keys one at a time until the data is recognized as decrypted (for reference, this style of brute force would take the Fugaku supercomputer an average of 23 trillion years against an AES-128-encrypted payload). Quantum computers, by contrast, can effectively try encryption keys in parallel and reveal the data in hours instead of years.

Advanced computing requires advanced safeguards
These problems of tomorrow call for a solution unlike anything in use today. There are two strong possibilities: a solution that does not use encryption at all, or one that manages encryption keys in a way that removes quantum vulnerabilities.

The first possibility may seem far-fetched, since anything outside encryption sounds otherworldly, but in theory it would be relatively simple to produce an alternative quantum-resistant technology. Drawing upon successful schemes of the past, which have proven to be among the most secure data protection methods, the one-time pad is a great model to work from. Developed in World War I, the one-time pad takes data protection to the byte level, shifting each byte of data by a different random amount instead of altering an entire piece of data with a single key as encryption does. It uses a sample of random data the same length as the data being secured and XORs each byte of the two data sets to produce a secure third string of bytes. This byte-level replacement of actual data with random data is robust and eliminates the quantum computer's advantage of guessing a key to decrypt an entire payload. Key guessing works only because a payload contains recognizable clues that verify whether a guessed key is correct; brute-forcing at the byte level is astronomically harder than verifying an entire payload because those contextual clues are absent.

The trouble with the one-time pad is that the random data must match the length of the data and must somehow reach both sides for the securing and unsecuring operations. These are difficult challenges, particularly when most modern systems are built on zero-knowledge relationships and session handshaking. However, the push toward zero-trust, full-knowledge environments and the use of secure deterministic random bit generators (DRBGs) make these challenges manageable: they enable secure endpoint relationships that persist between sessions and the simultaneous generation of random data at any length, respectively.
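Here is a minimal sketch of the XOR mechanics just described. It is illustration only: a real one-time pad is secure only if the pad is truly random, exactly as long as the message, never reused, and delivered (or, as proposed below, generated) securely at both ends.

```python
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR each byte of the data with the matching byte of the pad."""
    assert len(pad) == len(data), "pad must match the data length"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"wire $50,000 to account 7"
pad = secrets.token_bytes(len(message))  # one-time, never reused

ciphertext = xor_bytes(message, pad)   # looks like random noise
recovered = xor_bytes(ciphertext, pad) # XOR with the same pad restores it
assert recovered == message
```

Because every byte is shifted by an independent random amount, an attacker who guesses a pad has no way to tell a right guess from a wrong one: every possible plaintext of the same length is equally consistent with the ciphertext.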
The second possibility, managing keys differently, might sound like solutions already on offer, but the popular answers are anything but quantum resistant. This approach takes key generation down below the level of the encryption itself and removes all third parties and humans from the mix. It eliminates the need to share public or private keys by systematically generating random, single-use encryption keys on demand, keeping them only while in use, and never reusing them. Keys can then change with every transmission instead of following the session-based approach of TLS and other protocols. And because the keys are random, they are not derived from the data or from any other key, eliminating the relationships that quantum computing exploits. Creating such a method sounds like a tall order, but again it takes only a shift in perspective for the proposition to seem attainable.

Generating keys via DRBGs would allow two paired endpoints to produce identical encryption keys simultaneously without ever transmitting any key information. Removing the need to send a public key eliminates the largest attack surface quantum computing has against encryption. The difficult piece is how the two endpoints synchronize. Drawing again on the fact that environments are moving toward zero trust with full knowledge, the seeding of the DRBGs could draw on information both endpoints already possess, eliminating credential sharing and handshaking. And if the secure relationship persists between sessions, endpoint registration can be more secure and stringent than today's, since it needs to happen only once, at first use.

These two solutions can even be combined, using byte-level substitution for highly sensitive pieces of data and random key generation for larger, less sensitive data. This combined approach would be highly secure yet efficient, with the flexibility to accommodate any environment.

Readily available quantum computing is on its way, and the time to start future-proofing systems for its arrival is now. Waiting until the first quantum computing breach is too late. Gone are the days when monitoring was enough: quantum computing has the power to obliterate an environment within seconds of discovering a vulnerability, leaving no time for the quarantine-and-respond measures relied upon today. Data needs to be the focus of security, and that security needs to anticipate the power of quantum computing.
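To close, here is a minimal, hypothetical sketch of the synchronized-key idea described above: two endpoints that share a secret seed (established once, at registration) derive the same fresh key for every message without ever sending key material. HMAC-based derivation is my stand-in for the secure DRBG the article describes; a production design would also handle counter synchronization, message loss, and replay.

```python
import hmac, hashlib

class Endpoint:
    """Derives a fresh per-message key from a shared seed and a counter."""

    def __init__(self, shared_seed: bytes):
        self.seed = shared_seed  # provisioned once, at registration
        self.counter = 0

    def next_key(self) -> bytes:
        # HMAC-SHA256 as a simple deterministic bit-generator stand-in.
        key = hmac.new(self.seed, self.counter.to_bytes(8, "big"),
                       hashlib.sha256).digest()
        self.counter += 1
        return key  # single-use 256-bit key; never transmitted

seed = b"provisioned-out-of-band-0123456789abcdef"
alice, bob = Endpoint(seed), Endpoint(seed)

# Both sides independently derive identical keys, message after message.
assert alice.next_key() == bob.next_key()
assert alice.next_key() == bob.next_key()
```

Nothing resembling a key ever crosses the wire, which is precisely the property that removes the public-key attack surface a quantum adversary would exploit.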
Can bad hackers reform and hack for good?

The world of computer technology is full of instances of disruptive activity carried out by people who excel at dismantling systems and peeling back security layers to see how things tick. Before we look at some of history's noteworthy hackers, let's provide some clarity on the terms used to describe them.

White, black and gray hat hackers

Calling back to tropes in old Western films that made it easy to tell the heroes from the villains, hackers are typically divided into three groups.

Black hat hackers engage in illegal and criminal hacking. Some do so for the financial gain to be made from stealing and selling private data, while others vandalize or disrupt targeted networks or websites for political purposes. No matter the incentive, black hat hackers are never officially sanctioned, unless you count those employed by clandestine state actors or ransomware gangs.

White hat hackers, on the contrary, are professionals enlisted by organizations to find security holes and bugs within their networks. They are valued for their unorthodox problem solving and their ability to "think like a hacker" when it comes to penetrating networks. They follow a strict code of ethics and do not access or break into networks or systems they have not specifically been told to test. Many white hat hackers are contracted, although keeping some on the payroll has become increasingly common for large companies.

Gray hat hackers, as their name might suggest, fall somewhere between these two. These hackers are not officially hired, but may break into systems with potentially good intentions. Sometimes they will hack a network after finding a vulnerability and then ask the organization for a fee to fix the exploit, which can seem a bit like extortion. If a company does not comply, a gray hat hacker may sell the instructions for hacking the targeted network, or simply release them to the internet.

Who are some bad hackers gone good?

Some of history's most notorious "black hat" hackers have turned the page and decided to use their expertise for good.

Robert Tappan Morris

Robert Tappan Morris is best known for creating the "Morris Worm," widely considered the first computer worm to spread across the internet. While his intention was to write code that would let him measure the size of the internet, his software bogged down computers by continually re-infecting them. Morris was fined $10,000 and ordered to perform 400 hours of community service for his actions. Today, Morris works at MIT and co-founded Y Combinator, a startup accelerator that has helped launch companies including Airbnb, DoorDash and Reddit.

Mark Abene

Mark Abene, hacker alias Phiber Optik, was a founding member of a hacker group called the Masters of Deception. While his work in the group was not intended to be malicious, his poking into unauthorized systems garnered the attention of the FBI. After his eventual arrest, he started a short-lived cybersecurity company before becoming a cybersecurity consultant.

Kevin Mitnick

Kevin Mitnick led authorities on a wild goose chase after successfully stealing software and breaking into unauthorized systems. He often used social engineering tactics to encourage people to simply hand over login credentials. Mitnick was eventually caught and served five years in prison.
Since his release, he has founded a cybersecurity company, Mitnick Security Consulting, LLC, and written books about his time as a hacker as well as his run from the authorities.

Kevin Poulsen

Best known for hacking the telephone lines of a Los Angeles radio station to win a Porsche 944 S2 Cabriolet, Poulsen used the online handle "Dark Dante." He landed on the FBI's most-wanted hackers list after he accessed federal networks and stole wiretap information. He was eventually arrested and served time in jail. Since his release, Poulsen has become a highly regarded investigative security journalist whose reporting helped law enforcement identify 744 registered sex offenders lurking on the social media network MySpace. He has since been a senior editor for Wired News, but still managed to find himself in the hot seat after doxing an individual in 2019. He also helped develop SecureDrop, a platform designed to allow secure communication between journalists and their sources.

Steve Wozniak

Steve Wozniak is famously known for having co-founded Apple along with Steve Jobs. However, he started his computer career creating what were referred to as "blue boxes." These devices were able to hack phone lines and allow people to make long-distance calls for free. He and Jobs sold the illegal boxes to their college classmates. Wozniak left his illegal hacking days far behind him: he has written books, consulted on a wide range of topics in the industry, and engaged in a large amount of philanthropic work. He also created Woz U, a training platform for software engineers.

"Mr. White Hat"

"Mr. White Hat" is the name given to an anonymous hacker by Poly Network, the decentralized finance platform from which they stole more than $600 million in crypto in August of 2021. The day after stealing the funds, the hacker returned about half of it with a message insisting they had intended to expose a vulnerability within Poly Network's security. In what may have been acts of desperation, Poly Network first offered the hacker a "bug bounty" of $500,000 in exchange for the entirety of the stolen funds, then upped the ante by also offering them the role of chief security advisor within the company. While the hacker publicly turned down both offers, all of the funds were eventually returned, and the promised bug bounty money did end up being moved into an account for "Mr. White Hat." Given their tactics, the individual in this case would likely be classified as a gray hat hacker, but the recency of the event, combined with Poly Network itself referring to them as "Mr. White Hat," makes them a notable, current addition to the list.

How to stay safe from hackers

Cybercrime rates have reached an all-time high and show little sign of slowing down. While big names like Lapsus$ Group and REvil steal the headlines by attacking high-profile companies, the fact remains that small businesses are targeted more frequently and can find it very difficult to recover after a hack. Follow these basic cybersecurity rules to help ensure that your network is not targeted by hackers:

- Practice excellent password hygiene. Using a password generator can help you create random, hard-to-guess login credentials (a minimal example follows this list). Never use the same password across more than one device or account.
- Be sure that your staff has an understanding of phishing and social engineering tactics.
- Bookmark and follow cybersecurity news blogs and online cybersecurity resources.
- Keep your entire system updated, from your OS to your hardware. You can affordably update old hardware by purchasing refurbished equipment from a reputable supplier.
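The password-hygiene advice above is easy to act on. As a minimal illustration (not tied to any particular product), Python's standard-library `secrets` module can generate credentials that are effectively unguessable:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Draw each character from a cryptographically secure source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # different on every run, e.g. 'p#7Vq...'
```

A 20-character password over an alphabet this large gives far more combinations than any realistic brute-force attack can cover, which is the point of using a generator instead of a memorable phrase.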
Web applications attacks/Weak encryption

Many web applications protect their access with an encryption/authentication mechanism. Be careful to apply a strong mechanism: weak encryption can easily be reverse-engineered. An encryption scheme is qualified as "weak" when its output is easily predictable.

- WebGoat, Insecure Client Storage lesson shows how to crack a client-side weak encryption mechanism.
- WebGoat, Spoof an authentication cookie is another example of a predictable session due to a weak encryption mechanism.
- HackThisSite.org, Basic, Level 6 shows how to reverse-engineer a weak encryption mechanism to decrypt a password from its encrypted form.
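To see why predictability is fatal, consider a toy scheme of the kind these lessons target: "encrypting" by XORing every byte with a single secret byte. A hypothetical attacker recovers the key with at most 256 guesses, because recognizable structure in the plaintext confirms the right guess:

```python
def xor_cipher(data: bytes, key: int) -> bytes:
    # The same operation encrypts and decrypts (XOR is its own inverse).
    return bytes(b ^ key for b in data)

ciphertext = xor_cipher(b"admin:hunter2", key=0x5A)

# Brute force: 256 candidate keys is nothing for any computer.
for guess in range(256):
    candidate = xor_cipher(ciphertext, guess)
    if b"admin" in candidate:        # recognizable structure gives the key away
        print(guess, candidate)      # -> 90 b'admin:hunter2'
        break
```

Real weak schemes (hard-coded keys, short keys, home-grown ciphers) fall to the same style of analysis, just with a little more effort.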
Routing protocols are divided into the following three classes, depending on how they work:

1. Distance Vector – Distance vector protocols are characterized by two things:
- They use distance as a measure of the cost of a route. The number of hops between a router and a destination network determines the distance.
- They periodically send their entire routing table to the neighboring routers. The receiving router then merges its routing table with the received information based on administrative distance (AD) and metrics. This process is called routing by rumor, since the receiving router believes the information received from the neighbor.

Distance vector protocols are slower to converge. (A network is considered converged when all routers in the network know about all destination networks.) They are relatively easy to configure, manage and troubleshoot. On the other hand, they consume more bandwidth and CPU because they periodically send out the entire routing table, regardless of whether anything has changed in the interim. RIP is an example of a distance vector protocol. A minimal sketch of a distance vector table update follows this list.

2. Link State – Link state protocols are characterized by the following things:
- They form a neighbor relationship with other routers before sharing routing information; they do not flood routing information to the entire network as distance vector protocols do. Information about their neighbors is stored in a neighbor table.
- They exchange only connectivity-related information, or link states, unlike distance vector protocols, which send out routing tables. This information is stored in a topology table to construct a full view of the network.
- Based on the link states received, each router calculates the best path to every destination in the network. Each protocol has its own algorithm to calculate the best path.
- Link state updates are sent out only when there is a change, instead of periodically as with distance vector protocols.
- Link state protocols converge faster than distance vector protocols.

Link state protocols are a little more complex to configure, manage and troubleshoot than distance vector protocols. OSPF is an example of a link state protocol.

3. Hybrid – Hybrid protocols use aspects of both distance vector and link state protocols. EIGRP is an example of a hybrid protocol.
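To make the "routing by rumor" behavior concrete, here is a minimal, hypothetical sketch (in Python, with hop count as the metric, as RIP uses) of a router merging a neighbor's advertised routing table into its own:

```python
# Local table maps network -> (hop count, next hop).
def merge_routes(local: dict, neighbor: str, advertised: dict, link_cost: int = 1) -> dict:
    """Believe the neighbor's advertisement if it offers a shorter path."""
    for network, hops in advertised.items():
        candidate = hops + link_cost                    # distance through this neighbor
        best = local.get(network, (float("inf"), None))[0]
        if candidate < best:
            local[network] = (candidate, neighbor)
    return local

table = {"10.0.0.0/24": (0, "directly connected")}
table = merge_routes(table, "RouterB", {"10.0.1.0/24": 1, "10.0.2.0/24": 2})
print(table)
# {'10.0.0.0/24': (0, 'directly connected'),
#  '10.0.1.0/24': (2, 'RouterB'),
#  '10.0.2.0/24': (3, 'RouterB')}
```

Note that the receiving router accepts the advertisement on faith; it has no view of the topology behind RouterB, which is part of why distance vector protocols converge slowly.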
What is Document Capture? How is it Different Than Document Scanning?

Without a document management system, document capture is the process of opening the mail, routing a document to the appropriate person in your organization, and then having that person file it. With document management, capture of paper documents is the process of scanning those documents with a document scanner, turning each one into a digital document. When done correctly, these digital copies can be used as the legal original. Later, we will discuss the legal aspects of preserving documents digitally, but for now let's focus on the capture process.

Scanning documents creates an image of the document in a format that can be viewed online. Most people are familiar with PDF files, but that is just one format; others are used in document capture solutions too, and every format has its benefits. For example:

- TIF format: most scanned documents are turned into TIF format, which compresses documents so they do not take up a lot of storage while enabling them to be indexed at the page level. So if you had a 100-page contract to scan, TIF would be a better format than PDF because it indexes better.
- PDF format: if the contract was created in MS Word, and you created the PDF directly out of Word, the PDF would be smaller than a scanned TIF file. Further, a PDF created from a Word document will usually be a better-quality image than a scanned document.

As with all things, there are pluses and minuses to every choice. While we have discussed scanning paper documents, keep in mind that not all documents are black and white. You might scan a picture. Think of a contract for a piece of property: it might have pictures of the property as exhibits. Pictures might be scanned as TIF documents, but they could also be JPEG files or other formats. Again, different formats have different benefits.

Document capture is not just the process of scanning paper documents; it also takes in electronic documents. In the example above, we referenced creating a PDF from an MS Word document. Keep in mind the Word document is already an electronic document. You might want to have the original Word document or a PDF in your document management system for future reference. This leads to the concept of importing electronic documents into your system.

Remember that the power of a document management system is the ability to tie all your documents together. In addition to file cabinets of paper documents, you have large numbers of important electronic documents. These electronic documents are just as important as the paper files, maybe even more important in many instances. Importing documents is a different process than scanning. However, you may choose not to import various electronic records. For example, records in your accounting system will probably not be imported into your document management system. Yet these electronic records are critical to a successful and efficient document management system. There are two ways to accommodate them. One way is to link to the accounting system to pull in the data from these records. The more common way is to have the accounting system link to the document management system and display the information needed.
That way, in this example, the AP team uses the accounting system for their work, and when they need the supporting documentation, the accounting system requests it from the document management system. Later we will discuss implementing a document management system and tying it to your existing applications. For now, we need to explore how documents are indexed for storage and retrieval.
Graphic user interface (GUI) for smart management and visibility

"Large" has many dimensions. Commonly, we think of a large company as one with a significant number of employees. But describing a company as large can also signal a widespread physical footprint, like hundreds of telecom towers or thousands of gas wells. Even businesses that don't sprawl across hundreds of miles can have hundreds of pieces of hardware in operation to keep track of, as a hospital might. Company magnitude extends in another direction as well: time. In this sense companies, like fish or trees, are constantly growing larger as they grow older.

Why does this matter? Because larger companies are more difficult to monitor and manage. A company that has been installing equipment for the last 30 years will have a vast range of equipment styles and capabilities to rationalize. As you know if you have teenagers, being years apart means you may not even speak the same language.

Developing and implementing successful company strategies depends on accurate, comprehensive information about a company's operation, and the larger a company is in every dimension, the harder that information is to gather and act on. Company managers must find ways to make the information they process more easily visible. In particular, instead of dealing with a scattered Babel of hardware languages squawking from the four corners of the wind, large companies can use uniform SNMP in networking and monitoring their hardware.

SNMP stands for Simple Network Management Protocol. It's a hardware Esperanto, a lingua franca, a common tongue shared by many different types and ages of equipment. Not every piece of equipment can transmit information in SNMP, but many can. Those which can't would instead transmit to central master stations, which convert Modbus, DNP3, or other machine dialects into the common SNMP protocol.

What's the benefit of having so many pieces of hardware speak the same language? It allows them all to be monitored and viewed from a central location, so a company with widespread infrastructure can see its entire operation on a computer screen. With access to this comprehensive, accurate information, company leaders can develop, implement, and monitor successful management and maintenance strategies.

This information can significantly improve maintenance results, reducing expensive downtime. Depending on the equipment, downtime can have expensive primary and secondary consequences. Companies often rely on pre-scheduled preventative maintenance to assign their limited maintenance resources. This is most effective for maintenance issues that fall in the fat center of the bell curve, not for outlier issues. Monitoring hardware via SNMP improves this strategy in two ways: it surfaces the outlier issues that fixed schedules miss, and it lets maintenance be directed where equipment conditions show it is actually needed.

As the Internet of Things becomes bigger and bigger, more and more equipment will transmit its own sensor information in SNMP. The only step company managers need to take in this case is to ensure that new equipment actually does this and can interface with the company's existing network. To simplify communications, this equipment should transmit information to master stations, which can retransmit data to the alarm mediation master. Older equipment may transmit information on its own, just not in SNMP. For this equipment, as previously discussed, the best method is to arrange transmission to a master station.
The master station will translate and then re-transmit to the alarm mediation station, where the data can be viewed. Some equipment, older and occasionally even modern, won't transmit any information at all. To monitor this equipment remotely, separate devices known as Remote Terminal Units (RTUs) are required. RTUs can be configured with a wide range of sensors relevant to equipment conditions, such as temperature, humidity, battery levels, tank levels, or vibration. Rather than constantly broadcasting a sea of noise, RTUs keep communications streamlined by transmitting only important events to master stations. Doing so allows the central alarm mediation master to update its information solely from master stations, saving processing power. And, for greater visibility, some master stations even offer graphic user interface (GUI) capabilities that let management view geographical details on maps and charts.

Companies with older equipment, which transmits in different protocols or not at all, can still implement comprehensive monitoring by using RTUs and master stations to transmit and convert relevant equipment data. Using SNMP in networking for large company operations improves the ability of managers to strategize effectively and simplify communications.
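For scripted polling of SNMP-capable gear, the widely used third-party pysnmp library offers a high-level API. The sketch below (pysnmp 4.x synchronous API; the device address and community string are hypothetical) reads a device's sysDescr object, the standard self-description every SNMP agent exposes:

```python
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

# Poll sysDescr.0 (OID 1.3.6.1.2.1.1.1.0) from a device at 192.0.2.10.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),          # SNMPv2c community string
        UdpTransportTarget(("192.0.2.10", 161)),     # standard SNMP UDP port
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),
    )
)

if error_indication:
    print("Polling failed:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```

The same pattern extends to temperature, battery, or tank-level OIDs exposed by an RTU, which is what makes a uniform protocol so convenient for central monitoring.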
KLEZ_WORM, denial of service, NIMDA: the web server system has been corrupted yet again. Will it ever end? The news is filled daily with horror stories about companies that have been crippled by virus attacks and network security breaches. Ever wonder why some are seemingly never affected by security attacks, while others are plagued constantly?

I am concerned. Is there anything that I can do to stop attacks? Yes! You are not helpless. "In fact, if you follow some best practices you will block 80-90% of the attacks immediately," says Dee Liebenstein, Senior Product Manager, Symantec Security Response Team. Learn something about network and computer security threats, then practice good security hygiene, and you will have cut your risk considerably.

According to www.webopedia.com, "The pejorative sense of hacker is becoming more prominent largely because the popular press has co-opted the term to refer to individuals who gain unauthorized access to computer systems for the purpose of stealing and corrupting data. Hackers, themselves, maintain that the proper term for such individuals is cracker." Hacker or cracker, either way they can be bad news for your important company data.

Until quite recently, software was not generally built with security in mind. Although the government has been requiring security in computer systems for years, the majority of companies and individuals did not make it a priority. Why? Unless it is carefully designed, it is very difficult to build security that is not intrusive to the user. Think of how many passwords you are required to remember nowadays. How many of you have given up and keep them in a file on your computer? Enough said.

You might be tempted to blame Microsoft for creating the problem because their software is so full of vulnerabilities. Don't. Almost all commercial software has security holes. So many people use Microsoft products that they make an obvious target. If you are a wily hacker and you want to wreak the most havoc on the computer world, why bother writing a virus for Star Office? Yes, there are hardy souls who still use that software, but would anyone else notice or care?

Back in 1988, in the early days of the internet, the Morris Worm was unleashed on the unsuspecting networked computer community. Although it was intended as a warning that such things were possible (little did we know in those days), it was taken very seriously by law enforcement at the time. Since then, the number of attack methods and possibilities for system compromise has grown exponentially. The threats fall into three main categories: viruses, intrusion, and "denial of service" attacks directly on your network service.

Viruses and worms

What are viruses? They are pieces of code that take advantage of a vulnerability or "hole" in the system or application software itself. Some distinguish a worm as a special type of virus that replicates itself and uses memory, but cannot attach itself to other programs. "But," according to Dee Liebenstein, "from a systems perspective think of worms spreading from machine to machine, while viruses spread from file to file. Most of the things that we call viruses today are really worms."

Most people are familiar with viruses because they tend to affect users' personal computers directly. Viruses range from the merely annoying, like the recent "X97M.Ellar.E," an MS Excel macro virus, to the extremely destructive, like "W32.KLEZ.H@MM," a KLEZ worm variant which insinuates itself into your system and spreads through e-mail address book listings.
"Symantec analyzes an average of 10 new viruses a day," says Liebenstein. www.cert.org, www.viruslist.com and www.sans.org are all excellent sources of current information about viruses and worms. In addition, all the commercial virus protection products maintain sites with the latest information and software updates.

Denial of service

Recently my company website had so much traffic that many customers could not get to it. A great business success, or a "denial of service" attack? Sometimes it is hard to tell the difference. Attackers target vulnerable systems by sending literally millions of "hits," using up limited computer or network resources and blocking legitimate users from the systems. The original CodeRed virus had a payload that caused a denial of service attack on the White House web server. These attacks are particularly difficult to stop or prevent.

Have you checked your website lately? Does it still have the content that you put there? "Website defacement is the most common type of attack. It accounted for 64% of the attacks reported, by far exceeding proprietary information theft at 8%. According to Attrition.org, the number of recorded defacements has recently increased to a current average of 25 defacements per day! London shopping emporium Harrods recently suffered website defacement. A hacker mapped out where in the store certain 'items' could be bought, including the unlikely product, cocaine," says Iain Franklin, European Vice President of Entercept Security Technologies.

According to the CERT Coordination Center, part of the Software Engineering Institute at Carnegie Mellon University, "an intruder may use your anonymous ftp area as a place to store illegal copies of commercial software, consuming disk space and generating network traffic which may also result in denial of service."

If all this is not enough, the latest weapon in the hacker arsenal is the blended threat, which uses multiple methods to attack or propagate. The most insidious part is that they are automated; that is, they require no human intervention to propagate. The usual method is co-opting your e-mail address list and sending copies of itself to everyone, but there are now viruses that can embed themselves into unsuspecting company websites and attack customers when they visit the site.

Some of these blended threats are downright nasty. "Backdoor.Sadmind is a backdoor worm program that may affect systems that are running unpatched versions of Microsoft IIS or Solaris. Lion is a worm that exploits a well known vulnerability in BIND to gain privileged access to Linux systems. Once it has obtained access, Lion runs a 'rootkit' to hide its presence, and then proceeds to search for other vulnerable systems. A software update is available for BIND, but many systems remain vulnerable, allowing Lion to spread. CodeRed II has a payload that allows the hacker full remote access to a Web server," states Liebenstein.

Preventing these threats requires special security practices in addition to the traditional ones. Now that we have reviewed many of the potential threats to your network and systems, next issue we will discuss methods of reducing the threat using a combination of software, vigilance, and good security practices.

Beth Cohen is president of Luth Computer Specialists, Inc., a consulting practice specializing in IT infrastructure for smaller companies.
She has been in the trenches supporting company IT infrastructure for over 20 years in a number of different fields, including architecture, construction, engineering, software, telecommunications, and research. She is currently writing the book IT for the Small Enterprise and pursuing an Information Age MBA from Bentley College.
In this week's Voices of the Industry, Earl Keisling, co-founder and CEO of Inertech LLC, the data center infrastructure technology division of Aligned Energy, discusses why data center water use matters, and what can be done to make our use of this essential resource sustainable.

U.S. data centers consumed 626 billion liters of water in 2014 and are on track to consume 660 billion liters by 2020. That is according to the recent United States Data Center Energy Usage Report, based on research conducted by the Lawrence Berkeley National Laboratory in collaboration with researchers from Stanford University, Northwestern University, and Carnegie Mellon University.

It is easy to see why data centers' water consumption is huge and rising: demand for the services the data center provides (compute, network, and storage) is increasing exponentially. Annual global Internet traffic, a reasonable proxy for compute load trends, increased fivefold between 2010 and 2015 and, according to Cisco, will have exceeded one zettabyte by the end of 2016. Just four years later, it will have doubled.

Data center water use affects the planet, yes, but also companies' financials

Already, water scarcity affects 4 billion people, according to research by Mesfin M. Mekonnen and Arjen Y. Hoekstra of the University of Twente in the Netherlands. Water crises rank just behind climate change and weapons of mass destruction as the most impactful risk for the years to come, according to the World Economic Forum's Global Risks Perception Survey. In this kind of environment, rising data center water consumption is simply not sustainable.

Because of the impact data centers' water consumption has on an already water-stressed planet, investors are now paying attention to companies' "water footprints" just as they pay attention to their carbon footprints. As Himani Phadke, research director at the Sustainability Accounting Standards Board, a non-profit that writes corporate sustainability reporting guidelines for investors, said in a recent Bloomberg article, "Operational efficiencies at data centers have a direct link to companies' profitability and pose an increasing risk for investors in a 'tense' climate change environment."

In the same article, William Sarni, director and practice leader of Water Strategy at Deloitte Consulting LLP, explained that investor concern about corporate water use will only continue to grow: "Over the past few years, we have seen a dramatic increase of interest in water as a business risk and also as a business opportunity issue. I see it accelerating."

What can be done to reduce data centers' water impact

Exponentially rising demand for data center services leads to rising water consumption by data centers. Add worldwide water scarcity and rising investor concern about companies' data center water use, and the result is a very clear and immediate need for data centers to use less water.

Data centers can do that, first and most obviously, by drawing less water directly. By far the largest opportunity for direct water savings in the data center is the cooling system. Perhaps less obviously, there is an even greater opportunity for indirect water savings: using less power. Of the 626 billion liters of water consumed by data centers in 2014, about 80% was used in the generation of power for the data center.
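A quick back-of-envelope split of those two figures shows where the leverage is; the arithmetic below simply applies the 80% share reported above:

```python
total_liters_2014 = 626e9      # total US data center water consumption, 2014
indirect_share = 0.80          # share embedded in electricity generation

indirect = total_liters_2014 * indirect_share
direct = total_liters_2014 - indirect
print(f"Indirect (power generation): {indirect / 1e9:.0f} billion liters")  # ~501
print(f"Direct (on-site cooling):    {direct / 1e9:.0f} billion liters")    # ~125
```

In other words, roughly four liters are consumed upstream at the power plant for every liter used on site, which is why cutting a data center's power draw saves even more water than improving its cooling system directly.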
I can remember very clearly standing at a job site next to the enormous cooling towers and thinking, "There has to be a better way." My co-founders and I worked hard to innovate that better way. We completely rethought the approach to data center cooling, and the result is a system that uses up to 85% less water and 80% less power than a traditional cooling system.

Instead of the traditional water- and power-guzzling chiller plant, a CACTUS® cooler is an air-cooled adiabatic rejection system comprised of two major components: a fluid cooler with an indirect evaporative cooling mode, and a compressor trim unit. The design combines both condenser heat rejection and water-side economization functions in the same product. In contrast to traditional chillers, this system relies on free cooling most of the time, even in hot climates. When temperatures can't support 100% free cooling, the system makes use of indirect evaporative cooling and trim compression. The cycle consistently and effectively manages water and compression power consumption.

Inside the data center, instead of a traditional forced-air system, we've close-coupled heat sinks above the racks to absorb the heat at its source, allowing hot air to rise and cooler, denser air to settle where it is needed. A heat sink draws the hot air from the servers, passes it across coils and "neutralizes" the heat without chilling it, sending 75-77°F air to the server inlets at the front of the enclosure. Removing heat at its source takes far less energy than making outside air cold and blowing it into the data center to mix with the hot air there, and it is more effective.

…and maintain reliability

Key to any measure that reduces water and power consumption in the data center is doing it without sacrificing reliability. Until recently, there haven't been technologies available that allow data centers to be more efficient and maintain their reliability. Inertech's solution has proven to be even more reliable than a traditional forced-air/chiller plant cooling system.

The bottom line

Water use is huge and growing. And it matters, to all of us. Two-thirds of the global population already face water scarcity, and it is only going to get worse. It affects companies' financials, too, as investors are beginning to ask about companies' "water footprints" just as they ask about their carbon footprints. Fortunately, there are innovations that can dramatically reduce data centers' water consumption without sacrificing reliability.

Earl Keisling is a co-founder and CEO of Inertech LLC, the data center infrastructure technology division of Aligned Energy. He has been awarded multiple patents and is a master mechanical engineer with more than 30 years of experience designing and building large, complex construction and infrastructure projects.
A recent NPR report detailed an issue that is prevalent throughout the broadcast industry but not often talked about: broadcasting towers are killing birds. If you've driven around at night, you have likely seen one of these towers lit up with bright red lights. The lights are there to warn low-flying aircraft of these giant obstructions, but the birds haven't had the same luck.

According to NPR, a broadcasting tower in Gun Lake, Michigan was responsible for the deaths of 2,300 birds in a single night. Caleb Putnam, who holds a joint position with the Michigan Department of Natural Resources and Audubon Great Lakes, told NPR that, for reasons scientists long couldn't figure out, birds just keep flying into broadcast towers. Multiply that number by the thousands of towers that exist in the United States and you end up with an absurd number of dead birds. In the U.S. alone, an estimated 7 million birds die from hitting broadcasting towers every year. It's a serious problem.

That night in Michigan took place in 1976, and scientists had no idea why these birds kept dying until 2003, when biologist Joelle Gehring started a study to find out what could be done. It was a grueling study, often involving Joelle and her colleagues visiting various towers during migration season to count dead birds. They discovered the cause, and it's as simple as it is surprising: lights.

"We were able to reduce the numbers of bird fatalities on communications towers by simply extinguishing those non-flashing lights," Joelle told NPR. "Those fatalities were reduced by as much as 70 percent."

While there is no definitive answer as to why these lights cause birds to fly to their deaths, Joelle does have a theory. "Some research has documented that when birds are exposed to long wavelengths of light such as red or white that it actually interferes with their ability to use magnetic fields for navigation," Gehring says.

In 2015, the Federal Aviation Administration (FAA) changed the regulations for new towers, requiring that every new tower be built with only flashing lights, and it's working. While there are still thousands of towers that are not bird friendly, broadcasting companies are taking steps to help the bird population. It's a fascinating problem, with a simple solution.
You're browsing the web on a trusted website. You click on a regular-looking link. Suddenly, the website changes on you. A splash screen appears. You read it, a pit forming in your stomach as you see the skull above the text. 'We have encrypted all of your files. The only way to get your files back is to buy our decryption software. You have 94 hours to pay us, or your files will be permanently destroyed.'

How did this happen? You were just surfing the net, and you weren't even on a suspicious website! The answer? You've been hit by an exploit kit.

So, What Exactly Is An Exploit Kit?

An exploit kit is a hacking toolkit that cybercriminals use to take advantage of your old, outdated software. Like a big cat on the savannah, they watch the herds of traffic going by… then pounce on the weaker members that lag behind.

To use less metaphor, it's a hacking toolkit that provides several different kinds of trojans, malware, and ransomware, all tailored to 'exploit' a vulnerability in your existing software. Typically, exploit kits target popular products such as Adobe Flash, Java, and Microsoft Silverlight. When you access a website hosting one, the exploit kit analyzes the software installed on the system and devices used to access it. If any unpatched software is detected, that's when it attacks.

How Normal Websites Hide Exploit Kits

To subject an internet user to scanning with an exploit kit, they must be tricked into clicking on a link. This is done by spamming them (forcing them to click to get rid of pop-ups) or through social engineering lures. Once the link is clicked, the user's traffic is redirected to the exploit kit, which scans the user. Depending on the prey, it delivers an appropriate payload.

Exploit kits are frequently combined with malvertising: malicious advertisements on legitimate webpages. Since these malware-laden ads run on top-tier news, entertainment and political commentary sites, a user's guard is down. The tactic preys on the fact that many users think bad links exist only on bad websites… and on out-of-date software.

Phishing Education and Exploit Kits: Only A Half Fix

Unpatched software is just like an open door to your house. Sure, you could blame the people on the street who would happily come in and loot the place… or you could lock your door. They can only exploit you because there's a vulnerability to exploit in the first place! Since any link can lead to an exploit kit, phishing can take place via e-mail, social media, advertisements, pop-ups, etc. That's a lot of bait left out, and all it takes is one click to put your servers in hot water.

Preemptively Defending Your Data

Even if you keep your software completely up to date, the tried and true way to protect your data is to double up on it. Use a 3-2-1 backup strategy to make sure your data is secure. BackupAssist has this sort of scheduling available, and several others, to protect your data. And while you're at it, ask yourself this: is the software you're using to read this article completely up to date?
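Since exploit kits fingerprint outdated software, you can run the same check on yourself before they do. This is a hypothetical sketch (the product names and version numbers are invented for illustration) of a patch audit comparing an inventory against minimum safe versions:

```python
# Minimum versions that close known exploited vulnerabilities (illustrative).
MINIMUM_SAFE = {
    "flash-player": (32, 0, 0),
    "java-runtime": (8, 351),
    "silverlight": (5, 1),
}

# Example inventory of what is actually installed on a machine.
installed = {"flash-player": (31, 0, 0), "java-runtime": (8, 351)}

for name, version in installed.items():
    required = MINIMUM_SAFE.get(name)
    if required and version < required:        # tuples compare element-wise
        print(f"VULNERABLE: {name} {version} is below required {required}")
# -> VULNERABLE: flash-player (31, 0, 0) is below required (32, 0, 0)
```

In practice, an automated patch-management tool does this continuously; the point is that closing the version gap removes the vulnerability the exploit kit needs.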
The ethical aspects of cybersecurity are not a big part of the curriculum of IT security students, but they should be. Cybersecurity is more than technology: hacks and breaches can have a profound impact on people's privacy or even their professional careers. With technical cybersecurity and network consultant Fons Quidousse, we discussed how ethics should get more attention and why Management should be its sponsor.

A key aspect of cybersecurity is the protection of privacy and confidentiality of data. What's your view?

Fons: We often talk about the CIA triad, where CIA stands for Confidentiality, Integrity and Availability. If any of these three elements is breached, we talk about a hack, regardless of whether the breach was intentional or not. As companies are increasingly interconnected with one another, a breach at one organization will have an impact on other organizations too. Leaking an email password, for instance, can have much wider ramifications than just for the company where the password was leaked. This applies to any hack. Just look at what happened when IT security was breached in a school in Kortrijk recently: the school had to be closed for several days to give the IT department the time to get everything up and running again.

Ethics vs revenue

Are organizations alive to the wider ramifications of breaches?

Fons: The situation is not perfect yet, of course, but we are making progress. In the domain of privacy, for instance, the GDPR legislation has spurred companies to get their house in order. It would be better, I think, if organizations did not wait for government to impose rules and regulations. Being proactive can be rewarding. Ethical questions are never easy for businesses that tend to focus on revenue and profit. This is not just the case for cybersecurity; you see the same thing when it comes to climate change. That's why, I think, any security audit needs to take a broad view. If you're conducting an audit in a hospital, the auditor needs to highlight the consequences of spending too little on IT security.

Perhaps we should alert Management to the cost impact of a security breach? Should we monetize the ethical consequences?

Fons: You are right in stating that Management should care more. Cybersecurity is an issue that should be discussed at management level and championed by Management. I think you should make it more tangible, for instance by looking at a personal use case: what happens if someone can't pay the rent because his company was hacked and had to shut down for a long period? What happens if a patient does not get medication on time? That will get people thinking. On a second level, you need to look at the cost you can avoid by having proper IT security. That's a risk analysis: the cost of updating security infrastructure versus the cost of a hack or a ransomware attack. On a third level, you manage the long-term impact of cybersecurity, for instance by instilling a culture of security where employees feel better in a company that clearly demonstrates that cybersecurity is a corporate value. This is more difficult to calculate, but employee experience is a key parameter.

Is the cost of failed cybersecurity easy to measure?

Fons: Some costs are easy to quantify. If a ransomware attack shuts down a factory for a couple of weeks, you know what revenue you are missing.
Other aspects are less easy to measure in numbers. How do you measure a tarnished reputation? A cyberattack that is widely publicized will cause reputational damage. The extent of the damage may differ from one industry to another, and a good communication plan can limit it.

Another ethical question on ransomware is whether you should pay up or not. What do you think?

Fons: Ha, that is literally the one-million-dollar question. I don't have a clear-cut answer. As an outsider, it is easy to say that you should not pay up. Paying is an incentive for criminals to continue their line of business. It's like not negotiating with terrorists, right? But if a ransomware attack shuts down your company, I understand that it's tempting to pay up and hope you get the decryption keys. I can only repeat that setting up proper security is the best way to avoid ever having to pay. Make sure your cybersecurity is optimal and ensure you have the right backups, stored in an external location. That's the best insurance against ransomware.

Finding the right balance

Sometimes there's a bit of a dilemma in cybersecurity: if you are monitoring everything on your network, couldn't it happen that you are actually going further than what privacy allows?

Fons: It's a very thin line indeed, but that is no different in the virtual world than in the physical world. If the police conduct a search of someone's home, that's also a breach of privacy; a necessary one, perhaps. Such a search will only happen when there are indications of a crime. In the real world, we have more experience in finding the right balance than in the virtual world: there's more experience and a better legal framework. It is less clearly defined what actions a security administrator can or can't perform. On the other hand, as an end user, you should not assume that an IT administrator is watching your every move and recording it. It's a question of mutual trust, I guess. Companies implement security to do right, not to do wrong. Of course, an IT administrator can overstep his privileges, just like a police officer could make improper use of his service weapon.

On the same topic, there's also the question of a Data Protection Officer (DPO) or Chief Information Security Officer (CISO) stumbling upon illegal business practices when auditing a company. Is it clearly defined what needs to be done? Should a DPO act as a whistleblower?

Fons: A DPO will always take a broad view, looking at data classification, how data is stored, and so on. In an ideal world, illegal activities will be prevented. If it does happen anyway, an organization will need to decide whether it is something that can be solved internally or whether the authorities need to get involved. That's why the use of an external DPO is always a good thing: an external DPO need not fear losing their job by acting as a whistleblower.

Simac takes a broad view of cybersecurity

Do you see it as Simac's role to keep insisting on the ethical aspects?

Fons: We should. And I think it differentiates us in the market. Anyone who has worked or partnered with Simac knows that we always look beyond the technology. It is in our DNA to take a broader view and not just reply to technical requirements. We put ourselves in the shoes of the customer and try to see what is important to them in the long run.
That is typically the relationship that we build with our customers.

Fons Quidousse is a technical cybersecurity and network consultant at Simac ICT Belgium. He helps his colleagues and clients find the best possible solutions for challenges concerning networks and security. Fons has been working at Simac since October 2020.
A Data Flow Diagram provides a way for an organization (typically an IT or engineering group) to understand, perfect, or implement a new process or system. It is a way to visually represent how data flows through the system. There are two types of Data Flow Diagrams: logical and physical. A logical diagram explains where the data comes from, where it goes, and how it changes. A physical diagram explains how the data moves through the system, for example how a data file is copied to a different location.

A Data Flow Diagram is composed of four types of elements:

- External Entity: an outside system or process that sends and receives data to and from the system. External entities are typically either the source or the destination of the data. In Bluescape's template, the green diamond object represents this concept.
- Data Store: a holding area where information is kept for later use. In Bluescape's template, the purple square object represents this concept.
- Process: a procedure that manipulates the data and its flow by taking incoming data, changing it, and producing output data. In Bluescape's template, the blue rectangle object represents this concept.
- Data Flow: the path the information takes from external sources through processes and data stores. In Bluescape's template, connectors between objects represent this concept.

There are several best practices to follow when creating a Data Flow Diagram (a small validation sketch follows below):

- Each process should have at least one input and one output.
- Each data store should have at least one data flow (connector) in and one data flow out.
- A system's stored data must go through a process.
- All processes in a diagram must link to another process or a data store.
- External entities are typically placed at the edges of the data flow diagram.

To use the Data Flow Diagram Template, identify the process you would like to map, then place the template in your workspace. Use the sample objects to model the process, adding connectors between objects to show the data flow. Change the sample text within each object to give it a descriptive name, and label each connector to capture the data flow details. Continue building your diagram, following the best practice guidelines listed above.
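As a small illustration of the first two best practices, the hypothetical sketch below represents a diagram as a list of flows and verifies that every process and data store has at least one flow in and one flow out:

```python
# Each flow is a (source, destination) pair.
flows = [
    ("Customer", "Validate Order"),   # external entity -> process
    ("Validate Order", "Orders DB"),  # process -> data store
    ("Orders DB", "Ship Order"),      # data store -> process
    ("Ship Order", "Customer"),       # process -> external entity
]
checked = {"Validate Order": "Process", "Ship Order": "Process", "Orders DB": "Data Store"}

for node, kind in checked.items():
    has_input = any(dst == node for _, dst in flows)
    has_output = any(src == node for src, _ in flows)
    status = "OK" if has_input and has_output else "missing a flow in or out"
    print(f"{kind} '{node}': {status}")
```

Running a check like this against an exported diagram is one way to keep larger diagrams honest as they grow.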
IBM is preparing for a Martian invasion of its own. When NASA launches its new Phoenix Mars Lander Aug. 4, it will use an onboard BAE Systems computer that is based on IBM's Power Architecture, according to the IT giant, based in Armonk, N.Y. When it lands on Mars, the Phoenix will steer its way toward the Red Planet's north pole to explore that frozen region, all the while searching for signs of life.

Guiding NASA's latest probe will be a radiation-hardened RAD6000, an embedded, single-board computer designed by BAE Systems, a London-based defense and aerospace contractor. As it has with previous missions to Mars, IBM is supplying 32-bit Power processing power to this computer. In 2003, Power Architecture technology helped both the Spirit and Opportunity Mars Exploration Rovers during their mission to look for signs of water on the planet. Once again, IBM licensed its processor technology to BAE and helped design the computers for the latest mission to Mars.

After it lands on Mars, the Phoenix, which cost $420 million to design and build, will endure temperatures of minus-100 degrees Fahrenheit and wind speeds of up to 40 meters per second, according to IBM and NASA. On previous space missions, Power-based systems survived winds of 80 mph and temperatures that reached close to minus-200 degrees, according to IBM.

Raj Desai, vice president of IBM's Global Engineering Solutions, said the whole idea of using Power Architecture on a NASA mission was to test the limits of computing and design. It also allows the company's engineers to see how the technology works under harsh conditions. "There are a lot of innovations that go into the engineering of it," Desai said. "What we are doing is pushing the boundaries of physics and design rules."

From a business perspective, the latest announcement by IBM shows how far the company has moved away from supplying Power-based processors to regular PCs. Now the company's efforts involve moving the architecture into high-end server systems as well as highly specialized fields and the consumer market. At the 2007 Consumer Electronics Show in Las Vegas, IBM demonstrated how its Power Architecture and Cell processors have made their way into gaming systems, cell phones and automobiles. In addition, IBM's Power.org project, which the company established in 2004, is a way to make its Power Architecture much more open.

Earlier this year, IBM and several partners entered into an agreement to expand even further into the consumer and embedded microprocessor market. According to that agreement, the companies will jointly develop microprocessors built on a 32-nanometer CMOS (complementary metal oxide semiconductor) manufacturing process. The goal, according to IBM and its partners, including Freescale, Samsung, Chartered Semiconductor Manufacturing and Infineon Technologies, is to develop high-performing, energy-efficient chips at 32 nanometers for use in a wide range of consumer products, handheld devices and even supercomputers.
Ping and tracert are two very common and effective network diagnostic utilities, extensively used on Windows PCs for network-level troubleshooting. Another Windows network diagnostic utility is pathping, which provides the ability to locate hops that have network latency and network loss. Unlike ping, which just pings from the originating device to the destination device, pathping sends pings to each hop along the route to the destination. It is an extremely useful tool for diagnosing packet loss and can help with diagnosing slow-speed faults.

Running Pathping – To pathping a destination, follow the steps below:

1. Open a Windows Command Prompt window.

2. At the command prompt, type pathping <IP address>.

Pathping syntax – "Windows XP, Vista, 7, and 8"

pathping [-g host-list] [-h maximum_hops] [-i address] [-n] [-p period] [-q num_queries] [-w timeout] [-P] [-R] [-T] [-4] [-6] target_name

Pathping example –

pathping -n 8.8.8.8
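If you need to run pathping from a script rather than an interactive prompt, a short Python wrapper works; this is a generic sketch (Windows only, since pathping ships with Windows; the target address is just an example):

```python
import subprocess

# Run pathping non-interactively and capture its report.
# -n skips reverse DNS lookups so the trace completes faster.
result = subprocess.run(
    ["pathping", "-n", "8.8.8.8"],
    capture_output=True,
    text=True,
    timeout=600,     # pathping gathers statistics for minutes, so allow time
)
print(result.stdout)
```

The generous timeout matters: unlike ping, pathping pauses to collect loss statistics for each hop, so a full run routinely takes several minutes.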
Quality assurance (QA) is a major part of any software development effort. Software testing is the path to a bug-free, performance-oriented software application, one that also satisfies (or exceeds!) end-user requirements. Of course, manual testing quickly becomes unscalable given the rapid pace of development and ever-increasing requirements. A faster yet accurate testing solution was required, and automated testing became the ideal answer to this need. Automated testing does not mean replacing the entire manual testing process. Instead, automated testing means:

- Allowing users to automate most routine and repetitive test cases.
- Freeing up valuable time and resources to focus on more intricate or complex test scenarios.

Introducing automated testing to a delivery pipeline can be a daunting process. Several factors (the programming language, user preferences, test cases, and the overall testing scope) directly determine what can and cannot be automated. However, if set up correctly, automated testing can be the backbone of the QA team, ensuring a smooth and scalable testing experience. Different types of automation frameworks came into prominence to aid in this endeavor. An automation framework allows users to easily set up an automated test environment, which ultimately helps provide a better ROI for both development and QA teams. In this article, we will look at the different types of test automation frameworks available, along with their advantages and disadvantages.

What is a test automation framework?

Before diving into the different types of test automation frameworks, we need to understand what an automation framework is. Test automation is the process of automating repetitive and predictable testing scenarios. A test automation framework is a set of guidelines or rules that can be used to define test cases. These test cases can then be configured and implemented, using test automation tools such as Selenium or Puppeteer, into the delivery process via a CI/CD pipeline.

A test automation framework consists of practices and tools designed to create efficient test cases. These practices range from coding standards and test-data handling methods to object repository management and access control for test environments and external tools. However, testers have more freedom than this. Testers are:

- Not confined to these rules or guidelines
- Free to create test cases in their preferred way

Still, a framework provides standardization across the testing process, leading to a more efficient, secure, and compliant testing process.

Advantages of a test automation framework

There are some key advantages to adhering to the rules and guidelines offered by a test automation framework. These advantages include:

- Increased speed and efficiency of the overall testing process
- Improved accuracy and repeatability of the test cases
- Lower maintenance requirements thanks to standardized practices and processes
- Reduced manual intervention and human error
- Maximized test coverage across all areas of the application, from the GUI to internal application logic

Popular test automation frameworks

When it comes to test automation frameworks, there are six leading frameworks available these days.
In this section, we will look at each of these six frameworks with regard to their architecture, advantages, and disadvantages:
- Linear automation framework
- Modular-driven framework
- Library architecture framework
- Data-driven framework
- Keyword-driven framework
- Hybrid testing framework

Linear Automation Framework
The linear framework, or record-and-playback framework, is best suited for basic, introductory-level testing. In a linear automation framework, users target a specific program functionality, create test scripts in sequential order, and run them individually. This process involves capturing all the test steps, such as navigation and inputs, and playing them back repeatedly to conduct the test.

Advantages of Linear Framework
- Does not require specific automation knowledge or custom code
- Test cases are easier to understand due to their sequential order
- Faster approach to testing
- Simpler to introduce into existing workflows, and most automation tools provide built-in record-and-playback functionality

Disadvantages of Linear Framework
- Test cases are not reusable, as they are targeted at specific use cases or functions
- With static data, there is no option to run tests with different data sets, as test data is hardcoded
- Maintenance can be complex, as any change requires rebuilding test cases

Modular Driven Framework
This framework takes a modular approach to testing: it breaks tests down into separate units, functions, or modules, each tested in isolation. These separate test scripts can be combined to build larger tests covering the complete application or specific functionality. (Learn about unit testing, function testing, and more.)

Advantages of Modular Framework
- Increased flexibility of test cases: individual sections can be quickly edited and modified because tests are separated
- Increased reusability, as individual test cases can be adapted from different overarching modules to suit different needs
- The ability to scale testing up quickly and efficiently to include any new functionality

Disadvantages of Modular Framework
- Can be complex to implement and requires proper programming knowledge to build and set up test cases
- Cannot be used with different test data sets in a single test case

Library Architecture Framework
This framework is derived from the modular framework and aims to provide a greater level of modularity by breaking tests down by units, functions, and so on. The library architecture framework identifies similar tasks within test scripts and groups them by function. These modular parts aren't organized around application functions so much as common objectives. The functions are stored in a library, sorted by objective, and test scripts call upon this library to obtain different functionality when testing.

Advantages of Library Architecture Framework
- A high level of modularity leads to increased scalability of test cases
- Increased reusability, as libraries can be used across different test scripts
- Can be a cost-effective solution due to its reusability, especially in larger projects

Disadvantages of Library Architecture Framework
- Can be complex to set up and integrate into delivery pipelines
- Technical expertise is required to identify and modularize the common tasks
- Test data is static, as it is hardcoded in scripts, with any changes requiring direct changes to the scripts

Data-Driven Framework
The main feature of the data-driven framework is that it decouples data from the script logic.
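Before looking at when it fits best, here is a minimal sketch of the data-driven idea using pytest's parametrize feature. The login function and its credentials are hypothetical stand-ins for whatever the application under test exposes, not part of any framework named above.

import pytest

# Hypothetical function under test; a real suite would drive the actual application.
def login(username, password):
    return username == "admin" and password == "s3cret"

# The test data lives in a table, decoupled from the test logic below.
LOGIN_CASES = [
    ("admin", "s3cret", True),   # valid credentials
    ("admin", "wrong", False),   # bad password
    ("", "", False),             # empty input
]

@pytest.mark.parametrize("username,password,expected", LOGIN_CASES)
def test_login(username, password, expected):
    # One test function runs once per row of data.
    assert login(username, password) == expected

In a fuller data-driven setup, the rows would come from an external source such as a CSV file or database rather than an in-file list.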
The data-driven framework is ideal when users need to test a function or scenario with different data sets while using the same internal logic. In data-driven frameworks, values such as inputs and outputs are passed to test scripts as parameters from external data sources such as variable files, databases, etc.

Advantages of Data-Driven Framework
- The decoupled approach to data and logic increases the reusability of test cases, providing the ability to test with different data sets without modifying the test case
- Multiple scenarios can be handled with the same test scripts and varying sets of data, which leads to faster testing
- Since there is no need to hardcode data, scripts can be changed without affecting overall functionality
- Easily adaptable to suit any testing need

Disadvantages of Data-Driven Framework
- One of the most complex frameworks to implement, as decoupling data and logic requires expert knowledge of both automation and the application itself
- Can be a time-consuming and resource-intensive process to implement in the delivery pipeline

Keyword-Driven Framework
The keyword-driven framework takes the decoupling of data and logic introduced in the data-driven framework a step further. In addition to the data being stored externally, specific keywords associated with different actions and used to test the GUI are also stored externally, to be referenced at test execution. This makes keywords independent entities that reference specific functions or actions associated with specific objects. Users write code to prompt the necessary keyword-based action, and the appropriate script is executed within the test when the keyword is referenced.

Advantages of Keyword-Driven Framework
- Test scripts can be built independently of the application
- Increased reusability and flexibility, with a detailed way to categorize test functionality
- Reduced maintenance requirements compared to non-decoupled frameworks

Disadvantages of Keyword-Driven Framework
- One of the most complex frameworks to configure and implement, requiring a considerable investment of resources
- Keywords need to be scaled according to the application's testing needs, which can increase complexity with each change in test scope or requirements

Hybrid Testing Framework
A hybrid testing framework is not a predefined framework with its own architecture or rules but a combination of the previously mentioned frameworks. With the ever-increasing need to cater to different test scenarios, depending on a single framework is rarely feasible. Therefore, in most development environments, different types of frameworks are combined to best suit the application's testing needs while leveraging the strengths of each framework and mitigating its disadvantages. With the popularity of DevOps and agile practices, more flexible frameworks are needed to cope with changing environments. A hybrid approach provides the best solution by allowing users to mix and match frameworks to obtain the best results for their specific testing requirements.

Customizing your frameworks
Selecting a test automation framework is the first step towards creating an automated testing environment. However, relying on a single framework has become a near-impossible task due to the ever-evolving nature of the technological landscape and rapid development cycles.
That's why the hybrid testing framework has gained popularity: it enables users to combine different test automation frameworks into an ideal automation framework for their needs. Even if you are new to the automation world, you can start with a framework that offers many built-in solutions, build on top of it, and customize it to create your ideal framework.
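As a companion to the data-driven sketch earlier, the following sketch illustrates the keyword-driven idea in miniature: keywords stored as data are resolved to actions at execution time. The keyword table and the placeholder actions are invented for illustration and are not taken from any specific tool.

# Keywords and their arguments, as they might be stored externally (e.g. a spreadsheet).
test_steps = [
    ("open_page", "https://example.com/login"),
    ("enter_text", "username", "admin"),
    ("click", "submit"),
]

def open_page(url):
    print(f"opening {url}")                  # placeholder for real browser automation

def enter_text(field, value):
    print(f"typing '{value}' into {field}")  # placeholder for real input handling

def click(element):
    print(f"clicking {element}")             # placeholder for a real click action

# The framework resolves each keyword to its action and executes it with its arguments.
ACTIONS = {"open_page": open_page, "enter_text": enter_text, "click": click}

for keyword, *args in test_steps:
    ACTIONS[keyword](*args)

Because the steps are plain data, non-programmers can compose new tests by editing the table alone.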
In this article, we'll show how to apply machine learning to cybersecurity. There are several use cases, but this article will focus on analyzing router logs.

Why use machine learning with cybersecurity
It's almost impossible for an analyst looking at a time series chart of network traffic to draw any conclusion from what they are looking at. Why? People can't see more than three dimensions. And too many false alerts cause analysts to simply ignore some of what they're seeing—too much noise. But machine learning makes it possible to flush out, for example, criminal hackers who are stealing data from your system and transmitting it to their command and control center. This is what intrusion detection systems are supposed to do, but hackers use all kinds of techniques to avoid detection by traditional cybersecurity systems. For example, they could transmit stolen data in small pieces and send each piece to a different IP address, such as hijacked home computers, and then use those hijacked machines to relay the pieces to the hackers' command and control center.

In this example we'll illustrate one approach to looking at network traffic. We use router logs provided by Verizon from a Bro-type router. We'll group each record into one of seven clusters, then look at traffic in the clusters with the smallest number of entries. Those, by definition, are our outliers. We use the k-means clustering algorithm, which separates data along any number of axes. (For more, see k-means clustering with Apache Spark and Python Spark ML K-Means Examples.) A data scientist would say that we are threading a hyperplane into n-dimensional space between the data points. Because we can't visualize this, think of a 3D space, then thread a piece of paper between each set of data points such that points in one group are on one side of the paper and points in the other group are on the other. This is an unsupervised model: there are no labels, only features, so we don't need to train the model, as there's nothing to predict. Instead, we are observing.

The code, explained
The University of Cincinnati provides this description of the columns in this data:
- ts—time; timestamp
- uid—string; unique ID of connection
- orig_h—addr; originating endpoint's IP address (aka ORIG)
- orig_p—port; originating endpoint's TCP/UDP port or ICMP code
- resp_h—addr; responding endpoint's IP address (aka RESP)
- resp_p—port; responding endpoint's TCP/UDP port or ICMP code
- proto—transport_proto; transport layer protocol of connection
- service—string; dynamically detected application protocol, if any
- duration—interval; time of last packet seen to time of first packet seen
- orig_bytes—count; originator payload bytes, from sequence numbers if TCP
- resp_bytes—count; responder payload bytes, from sequence numbers if TCP
- conn_state—string; connection state (see conn.log:conn_state table)
- local_orig—bool; if conn originated locally T; if remotely F.
If Site::local_nets empty, always unset
- missed_bytes—count; number of missing bytes in content gaps
- history—string; connection state history (see conn.log:history table)
- orig_pkts—count; number of ORIG packets
- orig_ip_bytes—count; number of ORIG IP bytes (via IP total_length header field)
- resp_pkts—count; number of RESP packets
- resp_ip_bytes—count; number of RESP IP bytes (via IP total_length header field)
- tunnel_parents—set; if tunneled, connection UID of encapsulating parent(s)
- orig_cc—string; ORIG GeoIP country code
- resp_cc—string; RESP GeoIP country code

First, we load the csv file into a Spark dataframe.

from pyspark.sql.types import StructType, StructField, FloatType, BooleanType
from pyspark.sql.types import DoubleType, IntegerType, StringType
import pyspark
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import lit
from pyspark.sql.functions import udf, concat
from pyspark import SQLContext

conf = pyspark.SparkConf()
sc = pyspark.SparkContext.getOrCreate(conf=conf)
sqlcontext = SQLContext(sc)

schema = StructType([
    StructField("ts", StringType(), True),
    StructField("uid", StringType(), True),
    StructField("origh", StringType(), True),
    StructField("origp", StringType(), True),
    StructField("resph", StringType(), True),
    StructField("respp", StringType(), True),
    StructField("proto", StringType(), True),
    StructField("service", StringType(), True),
    StructField("duration", FloatType(), True),
    StructField("origbytes", StringType(), True),
    StructField("respbytes", StringType(), True),
    StructField("connstate", StringType(), True),
    StructField("localorig", StringType(), True),
    StructField("missedbytes", StringType(), True),
    StructField("history", StringType(), True),
    StructField("origpkts", IntegerType(), True),
    StructField("origipbytes", IntegerType(), True),
    StructField("resppkts", IntegerType(), True),
    StructField("respipbytes", IntegerType(), True),
    StructField("tunnelparents", StringType(), True)
])

df = sqlcontext.read.csv(path="/home/ubuntu/Documents/forensics/bigger.log", sep="\t", schema=schema)
df2 = df.fillna(0)

Next, we define and register a UDF (user-defined function). We will use this to turn the fields sent to it into integers, because machine learning, for the most part, only works with numbers.

def toInt(s):
    # Empty or missing values become 0.
    if not s:
        return 0
    # Strings are converted by concatenating the ordinal value of each character.
    if isinstance(s, str):
        st = [str(ord(i)) for i in s]
        return int(''.join(st))
    # Numeric values pass through unchanged.
    return s

colsInt = udf(lambda z: toInt(z), IntegerType())
sqlcontext.udf.register("colsInt", colsInt)

Now we create some additional columns; these are the columns we have selected to feed into our model. For each of them, we call the colsInt() UDF to convert the values to numbers. You could vary the choice of columns according to what hypotheses you want to follow. For example, below we look at the ports and traffic as well as the protocol.
- There might be other metrics in that log that we could add or remove.
- We should probably leave the destination IP address out of the model because of the hacker's ability to hide their true destination.
- We might drop the UDP protocol, since sftp (which runs over TCP) would be the protocol used to transmit stolen data.
- Or we could include the time of day in the local time zone to isolate after-hours events.

It all depends on what kind of activity you want to focus on. Note that each of the .withColumn() statements creates a new dataframe. This is because Spark dataframes are immutable.
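As a side note, the chain of single-letter intermediate variables in the next listing can be collapsed into a fold, since each .withColumn() call simply returns a new dataframe. This is an optional tidier variant, not part of the original walkthrough:

from functools import reduce

src_cols = ['origp', 'respp', 'proto', 'origh',
            'origbytes', 'respbytes', 'origpkts', 'origipbytes']

# Add one converted column per source column, threading the dataframe through.
converted = reduce(lambda acc, c: acc.withColumn('i' + c, colsInt(c)), src_cols, df2)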
a = df2.withColumn('iorigp', colsInt('origp'))
c = a.withColumn('irespp', colsInt('respp'))
d = c.withColumn('iproto', colsInt('proto'))
e = d.withColumn('iorigh', colsInt('origh'))
f = e.withColumn('iorigbytes', colsInt('origbytes'))
g = f.withColumn('irespbytes', colsInt('respbytes'))
h = g.withColumn('iorigpkts', colsInt('origpkts'))
i = h.withColumn('iorigipbytes', colsInt('origipbytes'))

columns = ['iorigp', 'irespp', 'iproto', 'iorigbytes', 'irespbytes', 'iorigpkts', 'iorigipbytes']

The next step adds a column to our dataframe called features. This is a tuple of the columns we have selected. The K-means algorithm will expect there to be a features column.

vecAssembler = VectorAssembler(inputCols=columns, outputCol="features")
router = vecAssembler.transform(i)

Here, we use the K-means algorithm. One nice thing about Apache Spark is that its machine learning algorithms are easy to use. They don't require the reprocessing and reshaping that other frameworks do, and they work with Spark dataframes, so we can work with much larger sets of data. (Pandas does not scale like Spark dataframes do.)

from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

kmeans = KMeans().setK(7).setSeed(1)
model = kmeans.fit(router)
predictions = model.transform(router)
p = predictions.groupby('prediction').count()
q = p.toPandas()

We have grouped the observations into 7 clusters. Cluster 0 has 40,303 router records, but cluster 2 has only 171. Clearly, those are outliers, so this is where we focus our cybersecurity analysis. We can plot that as a bar chart to further show how the data is clustered.

from plotly.offline import plot
import pandas as pd
import plotly.graph_objects as go

fig = go.Figure(
    data=[go.Bar(x=q['prediction'], y=q['count'])],
    layout_title_text="K Means Count"
)
fig.show()

So, let's make a new dataframe of just those records in cluster 2. (It's actually row index 5 in the dataframe, so don't confuse those two concepts.)

suspect = predictions.filter("prediction == 2")

Here we convert the output to Pandas, simply because the Jupyter notebook displays that data more clearly than it does Spark dataframes, where it tends to chop off wide columns, making them hard to read.

x = suspect.select('ts','uid','origh','resph').toPandas()

You can see the same IP address shown more than a few times, which is probably a good place for further analysis. Look and see which machine it is and to whom it connects. Your analysts can then look through logs in your applications, firewall, etc. to see what's going on with those IP addresses. (Note that some of them are in IPv6 format.)
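ClusteringEvaluator is imported above but never used. As a possible extension, not part of the original analysis, a silhouette sweep can sanity-check whether seven clusters is a reasonable choice for this data:

from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

evaluator = ClusteringEvaluator(featuresCol="features", predictionCol="prediction")

# Fit a model for each candidate k and record the silhouette score;
# scores close to 1 indicate compact, well-separated clusters.
for k in range(2, 11):
    candidate = KMeans().setK(k).setSeed(1).fit(router)
    score = evaluator.evaluate(candidate.transform(router))
    print(k, score)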
In the previous post, IS-IS Neighbor Discovery, we discussed how IS-IS automatically discovers neighbors; in this post we will discuss the DIS role in broadcast networks. After the adjacency reaches the UP state, the DIS election takes place. The router with the highest priority value (0-127, specified in the Priority field of the IIH PDU) wins the election. If multiple routers have the same priority, which is the case in most scenarios because most implementations default to 64, the router with the highest SNPA (the MAC address learned from the MAC header of received IIH PDUs) wins the election and becomes the DIS for the segment. In some circumstances the System-ID is also used in the election: on a Frame Relay network the L2 address is a DLCI, which is the same on both sides, so the System-ID of the originating routers (learned from the Source-ID of received IIH PDUs) is used instead, and the router with the highest System-ID wins the election.

On LAN networks the DIS is used to minimize the link-state database size through the concept of the Pseudo-Node, and to reduce the number of acknowledgment messages (PSNPs). Since IS-IS communication between routers on broadcast networks happens through multicast MAC addresses, all routers form a full mesh of adjacencies with each other. So if we have 3 routers on the same LAN, each router reports adjacencies with 2 others, which makes the link-state topology appear as 6 adjacencies and complicates the SPF task. As a solution, the DIS creates a logical router called the Pseudo-Node, which makes the network appear as a star topology instead of a full mesh. On behalf of the Pseudo-Node, the DIS sends Level-1 and Level-2 Pseudo-Node LSPs, each to the appropriate level's destination MAC address. These LSPs tell the IS-IS routers that the Pseudo-Node has an adjacency to each router on the LAN, including the DIS itself, with metric 0; each router, in turn, advertises in its own LSPs an adjacency to the Pseudo-Node only, with metric 10. The network now appears as a centralized hub, the Pseudo-Node, with all routers connected to it, which ultimately reduces the link-state database size: instead of 3 routers and 6 adjacencies, we have 3 routers with 3 adjacencies. This benefit becomes significant on a LAN with over 20 routers running IS-IS. Calculate it 🙂

On broadcast networks, even with the DIS, once adjacencies reach the UP state all routers exchange LSPs directly with each other (remember the multicast MAC address they all listen to); the DIS does not flood LSPs on behalf of any router. Instead, the DIS sends CSNPs, each carrying every LSP's sequence number, checksum, and Remaining Lifetime, at short intervals to make sure all routers have synchronized databases. The ordinary behavior on P2P links is that when a router sends an LSP it expects an acknowledgment (PSNP) confirming that the LSP was successfully received; otherwise the same LSP is retransmitted within 5 seconds. On broadcast networks this is not what happens: as mentioned above, the DIS exists to minimize database size and the number of messages on the segment, so instead of waiting for an ACK for each LSP sent, routers check the DIS's periodic CSNP for their LSP. If it is listed with the latest sequence number, the LSP was received successfully and there is no need to retransmit it.
One last rule: on broadcast networks, if JR1 sends an LSP that for some reason is not received by another router, say JR2, JR2 will discover the absence of this LSP in its database within a few seconds, after seeing it in the DIS's CSNP. JR2 then requests the missing LSP by sending a PSNP specifying it, and only the DIS responds with the requested LSP. Without this rule, every router would respond with the requested LSP, wasting resources.

JUNOS Pseudo-Node and Level-1, Level-2 LSPs

[email protected]> show isis adjacency
Interface    System     L  State  Hold (secs)  SNPA
ge-0/0/0.0   JR2.core2  1  Up     21           b0:c6:9a:23:7c:0
ge-0/0/0.0   JR2.core2  2  Up     21           b0:c6:9a:23:7c:0
ge-0/0/0.0   JR3.core3  1  Up     8            b0:c6:9a:23:80:0
ge-0/0/0.0   JR3.core3  2  Up     8            b0:c6:9a:23:80:0

Hereunder is the Pseudo-Node LSP, which reports adjacencies between the Pseudo-Node and all routers.

[email protected]> show isis database JR3.core3.02-00 detail
IS-IS level 1 link-state database:
JR3.core3.02-00  Sequence: 0x3, Checksum: 0x4720, Lifetime: 1088 secs
  IS neighbor: JR1.core1.00  Metric: 0
  IS neighbor: JR2.core2.00  Metric: 0
  IS neighbor: JR3.core3.00  Metric: 0
  IS neighbor: CR5.pe5.00    Metric: 0
IS-IS level 2 link-state database:
JR3.core3.02-00  Sequence: 0x3, Checksum: 0x4720, Lifetime: 1088 secs
  IS neighbor: JR1.core1.00  Metric: 0
  IS neighbor: JR2.core2.00  Metric: 0
  IS neighbor: JR3.core3.00  Metric: 0
  IS neighbor: CR5.pe5.00    Metric: 0

Hereunder is a normal LSP, which reports an adjacency between JR2 and the Pseudo-Node only.

[email protected]> show isis database JR2.core2.00-00 detail
IS-IS level 1 link-state database:
JR2.core2.00-00  Sequence: 0x2, Checksum: 0xed9f, Lifetime: 971 secs
  IS neighbor: JR3.core3.02     Metric: 10
  IP prefix: 22.214.171.124/32  Metric: 0   Internal Up
  IP prefix: 126.96.36.199/24   Metric: 10  Internal Up
  IP prefix: 188.8.131.52/32    Metric: 0   Internal Up
  IP prefix: 184.108.40.206/32  Metric: 0   Internal Up
IS-IS level 2 link-state database:
JR2.core2.00-00  Sequence: 0x3, Checksum: 0x8b6a, Lifetime: 1004 secs
  IS neighbor: JR3.core3.02     Metric: 10
  IP prefix: 220.127.116.11/23  Metric: 10  Internal Up
  IP prefix: 18.104.22.168/32   Metric: 10  Internal Up
  IP prefix: 22.214.171.124/32  Metric: 0   Internal Up
  IP prefix: 126.96.36.199/24   Metric: 10  Internal Up
  IP prefix: 188.8.131.52/32    Metric: 0   Internal Up
  IP prefix: 184.108.40.206/32  Metric: 0   Internal Up

IOS Pseudo-Node and L1, L2 LSPs

CR4#show clns is-neighbors
System Id       Interface  State  Type  Priority  Circuit Id         Format
0000.0000.0001  Gi0/1      Up     IS    0         0000.0000.0000.00  Phase V
CR6             Gi0/1      Up     L1    64        CR4.01             Phase V

CR1#show isis database CR4.01-00 detail
IS-IS Level-1 LSP CR4.01-00
LSPID        LSP Seq Num  LSP Checksum  LSP Holdtime  ATT/P/OL
CR4.01-00 *  0x0000006E   0xE8C9        414           0/0/0
  Metric: 0  IS-Extended CR4.00
  Metric: 0  IS-Extended CR6.00
  Metric: 0  ES 0000.0000.0001

CR4#show isis database CR6.00 detail
IS-IS Level-1 LSP CR6.00
LSPID   LSP Seq Num  LSP Checksum  LSP Holdtime  ATT/P/OL
CR6.00  0x000001B0   0x74A1        739           0/0/0
  Area Address: 49.0001
  NLPID: 0xCC
  Hostname: CR6
  IP Address: 220.127.116.11
  Metric: 10  IP 18.104.22.168/24
  Metric: 0   IP 22.214.171.124/32
  Metric: 0   IP 126.96.36.199/32
  Metric: 0   IP 188.8.131.52/32
  Metric: 10  IS-Extended CR4.01
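To take up the "calculate it" challenge from earlier, here is a quick sketch of the arithmetic, using the article's own counting convention: in a full mesh every one of the n routers reports n-1 adjacencies, while with a Pseudo-Node each router reports a single adjacency.

def reported_adjacencies(n):
    full_mesh = n * (n - 1)   # every router reports an adjacency to every other router
    pseudo_node = n           # every router reports one adjacency, to the Pseudo-Node
    return full_mesh, pseudo_node

for n in (3, 20):
    mesh, star = reported_adjacencies(n)
    print(f"{n} routers: {mesh} adjacencies in a full mesh vs {star} with a Pseudo-Node")

# 3 routers: 6 vs 3, matching the example above; 20 routers: 380 vs 20.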
Recently, some attention was drawn to the ineffectiveness of the CAPTCHA tool. In 2021, forcing users to count the number of traffic lights before purchasing tickets or registering for an account seems completely pointless. Most discussions about CAPTCHA these days focus on ideas of what could replace it. For years it was ingrained in our heads that the CAPTCHA tool is the only - and best - way to defend websites against bot-based fraud and attacks. However, CAPTCHAs don't work as intended: they are easily beaten by knowledgeable attackers, and they add friction to the consumer buying experience. The false perception of security that CAPTCHAs provide very often comes at the cost of satisfied, loyal customers.

Evading the CAPTCHA
When CAPTCHA technology was first introduced in 1997 by AltaVista, it seemed like a revolutionary answer to a new problem - preventing bots from pretending they were human and logging into a website. And it worked for a little while. But motivated attackers soon found clever solutions. There have been many different versions of the CAPTCHA over the years, with the most recent (and popular) being Google's reCAPTCHA technology. Google acknowledges user frustration, but now requires application owners to create and manage the risk scores that differentiate humans and bots. The bots now have a security control that they can easily bypass. Understanding and limiting the differences between headless Chromium and Chrome is a (dark) art that enables bots to obtain the same risk score as humans. Attackers have two main approaches to choose from when defeating a CAPTCHA: (1) be undetectable; or (2) automate the process of solving the CAPTCHA. Services such as 2CAPTCHA ensure that CAPTCHAs present no obstacles to well-funded, semi-technical attackers. 2CAPTCHA specifically has over 300 reference cases of bots using their solution. This means that an attacker can have CAPTCHAs solved for less than $1 per 1,000. The underlying problem is that, despite technology upgrades and tougher puzzles, Google and other CAPTCHA-type solutions all produce the same result: attackers have proven they have effective workarounds to evade these tools.

Replacing the CAPTCHA
That brings us to the discussion that bubbles up in the security industry every so often - what do we replace the CAPTCHA with? How can we evolve the tool to better filter out bots? Will reCAPTCHA v4 be the one that stops bots cold? The truth is, replacing CAPTCHAs with another similar system, or something like a user-owned security key, simply won't work. On one hand, attackers are motivated to beat whatever the newest tool is; on the other hand, moving the onus of security to the users themselves has traditionally failed. Just look at the numbers around employees inadvertently causing the biggest security breaches at their companies by not following the rules and policies as directed. Relying on users to be the point of security for your business' success is a disaster waiting to happen.

Don't Replace, Rethink
There is no doubt that differentiating between humans and bots can be challenging. The rationale for using CAPTCHA solutions, however, appears to be similar between security vendors and application owners – it's a decision avoidance solution.
Even though more advanced, specialized and less invasive CAPTCHAs exist in the market today, they add an unwanted level of friction and a potential for false positives that ruin the customer experience, and they are merely an extension of decision avoidance. It's time the security industry stopped forcing users to cover for a site's lack of security. What's clear is that there simply needs to be a greater effort made by organizations to identify and protect their own sites against bots. Instead of using outdated technology that makes businesses believe they are stopping bots rather than actually stopping them, online businesses need to embrace new approaches to solving the problem. Modern security technology approaches exist that allow them to defend against malicious automation without depending on their customers to validate that they are indeed human. The ability to detect automation without any reliance on outdated risk scoring models that rely on CAPTCHAs (even for "grey area" cases), coupled with an effective way to control and eliminate it, should be the starting point for any approach. Preventing bots needs to become part of the base-level requirements for operating an online business. It should no longer be acceptable to hide behind technology whose biggest benefit is its ability to show users what traffic lights and crosswalks look like across the globe.

About the Author
Sam Crowther is an entrepreneur with a passion for cybersecurity. The Kasada founder got his start in the industry as a high school student when he joined the cybersecurity team of the Australian Signals Directorate (ASD). From there, he moved to a red team role at a global investment bank, an experience that inspired him to start his own company. With funding from leading U.S. and Australian investors, Crowther launched Kasada in 2015 to provide innovative web traffic integrity solutions to companies around the world. Based in New York and Sydney, Crowther loves creating simple technical solutions to complex problems and is motivated by challenging preconceived ideas and beliefs in order to have a positive impact on the world.
What is Agile? Using Agile for Software Development

It would be nice if developing software were like making a cake: you'd prepare the batter according to a neatly defined recipe, put the cake in the oven, take it out, eat it and be done. But software is not cake. Software applications are constantly evolving creations that, in many cases, are never truly complete. Nor is there typically a simple recipe for you to follow when writing code. Instead, it's up to you to figure out how best to develop whatever it is you are supposed to develop. Even the features that your application needs to include may change along the way, together with shifts in business goals and priorities. Agile software development is the answer to these challenges. Agile methodologies can help streamline collaboration between developers, maximize software quality, and more.

Agile software development is based around self-organization and collaboration. It's best to think of agile as a broad group of concepts that serve as general guidelines for the way programmers should think about software development, rather than as a rigid, specific set of ideas that dictate exactly how to write code. By focusing on adaptivity, business and user needs, and workers themselves, the agile mindset enables developers to deliver functional end products quickly and incrementally.

Agile software development debuted as an explicit concept in 2001, when seventeen software developers convened in Utah to produce a document called the Agile Manifesto. The programmers drew on principles associated with so-called "lightweight" software development, which had been in practice throughout the 1990s. The manifesto codified agile principles for the first time and gave birth to agile software development as a distinct movement. Many of the ideas associated with agile software development have since been applied to other iterative processes, like project management, business development, and even accounting.

Agile prioritizes working code over flawless code. It encourages developers to focus on meeting the needs of the end user, rather than wasting time trying to perfect every aspect of an application. Agile is so influential because it offers a range of benefits — not just to developers, but to all stakeholders, from customers to management:
- Speed: Agile encourages developers to move quickly and avoid getting bogged down by unnecessary delays, like trying to perfect every line of code before releasing an application. Function over form is the name of the game.
- Quality: Even though agile discourages perfectionism, it upholds strong quality standards by focusing on meeting user needs. It also supports the concept of incremental development, which leads to better software quality over time.
- Collaboration: Agile prioritizes clear communication and transparency between development teams, as well as the stakeholders they support.
- Clear goals: By identifying the production of working software as the central goal of any software project, agile helps developers pursue a clear set of goals, even if the business environment they are working in changes.
- Sustainable software: Agile encourages developers to devise application architectures and software development practices that are sustainable over the long term. This helps ensure that the investment a business makes in an agile project will deliver continuous value.
- Developer support: When developers have the tools and other resources they need, they’re best positioned to deliver software that satisfies user requirements. As noted above, there is no single specific way to “do” agile. Agile is a set of high-level ideas, and it’s up to an organization to decide how it wants to go about translating those ideas into practice. To help in this process, a number of agile frameworks have emerged. Agile frameworks prescribe certain practices or methodologies that teams can follow to implement the principles associated with agile software development. The two most popular agile frameworks today are Scrum and Kanban. Scrum was first described in the mid-1980s, before the agile movement started, but later came to be closely associated with agile software development. At its core is the concept of “sprints” — time-limited periods dedicated to completing a set chunk of work. By breaking up large, complex projects into smaller components, sprints make daunting tasks achievable. Scrum also involves key roles that help an organization build agile processes: the product owner, development team, and Scrum Master. The product owner defines development goals, while the development team works towards them. The Scrum Master serves as a liaison between the product owner and the development team and is responsible for educating both groups about agile processes and helping them achieve agile goals. A Scrum-based process will only be as good as the Scrum Master who leads it, and an ineffective Scrum Master means the project will struggle to put agile principles into practice. While Scrum places greatest priority on defining certain roles within an agile project, Kanban focuses on communication and transparency. The driving idea behind Kanban is that as long as your stakeholders communicate efficiently and clearly, they will succeed in adhering to agile processes. At the core of the communication process defined by Kanban is a Kanban board, which is a physical or digital board that displays information about every unit of work within the project. Using a Kanban board, teams can track each other’s activities and visualize where an entire project stands at a glance. Just as a bad Scrum Master can undercut the effectiveness of Scrum, a poorly designed Kanban board, or one that is not updated consistently, becomes a single point of failure for a Kanban process. Waterfall and DevOps aren’t agile frameworks. Instead, they’re distinct approaches to software development. Waterfall divides activities into distinct phases and usually does not allow a phase to start until the one preceding it is complete. Small problems can quickly snowball into larger ones that delay the project as a whole due to this linear design and the inability to move onto the next task until you finish the current one. In addition, waterfall’s focus on the needs of developers rather than end-users means that keeping developers busy can take precedence over understanding end-users’ needs and aligning developers’ work to meet them. Agile may also be compared to DevOps, a software development philosophy that became popular over the course of the 2010s. DevOps shares some core principles with agile, such as the concept of continuous delivery of software to users. In fact, some programmers have argued that DevOps is essentially an evolution of agile. But, like the definition of agile itself, that’s a subjective question. 
A majority of developers draw a distinction between agile and DevOps, despite the commonalities between them. How can you start adopting agile principles? Here are some basic principles for getting agile software development off the ground at your organization. Identify and track key performance indicators (KPIs) about your project, such as how much code you produce per month, how frequently you issue new application versions to your users, and how many bug reports are filed in a given period of time. These metrics will help you quantify and assess your project's performance. The concept of continuous improvement is part and parcel of agile. No matter how successful your project is, you should always be looking for inefficiencies or problems that you can address in order to make the process even better. Part of the agile philosophy includes ensuring that projects serve the needs of end users and other stakeholders. Toward that end, you should constantly be assessing how well your processes and products support the overall needs of the business, and take steps to realign if you find shortcomings. Agile is also founded on the idea that constant change is not only unavoidable, but good. Rather than resisting change by, for example, trying to avoid rearchitecting your applications or restructuring your team, embrace change as an opportunity to improve and optimize. You should designate roles that define how different stakeholders fit within the agile process. These may mirror those used in Scrum — with a "team lead" substituting for the Scrum Master — but it can be helpful to define other positions within your project as well. For example, you could treat customers or end-users as a distinct role and involve them in your agile process by collecting their feedback about how well your product meets their needs. You may also want to designate a finance lead, who is responsible for helping to keep the project within your budget. Likewise, for projects that need to meet specific regulatory requirements, a compliance or legal lead may be a useful role to include. Part of the reason agile has become so popular is that it can be used for virtually any type of project, on any scale. It works for small teams, but it can also support large enterprise projects that involve hundreds or even thousands of participants through SAFe — the Scaled Agile Framework. SAFe is a reference framework that provides guidelines for scaling agile practices to fit the needs of large organizations. It emphasizes principles such as delegating responsibility in order to keep agile processes flowing across large teams, avoiding rigidly defined processes that prevent multiple teams from working effectively in parallel, and collaboratively planning for medium-term and long-term project growth. If you're looking to implement agile on the enterprise level, SAFe — and the tools needed to make it work — is a great place to start. Having agile-friendly tools on your side goes a long way toward making it practical to implement agile methodologies, and Jira Software is one of the best. It provides a flexible yet consistent way to design project roadmaps, track progress, define roles, associate them with processes, and much more. Jira also offers add-ons for organizing projects according to the tenets of specific agile frameworks, including Kanban, Scrum, and SAFe. With Jira, agile teams can plan, structure and manage projects in whichever way works best for them, while also collecting the insights that support continuous improvement.
Agile offers a variety of benefits, but it can also be difficult to implement — especially if you have existing processes in place that you need to adapt to fit agile methodologies. And learning how to use tools like Jira in ways that best align with your business needs requires significant effort. The experts at Contegix can help address these challenges. Drawing on their deep experience with Jira and other Atlassian tools, Contegix consultants can help your team navigate the complex agile landscape and implement the right tools to make agile work for you. Contact us to learn more about how Contegix can empower your organization to implement agile practices.
You can sync your OneDrive files to your desktop for easy access via File Explorer or for offline access. In this lesson, you'll learn how to sync files and how to understand the sync status of files and folders in your libraries. This video is part of my FREE 30+ lesson self-paced online training course called Collaboration in Microsoft 365 (OneDrive, SharePoint, and Teams). Enroll today at https://www.NateTheTrainer.com for the full learning experience including lesson discussions, quizzes, exams, and a completion certificate. You can also watch the entire course as a YouTube playlist (just without the course discussions, quizzes, exam, and certificate). Be sure to subscribe to support my channel and for easy access to future content.

A common misconception is that Sync means "Refresh," but it doesn't – if something looks amiss, just refresh your browser. Sync is about accessing OneDrive via File Explorer on your machine so you can work with files locally, as opposed to through a browser. The first time you set up sync, you'll be walked through a wizard; if you choose to sync SharePoint libraries in the future, you won't have to go through the wizard each time. Synced files aren't available offline until you open one (making it available On this device) or you right-click a file and choose Always keep on this device (making it Always available). If a file is Online-only, you won't be able to access it if you lose your internet connection.

How to use synced files
Simply double-click a file to open it in the desktop version of its app (e.g. Word, Excel). Since the file is cloud-based, AutoSave will be active, so you can close the application once you're finished making changes, and those changes will automatically be synced to the cloud (assuming you're connected to the internet at the time).
The average business executive now sees the potential of big data and how it can be useful in operations, accounting and more. However, if he or she were to actually look at the reams of data that a company records on a daily basis, it would not make any sense. As ITProPortal noted, the deluge of information would get overwhelming very quickly. Analysis is a necessity, even with custom BI solutions in place, to properly assess the situation. What can greatly help digest data is semantic technology, which turns raw numbers into contextual, actionable reports.

Understanding what's in front
Semantic technology isn't just one type of software or hardware. Rather, it's a series of different functional concepts that seek to understand processed data. There's a distinct difference between processed and semantically contextualized data. The former merely contains numbers, variables, locations and other details a data point records. The latter actually consists of information a person can read that explains what is happening within the former. Developments often associated with semantics include natural language processing, data mining and artificial intelligence.

There's a good reason to consider this functionality. For many businesses, there's simply too much data and not enough time to actually process it all in a way that supports informed decisions. Moreover, it's difficult to gauge what information is actually useful because of the variables of volume, velocity and variety. With semantic technology, this becomes far more manageable. It provides the links to the data that create a contextual explanation. In other words, it's an abstraction layer that connects data to content and processes.

IBM provided an example of such connections through smart cars. There are hundreds of data points in a given area covering important details such as road conditions, traffic and weather. On their own, there's not much use to them. Through semantic technology, however, all of this information can be fed into a smart car that links it to its current location. From there, the vehicle can inform the driver of the situation ahead and offer routing suggestions if conditions on the current route would cause delays.

Another benefit of semantic technology for big data is that it's interoperable with most IT platforms. Because most of the data models associated with semantics aren't predefined and are simply connected to the data received, it can make flexible assessments based on different models, without having to worry about the platform it's on. By focusing on the relationships between different data points, semantics can deliver to businesses something that makes sense and helps them decide what to do next.
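One way to make "linking data to context" concrete is a small sketch with the rdflib Python library, mirroring the smart-car example above. The vocabulary here (road42, condition, locatedOn) is invented purely for illustration.

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Raw data points, stored as subject-predicate-object triples.
g.add((EX.road42, EX.condition, Literal("icy")))
g.add((EX.car7, EX.locatedOn, EX.road42))

# A semantic query follows the relationships: what condition applies to the road the car is on?
query = """
SELECT ?condition WHERE {
    <http://example.org/car7> <http://example.org/locatedOn> ?road .
    ?road <http://example.org/condition> ?condition .
}
"""
for row in g.query(query):
    print(f"Condition ahead: {row.condition}")

The point is that neither triple is useful on its own; the actionable answer emerges from the relationship between them.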
Researchers have found multiple vulnerabilities in MOXA ioLogik industrial controllers, which are widely used in industrial facilities such as utilities and manufacturing plants. Code injection, weak password policies and a lack of protection mechanisms allow hackers to execute arbitrary code within webpages and modify the settings of vulnerable devices. Mark James, Security Specialist at ESET, commented below.

Mark James, Security Specialist at ESET:

"Sadly, most software will have flaws or vulnerabilities; what's important is how quickly patches and fixes are created and made available for the end user to apply. This usually requires the user to download the patch and apply it to their environment, thus fixing the vulnerability. The problem, of course, is making everyone that is affected aware of the initial problem and that there is a fix available. Most of the flaws we see in the automation industry are proof of concept; they usually require a specific environment to be in place, but the impact could in some cases be catastrophic. Automation often involves heavy equipment doing precision work, and if it fails it could cause thousands of pounds of damage. If that equipment were to go wrong around or close to humans then there is always the potential of injury or even death.

"It's virtually impossible to have any software-driven machinery that is 100% secure. The very nature of software dictates that there is always the possibility of someone somewhere finding a way to do something that was not intended to be done. What's important is how quickly it's fixed; as more and more automation takes place, it's important to ensure that security is taken very seriously. Isolating systems and ensuring only physical access is required to update and maintain systems will keep the attack footprint down."
1 - Using Subqueries to Perform Advanced Querying
Search Based on Unknown Values
Compare a Value with Unknown Values
Search Based on the Existence of Records
Generate Output Using Correlated Subqueries
Filter Grouped Data Within Subqueries
Perform Multiple-Level Subqueries

2 - Manipulating Table Data
Insert Data
Modify and Delete Data

3 - Manipulating the Table Structure
Create a Table
Create a Table with Constraints
Modify a Table's Structure
Back Up Tables
Delete Tables

4 - Working with Views
Create a View
Manipulate Data in Views
Create Aliases
Modify and Delete Views

5 - Indexing Data
Create Indexes
Drop Indexes

6 - Managing Transactions
Create Transactions
Commit Transactions

Actual course outline may vary depending on offering center. Contact your sales representative for more information.

Who is it For?
Students should have basic computer skills, SQL skills, and be familiar with concepts related to database structure and terminology.
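As a brief taste of two topics from the outline, correlated subqueries and transactions, here is a self-contained sketch using Python's built-in sqlite3 module. The orders table and its rows are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

# Insert sample data inside a transaction; the with-block commits as a unit.
with conn:
    conn.executemany(
        "INSERT INTO orders (customer, amount) VALUES (?, ?)",
        [("alice", 50), ("alice", 150), ("bob", 20), ("bob", 40)],
    )

# Correlated subquery: find orders larger than that customer's own average.
rows = conn.execute("""
    SELECT o.customer, o.amount
    FROM orders AS o
    WHERE o.amount > (SELECT AVG(o2.amount) FROM orders AS o2
                      WHERE o2.customer = o.customer)
""").fetchall()
print(rows)  # [('alice', 150.0), ('bob', 40.0)]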
Biometrics as an access control technology has received something of a mixed response from the general market. The technology is often specified for one of two reasons. The goal is either to achieve positive identification of those entering a protected area, or to reduce the cost of credential management. Benchmark considers both aspects.

When you consider the role of biometric technology in the security marketplace, you soon discover that there are two somewhat different and, at times, contradictory sales pitches. The most established – and the one probably most accepted by those seeking to introduce the technology into projects – is that biometrics allows individuals to be positively identified when accessing a building or protected area. If this degree of identification is demanded, then often the customer will be willing to invest in systems of an appropriate quality. They will also accept that due to the more secure nature of the system, throughput (in terms of access time per individual) may be slightly lower than with other systems.

The second approach, and one which is becoming more common as the cost of biometric readers falls, is that money can be saved by virtually eliminating credential management. Such an approach puts savings in terms of the cost of credentials, and the resources to manage these, before higher security concerns. Often those selecting devices for such applications want throughput times that are the equivalent of tags or cards, along with biometric readers that carry a similar price tag to conventional access readers. These two differing approaches carry varied challenges, and whilst the technologies employed are similar, the considerations that need to be made are almost polar opposites!

The Biometric 'fit'
There are commonly three general methods for authenticating the access rights of an individual. These are by something a person possesses, such as a key, token or card; by something a person knows, such as a personal identification number (PIN), alphanumeric code or password; or by a biometric characteristic. Biometric systems are varied and can use physical characteristics such as a fingerprint, retina, iris or vein structure, or behavioural characteristics such as handwriting and speech patterns. The most common biometric identification systems are based on the fingerprint, although there are options that span voiceprints, signature analysis, hand geometry, retinal scanning, vein patterns, facial recognition, facial thermography and iris scanning.

Over the years, biometric systems have become more accurate, reliable and affordable. However, when choosing the right system, integrators and installers must assess not only the suitability of the devices used, but also whether a biometric solution will actually meet the expectations of the customer!

Verify or identify?
It is essential to understand the difference between biometric verification and biometric identification. Verification is the process of determining that the person requesting access is who they say they are. With regard to biometric systems, this involves matching the biometric characteristic to a template called up from a central database. As biometric verification uses a one-to-one comparison, there needs to be a secondary credential, and the biometric element simply confirms that the person using that credential is the individual to whom it – and the authority to enter – was issued. Verification works well and is effective where absolute identification is required.
However, if a user is seeking to eliminate credentials, then it somewhat defeats the object of the exercise. Biometric identification uses a one-to-many comparison in order to ascertain whether a given user is authorised for entry. The biometric characteristic alone is used, and is compared against all templates in the database until a match is found or the data is rejected as unidentified. Identification has the advantage of eliminating the need for a user PIN or card. In the past this approach was often dismissed as being too time-consuming to be feasible, especially when systems had a very high number of users. The truth is that with modern processing power, any delay should be negligible. There is one proviso to this: if the customer is seeking to eliminate the use of credentials as a money-saving measure, and subsequently opts for a budget system, the processing might not be up to the job. This will inevitably lead to their expectations not being met. Identification-based systems do require a robust processing engine in order to deliver the right levels of performance.

Assessing the accuracy of any biometric reader without a field trial is not always simple. Some systems have an adjustable threshold for sensitivity. These can be adjusted to require near-perfect template matches to virtually eliminate false accepts (approving an unauthorised user), or to only require a match of a certain percentage of data to eliminate false rejects (when an authorised user is not approved). If false accepts are eliminated, false rejects increase. Similarly, if the goal is to reduce false rejects, false accepts correspondingly increase. False accepts are a security risk. False rejects create user dissatisfaction. Some biometrics manufacturers have, on occasion, advertised very low false accept rates and very low false reject rates in a single system, without revealing that these rates require radically different sensitivity settings and cannot be attained simultaneously. Understanding system accuracy – even on readers with no sensitivity adjustment – is vital when considering how performance will impact on the user's expectations. This is true regardless of whether the user is seeking positive identification or trying to eliminate the use of credentials.

Real world issues
In reality, many of the issues associated with biometric systems will be specific to the application and how the system is being used. The type of biometric technology will also impact on real world performance. As an example, consider using fingerprint scanning for an external door. In average conditions, and for most of the time, such a system will work to an acceptable level. However, on very cold or wet days, many readers will struggle and throughput times could fall. First off, there will be practicalities such as users wearing gloves or carrying bags. They might have to put down anything they are carrying on the wet ground, and gloves will need to be removed. If numerous people are starting a shift at the same time, this could create a bottleneck. More problematic is the fact that fingerprint readers using optical scanning lose accuracy – with some simply failing to work – if a wet or even damp digit is presented. This is because moisture in the ridges of the fingerprint becomes compressed when a finger is placed on the scanning plate, and as a result the optical image is a black blob.
Whenever Benchmark tests fingerprint readers, we always assess performance with damp and wet digits, and to date have found that only readers using multispectral imaging can deliver consistent performance in a wide range of conditions, including when hands are wet, dirty, dusty or very dry. Multispectral imaging makes scans of the finger using differing light frequencies, with polarised and non-polarised sources. By using blue (430nm), green (530nm) and red (630nm) light, as well as white light scans, the technology captures differing aspects of the physiology of the presented finger. This is because the varying frequencies of light have different reflectance and refractance properties. Each frequency of light discerns different information about the presented finger. It is possible to scan the skin surface, the epidermis, for ridge and groove information. Multispectral imaging technology can also scan the dermis, the second layer of skin. Whilst the benefit of this might not be immediately obvious, the fingertips are literally riddled with tiny blood vessels. This is what makes touch so sensitive. These blood vessels mimic the ridges in the fingerprint; indeed, the ridges of a fingerprint conform to the dermal papillae. By scanning these, much information about the fingerprint can be gathered, even if the digit is applied to the sensor surface with high or low pressure, is dirty or wet, or the skin is damaged. Multispectral imaging sensors are manufactured by Lumidigm, and a number of manufacturers of fingerprint readers use this technology under licence. This weakness of optical readers highlights the fact that biometric solutions often have to be specified to suit the demands of the site. There is no 'one size fits all' solution, and deciding which biometric element to use is as important as correctly installing and configuring the final system.

A critical consideration when specifying a biometric system for a security application is whether the biometric characteristic is a 'unique' element. If a biometric feature is duplicable or changeable, the overall long term reliability and security of the system has to be questioned. The stability of the biometric feature during the lifespan of a given individual must be considered very carefully. Biometric features that are generally accepted as unique include fingerprints (which differ even between identical twins), the retina and the iris. Voice patterns tend to change with the individual's mood and health. The common cold or flu, for instance, would alter the tone and pitch of a person's voice. The heat patterns in a person's face, or facial thermography, are often affected by weather conditions. With vein pattern recognition, these structures change to varying degrees as the person ages, dependent on the particular individual. This necessitates a program of updating templates. Where high security is the objective, the choices tend to centre around fingerprint, retina or iris scanning. A point to note is that whilst iris scanning can be done without the subject having to touch the device, retinal scanning does require a degree of contact. Evidence has shown that users are more likely to resist the use of retinal scanners. If the end user is trying to reduce the costs associated with access control, then the option typically chosen is fingerprint scanning, because the readers carry a lower price-tag. Regardless of the method chosen, throughput time must allow for a consistent flow of traffic, especially at the start of shifts.
Data acquisition speed reflects the time required for the system to collect the biometric data on which the access decision is based. Throughput rates for biometric access control systems peak in the six to ten persons per minute range, meaning each person has from six to ten seconds after arrival to get through the door. Considering the time required to physically open and close a door, these are optimum rates, attained only by the best systems. Again, if the selection of a system is dictated by a desire to reduce budget, there is every chance that the expectations of an end user will not be met.

Biometric systems were originally designed for high security applications where one false accept could constitute a fatal flaw. With the correct specification, this can still be achieved. For many, the preferred approach is to opt for a verification system. If readers with distributed intelligence are selected, and smart card technology is used to allow a 'template on a card' approach – where the user credential includes a secure and encrypted biometric template to be used for comparison – an advanced system can function even when off-line, ensuring robust security.

There is, however, something of a question-mark over the idea of using biometric technology to implement an access control solution without the cost of credentials. The concept is touted by some manufacturers of lower cost solutions, but issues with use in external areas, throughput times and overall resilience mean that the customer's expectations might not be met. If selected appropriately, the right biometric system can provide an organisation with an effective method to identify and authenticate personnel. However, it is critical that the various system capabilities – including accuracy, speed, acceptability, enrolment time and the biometric feature being measured – are carefully validated.

Manufacturers of biometric systems – and especially those involved with fingerprint-based systems – often talk about how their products offer detection and rejection of 'spoof' attempts to defeat the biometric reader. Indeed, a quick search on the internet will show numerous attempts, many successful, to defeat a whole range of biometric devices using fingerprint scanning as a log-in (not necessarily security devices, but they are still used to highlight the issue). However, do these attempts indicate a real issue for security users?

Most so-called successful attempts have one thing in common: complicity. Usually these 'demonstrations' involve somebody registered with the device in question either making, or assisting in the creation of, a replica of the fingerprint from the registered digit. Depending upon the reader being tackled, this can range from a high resolution image of the fingerprint through to a mould. An image is fixed to a finger and presented, and can work with low grade 2D optical scanners. Moulds are used to make a thin latex film with the correct fingerprint, which is then worn by another user. Where complicity is an issue, it is possible to defeat some mainstream biometric fingerprint readers. That said, complicity can also defeat PIN code, card and tag based systems too!

Of course, if you rule out complicity, the situation does change somewhat. In such a scenario, it would become necessary for an intruder to 'lift' the fingerprint of a registered individual. This immediately introduces a wide number of issues which would have to be addressed.
As the fingerprint would need to be lifted surreptitiously, the would-be intruder would have to identify which finger was registered and ensure the correct print was lifted. Additionally, the actual collection of the print, and the subsequent process of making a working facsimile from it, are not straightforward. To have even a small chance of success, the would-be intruder would also require a working knowledge of the system in place, along with knowledge of the site's security operations. Even after all this effort, the spoof attempt might fail!

Given the difficulties associated with such entry attempts, would-be intruders are more likely to investigate a site and find another method of entry. It certainly would be simpler to find a weak spot in a site's defences than to participate in a long and complex spoof attack that may not work! This does not mean that sites will never suffer from spoof attempts, but it is worth completing a risk assessment with the end user before paying a significant amount for the promise of spoof detection and rejection!
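As promised earlier, the trade-off between false accepts and false rejects can be illustrated with a short script. This is a minimal sketch, not taken from any real reader: the match scores, the 0-1 score range and the threshold values are all invented for illustration. It simply shows that moving a single sensitivity threshold pushes one error rate down while pushing the other up.

# Minimal sketch: how one sensitivity threshold trades false accepts
# against false rejects. All scores below are invented for illustration.

# Match scores (0 = no similarity, 1 = perfect match) for genuine users
# and for impostors presenting to the same reader.
genuine_scores = [0.92, 0.88, 0.75, 0.95, 0.83, 0.69, 0.90, 0.79]
impostor_scores = [0.35, 0.52, 0.61, 0.28, 0.47, 0.66, 0.41, 0.58]

def error_rates(threshold):
    """A score >= threshold is accepted; below it is rejected."""
    false_rejects = sum(s < threshold for s in genuine_scores)
    false_accepts = sum(s >= threshold for s in impostor_scores)
    frr = false_rejects / len(genuine_scores)   # authorised users turned away
    far = false_accepts / len(impostor_scores)  # unauthorised users approved
    return far, frr

for threshold in (0.50, 0.65, 0.80, 0.95):
    far, frr = error_rates(threshold)
    print(f"threshold {threshold:.2f}: FAR {far:.0%}, FRR {frr:.0%}")

# The output shows FAR falling and FRR rising as the threshold tightens:
# the two 'very low' rates never occur at the same setting.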
What is a GPS tracking device?

A GPS tracking device is an electrical unit that can determine its exact location in real time using the Global Positioning System (GPS), which is a Global Navigation Satellite System (GNSS). Most GPS tracking devices are also compatible with other GNSSs such as Galileo and GLONASS; however, we will simply refer to GPS, as this is the most commonly known GNSS.

A GPS tracking device, or GPS tracker, is commonly fitted to business assets for effective fleet management and includes features such as real-time vehicle tracking. The GPS tracker includes a GPS receiver that receives radio wave signals from the constellation of GPS satellites to make real-time tracking possible. The tracker needs to receive signals from at least three GPS satellites to determine its position, a process referred to as trilateration (a fourth satellite lets the receiver correct its own clock and resolve altitude as well). In some locations, where signals are weak or there is no clear line of sight to GPS satellites, ground stations may be used to make it possible to continue reporting an asset's location. A sketch of the underlying trilateration calculation appears at the end of this article.

This location data can then be sent via the cellular network (e.g. GSM) for use by businesses using telematics (or GPS fleet tracking software), or by private consumers who may use it for keeping loved ones safe and receive the information on their cell phone (Android or iPhone). Information is normally sent at regular intervals (e.g. 30 seconds) and can include more than just location tracking. Car trackers can send data such as fuel consumption, vehicle speed and engine diagnostics via the OBD port. Notifications can be sent when a vehicle crosses a geofence, indicating it has entered or exited a designated area.

The functionality and use of GPS tracking devices extends beyond vehicle tracking. A whole range of assets can be tracked, transmitting GPS data to a central server in real time. For example, GPS tracking devices are used in asset tracking and can be self-powered, making them ideal for non-powered assets like shipping containers. Depending on the tracked asset's ping rate, battery life can be up to seven years or more.

GPS technology continues to be developed and will no doubt improve the location reporting capability of the GPS tracking devices currently available, widening the scope of how businesses can take advantage of this valuable technology.
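As referenced above, here is a minimal sketch of the trilateration idea in two dimensions. It is a simplification made for illustration: real receivers work in three dimensions with pseudoranges and clock-bias correction, and the positions and distances below are invented numbers.

import math

# Minimal 2D trilateration sketch. Positions and distances are invented.
# Each "satellite" is a known point; each range is the measured distance
# from the receiver to that point.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Return the (x, y) point whose distances to p1, p2, p3 are r1, r2, r3.

    Subtracting the circle equations pairwise gives a 2x2 linear system,
    solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if math.isclose(det, 0.0):
        raise ValueError("satellite geometry is degenerate (collinear points)")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver secretly at (3, 4); ranges computed from three known points.
sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist(s, (3.0, 4.0)) for s in sats]
print(trilaterate(sats[0], ranges[0], sats[1], ranges[1], sats[2], ranges[2]))
# -> (3.0, 4.0)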
Rice University computer scientists have discovered an inexpensive way for tech companies to implement a rigorous form of personal data privacy when using or sharing large databases for machine learning.

"There are many cases where machine learning could benefit society if data privacy could be ensured," said Anshumali Shrivastava, an associate professor of computer science at Rice. "There's huge potential for improving medical treatments or finding patterns of discrimination, for example, if we could train machine learning systems to search for patterns in large databases of medical or financial records. Today, that's essentially impossible because data privacy methods do not scale."

Shrivastava and Rice graduate student Ben Coleman hope to change that with a new method they'll present this week at CCS 2021, the Association for Computing Machinery's annual flagship conference on computer and communications security. Using a technique called locality sensitive hashing, Shrivastava and Coleman found they could create a small summary of an enormous database of sensitive records. Dubbed RACE, their method draws its name from these summaries, or "repeated array of count estimators" sketches. (A toy sketch of the idea appears at the end of this article.)

Coleman said RACE sketches are both safe to make publicly available and useful for algorithms that use kernel sums, one of the basic building blocks of machine learning, and for machine-learning programs that perform common tasks like classification, ranking and regression analysis. He said RACE could allow companies to both reap the benefits of large-scale, distributed machine learning and uphold a rigorous form of data privacy called differential privacy.

Differential privacy, which is used by more than one tech giant, is based on the idea of adding random noise to obscure individual information. "There are elegant and powerful techniques to meet differential privacy standards today, but none of them scale," Coleman said. "The computational overhead and the memory requirements grow exponentially as data becomes more dimensional."

Data is increasingly high-dimensional, meaning it contains both many observations and many individual features about each observation. RACE sketching scales for high-dimensional data, he said. The sketches are small, and the computational and memory requirements for constructing them are also easy to distribute.

"Engineers today must either sacrifice their budget or the privacy of their users if they wish to use kernel sums," Shrivastava said. "RACE changes the economics of releasing high-dimensional information with differential privacy. It's simple, fast and 100 times less expensive to run than existing methods."

This is the latest innovation from Shrivastava and his students, who have developed numerous algorithmic strategies to make machine learning and data science faster and more scalable. They and their collaborators have:

- found a more efficient way for social media companies to keep misinformation from spreading online,
- discovered how to train large-scale deep learning systems up to 10 times faster for "extreme classification" problems,
- found a way to more accurately and efficiently estimate the number of identified victims killed in the Syrian civil war,
- showed it's possible to train deep neural networks as much as 15 times faster on general purpose CPUs (central processing units) than GPUs (graphics processing units), and
- slashed the amount of time required for searching large metagenomic databases.
The research was supported by the Office of Naval Research’s Basic Research Challenge program, the National Science Foundation, the Air Force Office of Scientific Research and Adobe Inc.
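To give a feel for the idea, here is a toy sketch of a RACE-style summary built with a simple locality sensitive hash (signed random projections). It is not the authors' implementation: the hash family, sketch sizes and the Laplace-noise step are illustrative assumptions, but it shows the shape of the technique: hash each record into a small array of counters, repeat with independent hashes, and publish only the noised counters.

import numpy as np

# Toy RACE-style sketch: R independent LSH functions, each indexing a row
# of counters. Collision counts approximate a kernel sum over the dataset.
# Parameters and the noise step are illustrative, not the paper's recipe.

rng = np.random.default_rng(0)
DIM, R, BITS = 16, 50, 4                 # data dim, repetitions, hash bits
W = rng.standard_normal((R, BITS, DIM))  # signed-random-projection planes

def lsh_index(x):
    """For each repetition, pack the signs of BITS projections into a bucket id."""
    signs = (np.einsum("rbd,d->rb", W, x) > 0).astype(int)  # (R, BITS)
    return signs.dot(1 << np.arange(BITS))                  # (R,) bucket ids

def build_sketch(data, epsilon=1.0):
    sketch = np.zeros((R, 2 ** BITS))
    for x in data:
        sketch[np.arange(R), lsh_index(x)] += 1
    # Laplace noise on every counter for differential privacy (illustrative).
    return sketch + rng.laplace(scale=1.0 / epsilon, size=sketch.shape)

def kernel_sum_estimate(sketch, query):
    """Average, over repetitions, of the counter the query hashes into."""
    return sketch[np.arange(R), lsh_index(query)].mean()

center = rng.standard_normal(DIM)
data = center + 0.05 * rng.standard_normal((1000, DIM))  # one tight cluster
sketch = build_sketch(data)                  # only noised counters are shared
print(kernel_sum_estimate(sketch, center))   # high: query is in a dense region
print(kernel_sum_estimate(sketch, -center))  # low: query is far from the data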
In August 2019, the FCC released an order allowing telephone networks to sunset long-standing POTS phone line connections, clearing the way for a full phase-out nationwide by August 2, 2022. That deadline is fast approaching and could cost any organization still using these outdated phone lines thousands of dollars per month. What are POTS lines, why are they going away and could you use a connectivity assessment to identify replacement options? Read more to find out.

What Are POTS Lines Used For?

POTS lines (short for Plain Old Telephone Service) are copper-wired telephone lines that use analog technology. They have been in use for over 100 years, and an estimated 36 million lines are still active. But their lifetime is quickly ending.

POTS lines are common in business-related applications. In addition to their use for voice phone calls, organizations may have POTS lines for the following uses:

- Fax machines
- Elevator call boxes
- Fire alarms
- Burglar alarms
- HVAC systems
- And more

Any organization with POTS lines is looking at a significant increase to its telephone bill starting in August 2022. Right now, one line can cost $65-$100 per month. After August, that could rise into the thousands, with some lines anticipated to cost $3,000 per month. No one wants to be stuck with that phone bill, so the best solution is to act now to transition existing POTS lines to a replacement option.

Why Are POTS Lines Going Away?

The FCC finds POTS to be unsustainable for several reasons:

1. POTS Lines Are Expensive

Each POTS line currently costs an estimated $65-$100 per month, and multiple lines can be found throughout an organization. Beyond their obvious use for voice phone calls, POTS lines are also used for fax machines, ATMs, elevator call boxes, fire alarms, gate and door access and even HVAC systems. These items add up fast and can put unnecessary strain on any budget.

2. POTS Lines Are Exposed

Copper wires are always at risk of damage from the outside world. Bad weather conditions and natural disasters often leave POTS lines vulnerable and in need of constant repair and maintenance. That kind of downtime is unnecessary when digital options, like those found in Unified Communications platforms, exist.

3. POTS Lines Lack Monitoring Abilities

Due to their physical nature, POTS lines do not have the monitoring or remote management capabilities that digital platforms do. That means that when there is damage to your wired network, it is difficult to locate, and you will not know a line is down until you are actively trying to use it. When that happens, your entire network will have to shut down until professionals find what needs fixing.

4. POTS Lines Face Limited Support

Speaking of professional service, a workforce that once held ample knowledge of POTS lines is quickly dwindling, leaving few professionals to handle repairs when needed. Equipment shortages add more fuel to the fire and prevent work from getting done quickly. Combine all these factors and you have a recipe for frequent and costly network downtime.

Replacement Options for POTS Lines

When August 2, 2022, rolls around, major networks will be allowed to deactivate and/or raise prices on existing POTS lines. If you choose to keep using POTS, you can expect to pay thousands of extra dollars to use them. That's why now is the time to transition to a solution that satisfies the FCC mandate and meets the demands of the digital age.

Not sure if your organization may be affected by the FCC's POTS depreciation mandate?
Loffler can help you identify where you currently have POTS lines and plan your next steps. Request a free connectivity assessment today:
New forms of security threats have been appearing in recent years, and this will continue to be the case going forward. As a result, the need to effectively detect and combat these threats is going to become even greater. Cybersecurity automation tools and technology are one solution to this problem.

All industries today require heightened cybersecurity, especially as cyberattacks are becoming more prevalent and damaging than ever before. Nowadays, cyberattacks are automated and sophisticated enough to penetrate even the toughest networks, so it is vital for any company to protect itself from such threats. By using the same tools as attackers, you reduce the time needed to secure your network; manual defence alone is usually unsuccessful against automated threats.

The last twelve months have seen a huge increase in the number of cyberattacks on businesses globally, with the average security data breach costing $4.24 million USD per event. The single biggest factor in reducing the financial impact of data breaches was the use of cybersecurity automation by the companies that deployed it. So, let's look at the future of cybersecurity automation, and how your business can use this technology to defend against increased attacks.

What is cybersecurity automation?

Cybersecurity is the way systems, programs and networks are protected from attack by malicious actors. Security systems require monitoring: vast amounts of data need to be analysed, threat alerts checked, vulnerabilities looked for, and so on. As cyberattacks become more complex and sophisticated, the demands of security protection become too great to meet manually. Automating these repetitive, manual tasks means the process of detection, investigation and action can happen rapidly, and threats are stopped before they are able to disrupt business operations.

Cybersecurity automation can:

- Automatically search for threats
- Determine which potential threats need further investigation
- Decide if further response is needed and send alerts
- Contain and resolve the threat with preset protocols

Automation allows these repetitive workflows to take place in just seconds, without the human intervention that can take hours, days or even weeks, depending on the type and complexity of the cyber threat. (A minimal sketch of such a triage workflow appears later in this article.)

Industry experts, when talking about best practices in automation, will refer to security technology such as:

- Robotic process automation (RPA) – software that is programmed to do basic and repetitive tasks, by creating and deploying a software robot that can launch and run other software
- Security Orchestration, Automation and Response (SOAR) – a collection, or IT stack, of security software solutions and tools for browsing and collecting data from various sources

These security measures can collect and analyse threat data, prompt a security team member to act, or deploy automated reactions to data.

Are there benefits of cybersecurity automation?

It's no secret that cybersecurity is one of the biggest concerns for organisations today, with data breaches and security attacks increasing every month. Thankfully, there are plenty of security solutions available so businesses can keep their IT environments safe and their data protected.
While many businesses have dedicated IT employees on site to handle the day-to-day running of systems, cybersecurity is becoming more complex and requires far more investment in resources and time. To mitigate this problem, cybersecurity automation can reduce the time and the need for human intervention while addressing security threats. Going forward, the future of cybersecurity automation is all but assured as organisations continue to invest in tools to keep up with the rising complexity of the cybersecurity landscape.

Automation mitigates human error

We all make mistakes – but unfortunately human error is a contributing factor in 95% of cybersecurity breaches. Human error in the security context refers to unintentional action, or lack of action, that results in security breaches occurring. Cybercriminals know security measures are only effective if humans use them properly, and will look for and target those weaknesses. Many businesses are security aware but may not understand the sheer scope cybersecurity protection entails, leaving IT staff to manage as best they can while supporting the business environment. Trying to detect and prevent cyberattacks can be such a time drain that IT staff are unable to put preemptive defences in place, leading to mistakes and, consequently, threats that infect systems.

Automation is more efficient

Threat data is one of the most important tools in protecting businesses from cyberattacks. However, this huge amount of data from security technologies, both within and beyond the organisation, as well as from attack vectors, needs to be collated and scrutinised. The data is used to identify groups of threats that indicate an attacker's next step. The more data collected, the more accurate the results, and the lower the chance that the groups are just an anomaly. This is why it's important to choose automation software that collects data from internal security systems and aggregates global threat intelligence data. The analysis must also have enough computing power to keep up with the volume of threats coming through, which cannot be done manually. Automation and machine learning mean data can be processed faster and with more accuracy. Combined with threat analysis tools, this approach can detect more advanced and unique cyber threats, much faster than is possible manually.

Increasing levels of security alerts and event management mean security teams have to spend more hours of valuable time trying to find and resolve issues. Because there are many false positives (mislabeled alerts), that time is often wasted when it could have been spent on more important tasks. Security automation means those human hours aren't spent on repetitive processes, and security teams are freed up to deal with cyber threats more effectively.

Automation improves security innovation

While machines can be fast and efficient when given the right conditions, they can't be innovative. This is one of the biggest advantages of automation that businesses will benefit from: when security processes and the repetitive workload are lifted from security teams, they can work more effectively towards improving your business risk profile by focusing on problem solving. Automation allows IT teams to map out the security processes that make automation successful before they're implemented. Strategic planning is what humans do better than machines, as it requires insight and creative thinking.
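To make the four-step workflow listed earlier concrete, here is a minimal sketch of an automated alert-triage loop. It is illustrative only: the alert fields, score thresholds and response actions are invented, and a real deployment would call out to a SIEM or SOAR platform rather than print messages.

# Minimal sketch of an automated alert-triage loop: search, investigate,
# alert, contain. Fields, thresholds and actions are invented for illustration.

ALERTS = [
    {"id": 1, "source": "10.0.0.5", "type": "malware", "score": 0.93},
    {"id": 2, "source": "10.0.0.7", "type": "login_failure", "score": 0.41},
    {"id": 3, "source": "10.0.0.9", "type": "data_exfil", "score": 0.88},
]

INVESTIGATE_THRESHOLD = 0.5   # below this, log and move on
CONTAIN_THRESHOLD = 0.8       # above this, isolate automatically

def investigate(alert):
    """Enrich the alert; here, a stand-in for threat-intel lookups."""
    alert["known_bad"] = alert["type"] in {"malware", "data_exfil"}
    return alert

def contain(alert):
    """Preset protocol: isolate the host and open an incident ticket."""
    print(f"[CONTAIN] isolating {alert['source']} (alert {alert['id']})")

def notify(alert):
    print(f"[ALERT] analyst review needed for alert {alert['id']}")

for alert in ALERTS:                         # 1. search incoming telemetry
    if alert["score"] < INVESTIGATE_THRESHOLD:
        continue                             # benign noise: no action
    alert = investigate(alert)               # 2. automatic enrichment
    if alert["known_bad"] and alert["score"] >= CONTAIN_THRESHOLD:
        contain(alert)                       # 4. contain with preset protocol
    else:
        notify(alert)                        # 3. escalate to a human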
Automation improves compliance

Organisations must comply with regulations when it comes to cybersecurity, and many of these regulations require automated security processes. This means that to stay compliant, businesses need to adopt automated security solutions. Automation also helps companies stay up to date with compliance standards and is essential for those in regulated industries such as finance and healthcare, where data protection is critical.

Is cybersecurity automation the future?

As cyberattacks become more sophisticated and frequent, automation tools will become critical in detecting and preventing such attacks. Automating repetitive tasks, such as detecting the same type of attack or scanning for malware, bolsters security procedures and improves protection. Automation doesn't replace people, but it does help security teams, enabling them to deploy more specialised security solutions rather than having to constantly plug holes. As the security landscape becomes more complex, companies will need to continue investing in tools to ensure their data, devices and users are protected.

If you'd like to know more about how cybersecurity automation can protect your business, talk to the security experts at INTELLIWORX today.
How to Change an MPLS LDP Router ID

Multiprotocol Label Switching, or MPLS, is a powerful technique for speeding up networks and achieving low latency on your most time-sensitive packets. With MPLS, packets navigate the network through routes that are pre-determined to be the fastest. But the layer of connections that MPLS depends on can't be maintained, or even used, if the routers can't find one another. In this blog post, we explain how to change the identifier that an MPLS router uses to communicate with its neighbors and the address it uses to create neighborships. We'll cover the broad strokes of the process, and then the actual commands you'll need to configure an MPLS router's LDP ID for yourself.

What is MPLS?

Quick Definition: MPLS, or Multiprotocol Label Switching, is a technique for routing data through a network that is faster and more efficient than the default method. Normally, each router opens the header information of a packet, finds the packet's destination address, works out the next hop for the packet based on a long and complex lookup in a routing table, then sends it to the next router – for that router to do the same thing. With MPLS, when a packet enters a network, it's given a much shorter and simpler header called a Forwarding Equivalence Class that directs packets through predetermined routes. Routers in an MPLS-enabled network don't need to perform header analysis on packets with these special headers. This can make routing more efficient by giving low-latency priority to packets with real-time traffic – traffic that would be negatively affected by long delays.

What is LDP?

Quick Definition: LDP, or Label Distribution Protocol, is the protocol that enables MPLS. Routers using MPLS need LDP to learn who their neighbors are and to establish a web or mesh of connections between one another. Routers communicating with LDP advertise their presence and maintain connections to their neighbors' interfaces with so-called Hello messages.

How Does an MPLS Router LDP ID Get Chosen?

Any time you enable MPLS on a router and its interfaces, Label Distribution Protocol (LDP) is automatically turned on. So whenever a router is running MPLS, it's going to have something called an LDP Identifier, or LDP ID. An LDP Router ID is an IP address, and it's fundamental to MPLS working properly. If you're familiar with how OSPF chooses its Router ID, LDP uses a similar function. Essentially there is a series of three checks that the router goes through, and the Router ID gets chosen according to the first one that gets a positive response.

The first check for an LDP ID is manual configuration. This is when you've manually configured the router and told it which interface, or which loopback interface, you want used for MPLS. Part of manually specifying what interface a router should use is that the IP address sitting on that interface becomes the Router ID. So that's the first check: is there a configured Router ID? Second, if there is no configured LDP ID, MPLS will default to using the highest IP address on any loopback interface at the moment LDP comes up. Third and last, in the event that there are no loopbacks, as a last-ditch effort, the router will use the highest IP address on any of the other interfaces. Those are the three checks for Router ID: manually configured, highest IP on loopback, highest IP on interface.
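As a quick illustration of that three-step selection order, here is a small sketch in Python. It is purely illustrative pseudologic, not Cisco IOS source: the data structures are invented, but the precedence matches the checks described above.

# Illustrative sketch of LDP router-ID selection precedence.
# Data structures are invented; only the ordering mirrors the three checks.

def choose_ldp_router_id(configured_id, loopbacks, interfaces):
    """configured_id: manually forced ID or None.
    loopbacks / interfaces: lists of dotted-quad address strings."""
    as_tuple = lambda ip: tuple(int(octet) for octet in ip.split("."))

    if configured_id:                 # 1. manually configured wins
        return configured_id
    if loopbacks:                     # 2. highest loopback address
        return max(loopbacks, key=as_tuple)
    if interfaces:                    # 3. highest address on any interface
        return max(interfaces, key=as_tuple)
    return None                       # no usable address: LDP cannot start

print(choose_ldp_router_id(None, ["10.1.1.1", "10.2.2.2"], ["192.0.2.9"]))
# -> "10.2.2.2" (highest loopback, since nothing was configured)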
An Overview of How to Change an MPLS Router LDP ID [VIDEO]

In this video, Keith Barker covers how to change the router ID on a label switching router. He'll begin by discussing what determines default LDP Router IDs and where these settings come from, and then walk you through each step of the process you'll need to follow to change it.

Why Does the MPLS Router ID Matter?

At this point, you've learned a bit about LDP and MPLS, and you might be wondering why it even matters what the LDP ID for a router is. Why would it matter if a router chooses one interface or another – wouldn't any loopback as the Router ID work? The answer lies in the messages that the MPLS router broadcasts when it advertises itself and checks for neighbors. The LDP ID has to be an IP address that its neighbors can reach. Imagine a core network that comprises three routers daisy-chained together: R2, R3, R4. In the Hello messages that R3 sends to R2 and R4, R3 is going to use the LDP identifier, that IP address, as the Transport IP Address. And if that address isn't reachable by R2 and R4, they won't be able to become LDP neighbors with R3.

How to Verify the LDP ID of an MPLS Router

Next, we're going to look at the command line inputs that will let you verify and view what the LDP ID is on a router. Then, we'll walk through actually changing it. As always, here at CBT Nuggets, we think that the best way to learn is to actually do the steps in a controlled environment. If you have a network virtualization tool, we recommend you try manually configuring the interfaces on your R3 router with these commands. Of the three routers we mentioned earlier, we have R3 connected to R2 through gig 2/0 and connected to R4 through gig 3/0.

We'll start off by asking R3 exactly what its LDP ID is. There are several ways of seeing that, but we'll discover it by typing:

show mpls ldp discovery detail

At the top of the table of data this will output, you can see the "Local LDP Identifier". Ours is 188.8.131.52:0. The :0 indicates that we're using system-wide label space – as opposed to an individual set of labels per interface. What this means is that as R3 sends Hello messages out gig 2/0 and gig 3/0, in the payload of those little LDP Hellos it's advertising that the transport address (which means "the address to use if you want to connect to me") of R3 is 188.8.131.52, which matches the router's LDP ID. In this same table, you can read what each interface thinks the Transport IP Address is. That tells you the address anybody who wants to neighbor up with R3 will be trying to connect to.

How to Change an MPLS Router LDP ID

Next, we'll configure some of the LDP settings for different interfaces on R3. First, let's create a brand-new loopback interface with an IP address. We'll also add the network into OSPF. Because, remember, if we have a router ID that's not reachable by its neighbors, we won't be able to form LDP neighborships with those neighbors.

To start, we want to enter configuration mode. We're configuring a new loopback first. In this case, why not use interface loopback 8. Type:

interface loopback 8

All we're going to do is give it an IP address of 220.127.116.11/32 in interface config:

ip address 220.127.116.11 255.255.255.255

And we'll just pop it right into OSPF 1 area 0 as well:

ip ospf 1 area 0

It's never a bad idea to check that what you've done is actually working. Let's validate that it really is participating in OSPF.
To run a quick verification of that, type:

do show ip ospf int brief

This outputs a table for us to review, and we can see that we have loopback 8 and that its address is part of OSPF. And, yet again, the most important reason for checking is that we need to make sure R2 and R4 can reach this IP address. Once we set the LDP ID to that new address, it will be the transport address that R3 says he should be reached on as he sends his Hellos.

Now all that remains is to force the router's LDP ID to be the IP address on loopback 8. The command we're using is "mpls ldp router-id". Part of the command accepts the target interface, which in this case is loopback 8. We'll also add the keyword "force", which essentially tells the configuration that the change needs to happen right away. When it's all written out, the command to change an MPLS LDP Router ID is this:

mpls ldp router-id loopback 8 force

How to See Changes to an MPLS Router's LDP ID

After we make those changes to R3's interfaces and LDP ID, we get a series of returns. The first thing we learn is that our existing relationships with R2 and R4 have been removed (the reason being that the LDP Router ID has changed). But right below that, we also see that because the 220.127.116.11 address was reachable by R2 and R4, they were both able to form sessions with R3. What's really happening behind the scenes is that the device with the highest router ID actually initiates the sessions, but because R2 and R4 could reach 220.127.116.11, when R3 went over to initiate the sessions, R2 and R4 were both able to reply, and the sessions were successful.

If we wanted to verify our Router ID, it'd be really simple: we use the same command that we did previously. Remember that we're still in configuration mode, so we add "do" to the front of it. The same command from before should show us not only the interfaces that are participating within MPLS and LDP, but also what the new LDP ID is.

do show mpls ldp discovery detail

This table of data should look familiar. We look to the top under "Local LDP Identifier" and we see 220.127.116.11:0. That means that when R3 sends its Hellos to its neighbors, it's advertising that the transport address is also 220.127.116.11. On top of all of that, we know that the address is reachable by R2 and R4, which means this configuration won't be causing any problems – it's not breaking the MPLS path.

If we wanted to verify our neighborships, we could do that by typing:

do show mpls ldp neighbor

And that would help us verify that our LDP sessions are in place with R2 and with R4. This is one small part of MPLS, but hopefully the LDP ID of a router makes a little more sense and you feel prepared to change it whenever the need arises. If you're interested in learning more about MPLS, the MPLS Fundamentals training from CBT Nuggets contains 15 videos and 5 hours of training about configuring and maintaining MPLS. Or maybe you're looking for a much quicker explanation of MPLS and how it works. In that case, this CBT Nuggets YouTube tutorial explains MPLS and how it functions in less than six minutes!
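Pulling the scattered commands together, the full sequence on R3 looks something like the following. This is a sketch assuming a Cisco IOS router with MPLS already enabled on its interfaces, and it reuses the loopback address from the walkthrough above; adjust the interface and address to suit your own lab.

configure terminal
 interface loopback 8
  ip address 220.127.116.11 255.255.255.255
  ip ospf 1 area 0
 exit
 mpls ldp router-id loopback 8 force
 do show mpls ldp discovery detail
 do show mpls ldp neighbor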
A proxy server has a few different definitions according to the type of proxy you are using. In general, though, a proxy server is responsible for allowing you to establish an indirect network connection to other locations and services with your PC or mobile device. Some consider a proxy server to be an application, where others define it as a PC or system which provides a service that allows you to establish an indirect connection to another network. As a whole, a proxy server is capable of filtering traffic and handling client requests so the request can be forwarded to the appropriate server. In plain English, this means that a proxy helps you to make a secure connection to another network while keeping your PC and Internet Protocol address anonymous the majority of the time.

There are many advantages to be gained by using a proxy server, some of which include the ability to surf the web anonymously, increasing the speed of your browsing activity by caching web pages, offering enhanced security, and blocking questionable websites, to name a few. There are many different types of proxy servers, each of which serves a different purpose. In this article we will discuss a few of the most common proxy servers and the purposes they serve.

SSL Proxy

SSL stands for Secure Sockets Layer and is a protocol which is used to protect your data during transmission. SSL is commonly used on the Internet when you execute a transaction and is symbolized by the padlock icon you typically see next to the URL address at the top of your browser window. It is also symbolized by https as opposed to http. An SSL proxy server is connected between the client and the server to provide Secure Sockets Layer support. SSL basically intervenes in the connection between the sender and the receiver, which prevents hackers from attacking the network and intercepting personal or financial information being transmitted over the Internet.

FTP Proxy

FTP stands for File Transfer Protocol and is used in many different applications where you are uploading data to a server. A prime example of how FTP works is when you are building a website. In order to make the website visible to the world, you must make a connection with the server space you reserved and then upload the website folders to the server to make the site live on the Internet. An FTP proxy server in advanced mode will offer enhanced security for uploading files to another server. The server typically offers a cache function and encryption methods which make the transmission process secure and safe from hackers. In addition to relaying traffic in a safe environment, an FTP proxy keeps track of all FTP traffic.

HTTP Proxy

An HTTP proxy provides for the caching of web pages and files, which allows you to access them faster. Most browsers utilize an HTTP proxy to cache websites you frequently visit so you can quickly access them without having to completely download the page all over again. When you type in a URL for a website you want to access, an HTTP proxy will search for the website address in the cache. If the website address is located, it will return the website to you immediately, as opposed to you having to wait for it to download. The downside of an HTTP proxy in some instances is that the cache can build up, which slows down your browsing activity. To get around this, you must clear the cache to speed up your browsing activity.
Additionally, an HTTP proxy is capable of filtering content from web pages and reformatting pages to suit the device you are using to access the page.

SOCKS Proxy

SOCKS stands for SOCKets and is different from a normal proxy since it is considered to be an application. When you compare a SOCKS proxy to an HTTP proxy, the HTTP proxy handles the request you send to access content on the Internet. On the other hand, when you contact a SOCKS proxy server, the connection is established through an exchange of messages which sets up the proxy connection. The connection works through an Internet Protocol (IP) tunnel which also deploys a firewall. SOCKS proxy requests originate from the firewall using the SOCKS protocol, and then the network (Internet) communicates with the SOCKS server request as if it were your own machine making the actual request to access a web page. In order to use a SOCKS proxy, your PC must be able to handle the SOCKS protocol, and it is necessary to operate and maintain a SOCKS server. SOCKS technology was originally developed for accessing the Internet, and its main features are the ability to bypass default routing on a LAN (Local Area Network) or internal network, plus it can provide authentication for protocols that you would otherwise be unable to access.

Anonymous Proxy

An anonymous proxy is just as the name implies and provides you with privacy while you are browsing the Internet. It protects your privacy by hiding your IP (Internet Protocol) address from website owners, eavesdroppers, and other sources that exploit your identity by sending you targeted advertising based on your location or, in the case of eavesdroppers, stealing your personal information and listening to your conversations. An anonymous proxy is also capable of eliminating cookies which track your activity, popup advertisements which can be an annoyance as well as a danger to your PC, and other components that invade your privacy while you are surfing the Internet. There are a number of anonymous proxy sites online which allow you to protect your identity while you surf the web. Some of them are free and others charge a fee; you also have to be careful which ones you choose, as some of them are ineffective and can harm your PC. Others provide a secure and safe way to browse the Web.

The quality anonymous proxy servers will provide an SSL (Secure Sockets Layer) tunnel which blocks eavesdroppers. It works similarly to the SSL proxy described earlier. Quality anonymous proxy servers should also support FTP, HTTP, and HTTPS protocols. They are also effective in bypassing your Internet provider to ensure your IP address remains anonymous.
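For readers who want to see what using a proxy looks like in practice, here is a brief sketch using Python's requests library. The proxy hostnames and ports are placeholders, not real servers, and the SOCKS scheme requires the optional requests[socks] extra to be installed.

# Sketch: sending HTTP requests through an HTTP proxy and a SOCKS proxy.
# The proxy addresses below are placeholders; substitute your own.
import requests

# HTTP/HTTPS proxy: the proxy forwards (and may cache or filter) the request.
http_proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}
resp = requests.get("https://example.com", proxies=http_proxies, timeout=10)
print(resp.status_code)

# SOCKS5 proxy: traffic is tunneled at a lower level than HTTP.
# Requires: pip install "requests[socks]"
socks_proxies = {
    "http": "socks5://proxy.example.com:1080",
    "https": "socks5://proxy.example.com:1080",
}
resp = requests.get("https://example.com", proxies=socks_proxies, timeout=10)
print(resp.status_code)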
When there's a large cyber attack, the knee-jerk reaction is often to lay blame on sophisticated attackers, entire nation states, or criminal gangs. It may be easy to point the finger, but doing so can actually negatively impact your company's security. Blaming the cyber attacker redirects anger from where it should be focused: on the company's security infrastructure. And when the "sophisticated attacker" turns out to be an amateur, an unsecured company is left looking foolish (and even becoming a target for other cyber attacks). The following myths about cyber attacks are what keep companies from fortifying their security infrastructure. Make sure that you are informed and put the proper emphasis on your company's cyber security.

Myth #1: All Cyber Attacks are Sophisticated

If you don't invest in your cybersecurity, you're more likely to suffer from an amateur attack than a sophisticated one. There are several reasons that amateur hackers may target a company, including:

- Company leaders refuse to acknowledge that the company is a target and take the proper precautionary steps.
- The company does not invest in even basic cybersecurity.
- Cybersecurity is not updated.

Myth #2: All Cyber Attackers are Professionals

While high-profile, sophisticated cyber attacks do happen, most attacks come from individuals or small groups with little experience. In fact, if you don't have strong cybersecurity, odds are your hacker will be a bored teenager trying to learn about hacking. Your network essentially acts as a training tool for kids who want to learn hacking and have a lot of time on their hands to break through your minimal defenses.

Myth #3: Expensive Cybersecurity is the Answer

Just spending money on security without considering anything else is not a foolproof way to ensure your company is not attacked. Before splurging on expensive cybersecurity, consider the following:

- Do we have the basics right? Even if you have the most expensive cybersecurity, if you have one unsecured server, a hacker can find it and exploit it.
- What are our gaps? Be sure that you first carry out a gap assessment and external audit to determine your strengths and weaknesses before building your cybersecurity.
- What are our risks? Make sure that the security services and products you plan to purchase address your most relevant risks.

Myth #4: All Cyber Attacks Come from outside the Company

Did you know that most attacks require the privileges and access rights of an insider in order to succeed? Managing the privileges and access rights of your employees is key to guarding your company against malicious attacks.

Myth #5: Nothing Can Prevent a Cyber Attack

While hackers may be determined to break through even the most sophisticated cybersecurity, it doesn't mean there's no use in investing in cybersecurity at all. The harder your network is to hack, the more likely a potential hacker is to fail or give up. So, no cybersecurity is a guarantee against hacking, but being protected is far more advantageous than being exposed.

Get in Touch with FiberPlus

FiberPlus has been providing data communication solutions for over 25 years in the Mid Atlantic market for a number of different sectors. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate.
Our solutions now include:

- Structured Cabling
- Electronic Security Systems
- Distributed Antenna Systems
- Public Safety DAS
- Audio/Video Services
- Support Services
- Specialty Systems
- Design/Build Services

FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.

Have any questions? Interested in one of our services? Call FiberPlus today at 800-394-3301, email us at [email protected], or visit our contact page.
A new attack method dubbed PDFex extracts the contents of encrypted PDF files as plain text. All an attacker needs is a single block of known plaintext, or for the legitimate user to open the encrypted document.

PDF files are encrypted so sensitive information can be exchanged and stored without any additional mechanisms. The encryption key here is derived from the PDF password; recent versions of PDF support the 256-bit AES encryption algorithm. PDF encryption is used in several public and private sectors to protect sensitive information, and this new attack focuses on vulnerabilities in the PDF specification. Researchers analyzed 27 popular PDF viewers: 23 are vulnerable to direct exfiltration, and all of them are vulnerable to CBC gadgets.

Novel Attacks on PDF Encryption

Researchers developed two possible attack methods, exploiting the security limitations of PDF encryption in active-attacker scenarios.

The first method abuses a PDF feature; the attacker must gain access to the encrypted PDF. An encrypted PDF file can contain both ciphertext and plaintext parts, which allows an attacker to launch direct exfiltration attacks once the victim opens the file.

The second method is based on Cipher Block Chaining (CBC). PDF encryption uses CBC with no integrity checks, which allows attackers to create self-exfiltrating ciphertext parts using CBC malleability gadgets, and to modify the existing plaintext or create new plaintext.

PDFex Attacker Scenarios

It is assumed that attackers already have access to the encrypted PDF document but do not know the password or the decryption keys. What attackers can do is modify the encrypted file, by changing the document structure or adding new unencrypted objects, and then send the modified file to the victim. Researchers classified the attack into two scenarios:

Without user interaction – the victim only needs to open the modified PDF file, which displays the PDF document.

With user interaction – the victim needs to interact with the PDF document (just a mouse click).

The attack is successful if the attackers extract the complete data, or parts of the data, as plain text from the encrypted PDF file. Note that the PDFex attack does not allow the attacker to remove or learn the password; instead, it allows attackers to read the data in the encrypted file.

The following describes how data is exfiltrated with the Direct Exfiltration and CBC Gadgets methods.

Direct Exfiltration (Attack A)

With this attack type, the attacker abuses the flexibility in PDF encryption that allows mixing ciphertext and plaintext elements. An attacker can modify the PDF structure to add unencrypted elements that are controlled by the attacker. This action can exfiltrate data in three possible ways:

- Submit a form
- Invoke a URL
- Execute JavaScript

The attacker then sends this document to the victim, and the data-leaking HTTP request automatically leaks the full data in plaintext to the attacker's server once the victim opens the file.

CBC Gadgets (Attack B)

"PDF encryption generally defines no authenticated encryption, attackers may use CBC gadgets to exfiltrate plaintext. The basic idea is to modify the plaintext data directly within an encrypted object", reads the paper.

To launch the attack, the attacker needs to satisfy two preconditions:

- Known plaintext: to manipulate the encrypted content, the attacker must know parts of the plaintext.
- Exfiltration channel: the attacker must have an exfiltration channel.
PDF Versions Affected

It is important to note that for both attacks, the attacker gets full control of the appearance of the displayed document. According to the researchers, 21 of 22 desktop PDF viewer applications and 5 of 7 online PDF validation services are vulnerable to at least one of the attacks.

"We reported our attacks to the affected vendors and have proposed appropriate mitigations. However, to sustainably eliminate the root cause of the vulnerabilities, changes in the PDF standard are required," the paper reads.
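The CBC gadgets attack rests on a property worth seeing in isolation: without integrity protection, flipping bits in one ciphertext block flips the same bits in the next plaintext block. Below is a brief sketch of that primitive using the widely used Python cryptography library; it demonstrates the malleability itself, not the full PDFex exfiltration chain, and the key and message are invented.

# CBC malleability in isolation: XORing a byte of ciphertext block N
# flips the same byte of plaintext block N+1 after decryption.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
plaintext = b"BLOCK-ONE-16-BYT" + b"pay $100 to Bob."  # two 16-byte blocks

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = bytearray(enc.update(plaintext) + enc.finalize())

# XOR ciphertext block 1, byte 5, to turn the '1' of "100" in plaintext
# block 2 into a '9'; no key is needed, and plain CBC cannot detect it.
ciphertext[5] ^= ord("1") ^ ord("9")

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
tampered = dec.update(bytes(ciphertext)) + dec.finalize()

print(tampered[16:])  # b'pay $900 to Bob.'  (edited as intended)
print(tampered[:16])  # block 1 now decrypts to garbage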
A new treatment developed by Tel Aviv University could induce the destruction of pancreatic cancer cells, eradicating up to 90% of cancerous cells after two weeks of daily injections of a small molecule known as PJ34. Pancreatic cancer is one of the hardest cancers to treat; most people diagnosed with the disease do not live even five years after diagnosis.

The paper was published in the October issue of the journal Oncotarget, a peer-reviewed biomedical journal that covers oncology research. The journal is a lesser-known publication within the academic world, and the studies it publishes generally have a lower impact factor than those published in leading medical journals like The New England Journal of Medicine, Nature or The Lancet. (Impact factor is the number of times a published article has been cited in a year, divided by the total number of articles published in the previous two years.)

The study, led by Prof. Malka Cohen-Armon and her team at TAU's Sackler Faculty of Medicine, in collaboration with Dr. Talia Golan's team at the Cancer Research Center at Sheba Medical Center, was recently published in the journal Oncotarget. Specifically, the study found that PJ34, when injected intravenously, causes the self-destruction of human cancer cells during mitosis, the scientific term for cell division. The research was conducted with xenografts, transplantations of human pancreatic cancer into immunocompromised mice. A month after being injected with the molecule daily for 14 days, "there was a reduction of 90% of pancreatic cells in the tumor," Cohen-Armon told The Jerusalem Post. "In one mouse, the tumor completely disappeared."

"This molecule causes an anomaly during mitosis of human cancer cells, provoking rapid cell death," she said. "Thus, cell multiplication itself resulted in cell death in the treated cancer cells." Moreover, she said, PJ34 appears to have no impact on healthy cells, so "no adverse effects were observed." The mice, she said, continued to grow and gain weight as usual. Pancreatic cancer is currently resistant to all treatments, and patients have poor chances of surviving for five years after being diagnosed.

Prof. Malka Cohen-Armon, left, of Tel Aviv University and Dr. Talia Golan of Sheba Medical Center (Tel Aviv University).

In parallel studies, the research team found that the molecule "acts efficiently" when tested on other types of deadly cancer cell cultures in the lab, including aggressive forms of breast, lung, brain and ovarian cancer, all of which are resistant to current therapies, she said. Cohen-Armon added that the team hopes to start testing the effect of the molecule on larger animals, and eventually on humans, which could take some two years, depending on funding.

Despite substantial advances in cancer treatment, pancreatic ductal adenocarcinomas (PDAC) have a limited response to current treatments and a low 5-year survival rate of about 6% [1–3]. Thus, there is an urgent need to explore new mechanisms for treating this lethal malignancy. Recent reports have discovered the capability of phenanthrenes to kill human cancer cells that are resistant to currently prescribed apoptosis-inducing agents [4, 5]. Furthermore, we identified phenanthrenes acting as PARP1 inhibitors that efficiently eradicate a variety of human cancer cells without impairing benign cells [6–9]. Notably, their exclusive cytotoxic activity in human cancer cells was independent of, and unrelated to, PARP1 inhibition [7–11].
The phenanthrenes PJ34, Tiq-A and phenanthridinone (Phen) act as PARP1 inhibitors due to their binding potency to the nicotinamide binding site in the catalytic domain of PARP1 [12, 13]. However, their PARP1 inhibition per se does not impair or eradicate human malignant cells, including pancreas cancer cells, PANC1 [7–9]. In contrast, at higher concentrations than those causing PARP1 inhibition, PJ34, Tiq-A and Phen eradicate a variety of human cancer cells by 'mitotic catastrophe cell death'. This cell death follows mitosis arrest caused by preventing the post-translational modification of NuMA (nuclear mitotic apparatus protein-1) that enables its binding to proteins. In the tested human cancer cells, NuMA binding to proteins enables its clustering in the spindle poles, which is crucial for stabilizing the spindle, a prerequisite for chromosome alignment in the spindle mid-zone and normal anaphase. Notably, NuMA silencing or down-regulation of NuMA prevents mitosis in all cell types [14–17]. In human malignant cells, specific post-translational modifications of NuMA enable and promote NuMA binding to proteins [18–21]. These post-translational modifications are most efficiently prevented by PJ34 [8, 20, 21]. In accordance, in PJ34-treated cancer cells, NuMA is arbitrarily dispersed in the spindle, instead of being clustered in the spindle poles. The consequences are an unstabilized spindle pole with dispersed chromosomes, instead of segregated chromosomes aligned in the spindle mid-zone [8, 17, 22, 23]. This abnormality jeopardizes normal ploidy of the 'daughter' cells and evokes mitosis arrest [17, 23]. Cells with these abnormal spindles are eradicated by a rapid death mechanism, 'mitotic catastrophe' cell death [9, 23].

Here, the efficacy of PJ34 to eradicate human pancreas cancer cells is tested in cell cultures and in xenografts. PANC1 cells are the cells most frequently identified in human pancreas tumors [1–3]. Pancreas xenografts were developed in nude mice. Mitosis arrest and cell death were measured in PANC1 cells incubated with PJ34. In xenografts, eradication of human PANC1 cells, deduced from a massive reduction of human proteins in the tumors, was measured 30 days after the treatment with PJ34 had been terminated. Increased necrosis measured in the PANC1 tumors of mice treated with PJ34 supports cell death caused by PJ34 in the xenografts. Normal cells infiltrated into the tumors were not impaired by PJ34. A similar cytotoxic activity of PJ34 was observed in patient-derived PDAC cells and xenografts. These results indicate the potency of PJ34 to eradicate human pancreas cancers.

Treatment with PJ34 causes mitosis arrest and cell death in human pancreas cancer cells PANC1

Measuring changes in the ploidy of PJ34-treated PANC1 cells with stained DNA by flow cytometry reveals piled-up PANC1 cells with double DNA content, unable to proceed to mitosis (Figure 1). A similar cytotoxic effect of PJ34 is measured in other human malignant cell types. Recently, the molecular mechanism causing mitosis failure and arrest in human cancer cells incubated with PJ34 has been disclosed. The cell cycle profile of PANC1 cells incubated with PJ34 reflects their mitosis arrest and cell death (Figure 1). PJ34 (20 μM or 30 μM) was applied 24 hours after seeding, and PANC1 cell eradication and the kinetics of S-phase entry and G2/M transition were measured by flow cytometry after 48, 72 and 120 hours of incubation (Methods).
After 48 hours of incubation with PJ34, failure to proceed into mitosis preceded the cell death measured after 72 hours of incubation. These cells were eradicated after 120 hours of incubation with PJ34 (Figure 1).

PJ34 efficacy tested in human PANC1 xenografts

PJ34-evoked cell death in PANC1 cells has been further examined in PANC1 xenografts. The course of the experiment was based on previous experiments with PANC1 xenografts. A total of 24 nude mice were randomly divided into 3 groups of 8 mice each. PANC1 cells from cell cultures were subcutaneously injected (5 × 10⁶ cells per mouse; Methods). Treatment with PJ34 started once PANC1 tumors reached a mean volume of approximately 100 mm³ (about 15 days after the mice were injected with PANC1 cells). Nude mice of one group were injected intravenously (IV) with saline, daily, 5 days a week for 3 weeks (control group). In the second group, nude mice were injected IV with PJ34 dissolved in saline, daily, 5 days a week for 3 weeks. In the third group, nude mice were injected IV with PJ34 in saline 3 times a week, every second day, for 3 weeks. Each injected dose contained 60 mg/kg PJ34 in 100 μl saline (about 1 mg PJ34 per mouse). The volume of the developing tumors and the mice's weight were monitored throughout the experiment (Figure 2A). In previously described experiments with PANC1 xenografts, PANC1 tumors were developed over 60 days. In our experiments, on day 63 of the study, and 30 days after terminating the treatment with PJ34, mice were euthanized. Their excised tumors were measured and prepared for immunohistochemistry. During the 30 days after treatment with PJ34 had been terminated, mice did not receive any additional treatment.

No abnormalities, toxic signs, or animal deaths were observed during the study. On the contrary, all mice gained weight during the study, with no difference between the control and the two PJ34-treated groups, evidence for the wellbeing of the treated mice (Figure 2A). In support, no toxicity was found in an additional study with immunocompetent BALB/c mice IV-injected with even higher doses of PJ34. These mice were examined for 14 days after treatment. No signs of toxicity were detected, and PJ34 did not impair their weight gain (Supplementary Figure 1).

At the end of the study with the immunodeficient (nude) mice, only the group treated daily with PJ34 (5 times a week) showed a significant reduction of about 40% in tumor size (volume and weight, Figure 2B). In addition, in one treated mouse in this group, the tumor started to shrink on day 35 and disappeared on day 56 (Figure 2B, mouse #19). Furthermore, immunohistochemistry conducted on the excised tumors of all the tested mice revealed a massive reduction in human proteins and increased necrosis in the tumors of mice treated with PJ34. The excised tumors were sliced and prepared for histochemical analysis (Methods). Hematoxylin and eosin labeling of the slices revealed higher necrosis in the tumors developed in PJ34-treated nude mice (Figure 3). In addition, the amounts of three arbitrarily selected human proteins were measured 30 days after the treatment with PJ34 had been terminated (Figure 4). Immunolabeling of these human proteins in all the tumors revealed that both treatment regimens with PJ34 (daily and 3 times a week) caused an 80–90% reduction in their amount in the tumors. The daily treatment provided better results with higher statistical significance (Figure 4E). PANC1 cells were the only human cells in the PANC1 xenografts.
Thus, the similar and substantial reduction in the amount of three arbitrarily selected human proteins, 30 days after the treatment with PJ34 had been terminated, is attributed to the eradication of human PANC1 cells in the tumors of nude mice treated with PJ34 (Figure 4). This is supported by the enhanced necrosis in tumors of mice treated with PJ34, and by the massive eradication of PANC1 cells incubated with PJ34 (Figures 1 and 3, and [7, 8]). The relatively short turnover of proteins in the cell (hours), as well as the short MRT (mean residence time, 41 min) of PJ34 in mice, argue against a possible effect of PJ34 on the expression of the three arbitrarily selected human proteins 30 days after the treatment with PJ34 had been terminated. In addition, there was no reduction in the amount of an abundant protein in fibroblasts that infiltrated the tumors of mice treated with PJ34, further arguing against a general effect of PJ34 on protein expression 30 days after the treatment had been terminated.

The specific immunolabeling of each of the three proteins was measured in 3–4 different slices of each tumor, each immunolabeled with a specific antibody (Methods). Antibodies were directed against the human kinesin HSET/KifC1, the C-terminal residue of the human nuclear protein Ku-80, or the Human Leukocyte Antigen (HLA), frequently found on the membrane of human cancer cells (Figure 4A–4D). Measurements of PJ34-induced changes in the immunolabeling of the three proteins provided results with a higher statistical significance. Normal cells infiltrating the pancreas tumors (stroma) were immunolabeled against smooth muscle actin (αSMA), which is mainly expressed in normal fibroblasts of both humans and rodents [24, 25] (Figure 4C). The benign cells infiltrating the human PANC1 xenografts were probably of mouse origin [26, 27]. In each slide, 6–8 different 'fields' were analysed (Methods; Figure 4E). Abnormal human tissue that might comprise metastases was not found in the liver or gut of either the treated or the untreated control mice. This accords with the other signs indicating the wellbeing of all the mice in this study (Figure 2A).

PJ34 efficacy tested in patient-derived pancreas cancer cells and xenografts

In view of the evidence indicating eradication of PANC1 cancer cells by PJ34 (Figures 1 and 4), the effect of PJ34 was further tested in other types of patient-derived pancreas cancer cells. Ascites/pleural-effusion-derived cells were prepared from 4 different deceased pancreas cancer patients at the Sheba Medical Center (Methods). These cells did not include PANC1 cells. Samples of the effusion fluid were injected subcutaneously into nude mice, and cell cultures were prepared from the developed tumors. PJ34 (15 and 30 μM), added to these cell cultures 24 hours after seeding, was tested for its effect on cell viability over 24, 48, 72 and 96 hours of incubation. The effect of PJ34 was measured by the sulforhodamine B (SRB) cytotoxicity assay (Methods). PJ34 dose-dependently reduced the cell counts in the four different patient-derived cell cultures (Figure 5A). Next, PJ34 was tested in xenografts developed after injecting 8 nude mice with the ascites/pleural effusion of pancreas cancer patient #1. Once the pancreas tumors reached a volume of about 100 mm^3, treatment with PJ34 started. Mice were injected intraperitoneally (IP) with PJ34 daily, 5 days a week for 3 weeks (60 mg/kg PJ34, about 1 mg PJ34 in 100 μl saline per mouse).
Untreated control nude mice were similarly injected IP with saline. The study ended 5 days after terminating the treatment with PJ34. Both control and PJ34-treated mice were euthanized, and their excised tumors were measured and prepared for immunohistochemistry (Methods). The human kinesin HSET/KifC1, HLA and human Ku-80 were specifically immunolabeled in all the tumors. Reductions of about 90% in the amounts of HSET/KifC1 (p = 0.00067) and Ku-80 (p = 0.00002) were measured in tumors developed in mice treated with PJ34 (Figure 5B). This was attributed to the eradication of the patient-derived pancreas cancer cells in the xenografts. Notably, treatment with PJ34 caused a similar reduction in HSET/KifC1 and Ku-80 labeling in the PANC1 xenografts and in the xenografts of pancreas cancer patient #1 (Figures 4E and 5B).
The Environmental Protection Agency has reported that in 2006 the data center industry used 1.5 percent of total nationwide energy. Experts agree that this usage will soon top 2 percent. As such, savings related to data center efficiency could run into the millions for companies with large data centers, such as Facebook.

UT Arlington's Dereje Agonafer is part of a cooperative research center whose focus will be finding more efficient and greener ways to run giant data centers. Agonafer is a mechanical and aerospace engineering professor. The University of Texas at Arlington is joining Binghamton and Villanova universities in forming the Industry/University Cooperative Research Center in Energy-Efficient Electronic Systems. Binghamton will serve as the consortium's main center, but each campus center will focus on different challenges in attaining energy efficiency.

Facebook is one of 15 companies signed up to be consortium members. Other firms include data heavyweights like Microsoft, General Electric, Commscope, Bloomberg, Corning Inc., Endicott Interconnect Technologies, Emerson Network Power, Verizon and Comcast. The consortium will focus initially on data centers. Energy spent on running data and telecommunications centers in the United States is about 3 percent of the total national energy expenditure, enough to power a couple of good-sized cities for most of a year.

Facebook is pledging $50,000 in the first year of Agonafer's research. That pledge is renewable for up to five years. Agonafer said his focus would be finding better ways to cool data centers, making airflow more economical, creating sustainability savings, and determining the effects of airborne contaminants on data center equipment.

"Working with these businesses gives us leverage into implementing our research activities in the marketplace," Agonafer said. Agonafer said even a small gain in efficiency could translate to millions of dollars in savings because these companies' computing centers are so large. "Those are some big names in data," Agonafer said. "We've had a longtime relationship with Commscope. Facebook thought enough of our research to contribute. Other companies could follow suit."

He said one aspect that attracted the National Science Foundation to UT Arlington is that the University has all the components needed for this research. He said UT Arlington has an electronic cooling lab, a nanofab facility, the Automation Robotics & Research Institute, a manufacturing assistance center and an aerodynamics research center. He said all of those UT Arlington components could play a part in meeting some of the energy-efficiency challenges.

Veerendra Mulay, a consortium member from Facebook, said, "The consortium will play a key role in addressing cooling design issues in the dynamic data center business." The consortium will address energy-efficiency problems across many disciplines, said Bahgat Sammakia, interim vice president for research at Binghamton University and the center's director. "The center will provide the kind of answers that leaders in the electronics industry are looking for," Sammakia said. "Each of the center's academic partners has expertise in a particular area and by tapping into these individual strengths, we will collectively find the answers to some of the industry's most challenging practical problems."
Oracle ASM Cluster File System (ACFS) replication is a new feature in Oracle 11.2.0.2 on Linux; as its name suggests, it allows you to replicate files from one host to another. The terminology is reminiscent of Oracle Data Guard, with a primary file system and a standby file system, and with replication logs recording the changes. The replication logs are shipped across the network from the primary to the standby, where the changes are applied.

When considering ACFS replication, your most important concern is probably file system space usage. Replication logs take up space in the file system being replicated; if you cannot write a replication log, you cannot complete a change on the file system, so running out of space in this setup can cause major problems. The standby must also not run out of space, because that would stop the application of changes on the standby and cause replication logs to pile up on the primary: logs are deleted from both sites only once they have been applied. You must also consider the network bandwidth to the standby, because slow transfer of logs will mean that the standby falls out of date and logs remain on the primary longer than needed. Finally, the standby needs to be up to the job of applying the logs as fast as the primary generates them, so a significantly lower-powered standby machine is not a good idea.

Because the rate of change is so important when sizing the file system, network connection, and standby, Oracle has provided a way to measure it with the acfsutil info fs command. (Note that the 5 in the command below is the reporting period in seconds; the command will keep printing statistics every 5 seconds until you break out of it.)

[grid]$ /sbin/acfsutil info fs -s 5 /u02/app/oracle/acfsdbhome1/
/u02/app/oracle/acfsdbhome1/
amount of change since mount: 9851.31 MB
average rate of change since mount: 154 KB/s
amount of change: 0.50 MB
rate of change: 102 KB/s
amount of change: 3.00 MB
rate of change: 612 KB/s
amount of change: 1.00 MB
rate of change: 204 KB/s

We will not cover here the calculations needed to make sure you have enough resources to support replication; these can be found in the Oracle Automatic Storage Management Administrator's Guide. Following are other considerations for ACFS replication:
- A server can be both a replication primary and standby system for different file systems.
- Each primary file system can have only one standby file system.
- You can replicate a file system only if it is mounted by eight or fewer nodes.
- Replication works on single-instance and clustered systems.
- ACFS tagging can be used to replicate only certain files within a file system, rather than the whole file system.
- File systems that are encrypted or under ACFS security control cannot be replicated.
- The standby file system must run on a server with the same operating system and Oracle software distribution as the primary.
- The files on the standby are always available for reading, but they may change at any time.
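Since sizing decisions hinge on the observed rate of change, it can help to turn the sampled output above into a rough daily projection. Here is a minimal Python sketch (not an Oracle tool; it assumes output lines shaped exactly like the example above, and the 1.3 safety factor is an arbitrary illustrative choice):

import re

def project_daily_change_gb(acfsutil_output, safety_factor=1.3):
    # Collect every sampled "rate of change: N KB/s" line; the MULTILINE
    # anchor skips the "average rate of change since mount" line.
    rates = [float(m) for m in re.findall(
        r"^rate of change:\s+([\d.]+)\s+KB/s", acfsutil_output, re.MULTILINE)]
    avg_kb_per_s = sum(rates) / len(rates)
    # Project one day of change, convert KB to GB, then pad with the factor.
    return avg_kb_per_s * 86400 / (1024 * 1024) * safety_factor

Fed a capture of the samples shown above (roughly 300 KB/s on average), this projects on the order of 30 GB of replication log traffic per day, a figure you can sanity-check against standby capacity and the bandwidth of the link between the sites.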
When it comes to cyber security, AES is one of those acronyms that you see popping up everywhere. That's because it has become the global standard of encryption and it is used to keep a significant amount of our communications safe. The Advanced Encryption Standard (AES) is a fast and secure form of encryption that keeps prying eyes away from our data. We see it in messaging apps like WhatsApp and Signal, programs like VeraCrypt and WinZip, in a range of hardware and in a variety of other technologies that we use all of the time.

Why was AES developed?

The earliest types of encryption were simple, using techniques like changing each letter in a sentence to the one that comes after it in the alphabet. Under this kind of code, the previous sentence becomes:

Uif fbsmjftu uzqft pg fodszqujpo xfsf tjnqmf, vtjoh ufdiojrvft mjlf dibohjoh fbdi mfuufs jo b tfoufodf up uif pof uibu dpnft bgufs ju jo uif bmqibcfu.

As you can see, this simple code makes it completely unreadable. Despite the initial unreadability, if you had the time and knew it was a code and not just a bunch of characters spewed onto the page, it wouldn't be too difficult to eventually figure out. As people got better at cracking codes, the encryption had to become more sophisticated so that the messages could be kept secret. This arms race of coming up with more sophisticated methods while others poured their efforts into breaking them led to increasingly complicated techniques, such as the Enigma machine, whose earliest designs can be traced back to a patent from the German inventor Arthur Scherbius in 1918.

The rise of electronic communication has also been a boon for encryption. In the 1970s, the US National Bureau of Standards (NBS) began searching for a standard means of encrypting sensitive government information. The result of their search was to adopt a symmetric key algorithm developed at IBM, which is now called the Data Encryption Standard (DES). The DES served its purpose relatively well for the next couple of decades, but in the nineties some security concerns began to pop up. The DES only has a 56-bit key (compared to the maximum of 256 bits in AES, but we'll get to that later), so as technology and cracking methods improved, attacks against it started to become more practical. The first DES-encrypted message to be broken open was in 1997, by the DESCHALL Project in an RSA Security-sponsored competition. The next year, the Electronic Frontier Foundation (EFF) built a DES cracker that could brute force a key in just over two days. In 1999, the EFF and the internet's first computing collective, distributed.net, collaborated to get that time down to under 24 hours.

Although these attacks were costly and impractical to mount, they began to show that the DES's reign as the go-to encryption standard was coming to an end. With computing power exponentially increasing according to Moore's law, it was only a matter of time until the DES could no longer be relied on. The US government set out on a five-year mission to evaluate a variety of different encryption methods in order to find a new standard that would be secure. The National Institute of Standards and Technology (NIST) announced that it had finally made its selection in late 2001. Their choice was a specific subset of the Rijndael block cipher, with a fixed block size of 128 bits and key sizes of 128, 192 and 256 bits. It was developed by Joan Daemen and Vincent Rijmen, two cryptographers from Belgium. In May of 2002, AES was approved to become the US federal standard and quickly became the standard encryption algorithm for the rest of the world as well.
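To get a feel for why a 56-bit key stopped being adequate, it helps to put numbers on the keyspace. A quick back-of-the-envelope sketch in Python (the cracking rate is an illustrative assumption, roughly the order of magnitude of the EFF's 1998 machine):

keyspace = 2 ** 56                  # about 7.2e16 possible DES keys
keys_per_second = 90_000_000_000    # assumed brute-force rate (~9e10 keys/s)

worst_case_days = keyspace / keys_per_second / 86400
print(f"Full keyspace sweep: {worst_case_days:.1f} days")    # roughly 9 days
print(f"Average time to hit: {worst_case_days / 2:.1f} days")

By the same arithmetic, a 128-bit key has 2^72 times as many possibilities as a 56-bit one, which is why brute force is not a realistic threat against even the smallest AES key size.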
Why was this cipher chosen for AES?

With any kind of encryption, there are always trade-offs. You could easily have a standard that was exponentially more secure than AES, but it would take too long to encrypt and decrypt to be of any practical use. In the end, the Rijndael block cipher was chosen by NIST for its all-around abilities, including its performance on both hardware and software, its ease of implementation and its level of security.

How does AES work?

Be aware that the following example is a simplification, but it gives you a general idea of how AES works. Unfortunately, there isn't enough coffee in the world to make most people want to get through the more complicated aspects of AES. Normally, the process is performed in binary and there's a lot more maths.

First, the data is divided into blocks. Under this method of encryption, the first thing that happens is that your plaintext (which is the information that you want to be encrypted) is separated into blocks. The block size of AES is 128 bits, so it separates the data into four-by-four grids of sixteen bytes (there are eight bits in a byte and 16 x 8 = 128). If your message was "buy me some potato chips please", the first block holds "buy me some pota", filled in column by column:

b m o p
u e m o
y _ e t
_ s _ a

(The underscores stand in for the spaces.) We'll skip the rest of the message for this example and just focus on what happens to the first block as it is encrypted. The "…to chips please" would normally just be added to the next block.

Key expansion

Key expansion involves taking the initial key and using it to come up with a series of other keys for each round of the encryption process. These new 128-bit round keys are derived with Rijndael's key schedule, which is essentially a simple and fast way to produce new keys. If the initial key was "keys are boring1", the key schedule would expand it into a series of round keys, each of which looks like a random jumble of characters. Although they look random, each of these keys is derived from a structured process when AES encryption is actually applied. We'll come back to what these round keys are used for later on.

Add round key

In this step, because it is the first round, our initial key is added to the block of our message. This is done with an XOR cipher, which is an additive encryption algorithm. While it looks like you can't actually add these things together, be aware that it is actually done in binary. The characters are just a stand-in to try and make things easier to understand. This mathematical operation leaves us with a new, scrambled block of bytes.

Byte substitution

In this step, each byte is substituted according to a predetermined table. This is kind of like the example from the start of the article, where the sentence was coded by changing each letter to the one that comes after it in the alphabet (hello becomes ifmmp). This system is a little bit more complicated and doesn't necessarily have any logic to it. Instead, there is an established table that can be looked up by the algorithm, which says, for example, that h3 becomes jb, s8 becomes 9f, dj becomes 62 and so on. After this step, the table leaves us with yet another block of seemingly unrelated characters.

Shift rows

Shift rows is a straightforward name, and this step is essentially what you would expect. The second row is moved one space to the left, the third row is moved two spaces to the left, and the fourth row is moved three spaces to the left, with bytes that fall off one end wrapping around to the other. The rows of the block are now cycled out of their original positions.
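To make the last two steps concrete, here is a small Python sketch of how the 4x4 state and the row shifts can be modelled (illustrative only; real implementations work on raw bytes in binary, not printable characters):

# Lay out "buy me some pota" in a 4x4 grid, filling column by column.
msg = "buy me some pota"
state = [[msg[col * 4 + row] for col in range(4)] for row in range(4)]

def shift_rows(state):
    # Row 0 stays put; row n is rotated n positions to the left, wrapping around.
    return [row[i:] + row[:i] for i, row in enumerate(state)]

for row in shift_rows(state):
    print(row)

The byte substitution step would be modelled in the same spirit, as a lookup into a fixed 256-entry table (the S-box).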
Mix columns

This step is a little tricky to explain. To cut out most of the maths and simplify things, let's just say that each column has a mathematical equation applied to it in order to further diffuse it, leaving us with yet another transformed block.

Add round key (again)

Remember those round keys we made at the start, using our initial key and Rijndael's key schedule? Well, this is where we start to use them. We take the result of our mixed columns and add the first round key that we derived, scrambling the block once more.

Many more rounds…

If you thought that was it, we're not even close. After the last round key was added, it goes back to the byte substitution stage, where each value is changed according to a predetermined table. Once that's done, it's back to shift rows and moving each row to the left by one, two or three spaces. Then it goes through the mix columns equation again. After that, another round key is added.

It doesn't stop there either. At the start, it was mentioned that AES has key sizes of either 128, 192 or 256 bits. When a 128-bit key is used, there are nine of these rounds. When a 192-bit key is used, there are 11. When a 256-bit key is used, there are 13. So the data goes through the byte substitution, shift rows, mix columns and round key steps up to thirteen times each, being altered at every stage. After these nine, 11 or 13 rounds, there is one additional round in which the data is processed only by the byte substitution, shift rows and add round key steps, but not the mix columns step. The mix columns step is taken out because at this stage it would just be eating up processing power without adding to the security, which would make the encryption method less efficient.

To make things clearer, the entire AES encryption process goes:

Key expansion
Add round key
Byte substitution
Shift rows
Mix columns
Add round key
(the previous four steps repeat 9, 11 or 13 times, depending on whether the key is 128, 192 or 256-bit)
Byte substitution
Shift rows
Add round key

Once the data has gone through this complex process, your original "buy me some potato chips please" comes out looking something like "ok23b8a0i3j 293uivnfqf98vs87a". It seems like a completely random string of characters, but as you can see from these examples, it is actually the result of many different mathematical operations being applied to it again and again.

What's the point of each of these steps?

A lot of things happen when our data is encrypted and it's important to understand why. Key expansion is a critical step, because it gives us our keys for the later rounds. Otherwise, the same key would be added in each round, which would make AES easier to crack. In the first round, the initial key is added in order to begin the alteration of the plain text.

The byte substitution step, where each of the data points is changed according to a predetermined table, also performs an essential role. It alters the data in a non-linear way, in order to apply confusion to the information. Confusion is a process that helps to hide the relationship between the encrypted data and the original message. Shift rows is also critical, performing what is known as diffusion. In cryptography, diffusion essentially means to transpose the data to add complication. By shifting the rows, the data is moved from its original position, further helping to obscure it. Mix columns acts in a similar way, altering the data vertically rather than horizontally. At the end of a round, a new round key that was derived from the initial key is added. This adds greater confusion to the data.
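In practice you never implement these rounds yourself; mainstream crypto libraries wrap all of the internals up for you. As an illustration, here is a minimal sketch using Python's third-party cryptography package (an assumed dependency; CBC mode and the PKCS#7-style padding shown here are one common set of choices, not the only ones):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)   # a 128-bit key, so 10 rounds run internally
iv = os.urandom(16)    # CBC mode needs a random 16-byte IV
message = b"buy me some potato chips please"

# AES works on whole 16-byte blocks, so pad the message first.
pad = 16 - len(message) % 16
padded = message + bytes([pad]) * pad

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
recovered = decryptor.update(ciphertext) + decryptor.finalize()
print(recovered[:-recovered[-1]])   # strip the padding again

Anyone holding the same key and IV gets the original message back; anyone without the key sees only the scrambled block, exactly as in the walkthrough above.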
Why are there so many rounds?

The processes of adding round keys, byte substitution, shifting rows and mixing columns alter the data, but the result can still be cracked by cryptanalysis, which is a way of studying the cryptographic algorithm in order to break it. Shortcut attacks are one of the key threats. These are attacks that can crack the encryption with less effort than brute-forcing. When AES was being designed, shortcut attacks were found for up to six rounds of its process. Because of this, an extra four rounds were added to the minimum, 128-bit AES, as a security margin. The resulting 10 rounds give the encryption method enough legroom to prevent shortcut attacks under today's techniques and technology.

Why don't we add more rounds to beef up the security?

With most things in security, there needs to be a compromise between pure defensive strength, usability, and performance. If you put ten steel doors with deadbolts at each of the entry points to your house, it would surely make it more secure. It would also take an unreasonable amount of time to get in and out, which is why we never see anyone do it. It's the same when it comes to encryption. We could make it more secure by adding more rounds, but it would also be slower and much less efficient. The 10, 12 and 14 rounds of AES have been settled on because they provide a good compromise between these competing aspects, at least in the current technological landscape.

If you've managed to get your head around the encryption process explained above, decryption is relatively simple. To go from the ciphertext back to the plaintext of the original message, everything is done in reverse. If we start with our encrypted result of "ok23b8a0i3j 293uivnfqf98vs87a" and apply the inverse of each encryption step, it starts with the inverse round key, then the inverse shift rows, and the inverse byte substitution, before going into the inverse of the 9, 11 or 13 rounds. It looks like this:

Inverse add round key
Inverse shift rows
Inverse byte substitution
(then, 9, 11 or 13 times, depending on whether the key is 128, 192 or 256-bit:)
Inverse add round key
Inverse mix columns
Inverse shift rows
Inverse byte substitution
Inverse add round key

After this decryption process, we end up with our original message again: "buy me some potato chips please"

128 vs 192 vs 256-bit AES

AES has three different key lengths. The main difference is the number of rounds that the data goes through in the encryption process: 10, 12 and 14 respectively. In essence, 192-bit and 256-bit keys provide a greater security margin than 128-bit. In the current technological landscape, 128-bit AES is enough for most practical purposes. Highly sensitive data handled by those with an extreme threat level, such as TOP SECRET documents controlled by the military, should probably be processed with either 192 or 256-bit AES. If you are paranoid, you might prefer using 192 or 256-bit encryption wherever possible. This is fine if it makes it easier for you to sleep at night, but it's really not necessary in most situations. It's not without its costs either, with the extra four rounds of 256-bit encryption making it about 40 percent less efficient.

AES security issues

Cryptographers are constantly probing AES for weaknesses, trying to come up with new techniques and harnessing the technology that comes their way. This is essential, because if it wasn't being thoroughly tested by academics, then criminals or nation states could eventually find a way to crack it without the rest of the world knowing.
So far, researchers have only uncovered theoretical breaks and side-channel attacks.

Related-key attacks

In 2009, a series of related-key attacks were discovered. These are a type of cryptanalysis that involves observing how a cipher operates under different keys. The related-key attacks that researchers discovered aren't of any great concern; they are only possible against protocols that aren't implemented properly.

Known-key distinguishing attack

Again in 2009, there was a known-key distinguishing attack against an eight-round version of AES-128. These attacks use a key that is already known in order to figure out the inherent structure of the cipher. As this attack was only against an eight-round version, it isn't too much to worry about for everyday users of AES-128. There have been several other theoretical attacks, but under current technology they would still take billions of years to succeed. This means that AES itself is essentially unbreakable at the moment.

Side-channel attacks

Despite this, AES can still be vulnerable if it hasn't been implemented properly, in what's known as a side-channel attack. Side-channel attacks occur when a system is leaking information. The attacker listens in to the sound, the timing information, the electromagnetic emissions or the power consumption in order to gather inferences about the algorithm, which can then be used to break it. If AES is implemented carefully, these attacks can be prevented by either removing the source of the data leak, or by ensuring that there is no apparent relationship between the leaked data and the algorithmic processes.

The last weakness is more general than AES-specific, but users need to be aware that AES doesn't automatically make their data safe. Even AES-256 is vulnerable if an attacker can access a user's key. This is why AES is just one aspect of keeping data secure. Effective password management, firewalls, virus detection and education against social engineering attacks are just as critical in their own ways.

Is AES enough?

In an age when we all transmit so much of our sensitive data online, AES has become an essential part of our security. Although it's been around since 2001, its repetitive process of adding keys, byte substitution, shifting rows and mixing columns has proved to stand the test of time. Despite the current theoretical attacks and any potential side-channel attacks, AES itself remains secure. It's an excellent standard for securing our electronic communication and can be applied in many situations where sensitive information needs to be protected. Judging by the current level of technology and attack techniques, you should feel confident using it well into the foreseeable future.

Why do we need encryption?

Now that we've gone through the technical details of AES, it's important to discuss why encryption matters. At its most basic level, encryption allows us to encode information so that only those who have access to the key can decrypt the data. Without the key, it looks like gibberish. With the key, the jumble of seemingly random characters turns back into its original message. Encryption has been used by governments and militaries for millennia to keep sensitive information from falling into the wrong hands. Over the years it has crept more and more into everyday life, especially since such a large portion of our personal, social and work dealings have now migrated to the online world. Just think about all of the data you enter into your devices: passwords, bank details, your private messages and much more.
Without any kind of encryption, this information would be much easier for anyone to intercept, whether they be criminals, crazy stalkers or the government. So much of our information is valuable or sensitive that it clearly needs to be protected in a way that lets only ourselves, and those we authorize, access it. That's why we need encryption. Without it, the online world just couldn't function. We would be stripped completely of any privacy and security, sending our online lives into absolute chaos.
In the physical world, "living off the land" simply means surviving only on the resources that you can harvest from the land. There may be multiple reasons for doing this: perhaps you want to get "off the grid," or maybe you have something or someone to hide from. Or maybe you just like the challenge of being self-sufficient. In the technology world, "living off the land" (LotL) refers to attacker behavior that uses tools or features that already exist in the target environment. In this multi-part blog series, we'll explore why attackers use LotL, review a selection of the tools and features they use, and discuss examples of actual LotL attacks. We'll also provide some guidance for detecting and preventing some of the commonly used approaches.

Why Attackers Live off the Land

Let's start with why attackers use tools that already exist in the environment to plan and execute an attack. Attackers may be motivated by one or many of the following reasons:

Fly Under the Radar/Avoid Detection

Attackers may choose to fly under the radar of either prevention or detection technologies. Typically, prevention technologies will use a signature-based approach to detect and quarantine malicious processes. They may also use hash values or other indicators of compromise (IOCs) to detect a process. While attackers can change IOCs relatively easily (see the Pyramid of Pain), using pre-existing software avoids the process being flagged as suspicious. It also saves the attacker cycles in developing a binary to deliver an attack.

The Pyramid of Pain
Figure 1: The Pyramid of Pain represents the difficulty level for attackers to change indicators that a defender might use to detect their activity.

Use Power Tools Already Embedded in Operating Systems

Operating systems typically carry tooling for automating and scripting administrative activities. Windows PowerShell is a good example: every Windows OS since November 2006 includes PowerShell. This makes it a pervasive tool in a typical enterprise environment. These tools typically provide easy access to both local and domain-based configuration. For example, with PowerShell, you can configure anything from Active Directory objects to local raw disks.

Tooling Can Be Difficult to Develop and Distribute

Typically, an attacker will scope out a target, but he or she may not know the entire environment the tools will operate in. This creates some hurdles for building, compiling, and testing programs. Such tools need to allow for a variety of operating systems and environments, and it may be difficult, if not impossible, to test for every possible scenario.

Why Attackers Use Existing Tools to Execute Attacks

Attackers that use existing tooling avoid the need to build, test, and QA their own tools. They don't have to worry about compatibility, dependencies, and so forth. It's also challenging to build programs that are stealthy enough to avoid detection, particularly if something runs at kernel level. Ultimately, it's probably cheaper and quicker to use existing tooling. From an attacker's perspective, using already existing tools and features makes the defender's job inherently more difficult. Picking out the malicious use of built-in tools versus the authorized use of tools by a system administrator can be somewhat like looking for a needle in a haystack. Interested in learning more about how to combat LotL attacks?
Look for upcoming blogs that will surface some techniques you can use to reduce the size of the haystack and help find those needles. One of the interesting things about combating LotL attacks is that you can begin to see the tactics, techniques, and procedures (TTPs) that a particular attacker uses. Once you can detect TTPs, changing them becomes a much bigger challenge for the attacker than simply changing a hash value to avoid detection. In our next blog post, we'll take a look at Windows Scheduled Tasks through the lens of living off the land. It's a pervasive tool in Windows operating systems, and a favorite among threat actors.
The increased development and spread of information technology and the internet have led to the creation of distinct ways of communicating in virtual environments. Cognitive interfaces provide a new form of interaction between humans and systems. Graphical user interfaces are based on navigation systems that use hypertext, and on selection via buttons and menus. However, humans prefer to use natural language as a medium of communication, so research has been done on the development of natural-language interfaces. The use of chatterbots, systems designed to simulate a conversation with humans, requires a way to model dialogue, and AIML (Artificial Intelligence Markup Language) was created for this purpose. In this article we will learn more about the AIML language, how it works, where it is used, and more.

AIML was created by Richard Wallace, in collaboration with software developer communities, between 1995 and 2000, based on the concept of pattern recognition and pattern-matching techniques. It is applied to natural language modelling for dialogue between humans and chatbots that follow a stimulus-response approach. A set of possible user inputs is modelled, and for each stimulus a pre-programmed answer is built to be displayed to the user. The ALICE (Artificial Linguistic Internet Computer Entity) chatbot was the first to use the AIML language and interpreter. In ALICE, the AIML technology was responsible for pattern matching and for relating user input to responses in the chatbot knowledge base (KB).

The purpose of the AIML language is to make the task of dialogue modelling easier, following the stimulus-response approach. It is an XML-based, tag-based markup language. Tags are identifiers that mark code snippets and insert commands into the chatterbot. AIML defines a data object class called AIML which is responsible for modelling patterns of conversation. The general form of an AIML object/command/tag has the following syntax:

<command> ListOfParameters </command>

An AIML command comprises a start tag <command>, a closing tag </command> and text (ListOfParameters) containing the command's parameter list. AIML is an interpreted language, and as such each statement is read, interpreted and executed by a piece of software known as an interpreter.

AIML is based on basic units of dialogue, formed by user input patterns and chatbot responses. These basic units are known as categories, and the set of all categories makes up the chatbot KB. The most notable tags in AIML are: category, pattern and template. The category tag defines the unit of knowledge of the KB, the pattern tag defines a possible user input, and the template tag sets the chatbot response for a specific user input.

The AIML vocabulary comprises words, spaces and the special characters "*" and "_", which are wildcards. Wildcards are used to replace a string (a word or sentence). The AIML interpreter gives higher priority to categories whose patterns use the wildcards "_" and "*", and these are analysed first. Each object/tag has to follow the XML standard, hence an object name cannot start with a number and blanks are not allowed. Each AIML file begins with an <aiml> tag and is closed by an </aiml> tag. This tag contains the version and encoding attributes. The version attribute identifies the AIML version used in the KB. The encoding attribute identifies the type of character encoding that will be used in the document.

Example of AIML code:

<aiml version="1.0.1" encoding="UTF-8">

Basic units of AIML dialogue are called categories. Each category is a fundamental unit of knowledge contained in the chatbot KB.
A category consists of a user input in the form of a sentence, a response to that input presented by the chatbot, and an optional context. A KB written in AIML is formed from a set of categories. The categories are organized by subject and stored in files with the .aiml extension. Category modelling is done using the <category> and </category> tags.

The pattern tag contains a possible user input. In each category there is a single pattern, and it must be the first element to be set. Words are separated by single spaces, and wildcards can replace parts of a sentence. For example:

<pattern> hello Bot </pattern>

The template tag contains possible chatbot answers to the user. It must be within the scope of a <category> tag and placed after the <pattern> tag. Most of the chatbot's information is bound to this element. This tag can save data and activate other programs or responses. Putting the three tags together, a complete category takes, for example, this shape:

<category>
<pattern> hello Bot </pattern>
<template> Hello! Nice to meet you. </template>
</category>

AIML use cases
- Virtual agents in the form of chatbots acting as customer service agents that interact with humans and answer their queries
- Deriving meaningful information from digital images, videos and other visual inputs
- Self-driving cars powered by sensors which aid in mapping out the immediate environment of the vehicle
- Enabling emotions expressed by humans to be read and interpreted using advanced image processing or audio data processing
- Space exploration
- Robotic process automation
- Biometric recognition and measurement to foster organic interactions between machines and humans
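To see a category like the one above in action, it can be loaded into an AIML interpreter. Here is a minimal Python sketch using the third-party python-aiml package (an assumed dependency; the file name greeting.aiml is a hypothetical example holding the category shown earlier):

import aiml

kernel = aiml.Kernel()
kernel.learn("greeting.aiml")        # parse the AIML file and build the pattern set
print(kernel.respond("hello bot"))   # matches the pattern, returns the template text

The interpreter performs the pattern matching described above: the input is normalized, matched against the stored patterns (with wildcard categories taking priority), and the corresponding template is returned as the chatbot's reply.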
From K12 to higher education, students learn best in environments that foster engagement, interaction, and creativity. Effective tech integration in the classroom can change student-teacher dynamics and encourage student-centered learning. One type of technology in particular can elevate classroom dynamics and transform the way students learn. Let's talk about touchscreens and why they're the perfect addition to your classroom environment.

- Support Active Learning
Touchscreens are a great way to bring greater inclusivity to the classroom and make lessons more interactive by connecting with multiple learning styles. Teachers can sometimes struggle to create lessons that cater to all three learning styles: visual, kinesthetic, and auditory. With interactive touchscreen technology, teachers can cater to all learning styles by incorporating audio, video, and written text into lessons. Consequently, classes become less about regurgitating information and more about fostering collaboration, discussion, and critical thinking.

- Open the Door for Interactive Learning
Touchscreens enable you to incorporate various types of content into your lesson plans. Integrating touchscreens in the classroom enables teachers to facilitate learning in groups while creating space for students to develop practical skills individually. Collaboration is brought to a whole new level, as it's easier for students to work in pairs or group settings when interactive panels are in place.

- Boost Student Engagement and Collaboration
Adding a social element to a classroom activity can really boost learning value. Classes that leverage touchscreen displays create a learning environment where students pay more attention and are more positive, resulting in gains in achievement, participation, and cooperation. Touchscreen technology is inherently immersive, allowing students to participate in their lessons and engage in exploration for longer periods.

- Improve Learning Results
There's a strong correlation between student engagement, collaboration, and performance. Touchscreens make lessons more fun and captivating, allowing students to learn more and learn better. They facilitate countless learning adventures and enhance students' overall attitude towards learning in the classroom.

- Educators Can Explore Their Creativity
Students aren't the only ones who benefit from having smart board technology in the classroom. The ultimate benefit of touchscreen technology lies in how versatile it is overall. Teachers can tailor learning programs to their individual styles and showcase their creativity! There's also the added perk of boosting the morale and performance of faculty members.

Bring the Future Classroom to Life!
Make school fun, engaging, and interactive for your students. Ask us about our OneScreen Touchscreen solutions. Let us show you how they can enhance collaboration in your classroom!
With the advent of the digital age, what was unimaginable just a few decades ago is now possible. Reaching a single person, or distributing data, over long distances would easily have taken weeks or months a century ago. In today's global village, it's a matter of seconds, if that. We are able to reach and communicate with every part of the globe where the Internet has set foot. This global reach is what further propelled the last decades of global interconnectivity.

Globally connected organizations are the norm nowadays. A global company easily communicates and distributes data between HQ and branch offices, coordinates employees and holds video conferences, and informs and satisfies buyers and suppliers, no matter where they are located. By setting up its own Wide Area Network, an organization holds control over its own "internal internet". WANs enable organizations to communicate and relay data effectively, regardless of location. As more and more critical data moves across those networks, the speed and security aspects, along with the associated costs, have become a growing issue. In order to keep their network's perimeter safe, IT departments had to build and maintain a secure and high-performing infrastructure. Specialised hardware and personnel are required for that. Here lies the problem.

The MPLS and its Ups and Downs

The total cost of ownership for keeping hardware and personnel up to date in an ever-evolving threat environment implies significant investments. For a long time, the only logical option for a high-performing WAN was the use of MultiProtocol Label Switching techniques. Simply put, MPLS is the data transfer technique used in high-performing networks. It attaches labels to packets and directs data from one node to the next based on label instructions rather than on network addresses. The labels function as virtual paths between nodes, which avoids complex routing-table lookups. MPLS also incorporates various network protocols, hence the attribute "multiprotocol". It's the best solution in terms of sheer performance. However, there's more to take into account.

As we moved further into the digital age, with increased interconnectivity, cloud services, SaaS, IoT and big data around, the safe perimeter became increasingly expensive, adding complexity to networks and making them harder to maintain. To put it poetically: as businesses and markets evolve, network perimeters dissolve. Fixed locations have given way to mobile users, corporate applications to cloud services, and servers to cloud instances. Legacy WAN architectures based on MPLS do a good job providing predictable performance between offices. However, they were not implemented with the new IT realities in mind. Mobile users connect through VPNs and firewalls, and cloud access goes through the unsecured Internet, not MPLS. On top of it all, users are consuming more and more bandwidth, which is an expensive resource on MPLS networks.

SD-WAN is the Next Logical Step, But…

Enterprises are increasingly demanding more flexible, open, and cloud-based WAN technologies for their users. They want to avoid installing proprietary or specialized WAN technology that often involves expensive fixed circuits or proprietary hardware and subsequent maintenance costs. That's why many have embraced Software-Defined Wide Area Networks (SD-WAN) as the preferred solution to the growing WAN security and cost issues.
SD-WAN brings the ability to handle policy configuration and route calculations through a central SD-WAN controller, rather than treating the network as individual routers and locations. Instead of relying exclusively on private MPLS services, SD-WANs connect branches through any type of data service. That includes Direct Internet Access (DIA) services like xDSL, cable and LTE, as well as MPLS. However, if we only look to replace yesterday's WAN with a more cost-effective and agile WAN, then a simple SD-WAN solution is all that is required. But there are still discrepancies between today's mobile, cloud-centric companies and legacy network architectures. For SD-WAN to provide a real step forward for enterprise networks, a larger, holistic approach is required. A rethinking of high-performing networking with new technologies, security, and costs in mind is the only viable long-term option.

By bringing Software Defined Networking principles to the WAN, SD-WAN can address many of those tactical challenges. SD-WAN nodes use all available information, along with gathered latency and packet-loss statistics, to steer traffic onto the optimal network connection. For example, email replication, file transfers, and other bandwidth-intensive apps may be sent across an Internet path, while sensitive VoIP sessions would be sent through MPLS (or another low-jitter, low-packet-loss Internet path).
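As an illustration of this steering logic, here is a toy Python sketch of how an SD-WAN node might score its available paths per traffic class (purely illustrative: the traffic classes, loss threshold and cost figures are invented for the example, and real products use far richer policy engines):

def pick_path(paths, traffic_class):
    # paths: list of dicts like {"name": "MPLS", "latency_ms": 20, "loss_pct": 0.1, "cost": 9}
    if traffic_class == "voip":
        # Jitter/loss-sensitive traffic: choose the cleanest path regardless of cost.
        return min(paths, key=lambda p: (p["loss_pct"], p["latency_ms"]))
    # Bulk traffic (email replication, file transfer): cheapest acceptable path wins.
    usable = [p for p in paths if p["loss_pct"] < 2.0]
    return min(usable, key=lambda p: p["cost"])

paths = [{"name": "MPLS", "latency_ms": 20, "loss_pct": 0.1, "cost": 9},
         {"name": "DIA",  "latency_ms": 45, "loss_pct": 0.8, "cost": 2}]
print(pick_path(paths, "voip")["name"])   # -> MPLS
print(pick_path(paths, "bulk")["name"])   # -> DIA

The key design point is that the decision is re-evaluated continuously from measured path statistics, not fixed once at provisioning time.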
The Convergence of Security and Networking

Rather than deploying SD-WANs merely to meet IT requirements, CIOs can use this opportunity to rethink and upgrade their WAN to address the root problem: the dissolved perimeter. In its basic philosophy, the WAN must be as simple as possible. More components require more equipment and personnel, but also increase the chance of something going wrong. By creating a single network with one set of policies for all locations, all users (mobile and fixed), and all destinations, CIOs and their teams can build a better and smarter network. They must strive for the performance and predictability of MPLS and the agility, control and cost savings of SD-WANs. By leveraging algorithms, virtual appliances, and cloud capabilities, both security and networking requirements can be met, and even exceeded.

Rising in popularity is the unified Network+Security-as-a-Service (N+SaaS) approach. It moves all security, traffic steering and policy enforcement onto cloud services built on top of a robust managed network backbone. An N+SaaS offering is also what we advocate here at GlobalDots. While SD-WANs are a valuable evolution, N+SaaS takes things a bit further and pushes a broader vision of networking and security. As more and more companies migrate to the cloud, their data and apps are driven by a mobile workforce. A single security framework with fallback options for all users and apps makes overall IT agile and reliable.

Both MPLS and SD-WAN are to be considered in every network setup where high-performing, reliable communication is required. It's the cost and security aspects that stir things up. The discussion of "which is better" becomes trivial when both are considered as components in a broader N+SaaS scheme. Each carries its own set of advantages as well as costs. The convergence of network performance and security is the future, and you should adapt and plan accordingly. The best option is the one you can afford and that keeps your network simple and safe for the future.

In case you want to discuss your N+SaaS options, or simply want to know more about getting the most out of your security and performance options, you can talk to one of our in-house GlobalDots experts. They can help you with anything web performance and security related.
Internet searches and brain patterns

It is widely thought that mental stimulation might improve the brain's ability to function. In a 2009 study, researcher Gary Small and colleagues suggested that simply carrying out internet searches might increase brain functioning too. The researchers looked at a small sample (24 participants) of middle-aged to older, neurologically normal participants, who were all from a similar educational background. Half of the participants had minimal past internet search experience; these people were called the Internet Naive group. The other half were allocated to a group known as the Internet Savvy group, so called because they had extensive past experience of searching on the internet. During the test the participants took part in two conditions: one was to carry out a novel internet search task; the other was a reading activity where text had to be read off a computer screen.

The internet naive group

Participants' brain patterns during both tasks were assessed. The Internet Naive group showed a similar brain pattern during both tests. The areas showing activation when they were completing the novel search task and when they were reading text from a screen were the areas of the brain associated with reading, language and memory, along with certain visual areas.

The internet savvy group

As expected, when the savvy group were reading text from a screen, the findings mimicked those of the Internet Naive group: the areas of the brain activated were again those for reading, language and memory. However, this group, who were familiar with carrying out internet searches, were found to have greater activation in other areas of the brain when it was their turn to do the novel internet search task.

Could internet searching be more stimulating than just reading?

When the researchers looked at which areas were being activated by the savvy group when internet searching, they found that decision-making, complex reasoning and additional visual areas were being activated as well. In fact, there was a twofold greater increase in activation of the major regional clusters in the net savvy group.

A richer sensory experience

Although the researchers acknowledge in their concluding comments that more research needs to be carried out (larger sample sizes and taking account of other lifestyle factors that might affect the results are some suggestions they make), it seems that internet searching may indeed work other areas of the brain. Small says it's interesting to see how computerised technologies may be beneficial to improving brain function. It's exciting to think that internet experience may increase brain responsiveness, and with today's internet-savvy young children we could be looking at a much smarter next generation.
Phishing messages and fake websites for stealing users' credentials are a common occurrence. Recently, however, mobile banking users in China have been facing a new wrinkle: phishing texts that appear to come from a major bank's official number. The GSM standard is not secure because the authentication between mobile phone and network goes in a single direction: the network checks the legitimacy of the client, but the client does not check the network. An attacker can take advantage of this to send mass text messages to mobile devices from a fake base station. For more information, check out the following: https://www.twelvesec.com/?s=fake+base+station

The SMS text messages in question appear to come from the service number of a well-known bank in China. The messages warn that a mobile bank account will become unavailable, and lead the potential victim to fake websites. The bogus site pretends to be the web interface of the bank and "requires" users to input their bank account, password, and mobile phone number to register the mobile phone's banking features. A comparison of the fake interface with the legitimate interface of the bank shows how closely the bogus site imitates the real one. If a victim hands over the bank account, password, and mobile phone number, an attacker can at the very least access the account and steal information, and is much more likely to be able to steal money. (The attacker might not be able to withdraw funds directly, because a one-time password is necessary.)

The key to this threat is that the SMS texts appear to come from the bank's official number. This is an important point, because most people trust messages that appear authentic. Unfortunately, this kind of message can be forged with a fake base station and an SMS mass-sending tool. When a user enters an area where the fake base station's signal is stronger than the real base station's, the fake station will send SMS messages to the user's device. This fake base station could be in a house or in a moving car. In China, buying the equipment to set up a fake station is inexpensive.

Threats vary considerably. In this case, you need to question even official phone numbers, websites, and other apparently authorized sources to avoid being cheated. To check if your device is connecting to a fake base station, try the following:
- Call a provider's service number; for example, if you are a China Mobile user, call 10086 to see if you can reach it.
- Send a text message to your provider's service number and wait for a text message response. For China Mobile, text 10086.
- If you don't mind bothering a friend, text or call to see if you have a legitimate connection.

McAfee Mobile Security detects these malicious text messages as SMS/Smishing.D. Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
Distributed Denial of Service (DDoS) attacks first appeared on the radars of security experts around 1999, when a wave of cyberattacks brought down countless websites, including the resources of major corporations such as CNN, eBay, Amazon, and E-Trade. Many years later, DDoS attacks have not lost their relevance; on the contrary, they are growing more and more destructive. Financial institutions are increasingly becoming the targets of malicious actors, and the financial and commercial losses inflicted by DDoS (lost revenue, customer churn, and hits to reputation) far exceed the operating losses. Research by Boston Consulting Group suggests that companies in the financial market are 300 times more likely to be targeted by DDoS than companies operating in other industries. DDoS, then, is a real and pressing threat for banks. But why are DDoS attacks in this industry that much more prevalent?

Why do DDoS attackers target banks?

With an average attack costing banks up to 1.8 million US dollars, it's easy to see why so many DDoS actors like to operate on the financial market. When a bank's network is flooded, the attack can potentially disable a wide range of resources, including online portals, payment networks, and more. Some threat actors launch attacks to demand ransom. Others use DDoS as a smokescreen to distract security teams while they attempt to steal personal data and banking credentials. Stolen information enables hackers to open fake accounts or even access private funds. Whatever the reason, a successful exploit carries a tremendous hit to the reputation of the financial institution in question, with its infrastructure fully paralyzed and clients left without access to their money for prolonged periods. Realizing this, some hackers target banks purely because they understand the importance of this online infrastructure, which makes cyber-vandalism on this level that much more satisfying.

Amplification attacks and botnets

Since the first half of 2010, amplification has been one of the most widely used types of DDoS attack. An amplification attack starts with a small request sent to a vulnerable server, with the sender's address spoofed to be the victim's; the server then sends its much larger response to the victim's address. The cost required to launch an amplification attack is far lower than what the victim company needs to mitigate it, unless they are using an online security provider. Despite their age, amplification attacks are still common. Notably, a series of DDoS campaigns using this technique, based on the Memcached protocol, swept across Europe in 2018. More recently, the FBI issued a warning against destructive amplification attacks in July 2020.

Botnets are another popular technique used by DDoS threat actors. This involves creating a network of infected devices, usually consisting of Android-based mobile gadgets and IoT devices such as routers and security cameras, and programming them to generate garbage requests to a targeted server. Botnets have been used in some of the most destructive DDoS campaigns in history, such as the series of attacks carried out by the Mirai botnet. Among its victims were some of the leading banks of Holland, including ING, ABN Amro, and Rabobank.
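A quick back-of-the-envelope sketch in Python shows why both techniques are so effective. The figures below are illustrative assumptions, not measurements from any particular incident:

# Amplification: a small spoofed request triggers a much larger response.
request_bytes = 60
response_bytes = 3_000                    # assumes a 50x amplification factor
factor = response_bytes / request_bytes

attacker_uplink_mbps = 100
attack_gbps = attacker_uplink_mbps * factor / 1000
print(f"A {factor:.0f}x factor turns {attacker_uplink_mbps} Mbps of requests "
      f"into ~{attack_gbps:.0f} Gbps at the victim")

# Botnets: many weak devices add up.
bots = 100_000
per_bot_mbps = 1
print(f"{bots:,} bots at {per_bot_mbps} Mbps each add up to "
      f"~{bots * per_bot_mbps / 1000:.0f} Gbps of garbage traffic")

Either way, the resulting traffic quickly exceeds what a single victim network can absorb, which is the point made in the next section.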
Even specialized devices installed in datacenters sometimes prove ineffective. Operator solutions are not a cure-all either: provider networks are not designed for extreme loads and often cannot neutralize high-speed attacks, some of which reach the 1.7 terabits-per-second level. The industry is therefore transitioning from operator and on-site solutions to geo-distributed services specializing in DDoS mitigation, since a distributed threat can be effectively counteracted only by an equally distributed network. The best modern protection is provided by specialized DDoS protection services that operate scrubbing centers in multiple geographical zones, and specifically near the physical location of the client's servers. Broad geographic coverage enables such companies to route and filter malicious traffic at the scrubbing centers, taking the load off the victim's network itself.

The modern financial sector is on the front line of the DDoS war. There are hardly any banks that haven't experienced a denial-of-service attack at least once. The question, then, is not whether a company will be attacked, but when. It is therefore imperative to change our approach to security and incorporate DDoS resistance into the very design of IT infrastructure. Whichever method of protection a financial company chooses, the main thing is to be prepared for attacks in advance.
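To make the economics of amplification concrete, here is a rough, illustrative sketch. The amplification factors below are commonly cited ballpark figures for well-known reflection vectors, not measurements from any specific incident:

```python
# Back-of-the-envelope amplification arithmetic (illustrative only).
# The factors are commonly cited approximations per reflection vector.

AMPLIFICATION_FACTORS = {
    "DNS": 54,           # open resolvers answering "ANY" queries
    "NTP": 556,          # servers with the "monlist" command enabled
    "Memcached": 10000,  # factors up to ~51,000 have been reported
}

def reflected_bandwidth_gbps(attacker_mbps: float, protocol: str) -> float:
    """Traffic (Gbps) arriving at the victim for a given attacker uplink."""
    return attacker_mbps * AMPLIFICATION_FACTORS[protocol] / 1000

for proto in AMPLIFICATION_FACTORS:
    gbps = reflected_bandwidth_gbps(100, proto)  # a single 100 Mbps uplink
    print(f"{proto:9s}: 100 Mbps of spoofed requests -> ~{gbps:,.0f} Gbps at the victim")
```

A single modest uplink reflected through vulnerable Memcached servers lands in the same order of magnitude as the record 1.7 Tbps attacks mentioned above, which is exactly why the attacker's costs stay so far below the defender's.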
<urn:uuid:974af732-9c2a-4f31-aa00-bbea2f83a3af>
CC-MAIN-2022-40
https://cybersecurity-magazine.com/why-should-banks-be-concerned-with-ddos-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00631.warc.gz
en
0.9395
949
2.578125
3
The Tennessee Eastman Process (TEP) model consists of four main units. Gases react exothermically inside the reactor. The products leave the reactor as vapors and are fed into the condenser, then into the vapor-liquid separator. The liquid enters the stripping column, where the fractions are separated. The output consists of two products. This is a chemical manufacturing process, but such units are typical of many industrial environments.

Based on the TEP model, we implemented a mathematical model in Python to simulate the physical processes, as well as control logic for the physical model in the form of a PLC program. To visualize the simulated processes, we implemented a 3D TEP model and linked it with the generated physical model and PLC telemetry data. To control the stand, we developed a dedicated iPad console that can be used to simulate a variety of cyberattack scenarios. The result is a highly realistic chemical production simulator.

The TEP simulator is deployed on a single laptop, which also runs the mathematical model of the Tennessee Eastman Process and its 3D visualization. A Schneider controller is used as the PLC. Using a network switch, a copy of the process traffic of the chemical production simulator is sent to Kaspersky Industrial CyberSecurity for Networks, which parses the traffic and transmits the telemetry values obtained from it to Kaspersky MLAD.

Our TEP simulator has many parameters that we can control, both sensors and commands, totaling approximately 60 tags. Business parameters are also set, which allows us to calculate the enterprise's operating costs on an hourly basis. This helps to assess the overall damage from a hacker attack: an enterprise can suffer financial losses even if an attack does not result in the worst possible outcome (an explosion or some other disaster).

Experimental attacks on the simulator show that Kaspersky MLAD detects anomalies in technological processes in their early stages and can cover a much wider range of connections between industrial signals than a traditional rule-based protection system. In a traditional specialized protection system, rules are often generalized to match different conditions, which slows the triggering of emergency protection. A more finely tuned system based on machine learning responds earlier to anomalous changes in processes.
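Kaspersky MLAD's models are proprietary, but the general idea of correlation-based anomaly detection can be sketched in a few lines. The toy below is an illustrative assumption, not MLAD's algorithm: it learns how one telemetry tag normally depends on the others, then flags samples where that learned relationship breaks:

```python
# Toy sketch of ML-style anomaly detection on process telemetry.
# This is NOT Kaspersky MLAD's algorithm; it only illustrates modeling
# correlations between industrial signals instead of per-tag thresholds.

import numpy as np

def fit_linear_model(X, y):
    """Least-squares weights predicting one tag from the other tags."""
    X1 = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

def residuals(X, y, w):
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return y - X1 @ w

rng = np.random.default_rng(0)
tags = rng.normal(size=(1000, 10))   # rows = time steps, cols = 10 of ~60 tags
tags[:, 0] = 0.1 * tags[:, 1:].sum(axis=1) + rng.normal(scale=0.01, size=1000)

w = fit_linear_model(tags[:, 1:], tags[:, 0])   # learn normal behavior
base = residuals(tags[:, 1:], tags[:, 0], w)
mu, sigma = base.mean(), base.std()             # calibrate on training data

def count_alarms(X, y, threshold=4.0):
    z = np.abs(residuals(X, y, w) - mu) / (sigma + 1e-9)
    return int((z > threshold).sum())

print("alarms on normal data:", count_alarms(tags[:, 1:], tags[:, 0]))

attacked = tags.copy()
attacked[500:, 0] += 0.5   # subtly spoofed sensor, still within its normal range
print("alarms during attack:", count_alarms(attacked[:, 1:], attacked[:, 0]))
```

A fixed per-tag threshold might never fire here, since the spoofed value stays within its usual range; it is the broken correlation with the other tags that is caught early, which is the advantage described above.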
<urn:uuid:f6c726f1-f193-43f4-a237-490b4f7a5527>
CC-MAIN-2022-40
https://mlad.kaspersky.com/tennessee-eastman-process-stand/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00631.warc.gz
en
0.918043
454
2.703125
3
August 6, 2020 | Written by: Vittorio Caggiano and Avner Abrami
Categorized: AI | Healthcare

In a paper recently published in Nature Scientific Reports, IBM Research and scientists from several other medical institutions developed a new way to estimate the severity of a person's Parkinson's disease (PD) symptoms by remotely measuring and analyzing physical activity as motor impairment increases. Using data captured by wrist-worn accelerometers, we created statistical representations of PD patients' motor movement that can be objectively evaluated using AI, either in-clinic or in a more natural setting, such as a patient's home.

The human motor system typically relies on a series of stereotyped units of movement to perform a given task, such as arm swinging while walking. These discrete movements and the transitions linking them create a pattern of physical activity that can be measured and analyzed for signs of PD, an incurable neurodegenerative disease estimated to affect nearly one million people this year in the U.S. alone [1]. Measurements taken from PD patients deviate from those found in non-patients, and growth in those deviations marks the disease's progression over time.

Existing approaches for evaluating patients for PD are limited. Doctors typically evaluate patients once or twice a year in a supervised clinical setting [2] and make subjective assessments based on a standardized rating scale, known as the Movement Disorder Society's Unified Parkinson's Disease Rating Scale (MDS-UPDRS) [3]. Such examinations tend to rely on patient-reported information that, combined with physicians' interpretations of motor impairment (evaluated in MDS-UPDRS Part III), could potentially lead to biased results.

In our study, conducted jointly with the Pfizer Innovation Research Lab, Boston University's Spivack Center for Clinical and Translational Neuroscience, and Tufts Medical Center's Department of Neurology, we demonstrated an unsupervised technique that can be used on PD patients in their homes or in a doctor's office to generate objective measurements of movement quality. Continuous signals from wearables were transformed into a sequence of "syllables" (see Figure 1). Those sequences, common across healthy subjects, are part of our learned motor repertoire, with subsequences shared across different actions. The derived action-independent statistical distribution of ordered transitions between syllables was a signature of healthy behavior.

Figure 1: Arm swinging during walking can be converted to a sequence of discrete movements recorded at the wrist.

Disorganized sequences of symbols were observed in Parkinsonian patients (Figure 2). By means of AI, we were able to estimate both the gait impairment (PIGD, color-coded in blue/green in Figure 2) and the overall severity of Parkinson's symptoms (MDS-UPDRS Part III, color-coded in pink/yellow in Figure 2) by capturing increasingly disorganized transitions between movements as motor impairment increases.

Figure 2: Discrete movements in a mild participant (on drug and off drug) and in a severe participant. Upper row: sequence of movements over time during a walking task. Lower row: estimation of postural instability and gait disturbance (PIGD) and overall disease state (total MDS-UPDRS Part III) by the model vs. neurologist assessment.
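The published pipeline is more sophisticated than can be shown here, but the core idea, discretizing wearable signals into movement "syllables" and comparing transition statistics, can be sketched as a toy. Everything below (window length, number of clusters, features, the synthetic signals) is an illustrative assumption, not the paper's method:

```python
# Toy sketch of the "movement syllables" idea, NOT the published model:
# cluster short windows of a wrist-accelerometer trace into discrete
# symbols, then summarize behavior as a distribution over transitions.
# More disorganized transitions show up as higher entropy.

import numpy as np
from sklearn.cluster import KMeans

def to_syllables(signal, window=50, k=8):
    """Slice a 1-D trace into fixed windows and label each with k-means."""
    n = len(signal) // window
    windows = signal[: n * window].reshape(n, window)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(windows)

def transition_distribution(labels, k=8):
    """Empirical distribution over ordered syllable transitions."""
    T = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1
    return T / max(T.sum(), 1)

t = np.linspace(0, 60, 3000)
regular = np.sin(2 * np.pi * t)                    # steady, stereotyped swing
erratic = np.sin(2 * np.pi * t) + 0.8 * np.random.default_rng(0).normal(size=t.size)

for name, sig in [("regular", regular), ("erratic", erratic)]:
    T = transition_distribution(to_syllables(sig))
    entropy = -(T[T > 0] * np.log(T[T > 0])).sum()
    print(f"{name} trace: transition entropy = {entropy:.2f}")
```

Roughly speaking, the study fed richer transition statistics of this kind to models that estimate clinical scores such as PIGD and total MDS-UPDRS Part III.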
Our Nature Scientific Reports study was part of IBM Research's Bluesky Project with Pfizer, launched in 2016, which aimed to develop a system to improve clinical trials conducted for PD drugs in development. IBM Research's specific role has been developing new algorithms that enable AI to analyze data collected from study participants. Bluesky's basic premise was to digitize clinical trials to create a more accurate way of assessing patients than the traditional approach of having them self-report. Self-reporting can be especially problematic for PD patients experiencing cognitive impairment.

For the study, we applied our AI algorithms to data from three Bluesky studies that collected sensor data from people in three categories: 1) individuals diagnosed with PD undergoing the standard neurological exam, 2) healthy participants undergoing the same protocol, and 3) people with PD in unconstrained behavior at home. We developed a new mathematical model to extract value from the sensor data, which provides a way to objectively monitor and measure disease progression and movement quality in Parkinson's patients in an unsupervised setting. This allows a neurologist to compare patient evaluations both in a clinical setting and at home.

Our approach also proved highly efficient: it required data from less than 10 minutes of activity, on average, to create stable estimates, allowing continuous 24/7 evaluation of the neurological state. This is particularly important for patients whose pathological state fluctuates during the day.

At a time when there is increasing interest in expanding telemedicine capabilities to enable patients especially vulnerable to COVID-19 to remain at home, our research demonstrates how a neurologist could accurately evaluate PD patients remotely. The added benefit of such a scenario is that telemedicine checkups could be performed more frequently than is possible when patients are required to visit a doctor's office.

Our current research is part of a larger body of work at IBM Research studying how data science and technology can potentially help improve the study of neurodegenerative diseases, including Huntington's disease, which shares many similarities with Parkinson's. For instance, another recent publication by our team in npj Parkinson's Disease, also part of the Bluesky project, demonstrated the ability to determine whether a patient's dose of levodopa, a palliative drug that substitutes for dopamine, has taken effect, by measuring acoustic and content properties of short, simple speech samples.

Looking ahead, although we explicitly tested our proposed model on movement differences associated with PD, we believe it could be the basis for potentially helping to detect other neurological states with characteristic movement signatures.

[1] Prevalence of Parkinson's disease across North America, npj Parkinson's Disease 4, 21 (2018).
[2] Standard strategies for diagnosis and treatment of patients with newly diagnosed Parkinson disease, Neurology: Clinical Practice.
[3] The MDS-sponsored Revision of the Unified Parkinson's Disease Rating Scale, International Parkinson and Movement Disorder Society, August 2019.
<urn:uuid:db40494a-aae3-4132-86fe-bbe8caa03efb>
CC-MAIN-2022-40
https://www.ibm.com/blogs/research/2020/08/ai-could-help-enable-accurate-remote-monitoring-of-parkinsons-patients/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00631.warc.gz
en
0.92723
1,305
2.71875
3
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Since the early days of artificial intelligence, computer scientists have been dreaming of creating machines that can see and understand the world as we do. These efforts have led to the emergence of computer vision, a vast subfield of AI and computer science that deals with processing the content of visual data. In recent years, computer vision has taken great leaps thanks to advances in deep learning and artificial neural networks. Deep learning is a branch of AI that is especially good at processing unstructured data such as images and videos. These advances have paved the way for boosting the use of computer vision in existing domains and introducing it to new ones. In many cases, computer vision algorithms have become a very important component of the applications we use every day.

A few notes on the current state of computer vision

Before becoming too excited about advances in computer vision, it's important to understand the limits of current AI technologies. While improvements are significant, we are still very far from having computer vision algorithms that can make sense of photos and videos in the same way humans do. For the time being, deep neural networks, the meat-and-potatoes of computer vision systems, are very good at matching patterns at the pixel level. They're particularly efficient at classifying images and localizing objects in images. But when it comes to understanding the context of visual data and describing the relationships between different objects, they fail miserably. Recent work done in the field shows the limits of computer vision algorithms and the need for new evaluation methods. Nonetheless, the current applications of computer vision show how much can be accomplished with pattern matching alone. In this post, we'll explore some of these applications, but we will also discuss their limits.

Commercial applications of computer vision

You're using computer vision applications every day, maybe without noticing it in some cases. The following are some of the practical and popular applications of computer vision that are making life fun and convenient.

One of the areas where computer vision has made huge progress is image classification and object detection. A neural network trained on enough labeled data will be able to detect and highlight a wide range of objects with impressive accuracy.

Few companies can match Google's vast store of user data, and the company has been using its virtually limitless (and ever-growing) repository of user data to develop some of the most efficient AI models. When you upload photos to Google Photos, it uses its computer vision algorithms to annotate them with content information about scenes, objects, and persons. You can then search your images based on this information. For instance, if you search for "dog," Google will automatically return all images in your library that contain dogs. Google's image recognition isn't perfect, however. In one incident, the computer vision algorithm mistakenly tagged a picture of two dark-skinned people as "gorilla," causing embarrassment for the company.

Google also uses computer vision to extract text from images in your library, Drive, and Gmail attachments. For instance, when you search for a term in your inbox, Gmail will also look at the text in images. A while back, I searched my home address in Gmail and got an email with an image attachment that contained an Amazon package with my address on it.
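To give a sense of how little code a basic image classifier requires today, here is a sketch using a pretrained convolutional network from torchvision. This generic off-the-shelf model is for illustration only; it is not the proprietary model behind Google Photos, and the file name is a placeholder:

```python
# Hedged sketch of image classification with a pretrained network
# (torchvision's ResNet-50). A generic illustration of the technique,
# not any specific product's model; "photo.jpg" is a placeholder.

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()  # resize, crop, normalize for this model

img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add a batch dim
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]

top = torch.topk(probs, 3)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx]}: {p:.1%}")  # e.g. "golden retriever: 93.2%"
```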
Image editing and enhancement

Many companies are now using machine learning to provide automated enhancements to photos. Google's line of Pixel phones uses on-device neural networks to make automatic enhancements, such as white balancing, and to add effects, such as blurring the background.

Another remarkable improvement that advances in computer vision have ushered in is smart zooming. Traditional zooming features usually make images blurry because they fill the enlarged areas by interpolating between pixels. Instead of enlarging pixels, computer vision-based zooming focuses on features such as edges and patterns. This approach results in crisper images.

Many startups and longstanding graphics companies have turned to deep learning to enhance images and videos. Adobe's Enhance Details technology, featured in Lightroom CC, uses machine learning to create sharper zoomed images. The image editing tool Pixelmator Pro sports an ML Super Resolution feature, which uses a convolutional neural network to provide crisp zoom and enhancement.

Facial recognition applications

Until not long ago, facial recognition was a clunky and expensive technology limited to police research labs. But in recent years, thanks to advances in computer vision algorithms, facial recognition has found its way into various computing devices. The iPhone X introduced Face ID, an authentication system that uses an on-device neural network to unlock the phone when it sees its owner's face. During setup, Face ID trains its AI model on the owner's face, and it works decently under different lighting conditions and across changes in facial hair, haircuts, hats, and glasses.

In China, many stores are now using facial recognition technology to provide a smoother payment experience to customers (at the price of their privacy, though). Instead of using credit cards or mobile payment apps, customers only need to show their face to a computer vision-equipped camera.

Despite the advances, however, current facial recognition is not perfect. AI and security researchers have found numerous ways to cause facial recognition systems to make mistakes. In one case, researchers at Carnegie Mellon University showed that by wearing specially crafted glasses, they could fool facial recognition systems into mistaking them for celebrities.

Data-efficient home security

With the chaotic growth of the internet of things (IoT), internet-connected home security cameras have grown in popularity. You can now easily install security cameras and monitor your home online at any time. Each camera sends a lot of data to the cloud, but most of the footage recorded by security cameras is irrelevant, causing a large waste of network, storage, and electricity resources. Computer vision algorithms can enable home security cameras to use these resources more efficiently. The smart cameras remain idle until they detect an object or movement in their video feed, after which they can start sending data to the cloud or sending alerts to the camera's owner. Note, however, that computer vision is still not very good at understanding context, so don't expect it to tell the difference between benign movements (e.g., a ball rolling across the room) and things that need your attention (e.g., a thief breaking into your home).
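The "stay idle until something moves" behavior can be approximated with classic frame differencing. The sketch below is a deliberately simple illustration (the threshold and trigger values are arbitrary choices); commercial cameras typically use far more sophisticated on-device neural detectors:

```python
# Minimal sketch of the "idle until motion" idea behind data-efficient
# smart cameras, using simple frame differencing with OpenCV. The
# threshold and pixel-count trigger are arbitrary illustrative values.

import cv2

MOTION_PIXELS = 5000  # how many changed pixels count as "motion"

cap = cv2.VideoCapture(0)  # first attached camera
ok, prev = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > MOTION_PIXELS:
        # Only now would the camera "wake up": start recording,
        # upload footage to the cloud, or push an alert to the owner.
        print("motion detected: start streaming/alerting")
    prev = gray

cap.release()
```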
Interacting with the real world

Augmented reality, the technique of overlaying real-world videos and images with virtual objects, has become a growing market in the past few years. AR owes much of its expansion to advances in computer vision algorithms. AR apps use machine learning to detect and track the target locations and objects where they place their virtual objects. You can see the combination of AR and computer vision in many applications, such as Snapchat filters and Warby Parker's Virtual Try-On.

Computer vision also enables you to extract information from the real world through the lens of your phone's camera. A very remarkable example is Google Lens, which uses computer vision algorithms to perform a variety of tasks, such as reading business cards, detecting the style of furniture and clothes, translating street signs, and connecting your phone to wi-fi networks based on router labels.

Advanced applications of computer vision

Thanks to advances in deep learning, computer vision is now solving problems that were previously very hard or even impossible for computers to tackle. In some cases, well-trained computer vision algorithms can perform on par with humans who have years of experience and training.

Medical image processing

Before deep learning, creating computer vision algorithms that could process medical images required extensive efforts from software engineers and subject matter experts. They had to cooperate to develop code that extracted relevant features from radiology images and then examine those features for diagnosis. (AI researcher Jeremy Howard has an interesting discussion on this.) Deep learning algorithms provide end-to-end solutions that make the process much easier. The engineers create the right neural network structure and then train it on X-rays, MRI images, or CT scans annotated with the outcomes. The neural network then finds the relevant features associated with each outcome and can diagnose future images with impressive accuracy.

Some AI researchers have gone as far as saying deep learning will soon replace radiologists. But those who have experience in the field beg to differ. There's much more to diagnosing and treating diseases than looking at slides and images. And let's not forget that deep learning extracts patterns from pixels; it does not replicate all the functions of a human doctor.

Game-playing AI is another advanced application. Computer vision algorithms play an important role in helping these programs parse the content of the game's graphics. One thing to note, however, is that in many cases the graphics are "dumbed down" or simplified to make it easier for the neural networks to make sense of them. Also, for the moment, AI algorithms need huge amounts of data to learn games. For instance, OpenAI's Dota-playing AI had to go through 45,000 years' worth of gameplay to achieve champion level.

In 2016, Amazon introduced Go, a store where you could walk in, pick up whatever you want, and walk out without getting arrested for shoplifting. Go used various artificial intelligence systems to obviate the need for cashiers. As customers move about the store, cameras equipped with advanced computer vision algorithms monitor their behavior and keep track of the items they pick up or return to shelves. When they leave the store, their shopping cart is automatically charged to their Amazon account. Three years after the announcement, Amazon has opened 18 Go stores, and it's still a work in progress.
But there are promising signs that computer vision (helped along by other technologies) will one day make checkout lines a thing of the past.

Cars that can navigate roads without human drivers have been one of the longest-standing dreams and biggest challenges of the AI community. Today, we're still very far from having self-driving cars that can navigate any road under all lighting and weather conditions, but we have made a lot of progress thanks to advances in deep neural networks. One of the biggest challenges of creating self-driving cars is enabling them to make sense of their surroundings. While different companies are tackling the problem in various ways, one thing that is constant among them is computer vision technology. Cameras installed around the vehicle monitor the car's environment. Deep neural networks parse the footage and extract information about surrounding objects and people. That information is combined with data from other equipment, such as lidars, to create a map of the area and help the car navigate roads and avoid collisions.

Creepy applications of computer vision

Like all other technologies, not everything about artificial intelligence is pleasant. Advanced computer vision algorithms can scale up malicious uses. Here are some of the applications of computer vision that have caused concern.

It is not only phone and computer makers who are interested in facial recognition technology. In fact, the biggest customers of facial recognition technology are government agencies, which have a vested interest in using the technology to automatically identify criminals in security camera footage. But the question is: where do you draw the line between national security and citizen privacy? China shows how too much of the former and too little of the latter can result in a surveillance state that gives too much control to the government. The widespread use of security cameras powered by facial recognition technology enables the government to closely track the movements of millions of citizens, whether they are criminal suspects or not.

In the U.S. and Europe, things are a bit more complicated. Tech companies have faced resistance from their employees and digital rights activists over providing facial recognition technology to law enforcement. Some states and cities in the U.S. have banned the public use of facial recognition.

Computer vision can also give eyes to weapons. Military drones can use AI algorithms to identify objects and pick out targets. In the past few years, there's been a lot of controversy over the use of AI by the military. Google had to call off the renewal of its contract to develop computer vision technology for the Department of Defense after it faced criticism from its employees. For the moment, there are still no autonomous weapons; most military institutions use AI and computer vision in systems that have a human in the loop. But there's fear that, with advances in computer vision and greater engagement from the military sector, it's only a matter of time before we have weapons that choose their own targets and pull the trigger without a human making the decision. Renowned computer scientist and AI researcher Stuart Russell has founded an organization dedicated to stopping the development of autonomous weapons.
<urn:uuid:2d55ac07-5480-4f98-8303-c563aa5e6327>
CC-MAIN-2022-40
https://bdtechtalks.com/2019/12/30/computer-vision-applications-deep-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00631.warc.gz
en
0.94547
2,637
3.203125
3
One of the questions network engineers, especially newcomers, ask most often is: what does QoS stand for? It is also an important lesson in CCNA and CCNP training. QoS (Quality of Service) is the general name of a concept used to optimize networks by giving different priority levels to different applications and providing improved service for those applications. From the customer's viewpoint, QoS means users get better performance, without drops, packet loss, or unacceptable delays. From the service provider's viewpoint, QoS lets you use your network more efficiently, with optimized bandwidth and the best possible performance.

Having answered what QoS stands for, let's now focus on some important terms: the enemies of QoS. QoS mainly works by controlling several impairments that degrade traffic; with these adjustments, networks become more efficient. The important QoS enemies are:

Packet loss: losing packets along the path.
Delay: the time a packet takes to get from one point to another along the path.
Jitter: variable delay in packet transfer.

There are various types of traffic, such as real-time voice, data, and streaming video, and each needs different QoS adjustments. Some traffic types are very sensitive to delay; others are not. Some are sensitive to packet loss; for others, packet loss is less important.

Voice, as you know, is real-time traffic. When you talk with another person, the voice data is created interactively, and it needs to reach the other end in a very short time; if it doesn't, communication with the person at the other end becomes impossible. So voice traffic is sensitive to delay. Your voice data must also avoid variable delay (jitter), which likewise makes communication worse. Lastly, for good communication, your sentences need to arrive clearly, without loss; everything you say needs to be heard at the other end. This makes voice traffic sensitive to packet loss as well.

QoS (Quality of Service) can be implemented with different service models. There are three main QoS service models:

Best Effort is the simplest service model. It is also known as the model without QoS; in other words, it makes no QoS adjustments. In this model, all traffic looks the same and is treated the same, and the only parameter is time: packets are sent in the order they arrive (first come, first served).

Integrated Services is the second QoS model, in which applications request QoS from the network's control plane. Through explicit signalling, an application tells the network that it needs QoS; in other words, it requests a reservation. RSVP (Resource Reservation Protocol) is used for this explicit signalling. After the request, the application gets specific QoS parameters associated with that traffic, and after confirmation, the data is sent.

Differentiated Services is the third QoS model. In this model, there is no prior reservation. The network classifies the different types of traffic into groups and then marks these groups with DSCP values. Each group is then treated according to its own QoS policy.
In Differentiated Services, there is no explicit signalling before the data is sent, and QoS is applied per hop. Differentiated Services uses the DS field, which redefines the ToS (Type of Service) byte in the IP header. As a summary: Integrated Services reserves resources end to end through explicit signalling (RSVP) before data is sent, while Differentiated Services classifies and marks traffic and applies QoS hop by hop, with no prior reservation.
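As a small, illustrative sketch (not part of the lesson itself), this is how an application on a typical host might request Differentiated Services treatment by setting the DSCP bits on its own socket. Whether the marking is honored is up to the routers along the path:

```python
# Illustrative sketch: marking a socket's traffic with a DSCP value.
# DSCP 46 (EF, Expedited Forwarding, per RFC 3246) is a common choice
# for voice; the destination address below is a documentation address.

import socket

DSCP_EF = 46              # Expedited Forwarding
TOS_VALUE = DSCP_EF << 2  # DSCP occupies the top 6 bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Datagrams sent on this socket now carry DSCP 46 in the IP header, so
# a DiffServ-enabled network can place them in a low-latency queue.
sock.sendto(b"voice payload", ("192.0.2.10", 5004))
```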
<urn:uuid:ff75e240-2771-46d3-b19d-0241bf11bc06>
CC-MAIN-2022-40
https://ipcisco.com/lesson/qos/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00631.warc.gz
en
0.933237
836
3.25
3
DNA logic utilizes the properties of DNA molecules for storage and processing. Logic functions are formed from the binding of DNA molecules in various combinations. A DNA "AND" gate, for example, would chemically join two separate DNA codes in an end-to-end string. Rapid search functions are possible since DNA strands bind to their complementary strands. The interconnection of these gates does not occur electrically through wires, but through saline liquids of various concentrations.
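Why complementarity enables fast search can be illustrated with plain string logic. The sketch below is a toy, not a chemistry simulation, and it ignores the fact that real hybridization is antiparallel (which would call for the reverse complement):

```python
# Toy illustration of complementary binding as a search primitive.
# Not a chemistry simulation; real hybridization is antiparallel, so
# the reverse complement would normally be used.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    return "".join(COMPLEMENT[base] for base in strand)

def binding_sites(pool: str, probe: str) -> list:
    """Positions in the pool where the probe strand would bind."""
    target = complement(probe)
    return [i for i in range(len(pool) - len(target) + 1)
            if pool[i:i + len(target)] == target]

pool = "GGCTTAACGATCGTTAACG"
probe = "AATTGC"                   # complement of TTAACG
print(binding_sites(pool, probe))  # -> [3, 13]
```

Chemically, every copy of the probe in solution tests all potential sites at once, which is where DNA computing gets its massive parallelism.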
<urn:uuid:c47f0130-8bf3-42f7-bab0-c28fed9f79bc>
CC-MAIN-2022-40
https://www.gartner.com/en/information-technology/glossary/dna-logic
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00631.warc.gz
en
0.941669
93
3.625
4
Frame-relay is one of the WAN protocols you need to understand if you plan to become CCNA certified. It's also one of the most difficult protocols for most CCNA students to understand. In this lesson I'll explain why we use(d) frame-relay and how it works.

Before we start looking at frame relay, let me tell you a little story. We have a network with four sites: R1, R2, R3 and R4. Since we want connectivity between the sites, we have an ISP who sold us three leased lines:

- Between R1 and R2.
- Between R1 and R3.
- Between R1 and R4.

Using leased lines is awesome! You are the only one using these lines since you are paying for them. This means high quality and probably a low chance of congestion (if you have fast links). It's also pretty secure since only your traffic flows through these lines. There are also a few downsides to using leased lines, however:

- You are the only one using these lines, so you have to pay for them; being exclusive is fun but expensive.
- On R1 you'll need three interfaces, one for each leased line, and more interfaces means more money.
- What happens if you move site R1? It's not always possible to move the leased lines with you.
- What if R1 crashes? R2, R3 and R4 will be isolated as well.

This is a picture of frame relay, and it works a bit differently. The idea behind frame relay is that you have a single infrastructure from the service provider, and multiple customers are connected to it, effectively sharing everything.

In the middle you see a cloud with an icon you probably haven't seen before. This icon is the frame relay switch. The cloud is called the frame relay cloud, and it has this name because, to us as the customer, what happens inside it is unknown. It is the service provider's infrastructure, and we really don't care what happens there... we are the customer, and all we want is connectivity!

What else do you see? There are two customers (1 and 2), and each of them has a headquarters (Hub) and a branch office (Spoke).

One more picture: here's a frame relay network with three routers from one company. There's a router at the headquarters (Hub), and we have two branch offices (Spoke1 and Spoke2). All of them are connected to the frame relay cloud.

We call our service provider since we want connectivity, and the first question they'll ask us is: which sites should be connected? In the example above you can see two virtual circuits, the green one and the blue one. With frame relay there's a difference between the physical and logical connections. The physical connection is just the serial cable connected to the provider. Our logical links are virtual circuits. As you can see, there is a virtual circuit from Spoke1 to the Hub and another one from Spoke2 to the Hub. This means that we can send traffic through our virtual circuits between:

- Spoke1 and Hub.
- Spoke2 and Hub.

There is no virtual circuit between Spoke1 and Spoke2. Does this mean there is no connectivity between them? No, you can still have connectivity between them by sending traffic through the Hub! Of course, you can get another virtual circuit between Spoke1 and Spoke2, but you'll have to pay for it. Virtual circuits are also called PVCs (Permanent Virtual Circuits).

You also pay for a certain speed called the CIR (Committed Information Rate). The cool thing about frame relay is that when no other customers are using the frame relay network, you may get a higher speed than what you paid for; the CIR, however, is the speed that is guaranteed.
How do we know if a PVC is working or not? Frame-relay uses something called LMI, which stands for Local Management Interface. LMI has two functions: it acts as a keepalive between the router and the frame relay switch, and it reports the status of the configured PVCs, telling the router which ones are active.
<urn:uuid:bb57bbba-e4d9-4e6f-af7a-73f3ac62a692>
CC-MAIN-2022-40
https://networklessons.com/cisco/ccnp-route/introduction-to-frame-relay-for-ccna-students
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00631.warc.gz
en
0.957705
880
2.59375
3
Since Gordon E. Moore's landmark observation in 1965, the entire technology industry has been rooted in the concept that the complexity of integrated circuits doubles every 18 months (originally stated as every two years). However, many people incorrectly interpret "Moore's Law" to mean that the overall productivity of computer-based processes increases at the same exponential rate. In reality, improvements in practical computer power lag far behind these exponential hardware improvements.

While Moore's Law continues to be a fair indicator of the complexity of integrated circuit design, this complexity is not being applied toward single, faster, larger, and more productive computing units with increasingly larger buses. Instead, it is being implemented in processors with multiple computing units on smaller chips. And while the complexity and overall computing power of the processor stay true to Moore's Law, the increased complexity of software design for multicore processors leaves the end result less than ideal.

Manufacturers of dedicated appliance solutions, or of software built to run on off-the-shelf hardware, who intended to ride the "Intel power curve" to consistently increase the overall performance of their products, quickly hit the performance wall of a single processor (core). Individual processor speed, which had for many years increased dramatically and consistently, began to stagnate. The key to maintaining performance became multiprocessor or multicore design.

On single-processor, multipurpose machines (like home computers), multitasking and multithreading resemble multiprocessing by enabling the single processor to context-switch between processes or between multiple threads within the same process. For instance, a single-processor home computer can seemingly run both a web-browser process and a word-processing process simultaneously. The single processor can only run one process at a time, but with multitasking it can quickly switch between the two processes to give the appearance of simultaneous execution. Individual threads within a process can be treated in the same way. On a single-processor system, however, the number of processing cycles is still finite, and the processes share that single resource.

Multiprocessing is a computing architecture that allows the simultaneous use of multiple processors (cores) in order to increase the overall performance of the machine. In multiprocessor machines, processes and threads can actually run simultaneously on different processors instead of merely giving the appearance of simultaneous execution. In general, there have been two predominant methods of achieving this goal: Symmetric Multiprocessing (SMP) and Asymmetric Multiprocessing (ASMP).

SMP is very similar to the multitasking used on single-processor systems. The processes themselves are unaware of the existence of multiple processors. The underlying operating system kernel employs a scheduling process to virtualize the processors and decide which process or thread executes on each processor for any given cycle. This, in effect, still uses multitasking process context-switching; each processor is not guaranteed to continually service the same process (or thread). However, this is the easiest way to gain access to all processing cores with minimal impact on the software design, and it is supported, out of the box, by most operating systems.
This is generally more applicable to multipurpose computing platforms (PCs, servers, and so on), although many special-purpose appliances still rely on this form of multiprocessing.

ASMP relies less on generic kernel-level virtualization to provide optimal use of multiple processors and puts the control directly into the hands of the developer. Instead of "load balancing" the processes across all processing cores, the application is written to target specific processing cores for specific processes. Process A can be dedicated to core 1, and process B can be dedicated to core 2. This significantly reduces or eliminates the need for process context-switching. It also allows the system to take advantage of special-purpose processors (network processors, graphics processors, and others) to augment general-purpose processors much more efficiently. This is more applicable to purpose-built appliance computing platforms, such as dedicated routers, Application Delivery Controllers (ADCs), firewalls, and so on.

Both of these methods can significantly improve the performance of an application, but at a cost. Both SMP and ASMP have significant issues, especially when used for dedicated applications, that prevent them from fully utilizing the additional processing power of multiple processors, particularly as the number of available cores increases.

SMP has significant overhead associated with the arbitrary distribution of process execution. First, the scheduling process itself requires processing cycles that are not available to the application for which the device was built. As the number of processing cores increases, so does the number of cycles required to handle process scheduling and inter-process communication. In addition, without specific interaction from the application developer, SMP can incur significant overhead when context-switching is required, a very costly, cycle-intensive operation. While process scheduling has continued to improve in efficiency, and purpose-built appliances generally do not run as many unique processes as multipurpose computing platforms, generic SMP still has significant overhead that cuts into the available computing power.

The most significant issue for ASMP is the need to rewrite and redesign the specific application to accommodate multiple processing cores. This can add substantial development time, especially when adapting old code. It also increases the complexity of the software (and thus the cost of the developers) and requires code revisions whenever the number or type of available processing cores changes; a move from dual-core to quad-core processors, for example, needs to be accounted for. Another drawback of ASMP is that, since processes are not load balanced, a single core might sit with idle cycles while another is incapable of handling its requests, a probability that increases with the number of cores. The efficiencies gained by eliminating context-switching can be quickly eaten up by inefficient processor usage or by the complexity of development.

This is not to say that these models provide no increase in processing capability; rather, both suffer from a case of diminishing returns. A dual-processor/core system does not perform twice as fast as a single-core system. Each core added to the system contributes a diminishing amount of computing resources, eventually reaching the point at which all the computing power of an additional core is eaten up by managing and implementing that core.
This results in no appreciable increase in overall computing power. This is, to some degree, explained by Amdahl's Law. Named for Gene Amdahl (father of F5's first CTO, Carl Amdahl), Amdahl's Law essentially states that the performance increase that can be expected from parallelizing a process is a function of how much of the process can truly be parallelized. If a process requiring 10 units of time can only be 50 percent parallelized, the process will never run in less than five units, even if the parallelized portion is processed instantly. As a result, the entire process can never be more than twice as fast.

The problem, therefore, is that both traditional multiprocessing methods are tightly coupled, suffer from a shared-memory model, and require significant inter-process communication. Regardless of whether you virtualize a single process across multiple processing cores with SMP or break the process across multiple cores with ASMP, both solutions typically share memory between threads or processes and must allow communication between them. This means that in order to avoid race conditions and data corruption, the entire process must be painstakingly orchestrated; hence the "tightly coupled" label. For example, any memory access must take a lock to prevent other processes or threads from simultaneously acting on the same data. Issuing memory locks is not only expensive in terms of cycle time (if other processes need the same data, they must wait until the lock is cleared to continue execution), but the entire system can be throttled by the number of locks that can be maintained per unit of work. If we have to process 1 million transactions per second and take out three locks per transaction, at 300 ns per lock, 90 percent of the CPU time is consumed by locking, leaving little for actual transaction processing.

Consequently, while most manufacturers have focused on increasing the multiprocessor capabilities of their products, the tightly coupled nature of both SMP and ASMP has limited the proportion of their systems that can be parallelized. With the remaining serialized portion no longer improving in performance, it is easy to see why most purpose-built appliances continue to see diminishing returns on multiprocessor implementations: they have been continually improving the performance of only part of their system.

If you accept that little can be done about the performance of the serialized portion of a system, and you recognize that Amdahl's Law demonstrates the futility of continuing to improve the performance of a static parallelized portion, there remains only one way to substantially improve the overall performance of the system: you must increase the proportion of the process that can be parallelized relative to the portion that remains serialized.

This problem of parallelization conjures feelings of déjà vu at F5. It is remarkably similar to a problem we've seen, and solved, before. In the early days of Application Delivery Controllers, when they were known as "load balancers," F5 competed against many host-based software solutions. F5 invariably outperformed these systems when the pool of servers grew beyond a few systems. Why? Because the overhead of communicating state information between the hosts quickly exceeded the performance gained by adding systems; they suffered severely from diminishing returns. The math is pretty straightforward.
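In its standard form, Amdahl's Law says that if a fraction p of a process can be parallelized across n cores, the best possible speedup is 1 / ((1 - p) + p / n). The short sketch below (illustrative only) computes the cycle counts used in the worked example that follows:

```python
# Amdahl's Law, using the same numbers as the worked example below:
# a 10-step process with varying parallel fractions and core counts.

import math

def cycles(total_steps: int, parallel_fraction: float, cores: int) -> int:
    """Cycles to finish when the parallel steps are split across cores."""
    serial = round(total_steps * (1 - parallel_fraction))
    parallel = total_steps - serial
    return serial + math.ceil(parallel / cores)

for p, n in [(0.4, 2), (0.4, 4), (0.8, 4)]:
    c = cycles(10, p, n)
    print(f"{int(p * 100)}% parallel on {n} cores: {c} cycles "
          f"({(10 - c) * 10}% faster than fully serial)")
```

Running it prints 8, 7, and 4 cycles respectively, matching the walk-through below.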
Let's imagine a simple 10-step process. A fully serialized version will take 10 cycles to complete. Now, let's say that the process can be 40 percent parallelized and you have two cores that can execute it. Sixty percent, or six steps, must still be done in sequence, but the remaining four steps can be executed simultaneously on two processors (requiring 2 cycles: 2 cycles x 2 processors = 4 steps executed). The process now takes only eight cycles to complete, a 20 percent improvement in overall performance. If you add two additional cores, the process takes only seven cycles, which represents a 30 percent improvement over the original serialized process but only a 12.5 percent increase over the eight-cycle version (seven cycles versus eight). This perfectly demonstrates the reason for the diminishing returns: parallelizing the process (as far as possible) and adding a second core returned a 20 percent improvement, but adding two more cores returned only an additional 12.5 percent. In this simple illustration, adding any more cores will do absolutely nothing to improve performance, as all steps that can run in parallel already do. If, however, you can make the process 80 percent parallelized, that same four-core system can run the process in four cycles.

F5 realized early on that simply adopting multiprocessor architectures without addressing the proportion of the process that can be parallelized was a short-term, dead-end street. The company invested heavily in developing a way to increase the parallelization of the traffic management process. The result of this investment is F5's Clustered Multiprocessing (CMP) architecture. CMP combines the benefits of load balancing and high availability provided by SMP with the efficiency of limited context-switching and special-purpose processor utilization offered by ASMP. It accomplishes this while eliminating the need for the shared-memory model and reducing the inter-process communications that continue to shackle the performance of other vendors' multiprocessor designs. CMP provides a virtualized processing fabric that delivers industry-changing performance, scalability, extensibility, adaptability, and manageability.

To start with, TMOS, the purpose-built software platform on which F5 products run, is extremely efficient when run on a single core. The Traffic Management Microkernel (TMM) is a single-threaded, non-context-switching process optimized specifically for processing Application Delivery Network traffic. In addition, the TMM is designed to easily incorporate ASMP principles and the performance improvements of special-purpose processors. For instance, the TMM executes encryption on the general-purpose processor (in software) by default, but if an encryption coprocessor is present, it can offload the work to that special-purpose processor, with no change in operation other than the increased performance of the dedicated hardware. The TMOS platform, which F5 also spent significant time and resources developing, consistently outperforms other products in the marketplace and remains the core of CMP.

From that basis, most manufacturers would simply attempt to use SMP to distribute the TMOS process across multiple processors, with shared memory, network card, and special-purpose processors.
Others might attempt to run multiple instances of the TMM on different processors, still with the requisite shared memory, network card, and special-purpose processors. Instead, CMP enables load balancing of multiple processing cores, each with its own dedicated memory, network interface, and special-purpose processors. Each core runs its own, completely independent TMM process. By separating the dependencies between the instances, CMP allows more of the traffic management process, virtually the entire process, to be parallelized. This provides a substantial benefit to the overall performance of the system. The hardware that enables CMP comprises two important, proprietary F5 technologies: the Disaggregator and the High Speed Bridge (HSB).

CMP's design sidesteps three classic sources of multiprocessing overhead:

Tendency to tightly couple: The more tightly coupled the code, the more inter-process communication overhead there is. The F5 implementation of CMP makes it hard to tightly couple threads/processes.

Automatic scheduling overhead: This is the scheduling done between threads or processes by the kernel. If the number of processes is greater than the number of CPUs, this overhead increases.

Manual scheduling overhead: This is the re-balancing of the thread/process count for a processing pipeline. It frequently crops up in ASMP designs and sometimes in SMP designs.

The Disaggregator acts as a hardware-based load balancer, distributing traffic flows between the independent TMM instances and managing flow affinity if or when necessary. Not only does this facilitate near 1:1 linear performance growth (doubling the number of processing cores nearly doubles the computing power, with no diminished returns), but it completely virtualizes the processing cores from the system and from the other cores. This provides high availability and reliability in the event that any core becomes non-functional. In the VIPRION chassis, this includes the addition and/or removal of entire blades and their associated processing cores.

The HSB delivers direct, non-blocking communication between the TMM instances and the outside world without the loss normally associated with Ethernet interconnects. It also provides the streamlined message-passing interface that enables TMM instances to share information. This provides the unsurpassed throughput and interconnectivity of each processor's dedicated network interfaces, and it mitigates the performance impact of inter-process communication in the few remaining instances where it takes place. Again, in the VIPRION chassis, this facilitates efficient traffic distribution and message-passing between blades as well as within the cores of each blade.

Up until now, the game has been pretty simple, and widely understood: first, optimize your code to run on a single processor as best you can and ride the "Intel power curve"; then, optimize your code for SMP or ASMP and build your platforms with as many processing cores as possible. All the while, performance improvements have slowly dwindled to minuscule amounts. CMP changes the rules of the game. Instead of working to continually improve the performance of a never-changing proportion of parallelized processes, CMP's most basic tenet is to change that proportion. Continuing improvements in performance can only be realized by increasing the amount of the application delivery process that can be parallelized. Only by parallelizing nearly all of that process can near 1:1 linear scaling be achieved, fully utilizing all the processing cores.
In much the same way that F5 redefined the load balancer at the turn of the century with the implementation of SSL offload (starting the evolution of Application Delivery Controllers), CMP redefines the Application Delivery Controller. The ADC is no longer limited by processing capability or network throughput. It is now free to grow with the needs of the organization and has the scalability to adapt to new, unforeseen functionality down the road, all within a single, easy-to-manage package. CMP, in combination with TMOS, provides F5 customers with the scalability, extensibility, adaptability, and manageability to consolidate the data center of today and future-proof the data center of tomorrow.
<urn:uuid:33bdddb0-148a-4def-a836-95343c06bd8b>
CC-MAIN-2022-40
https://www.f5.com/fr_fr/services/resources/white-papers/viprion-clustered-multiprocessing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00031.warc.gz
en
0.930751
3,518
3.890625
4
We all know how to do our bit to reduce our CO2 emissions, from wearing a coat instead of turning on the heating, to walking or taking the bus instead of driving. But we often forget about our tech. New, fancy technology is growing in demand and function at a massive pace. Yet while it's clear to us that walking instead of driving reduces emissions, the energy consumption of our technology is less obvious. And it's substantially bigger than you think. Bitcoin alone stands to use 0.5% of the world's energy consumption this year, making its daily use "a bit less than the energy consumption of Ireland". So, in the pursuit of both better tech and reduced energy consumption, how can we use and change the way we code to combat climate change?

Energy consumption of the tech world

With the battery power of our devices, the energy needed to read code, and the storage and retrieval of data, our daily technology use takes a lot of power. In fact, the IT industry consumed 3-5% of the world's electricity in 2015. By 2025, it could be using 20% of all electricity and emitting up to 5.5% of the world's carbon emissions. That's a monstrous amount, and not good news for the environment.

So, what makes the energy appetite of our tech so insatiable? The answer lies in data, and in the way our code needs to be read. Whether it's a website, a software program, or data stored in the cloud, the bowels of the technology we use require constant (and enormous) electrical feeding. In fact, data centers currently have the same (and potentially higher) energy consumption as the aviation industry, and it's set to triple in under a decade.

To add salt to the wound, our devices are only getting more energy-demanding and more widely used. There is already an expected 20% increase in demand for fancy tech. That means more devices, all requiring more energy and all adding to our CO2 emissions. The energy reliance of our tech means more fossil fuels burnt. Technology use, then, comes with a weighty carbon footprint and a sizable contribution to global warming. So, along with switching our energy sources from fossil fuels to renewables, we need to find ways to reduce the energy consumption of the technology we so greedily consume.

Reducing code energy consumption

Reducing the energy consumption of code tackles the problem at the source: the way we code can help combat tech energy consumption, and so climate change. Plus, increasing battery life will improve the user experience. Things like endless null checking and messy spaghetti code increase the power needed to run your software. By reducing these as much as possible, you make your code more sustainable and easier to maintain. In other words, ensuring your code is as simple, efficient, and easy to read as possible can reduce its energy appetite. There are a few ways you can start to do this.

You could write negative code, a practice that involves reducing the lines of code in your software while improving its quality and sustainability. This encourages meeting essential complexity: finding the shortest and simplest way to write something. Negative code also means you're creating less code that will need reading, so less energy will be needed to run it. In cases where running the program requires the code to be repeatedly read by a machine, for example, less code can mean faster loading times and reduced power consumption.

Code refactoring is another way to support sustainable code creation.
Code refactoring is a process where code is restructured without changing its external behavior. It's useful for keeping code tidy and enhancing both the maintainability and usability of software components. Because code refactoring improves maintainability, you reduce the risk of spaghetti code, which means you also reduce the risk of your code demanding excessive energy.

Incorporating sustainable design

Then there are design choices you can make to improve the sustainability of your websites and software. As a bonus, many of these also improve the user experience, with fast loading times and ease of use.

When your content is easy to find and your software easy to use, less energy is consumed by users searching for the information or function they want. Improving your content's findability, then, can make your code less greedy for energy. On your website, this can be achieved with clearly labelled menus, helpful subcategories, and an easy-to-access search function. For your software, findability means making your features as intuitive to find and use as possible.

Optimizing for speed can also reduce the energy consumption of code. On a website, enabling browser caching to reduce how often your code is downloaded is a great start. Another easily achievable step is optimizing your images for low data use, meaning that less energy is used to retrieve website data. You can reduce the energy appetite of your software, meanwhile, by removing unnecessary elements and actively avoiding feature creep (adding extraneous features that aren't helpful to your users).

Paint the tech world green

We already turn our lights off, reuse our plastic bags, and recycle. These are all small actions, little things that add together to make a big difference. Coding with energy consumption in mind is another small thing we can do that stands to make a big difference. So, make the little changes you can to combat climate change with your code. You'll improve the user experience and could help make the tech world a slightly greener place.
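As a closing illustration of the "do the work once" principle behind both negative code and caching, here is a small, generic sketch. The slow function is a hypothetical stand-in for any repeated computation, not a real API:

```python
# Generic illustration of caching a repeated computation so the CPU
# (and the battery) only pays for it once. fetch_exchange_rate is a
# hypothetical stand-in for any expensive call or heavy loop.

from functools import lru_cache
import time

@lru_cache(maxsize=128)
def fetch_exchange_rate(currency: str) -> float:
    time.sleep(0.5)  # stand-in for a slow network call or computation
    return {"EUR": 1.08, "GBP": 1.27}.get(currency, 1.0)

start = time.perf_counter()
for _ in range(100):
    fetch_exchange_rate("EUR")  # 99 of these 100 calls hit the cache
print(f"100 lookups in {time.perf_counter() - start:.2f}s (one real computation)")
```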
Healthcare organizations face constant pressure to deliver better care and results at a lower cost. To achieve that goal, data must be collected, analyzed, and used to drive informed decisions. Data management is a key component in assuring that patients receive proper care and treatment. The need for data management in the healthcare industry is increasing exponentially as the number of user devices and information systems continues to rise. Effective and efficient data management improves patient care by making healthcare records available sooner, reducing medical errors, and driving cost savings.

What is healthcare data management?

Healthcare data management is the process of tracking and managing the flow of data between all affiliated departments in a healthcare organization. It includes the collection of data, the transfer of that data, and the storage and/or disposal of the data when it is no longer needed. It is a blend of IT and management disciplines that must work together to achieve optimal results.

How does data management affect patient care?

Data management ensures that all information is in the hands of the right people at the right time. This may seem like a simple concept, but in practice it can be challenging. For example, if a doctor is working with a patient's medical history, but that information is stored in a computer system that can't share data with other caregivers in the medical team, delays occur while permission is requested, and further issues can arise before the doctor is able to assist the patient. When data is missing or out of sync, it damages the critical relationship between clinicians and patients.

Healthcare data management challenges

Every organization has unique challenges when it comes to implementing a data management program. Many healthcare organizations struggle with a lack of standardization across disparate systems and data sources. By creating a single source of data, you can increase the accuracy of data analysis and reduce the risk of medical errors. Without a standard system, however, data is often fragmented and difficult to access. By improving your data management system, you can increase the consistency of patient data across your organization and improve the timeliness of health information.

Benefits of effective health data management

Efficient and effective data management is essential to improving the quality of care and driving cost savings. Accurate data on patient demographics, conditions, treatments, outcomes, and costs enables healthcare organizations to deliver better care at lower cost. For example, with better data on which treatments are most effective for a specific condition, clinicians can prescribe those treatments more frequently and with less risk to patients. Best practices for data management require a multi-disciplinary approach among clinicians, administrative staff, IT specialists, and data analysts. By investing in an initiative that streamlines your data flow, you can ensure that consistent information is accessible across your organization. And by automating data collection processes, you can reduce the burden of data entry on your clinicians while increasing the availability of clinical data across your entire organization.
Improved data management can also help reduce the risk of medical errors, increase the accuracy of data analysis, improve the efficiency of your billing processes, and identify areas for improvement, so that you can form a thorough understanding of each patient's care and outcomes.

Reducing medical errors

By collecting more consistent data, you reduce the risk of mistakes in your data analysis. Integrating data from disparate sources improves the accuracy of clinical decisions, which in turn reduces the risk of medical errors. Improved data management can also shorten the time it takes to diagnose and treat patients. When the number of steps between the information source and the patient is reduced, the probability of human errors, such as misdiagnosis or improper treatment, goes down.

Risk management and patient care

Effective data management can help reduce risks that could affect patient care, such as the risk of unintended harm. Much as with a risk analysis for a medication, a risk analysis can help organizations identify, prioritize, and mitigate risks that could negatively affect patient safety. It can help reduce the number of surgical complications, unnecessary re-hospitalizations, and other events that have a significant impact on patient care.

Unifying healthcare data management through EHR

It is estimated that up to 80% of data in a healthcare organization is unstructured and not easily accessible. Electronic health records (EHR) enable organizations to unify their data management efforts by providing a single source of information. They also help healthcare organizations standardize their data so that it can be easily accessed and used across the organization. Unifying data management efforts can reduce the risk of data loss by ensuring that all of your data is stored in a single, centralized system.

Implement better data management into your routine

Healthcare organizations are tasked with providing a continuum of care to their patients. Implementing a data management program can help healthcare organizations increase their efficiency and improve the quality of care. The data management specialists at Everconnect can help you develop, implement, and manage data management in your healthcare organization so you can provide faster, better care for your patients.
A botnet is a large group of compromised machines used by an individual or organization to carry out cyberattacks. Botnets are typically composed of infected machines that have been commandeered by a threat actor, who uses them to launch attacks against other targets.

A Botnet is a Network of Computers Controlled by a Threat Actor

A botnet is a network of computers controlled by a threat actor. Botnets are often used for cybercrime purposes, such as spamming or launching distributed denial-of-service (DDoS) attacks.

Types of Bots

Bots come in many forms, and the bots in a botnet can be put to a variety of purposes, including launching DDoS attacks, spreading malware, and conducting reconnaissance.

Tactics Used by Botnet Operators

Botnet operators use a variety of tactics to control their botnets and make them more efficient. One common tactic is to add new bots to the network regularly in order to keep it active and functioning. Botnet controllers also use their bots to send spam emails, launch denial-of-service (DoS) attacks, or steal sensitive data.

What Do You Call the Person Behind a Botnet?

There is no single term that perfectly describes a threat actor who controls multiple bots in a botnet; the term most often used is "botnet master." Botnets are collections of infected computers joined together to form a network that shares data and commands. They can be used for a variety of purposes, including sending spam emails, launching distributed denial-of-service (DDoS) attacks, or infecting other computers with malware. The size and complexity of botnets have increased over the years, as has the sophistication of the tools used to control them. Today's botnet masters can command their bots using a variety of languages and platforms, which allows them to operate in secrecy.

A Botnet is a Collection of Computers Controlled by the Same User

When a hacker controls multiple bots in a botnet, they can use the botnet to launch distributed denial-of-service (DDoS) attacks or steal sensitive information. A botnet typically refers to a network of malware-infected computers used to carry out malicious activities without the knowledge or consent of their owners. Botnets can range in size from a few dozen machines to tens or hundreds of thousands.

A Threat Actor is Someone Who Uses Bots to Attack Other Websites

A threat actor, in this context, is someone who uses bots to attack other websites. This can be done for a variety of reasons, such as gaining access to sensitive information, spreading malware, or disrupting services. Botnets are a common tool for threat actors because they allow a single operator to direct large numbers of machines in an attack.

A Command and Control Server is Used to Control the Bots in a Botnet

A threat actor controls the bots in a botnet through a command and control server. The server issues commands to the bots, directs their actions, and relays information back to the threat actor.

A Distributed Denial of Service (DDoS) Attack Is When Multiple Bots Flood a Website with Traffic to Cause it to Slow Down or Cease Functioning

A distributed denial-of-service (DDoS) attack occurs when many bots flood a website with traffic until it slows down or stops functioning. DDoS attacks are typically carried out by attackers using botnets: large networks of infected machines acting in concert.
Botnets are often built by compromising unsecured computers and then using them to send spam or launch DDoS attacks. A threat actor who controls multiple bots in a botnet is generally referred to as a "botmaster." Botmasters use their bots to carry out various attacks, including distributed denial-of-service (DDoS) attacks and spamming, theft of information and intellectual property, and recruitment of other hackers. As botmasters continue to develop new ways to use their bots for malicious purposes, it is important that organizations keep up with the latest trends and tactics employed by these threat actors.
Concerns about privacy and protecting personal information are in the spotlight for organizations all over the world. New, more comprehensive data privacy laws have been enacted or proposed in the past few years, and it has become imperative for companies of all sizes and across all industries to prioritize the protection of personal data. We have collected 10 data protection regulations from across the globe that organizations should know about.

1. GDPR (EU)

The EU's General Data Protection Regulation (GDPR) came into effect on May 25, 2018, and has created a far-reaching ripple effect that brought data protection into the public eye and onto legislative agendas the world over. The GDPR marks the most important change in data privacy regulation in the last 20 years and provides unprecedented protection and individual empowerment. Europe's framework for data protection puts new obligations on companies and organizations to ensure the privacy and protection of personal data, grants data subjects certain rights, and assigns powers to regulators to demand demonstrations of accountability or impose fines in cases of non-compliance. Key concepts of the GDPR include lawful, fair, and transparent processing; clear and explicit consent; mandatory breach notification; the right to access; the right to be forgotten; and principles like privacy by design and by default. The regulation has extraterritorial applicability, meaning that it applies to all organizations collecting and processing the personal data of individuals residing in the EU, regardless of where the company is located.

2. PIPEDA (Canada)

Canada's federal data protection law, the Personal Information Protection and Electronic Documents Act (PIPEDA), was enacted early in 2000. PIPEDA applies to organizations operating in the private sector and regulates, among other things, how businesses collect, use, and disclose personal and sensitive information. The law is broken down into ten core principles that businesses must follow. To harmonize the Canadian requirements with the EU's GDPR, the Government of Canada issued the Data Privacy Act, an amendment to PIPEDA, which came into force on November 1, 2018. This Act adds new rules to PIPEDA, including consent requirements, data breach notifications, and a revised scope of application.

3. CCPA (California)

Effective January 1, 2020, the California Consumer Privacy Act (CCPA) responds to the increased role of personal data in contemporary business practices and the privacy implications surrounding the collection, use, and protection of personal information. With this law, signed on June 28, 2018, the Golden State gives consumers insight into and control over the personal information collected about them online, and it forces companies that conduct business in California to implement structural changes to their privacy programs. Like the GDPR's, the CCPA's impact is expected to be global, given California's status as the fifth largest economy in the world. Among the key components of the CCPA are an extended definition of personal information, new data privacy rights for California residents, a new statutory damages framework, and new rules for the use of children's personal data.
California's new privacy law shares many similarities with its European counterpart, the GDPR, including data subjects' right to know what data is being collected about them and how it is used, as well as the right to have their data erased. However, there are significant differences between the two laws as well, particularly concerning the scope of application and the rules on accountability.

4. APPI (Japan)

Japan's Act on the Protection of Personal Information (APPI) was originally enacted in 2003 and came into effect in 2005. It was significantly amended ten years later, in 2015; the amendments took effect one year ahead of the EU's GDPR, on May 30, 2017. The APPI protects the personal data of individuals in Japan by establishing rules for governments and certain business operators on acquiring and handling an individual's personal information. Entities operating in Japan must comply with the APPI whether or not cross-border data transfers occur. The APPI differs from the GDPR in several respects; the GDPR provides greater protection for data subjects and stricter obligations for the companies that process personal data. On January 23, 2019, Japan became the first country to earn an adequacy decision from the European Commission (EC) after the GDPR took effect, which ensures a smooth flow of data between the EU and Japan and facilitates the increased volume of data transfers.

5. LGPD (Brazil)

On August 14, 2018, Brazil approved the General Data Protection Law ("Lei Geral de Proteção de Dados," or LGPD), slated to come into effect on August 15, 2020. The new data protection framework, heavily inspired by the GDPR, creates rules for processing personal data online and offline, in the public and private sectors, regardless of where the data processor is located. The legislation aims to replace and supplement existing legal norms; one reason for its development was to bring data processing in Brazil in line with European standards. Key similarities between the LGPD and the GDPR include data subjects' rights (e.g., the right to request access to their data and the right to be forgotten), the need for data protection officers, data protection impact assessments, and data breach notifications. On several points, however, such as legal bases and mandatory breach notifications, the LGPD goes further than the European legislation.

6. PDPA (Singapore)

Personal data in Singapore is protected under the Personal Data Protection Act (PDPA), which was adopted in 2012 and came into full force in 2014. The PDPA applies to all private sector organizations and establishes a data protection framework comprising various rules governing the collection, use, and disclosure of personal data. It recognizes both the rights of individuals to protect their data and the needs of organizations to collect, use, or disclose personal data for legitimate and reasonable purposes. Like the GDPR, the PDPA has extraterritorial reach and extends to organizations that may not have any presence in Singapore.

7. PDPA (Thailand)

Thailand's first consolidated law governing data protection in the country, the Personal Data Protection Act (PDPA), was published on May 27, 2019. Organizations collecting and processing personal data must ensure they are compliant with the PDPA by May 27, 2020. Thailand's government has largely drawn its concepts from the GDPR, with certain modifications to suit the national context.
It did so deliberately, to demonstrate to the EU that Thailand offers an "adequate" level of data protection. The PDPA outlines, among other things, a new definition of personal information, special categories of sensitive data, consent requirements (including for minors), data subjects' rights, extraterritorial applicability, and restrictions on transfers of personal data to third countries.

8. PDPB (India)

The national government's "Srikrishna Committee" issued its much-awaited draft legislation for a new Personal Data Protection Bill (PDPB) on July 27, 2018. The proposed framework would regulate the processing of personal data of individuals (data principals) by government and private entities (data fiduciaries) incorporated in India and abroad, and it sets out how personal data may be collected, processed, and stored. The Bill is largely influenced by the GDPR and adopts several of its principles, such as the right to access and correction, the right to portability, and the right to be forgotten; however, individuals' rights are more limited than under the EU law. While the draft bill may be amended before it is submitted to Parliament, which may request further changes, it will serve as the basis for the final bill.

9. NDB (Australia)

The Notifiable Data Breaches (NDB) Scheme came into effect on February 22, 2018, and is part of Australia's Privacy Act, which contains 13 principles governing entities' obligations in the management of personal data. Under the NDB Scheme, companies that handle personal data such as bank account information or medical records are obliged to report data breaches to the Office of the Australian Information Commissioner (OAIC). They must also inform the persons whose information was exposed. Like the GDPR, the NDB Scheme aims to allow affected individuals to take necessary action to protect their personal information, and it imposes considerable penalties on organizations that fail to comply.

10. Data Security Administrative Measures (China)

On May 28, 2019, the Cyberspace Administration of China released a draft of its Data Security Administrative Measures (the "Measures") for public comment, joining the list of countries around the world pushing for stricter data protection legislation. The Measures supplement the Cybersecurity Law of China, which came into effect on June 1, 2017, and provide strict, detailed rules for network operators who collect, store, transmit, process, and use data within Chinese territory. Network operators who collect important data or sensitive personal information for business operations must file with the cyberspace administrative departments. In March 2018, the Personal Information Security Specification was issued, providing detailed guidance on compliant information processing. The Measures are intended to give these technical specifications and best practices in data security legal force.

Download our free ebook, a comprehensive guide for all businesses on how to ensure GDPR compliance and how Endpoint Protector DLP can help in the process.
Being able to provide users with an ever-increasing amount and quality of content in an ever-decreasing amount of time is the driving force behind every successful business in today's digital landscape. And, with the widespread availability of broadband internet around the globe, customers' expectations of how they access and consume content are rapidly changing. Streaming media is much more than a luxury in today's digital climate: in some cases, it's a business imperative. To cater to the global demand for media-rich browsing and other content-heavy activities around the clock, many content providers are seeking the help of Content Distribution Networks (CDNs). So, with that in mind, what exactly is a Content Distribution Network, how does it work, and should you be considering using one?

What is a Content Distribution Network, and how does it work?

A Content Distribution Network (also sometimes referred to as a Content Delivery Network) is a network of distributed servers, deployed in multiple data centres, that deliver content to end-users based on their physical location. When an end-user accesses a website, a request for the necessary content is sent to the receiving server. This request is then redirected to the server geographically closest to the end-user, from which the required content is delivered. Should the requested content not be available on the closest server, it is transferred from the originating site and stored as a cached copy, which is then available for subsequent requests from the same area. Essentially, this makes it possible for content-heavy web services to deliver content equally efficiently to users around the world. In other words, Content Distribution Networks make it possible for users to access web content from the other end of the globe without any impact on retrieval time or delivery speed.

What are the benefits of a Content Delivery Network?

The most significant benefit of a Content Delivery Network is the dramatic improvement to user experience. Traditionally, when a user accessed a web page, they would request the necessary content from the originating server, and it would pass through multiple servers before arriving at the user's browser. If, for example, somebody in Cape Town requested content from a server in Oslo, the end-user would likely experience a significant amount of packet loss and latency before receiving the information. By storing commonly requested content locally, Content Distribution Networks reduce bandwidth requirements and server loads while delivering content to the end-user faster. In short, Content Distribution Networks bring the edge of the network closer to the end-user, resulting in faster resolution times and an optimised user experience.
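To illustrate the mechanism just described, here is a minimal, hypothetical sketch in Python of the cache-aside logic an edge server might apply. The names and the one-hour TTL are assumptions for illustration, not any vendor's actual API:

import time

CACHE_TTL_SECONDS = 3600   # assumed freshness window for a cached copy
edge_cache = {}            # in-memory stand-in for the edge server's local store

def fetch_from_origin(url):
    # placeholder for the real network call back to the originating site
    return "<content of " + url + ">"

def serve(url):
    # Return content for a request arriving at this edge server.
    entry = edge_cache.get(url)
    if entry and time.time() - entry["stored_at"] < CACHE_TTL_SECONDS:
        return entry["body"]            # cache hit: served locally, no trip to the origin
    body = fetch_from_origin(url)       # cache miss: fetch once from the origin...
    edge_cache[url] = {"body": body, "stored_at": time.time()}
    return body                         # ...so subsequent nearby requests hit the cache

Real CDNs layer invalidation, consistency, and routing logic on top of this, but the economics are visible even in the sketch: every cache hit is a request the origin server never has to see.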
What are the risks associated with Content Delivery Networks?

While Content Delivery Networks have many benefits, especially for organisations offering on-demand streaming media sites and other high-bandwidth internet services, they aren't entirely without their risks. One of the major downsides is that they are typically charged for by the amount of data used. For websites with heavy traffic, this can quickly become expensive, and it can have disastrous consequences for small sites that experience periodic spikes in traffic. The extra cost is often more than justified by the improvement in service levels that end-users will enjoy; in the case of an unchecked Denial of Service (DoS) attack, though, the financial and infrastructural consequences can be dire.

Another common criticism of Content Distribution Networks is one that is used in any argument against IT outsourcing: handing content management over to a third party takes it out of your company's hands, meaning you no longer have a say in the quality of service. Additionally, if an outage prevents access to the Content Distribution Network, end-users will hold you responsible, not the CDN provider, which could damage your company's image.

IRIS has over a decade's experience in the Southern African ISP and telecoms industries. We cater to environments of any scale and complexity thanks to our uniquely customisable and extensible Network Management Software. If you'd like to find out more about how IRIS can help you manage a stable and highly available network, download our free Network Manager's Guide here.

Image Credit: www.volusion.com
Interest in philanthropy is growing. People are increasingly looking for ways to improve the world around them, whether through a donation of time, goods, or money. In fact, online giving with a debit or credit card has become the preferred method for making a monetary donation.

Unfortunately, card testing, an online fraud tactic in which criminals check the validity of stolen credit card numbers by making small, nondescript donations, has become a major problem for today's nonprofits. Our newest infographic, "Card-Testing Cybercriminals Are Draining Nonprofit Coffers," illustrates just how big this problem is and what can be done to stop card-testing cybercriminals from draining much-needed dollars from nonprofit coffers.
Under certain circumstances, texting Protected Health Information (PHI) can be deemed a violation of HIPAA. The classification as a violation depends on the message's content and its recipient, as well as on the effort the sender put into maintaining the integrity of the PHI. If the PHI is well protected, texting may be compliant with HIPAA.

The only place HIPAA addresses issues relevant to texting is in the Privacy and Security Rules. Many critics deem these rules unclear, and they are the cause of much misunderstanding about what constitutes a violation. The rules do not explicitly concern texting; they apply to electronic communications in the healthcare industry generally. They deem it appropriate to send messages by text on the condition that the content does not include "personal identifiers." They also allow a doctor to send text messages to a patient if the message complies with the "minimum necessary standard" they outline. All messages sent by text must comply with the technical safeguards of the HIPAA Security Rule to prevent a violation from occurring.

The Technical Safeguards of the HIPAA Security Rule

The technical safeguards of the HIPAA Security Rule are vital in deciding whether a text-related violation has occurred. This section of the Security Rule concerns access controls, audit controls, integrity controls, methods for ID authentication, and transmission security mechanisms for PHI transmitted electronically. The requirements outlined by these rules include:

- Access to PHI must be limited to authorized users who require the information to do their jobs.
- A system must be implemented to monitor the activity of authorized users when accessing PHI.
- Those with authorization to access PHI must authenticate their identities with a unique, centrally-issued username and PIN.
- Policies and procedures must be introduced to prevent PHI from being inappropriately altered or destroyed.
- Data transmitted beyond an organization's internal firewall should be encrypted to make it unusable if intercepted in transit.

Standard "Short Message Service" (SMS) and "Instant Messaging" (IM) texts often fail to meet any of these guidelines. One of the many gaps is the senders' inability to control the ultimate destination of their messages: they could be sent to the wrong number, forwarded by the intended recipient, or intercepted in transit. The fact that copies of SMS and IM messages remain on service providers' servers indefinitely also poses a serious security risk. Nor is there message accountability with SMS or IM, given the ease with which someone can use another person's mobile device to send or edit a message. For these reasons (and many more), communicating PHI by standard, non-encrypted, non-monitored, and non-controlled SMS or IM is texting in violation of HIPAA.
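As one illustration of the transmission security safeguard, here is a minimal sketch in Python using the open-source cryptography package; this is an assumption chosen for illustration, not a reference to how any particular secure messaging product works:

from cryptography.fernet import Fernet

# In a real deployment the key would be issued and managed centrally,
# never generated ad hoc or hard-coded on the device
key = Fernet.generate_key()
cipher = Fernet(key)

message = b"Patient records ready for review"   # stand-in for a message containing PHI
token = cipher.encrypt(message)                 # ciphertext is useless if intercepted in transit
assert cipher.decrypt(token) == message         # only a holder of the key can recover the text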
Healthcare Organizations and Text Violations

Texting in violation of HIPAA has proved to be a major problem for healthcare organizations in recent years due to the ubiquity of electronic messages. Indeed, many healthcare organizations have been keen to implement policies in which employees provide their own devices, with the result that around 80% of medical professionals use personal devices at work. This is a serious security issue, as PHI is at risk of being accessed by unauthorized personnel. Most messaging apps on mobile devices keep the user permanently logged in, and if a mobile device is lost or stolen, there is a significant risk that messages containing PHI could be released into the public domain.

The fines for a breach of HIPAA can be considerable: up to $50,000 for a single breach, per day that the vulnerability responsible for the breach goes unaddressed. Healthcare organizations that ignore texting violations can also face civil charges from the patients whose data has been exposed, if the breach results in identity theft or other fraud.

Secure Messaging Solutions

Secure messaging solutions resolve these texting issues by encapsulating PHI within a private communications network that can only be accessed by authorized users. Access is gained via secure messaging apps that function in the same way as commercially available messaging apps; the major difference is the security mechanisms in place to prevent an accidental or malicious disclosure of PHI. Once logged into the app, authorized users enjoy the same speed and convenience as SMS or IM text messaging, but they are unable to send messages containing PHI outside the communications network, copy and paste encrypted data, or save it to an external hard drive. After a period of inactivity on the app, the user is automatically logged off. All activity on the communications network is monitored to ensure total message accountability and to prevent texting in violation of HIPAA. If a mobile device carrying the secure messaging app is lost or stolen, administrators can remotely wipe all content sent to or created in the app and PIN-lock it to prevent further use.
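The automatic logoff behaviour mentioned above can be sketched in a few lines. This is an illustrative outline with an assumed 60-second limit, not code from any real secure messaging app:

import time

INACTIVITY_LIMIT_SECONDS = 60   # assumed policy value

class Session:
    def __init__(self):
        self.active = True
        self.last_activity = time.time()

    def touch(self):
        # called on every user action to reset the inactivity clock
        self.last_activity = time.time()

    def enforce_timeout(self):
        # called periodically by the app
        if self.active and time.time() - self.last_activity > INACTIVITY_LIMIT_SECONDS:
            self.active = False   # force re-authentication before PHI can be viewed again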
What is cybercrime?

May 12, 2021

Extortion, identity theft, international data heists: these are the realities of the cybercriminal underworld. Hiding behind online anonymity, thieves and hackers can extort money from victims on the other side of the planet. What is cybercrime? How is it committed? And is there anything we can do to prevent it?

What is cybercrime?

A cybercrime is a criminal act that targets or utilizes a computer, smartphone, or other connected device: a crime committed online. Cybercriminals attack a wide variety of targets using different methods depending on the victim. Some online criminals focus on extorting money from individuals, while others target the databases of businesses and corporate organizations. While most are motivated by wealth, certain hackers also double as political activists, attacking government bodies they deem corrupt.

However, a broad definition of cybercrime isn't particularly helpful when trying to understand the wide array of criminal acts the term encompasses. Vague phrases like "hacking" (bypassing security restrictions to access private data) refer to an almost limitless variety of actions. Let's focus instead on the specific tools, tactics, and intentions of the modern cybercriminal.

Malware delivery and infection

Malware is a useful catch-all term for different forms of malicious software, but it doesn't refer to one specific kind of virus or attack. Specific types of malicious software are involved in almost every type of cybercrime: if an attacker exploits a weakness in an operating system, spies on a user's keystrokes, or remotely hijacks a device, they're probably using malware.

To benefit from malware, the attacker must first find a way to install it on the target device. This is often referred to as infection, and there are several popular ways to do it.

A website can be used as a malware host, infecting any visitors who view the page. To this end, perpetrators design their own domains, building a malicious download function directly into the site. To reach more victims, criminals may send page links in phishing emails or use a domain name similar to a popular website's.

Malvertising uses online ads coded to install malware or redirect users to infectious websites. Cybercriminals try to sneak their pop-ups and banner ads onto legitimate sites, and even if people don't click on them, some can run automatically as soon as the page loads. A victim may not notice they've been targeted; the malvertisement can quietly install its malware while the user continues to browse, unaware of the infection.

But infection is often just a prelude to the main act of a cybercrime. Having installed malware onto a device, the next step will likely involve some form of theft, with money or data (or both) as the trophy.

Cybercriminals employ a range of techniques to steal, scam, and extort money from their victims. For example, they may use keylogging malware or Wi-Fi spying techniques to secretly view a victim's browsing traffic and steal their banking credentials when those are entered on a compromised device.

Targeting both individuals and, increasingly, corporations, some criminals use ransomware, a type of malware that locks the user's access to a device or database. Once access has been restricted, the perpetrator demands a ransom. With companies paying an average of $370,000 per attack, the global cost of ransomware crime is expected to reach $20 billion next year.
For criminals who don't rely on malware, social engineering tactics can still convince people to willingly part with their money online. While the notorious Nigerian Prince scam is relatively well known, many similar pretexting frauds have seen people send huge sums of money to criminals posing as businessmen, long-lost family members, and prospective lovers.

For an increasing number of cybercriminals, the way to make real money online is through data theft rather than directly targeting the victim's bank account with malware or social engineering. When it comes to this type of cybercrime, businesses and corporations are tempting targets. A large-scale data breach sees a company's private files hacked and its customer information exposed. User passwords, credit card numbers, and other sensitive data can prove incredibly valuable to an attacker, paving the way for more acts of cybercrime in the future. The average employee in the US has access to around 1,000 sensitive files, and many now work from home, where security protocols may not be properly enforced. By successfully compromising just one employee's device, a cybercriminal could access a treasure trove of private data, which can then be sold on the dark web or used to facilitate identity theft and further extortion.

Disruption and hacktivism

Not all cybercrime focuses on financial rewards, however. Some criminal acts are politically motivated, or simply intended to cause disruption. A distributed denial of service (DDoS) attack, for example, is an illegal procedure in which the attacker overwhelms a website or application with traffic until it is unable to serve legitimate users. In practice, that could mean forcing an entire website offline, or just disabling specific page features and functions. The rise of hacktivism, politically charged cyberattacks that often target government or corporate bodies, has seen DDoS attacks widely used as a form of protest. Other acts of hacktivism involve defacing official websites with messages and slogans, or exposing government or corporate data through leaks. Governments have also faced accusations of cybercrime, with China coming under particular scrutiny. When nations and military organisations resort to hacking, they stray into the realm of cyberwarfare.

Who investigates cybercrime?

Cybercrime can be investigated by different agencies at various levels, depending on the nature, severity, and location of the incident. Because perpetrators may not be in the same country as their victims, law enforcement agencies like the FBI in the United States work closely with their international counterparts overseas. Inter-governmental organisations like Interpol are particularly effective in tracking and apprehending cybercriminals because they can draw on resources in multiple nations and jurisdictions. They can also train and educate local authorities in different regions on the nuances of responding to cybercrime. Local police forces may struggle to deal with cybercrime for a number of reasons, such as the complexity of the methods used, the difficulty of tracking online perpetrators, and the lack of legal guidance. As cybercrime becomes ever more present in the 21st century, however, this will have to change.

How to prevent cybercrime: 5 simple steps

1. Use antivirus software. While a virus is just one form of malware, antivirus software can be a dynamic and multifunctional tool. A good antivirus firewall detects malicious software and blocks high-risk downloads.
Consider using an ad-blocker as well to bolster security and reduce the threat of malvertising.

2. Protect your passwords. Ensure that you're using long, complex passwords without any detectable patterns or words. Combine characters, numbers, and symbols to protect yourself against brute-forcing software (a minimal generator sketch follows at the end of this piece). Avoid using the same login credentials across multiple accounts, and find a password manager to simplify the process.

3. Be wary of email links. Emails and social media messages may contain infectious links, even if the sender seems trustworthy. The best way to guard against these scams is to exercise caution whenever you're asked to click something online. Before engaging with an email, confirm the sender's authenticity: call the company's helpline, or search online for news of similar scams. Human caution is a strong defense against phishing tactics.

4. Update your software. Out-of-date software can provide the weak spots that malware takes advantage of. Anything from a browser extension to your operating system could be the target. Individuals and companies should regularly check for new software patches or set their systems to update automatically.

5. Use a VPN. A virtual private network (VPN) encrypts the device's browsing data, limiting the risks of Wi-Fi spying and endpoint data breaches. With just one NordVPN account you can enjoy end-to-end encryption on six separate devices. For businesses with a network of hardware to protect, the NordVPN Teams service offers an effective corporate security solution.

Start enjoying VPN protection today. You can find the original blog post here at NordVPN.
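As promised under step 2, here is a minimal sketch of password generation using Python's standard secrets module; the 20-character length and the character set are illustrative choices, not a recommendation from the original post:

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    # secrets draws from a cryptographically secure source, unlike the random module
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # a fresh, pattern-free password on every run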
Graph Neural Networks

Graphs are everywhere around us. Your social network is a graph of people and relations. So is your family. The roads you take to go from point A to point B constitute a graph. The links that connect this webpage to others form a graph. When your employer pays you, your payment goes through a graph of financial institutions.

Copyright by venturebeat.com

Basically, anything that is composed of linked entities can be represented as a graph. Graphs are excellent tools to visualize relations between people, objects, and concepts. Beyond visualizing information, however, graphs can also be good sources of data to train machine learning models for complicated tasks.

Graph neural networks (GNN) are a type of machine learning algorithm that can extract important information from graphs and make useful predictions. With graphs becoming more pervasive and richer with information, and artificial neural networks becoming more popular and capable, GNNs have become a powerful tool for many important applications.

Transforming graphs for neural network processing

Every graph is composed of nodes and edges. For example, in a social network, nodes can represent users and their characteristics (e.g., name, gender, age, city), while edges can represent the relations between the users. A more complex social graph can include other types of nodes, such as cities, sports teams, and news outlets, as well as edges that describe the relations between the users and those nodes.

Unfortunately, the graph structure is not well suited for machine learning. Neural networks expect to receive their data in a uniform format. Multi-layer perceptrons expect a fixed number of input features. Convolutional neural networks expect a grid that represents the different dimensions of the data they process (e.g., width, height, and color channels of images). Graphs can come in different structures and sizes, which does not conform to the rectangular arrays that neural networks expect.

Graphs also have other characteristics that make them different from the type of information that classic neural networks are designed for. For instance, graphs are "permutation invariant," which means changing the order and position of nodes doesn't make a difference as long as their relations remain the same. In contrast, changing the order of pixels results in a different image and will cause the neural network that processes them to behave differently. […]

Read more: venturebeat.com
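To make the excerpt's point about permutation invariance concrete, here is a minimal sketch in Python of one round of neighborhood aggregation, the core operation most GNNs build on. It is a bare-bones illustration with made-up numbers, not the design of any particular model:

import numpy as np

# A toy graph: three nodes, undirected edges 0-1 and 1-2, two features per node
adjacency = np.array([[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]])
features = np.array([[1.0, 0.5],
                     [0.2, 0.8],
                     [0.9, 0.1]])

def message_passing_step(adj, feats):
    # each node averages its neighbors' features together with its own,
    # then applies a simple nonlinearity (ReLU)
    adj_with_self = adj + np.eye(adj.shape[0])          # let every node keep its own signal
    degree = adj_with_self.sum(axis=1, keepdims=True)   # number of contributors per node
    aggregated = (adj_with_self @ feats) / degree       # mean over each neighborhood
    return np.maximum(aggregated, 0.0)

updated = message_passing_step(adjacency, features)
print(updated)

Because each node's update is a mean over its neighborhood, relabeling the nodes merely reorders the output rows; no node's updated features change. That is exactly the permutation invariance the article describes, and it is why GNNs aggregate over neighborhoods instead of reading nodes in a fixed order.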
What's El Niño got to do with the crazy weather the U.S. has experienced in the first several weeks of 2016? What's El Niño got in store for the country through spring? And how does El Niño affect network UPS systems? Let's start at the beginning.

What's El Niño?

El Niño is a periodic warming of the central and eastern equatorial Pacific Ocean. Every 2-7 years, this patch of ocean warms for 6-18 months, beginning in summer and building through fall. The ocean's temperature has already risen, but the consequences of these temperature anomalies arrive during the colder months; in other words, now.

You will get dizzy reading NOAA (National Oceanic and Atmospheric Administration) and Weather Channel reports trying to get a handle on what El Niño means for our weather through spring 2016. You will learn that the current El Niño is very strong compared to others in history, the strongest in 18 years, and that its effects will shape our March and April weather. But specifically how, and where? Exact dates and geographies cannot be predicted. However, the nation's weather experts roughly predict that, due to El Niño, the weather will be intensified:

- Heavy rain in the South, the Gulf Coast, California, and from the desert Southwest to the central plains.
- Colder temperatures in the southern half of the United States, which, combined with the above, means the South may experience more ice.
- Heavy snowfall in the Great Lakes area and on the East Coast.
- Yet, overall, record-breaking temperatures will put 2016 at the top of the list of hottest years on record.

The good news is that the northern region of the U.S. is likely to have warmer-than-usual temperatures. As well, tornados and hailstorms are likely to be fewer as a result of El Niño. But experts warn that forecasting tornados is hard because they happen in capricious clusters: one tornado can spawn many. And as John Allen, a postdoctoral researcher at the International Research Institute for Climate and Society, asserts, there's danger in interpreting what a below-average year means. "It only takes one tornado to ruin your house and one major tornado to take your life," he said.

Nature will have its way. Truth number one: El Niño will cause weather drama, no doubt. Truth number two: El Niño's effects cannot be tracked on a particular schedule. Big truth number three: none of these weather conditions are good for power stability.

The impact of extreme weather on power outages is especially worrisome because of our nation's antiquated power infrastructure, known grimly as "a power grid on sticks." Even without El Niño-level weather, the incidence of constant, unpredictable power fluctuations is on the rise nationally. Even in moderate weather, the nation's old, shaky "sticks" are vulnerable to damage and outages. Now mix in El Niño and stir.

Network and server power issues are exacerbated. Heavy rains and flooding saturate the ground holding the power poles in place, uprooting them and downing power lines. Ice and heavy snowfall bear down on the poles, toppling them, and coat power lines densely enough to break them. Heavy snow pushed off the roads simply gets heavier and icier, making it more difficult for power companies to reach the lines and fix them. Though extreme hot and cold temperatures don't wreck power lines, they do cause excessive power usage. As everyone ramps up their heating and air conditioning, systems overload; blackouts and brownouts occur when usage peaks. So how do you protect your systems from outages?
First, realize that your IT equipment can only withstand about 16 milliseconds of an outage. If you are not prepared, your equipment has no chance to shut down properly, leaving you prone to data and equipment loss. That's why Uninterruptible Power Supplies (UPS) were invented, beginning in the 1950s. A UPS is a piece of sensitive electronic equipment offering a stable supply of power, responsively as needed. The equipment used to be huge, but today's UPS units are small, portable, and environmentally tolerant. Don't underestimate these small units, though: they are mighty protectors that will detect, verify, and react to a power problem in 4 milliseconds. Their sensitivity is what makes them so effective in an outage.

However, be aware that when a UPS is called into play by a power outage, its exquisite sensitivity is affected. That's why you need to pay attention to keeping them functional: recalibrating them and checking and reconditioning UPS batteries as needed. UPS maintenance is important in ordinary times, but exponentially more important as El Niño weather hits.

CoastTec experts are passionate about sharing information and keeping you in power. Call us today with questions about your UPS units and how to detect when servicing is needed: 410-521-1000. Learn more about our UPS Maintenance and Repair services. If you plan proactively, you can face El Niño weather with confidence. And stay tuned: on the heels of El Niño comes La Niña this fall!
A router is basically a computer, much like a PC. Let's look at the hardware components inside a router and what each one does. Routers have many of the same hardware and software components that are found in computers:

- CPU – The CPU executes operating system instructions for tasks such as booting up, routing functions, and switching functions.
- RAM – RAM is volatile memory that needs power to keep its contents; when the router reboots, RAM is wiped entirely. RAM stores data that needs to be executed and holds the operating system, the running configuration file, the IP routing table, the ARP cache, and the packet buffer.
- Operating System – In the case of Cisco devices, that is Cisco IOS (Internetwork Operating System). The IOS is copied into RAM during bootup.
- Running Configuration File – The configuration file that stores the configuration commands the router IOS is currently using. Almost all configuration commands on the router are stored in the running configuration file, called running-config.
- IP Routing Table – A file that stores information about directly connected and remote networks. It is used to determine the best path on which to forward a packet.
- ARP Cache – A cache containing IPv4-address-to-MAC-address mappings, much like the ARP cache on a PC. The ARP cache is used on routers that have LAN interfaces, such as Ethernet interfaces.
- Packet Buffer – Packets are temporarily stored in a buffer when received on an interface or before they exit an interface.

All of this is RAM, volatile memory. However, the router also contains permanent storage areas, such as ROM, flash, and NVRAM.

ROM – ROM is a form of permanent storage. Cisco devices use it to store the bootstrap instructions, basic diagnostic software, and a scaled-down version of IOS. ROM holds firmware: software that does not normally need to be modified or upgraded, such as the bootup instructions.

Flash Memory – Flash memory is nonvolatile computer memory that can be electrically stored and erased. Flash is used to permanently store the operating system, and it does not lose its contents when the router loses power or is restarted.

NVRAM – Nonvolatile RAM also keeps its information when power is turned off. NVRAM is used by the Cisco IOS as permanent storage for the startup configuration file (startup-config).
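If you have access to a Cisco router, these storage areas can typically be inspected from the IOS command line. The commands below are standard IOS show commands, though exact output and availability vary by platform and IOS version:

show version          (IOS version, uptime, and a summary of installed memory)
show running-config   (the active configuration held in RAM)
show startup-config   (the startup configuration stored in NVRAM)
show flash:           (the contents of flash memory, including the IOS image)
show ip route         (the IP routing table built in RAM)
show ip arp           (the ARP cache)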
A pair of NASA spacecraft have recorded the first 3-D images of the sun, giving scientists and space weather experts a much-improved ability to monitor and predict solar storms that can disrupt communications satellites, interfere with widely used GPS systems, and knock out electrical power grids.

The two spacecraft, NASA's Solar Terrestrial Relations Observatory (STEREO), both orbit the sun, one positioned slightly ahead of the Earth and the other slightly behind. Their offset positions, much like the positions of human eyes, give STEREO depth perception, letting the spacecraft produce 3-D images via their onboard telescopes and imaging systems. "The improvement with STEREO's 3-D view is like going from a regular X-ray to a 3-D CAT scan in the medical field," said Dr. Michael Kaiser, STEREO project scientist at NASA's Goddard Space Flight Center.

Cool and Useful

The 3-D images of the sun are not only interesting to look at; they're also meant to give scientists a better understanding of the sun's properties. STEREO is designed to improve space weather forecasts, which will in turn help people protect and manage various technologies.

There are several kinds of solar weather patterns, which come in cycles. "The sun has an 11-year cycle with increased levels of solar activity during the four- to five-year-long solar maximum period. It is possible for the sun to produce a large solar radiation storm during eight or nine of the years during the cycle," said Ron Zwickl, deputy director of the Space Environment Center (SEC). The SEC works with the National Weather Service, a branch of the U.S. National Oceanic and Atmospheric Administration (NOAA), and provides space weather alerts and warnings to government and industry in the U.S.

"The large radiation storms, called the S scale in the NOAA Space Weather Scales, occur infrequently, with only a few per year. Geomagnetic storms, the G scale, can cause problems with electronics for spacecraft and for electric power systems," Zwickl explained. "If geomagnetic storms could be better predicted, we could considerably reduce the impact on electric power grids, essentially eliminating any failure due to geomagnetic storms."

NOAA is currently arranging an international partnership of ground systems to receive a real-time signal stream from STEREO, enabling organizations to study solar weather and provide better forecasts. Even though we are at the low point of the 11-year solar cycle, our sun produced strong solar flares in December of last year. "We've been working our way down toward solar minimum, and we may be in it now or just out of it. It's hard to tell exactly until we're out of it," Chris Balch, lead forecaster for the SEC, told TechNewsWorld. "We had a real busy two weeks in December, and we are pretty close to minimum, so that was surprising."

There are three primary solar space weather phenomena: radio blackouts, solar radiation storms, and geomagnetic storms. Radio blackouts primarily affect long-distance high-frequency transmissions, which tend to be used by the military, at the north and south poles, and by aircraft that need to send signals over the horizon of the Earth. Solar radiation storms can affect electronics and spacecraft and pose a danger to astronauts in space, who may need to move into safer areas of their spacecraft or space stations during the outbursts.
Geomagnetic storms come from strong surges of solar wind and are often associated with Coronal Mass Ejections (CMEs): eruptions of electrically charged gas, called plasma, from the sun's atmosphere. A CME cloud can contain billions of tons of plasma and move at a million miles per hour, NASA says. A CME cloud is also laced with magnetic fields and can smash into our planet's magnetic field. If the CME's magnetic fields have the proper orientation, they dump energy and particles into Earth's magnetic field, causing magnetic storms.

The Northern Lights

"The most well-known manifestation of magnetic storms is the aurora borealis, and at the same time that's going on, you have currents running in the upper atmosphere, and that can induce a current on the ground," Balch said. "This causes problems for electric companies because it introduces current into power lines."

In addition, the storms can interfere with increasingly popular Global Positioning System (GPS) systems and devices. Many automobile drivers currently use GPS systems to navigate through cities, and while a service disruption there might prove merely annoying, GPS is also being used to drive unmanned harvesting equipment on America's farms. Who knows? As we enter the next active solar cycle, some farmers may need to pay attention to space weather in addition to their local forecasts.
Massive data sets (a season's worth of baseball statistics, for example, or health data from around the world) can contain some very revealing knowledge. The problem confronting researchers, though, is finding it. That may be a little easier with some tools developed by scientists at Harvard University and the Broad Institute. The suite of tools, called "MINE" (Maximal Information-based Nonparametric Exploration), was revealed this week in an article published in the journal Science. The tools allow researchers to find patterns and relationships in massive data sets that would otherwise be difficult or impossible to spot. "What makes MINE unique is its ability to find a very broad range of different types of patterns in data and to do that equally well," one of the authors of the article, David Reshef, who is in a dual degree program at Harvard and MIT, told TechNewsWorld. Coping With 'Noise' Another distinctive characteristic of MINE is its ability to balance generality and equitability in its results. A statistic has generality when it captures a wide range of associations in a large data sample without being limited to linear, exponential, periodic or other specific functional forms. It has equitability when scores assigned to data pairs described by the statistic are similar when the "noise" associated with the pairs is similar. Noise is what number crunchers call the amount of unexplained variation in a data sample. "The reason that's important is that if you have a method that gives different scores to patterns that look different from each other but have the same amount of noise, then you can't compare scores across different types of patterns," Reshef explained. That balancing of generality and equitability by MINE distinguishes it from other tools used for similar purposes, observed another of the article's authors, Harvard Computer Science Professor Michael Mitzenmacher. "Other similar data-mining techniques that we know of may have one or the other, but don't appear to have both," he told TechNewsWorld. MINE could be a valuable tool for anyone analyzing large data sets, especially in scientific fields, such as biology, medicine and genetics. "We're looking at a large number of applications there because when you're looking at genetic data or medical data, there are huge data sets with large numbers of variables, and that's what our tool is designed to help with," Mitzenmacher noted. One hot area of interest in the life sciences now is the role of bacteria in human intestines, which is also one of the topics the researchers focused MINE on in their article, whose other authors included Reshef's brother, Yakir, a Harvard student, and Pardis Sabeti, an assistant professor at the Center for Systems Biology at Harvard and a member of the Broad Institute. Life scientists are interested in determining relationships between bacteria living in our guts and things like illness and obesity. "We're starting to get a ton of data on this, and it's an exciting example of where MINE could be useful because we don't know what the patterns we're looking for are going to look like," Reshef explained. When MINE was run on the bacteria problem, it identified 7,000 variables that produced 22.5 million relationships.
Of those relationships, some 200 were "interesting." Working in Clusters The researchers later found out from microbiologists that some of those interesting relationships were linked to diets (some diets repressed certain bacteria; others allowed those bacteria to flourish), while others depended on where the bacteria appeared in the intestinal tract. When crunching data, MINE works relatively fast, according to Reshef. A sample producing around 63,000 relationships took about 1.5 hours on a laptop. But MINE doesn't have to be restricted to a single computer. It can be run on multiple computers working as a cluster. That's what was done with the gut data: there, hundreds of computers were used, and a task that would have taken days was reduced to about two hours. By using MINE on large data sets, analysts no longer have to eyeball reams of printouts looking for relationships. If the researchers printed on paper each potential relationship in the human gut bacteria data set, the stack of paper would be 1.4 miles high, or six times the height of the Empire State Building. MINE's application need not be restricted to scientific uses, added Mitzenmacher. "People are coming up with all sorts of other suggestions for uses," he said. "In our paper we tried to show a range of things by also looking at baseball data, and I imagine there will be people who want to try to use the tool for financial analysis."
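For readers curious about the idea behind MINE's headline statistic, the maximal information coefficient (MIC), the toy Python sketch below conveys the core intuition: grid the scatterplot at several resolutions, compute the mutual information of each grid, normalize, and keep the best score. This is emphatically not the authors' optimized algorithm (the real MIC bounds grid sizes by the sample size and searches grids cleverly); it is only a sketch of the concept, with all names invented.

```python
import numpy as np

def grid_mutual_information(x, y, bins_x, bins_y):
    """Mutual information (in bits) of x and y under an equal-width grid."""
    joint, _, _ = np.histogram2d(x, y, bins=(bins_x, bins_y))
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over rows
    py = p.sum(axis=0, keepdims=True)   # marginal over columns
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def toy_mic(x, y, max_bins=8):
    """Best normalized grid MI over a small family of grid resolutions."""
    best = 0.0
    for bx in range(2, max_bins + 1):
        for by in range(2, max_bins + 1):
            mi = grid_mutual_information(x, y, bx, by)
            best = max(best, mi / np.log2(min(bx, by)))  # normalize to [0, 1]
    return best

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
print("linear:   ", round(toy_mic(x, 2 * x + 0.1 * rng.normal(size=500)), 2))
print("parabolic:", round(toy_mic(x, x**2 + 0.1 * rng.normal(size=500)), 2))
print("noise:    ", round(toy_mic(x, rng.normal(size=500)), 2))
```

The point of the normalization is equitability: a clean line and a clean parabola both score near 1, while pure noise scores near 0, regardless of the shape of the pattern.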
<urn:uuid:b21bb81b-8e39-453a-b83b-ceadaa501b14>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/new-mine-tools-can-refine-mountains-of-data-74026.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00231.warc.gz
en
0.946779
1,041
2.953125
3
Right now, if you look at the sky at sunrise or sunset when the sun's light is dim, you might be able to see Sunspot AR1476, which is now wending its way across the face of Sol, with your naked eye. The sunspot measures 160,000 km (100,000 miles) across, or about a dozen times Earth's diameter. Eyeballing the sun might hurt your eyes, of course, so it's better to avoid looking directly at it. At present, that's about the only damage we can expect from Sunspot AR1476. "There was a disturbance that hit the Earth yesterday but it didn't look strong enough to do any harm," Wendell Horton, a professor emeritus at the Living With a Star Program, told TechNewsWorld. In a CME, the sun blasts a billion tons of particles into space at a speed of millions of miles per hour. A sunspot is an area on the sun's surface that contains constantly shifting strong magnetic fields. A solar flare ejects clouds of ions, electrons and atoms into space. When CMEs and solar flares occur together, they can cause geomagnetic storms, solar radiation storms or radio blackouts, Guhathakurta said. A geomagnetic storm could cause problems in the power grid or knock it out altogether, as has happened before, if the magnetic fields of the solar particles in the storm connect with the Earth's magnetic field, Guhathakurta pointed out. "You must have the right orientation or nothing will happen," Guhathakurta explained. "On March 7 and 8 we had a moderate solar storm and not much happened on Earth because the orientation of the magnetic field of this CME didn't connect with Earth's magnetic fields, but many of our satellites observing other planets were affected." Solar radiation storms and radio blackouts may occur, but "the effects can often be very subtle" and so aren't readily noticed by members of the general public, Guhathakurta stated. What About Sunspot AR1476? The sunspot currently trekking across Sol's face has unleashed several M-class flares and a few C-class flares, the latest an M5-class flare on Wednesday morning. Solar flares are classified as A, B, C, M or X in ascending order, according to the peak flux in watts per square meter of 100 to 800 picometer X-rays near Earth. A picometer is one trillionth of a meter. X-class flares "are major events that can trigger planet-wide radio blackouts and long-lasting radio storms," UC Riverside's Wilson said. "M-class flares are medium-sized, and can cause brief radio blackouts that affect Earth's polar regions."
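The lettered flux classes lend themselves to a few lines of code. Below is a hedged Python sketch of the standard GOES-style classification described above: each letter step is a tenfold increase in peak 0.1-0.8 nm X-ray flux, and the trailing number scales within the class (so a peak flux of 5e-5 W/m² is an M5 flare). The function name is ours.

```python
# Classify a solar flare from its peak soft X-ray flux (W/m^2).
def flare_class(peak_flux):
    thresholds = [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7)]
    for letter, floor in thresholds:
        if peak_flux >= floor:
            return f"{letter}{peak_flux / floor:.1f}"
    return f"A{peak_flux / 1e-8:.1f}"  # A-class: below 1e-7 W/m^2

print(flare_class(5e-5))    # M5.0 -- like the flare described above
print(flare_class(2.3e-4))  # X2.3 -- major event territory
```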
<urn:uuid:f0ba4c31-4320-4489-b098-5b567fc3fe8a>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/shining-some-light-on-sunspots-75080.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00231.warc.gz
en
0.919383
621
3.53125
4
If there is one thing we know about the people around us, even the perfect strangers, it's that they almost all have smartphones. And those smartphones aren't merely passive receivers; they're broadcasting constantly, looking for things you might want to connect to. Advertisers have exploited the electronic noise that smartphones make for years, using it to track people in places like shopping malls. But now a security researcher has used the same idea to detect if you're being followed. Matt Edmondson had the idea for the tool when a friend of his, who also works for the government, expressed concerns about being tailed when meeting a confidential informant who had ties to a terrorist organization. Although the friend is skilled at escaping those following them by car, he was looking for "an electronic supplement". "He was worried about the safety of the confidential informant," Edmondson explained to Wired. Edmondson wears many hats. He served as a federal agent for the US Department of Homeland Security for 21 years; he is also the founder of an infosec consulting company, a hacker, a certified SANS instructor, and a digital forensics expert. Suffice it to say, he has the skills and experience to create something that could keep someone safe using inexpensive parts, some open-source Python code, and a Raspberry Pi. Edmondson presented his project at Black Hat on Thursday. His talk, Chasing Your Tail With a Raspberry Pi, touched on how he assembled the anti-tracking device, the challenges encountered when building it, and some best practices to consider, including creating an ignore list for friendly smartphones, and the importance of randomizing your MAC address (the rarely changed identifier that allows others to track your smartphone). The anti-tracking device works by scanning for wireless devices and checking if these have been present within the past 20 minutes. Unlike tools made to scan stationary devices, Edmondson's machine was designed to scan moving ones. This is necessary as the act of tailing requires movement. The device fits in a shoebox and is housed in a waterproof case. It has a Wi-Fi card that runs Kismet (a popular wireless network detector), a portable charger, and a touchscreen where the user sees alerts. Each alert strengthens the possibility that one is being tailed. "It's purely designed to try to tell you that you're seeing something now that you were also seeing a few minutes ago," Edmondson says. "This isn't designed to follow people in any way, shape, or form." Edmondson pleads with the tech community to take digital tracking and surveillance seriously. "It was really kind of disheartening and depressing to look at the ratio of tools to spy on people versus tools to help you not get spied on," he says.
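As a rough illustration of the detection logic described above (not Edmondson's actual code), the Python sketch below flags wireless device identifiers that reappear across a 20-minute observation window, the telltale of a device moving along with you rather than sitting in one place. MAC addresses, timings and thresholds are all invented for the example.

```python
from collections import defaultdict

WINDOW_SECONDS = 20 * 60             # "seen within the past 20 minutes"
MIN_GAP_SECONDS = 5 * 60             # ignore immediate re-detections
IGNORE_LIST = {"aa:bb:cc:11:22:33"}  # known-friendly devices (e.g., your own)

sightings = defaultdict(list)        # MAC address -> recent timestamps

def record_sighting(mac, now):
    """Record a device sighting at time `now` (seconds); True means alert."""
    if mac in IGNORE_LIST:
        return False
    history = [t for t in sightings[mac] if now - t <= WINDOW_SECONDS]
    history.append(now)
    sightings[mac] = history
    # Alert when the same device shows up again minutes later: it has
    # plausibly followed you between scan locations.
    return len(history) > 1 and now - history[0] >= MIN_GAP_SECONDS

print(record_sighting("de:ad:be:ef:00:01", 0))    # False: first sighting
print(record_sighting("de:ad:be:ef:00:01", 600))  # True: seen again 10 minutes later
```

Randomized MAC addresses, which the talk recommends for defense, cut both ways here: they protect you from trackers, but they also make a tailing device harder to recognize by its identifier alone.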
<urn:uuid:29b1bfdd-ea29-42b4-9e3e-d6578fcb7933>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2022/08/someone-made-an-anti-tracking-tool-that-alerts-you-if-youre-being-tailed
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00231.warc.gz
en
0.967705
595
2.515625
3
"Stop," "think" and "connect" are more than just words thrown together to grab your attention. Combined, they form the name of a global campaign by the National Cyber Security Alliance in conjunction with Cybersecurity Awareness Month (our favorite month of the year!). History of STOP. THINK. CONNECT.™ In 2009, then-President Obama recognized the need to increase education and dialogue about cybersecurity and issued the Cyberspace Policy Review, which would be the foundation of America's cybersecurity and digital infrastructure moving forward. As part of this policy, Homeland Security was tasked with creating an awareness campaign to help citizens understand the risks associated with being online. This was the birth of the STOP. THINK. CONNECT.™ (STC) campaign. The campaign officially launched in October 2010 in conjunction with National Cybersecurity Awareness Month. STC started out with 25 different partner companies and seven government agencies backing the project, and in the years since its inception, the campaign has grown immensely. Today, it has more than 300 partners that each take on the responsibility of teaching others how to make sure they're secure when on the internet. The STOP. THINK. CONNECT.™ campaign has even gone global, with partners from more than 50 countries on six continents. Goals of STOP. THINK. CONNECT.™ So, just what are we stopping, thinking about and connecting to? The campaign's website presents these basic steps: - Stop and make sure that all security measures are properly in place. - Think about the consequences of every online action. - Connect and enjoy the internet! But there's more to it than that. According to its website, the campaign's top five goals are to: - Increase and reinforce awareness of cybersecurity, including associated risks and threats, and provide solutions for increasing cybersecurity - Communicate approaches and strategies for the public to keep themselves, their families and their communities safer online - Shift perception of cybersecurity among the American public from avoidance of the unknown to acknowledgement of shared responsibility - Engage the public, the private sector and state and local governments in our nation's effort to improve cybersecurity - Increase the number of national stakeholders and community-based organizations engaged in educating the public about cybersecurity and what people can do to protect themselves online The STC campaign also boasts several sub-campaigns, each focused on a different cybersecurity, online safety or privacy topic. To learn more about STC specifically or cybersecurity in general, check out the campaign's resources page. It's chock-full of tip sheets, videos, graphics and more, and addresses everything from wedding-related cyber-safety to public Wi-Fi etiquette (complete with cat memes!). You can register your organization to become an official partner of the campaign for free, and after you sign up, you'll receive a cybersecurity tool kit along with monthly newsletters, webinar invites and regular updates. Don't forget to check the events page for the latest goings-on, including live Twitter chats, symposiums and webinars. Overall, the most important thing you can do to ensure cybersecurity at your company and beyond is to spread the word. Cybersecurity Awareness Month may only last 31 days, but the principles associated with creating a safer cyber-environment for everyone are important all year round.
<urn:uuid:8d8cad1c-02f2-45ec-a5b2-f05d533d42cf>
CC-MAIN-2022-40
https://integrisit.com/stop.-think.-connect.-discover-how-you-can-help-protect-the-public-from-cyberthreats/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00231.warc.gz
en
0.945408
706
2.6875
3
If you're reading this article, chances are you live a tech-driven everyday life, at least up to a point. Most of us use the internet to connect with friends, family, work and entertainment. Again, up to different points; even "analog" fans own a mobile phone or a laptop. We can engage computers, smartphones, tablets and other internet-dependent devices to check our email, shop online, play games, browse through social media and more. Almost every virtual stop in our routine requires a personal account. Be it Instagram, your online banking, or your food-delivery platform, each account requires you to share certain personal information online. Your name, date of birth, address and payment info are the most commonly shared sensitive data on the web. If left unprotected, this data can fall victim to cybercriminals who then commit identity theft, financial fraud, or system compromise. Personal cyber protection is the sequence of steps necessary to secure our vital data against all potential cyberthreats. Let's explore how to do that optimally. Some people find updates irritating, especially if they require multiple operating system (OS) restarts to finish the update. "Again? I just updated Windows two days ago!" However, performing frequent updates means you're regularly getting an improved version of the software on your devices. Programs, apps and OSs can all be exploited by cybercriminals, and updates strive to prevent that. A software update can fix "bugs" (coding errors or security vulnerabilities) to deny unauthorized third parties access to your device and personal data. Cybercriminals continuously aim to find and exploit new vulnerabilities, so it's best to apply patches as soon as they're issued. Automation is the quickest way to ensure that. If you're keen on the "set-and-forget" approach, you can turn on and confirm automatic updates on all devices and operational software. Moreover, you can set the automation to update your system at a convenient time (typically at night, when you're asleep). Just make sure your device is plugged in and has enough storage space to apply the updates. Multifactor authentication (MFA) can drastically improve the security of essential accounts. Requiring two or more steps to log in adds extra hurdles for potential intruders. Even if they somehow crack your password, they'd also need access to your phone or access token. Usually, MFA consists of up to three primary pieces: · A PIN code, password or a secret phrase (a thing you know) · A smartcard, authenticator app, physical token, email or SMS code (a thing you have) · Facial recognition, fingerprints, or an iris scan (yourself as an authenticator) Data backups are copies of your essential data (e.g., work files, photos, payment information) saved on an external physical storage device or in the cloud. Disk-image backups are copies of your entire system including the OS, applications, and data. Full-image backups can serve for disaster recovery or help you migrate onto a new device. It is recommended that you back up your device on a daily or weekly basis, depending on how much data you can afford to lose if an unforeseen event happens. It is important to back up regularly and, preferably, automate the process. Use of password managers Modern passwords require you to develop a code of at least eight characters, mixing lowercase and uppercase letters, numbers and symbols.
Usually, you can use a single character of every type, with special symbols making up half of the password (keep in mind, the most used special symbols in passcodes are %, &, #, @, and _, so maybe stay away from those). However, remembering sophisticated passwords can be challenging, especially if you follow the rule not to use the same password twice. Conveniently, a password manager app can remember all of your passwords for you. Some subscription solutions even offer advanced features to up your passcode game. Mobile device security Smartphones are swiftly becoming the all-in-one solution for tech enthusiasts. You can work, shop, bank and even track your life balance through them. If somebody steals or compromises your smartphone, they can access your online accounts, steal your identity, steal money from you and destroy the personal data you hold dear (e.g., photos, messages, notes). What's more, hackers can also use your phone to scam other people. To best secure your mobile device, you should protect it with a password, PIN, or a passphrase. Additionally, set the device to lock after a short window of inactivity, install a software security app, enable remote data wiping, turn off Bluetooth and Wi-Fi when not in use, and ensure your device doesn't connect to open Wi-Fi networks automatically. It is also important that you back up your mobile device to ensure you never lose your precious photos, contacts, etc. Use antivirus software Keeping track of online threats is hard work if you do it alone. Instead, you can install antivirus software on all of your devices and leave the hassle to a paid solution. Acronis Cyber Protect Home Office lets you easily schedule data backups while detecting and stopping attacks in real time without the need for your constant attention. You can initiate antivirus scans, encrypt communication data, and secure logins intuitively through an all-in-one interface. Learn the basics: what is a scam, phishing, or a spear phishing attack? The best way to identify and stop a malicious attempt on your system is to educate yourself on what an attack looks like and what to do. Take the time to learn about scam emails and SMS messages demanding that you help a perceived friend or relative; phishing emails inviting you to share personal information with a "reputable" institution; and sophisticated attacks, such as spear phishing (designed to target specific individuals, businesses, or organizations). The primary guidelines to battle phishing attacks are as follows: · Don't open emails from strangers. · Hover over embedded links to see where they would lead you. · Inspect all incoming emails: the sender's address, grammar issues, and tone of voice. · Malicious links could also come from friends' emails if they have been compromised; make sure you read the email thoroughly before downloading any attachments or clicking on any links. Avoid using public Wi-Fi networks Carefully choosing how you use Wi-Fi networks is one of the oldest cybersecurity practices. Public Wi-Fi networks often lack network protection, so your data is easy prey for virtual predators, especially when anyone can access the network and target data in transit. If you must connect to a public Wi-Fi network, install and use virtual private network (VPN) software. A VPN will encrypt your device's traffic and effectively hide your IP address. Avoid clicking unknown links and visiting suspicious websites "Free" software on the internet is rarely truly free.
For decades, cybercriminals have been employing “free” programs and apps to trick users into downloading malware, thus infecting their devices. While there is respected software that has been free from the dawn of the internet, you should avoid clicking on suspicious links and visiting questionable sites. Even if it turns out the site belongs to a beginner web designer, it’s best to stay away from sites you can’t properly vet. Avoid using software and apps from unknown sources Like untrustworthy links and sites, any software coming from an unknown source can be malicious. Even if it promises to save you the one-time fee for actually accessing the app, disguised malware can cost you dearly. It’s best to download and install software only from reputable sources after you’ve made sure their download links are secure. Acronis Cyber Protect Home Office provides the ultimate in data protection Every data protection strategy needs a strong antivirus. Even if you are mindful of your browsing habits, a cybersecurity solution adds extra layers of defense to foil snooping third parties. Acronis Cyber Protect Home Office (formerly Acronis True Image) blocks malicious attacks in real time without human supervision. You can also scan your device for existing infections, rid your system of them, and reduce the risk of data breaches and unwanted cyberattacks in the future. Acronis Cyber Protect Home Office provides a unique integration of reliable backup and cutting-edge anti-malware technologies that safeguard data against all of today’s threats — disk failure, accidental deletion, loss and theft, as well as cybercriminal attacks. PCMag described it as “an all-encompassing tragedy prevention solution” in their “Editor’s Choice” review. With Acronis Cyber Protect Home Office, individuals and small businesses alike can back up their data — including OSs, applications, settings, files and Microsoft 365 accounts to local drives, external hard drives, NAS and the Acronis cloud. In addition, Acronis Cyber Protect Home Office stops cyberattacks — including attacks resulting from zero-day vulnerabilities — from harming both backup and device data with real-time protection, vulnerability assessment, on-demand antivirus scans, web-filtering, ransomware protection, and a cryptomining blocker. In case of a disaster, data can be easily recovered.
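To make the MFA discussion above concrete, here is a minimal Python sketch of a time-based one-time password (TOTP), the RFC 6238 algorithm behind most authenticator apps and the most common "thing you have" factor. The secret shown is a common documentation example, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, when=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if when is None else when) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The same secret enrolled in an authenticator app yields the same code,
# which is how the server verifies the "thing you have" factor.
print(totp("JBSWY3DPEHPK3PXP"))
```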
<urn:uuid:5dfd4a73-20fe-41af-893b-ae57391262c7>
CC-MAIN-2022-40
https://www.acronis.com/en-us/blog/posts/personal-cyber-protection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00231.warc.gz
en
0.894485
1,986
2.5625
3
If you don't think you are being tracked as you move around Target or Macy's or even through a local museum, you must not have a smartphone. Many companies are now using beacons, stationary devices that detect the movements of people carrying smartphones through Bluetooth or Wi-Fi transmissions, to understand the movements of consumers and build marketing campaigns based on consumer location. On September 23, 2015, the Massachusetts Bay Transportation Authority (MBTA) announced that it too will join the beacon bus. Boston's MBTA will start tracking public-transit riders using these beacons. Specifically, the MBTA will track users through Gimbal Bluetooth Smart beacons, so if you turn your Bluetooth off, you should be out of range. Nevertheless, the MBTA said that it will not be collecting or using personally identifiable data, and it will also use a "secure, closed network" to track its riders. The goal of this new tracking? The MBTA hopes to find ways to improve communications with riders and other MBTA technology. Additionally, and perhaps most concerning, it hopes to find out "how brands can increase engagement and interactions with commuters based on proximity." This is just a pilot program that the MBTA hopes to run for a year before determining its usefulness and the potential for more effective communications.
<urn:uuid:2672ccf0-569d-415a-abba-fe8bb6aa333a>
CC-MAIN-2022-40
https://www.dataprivacyandsecurityinsider.com/2015/10/bostons-mbta-joins-the-bluetooth-beacon-bus-it-will-now-track-the-movement-of-its-riders/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00431.warc.gz
en
0.944141
272
2.578125
3
U of Melbourne scientists create largest time crystal to-date (Finance.Yahoo) Scientists at the University of Melbourne have created the largest time crystal to date by programming one into a simulation in a quantum computer with 57 qubits, the quantum equivalent of binary bits in a regular computer. That represents an improvement of nearly a factor of three over the last such groundbreaking simulation, as detailed in newly published research in the peer-reviewed journal Science Advances. One of the major upsides to theoretical time crystals is that, as conceived by Wilczek, they don't use any energy as their constituent particles cycle through time. This results in a special phase of matter that appears to violate Newton's first law of motion, putting time crystals in the same category as superconductors. Superconductors are materials that, under special conditions (like supercooling close to absolute zero), lose all resistance as they pass electrons around; they are also a quantum phenomenon. In their experiments, the University of Melbourne scientists used remote computing to access an IBM quantum computer in the United States. (Think of this when anyone says you can't work remotely!) They were able to make changes to individual qubits in the system, and then use strong magnetic pulses to unify the system into one large time crystal. This work shows a lot of hope for time crystals, the researchers say, especially in terms of using time crystal systems as possible memory in quantum computers. That will take a lot more time, though, as this 57-qubit system can only "hold a charge" for up to 50 cycles before the qubits revert to more random behavior.
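A hallmark of a discrete time crystal is that the system responds at half the drive frequency, a period-doubled "tick" that here persists for roughly 50 cycles before decaying. The toy numpy sketch below is not the Melbourne team's quantum simulation; it merely fakes such a decaying period-doubled signal and shows how the subharmonic signature appears in its spectrum.

```python
import numpy as np

cycles = 200
# Magnetization that flips sign every drive cycle and decays after ~50
# cycles, mimicking the reported lifetime of the 57-qubit time crystal.
signal = np.array([(-1.0) ** n * np.exp(-n / 50.0) for n in range(cycles)])

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(cycles, d=1.0)  # in units of the drive frequency

peak = freqs[np.argmax(spectrum)]
print(f"dominant response at {peak:.2f} x drive frequency")  # ~0.50, the subharmonic
```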
<urn:uuid:872592f5-dc3a-4096-a3be-afab7ab9e332>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/u-of-melbourne-scientists-create-largest-time-crystal-to-date/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00431.warc.gz
en
0.936883
345
2.984375
3
In this tech-savvy era, everybody operates online. From individuals and businesses to non-profit organizations, many companies have their confidential and critical corporate data stored on digital platforms. That only makes them susceptible to malicious online activities from hackers who target companies' computer systems to phish data for illegal and malicious practices, such as malware, man-in-the-middle (MITM), password attacks, etc. It's high time organizations deploy identity and access management (IAM) as part of their cybersecurity strategies to better protect their data. Cyber-attacks like ransomware are not novel incidents. CNN reported a worsening situation in the cybersecurity space as the number of hackers targeting sensitive business information and data for ill motives grows. With remote working becoming a norm, significant vulnerabilities and loopholes in cybersecurity have been exposed, providing more avenues for hackers to orchestrate their attacks. The Business News Daily discovered that most companies that report these breaches have poorly secured accounts. Among the 1,000 IT leaders sampled in the study, 74% of the respondents from companies whose computer systems had been breached said that the attacks stemmed from breached privileged accounts. Work-from-home policies only mean that employees have increased access to sensitive company information, which is reason enough for businesses to maintain tighter access control. Identity and access management (IAM) allows IT professionals to have increased control of who can access confidential and valuable business data and at what time. What is Identity and Access Management (IAM)? "Identity and Access Management" is an umbrella term that describes processes, frameworks, and technologies used to manage digital identities and regulate user access to sensitive accounts and information within a company's IT infrastructure. IAM also encompasses the vetting technologies that define situations warranting user access to restricted business data. IAM Extends Cybersecurity Beyond the Office For remote workers to perform their professional duties, they need access to corporate resources. While most businesses have already deployed robust cybersecurity systems on their work premises, there has been a significant shift towards IAM because it facilitates the extension of data security from the office to personal spheres, whether at home or other remote locations. Cyberattacks are on the rise. Organizations today hire third-party cybersecurity agencies to safeguard identities and monitor employees' personal data to identify weaknesses and suspicious activities. While that's commonplace in many physical office settings, it's the first time the concept is being introduced to remote workplaces, and the idea that individual employees require cybersecurity measures to protect their digital assets is only gaining traction. What are the Benefits of IAM? There's no denying that IAM tools help companies bolster their digital security, bringing efficiency due to reduced cyberthreat-induced downtime. Beyond that, IAM offers several other benefits to companies. They include: Reduced Security Costs An IAM tool standardizes and streamlines all identification, authentication, and authorization protocols in risk management. Using a federated identity management platform means IT professionals need not rely on local identification systems for remote or external use. That makes IAM application easier and less costly.
In addition, cloud-based IAM solutions reduce the costs of investing in and maintaining in-house hardware. Improved Data Security With IAM, companies can access and integrate identification and authorization functions on a single platform. It gives IT professionals complete control of the identity management codes and enables them to establish consistency in the methods used to control user access based on the company's identity cycle. For instance, if a former employee had access to confidential business accounts, IT administrators can leverage IAM to delete that employee's accounts and block any subsequent attempts to access the company information. They can block access across all business-critical systems with a single click of a button. Least Privilege Principle The least privilege principle is a concept in cybersecurity management that narrows down access privileges to the bare minimum. Thus, employees can access only the information they require to do their jobs, nothing more. Seventy-seven percent of data breaches result from internal vulnerabilities. Therefore, companies must control access to corporate resources. When an employee switches roles and assumes a new position in another department or office, they'll have access to multiple business accounts. With time, their access privileges accumulate. The employee then becomes the target of cybercriminals who'll use them as a gateway to the broader company information. In that case, the IT department can leverage IAM to revoke their access to data accounts that don't relate to their current position. That significantly reduces exposure to cyberattacks. Enterprise IT Governance The cybersecurity space is a regulated industry. Regulations outlined in the Health Insurance Portability and Accountability Act (HIPAA) and the 2002 Sarbanes-Oxley Act (SOX) only remind us how important IAM is for companies to remain compliant with industry policies and rules. In March 2017, the New York Department of Financial Services (NYDFS) announced new rules that impacted financial institutions' operations regarding their cybersecurity strategies. Banks are now required to manage audits and monitor the activities of individuals who have access to computer systems. These are functions that an IAM solution can efficiently perform. Furthermore, the identity management platform enables a company to enforce various access policies like separation of duties (SoD). Automated IAM tools also enable IT leaders to maintain consistency in their identity management techniques, ensuring governance control and minimal access violations. How IAM Boosts Security The primary purpose of integrating IAM into your IT infrastructure is to streamline the identification, authentication, and authorization processes by assigning each authorized individual a personal digital identity and password. The role of the IAM is to maintain and track users' activities based on their access levels during their access lifecycles. IAM bolsters cybersecurity in several ways: Role-based Access Control IAM plays a critical role in improving cybersecurity by enabling IT departments to restrict access to business accounts based on a user's role in the company. Typically, each employee undertakes their job based on their job description, position, and responsibility within the company. With IAM, it's easier to offer employees access only to accounts that pertain to their roles.
Human and Device Identification Apart from providing humans with digital identities, IAM can also assign identification to devices and applications. By linking employees' digital accounts to their devices, the IT department can verify the user's identity and ensure they only have access to the corporate resources they are entitled to. As mentioned above, an IAM solution can protect a company's critical information when an employee leaves the organization. Data theft by ex-employees is not uncommon, especially when the employee parted ways with their employer on bad terms. De-provisioning access privileges manually can take a lifetime, not to mention that remembering all their login credentials to various applications is virtually impossible. Vulnerabilities can easily pop up at different facets of a company's computer network when de-provisioning manually, and these are weaknesses that cyberattackers are looking for. With an automated de-provisioning process, you can block an ex-employee's attempts to access their previous accounts and easily manage employees' identities as they switch roles within the company. How Cyber Sainik Can Help with IAM If you're questioning the security of your IT infrastructure, it's time you trust your gut that there's a vulnerability. You can improve your IT network's security by partnering with Cyber Sainik. We are a client-focused Managed Security Services Provider (MSSP). We offer a range of services to help you improve your cybersecurity practices. These services include: - Identity management - Intrusion detection and prevention - Vulnerability protection - Device security - Security information and event management (SIEM) With Cyber Sainik, you have access to the tools and resources to implement IAM and multi-level authentication protocols that help you create robust security policies. Additionally, our IAM solutions are designed to ensure that only the right user can access corporate accounts at the right time, thus protecting your data against external and internal security breaches. IAM implementation is not optional; it's imperative. If you are ready to integrate IAM into your cybersecurity systems, contact us today to get IAM services that strongly protect your business data.
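As a minimal illustration of the role-based access control, role switching, and de-provisioning ideas described above, consider the Python sketch below. Role names, permissions, and the class API are invented for the example; production IAM platforms add authentication, auditing, and directory integration on top of this core logic.

```python
class IdentityStore:
    ROLE_PERMISSIONS = {
        "accountant": {"read:ledger", "write:ledger"},
        "hr":         {"read:personnel"},
        "it_admin":   {"read:ledger", "read:personnel", "manage:accounts"},
    }

    def __init__(self):
        self.user_roles = {}  # user -> set of roles

    def assign_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def switch_role(self, user, new_role):
        # Least privilege: old entitlements are dropped, not accumulated.
        self.user_roles[user] = {new_role}

    def deprovision(self, user):
        # One-click revocation across all modeled systems when someone leaves.
        self.user_roles.pop(user, None)

    def can(self, user, permission):
        return any(permission in self.ROLE_PERMISSIONS.get(role, set())
                   for role in self.user_roles.get(user, set()))

iam = IdentityStore()
iam.assign_role("alice", "accountant")
print(iam.can("alice", "write:ledger"))    # True
iam.switch_role("alice", "hr")
print(iam.can("alice", "write:ledger"))    # False: privileges did not accumulate
iam.deprovision("alice")
print(iam.can("alice", "read:personnel"))  # False: access fully revoked
```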
<urn:uuid:91d488ce-6353-4de5-ac5b-267cab6e5923>
CC-MAIN-2022-40
https://cybersainik.com/the-role-of-identity-access-management-iam-in-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00431.warc.gz
en
0.930636
1,744
2.6875
3
This article is contributed. See the original author and article here. Mithun Prasad, PhD, Senior Data Scientist at Microsoft "I don't have enough relevant data for my project!" Nearly every data scientist has uttered this sentence at least once. When developing robust machine learning models, we typically require a large amount of high-quality data. Obtaining such data, and especially labelled or annotated data, can be time-consuming, tedious and expensive if we have to rely on experts. Hence, there is a compelling need to generate data for modelling in an automated or semi-automated way. Specifically, in this work, we explore how we can utilize OpenAI's Generative Pre-trained Transformer 3 (GPT-3) for generating data to build models for identifying how credible news articles are. GPT-3 is a language model that leverages deep learning to generate human-like text. GPT-3 was introduced by OpenAI in May 2020 as a successor to their previous language model (LM) GPT-2, and it is considered a substantial improvement over its predecessor. In fact, with around 175 billion trainable parameters, OpenAI GPT-3's full version is one of the largest models trained so far. Fake News Generation In this blog post, we discuss the collaborative work between Microsoft's ACE team and the Dataclair AI lab of O2 Czech Republic, where the goal is to identify fake news. Fake news is defined as a made-up story with an intention to deceive or to mislead. The general motive to spread such news is to mislead the readers, damage the reputation of any entity, or gain from sensationalism. The creation of a dataset for identifying credible news requires skilled annotators; moreover, comparing proposed news articles with the original articles is itself a daunting task, as it's highly subjective and opinionated. This is where the recent advances in natural language modelling and text generation capabilities can come to the rescue. We explore how new language models such as GPT-3 can help by generating new data. We generate fake news data using GPT-3 by providing prompts that contain a few sample fake news articles in the Czech language. Doing something like this would have been unthinkable a few years ago, but the massive advancement of text generation through language models opened doors to such experiments. As the research paper describing GPT-3 shows, GPT-3 is very good at generating news articles of high quality that even humans are not capable of detecting as computer-generated. The accompanying plot shows how text-generating models improved via having access to more parameters; GPT-3 is the furthest to the right, and the plot conveys how accurately people were able to recognize generated articles from those written by humans. "Prompts" are a way to get the model to produce text by specifying an instruction in natural language and showing some demonstrations of how to follow the instructions well. GPT-3 has an incredible capacity to mimic writing styles. When the prompt is set up correctly, GPT-3 adheres to the example just enough to copy those underlying elements (for example, it includes or excludes citations) and introduce a new twist to the generated text. It is even capable of creating its own complex arguments. Thus, it is not just a replication of pre-existing data, but a creation of new and original articles from which the model can learn. An example of a prompt and the parameters used to generate fake news follows (in the original post, the prompt appears in bold and the generated text in italics).
Generate a news article based on the headline and with the same style of writing as given in the example. Headline: Where do leftist extremists get the audacity to speak for the nation? My fellow Czechs, we must shake off the shame that the USA, Brussels and other countries have been forced on us with the help of our own "experts" and "journalists". The same people who are now digging into our nation with the help of a genuine community of the USA and Brussels – the Pekarová and other forces… Temperature: 0.7, Max tokens: 1000, Top p: 1, Frequency penalty: 0, Presence penalty: 0 Of these parameters, the most important are temperature and max tokens. Temperature controls randomness in the text. Therefore, a temperature of 0.7 was chosen to produce less deterministic results that still follow the structure and writing style of the example. The max token value was set to 1000 tokens (~4,000 characters) because this is the average length of a news article. It should be noted that when working with GPT-3, the process of finding the right parameters is about experimentation. Admittedly, there are still challenges to deal with. One of them is the need to manually inspect whether GPT-3 returns articles that are relevant and in the right credibility category. Due to the sensitivity of the topic of article credibility, data quality checks will need to be implemented. Another minor limitation is that while GPT-3 understands many articles that it has been trained on, it has problems when analysing newer topics. For example, it is unable to fully grasp the significance of COVID-19 and it usually avoids writing about it due to not having enough knowledge about the global pandemic. Thus, it generates less realistic articles when faced with such a topic. Nevertheless, if those obstacles are kept in mind, GPT-3 can help make dataset creation faster and more reliable. This is something that the O2 CZ team plans to utilize for their disinformation recognition AI model. "Our model extracts different features (aggressivity, clickbait etc.) from the article via smaller extraction modules. Those features are then evaluated by the deep learning classification module and subsequently transformed into one number by the ensemble method. For the system to work, we need as many articles as possible for training the classification module, which we hope to obtain with the help of GPT-3," said Filip Trhlik, a Data Scientist at the Dataclair AI lab. [Figure: Disinformation recognition AI model diagram] In conclusion, artificially generating new data is a very exciting use case of language models, and even though the generated data requires a small amount of manual inspection, it is very beneficial for downstream modelling tasks. The ability to generate a large amount of synthetic data in a short time is very promising. Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
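For readers who want to reproduce the setup, the hedged sketch below shows how such a generation call looked with the completion-style OpenAI Python library of the period (the library's interface has since changed). The engine name and the abbreviated prompt are illustrative assumptions; the sampling parameters are the ones quoted above.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, never hard-code real keys

# Abbreviated version of the prompt shown above; the real prompt included
# full sample articles to anchor the writing style.
prompt = (
    "Generate a news article based on the headline and with the same "
    "style of writing as given in the example.\n"
    "Headline: Where do leftist extremists get the audacity to speak "
    "for the nation?\n..."
)

response = openai.Completion.create(
    engine="davinci",    # assumed engine name, for illustration only
    prompt=prompt,
    temperature=0.7,     # less deterministic, still on-style
    max_tokens=1000,     # ~4,000 characters, an average article length
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)
print(response.choices[0].text)
```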
<urn:uuid:a600ce21-4f53-4fe5-b75d-5a675332d656>
CC-MAIN-2022-40
https://www.drware.com/not-enough-data-gpt-3-to-the-rescue-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00431.warc.gz
en
0.932656
1,368
2.921875
3
Despite the popularity of cryptocurrency, blockchain is not only about cryptocurrencies. It is being explored for app development and deployment. The apps are known as decentralized apps (dApps). dApps are being used in different industries, including healthcare, music, real estate, gambling, and finance. As of the end of 2021, there were about 3630 dApps, with only 1820 active. This shows that the industry is in its infancy with plenty of room to grow. With an expected growth of $6.25 billion between 2021 and 2025, it is no surprise that entrepreneurs want to get in on the dApp party. You have come to the right place to get started on how to develop a blockchain application. What is Blockchain? Blockchain is a network based on trust that decentralizes data transfer and storage. Everyone on the network has visibility and a record of transactions and data exchanges. It is a digital ledger of transactions that cannot be changed. Any member can add a new block to the chain once everyone has authorized and validated it, making it difficult to hack. Blockchain can be divided into four types: public, private, hybrid, and consortium blockchains. A public blockchain is also known as a permissionless blockchain. All network members have access to it from any device. The members can perform and verify transactions, interact with other members and access the code while remaining anonymous. An example of a public blockchain is Bitcoin. It is great for entrepreneurs creating a dApp for a wide audience because it allows any user to join. A private blockchain, also known as a permissioned blockchain, offers limited access only to authorized members. There are rules governing how transactions occur between members. A permissioned blockchain is most appropriate for internal organizational operations. In some cases, you do not need to choose one or the other. With a hybrid blockchain, you can have the best of both worlds from a public and private blockchain. Some data on the blockchain is limited to specific network members, while other data is open to all network members. Finally, the consortium blockchain has similar characteristics to a private blockchain. Rather than work within one organization, it works across different organizations with two types of nodes: validator and member nodes. The validator node initiates, receives, and validates transactions. The member nodes cannot validate transactions, but they can initiate and receive them. How to develop and deploy a dApp Once you have a basic understanding of what the blockchain is and the different types, you are ready to learn how to develop a blockchain application. There are a few steps to follow that will guide you through creating your modern web 3.0 blockchain app. To Blockchain or not to Blockchain? Blockchain is not one-size-fits-all. It is not for every business. You need to know whether developing a blockchain app will be best suited to your business or operations. How will you know? If your business stores large amounts of data, needs to track and monitor transactions, and values security over the speed of transactions, blockchain may be the best way to go. Since blockchain is based on trust and creates transparency for all members, you can be sure all your data and transactions will be safely stored and easily visible. What do I get out of Blockchain? You need to understand the benefits of blockchain before taking the leap. Some of the advantages you can expect include security, traceability, privacy, transparency, speed, and integrity.
It would be best to narrow down what industry you want your dApp for. Several dApps have been developed for use in different industries, including finance, insurance, retail, healthcare, and real estate. You can investigate some notable companies already using dApps. These include OpenBazaar and Expedia in retail, and HealthVerity and Hash Health in healthcare. Narrow Down Your Idea You chose an industry, but what problem will you be solving? To stand out from the crowd, your blockchain app should solve a specific industry problem. Your competitors are a great starting point for discovering what is missing in the blockchain app space. By understanding what they offer, you can either do it better or add features they do not provide. Appropriate Blockchain Development Tech Stack Without the right tools, you cannot build the blockchain app you want. It would help if you chose the correct tools for your specific application, with tools for each of these layers: - Services and optional components Some of the most popular blockchain app development platforms include Ethereum, EOS, and OpenChain. Developing the Blockchain App Building your blockchain application will happen in two stages: the discovery stage and the development stage. The discovery stage makes sure all your project requirements are clear, and the development stage is where the code base of the dApp is created. The discovery stage includes business analysis, software architecture planning, and team review. The development stage includes development, quality assurance, testing, and project management. Deployment and Release Once the app is developed, you should release it to the market. If it's a mobile app, you will submit it to the Google Play Store and the Apple App Store. Over time, the app will need maintenance, which means you will need to provide support to iron out bugs and update features. Getting the Word Out Marketing will help target customers get to know about the app and maintain their interest once they are onboarded. You can use email marketing, social media marketing, and influencer marketing to spread beneficial information about your blockchain app. Blockchain has great opportunities for different industries in app development. Whether it's healthcare, retail, or finance, there is something in the blockchain for you. With the correct information on building a blockchain application, such as the right platforms and marketing tools, you can take advantage of an industry that is still budding.
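To ground the "digital ledger of transactions that cannot be changed" idea from the overview above, here is a minimal Python sketch of an append-only, tamper-evident chain of hashed blocks. It is a teaching toy under obvious simplifications: real blockchains add consensus among members, digital signatures, and peer-to-peer networking.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash covers its payload and its predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and link; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"genesis": True}, prev_hash="0" * 64)]
chain.append(make_block({"from": "alice", "to": "bob", "amount": 5}, chain[-1]["hash"]))
print(verify(chain))               # True
chain[1]["data"]["amount"] = 500   # tampering with a recorded transaction
print(verify(chain))               # False: the ledger is tamper-evident
```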
<urn:uuid:922f7977-f688-4766-af2e-6d4c0bae08cf>
CC-MAIN-2022-40
https://www.404techsupport.com/2022/03/24/build-and-deploy-a-modern-web-3-0-blockchain-app-in-2022/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00431.warc.gz
en
0.941629
1,239
2.8125
3
Intel plans to use novel materials and manufacturing processes in order to continue to develop smaller and faster computer chips once current technologies no longer do the trick. The leading chipmaker, which has been stung recently by some near-term product retreats and operational hiccups, laid out its research and development priorities for the next two decades. Intel told analysts and reporters that it would look to nanotechnology to drive the development of substantially smaller chips in the future. The company said it was already starting to explore options such as carbon nanotubes and nanowires. Such materials will be necessary to take chips below the 10 nanometer size, which is expected to occur early in the next decade. Intel now makes many of its chips in a 90-nanometer process and believes current materials will enable a reduction to 10 nanometers by 2011. With processors currently shrinking every two years, materials that can be manipulated in atom-sized portions will be necessary before 2020, Intel said. The chipmaker is also looking beyond the nanotechnology phase to completely different configurations for chips that use techniques such as magnetic “spintronics” instead of electronics. The company said it was working with several universities and private researchers who are tackling nanotechnology issues. Still R&D King The glimpse into the future came at a time when Intel no doubt wanted to move the focus away from some short-term stumbles toward the big picture, in which Intel remains both the dominant market leader and the leading force in research and development. Intel spent US$4.3 billion on R&D in 2003, placing it among the top five companies globally in terms of investment in new product research. Intel recently announced it was shelving a plan to bring its fastest-yet Pentium 4 chip to market and had pulled engineers off the project, which some saw as an admission that some technological limits — in this case, the ability to make chips faster without overheating them — had been hit. However, Intel spokesman Chuck Molloy said the R&D event at the company’s headquarters had been planned for some time and that Intel occasionally invites interested parties to hear the company’s longer-range vision for the future. Other tech companies have set their sights on similar goals. Earlier this year, IBM announced it would partner with Stanford University in an effort to develop spintronics technologies, which some say could completely transform processors and almost all electronics devices as a result. Some analysts say that Intel is comfortable being on the edge of technology, which often means acknowledging when projects flop. In addition to the Pentium 4 debacle, Intel announced late last week that it was killing a plan to develop and market a chip for large-screen televisions that was due out by the end of the year. Intel had hoped to make that chip, code-named Cayley, part of its push into consumer electronics and said it had the potential to dramatically reduce the price of large projection-style television sets. “They’re moving forward on enough fronts that if something doesn’t work out, they can shift resources elsewhere,” Gartner analyst Martin Reynolds said. In the long run, he added, the entire tech industry benefits, whether Intel’s efforts succeed or not. “The important thing is that Intel keeps developing, even if it means trying and failing from time to time,” Reynolds said.
<urn:uuid:69b0eb59-4c1a-419c-a028-a20a600a5013>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/intel-lays-out-future-product-roadmap-37592.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00431.warc.gz
en
0.968748
703
2.9375
3
Explainer: Keystroke recognition Keystroke recognition has been defined by both industry and academics as the process of measuring and assessing a typing rhythm on digital devices, including computer keyboards, mobile phones, and touch screen panels. Often called "keystroke dynamics," keystroke recognition refers to the detailed timing information that describes exactly when each key was pressed on a digital device and when it was released as a person types. Though biometrics tend to rely on physical traits like fingerprints and faces, or on behavioral characteristics, many consider keystroke dynamics a biometric. Biometrics Research Group, Inc. defines biometrics as measurable physical and behavioral properties that make it possible to authenticate an individual person's identity. Biometrics are therefore used as a collective term for the technologies used to measure a person's unique characteristics and thus authenticate identity. Keystroke dynamics uses a unique biometric template to identify individuals based on typing pattern, rhythm and speed. The raw measurements used for keystroke dynamics are known as "dwell time" and "flight time". Dwell time is the duration that a key is pressed, while flight time is the duration between keystrokes. Keystroke dynamics can therefore be described as a software-based approach that measures both dwell and flight time to authenticate identity. In 2004, researchers at MIT looked at the idea of authentication through keystroke biometrics and identified a few major advantages and disadvantages to the use of this biometric for authentication. Firstly, the researchers concluded that measuring keystroke dynamics is an accessible and unobtrusive biometric as it requires very little hardware besides a keyboard, making it easily deployable for use in enterprises, such as for workstation log-ins and other access security points, at relatively low cost. Secondly, as each keystroke is captured entirely by the key pressed and the press time, data can be transmitted over low bandwidth connections. Other research studies on keystroke biometrics point out other benefits, including its ability to seamlessly integrate with existing work environments and security systems with minimal alterations and no additional hardware, along with its non-invasive nature and scalability. That being said, MIT researchers also identified disadvantages to the use of keystroke dynamics as an authentication tool. Firstly, typing patterns can be erratic and inconsistent, as something like cramped muscles or sweaty hands can change a person's typing pattern significantly. Also, MIT found that typing patterns vary based on the type of keyboard being used, which could significantly complicate verification. Despite these concerns, other studies note that the general preference for biometrics in multi-factor authentication, coupled with high awareness levels, will support continued growth in the keystroke dynamics market, while the technology's attributes and low prices are expected to drive its use in a range of end-use applications.
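The two raw measurements defined above are easy to compute from timed key events. The illustrative Python sketch below derives dwell and flight times from a short, invented sequence of (key, press, release) timestamps in milliseconds; a real system would build a statistical template from many such samples and compare new typing against a stored profile at login.

```python
events = [  # (key, press_time_ms, release_time_ms), invented for illustration
    ("p", 0,   95),
    ("a", 140, 230),
    ("s", 260, 340),
    ("s", 395, 470),
]

# Dwell time: how long each key was held down (press to release).
dwell_times = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flight_times = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

print("dwell (ms): ", dwell_times)
print("flight (ms):", flight_times)

# A simple template could store the mean and spread of these timings;
# at login, a large deviation from the profile would flag the attempt.
```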
Its severity is rated "low," but patches are out for the second flaw in DNSSEC to be discovered in three months. DNS experts say some exploits are to be expected as the transition continues.

DNS Security Extensions (DNSSEC) is supposed to be the technology that helps secure the Domain Name System, or DNS, against attack. Yet DNSSEC servers aren't infallible, as a pair of vulnerabilities proved this week.

While it's critical to the operation of the Internet as a whole, DNS came under intense scrutiny in 2008 after security researcher Dan Kaminsky disclosed that it was at risk from a widespread vulnerability. Developing a long-term solution to DNS security problems is what the creation of DNSSEC is all about. Yet this week, researchers identified DNSSEC itself as being at risk. Specifically, the widely deployed BIND DNS server's DNSSEC implementation was found to contain a DNSSEC-validation vulnerability.

The ISC (Internet Systems Consortium), the lead group behind the development of BIND, has now issued patches for the affected BIND servers.

Read "DNSSEC Compromised Again?" at eSecurity Planet
In the current cybersecurity threat landscape, most botnets propagate via exploits and file-based malware. Anything that touches the disk can be blocked via access controls on the host. However, new malware techniques utilize more than just binaries to execute malicious code, creating the need for execution control. Kurtis Armour, an Information Security Consultant at cyber security company eSentire, commented below.

Kurtis Armour, Information Security Consultant at eSentire:

"One of the most effective ways to protect against compromise is by limiting what someone has the ability to do when they get onto a machine. Consider enforcing these practices within your organization:
- Don't allow general users to log in with the local administrator account.
- Limit what software is allowed on the system.
- Disallow the auto-execution of scripts to limit the potential attack surface.
- Change the default execution of specific applications to stop malicious files from being executed when a user double-clicks on them.
- Reduce the risk that comes with embedded VBA macros within Microsoft Office documents.
- Enforce application whitelisting rules to ensure only approved applications are executed."
The terms 'diagram' and 'model' are often used interchangeably, yet there is actually an important difference between them. It can be useful to reflect on these differences when undertaking process analysis, management or improvement - and it can be particularly important when utilizing the BPMN approach to modelling.

BPMN provides a number of ways to show automation and the use of IT. This includes the ability to denote different types of task (such as 'manual tasks' that are entirely manual, 'user tasks' where IT is used, and 'service tasks' that are automated and call on some type of service). In a BPMN process diagram, an activity is represented with a rounded-corner rectangle and named according to the kind of work that has to be done. BPMN process diagrams are the most common kind, and are depicted as a graph of flow elements - Activities, Events, Gateways, and Sequence Flows - that define finite execution semantics.

One option for improving a process is to introduce elements of automation. Yet automation is rarely a 'silver bullet', and in order to be effective it'll be necessary to truly understand the process. In this article, Adrian Reed examines how BPMN can be used to drive process automation.

BPMN is a rich and detailed standard which enables us to visualize process models using a standard notation. Yet BPMN's richness can cause challenges when working with a broad range of stakeholders. In this article, Adrian Reed discusses this challenge, and suggests that if we keep our stakeholders front-and-center of our minds, we can keep them on board and create models that they will find useful and valuable.

Read this BPMN blog to better understand what the roles of the 'Functional' and 'Process' approaches are in business process modelling.

This is the concluding part of our 'Managing Business Processes with BPMN: SWOT Analysis'. Here we take a look at the Opportunities and Threats of Managing Business Processes with BPMN.

This is part 1 of 'Managing Business Processes with BPMN: SWOT Analysis', where we take a look at the Strengths and Weaknesses of Managing Business Processes with BPMN. Next week we'll take a look at the Opportunities and Threats.

We understand the term "asset" as any item of economic value owned by an individual or organization, especially one that could be converted into money. In this blog we take a look at how, in BPMN, business processes can be considered organizational assets too.
BBC's 'Monty Python', a comedy series that aired during the late 1960s, was a huge hit. The Python programming language, released in the early 1990s, turned out to be a huge hit too in the software fraternity. Reasons for the hit run into a long list — be it the dynamic typing, cross-platform portability, enforced readability of code, or a faster development turnaround.

Python was conceived by a Dutch programmer, Guido van Rossum, who invented it during his Christmas holidays. The ascent of the language has been observed since 2014, owing to its popularity in the data science and AI domains. See the Google Trends report in Exhibit 1. No wonder Python has risen to the 3rd position in the latest TIOBE programming index.

Exhibit 1: Google Trends Report

Exhibit 2: TIOBE index

Python is being used by a surprisingly wide array of domains and industries. The power of Python is exploited in the development of popular web applications like YouTube, Dropbox and BitTorrent. NASA has used it in space shuttle mission design and in the discovery of the Higgs boson, or 'God particle'. The top security agency NSA used it for cryptography, thanks to its rich set of modules. It has also been used by entertainment giants like Disney and Sony DreamWorks to develop games and movies. Now that data is becoming 'BIG', programmers are resorting to Python for web scraping and sentiment analysis. Think of Big Data, and the first technology that comes to a programmer's mind for processing it (ETL and data mining) is Python.

Learning Python is quite fun. Thanks to an innovative project called Jupyter, even a person who is just getting his feet wet in programming can quickly learn the concepts. Possessing the features of both scripting languages like Tcl, Perl and Scheme and systems programming languages like C++, C and Java, Python is easy to run and code. Show a Java program and a Python script to a novice programmer; he will definitely find the Python code more readable. It is a language that enforces indentation, which is why no Python code looks 'ugly'. The source code is first converted to platform-independent bytecode, making Python a cross-platform language. You don't need a separate compile-then-run step, unlike C and C++, thus making the life of software developers easier.

Let's draw a comparison between Python and C++. The former is an interpreted language while the latter is a compiled one. C++ follows a two-stage execution model, while Python scripts bypass the explicit compilation stage. In C++, you use a compiler that converts your source code into machine code and produces an executable. The executable is a separate file that can then be run as a stand-alone program. This process outputs actual machine instructions for the specific processor and operating system it's built for. As shown in Exhibit 4, you'd have to recompile your program separately for Windows, Mac, and Linux, and you'll likely need to modify your C++ code to run on those different systems as well.

Python, on the other hand, uses a different process. Remember that you'll be looking at CPython, written in C, which is the standard implementation of the language. Unless you're doing something special, this is the Python you're running. CPython is faster than Jython (the Java implementation of Python) or IronPython (the .NET implementation). Python runs a compilation step each time you execute your program: it compiles your source just as the C++ compiler does. The difference is that Python compiles to bytecode instead of native machine code.
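You can watch this compilation step for yourself with the standard library's dis module, which disassembles a function into the instructions the Python virtual machine executes. A minimal sketch (the exact opcode names vary between CPython versions):

```python
import dis

def add(a, b):
    return a + b

# By the time this line runs, CPython has already compiled the body
# of add() to bytecode; dis prints those virtual-machine instructions.
dis.dis(add)
# Typical output (details differ across CPython versions):
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_OP    + (BINARY_ADD on older versions)
#   RETURN_VALUE
```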
Bytecode is the native instruction code for the Python virtual machine. To speed up subsequent runs of your program, Python stores the bytecode in .pyc files. If you're using Python 2, you'll find these files next to the .py files; for Python 3, you'll find them in a __pycache__ directory. Python 2 and 3 are the two major releases of Python, and 2.x will be obsolete by the year 2020. Python 3 is the preferred version among the development fraternity, thanks to its advanced features and optimized functionalities. The latest Python version is 3.7.

The generated bytecode doesn't run natively on your processor. Instead, it's run by the Python virtual machine. This is similar to the Java virtual machine or the .NET Common Language Runtime. The initial run of your code results in a compilation step. Then the bytecode is interpreted to run on your specific hardware. If the program hasn't been changed, each subsequent run skips the compilation step and interprets the previously compiled bytecode.

Interpreting code is going to be slower than running native code directly on the hardware. So why does Python work that way? Well, interpreting the code in a virtual machine means that only the virtual machine needs to be compiled for a specific operating system on a specific processor. All the Python code it runs will run on any machine that has Python.

Another feature of this cross-platform support is that Python's extensive standard library is written to work on all operating systems. Using pathlib (a Python module), for example, will manage path separators for you whether you're on Windows, Mac, or Linux (see the short pathlib sketch below). The developers of those libraries spent a lot of time making them portable, so you don't need to worry about it in your Python program!

Python's philosophy is "everything in Python is an object", just as in Linux "everything is a file". By designing the core data types as objects, one can leverage the power of an object's attributes for solving problems. Every object and every data type has a unique set of attributes.

Python can interact with all databases, including SQL databases such as Sybase, Oracle and MySQL, and NoSQL databases such as MongoDB and CouchDB. In fact, the 'dictionary' data structure that Python supports is ideal for interacting with a NoSQL database such as MongoDB, which processes documents as key-value pairs. Web frameworks written in Python, such as Flask and Django, facilitate faster web application building and deployment. Python is also employed to process unstructured data or 'Big Data' and for business analytics; notable mentions are web scraping/sentiment analysis, data science and text mining. It is also used alongside the R language in statistical modeling, given the nice visualization libraries it supports, such as Seaborn, Bokeh, and Pygal. If you're used to working with Excel, learn how to get the most out of Python's higher-level data structures to enable super-efficient data manipulation and analysis.

Python is also a glue language, facilitating component integration with many other languages. Integrating Python with C++ or .NET is possible through the middleman NumPy. NumPy, one of the PyPI modules, acts as a bridge between other languages and Python. PyPI is a growing repository of two hundred thousand modules, so any developer should check out PyPI before venturing out to write their own code. There are also active Python communities available to clarify our queries.
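As a small illustration of the portability point above, here is a hedged sketch using pathlib; the application directory and file name are made up for the example:

```python
from pathlib import Path

# Build a path without hard-coding "/" or "\" separators;
# pathlib picks the right one for the host operating system.
config_dir = Path.home() / ".myapp"           # hypothetical app directory
config_file = config_dir / "settings.json"    # hypothetical file name

config_dir.mkdir(exist_ok=True)
config_file.write_text('{"theme": "dark"}')
print(config_file.read_text())  # {"theme": "dark"} on any platform
```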
Companies of all sizes and in all areas — from the biggest investment banks to the smallest social/mobile web app startups — are using Python to run their business and manage their data, especially because of its OSI-approved open source license and the fact that it can be used for free. Python is not an option anymore but rather a de facto standard for programmers & data scientists.
With the holiday shopping season upon us, the FBI is warning consumers to be on the lookout for cyber scams and phishing attacks. Why such concern? According to research, phishing remains a popular and surprisingly effective attack method — in fact, 23 percent of recipients open phishing messages and 11 percent click on attachments.

Unfortunately, phishing campaigns come in many different shapes and sizes. While some are obvious and indiscriminate, luring only the most susceptible of victims (like that long-lost uncle who just needs your routing number to give you $100,000), others are more poised and targeted, interested only in those with big bank accounts or holders of confidential company documents. In this slideshow, Jon French, security analyst at AppRiver, breaks down what consumers and organizations need to know about phishing scams in order to protect themselves and their networks, this holiday season and beyond.

Phishing Scams 101

Click through for a closer look at phishing scams and how consumers and organizations can better protect themselves and their networks, as identified by Jon French, security analyst, AppRiver.

What Is a Phishing Attack?

A phishing attack is when an outside attacker attempts to gain information from someone by claiming to be something else. A classic example is an attacker sending an email claiming to be from your bank, linking to a spoofed website that asks for personal details. Sometimes this is obvious, with a poorly made website or typos everywhere, but other times it can be almost impossible to tell by just looking at the page. It's important to keep an eye on which website you are actually at and what information it is asking for.

Different Phishing Tactics

A number of different phishing tactics are designed to steal your information or get into your network. Spear phishing is one tactic that targets specific individuals, companies and organizations to gather personal information. Clone phishing is another sneaky tactic that replaces legitimate, previously delivered email content with malicious content and attachments. Cybercrooks often get away with it by claiming that they are sending an updated version of the previous email. Another example is whaling. Just what it sounds like, whaling is when phishers are after the "big phish." Common examples include a subpoena for fraud delivered to a CFO or a customer complaint sent to the director of customer service.

Phishing Signs – Grammatical Errors

Grammatical errors should always give you pause. While copywriters and editors may make the occasional typo in their emails, companies that phishers try to imitate, like Amazon and MasterCard, can afford to hire editors who catch those mistakes.

Phishing Signs – Design Changes

Emails that are formatted differently than normal are also warning signs. It's one thing for a website or logo to get a facelift, but it's another for a company that would normally have purchase information in the body of the email to put it in a .zip attachment. Additionally, if taken to a website, certain nuances of the site, like images not loading and boxes not lining up, should raise red flags. And while a website may look similar to what you normally see, it's a good habit to look at the address in the address bar and make sure you are at the correct website.
Phishing Signs – Asking for Personal Info

Your credit card company knows your full account number, complete with the exact spelling of your name as it appears on the card, the security code, the billing address and the expiration date. It will never ask you for all of that information. Depending on the scope, it would typically ask for one or two pieces of identifiable information and a security question for verification. And when in doubt, you can always call the company in question and speak to a representative, who will be able to tell you whether the email is legitimate.

Prevention & Protection: Be a Skeptic

Tip 1: Be a skeptic. As a user, always keep a healthy level of skepticism when reading unsolicited email — particularly if you're seeing some type of too-good-to-be-true holiday shopping deal. Never click on its links or attachments unless it's from a trusted source.

Prevention & Protection: Stay Up to Date

Tip 2: Stay up to date. This certainly isn't the first time you've heard this, but it's a good reminder to update your software. Hackers often leverage vulnerabilities in outdated software. That's why web browsers and third-party software must be kept up to date. IT staff should always ensure this best practice is front and center with employees.

Prevention & Protection: Adopt a Layered Security Approach

Tip 3: Adopt a layered security approach. While it's great to familiarize yourself with the latest trends in IT security, the easiest way to prevent a phishing attempt on your network is to adopt a layered security approach. Although there is no "silver bullet" to prevent malware attempts like phishing, a combination of email filtering and web protection solutions can work together to block malware from gaining access to your network.

Prevention & Protection: Reward Honesty

Tip 4: Reward honesty and communication. Once a company's perimeter has been breached, reaction time plays a critical role in mitigating the damage. Employees should not be afraid of facing repercussions if they've fallen victim to an attack. Instead, they should be encouraged to inform their IT department straight away.

A Final Holiday Tip

Throughout the holiday season, a lot of money changes hands, both physical and virtual. This is a prime time of year for phishing attacks to take place and for questionable websites to run off with money. With users searching for great deals and in a money-spending spirit, it is easier for them to fall victim to an attack. So keep an eye out for great deals, but stay alert to what information you may be giving out and to whom you're giving it.
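One of the habits recommended above, checking that the address you see is the address you'll actually visit, can be automated. Below is a minimal, hedged sketch that compares a link's visible text with the domain its URL actually points to; the function name and sample links are invented for illustration, not drawn from any product:

```python
from urllib.parse import urlparse

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a different domain
    than the one the link actually points to (a common phishing tell)."""
    # Displayed text often lacks a scheme, so add one for parsing.
    shown = urlparse(display_text if "//" in display_text
                     else "https://" + display_text).hostname
    actual = urlparse(href).hostname
    if shown is None or actual is None:
        return True  # can't verify either side: treat as suspicious
    return not (actual == shown or actual.endswith("." + shown))

# A classic phish: the text names a bank, the link goes elsewhere.
print(looks_suspicious("www.mybank.com", "https://mybank.example.ru/login"))  # True
print(looks_suspicious("www.mybank.com", "https://www.mybank.com/login"))     # False
```

Real mail filters do far more (reputation lists, lookalike-character checks, attachment scanning), but the mismatch test above is the core idea behind "check the address bar."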
This is a collected IPv6 tutorial, formed from IPv6 posts we have written, and should be a good starting point for anyone who is just learning about IPv6.

IPv6 was first formally described in Internet standard document RFC 2460. IPv6 offers more addresses and implements features not present in IPv4. It simplifies aspects of address assignment (stateless address autoconfiguration), network renumbering, and router announcements when changing network connectivity providers. It simplifies the processing of packets in routers by placing the responsibility for packet fragmentation in the end points. IPv6 is also called IPng, or IP next generation; it builds on the best characteristics of IPv4 instead of building something from scratch. In other words, we are sticking with what we know works and making some required improvements, instead of building the whole thing over.

IPv4 addresses are represented in the familiar dotted decimal format by converting each 8-bit block to its decimal equivalent and separating the blocks by dots. Using the same dotted decimal representation for IPv6 is not very practical, especially for humans. To make the IPv6 address representation shorter, it was decided to use a hexadecimal representation. I will have to admit that it improves the situation a little bit; however, IPv6 representation is still ugly, at least in my point of view.

Global IPv6 unicast addresses are equivalent to the public IPv4 addresses, which means they are routable and reachable over the internet. Global unicast addresses are identified by the binary prefix 001. The global unicast addressing was designed to fit the hierarchical scheme of the internet, and the following format is common for that reason.
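To make the representation discussion concrete, here is a small sketch using Python's standard ipaddress module; the address is a documentation-style example, not a real host:

```python
import ipaddress

# An IPv6 address is 128 bits, written as eight groups of four hex
# digits. The shorthand rules allow dropping leading zeros in a group
# and collapsing one run of all-zero groups into "::".
addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")

print(addr.exploded)    # full form:  2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.compressed)  # shorthand:  2001:db8::1
print(addr.version)     # 6
```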
If you saw the first reports about Facebook's artificial intelligence chatbots, you might believe that the robot revolution was about to overthrow human civilization. The reports said that the bots were talking among themselves using a language that humans could not understand. The word was that Facebook's bots had slipped their leashes and were taking over.

Well, not exactly. While it is true that some chatbots created for AI experiments on automated negotiation had developed their own language, this wasn't a surprise. In fact, it wasn't even the first time that such a thing had happened. The fact that it might happen was explained in a blog entry on the Facebook Code pages.

The blog discussed how researchers were teaching an AI program to negotiate by having two AI agents, one named Bob and the other Alice, negotiate with each other to divide a set of objects, which consisted of hats, books and balls. Each AI agent was assigned a value for each item, with the values not known to the other bot. Then the chatbots were allowed to talk to each other to divide up the objects. The goal of the negotiation was for each chatbot to accumulate the most points.

While the bots started out talking to each other in English, that quickly changed to a series of words that carried meaning for the bots, but not for the humans doing the research. Here's a typical exchange between the bots, using English words but with different meaning:

Bob: "I can i i everything else."

Alice responds: "Balls have zero to me to me to me to me to me to me to me to me to,"

The conversation continues with variations on the number of times Bob said "i" and the number of times Alice said "to me."

The AI language emerged during a part of Facebook's research where the AI agents practiced their negotiation skills with each other. There, the agents work on improving their skills by chatting with other agents. The researchers initially worked to have the agents simulate being human, specifically to avoid problems such as language creation.

"During reinforcement learning, the agent attempts to improve its parameters from conversations with another agent. While the other agent could be a human, FAIR (Facebook AI Research) used a fixed supervised model that was trained to imitate humans," the researchers explained in their blog entry. "The second model is fixed, because the researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating."

It turns out that such ad hoc language development has happened with some regularity at Facebook, as well as in other research efforts. For example, Google's Translate AI is reported to have quietly created an entire language to help it translate between different human languages.

The reason for this language development isn't that the AI software is taking over, but rather that its priorities are set for it to perform with maximum efficiency. The bots received points for efficiency, but no points were assigned by the researchers for sticking with English, so they didn't stick with it. The researchers published a paper that details how this works, but it's clear that they could have awarded points for English if they'd so chosen.

"The researchers had the systems stop creating their own language because it wasn't what they set out to investigate and it was affecting the parameters of their study," a Facebook spokesperson explained to eWEEK.
The spokesperson stressed that the AI process that was shut down was an experimental system, not production software. But the study did turn up some interesting and potentially useful information, perhaps the most important being that when the agents were communicating with humans in an actual negotiation session, the humans couldn't tell that they were talking to a robot. This is important because it demonstrates that these chatbots can determine a desired outcome and work to realize it.

But there's also an important lesson for IT managers, now that machine learning is becoming prevalent. As machine learning and other AI characteristics become part of your critical systems, the single most important activity as you integrate them is to test them thoroughly. That means testing with more than the expected parameters. You must test the response of your AI systems with wildly divergent data, and you must test them with information that's simply wrong. After all, if you're expecting input from humans, at some point they're going to make a mistake.

In addition, you must also develop a means of monitoring what's happening when your AI system is receiving input or providing output to other systems. It's not so much that having your machines create their own language is a problem as it is that you need to be able to audit the results. To audit the results, you need to understand what the systems are up to.

Finally, deep down inside, AI agents need to be instructed to speak English all the time, not just when they think the humans are listening.
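To ground the testing advice above, here is a hedged sketch of the kind of malformed-input probing the article recommends. The predict() function is a stand-in for whatever model interface your system exposes, not a real library call:

```python
def predict(text):
    """Stand-in for a deployed model endpoint (hypothetical)."""
    if not isinstance(text, str) or not text.strip():
        raise ValueError("input must be non-empty text")
    return {"label": "ok", "score": 0.5}  # dummy result for the sketch

# Probe with the inputs real users will eventually send:
# empty strings, wrong types, junk bytes, oversized payloads.
bad_inputs = ["", "   ", None, 12345, "\x00\x00", "a" * 10_000_000]

for case in bad_inputs:
    try:
        result = predict(case)
        # If the model accepts the input, its output should still be
        # well-formed rather than silently degraded.
        assert isinstance(result, dict) and "label" in result
    except ValueError:
        pass  # rejecting bad input cleanly is acceptable behavior
    except Exception as exc:
        print(f"unexpected failure on {str(case)[:40]!r}: {exc}")
```

The point is not the specific checks but the habit: exercise the system with data it was never meant to see, and log anything it does that you can't explain.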
What is a Key-Value Store?

A key-value store is a data storage model designed for storing, retrieving, and managing associative arrays, otherwise known as dictionaries or hash tables. Being far simpler than a relational database (RDBMS), it can be extremely fast and scales well. It also has the advantage of being schema-less, but compared to an RDBMS, key-value stores lack functionality, and queries such as joins are either very slow or not possible.

Why use a Key-Value Store?

If you want a basic setup, it's as simple as it gets when it comes to data storage (a minimal sketch follows at the end of this entry).

Did you know? Key-value stores have been core to Microsoft Windows since version 3.1: the operating system stores all its configuration values in a large key-value store called the Windows Registry. Unix-based operating systems, by contrast, store system-wide configuration files in the file system under the /etc directory.

Latest Key-Value Store Insights

10 technology trends were found to be game-changing in 2019. Python's tightening grip on the world of machine learning, the rise of data-centricity and new ways to merge BI and data science are all considered to impact your business.

Explore the data black hole that leaves companies unable to unlock the opportunities behind their data and use it as their competitive edge. Our research reveals the extent to which unseen, fragmented, and unused data is hindering businesses from extracting value.

The need for companies to become data-driven has triggered an explosion of new technologies and specialists that claim to solve a certain data problem better than everyone else. So how should your business react to the changing requirements?

Interested in learning more? Whether you're looking for more information about our fast, in-memory database, or to discover our latest insights, case studies, video content and blogs, we can help guide you into the future of data.
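As promised above, here is a minimal sketch of the key-value model using a plain Python dictionary. The class and key names are illustrative, not any particular product's API:

```python
class KeyValueStore:
    """Toy in-memory key-value store showing the three core operations."""

    def __init__(self):
        self._data = {}  # Python dicts are hash tables under the hood

    def put(self, key, value):
        self._data[key] = value               # insert or overwrite

    def get(self, key, default=None):
        return self._data.get(key, default)   # O(1) average-case lookup

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
# Schema-less: any value shape can sit behind a key.
store.put("session:42", {"user": "alice", "cart": ["book", "ball"]})
print(store.get("session:42"))
store.delete("session:42")
```

Notice what is deliberately missing: there is no query planner and no joins, which is exactly the trade-off the definition above describes.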
Published Sunday, Jul 26 2020, by Mounir Jamil

Launching throughout this week is the COVID-19 Law Lab initiative, which gathers and shares legal documents from more than 190 countries worldwide in order to help establish and implement sound legal frameworks for managing the pandemic. The primary target is ensuring that these laws protect the health and wellbeing of individuals and communities while adhering to international human rights standards.

The COVID-19 Law Lab is a joint project between the World Health Organization (WHO), the United Nations Development Programme (UNDP), the Joint United Nations Programme on HIV/AIDS (UNAIDS), and the O'Neill Institute for National and Global Health Law.

Laws that are well designed can go a long way toward building strong health systems and enforcing actions that create a healthier and safer public space. Achim Steiner, UNDP Administrator, mentioned that policies and laws deeply rooted in science, evidence and human rights can allow people to access health services and protect themselves against coronavirus. He added that the COVID-19 Law Lab is an essential tool for sharing best practices on policies and laws, as the current pandemic has seen an increase in the legislative action required to reduce and control the virus.

It is important to note that laws that are poorly designed or implemented can harm marginalized populations, entrench discrimination and stigma, and even hinder efforts to bring an end to the pandemic.

The COVID-19 Law Lab is a database of the laws that different countries have implemented in response to the pandemic. It covers topics ranging from quarantine measures and state-of-emergency declarations to legal measures on mask wearing and social distancing, lockdown measures, and access to vaccines and medications. The database will continue to grow as it is used.

Furthermore, the COVID-19 Law Lab will include research on several different legal frameworks. The analyses will focus mainly on the human rights effects of public health laws and will help countries find the best practices for their immediate response to COVID-19, as well as for socioeconomic recovery once the situation is under control.
National Symbols of India - Indian National Symbols

Get complete information on the National Symbols of India. Each of the national symbols has a deep significance: they were meant to project India's positive image to the world, and they serve as a reflection of the country's unique identity. With this article, you will know more about the Indian national symbols.

National Symbols of India

Here is a complete collection of the National Symbols of India, including the National Anthem, the National Song, the National Animal, the National Bird, the National Flag and more. They are all described below.

National Flag of India

The National Flag is a horizontal tricolour of deep saffron (kesaria) at the top, white in the middle and dark green at the bottom in equal proportion. The ratio of the width of the flag to its length is two to three. In the centre of the white band is a navy-blue wheel which represents the chakra. Its design is that of the wheel which appears on the abacus of the Sarnath Lion Capital of Ashoka. Its diameter approximates the width of the white band, and it has 24 spokes.

The design of the National Flag was adopted by the Constituent Assembly of India on 22 July 1947. Apart from non-statutory instructions issued by the Government from time to time, display of the National Flag is governed by the provisions of the Emblems and Names (Prevention of Improper Use) Act, 1950 (No. 12 of 1950) and the Prevention of Insults to National Honour Act, 1971 (No. 69 of 1971). The Flag Code of India, 2002 is an attempt to bring together all such laws, conventions, practices and instructions for the guidance and benefit of all concerned. The Flag Code of India, 2002 took effect from 26 January 2002 and supersedes the 'Flag Code-India' as it existed. As per the provisions of the Flag Code of India, 2002, there shall be no restriction on the display of the National Flag by members of the general public, private organisations, educational institutions, etc., except to the extent provided in the Emblems and Names (Prevention of Improper Use) Act, 1950 and the Prevention of Insults to National Honour Act, 1971 and any other law enacted on the subject.

State Emblem of India

The state emblem is an adaptation from the Sarnath Lion Capital of Ashoka. In the original, there are four lions, standing back to back, mounted on an abacus with a frieze carrying sculptures in high relief of an elephant, a galloping horse, a bull and a lion, separated by intervening wheels over a bell-shaped lotus. Carved out of a single block of polished sandstone, the Capital is crowned by the Wheel of the Law (Dharma Chakra). In the state emblem, adopted by the Government of India on 26 January 1950, only three lions are visible, the fourth being hidden from view. The wheel appears in relief in the centre of the abacus, with a bull on the right and a horse on the left, and the outlines of other wheels on the extreme right and left. The bell-shaped lotus has been omitted. The words Satyameva Jayate from the Mundaka Upanishad, meaning 'Truth Alone Triumphs', are inscribed below the abacus in Devanagari script.

National Anthem of India

The song Jana-gana-mana, composed originally in Bengali by Rabindranath Tagore, was adopted in its Hindi version by the Constituent Assembly as the National Anthem of India on 24 January 1950. It was first sung on 27 December 1911 at the Kolkata session of the Indian National Congress. The complete song consists of five stanzas.
Jana-gana-mana-adhinayaka, jaya he
Bharata-bhagya-vidhata.
Tava shubha name jage,
Tava shubha asisa mange,
Gahe tava jaya gatha,
Jana-gana-mangala-dayaka jaya he
Bharata-bhagya-vidhata.
Jaya he, jaya he, jaya he,
Jaya jaya jaya, jaya he!

Playing time of the full version of the national anthem is approximately 52 seconds. A short version consisting of the first and last lines of the stanza (playing time approximately 20 seconds) is also played on certain occasions. The following is Tagore's English rendering of the national anthem:

Thou art the ruler of the minds of all people, Dispenser of India's destiny. Thy name rouses the hearts of Punjab, Sind, Gujarat and Maratha, Of the Dravida and Orissa and Bengal; It echoes in the hills of the Vindhyas and Himalayas, mingles in the music of Jamuna and Ganges and is chanted by the waves of the Indian Sea. They pray for thy blessings and sing thy praise. The saving of all people waits in thy hand, Thou dispenser of India's destiny. Victory, victory, victory to thee.

National Song of India

The song Vande Mataram, composed in Sanskrit by Bankimchandra Chatterji, was a source of inspiration to the people in their struggle for freedom. It has an equal status with Jana-gana-mana. The first political occasion on which it was sung was the 1896 session of the Indian National Congress. The following is the text of its first stanza:

Sujalam, suphalam, malayaja shitalam,
Phullakusumita drumadala shobhinim,
Suhasinim sumadhura bhashinim,
Sukhadam varadam, Mataram!

National Calendar of India

The national calendar, based on the Saka Era with Chaitra as its first month and a normal year of 365 days, was adopted from 22 March 1957 along with the Gregorian calendar for the following official purposes: (i) the Gazette of India, (ii) news broadcasts by All India Radio, (iii) calendars issued by the Government of India and (iv) Government communications addressed to members of the public. Dates of the national calendar have a permanent correspondence with dates of the Gregorian calendar, 1 Chaitra falling on 22 March normally and on 21 March in a leap year.

National Animal of India

The magnificent tiger, Panthera tigris, a striped animal, is the national animal of India. It has a thick yellow coat of fur with dark stripes. The combination of grace, strength, agility and enormous power has earned the tiger its pride of place as the national animal of India. Of the eight races of the species known, the Indian race, the Royal Bengal Tiger, is found throughout the country except in the north-western region, and also in the neighbouring countries of Nepal, Bhutan and Bangladesh.

National Bird of India

The Indian peacock, Pavo cristatus, the national bird of India, is a colourful, swan-sized bird with a fan-shaped crest of feathers, a white patch under the eye and a long, slender neck. The male of the species is more colourful than the female, with a glistening blue breast and neck and a spectacular bronze-green train of around 200 elongated feathers. The female is brownish, slightly smaller than the male, and lacks the train. The elaborate courtship dance of the male, fanning out the tail and preening its feathers, is a gorgeous sight.

National Flower of India

Lotus (Nelumbo nucifera Gaertn.) is the National Flower of India. It is a sacred flower and occupies a unique position in the art and mythology of ancient India, and it has been an auspicious symbol of Indian culture since time immemorial.

National Tree of India

The Banyan Tree (Ficus benghalensis) is the National Tree of India.
This huge tree towers over its neighbours and has the widest-reaching roots of all known trees, easily covering several acres. It sends off new shoots from its roots, so that one tree is really a tangle of branches, roots and trunks.

National Fruit of India

Mango (Mangifera indica) is the National Fruit of India. Mango is one of the most widely grown fruits of the tropical countries. In India, mango is cultivated in almost all parts, with the exception of hilly areas. Mango is a rich source of Vitamins A, C and D. In India, we have hundreds of varieties of mangoes, of different sizes, shapes and colours. Mangoes have been cultivated in India since time immemorial.