This online course delves into the importance of data ethics & responsible AI in data management organizations. It identifies basic ethical principles pertaining to data and describes how these arise from three general ethical frameworks. Participants will consider the data ethics implications of several case studies and current events, and will learn how to recognize the data ethics aspects of the DCAM framework for application in real-world situations.

Module 0. About the Course (4 min)
Module 1. What’s Driving the Urgent Focus On Data Ethics (20 min)

A more detailed outline of the course is available for download. The accompanying exam tests knowledge and understanding of core concepts, principles, and terminology of data ethics & responsible AI. Number of questions: 23. Time limit: 46 minutes. Passing score: 70%. Once you pass the exam, you will receive a Certificate of Education documenting that you have demonstrated mastery of the topic. Further, the exam counts towards the Certified Information Management Professional (CIMP) designation in the Data Governance track and towards the Certified Data Steward (CDS) designation. We recommend that you take detailed notes and review the course material multiple times before taking this exam.
Researchers have developed a novel molybdenum-coated catalyst that can efficiently split water in acidic electrolytes, which could help with efficient production of hydrogen. When hydrogen burns, it is converted into water and heat, making it an entirely clean power source. Thus, in the quest for greener power, there is an urgent need for a sustainable and efficient means of producing it. One route is to split water using a process known as photocatalytic hydrogen evolution: using energy from sunlight, water molecules are split into hydrogen and oxygen. By developing an optimal catalyst, scientists are searching for ways to improve the water-splitting reaction. Researchers from King Abdullah University of Science & Technology (KAUST) created a hydrogen-evolution reaction catalyst that is both acid-tolerant and selectively prevents the water-reforming back reaction. Angel Garcia-Esparza, a former KAUST Ph.D. student, said the development of acid-tolerant catalysts is an important challenge because most materials are not stable and quickly degrade in the acidic conditions that are favorable for hydrogen generation. The team took time to establish the optimal pH level, between 1.1 and 4.9, because the acidity of the solution was crucial for the stability of the material. They then electro-coated molybdenum onto a standard platinum electrode catalyst in a mildly acidic solution. Comparing the performance of the photocatalysts under ultraviolet illumination, the team tracked the rate of hydrogen production with and without the molybdenum coating over 10 hours of operation. The researchers believe that the molybdenum acts as a gas membrane, preventing oxygen from reaching the platinum and disrupting its catalytic performance. The researchers said the main challenge for most catalysts is the long-term stability of the materials, so an acid-tolerant material capable of preventing the water-forming back reaction addresses a key process that slows down water splitting. More information: [Angewandte Chemie]
Can you imagine life without the always-on internet? If you’re like most people today, it’s a simply indispensable tool, and the thought of suddenly being without it makes you shudder and brings images of the Dark Ages to mind. It’s glorious, no doubt, and it really is a remarkable tool, but unfortunately, it’s not all upside. A recent study funded by the European Research Council has concluded that access to broadband internet costs you as much as 25 minutes of lost sleep every night. Considering that most of us are already getting by on far too little sleep, that’s not good. Luca Stella, one of the researchers responsible for the study, had this to say about it: “Internet addiction and technology use near bedtime are often blamed as a major cause of the sleep deprivation epidemic. Yet the empirical evidence on this relationship is still limited. In our study, we first show descriptive evidence that the use of digital services at night is correlated with shorter sleep duration. Then, exploiting differences in the access to high-speed internet caused by the pre-existing telephone infrastructure in Germany, we analyze the relationship between high-speed internet and sleep. We find that access to broadband internet reduces sleep duration and sleep satisfaction.” While the report is clearly bad news, in many ways, it merely confirms what we already knew. It seems self-evident that if you spend time surfing the web just before bed, you’re probably going to get sucked in, which is going to cut further into your already limited sleep time. Sure enough, we now have evidence that supports this notion. What is interesting in the report, and something that runs counter to most people’s assumptions, is that people in the 30-59 year age range suffered more sleep loss than those in the under-30 crowd. A fair question, then, is how much sleep you are getting. If you’d like more, it might be time to consider curtailing late-night internet use.
Imagine for a minute all the healthcare situations where proof is required. Proof of a patient’s healthcare coverage. Proof of a patient’s name. Proof of a patient’s medical history. Proof of a patient’s prescription. Each question could be answered without the verifier (hospital receptionist, doctor or pharmacist) knowing anything except that the statement you told them was true. This is the power of zero-knowledge proofs.

Ideation of zero-knowledge proofs

In 1985, three researchers — Shafi Goldwasser (MIT), Silvio Micali (MIT) and Charles Rackoff (University of Toronto) — drafted a paper titled “The Knowledge Complexity of Interactive Proof-Systems.” Their research first introduced a theorem-proving procedure: a new, efficient method of communicating a proof. The second part of the paper addressed the following question: How much knowledge should be communicated for proving a theorem T? We are attempting to convince a verifier of the truth. The idea behind zero-knowledgeness is that the verifier does not learn anything except that a statement is true. What exactly does “does not learn anything” mean? Questions must be answered to formally define the zero-knowledgeness property. The specifics of zero-knowledgeness properties are explained in a good summary paper. Also, due to the math required to adequately explain the concepts of the zero-knowledgeness properties, I will not be covering the math here. We will focus on broader applications for healthcare. For now, you’ll have to take my word for it: The math plays out.

Principles of zero-knowledge proofs

Zero-knowledge proofs have three important properties: completeness, soundness and zero-knowledge.
- Completeness: The verifier always accepts the proof if the fact is true and both parties follow the protocol.
- Soundness: The verifier always rejects the proof if the fact is false, as long as the verifier follows the protocol.
- Zero-knowledge: The verifier learns nothing about the fact being proved from the prover that couldn’t be learned without the prover, regardless of following the protocol. The verifier cannot even prove the fact to anyone later.
By leveraging blockchain technologies and smart contracts, we can ensure both parties follow the protocol.

Applying zero-knowledge proofs to healthcare

Let’s apply this to healthcare. As you recall, the initial question presented by Goldwasser, Micali and Rackoff (collectively referred to as GMR) was, “How much knowledge should be communicated for proving a theorem T?” We can restate this question to be patient-centric and healthcare-specific:
- How much information does a hospital receptionist require on a patient to check the patient into the facility (hospital, provider or other)?
- What are the minimum pieces of information required to share with a hospital receptionist to demonstrate a patient’s proof of valid health insurance?
- Is it possible to share no personal patient information (think the name, DOB, driver’s license), and still have a pharmacist confirm you’re able to pick up the prescription with the assurance you’re the correct patient?
An interactive and zero-knowledge proof is a protocol between two parties in which one party, called the prover, tries to prove a particular fact to the other party, called the verifier. This concept is used for identification and authentication. Let’s look at our three questions again, now considering the role of the verifier and prover.
- How much information does a hospital (verifier) receptionist require on a patient (prover) to check the patient into the facility (hospital, provider, or other)?
- What are the minimum pieces of information required to share with a hospital receptionist (verifier) to demonstrate a patient’s (prover) proof of valid health insurance?
- Is it possible to share no personal patient information (think the name, DOB, driver’s license), and still have a pharmacist (verifier) confirm you’re able to pick up the prescription with the assurance you’re the correct patient (prover)?

Zero-knowledge proofs in practice

Most zero-knowledge proofs are based on a conversation between the prover and the verifier. This conversation occurs as a series of interactions that typically progress over iterations:
- Commitment message from the prover.
- Challenge from the verifier.
- Response to the challenge from the prover.
Often this protocol repeats for several rounds. The verifier eventually decides whether to accept or reject the proof, based on the prover’s responses in all the rounds. Notably, a convincing-looking transcript can also be generated efficiently by a simulator that has no knowledge of the underlying secret, which is exactly why the verifier learns nothing it didn’t already know. A patient with an Android phone or an iPhone could use a decentralized application (dapp) to validate patient information during a healthcare event. At its simplest, a dapp is built on a smart contract: an agreement involving digital assets between two parties that are automatically redistributed based on the contracted formula. In our case, this contract could release information to the verifier based on our zero-knowledge proof smart contract. At the end of the transaction, the verifier would agree that the statement was true — for example, the patient does have the medical coverage required for the visit — but without conveying any information apart from the fact that the statement is indeed true. Proving that one has knowledge of certain information is trivial if one is allowed to directly reveal that information; the challenge is proving it without revealing the information itself. Knowledge without knowledge — that’s the next generation of patient interactions.
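To make the commitment-challenge-response rounds described above concrete, here is a minimal sketch of a classic interactive proof (a Schnorr-style identification protocol) in Python. It is illustrative only: the parameters are toy-sized and insecure, and this is not how any particular healthcare dapp or smart contract implements zero-knowledge proofs.

```python
# Toy Schnorr-style interactive proof: the prover convinces the verifier that
# it knows a secret x with y = g^x mod p, without ever revealing x.
# These parameters are tiny demo values and NOT cryptographically secure.
import random

p = 23   # small prime, p = 2q + 1
q = 11   # prime order of the subgroup generated by g
g = 4    # generator of the order-q subgroup of Z_p*  (4^11 mod 23 == 1)

x = random.randrange(1, q)   # prover's secret ("the fact being proved")
y = pow(g, x, p)             # public value known to the verifier

def one_round() -> bool:
    # 1. Commitment message from the prover
    r = random.randrange(1, q)
    t = pow(g, r, p)
    # 2. Challenge from the verifier
    c = random.randrange(0, q)
    # 3. Response to the challenge from the prover
    s = (r + c * x) % q
    # The verifier accepts iff g^s == t * y^c (mod p); the transcript
    # (t, c, s) reveals nothing about x beyond the truth of the claim.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Repeat for several rounds: an honest prover always passes (completeness),
# while a prover who doesn't know x fails with high probability (soundness).
assert all(one_round() for _ in range(10))
print("verifier accepted all rounds")
```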
Fully autonomous vehicles might never come to much, but many of the privacy issues they raise are as salient today as they would be in any high-tech driverless future. Assuming that a recent article in New Scientist is wrong and driverless cars won't be “going the way of the jetpack” anytime soon, it nonetheless seems probable that fully autonomous driverless vehicles will not be an everyday reality in the developed world before 2030 at the earliest. This flies in the face of predictions that by 2020, a human behind the wheel would be a thing of the past, and can largely be attributed to developments in AI not keeping up with the blue-sky thinking of many tech evangelists during the past decade. That is not to say, though, that significant strides are not being taken towards this possibly unattainable goal. Since October last year, for instance, Google's sister firm Waymo has been operating a commercial driverless taxi, or robotaxi, service in certain parts of Phoenix, Arizona. Meanwhile, numerous car manufacturers, most vociferously Tesla, have unveiled multiple advanced driver assistance systems. These systems, while a long way from providing fully automated driving, are nonetheless trumpeted by some, most notably Tesla's “technoking” and CEO Elon Musk, as heralding the arrival of such tech into the popular sphere. Certainly, away from the glare of the popular press, driverless vehicles are increasingly the norm in warehouses, ports and industrial facilities alike. Germany's BASF, for example, is using 32-wheeled automated guided vehicles (AGVs) to transport chemicals in volumes of up to 73,000 litres and at speeds of 30 km/h around its city-sized home site of Ludwigshafen. Moreover, while the notion of pilotless drones killing people on the other side of the planet is now a reality of war, there are many in the maritime industry who have an essentially sailorless shipping sector firmly in their sights. And if that seems hard to fathom, bear in mind that in Japan, a country already heavily committed to robotisation, there are already industry-led plans afoot to make 50% of the country's domestic fleet unmanned by 2040. Japan, by the way, is far from alone in harbouring such ambitions. What's more, while the need for ongoing engine maintenance will likely thwart full autonomy in the realm of deep sea commercial shipping for some time to come, a host of unmanned surface vehicles (USVs) are already performing a widening range of real-life duties, from scientific research to port security. Indeed, at the end of July, US-based AI firm Buffalo Automation and alternative transport operator Future Mobility Network unveiled what they describe as “Europe's first fully autonomous robotaxi service” in the Netherlands. Rather than running on the road as is Waymo's mode of choice, this system instead provides a solar-powered river ferry service in the municipality of Teylingen in South Holland that, hailed via a ridesharing app, “opens the door for cities across the EU to adopt this ground-breaking alternative form of transportation.”

A ton of data

But whether by land, sea or air, level 4 (high automation) and level 5 (full automation) vehicles still have many technological barriers to overcome before they transcend the pages of sci-fi, not least in terms of the amount of data they would have to amass, transmit and process.
Putting a figure on this, Charles Sevior, Dell's chief technical officer for unstructured data solutions, told Verdict in August that when it comes to autonomous cars, test vehicles alone typically generate between 20 and 40 TB of data per day. Out in the wild, though, the true figure could well prove much higher. For instance, Robert Bielby, senior director of system architecture and product planning for automotive at Micron Technology's Embedded Business Unit, estimated in February last year that while the average self-driving car in the US might generate between 1 and 15 TB per day, the daily amount for a road-based robotaxi could well leap to 450 TB. Meanwhile, US law firm Baum Hedlund asserted in 2019 that a driverless car could ultimately generate around 100 GB of data per second. This, if correct and the vehicle were to operate non-stop for 24 hours, would work out at a gargantuan 8,437.5 TB a day. At present, it is still too early to say which of these figures, if any, are more likely accurate. Nevertheless, it seems fair to say that were level 4 and 5 vehicles to become an everyday reality, they would clearly generate a significant amount of data. And while consumers probably need not worry too much about how Big Tech boffins intend to actually store and handle all those digital titbits, they should perhaps wonder what all those ones and noughts are saying about them.

95% connected cars by 2030?

Not that chatty components are a new automotive phenomenon. For instance, in June 2017, the US Federal Trade Commission (FTC) and the National Highway Traffic Safety Administration (NHTSA) ran a workshop to discuss such matters as they pertained to Internet-enabled connected cars. PYMNTS.com expects connected cars to account for some 95% of global new car sales by 2030 as opposed to around 50% at present. While workshop participants noted that “many companies throughout the connected car ecosystem will collect data from vehicles”, the FTC says in a subsequent report, much of this would consist of aggregate and non-sensitive data that could be used quite benignly for, inter alia, managing/monitoring congestion or vehicle performance. Nonetheless, other types of data generated by connected cars would be much more sensitive and personally identifiable, potentially including “a fingerprint or iris pattern for authentication purposes” or navigational and other information about the vehicle's, and ergo the occupants', real-time location.

“Given all of this data collection, consumers may be concerned about secondary, unexpected uses of such data.” - FTC report

“For example,” it continues, “personal information about vehicle occupants using the vehicle's infotainment system, such as information about their browsing habits or app usage, could be sold to third parties, who may use the information to target products to consumers.” Although some people might find this helpful, “others may have concerns about recommendations based on, for example, [the] tracking of their usage of apps.” The thing is, the amount of data garnered by connected cars pales when compared to what level 4 and 5 vehicles would need to accrue, such as that amassed via an array of external cameras and other sensors to provide computer vision and to ascertain their surroundings, other road users, potential hazards and even potential car thieves. Of course, exactly how much of a privacy risk might be posed by a passing vehicle's possible use of radar, lidar or thermal imaging systems is open to debate.
Nevertheless, many concerns have understandably been raised by the notion of a vehicle's cameras capturing things other people would much rather they did not.

Huge data set

But are these fears justified? Well, when it comes to the cameras on his company's waterborne robotaxis identifying members of the public, Buffalo Automation's CEO Thiru Vikram thinks not. “We do have a huge data set that we collect and annotate but that's primarily to see what types of other vehicles might be there in the frame that could cause problems for our navigation system,” he tells CyberNews. “We're not actively trying to do facial recognition, so we have no idea who any people that might find themselves in our data are.” Accepting the possibility that facial recognition software could be applied to such data, he also notes that it would equally be quite simple to anonymise images if so required. “Our AI is pretty good at knowing when it sees a human face,” he says. “So an easy solution would be to ask us to blur out the face of the individuals in the image.” This is highly reminiscent of the privacy issues encountered when Google launched Street View, which led the company to blur out any human faces it captured. Certainly, there seem to be relatively few technological barriers to applying this, with Austria's Celantur, for example, operating a cloud-based platform that it claims is capable of anonymising around 200,000 panoramas a day and 90,000 video frames per hour. But facial recognition is arguably not the only potential issue at stake here, as seemingly evidenced by China's recent decision to ban Tesla vehicles from military and other sensitive sites. While this move may have been partially motivated by politics and/or economics, the main reason cited in a Bloomberg report this past March concerned fears that the surround cameras employed by these vehicles' Autopilot driver assistance systems presented an unacceptable security risk to the Chinese military. The thinking being that they could harvest information about facilities rather than faces. Whatever the case, when it comes to autonomous vehicles, privacy concerns are by no means limited to external matters. While Thiru reports that the only passenger information that Buffalo Automation's robotaxis collect “is to process payments,” something “that's all done through a third party,” other autonomous vehicles might not stop there. In addition to monitoring the state of the vehicle and whether, for instance, it needs maintenance or servicing, it is likely that a significant battery of cameras, microphones and other sensors will be trained on the occupants to analyse everything from their preferences and habits to how they are sitting or indeed lying down. Or, as Sam Abuelsamid, Guidehouse Insights' principal research analyst for e-mobility, put it in 2019: “The outside of [autonomous vehicles] will bristle with cameras, radar, lidar and other sensors and so will the passenger cabins.” Moreover, as Polish tech firm Summa Linguae notes, level 4 and 5 vehicles will also rely heavily on advanced speech recognition software to make decisions based not only on specific commands, but also more subtle inferences, such as “Oh no! I've left item X at home.” Thus, it seems not inconceivable that such vehicles will be designed to listen in on every spoken word and not just the range of apps an occupant might choose to select on the in-car infotainment system.
While some people might love the idea of being the centre of attention, others who have watched 2001: A Space Odyssey or read Orwell's 1984 might find this constant surveillance a tad unsettling to say the least. However, for David Navetta, partner at US international law firm Cooley and vice-chair of its cyber/data/privacy practice, the biggest privacy issue concerns location-based information and the ensuing ability to ascertain where people are, where they have been, who they are visiting, and when they are or are not at home. Equating this to someone tracking your phone, such a situation, he says, would not be unique to driverless vehicles, but it would be "specific to cars" and could give rise to serious physical issues. “If someone leaves their house and it's known that they're gone, that could make their house potentially exposed,” Navetta states, adding that in general when companies or other third parties know a person's location by whatever means, they can track them and serve them ads. Perhaps unsurprisingly, “some people might take issue with that type of activity.”

The time is now

So how concerned should people be with the various privacy challenges presented by driverless vehicles? “I don't think consumers should be significantly more concerned about level 4 and level 5 vehicles than they should be with cars that exist now because [those cars] are already collecting, generating and sharing massive amounts of data,” says Chelsey Colbert, policy counsel at the Future of Privacy Forum and leader of the Washington, DC-based think tank and advocacy group's portfolio on mobility and location data. That, though, is certainly not to say that they shouldn't be concerned. Rather, addressing these issues is something that should not be put off until the day that the Johnny Cabs of Total Recall are actually up and running and Minority Report deemed a documentary. “We should be thinking about it now,” she says. And with that said, the big challenge facing consumers, Colbert believes, is “to keep up with all the technology that is in cars” and staying abreast of what data is being shared and with whom and what is then being done with that data. And to that end, Navetta would appear to agree. More than just reading the privacy notices that come with a vehicle and their increasing number of interfaces, he urges people to try to understand the technology at stake. “More and more companies are building functionality into their products and services,” he says. “So understanding the technology you are working with is becoming more and more important if people are concerned about privacy.” Drawing parallels with current mobile phones and social media, Navetta advises people to dig deeper into the settings and options that may not be readily apparent to the casual user. “If you explore a little bit, you actually have a lot more control than you might think,” he says. However, to do this, the individual first needs to locate those options and settings while also learning how to exercise them. Thus the bigger picture for people wishing to protect their privacy is not just reading the terms and conditions, “but understanding how the technology actually works.” “As consumers, we really need to shift our mindset from a car being something that we buy [and which] doesn’t really change over time,” Colbert says. Moreover, as ever more layers of connectivity and automation are added to vehicles, they essentially become big robots that blend hardware with continually evolving software.
“Modern cars are already getting over-the-air software updates,” she states, explaining that this could not only impact data flows, but also “trigger new privacy obligations.”

Location and use

Of course, many issues pertaining to privacy will depend on where the vehicles in question operate, Colbert observes. For instance, the EU already maintains quite stringent privacy and data protection laws, most notably in the guise of the General Data Protection Regulation (GDPR). Meanwhile, the situation is very different in the US, where, lacking any federal-level GDPR equivalent, it is instead up to individual states to determine what is and what is not acceptable, resulting in broad variations across state lines. Similarly, privacy implications will also likely depend on how any level 4 and 5 vehicles are actually used. Noting that many people enjoy driving and would not want to stop doing so regardless of how technology develops, Thiru sees private spaces, such as campuses and the like, being particularly suited to the deployment of automated vehicles. This itself might also blur some lines as to who or what owns any generated data. Meanwhile, citing the high cost of these vehicles, Colbert, along with other commentators, sees level 4 and 5 vehicles as most likely operating as robotaxis or performing automated trucking and delivery services, such as those forming the focus of Waymo's Via project. This would mean that rather than having a direct relationship with the vehicle and being able to learn about privacy issues from a dealer, the average consumer would most likely encounter these vehicles in situations similar to how they might use a ride-hailing app today. Certainly, consumers should not take it for granted that any such vehicles won't be collecting, processing and sharing significant amounts of information. Instead, it might be better to assume that they will. Indeed, this may be something that is already happening if, for instance, the ride-hailing app driver, or indeed a conventional taxi driver, is operating a dash cam or similar recording device.

How safe is your data?

Such usage scenarios could also have implications in terms of cybersecurity. While connected cars have been proven to be hackable, when it comes to level 4 and 5 vehicles, the biggest threats may not actually concern black-hat hackers seeking to take over individual vehicles like they might do a phone or laptop. Instead, Navetta identifies ransomware and other attacks capable of bringing down entire networks of vehicles while simultaneously compromising enormous amounts of data as potentially more worrisome. That said, whether or not level 4 or 5 vehicles, either individually owned or as part of a fleet, would be more attractive, and indeed susceptible, to cybercriminals compared to other potential targets remains moot. After all, as Thiru notes, the autonomy industry in general follows the standards of the broader cybersecurity industry, outsourcing its security needs to established players as opposed to developing its own specific security software. As such, the sector will likely use the common infrastructure employed by other industries that regularly handle sensitive data, such as banking and retail. “I like to joke that [an automated vehicle is] as secure as your bank account,” Thiru says, asserting that there is nothing about driverless vehicles that makes them any riskier than other digitally-enabled systems. “The risk profile is not unique to us,” he says.
Privacy by design

This may well be so and from a security point of view, the multiple terabytes of data generated by a level 4 or 5 vehicle in any given day may well be deemed largely safe from the attentions of unauthorised actors. However, in this post-Snowden world of seemingly endless snooping, surveillance and data mining, there remains the question in some minds at least as to what those actors with authorised access to that data might choose to do with it. Furthermore, there are also question marks regarding exactly what data is being collected in the first place and whether it is strictly necessary from a functional and/or safety perspective. Indeed, as Colbert believes, perhaps now is the ideal time for manufacturers to ask themselves just how much information their cameras and other sensors actually need to collect and just how much of that information actually needs to be identifiable. “Having those questions right up front at the design stage is crucial,” she says, expressing a cautious optimism that industry will embrace the privacy-by-design principle and build in suitable protections from the get-go. What's more, were this to happen, it wouldn't just be consumers that would arguably stand to benefit. For instance, a 2020 survey commissioned by the Partners for Automated Vehicle Education (PAVE) unearthed considerable mistrust among US citizens when it came to the subject of driverless vehicles, with roughly three quarters of those polled viewing such technology as “not ready for primetime.” Admittedly, the survey focussed more on perceptions of safety as opposed to privacy. However, anything that the sector might do to allay public fears would only help to advance its cause. After all, who wants to hail a cab that might then fleece you of every bit of data it can?

About the author: Brian Dixon is a freelance journalist and video editor with more than 20 years' experience covering business, tech and industrial beats for print and online publications in the UK, Poland and Sweden. A keen traveller, he has so far visited 75 countries on six continents. Pandemics permitting, he divides his time between the UK, Poland and Japan.
The average employee isn’t fully aware of the cyber risks that face their company; even fewer know what to do when they encounter a threat. Cybercriminals know this and see your staff as the easiest way to gain access to your network. All it takes is one wrong click from someone on your team to create a pathway that bypasses your firewalls and other security measures. That’s why companies are often targeted by ransomware or social engineering tactics like phishing emails. In addition, when businesses implement work-from-home arrangements, the responsibility for keeping the network secure falls on the employees. However, the average person tends to use big-box internet providers and unsecured wireless. Home networks allow a wide variety of network traffic, from kids’ devices to guests receiving access. It’s not enough to rely only on firewalls, antivirus tools, and monitoring software. So how do you protect against social engineering and other threats? The answer is cybersecurity training for employees. Giving your team the information they need allows them to take an active role in keeping your network secure.
Recently I attended the launch of Dark Data, the latest book by Imperial College’s emeritus professor of mathematics, David Hand, in which he outlines the various ways in which the data of our big data era may be insufficient to support the kinds of decisions we hope to make with it. He explores the many ways in which we can be blind to missing data, and how that can lead us to conclusions and actions that are mistaken, dangerous, or even disastrous. The book is undoubtedly fascinating, especially for anyone interested in data and statistics, but for me, the most interesting aspect was, appropriately, something omitted from the book. Wrapping up the event was Professor Sir Adrian Smith, the head of the Turing Institute, and one of the architects of the recently published report into AI and ethics for the UK government. He raised the increasingly pertinent issue of adversarial data, or the deliberate attempt to manipulate the data upon which our AI systems depend. As artificial intelligence has blossomed in recent years, the importance of securing the data upon which AI lives and breathes has grown, or at least it kinda has. Data from the Capgemini Research Institute last year showed that just one in five businesses was implementing any form of AI cybersecurity, and while the same survey also found that around two-thirds were planning to do so by the end of this year, you do wonder just how seriously we’re taking the problem.

Trusting the data

There have already been numerous examples of AI-based systems that have gone astray on account of having poor, often biased, data with which to train the systems, which often results in discriminatory outcomes. It’s likely that a greater number of systems have poor-quality outcomes due to the same lack of quality in the data they’re based upon. In these kinds of examples the lack of quality is something the vendors are complicit in, but adversarial attacks involve the deliberate manipulation of data to distort the performance of AI systems. There are typically two main kinds of adversarial attack: targeted and untargeted. A targeted attack has a deliberate form of distortion that it wants to create within the AI system, and sets out to ensure that X is classified as Y. An untargeted attack doesn’t have such specific aims, and merely wishes to distort the outputs of the system so they’re misleading. While untargeted attacks are understandably less powerful, they’re somewhat easier to implement. Ordinarily, the training stage of machine learning strives to minimize the loss between the target label and the predicted label. The model is then tested to verify that it predicts accurately on unseen data, with an error rate calculated from how often the predicted and true labels differ. Adversarial attackers change the query input such that the prediction outcome is changed. It perhaps goes without saying that in many instances, attackers will have no idea what machine learning model the AI system is utilizing, which you might imagine would make distorting it very difficult. The reality, however, is that even when the model is unknown, adversarial attacks are still highly effective, due in large part to a degree of transferability between models. This means that attackers can practice on one model before attacking a second, confident that the attack will still prove disruptive. The question is, can we still trust machine learning?
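To make the idea concrete, here is a minimal, self-contained sketch of an untargeted evasion attack in the spirit of the fast gradient sign method (FGSM). The logistic-regression "model", its weights, the input and the perturbation budget are all invented for illustration; nothing here refers to a real deployed system.

```python
# A toy untargeted evasion attack (FGSM-style) against a hand-rolled
# logistic-regression "model". All weights and inputs are synthetic.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # stand-in for trained model weights
b = 0.0

def predict_proba(x: np.ndarray) -> float:
    """P(class = 1) under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A legitimate input that the model labels class 1 with some confidence.
x = 0.5 * w / np.linalg.norm(w)
y = 1                            # the true (and currently predicted) label

# FGSM step: nudge every feature by eps in the direction that increases the
# loss. For cross-entropy, the gradient w.r.t. the input is (p - y) * w.
eps = 0.6
p = predict_proba(x)
x_adv = x + eps * np.sign((p - y) * w)

print(f"clean prediction:       {predict_proba(x):.3f}")    # > 0.5: class 1
print(f"adversarial prediction: {predict_proba(x_adv):.3f}")  # near 0: flipped
```

The transferability point from the text is what makes this practical for attackers: a perturbation crafted against a substitute model like this one often degrades a different, unseen model trained on similar data.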
Research suggests that a good way to protect against adversarial attacks is to train systems to automatically detect and repair them. One approach to achieving this is known as denoising, and it requires methods for removing the noise from the data. Often this is simple Gaussian noise, but by using an ensemble of denoisers, it’s possible to strip out each distinct type of noise. The aim is to return the data to as close to the original, uncorrupted version as possible, and thus allow the AI to continue functioning properly. The next step is to use a verification ensemble that reviews the denoised data and re-classifies it. This is a simple verification layer to ensure the denoising has worked well. Suffice it to say, these defensive tactics are still at an experimental stage, and it’s clear that more needs to be done to ensure that as AI becomes a growing part of everyday life, we can rely on it to provide reliable and effective outcomes free from the distorting effects of hackers. There’s a strong sense that initial biased outputs have eroded any inclination to blindly trust the systems to deliver excellent results, but there is perhaps more work to be done to truly convince vendors to tackle adversarial attacks properly, and indeed for regulators to ensure such measures are in place.
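As a rough illustration of the denoise-and-verify idea sketched above, the following toy pipeline runs an input through two different denoisers and flags it when the re-classified results disagree. The denoisers, the threshold "classifier" and the signals are all invented for illustration; a real defence would use learned denoisers and an actual model.

```python
# Toy denoise-and-verify pipeline. Two simple denoisers (each aimed at a
# different noise type) clean the input; a verification step re-classifies
# every cleaned view and flags the input if the votes disagree.
import numpy as np

def median_denoise(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Sliding-window median: effective against spiky, salt-and-pepper noise."""
    xp = np.pad(x, k // 2, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def mean_denoise(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Sliding-window mean: effective against Gaussian-style noise."""
    xp = np.pad(x, k // 2, mode="edge")
    return np.array([xp[i:i + k].mean() for i in range(len(x))])

def verify(x: np.ndarray, classify) -> tuple[str, list[int]]:
    """Classify the raw input and each denoised view; disagreement is a red flag."""
    votes = [classify(x)] + [classify(d(x)) for d in (median_denoise, mean_denoise)]
    return ("suspicious" if len(set(votes)) > 1 else "consistent"), votes

# Stand-in classifier: thresholds the signal mean (a real system would use a model).
classify = lambda arr: int(arr.mean() > 0.5)

clean = np.full(32, 0.3)
spiked = clean.copy()
spiked[::4] = 5.0   # adversarial-style spikes push the raw mean over the threshold

print(verify(clean, classify))    # ('consistent', [0, 0, 0])
print(verify(spiked, classify))   # votes disagree, so the input gets flagged
```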
High-level architecture of a Digital Worker

A Digital Worker consists of TaskBots and MetaBots that together form one or more skills. It can include calls to REST APIs for external services (for example, an AI engine), where the REST API calls are executed either via native client commands or through a MetaBot with a dynamic link library (DLL). The Digital Worker enables the customer to automate their workflow. The TaskBots and MetaBots included in the Digital Worker implement the skills. Each skill must have a top-level TaskBot, called the Master Bot. The Master Bot invokes other TaskBots and MetaBots. Users must be able to run individual TaskBots and MetaBots by themselves, or from the Master Bot file. Just as when hiring an employee, customers must set up the Digital Worker to fit into their environment. Customers downloading Digital Workers from the Bot Store expect to be able to easily configure and install the Digital Worker – and its TaskBots and MetaBots – into their workflows. For best practices on configuration, see Building Digital Workers for the Bot Store.
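As a generic illustration of this structure (not Automation Anywhere's actual bot format or API), a Master Bot can be thought of as a top-level entry point that orchestrates sub-steps, one of which delegates to an external AI engine over REST. Every name and endpoint below is a hypothetical placeholder.

```python
# Generic sketch only: mirrors the architecture described above, where a
# top-level "Master Bot" invokes sub-steps and one step calls a REST API.
import json
import urllib.request

def ocr_skill(document_url: str) -> dict:
    """MetaBot-style step: delegate to an external AI engine over REST.
    The endpoint is a hypothetical placeholder."""
    req = urllib.request.Request(
        "https://ai-engine.example.com/v1/ocr",        # hypothetical endpoint
        data=json.dumps({"url": document_url}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_invoice_fields(ocr_result: dict) -> dict:
    """TaskBot-style step: plain local processing, runnable on its own."""
    return {"total": ocr_result.get("total"), "vendor": ocr_result.get("vendor")}

def master_bot(document_url: str) -> dict:
    """Top-level entry point: orchestrates the steps that make up one skill."""
    return extract_invoice_fields(ocr_skill(document_url))
```

Note how each step is independently callable, matching the requirement that users can run individual TaskBots and MetaBots by themselves or from the Master Bot.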
Before Unified Extensible Firmware Interface (UEFI), there was legacy BIOS. You may hear us throwing around the term “legacy BIOS” all the time. Although sometimes we use BIOS and UEFI interchangeably (or sometimes at the same time), the actual term we should be using now is UEFI firmware. UEFI is the successor to legacy BIOS, and there’s a reason why it is the preferred firmware initialization infrastructure. As you may have already read or may already know, the BIOS is the main component when it comes to the booting process. A great deal of activity happens between power-on and OS login, and the BIOS is in charge of boot initialization up to the handoff to the OS. Before there was UEFI, legacy BIOS was becoming pretty complicated. As legacy BIOS technology evolved, the complexities and limitations of boot loaders kept increasing. For one thing, legacy BIOS relied upon the real-mode execution model, which had a limited amount of addressable memory and often required external plug-ins to allow for more memory space. Also, as processor technology became more powerful, it was necessary to manually code certain elements into the code base. Unfortunately, due to the varying configurations and feature sets of the bootloaders, new code often did not merge well with the original code base, and the result could get very messy and difficult to work on. With no governing body to oversee bootloader firmware development, it became difficult for vendors to create a universal bootloader for different systems and platforms. So how was this problem solved? The UEFI Forum was created as a governing body to standardize firmware initialization and advocate interoperability and modularity across various systems and platforms, such as ARM, x86, RISC-V, MIPS and more. The UEFI Forum comprises computer scientists, leading technology companies, technology vendors and researchers who all collaborate on the UEFI specifications, which allow for greater functionality when it comes to firmware initialization and booting. The UEFI specifications were specifically designed with compatibility and modularity in mind and incorporate C code to standardize code bases. Since many key technology players are involved with the UEFI Forum, firmware development has become more collaborative because they are all working together to support each other’s functionality. Bootloaders like coreboot and Das U-Boot have been developed for many years; however, they aren’t as standardized as UEFI – in fact, there are over 40 different U-Boot variations! With UEFI being fully standardized, UEFI provides the glue between the firmware and the operating system for any architecture, allowing an OS designer to develop once and deploy everywhere.
Simplex and duplex are two options for the cables in your fiber optic network. Whether you choose full-duplex vs. half-duplex vs. simplex depends on your application and budget. Learn the differences between simplex and duplex fiber optic cables, their various applications and the advantages of each. Simplex and duplex fiber optic cables are both tight-buffered and jacketed with Kevlar strength members. Simplex fiber optic cables, also known as single-strand, have only one fiber. On one end is the transmitter, and on the other end is the receiver. These are not reversible. Duplex fiber optic cables traditionally have two fibers joined together by a thin web or “zipcord” construction. One strand transmits from point A to point B and the other from B to A. Both ends have a transmitter and a receiver. The emergence of single-strand fiber transmission has changed the situation. It can be a better alternative for network managers, providing increased network capacity, higher reliability due to fewer connections and overall cost savings. Single-strand duplex fiber transmission uses a single fiber to send data in both directions, namely bidirectional or BiDi transmission. This technology is based on two wavelengths traveling in opposite directions and is achieved by combining and separating data transmitted over a single fiber based on the wavelengths of the light (typically around 850, 1300 and 1550 nm). Only some equipment manufacturers are using or moving to a single-strand cable for their connectivity, as the equipment is very expensive. It exists for certain applications, but it is not the norm. Duplex fiber optic cables can be half-duplex or full-duplex. Half-duplex means that data can be transmitted in two directions but not simultaneously. Full-duplex indicates that data transfer can occur in both directions at once. Fiber optic simplex offers one-way data transfer. It’s a good choice for an application such as an interstate trucking scale that sends weight readings back to a monitoring station. Another example is an oil line monitor that sends data about oil flow back to a central location. Fiber optic duplex enables bidirectional data transfer. It’s a good choice for applications such as telecommunications as well as workstations, Ethernet switches, fiber switches and servers, and backbone ports. Simplex multimode fiber optic cables can also be used for bidirectional data transfer if a multiplexed data signal is used. Both simplex and duplex fiber optic cables come in single-mode or multimode. Single-mode is often better for long-distance applications because it carries one ray of light at a time. Multimode has a larger core and can transmit more data at a given time. However, it is better for shorter distances due to high dispersion and attenuation rates. Read more about the differences between multimode and single-mode in our Multimode vs. Single-Mode Fiber Optic Cable article. As simplex and half-duplex fiber optic cables use only one fiber to communicate, they are often less expensive than full-duplex fiber optic cables. They also allow for more incoming data at higher speeds. The primary advantage of a full-duplex fiber optic cable is the capacity for simultaneous bidirectional communication. One potential disadvantage of fiber optic full-duplex is that it only permits two devices to communicate at once, which means you will need enhanced connections to accommodate additional devices. Need help deciding whether full-duplex vs. half-duplex vs. simplex is right for your network? Contact Black Box for expert advice on your cabling infrastructure.
The Governor of Rhode Island approved legislation early in July requiring the state to obtain all its electricity from renewable energy resources by 2033. No American state has a more ambitious timetable than this one. “Anything more ambitious, and I would start being a little skeptical that it would be attainable,” explained a climate and energy researcher at the Breakthrough Institute, Seaver Wang.

A future powered by renewable energy is possible

True, Rhode Island is a small state. Furthermore, the state is more prepared for such a timeline than the rest of the nation due to its current circumstances. However, researchers claim that by observing how this tiny state conducts its political affairs, other states may learn how to pave their own paths toward a future powered by renewable energy. The renewable energy standard in Rhode Island sets a target that power providers must achieve by accumulating a specific number of certificates by 2033. Electricity producers can obtain these certificates by producing their own electricity from renewable energy resources or purchasing certificates from other producers. (Many other states have comparable rules; Rhode Island’s present norm is an update to a previous level.) Policy wonks have also suggested a federal standard. Today, pinning hopes for renewable energy on a state that still gets 89% of its electricity from natural gas may sound overly optimistic. Most of the scant wind energy that does exist is either imported from other states or comes from the 30-megawatt Block Island Wind Farm, the nation’s first offshore wind farm, which has just five turbines and only started operating in 2016. However, Rhode Island intends to make up the difference with up to 600 megawatts of new wind energy. It has teamed up with Ørsted to support this endeavor, a partnership that might bring a critical mass of turbine experience from Europe, where the industry is far more developed. “I think that adds greatly to the likelihood of [Rhode Island’s] success,” stated Morgan Higman, a clean-energy researcher at the Center for Strategic and International Studies in Washington, D.C. The measures in the package are very tailored to Rhode Island’s situation. Not only is it one of the states with the least population in the union, but it also already has one of the lowest per capita energy consumption rates. Additionally, Rhode Island’s grid doesn’t need to support many energy-intensive manufacturing companies because the state’s economy is service-oriented. That makes the 2033 objective even more attainable. “It’s better to have attainable goals and focus on a diverse portfolio of policies to promote clean energy advancement, rather than sort of rush to meet what is essentially…a bit of a PR goal,” explained Wang. Another lesson is provided by the fact that Rhode Island is betting everything on offshore wind, something this marine state may have in abundance. According to Higman, it is a good illustration of the benefits of harnessing a state’s potential resources. Additionally, collaborating with Ørsted may enable the state to access beneficial expertise. Texans may similarly decide to increase their reliance on the state’s wind energy resources. With so much sunshine available, New Mexico may potentially create a renewable energy supply. This kind of behavior, according to Higman, “is the fastest way that we see states accelerate renewable energy deployment.” There is space for improvement in Rhode Island’s policy.
Its emphasis on renewable energy ignores fission, the region’s main carbon-free energy source: a quarter of the area’s electricity is produced by just two nuclear power stations, Millstone in Connecticut and Seabrook in New Hampshire. A more comprehensive approach might take note and encourage nuclear power as well. Most importantly, any debate on energy policy should acknowledge that Rhode Island’s grid is interconnected with the networks of its neighbors in New England, New York, and other states. (In fact, it has frequently collaborated with them on setting objectives and constructing new offshore wind power.) There is a danger that some states won’t have any renewable energy left when all the renewable energy certificates are bought out if nearby states adopt similarly stringent rules without generating additional energy capacity.

Analysts, though, are certain that Rhode Island can complete the task

“Rhode Island does deserve some kudos for this policy,” stated Wang. “It’s really tempting to applaud states for their goals. This is a useful example of where setting a goal is not very meaningful. Identifying the means and strategies, and technologies to achieve that goal is the most important thing. And Rhode Island has done that,” Higman added.
As many states continue to plead for more face masks and shields amid the coronavirus pandemic, some experts are looking at ways to prevent this type of situation from recurring. The answer, they say, is inventory and supply-chain tracking using the internet of things. “There’s so much that IoT is being used for in the public-safety space and by different governments, but I think the reality is there’s so much more you could actually use it for,” said Dilip Sarangan, global research director for IoT and digital transformation at Frost and Sullivan. For inventory management, barcode and radio-frequency identification (RFID) readers can scan tags as equipment enters and leaves a public health department’s warehouse, said Michael Sparks, director of government sales at Zebra Technologies. Similarly, first responders can use them onsite to track assets and workers. “With the appropriate software, you can, at a glance, know where your assets are, know when they arrive on scene and know who is there and where they are throughout the city, incident or building,” Sparks said. Another use case is tracking patients. “There are long lines of people to get tested [for COVID-19], and we have applications that can speed that process through automation,” he said. Specifically, medical personnel collecting samples from potentially positive people can print wristbands onsite that tie the samples to the patient. That minimizes mistakes in connecting patients and samples at testing labs. Traditionally, inventories and tracking have been time-consuming pen-and-paper processes, with someone visually verifying the assets and marking a checklist, and then someone else entering the data into a database, leaving room for error. “The technology, the software, the procedures, the best practices have been well vetted and developed in the commercial world. We’re simply applying those to the first responder world,” Sparks said. Although it’s gained traction in recent years, IoT technology isn’t that new. Barcode and RFID technology have been around since 1974 and 1948, respectively, and the term “internet of things” was coined in 1999. “Anything, obviously, that’s ‘connected’ in some way, shape or form falls within the IoT realm,” Sarangan said. “For some reason, people assume that IoT in public safety or IoT in any industry is a new thing. It’s really not…. In reality, it’s anything that’s connected using any type of network.” He likens IoT to supply-chain management. “Think of it as Amazon for everyone,” Sarangan said. Consumers who buy something from Amazon get regular updates on where their package is, as do shippers and merchants. “The challenge with an IoT solution when you’re thinking about it from a supply-chain standpoint, it’s not just one company implementing it,” he added. “It’s one company … connected to the supply chains of their customers, their partners, their suppliers – everyone. So, it’s having an end-to-end supply chain that can be monitored. That’s where the value really is.” COVID-19 stands to have a major impact on the global IoT in healthcare market, according to MarketsandMarkets, which predicted that it will grow from $55.5 billion in 2019 to $188 billion by 2024. By 2025, that number could be about $1.6 trillion. Taking a cue from efforts in South Korea and Taiwan, Sarangan said IoT can even help before a disaster strikes.
For instance, facial recognition or video surveillance could be used to determine who a person with a high fever came into contact with. “The whole thing sounds crazy. It sounds like some kind of police state,” he said. However, if the World Health Organization or the United Nations could do that, “that’s where the value comes in.”
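Returning to the inventory-tracking use case described earlier, here is a minimal sketch of "the appropriate software" side: rolling reader scan events up into an at-a-glance view of asset locations. The event fields, reader names and tag IDs are invented for illustration and are not tied to any vendor's system.

```python
# Toy asset tracker: replay barcode/RFID scan events from portal readers
# to compute where each tagged asset currently is.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScanEvent:
    tag_id: str       # RFID tag or barcode value
    reader_id: str    # which portal/reader saw it, e.g. a warehouse dock
    direction: str    # "in" or "out"
    timestamp: datetime

def current_locations(events: list[ScanEvent]) -> dict[str, str]:
    """Replay events in time order: the latest 'in' scan fixes the location,
    while an 'out' scan marks the asset as in transit."""
    locations: dict[str, str] = {}
    for e in sorted(events, key=lambda e: e.timestamp):
        locations[e.tag_id] = e.reader_id if e.direction == "in" else "in transit"
    return locations

events = [
    ScanEvent("N95-PALLET-017", "state-warehouse", "in", datetime(2020, 5, 1, 9, 0)),
    ScanEvent("N95-PALLET-017", "state-warehouse", "out", datetime(2020, 5, 3, 7, 30)),
    ScanEvent("N95-PALLET-017", "county-hospital", "in", datetime(2020, 5, 3, 11, 45)),
]
print(current_locations(events))  # {'N95-PALLET-017': 'county-hospital'}
```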
This story was originally published on Sept. 18, 2008, and is brought to you today as part of our Best of ECT News series.

Physicists tell us that only a small percentage of the universe can be seen directly. In fact, the vast majority of the universe is what they call “dark” — composed of matter and energy that we know is there but that we can’t observe, identify or analyze directly. The concept of “dark matter,” first described in 1933 by physicist Fritz Zwicky, has since evolved into one of the cornerstones of modern physics. In fact, scientists have posited that up to 96 percent of what’s “out there” in the universe is either dark matter or dark energy. More interesting than that: In the 75 years since its discovery, we have yet to figure out just what exactly it is. Sounds pretty fantastical, right? Fantastical sounding though it is, it turns out there’s really a lot of good evidence for why scientists believe this to be the case. They can, for example, measure gravitational interaction between two observable bodies and — fairly precisely — account for the amount of mass that must be there, even though they can’t see it directly. In other words, the indirect evidence allows them to conclude the existence of — and some of the properties of — the dark matter and energy. It makes perfect sense. It’s kind of like a stopped drain: You can’t see what’s causing the blockage, but you know for a fact that something is (because otherwise your sink wouldn’t be overflowing) and you can tell something about it (like how thick the clog is) based on how slow or fast water moves through it.

So-Called Dark Data

Interesting though this concept is on its own merits, it’s not normally one that we encounter in IT. But in this case, there’s actually a very practical reason that I’m bringing it up. Namely, in the same way that the vast majority of the universe is “dark,” the vast majority of data in our enterprises is dark in the same way. What I mean by this is that we all know that our networks and other infrastructure process a tremendous amount of data. Some of this data we know pretty well — compliance activities might have charted some of it out, some of it might be associated with a particular highly visible application set that we’re intimately familiar with, and some of it might be so business-critical that we always have one eye on it to make sure everything runs smoothly. But how much of the total data are we aware of? Definitely not 100 percent. Probably not 50 percent. Twenty percent? Ten? In fact, when it comes down to brass tacks, most of us are in the unfortunate situation that the vast majority of what transmits over our networks is, for lack of a better word — “dark.” We know it’s there — it has to be in order for our businesses to run smoothly. We see it move from place to place when we chart out things like bandwidth utilization or overall traffic patterns. But we don’t know, with any degree of certainty, what it is, where it’s going, where it came from, or why. It’s dark data. For the security organization, this can make for a particularly stressful state of affairs. At best, this dark data is related to legitimate business activity (meaning, of course, that we need to protect and safeguard it). At worst, it can be any number of things that we don’t want: malware, unauthorized/illegal activity, inappropriate user traffic (e.g. pornography, gambling), etc. Being chartered with safeguarding something that we have no knowledge of is never good — particularly when there’s so much of it.
And preventing something that we have no knowledge of is even worse. Unfortunately, however, that’s where we are.

Quantify the Problem

Given this set of circumstances, the challenges should be obvious: This unknown data poses a risk to the firm, we are chartered with reducing risks, therefore we must minimize the amount of unknown data. In other words, we need to maximize what we do know and minimize what we don’t know. Easy to say, hard to do. However, it’s important to realize that it’s not a completely unsolvable problem — at least if we approach it from a practical point of view. The temptation is to become overwhelmed by the problem and either a) ignore it or b) spawn an ineffectual, expensive, and overly complicated initiative that’s destined to fail from the get-go. Neither of those strategies works — at least not usually. Instead, a much more practical approach is to borrow a page from the physicists’ playbook.

The physicists who study dark matter can very precisely tell you what percentage of the universe they don’t understand. They can say with a high degree of certainty that “96 percent” is dark. But most of the time in IT, we aren’t there yet. To say it another way, we don’t even know what we don’t know. So, the first step would be to understand the scope of the problem. Start with an inventory of what you do know about your data. If you’re in a large organization, there are a bunch of people working overtime trying to solve exactly this problem (albeit for different reasons) — they just probably won’t share their results with you unless you ask. For example, individuals in the compliance arena are working very hard to catalog exactly where your organization’s regulated data is. SOX, HIPAA, or PCI compliance initiatives, for example, are oftentimes a good repository of metadata about what and where information is stored for their particular area of interest. Folks who are working on business impact analysis (part of disaster recovery) probably have a pretty good idea of what data is used and how by critical business applications. If you consolidate these sources of information into a “known universe” of your enterprise’s data, you can get a much broader view than you would otherwise have. Take the metadata (data about the data) that already exists and consolidate it.

The goal here is not to get to 100 percent — instead, it’s to create a foundation that you can build on in the future. Plenty of opportunities will arise to gain further information about the data in your firm, but unless you have a place to record it, it’s not going to be useful. So what you’re doing is creating a place for that metadata to go so that when you’re given the opportunity to learn about where more of your data lives, you can record it — and ultimately put it into its proper perspective as you begin to see how data sources and pathways relate to each other. Once you’ve cataloged the data that your organization does know about, you can start to look for other areas from which to glean information. At this point, if you’ve followed the first step above, you’ll have a place to record additional information as the opportunity arises. Building a new application? Map out what data it will process, where it gets its data from, and where it will send it to. Keep a record of that in your living, breathing “data inventory” (a minimal sketch of what such a record might look like follows below). Got an audit going on? Keep an eye on what the auditors sample, and ask them to work with you on moving your data inventory forward.
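To make that concrete, a data inventory can start as something as simple as a list of records, as in this minimal Python sketch. The field names and example entries here are illustrative assumptions, not a standard:

# Each record captures what is currently known about one data store or flow
inventory = [
    {
        "name": "payroll_db",                 # hypothetical example entry
        "owner": "HR",
        "learned_from": "SOX compliance catalog",
        "classification": "regulated",
        "flows_to": ["general_ledger"],
    },
]

def add_entry(name, owner, learned_from, classification, flows_to):
    # Record new knowledge as it surfaces, e.g. during an audit or a new build
    inventory.append({
        "name": name,
        "owner": owner,
        "learned_from": learned_from,
        "classification": classification,
        "flows_to": flows_to,
    })

add_entry("web_logs", "IT Ops", "BIA review", "unclassified", [])

The point of the sketch is the workflow, not the tooling: a flat list in any format works, as long as there is one agreed place where new knowledge about the data gets recorded.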
Just about any task that tells you something new about the data in your firm is fair game. This doesn’t have to be a fancy, heavily-funded initiative — and it doesn’t have to be farmed out. Be very skeptical of “data charting” initiatives, vendors that claim to be able to categorize all your data for you, or consultants who want to sell you data classification/mapping services. In fact, I’ve seen more organizations be successful at this by starting small, keeping an eye open for opportunities to add to their knowledge, and recording it when they do than I’ve seen organizations be successful trying to formally audit all of their data to catalog it. The first approach works because it’s a “living” activity — when it’s transparent and well-explained, folks understand what you’re trying to accomplish and they actively look to forward the goal when they have an opportunity. It’s “grass-roots.” The latter approach fails because it’s like trying to boil the ocean — it’s just too big and too complex.

Hopefully, if you’ve done these two things, you can increase your awareness of where your data is, what it’s for, and who’s using it. Even if you only go from understanding 10 percent of the data to 30 percent, you’re farther along than you were when you started.

Ed Moyle is currently a manager with CTG’s information security solutions practice, providing strategy, consulting and solutions to clients worldwide, as well as a founding partner of Security Curve. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.
<urn:uuid:9f5e936b-84ca-4748-a4e3-fb75a005e053>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/the-dark-side-of-data-65617.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00653.warc.gz
en
0.944316
1,982
3.375
3
We have learned how to open and read a file in Python in the Python File Read lesson in Python Training. In this lesson, we will focus on another action that we use with files: “write”. We will learn Python File Write with different examples. You can also check Python File Create.

As we have learned before, we use the Python open() function to work with files, for file handling. With the open() function, we use two different parameters. One of them is the file that we will work on, and the other one is the mode that we will use. The modes that matter here are: “r” to read, “a” to append, “w” to write (overwrite), and “x” to create a new file.

For Python File Write mode, we will use “w” as the second parameter of the open() function. With the help of this function and this parameter, we are able to modify and overwrite existing files after opening them. Besides the “w” parameter, we can also use the “a” parameter to append to files. In other words, with the “a” parameter, we can add new characters at the end of a file. Let’s show this with examples. (A consolidated code sketch of all the steps follows at the end of this lesson.)

Again, imagine that we have a demo file in the same folder as our Python script. The name of this file is fileabc.txt and the content of this file is:

Weak people revenge. Strong people forgive. Intelligent people ignore.

Firstly, let’s check the content with the open() function and read() method, and then close the file. The output is the three-line quote above. You can also watch the video of this Python lesson!

Now, let’s use append mode to add new characters to this file. Here, we will add the sentence “A Perfect Quote!” at the end of this demo file. To do this, we will use the open() function, “a” mode and the write() method. After this modification, we will also use the open() function with read mode and the read() method together to read the new content. With this Python File Write code, the new content of this demo file becomes:

Weak people revenge. Strong people forgive. Intelligent people ignore. A Perfect Quote!

We have learned how to add new characters at the end of a file. Now, let’s learn how to modify or completely change the entire content. To do this we will use the open() function and “w” mode together. Here, we will use the same demo file as above. We will change the entire content of this file to the below quote of Leonardo da Vinci:

Simplicity is the Ultimate Sophistication.

Firstly, we will open the file with “w” mode and modify the file with the write() method, then close it. After that, we will open it again to read. The output of this Python File Write code, and the new content of the demo file, is the single da Vinci quote above.

In this lesson, we have learned how to write to and modify an existing file with Python File Write modes. These modes are append (a) and write (w). You will use these methods in your coding activities very often.
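The original article’s code listings were lost in extraction, so here is a minimal, runnable sketch reconstructing the steps described above; the file name fileabc.txt comes from the example in the lesson:

# Read the original content
with open("fileabc.txt") as f:
    print(f.read())

# Append a sentence to the end of the file with "a" mode
with open("fileabc.txt", "a") as f:
    f.write(" A Perfect Quote!")

# Overwrite the entire content with "w" mode
with open("fileabc.txt", "w") as f:
    f.write("Simplicity is the Ultimate Sophistication.")

# Read the file again to verify the new content
with open("fileabc.txt") as f:
    print(f.read())

Using the with statement closes the file automatically after each block, which is the idiomatic alternative to calling close() yourself.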
<urn:uuid:e3dce1d9-3244-4ce5-8bbb-dc00341bc52c>
CC-MAIN-2022-40
https://ipcisco.com/lesson/python-file-write/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00653.warc.gz
en
0.88296
651
3.703125
4
The UK might have been the place where wonder material graphene - an allotrope of carbon in sheets just one atom thick - was discovered, but that doesn't mean the rest of the world is going to sit back and leave us to it: a group of Swedish universities has announced its own research into the material.

Since its discovery by Andre Geim and Konstantin Novoselov at the University of Manchester - a breakthrough for which they were jointly awarded the 2010 Nobel Prize in Physics - graphene has been touted as the solution to all the world's ills: IBM has produced 100GHz transistors and integrated circuits based on graphene, while the University of California at Berkeley believes it holds the key to boosting the speed of optical networking systems tenfold. A more recent discovery - that 'ribbons' of graphene will stand upright if given a 'boot' of another material, forming nanowalls one atom thick - could help chip designers smash the nanometre barrier, creating chips with components a fraction the size of current processors and drawing a tiny amount of power for equal or better performance.

In order to ensure the UK's place at the forefront of graphene research, the Department for Business, Innovation and Skills announced a £50 million investment project to create the Graphene Global Research and Technology Hub in Manchester - something other nations are keen to emulate. Sweden is the latest to jump on the graphene bandwagon, with Chalmers, Uppsala and Linköping universities receiving investment totalling £3.78 million from the Knut and Alice Wallenberg Foundation in order to organise a group of 30 engineers.

"We are now achieving critical mass, and will benefit from valuable cross-fertilization between several research areas, all of which are involved in graphene," explained Mikael Fogelström, the project coordinator. "The money will be used for everything from producing graphene to developing a variety of products, with basic research into experimental and theoretical physics along the way."

With the funding expected to enable Chalmers to run its graphene research project for the next five years, Fogelström is setting his sights still higher. "It would be a good idea to get together with more graphene research groups, and perhaps form a national research centre. That would be a good step to take for pursuing EU flagship funds."
<urn:uuid:44c2ef31-1b4a-46dc-ae36-87997d0fbb48>
CC-MAIN-2022-40
https://www.itproportal.com/2011/11/08/sweden-pushes-graphene-research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00653.warc.gz
en
0.951318
473
2.8125
3
Earlier this year, a hacker by the name of Mehdi Lauters published some data from an experiment he conducted using two Raspberry Pi devices running GPS, WiFi, and Bluetooth sniffing applications. Put simply, he traveled the city of Bordeaux, France, and collected wireless data from signals within his range. After collecting this data for six months, he summarized and published his findings here. Here's a summary from his write-up on the project:

You want to discover your city's public transport infrastructure? If people crossing your street are mainly tourists or neighbors? Check if you always take the tram with a given person who likes pizza and travels? Or maybe more when your neighbor is at home or not right now, and use to be there at this time?

Mehdi discovered 62 postmen using the Facteo app on a Samsung device, and he was able to determine which commuters on a particular train knew each other, based on their phone connections. See his project page on github for details on how he did all of this. His results aren't entirely surprising, in terms of the type of data that can be discovered through the use of wireless networks. However, it's a bit scary how easy and inexpensive it was to build and use his data extraction toolset:

All these scan were mainly done thanks to 2 Raspberries pi, one with a serial GPS and an external battery pack, and the other at a fixed position to monitor people every day at the same place. But scanning more data at several city points in the same time to profile users and find people streams is also available with low cost devices such as esp8266.

Mehdi goes on to discuss how to improve user profiling and build other features into the project. Notice that he was able to track individuals without access to any of their devices. He didn't need any expensive equipment, secret backdoors, federal warrants, or an army of highly trained computer hackers to get this information. This project is very easy for the public to replicate, and the instructions are all on github here.

This kind of data gathering is made possible through the use of a ‘sniffer.' This is a program that monitors and analyzes network traffic, and may be used on wireless networks, in a LAN or ISP infrastructure, or somewhere out in public like Mehdi's device. Any network traffic that is not encrypted is vulnerable to sniffing. Like most ‘hacking' tools, sniffers have legitimate uses, such as troubleshooting network issues or analyzing traffic patterns. Wireshark, SolarWinds, and PRTG Network Monitor all provide popular network sniffers / analyzers that are used for business purposes in networks across the world. Mehdi's project is a perfect example of how these analyzers can pull in data that can be used for a variety of agendas, including business, criminal, and espionage.

There are a handful of things you can do to protect yourself while you are out in public:

· Avoid WiFi networks that do not require passwords if possible
· Use a corporate VPN if you are using public WiFi
· Always use HTTPS when you visit a site
· Never accept untrusted certificates
· Do not enter financial or other sensitive information when connected to open WiFi networks
· Never transmit sensitive data without end-to-end encryption on a public WiFi hotspot

With the growth of smartphones and apps, the Internet of Things, and the new operating systems being built into vehicles, there is no end to the things that are leaking information about you.
Manufacturers and consumers need to follow best practices when using public networks and start thinking of their security and privacy at the device level.
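To make the first tip concrete, the sketch below uses Python with the scapy library to passively listen for WiFi beacon frames and flag access points that advertise no encryption, so you can see which nearby hotspots are open before trusting one. This is an illustration of the idea rather than Mehdi's actual tooling (which he published on GitHub); the interface name is a placeholder, and the wireless card must be in monitor mode with root privileges:

from scapy.all import sniff, Dot11Beacon, Dot11Elt

def flag_open_networks(packet):
    # Beacon frames advertise an access point's name and capabilities
    if packet.haslayer(Dot11Beacon):
        ssid = packet[Dot11Elt].info.decode(errors="ignore")
        stats = packet[Dot11Beacon].network_stats()
        # scapy reports {'OPN'} in the crypto set for unencrypted networks
        if stats.get("crypto") == {"OPN"}:
            print(f"Unencrypted network detected: {ssid!r}")

# "mon0" is a hypothetical monitor-mode interface name
sniff(iface="mon0", prn=flag_open_networks, timeout=60)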
<urn:uuid:27d7a639-8afa-465b-962b-fb7de6ff0334>
CC-MAIN-2022-40
https://fr.blog.barracuda.com/2017/09/18/commuters-and-travelers-at-risk-of-digital-spycraft/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00053.warc.gz
en
0.953037
756
2.5625
3
Deep Learning has swept through the IT industry, bringing benefits and better classifications to a number of applications. Inspired by the way the human brain works, the technology uses a layered learning process to enable the computer to classify, store and access data, which it can then refer to for learning. This means it can use a whole image to recognise, rather than relying on separate elements of that image. This is a cumulative process – the more elements it has to draw on, the better the classification – thus, the better the ‘learning’.

The benefits of this technology for face recognition and image classification make it hugely valuable in the field of security. It touches on every aspect of the security industry – from facial and vehicle detection to behaviour analysis. This, in turn, starts to change the focus of security from being reactive to being able to predict problems before they happen.

Hikvision has taken this technology and built a family of products to maximise its use. The DeepinView IP camera range and the DeepinMind NVR range work together to provide all the power and benefits of Deep Learning. While the cameras provide the smart ‘eyes’ of the system, the NVR represents the analytic and storage capabilities of the brain. The products help to tackle security on two fronts – recognition, monitoring and counting of people, and recognition and detection of vehicles. This uses Deep Learning technology at its most effective – for its ability to classify and recognise thousands of ‘features’.

Obviously, this multi-layered approach demands a lot of memory and processing power, which is one of the reasons the technology has only become widespread in the past few years. To put this into perspective, in the first stages of the technology, it took 1,000 devices with 16,000 CPUs to simulate a neural network. Now, just a few GPUs are needed. Hikvision is partnering with the largest of the chipset brands – Intel and nVidia – to explore the possibilities of Deep Learning for the surveillance industry. Hikvision’s innovation also facilitates and improves on this – the H.265+ codec radically reduces transmission bandwidth and data storage capacity requirements. This means there’s no loss of quality even though the data being shared and stored is exponentially higher.

Applications are numerous. The technology could enable the system to provide a black list/white list alarm, for example, which could come in very handy in access control scenarios. It could also be used to recognise unusual behaviour – possibly allowing security staff to prevent an issue if people are found loitering nearby, for example.

The new premium range of products will further extend the quality and capabilities of security systems. They will also allow security professionals to start planning to avoid issues, rather than reacting to them. This could be the next evolution of the whole industry – using AI to change the world, one Hikvision solution at a time.

For more information, check out our article “How Deep Learning Benefits the Security Industry”. And keep an eye on the Hikvision website to see launches of the new DeepinView and DeepinMind products later this year.
<urn:uuid:e6acc245-7e1a-4e14-a5f7-53a446f9ee76>
CC-MAIN-2022-40
https://www.hikvision.com/hk/newsroom/latest-news/2017/feature-article---deepening-the-value-of-surveillance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00053.warc.gz
en
0.947639
637
2.6875
3
Power X, a large energy conglomerate, allows remote employees and contractors to access company systems via a Virtual Private Network (VPN). The company’s technical team would prefer on-site access only, but they learned quickly during the COVID-19 pandemic that such security precautions can hinder business continuity.

Joe Johnson works for a third-party vendor that contracts with Power X. He oversees software updates for a specific Supervisory Control and Data Acquisition (SCADA) system, which monitors and controls power transmission and distribution for one of Power X’s largest subsidiaries. While Joe is on vacation in Costa Rica, he remembers that he forgot to tell Bob, his coworker, about a specific patch upgrade available today. It’s not a big undertaking, so Joe decides to do it himself. He connects to the hotel Wi-Fi, immediately logs into the Power X VPN, completes the upgrade, and then heads to the beach.

Joe doesn’t realize that he has just fallen victim to a Man-in-the-Middle (MitM) attack. The hotel Wi-Fi that seemed legitimate was not. Instead, it was a fake hotspot created by a local hacker to collect passwords and other data from unsuspecting tourists. In the minute it took for Joe to connect to the VPN, the attacker gained access to Joe’s keystrokes, log files, and more. The hacker now has all he needs to exploit any weakness in Power X’s VPN – and negatively impact millions of Americans.

The Remote Access Balancing Act in Critical Infrastructure

Power X serves millions of customers across the United States through its subsidiaries. These companies offer a variety of energy resources, including natural gas, carbon-free nuclear, solar, wind, and more. Power X knows that regardless of the resource, its customers are looking for them to “keep the lights on” with affordable, clean, and reliable energy. Power X is just one example of an organization within the critical infrastructure industry. Critical infrastructure companies provide essential assets, such as water, electricity, transportation, and healthcare, to the masses. Without just one of these assets, society as we know it would change forever.

While there is no Joe and no Power X in real life, the example above illustrates the importance of secure remote access in modern critical infrastructure environments. Companies must embrace the efficiencies of distributed workforces while safeguarding their systems' confidentiality, availability, integrity, authenticity, and non-repudiation. VPNs have vulnerabilities. Alone, they cannot ensure secure remote access to any environment – let alone a critical infrastructure one. As a result, organizations like Power X need comprehensive, future-proof security strategies to meet today’s remote access challenges.

How Does Epoch Axis Ensure Secure Remote Access?

Epoch Concepts has created an all-inclusive, bundled cybersecurity solution for critical infrastructure organizations needing a modern approach to secure remote access. We call it Epoch Axis. With Epoch Axis, we make it simple: We uncover and analyze your problems – and design a solution that alleviates them. We then integrate and implement that solution, incorporating hardware and software that complement your specific business needs and technology environment.

Epoch Axis integrates with:
- External user directories (e.g., Active Directory) for secure user management and Virtual Desktop Infrastructure (VDI) computing.
- Leading systems management and identity management solutions.
It also includes an Application Programming Interface (API) for even deeper integration.

Epoch Axis prevents:
- The loss of sensitive data, through extensive audit controls and the recording of support sessions.
- Persistent threats by bad actors, and reduces the overall attack surface thanks to granular, role-based access controls. Imagine matching access requests with the appropriate technologies!

Epoch Axis allows:
- Exhaustive oversight of all trusted third-party actions, including the video playback of all desktop screen interactions.
- Access over internal and external networks, as well as the internet.

Epoch Axis works with multiple operating systems and various system formats, including laptops, servers, kiosks, etc.

The Benefits of Epoch Axis

At Epoch Concepts, we don’t sell solutions; we become a partner in your success. Our engineers and architects are expert problem-solvers, and they know the ins and outs of critical infrastructure environments. We understand that failure is not an option, and we are ready to become part of the security strategy that eliminates it. With Epoch and Epoch Axis, you gain:

• Secure remote computing access that exceeds regulatory compliance requirements.
• Seamless access to remote data and processing capabilities for internal/external personnel while ensuring minimal Operational Technology (OT) exposure.
• Information assurance (IA) while enhancing user controls and monitoring.
• Reduced overall attack surfaces thanks to role-based access controls.
• An affordable, scalable solution with the flexibility to accommodate future security enhancements.

We are ready to become your all-in-one Information Technology (IT) solutions provider - from ideation to integration and innovation. Are you ready to revamp your security strategy? Call us today to see how Epoch can support your critical infrastructure environment and mission.
<urn:uuid:6df2573d-a5e1-458b-b810-3a5de73cb3b5>
CC-MAIN-2022-40
https://blog.epochconcepts.com/blog/secure-remote-access-in-critical-infrastructure-environments
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00053.warc.gz
en
0.923935
1,065
2.65625
3
What are DDoS attacks all about? Why are they so crippling? And how can you defend against them? Learn everything you need to know about the next DDoS attack that may target your system – and how to respond.

What is a distributed denial-of-service attack?

DDoS stands for Distributed Denial of Service. Sounds complicated? Don’t worry, it’s quite a simple concept to grasp. During DDoS attacks, huge numbers of “bots” attack target computers. Hence, many entities are attacking a target, which explains the “distributed” part. The bots are infected computers spread across multiple locations. There isn't a single host. You may be hosting a bot right now and not even know it.

When DDoS attackers direct their bots against a certain target, it has some pretty unpleasant effects. Most importantly, a DDoS attack aims to trigger a “denial of service” response for people using the target system. This takes the target network offline. If you’ve repeatedly struggled to access a retail website, you may well have encountered a denial of service. And it can take hours or days to recover from.

How does a DDoS attack work?

Why do DDoS attacks cause so much damage? In part, it’s simply a question of resources. Servers have a certain capacity – they aren’t limitless processing hubs. When they breach their capacity limits, systems within the server act to preserve the server as a whole. This takes targeted websites or users offline in the process. Generally, attackers use a variety of denial of service techniques to bombard their targets – from data packets to messages or connection requests. All these techniques have the effect of turning targets into confused, slow, and often dysfunctional systems.

To achieve this, DDoS attackers need to control a bot army (or botnet). That’s the tricky part. However, by using social engineering (such as phishing) to spread malware or enticing users to download it, hackers can create the bots they need. After attackers infect your system, it becomes a “bot.” You no longer have complete control over what your computer does online. Instead, control passes to a “master,” who orchestrates DDoS attacks. To do so, the “masters” weave together bots into botnets and coordinate them via special software. These botnets can be massive. For instance, there are estimates claiming that the Srizbi botnet included more than 450,000 bots. And these enormous forces continue to wage war on web users across the world, often with devastating results.

The main types of DDoS attacks

When we say “a DDoS attack,” it generally means a large-scale attack aimed at shutting down a particular target. However, there are several variations in how DDoS attacks work. Typically, this depends on the part of the network that suffers the attack. Network connections consist of many components, so a DDoS attack could target any one of them to intercept the service.
In the network architecture OSI model, these components are more commonly known as layers - and they help us to describe the process of connectivity:

Application layer (7th layer) – topmost layer that specifies protocols for interactions with the network
Presentation layer (6th layer) – makes sure that the data is in a standardised format that the two separate systems understand
Session layer (5th layer) – a mechanism that manages open network sessions intended for particular exchanges
Transport layer (4th layer) – ensures the reliable arrival of messages and confirms their reception
Network layer (3rd layer) – responsible for routing data packets through intermediaries like routers and servers
Datalink layer (2nd layer) – organizes the data into the packets that are ready to be sent
Physical layer (1st layer) – defines the transmission of raw bits over physical data links

In this sense, DDoS attacks fall into three categories: application-layer attacks, protocol attacks, and network-centric attacks, depending on which layer they target. Here’s what each of them does.

Application layer attacks

Application-layer attacks (or layer 7 attacks) are a DDoS attack category that targets the outermost communications layer, which specifies protocols and interface methods for data exchange. The aim is to overwhelm these weak points to bring the network to its knees, misdirect it, or disturb the exchanges, making them painfully slow.

BGP hijacking – targets the Border Gateway Protocol used to standardize routing and information exchange data. This application-layer attack aims to route Internet traffic to an unintended destination by impersonating ownership of IP address groups via IP prefixes. This information can quickly spread to other networks, routing users to incorrect webpages.

Slowloris attack – targets HTTP connection requests to keep as many simultaneous connections open as possible. This exploits the fact that all servers have finite processing power. It only affects the web server, dramatically slowing it down and denying requests from real users. It makes the service painfully slow and denies genuine requests.

Slow POST attack – works by sending correctly specified HTTP POST headers to the targeted web server; however, the message body is intentionally sent at a very low speed. Since the message header is legitimate and there’s nothing wrong with it, the server responds to the request. If the server received thousands of these messages, it could quickly deny all other requests, exhausting the server's resources.

Slow read attack – you can think of the slow read attack as a reversed slow POST attack. The difference is that in the case of a POST attack, the method is slowly sending the message body. In the case of a slow read attack, the HTTP responses are intentionally accepted and read at a very slow speed. The target server has to keep these requests open since the transfer is in progress, exhausting its resources, especially in cases with massive botnets.

Low and slow attack – this type of attack can target Transmission Control Protocol (TCP) via HTTP or TCP sessions with super slow rates. It's a method to slowly and steadily overwhelm servers, flooding the pipeline and denying genuine user requests to connect. This attack requires far fewer resources to execute and is even possible without a botnet. Plus, it bypasses usual DDoS mitigation methods because the sent packets are genuine.
Large payload POST attack – this type exploits the Extensible Markup Language (XML) encoding that web servers use to exchange data over HTTP. In this instance, the web server receives data encoded in XML. The data, however, is altered by the attacker so that once it is in memory, its size becomes many times larger. If the server receives a large number of these requests, its memory quickly depletes.

Mimicked user browsing – as the name suggests, this DDoS attack imitates a real user's browsing patterns. However, this is actually a massive-scale botnet. Each bot imitates real people going to the websites, generating high visitor spikes. It makes it impossible for real user data to go through, denying their queries.

Protocol attacks

Protocol attacks target the data transmission process, exploiting the transport and network layers. They target the protocols authenticating pre-selected connection methods. This type of attack accumulates pressure by bullying firewalls and sending faulty packets that crash the systems.

SYN floods – this attack exploits vulnerabilities in the TCP handshake system, which requires a SYN request, SYN-ACK, and ACK packet to authenticate the exchange. The attacker sends a SYN request to the server, and the server responds with a SYN-ACK message, waiting for an ACK confirmation from the client. However, the hacker sets up his equipment in such a way that the ACK packet never arrives, leaving the server hanging. Because a given server can hold only a finite number of simultaneous TCP interactions, a high volume of these requests can quickly cause a crash.

Fragmented packet attacks – this attack type targets the maximum possible capacity of the Internet Control Message Protocol. There is a pre-determined size that a normal internet communications datagram cannot exceed. The attacker fragments a packet and sends it in parts; once the receiving server reassembles the packet, it returns an error, crashing the system.

IP/ICMP fragmentation attack – this attack is set up by sending malicious datagram packets that exceed the maximum transmission unit. The catch, in this case, is that if a packet is too large, it is transferred to temporary storage. Once there, it fills the memory, causing other requests to be denied.

Smurf DDoS – this attack exploits the Internet Control Message Protocol with a spoofed victim's IP to generate infinite query loops. The attacker uses the victim as bait, amplifying the generated queries from the server network. It works as if the target had requested the queries itself, leaving it overwhelmed with responses.

Volumetric attacks

Network-centric or volumetric attacks mainly involve blitzing targets with data packets. This type of attack accumulates an enormous amount of traffic and directs it to servers, which are unable to sustain full load for a prolonged period and will crash.

HTTP flooding attack – this attack type overwhelms the targeted server with a massive number of HTTP requests. Too many processed requests leave fewer available slots for genuine users. This denies service to them because the server is busy responding to the bots' queries.

ICMP flood attack – the most common type of volumetric attack. It works by sending a high number of Internet Control Message Protocol requests, also known as pings. Each time the server receives such a request, it has to diagnose the health of its network. This exhausts resources, and responding takes slightly longer than generating a query.

IPSec flood – this attack targets the victim’s VPN server, trying to take it out of order.
The attacker sends a large volume of IPSec IKE requests, making the server respond with redirected traffic. The good news is that this type of attack is a thing of the past: after the introduction of the IKEv2 tunneling protocol, this vulnerability was largely solved.

UDP flood attack – this type of attack uses a large number of User Datagram Protocol (UDP) requests, sent faster than the server can respond. Due to the added cumulative effect of being bombarded with requests that return no destinations, even the server’s firewall can crumble. This also stops the service from responding to genuine requests.

Reflection amplification attacks – this attack works by sending a large volume of UDP packets with spoofed IP addresses to a DNS server. Essentially, it bounces them to the victim’s IP. The target gets hit by a load of responses as if it had contacted all these servers. This allows the hacker to remain anonymous, harassing innocent users with huge spikes clogging the bandwidth.

The dangers of DDoS attacks

There are plenty of reasons to neutralize the threat posed by DDoS attacks and botnets. Here are but a few examples of what can happen if you let your defenses drop.

Commercial systems can fail – in 2018, the Danish rail operator DSB fell victim to a DDoS attack, and it decimated their routing schedules. Ticketing systems went down, and trains slowed to a crawl to protect rider safety.

Gaming servers disruption – in 2016, the world of online gaming was rocked by the discovery of what came to be called the Mirai botnet attack. In this case, attackers sought to knock out competing Minecraft servers (which used to be a common money-making scheme). This attack didn’t just disrupt Minecraft players around the world. What’s even worse is the fact that the botnet went “rogue,” inflicting damage across servers in the eastern USA.

Bankruptcy is possible – back in 2014, the internet company Code Spaces proved to be a great example of the worst-case DDoS attack scenario. After repeated attacks, the coding hub closed its doors. This is something that could happen to any organization – all it takes is leaving the door open to DDoS attackers.

What are the effects of hosting a bot on your system?

One of the worst aspects of DDoS attacks is how hard it is to detect whether your system is compromised. While there are some effects on connection speeds, most users barely notice any of this. Instead, they continue their normal online activities, blissfully unaware of the damage they’re spreading worldwide. However, there are consequences for everyday users as well. For example, gamers can see connection speeds drop and latency increase dramatically when DDoS attacks take place. Some games like Runescape have been victims of such attacks, resulting in terrible ping for many players.

DDoS attack map

Online you can find many data flow visualizations that pinpoint cyber-attack clusters. Such maps often encompass botnets, hubs setting up for reflection attacks, and more. A DDoS attack map, then, is one way to filter out just the data that portrays large-scale DDoS attack directions, showing them on a map using historical records. Here are a few well-known examples.

Arbor Networks DDoS attack map

The product of a collaboration between Google Ideas and Arbor Networks, this is a live data visualization that also doubles as a source for historical data and trends for DDoS attacks.
The project uses data gathered by Arbor's proprietary ATLAS threat intelligence system, in collaboration with ISPs who voluntarily agreed to share anonymized data collected on their users.

Fortinet threat map

The Fortinet threat map acts as a free demo version of what would be available if you decided to opt in and become a full-fledged FortiGuard user. Displaying data collected from anonymized users of Fortinet products, it shows the more heated eruptions indicated with different color codes. Actual FortiGuard users have a better version on their hands: they can more easily monitor what threat may be looming nearby on a personalized map.

Bitdefender live cyber threat map

Although Bitdefender is more famous for its anti-virus service, it also has a cybersecurity threat map that displays DDoS attacks in real time. You can filter the reports by attack type and targeted country.

Largest DDoS attacks

Large-scale DDoS attacks can have devastating consequences even for those with the workforce and resources to mitigate the damage. Here are the most vicious examples of past DDoS attacks. Do keep in mind that this list is incomplete, and most likely, something as disastrous could always occur out of the blue.

1. CloudFlare DDoS attack in 2014

In 2014, cybersecurity heavyweight CloudFlare found itself under a large-scale DDoS attack. It started as a reflection attack on one of its customers. However, due to CloudFlare’s cyberthreat mitigation methods, its other servers in Europe caught the damage, which was massive even when spread out across several fleets. This attack exploited a Network Time Protocol (NTP) vulnerability, using these servers to bounce spoofed requests to the victims, hitting other CloudFlare servers on its way.

2. GitHub DDoS attack in 2018

GitHub’s example shows how a timely alert can help to mitigate even large-scale attacks. There was no large botnet behind it; however, the attack peaked at 126.9 million packets per second, amounting to almost 1.35 terabits of traffic per second. It was executed by flooding memcached servers with spoofed requests, considerably amplifying the scale and redirecting the responses to the GitHub network. Prolexic Technologies, the DDoS mitigation provider that GitHub used, kicked in and intercepted the attack.

3. Dyn attack in 2016

Domain Name System (DNS) provider Dyn, Inc. came under fire from a DDoS attack targeting its systems. Because Dyn's DNS servers handled lookups for so many services, the disruption sent shockwaves through PayPal, Amazon, Reddit, and more webpages, making them inaccessible. The attack flooded Dyn's DNS servers with lookup requests using the same Mirai botnet.

4. Estonian incident of 2007

One of the biggest DDoS attack examples may also be an example of foreign country intervention. It's one of the famous examples of Russian hackers making things worse for you in cyberspace. In 2007, Estonia relocated a Soviet Union monument dedicated to the soldiers who perished in World War II. Not long after, the Estonian parliament, government services, and even news media and broadcasters found themselves in the middle of a large-scale DDoS attack. It is widely believed that Russia had directed the attacks. However, since Russia didn't comply with Estonian requests to assist the investigation, exactly how it happened remains a mystery.

How to prevent and stop DDoS attacks

The tricky thing about DDoS attacks is that there’s no one-click solution that will protect you.
DDoS attacks are very pervasive and have several workarounds to bypass defensive measures by imitating genuine user traffic. However, there are still several things a business can do to minimize the risk:

- Monitoring. If you're running a business, you should be actively monitoring your network for all kinds of possible threats, DDoS attacks being just one of them. The faster you distinguish a botnet bombardment from genuine spikes in user traffic, the quicker you can mitigate the damage before your servers melt from the overload. (A minimal monitoring sketch appears at the end of this article.)
- Have a shield ready. You'd be surprised how many business owners take the cheap route when setting up their servers. It can quickly backfire if the server configurations are faulty. Adding a barrier like a firewall with appropriate traffic limits can help you to avoid a blitz from perpetrators with more modest botnets.
- If you're under fire, act quickly. One of the classic fire-safety tips goes as follows: “when on fire, stop, drop and roll.” The same is true when you're a target for DDoS. Hence, you should have a plan for what to do when an attack is already in progress. In one case, this might mean contacting your ISP to ask them to reroute traffic. In other cases, this might mean contacting your DDoS mitigation service provider. It will heavily depend on the situation that you're in. However, what holds true in all cases is that the more quickly you react, the better your chances of stopping it before it causes too much damage.

For most individual users, most of these tips likely won't be that useful. It's not very cost-effective to pay for a DDoS mitigation service if you're just casually browsing social media and watching Netflix. However, there are also a couple of things that you could do to play your part.

- Protect your network. The main thing that each user should do is to make sure that his system isn't taken over by a hacker. This can happen by clicking on suspicious links and installing malware that compromises your network, which then gets incorporated into a large-scale botnet, weaponizing your resources against the attacker's targets.

What is the difference between DoS and DDoS attacks?

The main difference between DoS and DDoS attacks is a difference of scale. A DoS attack uses one computer to flood a server with packets to shut it down. DDoS does the same thing, but it ups the scale, using many different devices to achieve the same goal.

Are DDoS attacks illegal?

Cybercrimes are uncharted territory for many countries, so the legality and punishments for those offenses will vary greatly. For example, in the US, DDoS attacks are considered illegal under the Computer Fraud and Abuse Act. Among European countries, the United Kingdom is particularly noteworthy because it has specifically outlawed DDoS attacks under the Computer Misuse Act. Do keep in mind that even where such actions are deemed illegal, it will be tough to find justice in court. DDoS attacks by nature use intermediaries, so the attacker can do the job from a safe distance without revealing his identity.

How long do DDoS attacks last?

If there are no mitigation procedures implemented, DDoS attacks can last as long as the attacker wants. Some attack types can be quick because they have a clear intent. For example, the Ping of Death sends a malformed packet that quickly crashes the system. Naturally, the attack itself happens reasonably quickly.
A SYN flood, by contrast, takes a lot more time to take effect, so the attack naturally lasts longer. Many DDoS attacks go unreported, so be cautious about sites that claim to know approximate attack durations. Treat it on a case-by-case basis.

Can a DDoS attack be traced?

The main problem with tracing DDoS attacks is that the attackers use intermediary servers or botnets to do their job. Since the traffic is coming from thousands of locations and IP addresses simultaneously, it will be challenging to sort through it, especially considering that this traffic is overloading your network. Usually, the hackers themselves are using proxies to hide their information.

Does VPN stop DDoS?

A VPN is a very effective method for a typical user to add protection from DDoS attacks. This works by masking your real IP address and displaying the one assigned by your VPN provider. In essence, if someone tried to DDoS you, the attack would be directed at the VPN server instead of your own. Plus, VPN providers have many mitigation measures set in place to stop an attack in its tracks. The only way this wouldn't be useful is in cases when your attacker already knows your real IP address. Then a VPN would not help, because your home router, for example, would still be affected, denying you the service.
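To make the monitoring advice above concrete, here is a minimal sketch in Python using the scapy library. It is an illustration only, far simpler than production DDoS mitigation, and the threshold and time window are invented values you would tune against your own traffic baseline:

from collections import Counter
from scapy.all import sniff, IP

THRESHOLD = 1000  # packets per window; a placeholder, tune to your baseline
counts = Counter()

def tally(packet):
    # Count packets per source address
    if packet.haslayer(IP):
        src = packet[IP].src
        counts[src] += 1
        if counts[src] == THRESHOLD:
            print(f"Possible flood from {src}: {THRESHOLD} packets this window")

# Sample one 60-second window; real monitoring would run continuously
sniff(prn=tally, store=False, timeout=60)
print("Top talkers:", counts.most_common(5))

Note that simple per-IP counting catches naive floods but not distributed or spoofed-source attacks, which is exactly why the article recommends dedicated mitigation providers for anything business-critical.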
<urn:uuid:265b807a-b19a-46ab-b582-fdd28b86bcbb>
CC-MAIN-2022-40
https://cybernews.com/security/ddos-attacks-explained/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00053.warc.gz
en
0.935453
4,522
3.0625
3
Visual Explain for Run SQL Scripts

October 22, 2008 Skip Marchesani

The Run SQL Scripts function, a.k.a. the SQL Script Center or Script Center, allows the user to execute all or a subset of a script that contains one or more SQL statements and/or batch CL commands. It is part of the Navigator Database function and is an extremely powerful and flexible tool with lots of functionality that can have a very positive impact on application developer productivity. One of these functions is Visual Explain, which is used to investigate and improve the performance of SQL statements. Visual Explain creates a query graph or diagram that graphically displays the execution of a SELECT, INSERT, UPDATE, or DELETE SQL statement, and is very useful in helping to understand the execution costs (performance) of a specific query. Visual Explain will also recommend any indexes that can be created to help improve SQL performance.

To see how Visual Explain can be used, let’s look at executing a very simple SELECT statement using Run SQL Scripts. In the example shown in Figure 1, all columns are being selected from the table called NAMES, where the number in the column called LOG is greater than zero, and the results set is ordered by LOG number in ascending sequence (a reconstruction of the statement appears at the end of this article). To run Visual Explain on a specific SQL statement, place the cursor on that statement or highlight the statement, and then click on Visual Explain in the toolbar. When you click on Visual Explain, a drop-down menu appears with the options Explain, and Run and Explain. If you choose Explain from the drop-down menu, query optimization is performed and the query diagram is displayed along with the optimization information without actually running the query. If you choose Run and Explain from the Visual Explain menu, the query is optimized and actually run before the query diagram is displayed along with any results set. The Run and Explain option may take substantially more time than just the Explain option, but the query diagram and associated information will be more accurate.

Initially for this example the Explain option will be selected, which means that optimization will take place, but the query will not be run. After selecting the Explain option, there will be a short wait until the Visual Explain window and query diagram in Figure 2 are displayed. The left pane of the Visual Explain window shows the query diagram and the nodes in the diagram, and the right pane shows the detailed information for the highlighted node, which in this case is the Final Select node. There is one node displayed for each step in the query execution process. To highlight a node and display its associated information, just click on the node in the diagram. If you look at the last line under Time Information in the information panel, you will see that the estimated query runtime or execution time is 625.679 milliseconds. If you scroll down to the bottom of the right pane, the last line tells us the query engine used was SQE, as opposed to CQE. This is shown in Figure 3. SQE is the new query engine and optimizer and is the preferred of the two engines.

If you look at the query diagram in Figure 3 you will see that the first node is a table scan and the second node is a temporary sorted list. A table scan means that the entire table is being read in arrival sequence, starting with row one and ending with the last row in the table. This is a performance scenario that is going to use a maximum of system resources to execute the query.
In other words, it’s not going to perform very well. Past experience with seeing the nodes Table Scan and Temporary Sorted List together tells me that this query is a good candidate to have its performance improved with the creation of an index. The next step is to click on the dancing feet on the right side of the bottom toolbar (Statistics and Index Advisor in Figure 3) to see what indexes, if any, Visual Explain is recommending. Click on the dancing feet, and then click on the Index Advisor tab in the resulting Statistics and Index Advisor window as shown in Figure 4. The Index Advisor is recommending that a Binary Radix Index (as opposed to an EVI or Encoded Vector Index) be created over the NAMES table. If there is more than one index recommendation in this window, select one and click on CREATE in the lower right corner of the window. When you click on CREATE, the New Index window is displayed, as shown in Figure 5, along with the detailed attributes of the recommended index. To create the recommended index, enter an index name on the first line–in this example the name is Log_Nbr–and then click OK on the bottom right of the window to create the index. Once the index is created you are returned to the Index Advisor page to give you the option of creating any additional recommended indexes. In this case there are none, so you can close both the Index Advisor and Visual Explain windows to return to the Run SQL Scripts window.

Referring to the Run SQL Scripts window shown in Figure 1, click on Visual Explain on the toolbar and select Explain to run Visual Explain a second time to see the effect that the newly created index has on query performance. The first thing you should note in the new Visual Explain window shown in Figure 6 is that the total estimated run time has dropped from 625 milliseconds down to just 64 milliseconds–almost a 10 to 1 improvement in query run time! Next note that the query diagram has changed and that the first three nodes of the initial query diagram shown in Figure 3 have been replaced by an Index Probe and a Table Probe, which tells us that the optimizer chose to use an index the second time, instead of doing the table scan it did the first time that Visual Explain was run. When you click on the Index Probe node to highlight it as shown in Figure 6, the detailed information and name of the index the optimizer chose to use are displayed. The first line under the heading Index Info tells us that the name of the index used is LOG_NBR. This is the index recommended by the Index Advisor and created in Figure 5. You can verify the actual or real performance improvement by running Visual Explain a third time and selecting the Run and Explain option.
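For reference, the statement and index discussed above appeared only in the article's screenshots, so the following SQL is a reconstruction from the prose rather than the author's exact text:

SELECT *
  FROM NAMES
  WHERE LOG > 0
  ORDER BY LOG;

-- Binary radix index recommended by the Index Advisor
CREATE INDEX LOG_NBR
  ON NAMES (LOG);

With this index in place, the optimizer can satisfy both the WHERE predicate and the ORDER BY via an index probe, which is why the temporary sorted list disappears from the second query diagram.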
He has been a speaker for user groups, technical conferences, and System i, iSeries, and AS/400 audiences around the world. He is an award-winning COMMON speaker and has received its Distinguished Service Award. Send your questions or comments for Skip to Ted Holt via the IT Jungle Contact page.
<urn:uuid:1dee0b2a-a983-43c2-9043-5892c220ef5e>
CC-MAIN-2022-40
https://www.itjungle.com/2008/10/22/fhg102208-story01/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00053.warc.gz
en
0.913214
1,509
2.59375
3
Security experts have long suspected that in-flight Wi-Fi could create an entry point for hackers, and a new report issued by the US Government Accountability Office (GAO) describes the danger of such attacks. The report, titled “FAA Needs a More Comprehensive Approach to Address Cybersecurity As Agency Transitions to NextGen“, reveals, for example, how IP networks leave flights “open” to cyber-attacks (in-flight wireless, internet-based cockpit systems).

“IP networking may allow an attacker to gain remote access to avionics systems and compromise them,” states the report.

The report highlights two principal sources of problems. According to the experts, the flight cockpit and passengers use the same router and share the same internal network; this means that a passenger could interfere with the control console, creating serious problems. Airplanes are very sophisticated systems, comparable to a complex network in which each system runs its own software component that could be compromised, as could the information exchanged between its parts. Many investigators have noted that an attacker with deep knowledge of a plane's systems could intentionally cause serious problems with its normal operation.

“The experts said that if the cabin systems connect to the cockpit avionics systems (e.g., share the same physical wiring harness or router) and use the same networking platform, in this case IP, a user could subvert the firewall and access the cockpit avionics system from the cabin,” the report reads.

Because nowadays everyone uses smartphones and tablets, things get even worse: “The presence of personal smartphones and tablets in the cockpit increases the risk of a system’s being compromised by trusted insiders, both malicious and non-malicious, if these devices have the capability to transmit information to aircraft avionics systems.”

“One cybersecurity expert noted that a virus or malware planted in websites visited by passengers could provide an opportunity for a malicious attacker to access the IP-connected onboard information system through their infected machines.”

We can agree that until now we haven’t seen any attack on an aircraft coming from “outside” or “inside”, but the threat is real and we need to make sure it never happens. In 2013, security consultant Hugo Teso proved the point, demonstrating how, from a smartphone, he could exploit the Automatic Dependent Surveillance-Broadcast navigation system, as well as the plane’s Flight Management System. After the method was demonstrated, the vulnerability was patched.

The report also says that the FAA is taking steps toward better cybersecurity policies; a group of experts is working together on this, and a draft expected in September 2015 will provide a guide to restructuring the IT infrastructure.

In conclusion, I think some steps remain before we can feel safer when boarding and traveling in an airplane. Cybersecurity should be a vector of investment, with stricter regulations, certification standards, proprietary technologies, and so on, but all of this takes time. I look forward to seeing what improvements the industry makes in the coming years for flight safety and the cybersecurity of onboard systems.

About the Author
Elsio Pinto (Security Affairs – Flight, hacking, GAO)
<urn:uuid:b8fc92fe-e745-4daa-8bd4-f5a39b0fb28f>
CC-MAIN-2022-40
http://securityaffairs.co/wordpress/36059/hacking/in-flight-hacking.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00253.warc.gz
en
0.934117
688
2.53125
3
How to Use Data Analytics to Fight Climate Change
The globe is predicted to cross the global warming barrier between 2027 and 2042, according to research. Based on mathematical models that assess the current condition of the Earth's climate, scientists expect that once humanity hits that barrier, temperatures will rise by 1.5 degrees Celsius.
Fremont, CA: It is impossible to dispute that we live in a data-driven world. As our reliance on new technologies grows, organizations can gather raw data about our interactions and activities, run models, and generate conclusions that support important choices. Data analytics tools and procedures can also aid in developing long-term solutions aimed at mitigating the consequences of global warming. For example, scientists can perform qualitative and quantitative assessments using data analytics techniques to offer reliable predictions of catastrophic climate consequences that might materialize in a matter of years.
Climate change mitigation is a race against time requiring coordinated efforts to alter whole lifestyles, consumer patterns, and societies. The problem is figuring out how to get everyone on the same page. Unfortunately, due to certain nations' withdrawal from international climate agreements, considerable progress has yet to be accomplished. Although data analytics could have arrived sooner, it is not too late for this growing technology to play a role in determining the most effective action plans. For the time being, data analytics is being utilized to assist current climate efforts and contribute to the increasing body of knowledge in climate research.
- Analyzing carbon footprints
Companies can accurately assess their carbon emissions by using AI-driven solutions. They can then determine what is needed to reduce their carbon footprints and implement sustainable supply chain solutions (a minimal sketch of this kind of activity-based accounting appears after this list).
- Reclaim lost ecosystems
Big data also aids conservation initiatives. For example, ecologists can use historical data to locate regions that have revived after extended periods of drought or forest-fire devastation.
- Promote climate awareness
Despite denialists' best attempts to restrict discussion of climate change, the data on coming climatic calamities remains strong. It is simply a matter of presenting credible proof and educating the audience through data analytics.
- Develop sustainable products
Data analytics can assist firms involved in sustainable manufacturing in developing product designs that reduce environmental consequences. These firms can improve their production lines and outputs for better sustainability by evaluating the links between materials, logistics, product design, and carbon emissions.
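As a rough illustration of the activity-based carbon accounting mentioned in the first point above, here is a minimal Python sketch; the activity names and emission factors are illustrative placeholders, not authoritative values:

```python
# Minimal sketch of activity-based carbon accounting: emissions are
# estimated as activity volume multiplied by an emission factor.
# All names and factors below are illustrative placeholders.

EMISSION_FACTORS = {          # kg CO2e per unit of activity (illustrative)
    "electricity_kwh": 0.4,   # per kWh consumed
    "diesel_liters": 2.7,     # per liter burned
    "road_freight_tkm": 0.1,  # per tonne-kilometer shipped
}

def footprint(activity_data: dict) -> float:
    """Return the estimated total footprint in kg CO2e."""
    return sum(EMISSION_FACTORS[name] * amount
               for name, amount in activity_data.items())

if __name__ == "__main__":
    month = {"electricity_kwh": 12_000, "diesel_liters": 800,
             "road_freight_tkm": 5_000}
    print(f"Estimated footprint: {footprint(month):,.0f} kg CO2e")
    # Rank the largest contributors to see where reductions matter most.
    for name, amount in sorted(month.items(),
                               key=lambda kv: EMISSION_FACTORS[kv[0]] * kv[1],
                               reverse=True):
        print(f"  {name}: {EMISSION_FACTORS[name] * amount:,.0f} kg CO2e")
```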
<urn:uuid:2525726a-9cfc-414a-a835-95b6d9843d88>
CC-MAIN-2022-40
https://www.cioapplications.com/news/how-to-use-data-analytics-to-fight-climate-change-nid-8644.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00253.warc.gz
en
0.91235
472
3.4375
3
Reliability Growth in Six Sigma
The concept of Six Sigma is as recent as 1986. It is well known and widely accepted in almost every sphere of business. A company uses it to identify areas and opportunities it could work on and improve, to uncover hidden value, and to implement controls that can be sustained over the long term. Six Sigma is a methodology that every industry can use to improve its performance, and reliability growth is the proof of it. Read on to see how it works and what reliability growth is.
More on Six Sigma: To understand reliability growth in Six Sigma, it is important to understand the Six Sigma concept itself. It is a strategic methodology and a management philosophy, first developed by Motorola, that aims to have 99.99966% of products statistically free from defects. This is achieved by setting very high objectives before the project starts, collecting as much relevant data as possible, and maintaining stringent standards while analyzing the data. The goal is to eliminate defects in a systematic manner and get the product or service as close as possible to perfection. To achieve Six Sigma, a process cannot produce more than 3.4 defects per million opportunities, where an opportunity is defined as a chance for non-conformance. Many companies have managed to reduce their costs and enhance their productivity thanks to the Six Sigma process. It can also be used as a management system by linking all the crucial goals of an organization to the implementation of Six Sigma processes.
Six Sigma comprises two methodologies:
- Six Sigma DMADV
- Six Sigma DMAIC
Each letter in these five-letter terms stands for a major step involved in the process. Six Sigma DMADV is a process that Defines, Measures, Analyzes, Designs, and Verifies any process, product, or service that is trying to achieve Six Sigma quality. On the other hand, Six Sigma DMAIC is a process that Defines, Measures, Analyzes, Improves, and Controls existing processes that need to attain Six Sigma quality. Following these methods, the results will be sustainable over a long period of time.
Integrating Reliability with Six Sigma: Reliability can be defined as the probability of a process performing its duty without failing for a specified period of time when it is operated in a particular environment and under particular conditions. Since the Six Sigma process allows almost negligible defects, reliability analysis holds the utmost importance. Reliability is usually measured based on the failures of the process, which in turn represent defects in the reliability process. Once these failures are identified and eliminated, reliability performance will improve. Six Sigma methods can be integrated with reliability processes to maintain reliability growth at various stages. Several steps are involved in ensuring reliability growth. They are:
- Thorough data analysis
- Developing new strategies
- Maintaining the developed strategies
- Detecting faulty designs and improving them
- Noting whether any process is dysfunctional or below standard, and fixing it accordingly
When there is a flawless integration between the right Six Sigma process and the reliability process, any company can count on getting the desired growth and development. The strengths of both processes should be linked and aligned properly.
When that is done, you will get valuable insights for solving difficult reliability issues, an opportunity to design better solutions, a way to keep defects to a minimum, and the ability to project better results. Maintaining performance at the Six Sigma level is not an easy task at all. This is the reason why companies highlight their Six Sigma achievements when they attain them. Integrating different practices that help enhance various processes in an organization becomes vital to reach the Six Sigma level.
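To make the 3.4-defects-per-million-opportunities target concrete, the following minimal sketch (standard-library Python, with made-up defect counts and the conventional 1.5-sigma shift) shows the DPMO and sigma-level arithmetic:

```python
# Minimal sketch of the DPMO / sigma-level arithmetic behind Six Sigma.
# Uses only the Python standard library; the sample numbers are made up.
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Short-term sigma level, using the conventional 1.5-sigma shift."""
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + 1.5

if __name__ == "__main__":
    d = dpmo(defects=34, units=10_000, opportunities_per_unit=5)
    print(f"DPMO: {d:,.0f}")                       # 680 in this example
    print(f"Sigma level: {sigma_level(d):.2f}")    # ~4.7
    print(f"Six Sigma target: {sigma_level(3.4):.2f}")  # ~6.0
```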
<urn:uuid:10dbd280-8dbe-4b88-a522-885937f094d3>
CC-MAIN-2022-40
https://www.greycampus.com/blog/quality-management/reliability-growth-in-six-sigma
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00253.warc.gz
en
0.955971
777
2.640625
3
NIST Threat Assessment of Malicious Code
Lawrence E. Bassham and W. Timothy Polk
National Institute of Standards and Technology
Computer Security Division
October 1992
Contents:
1 Introduction
2 Malicious Code
- 2.1 Viruses
- 2.1.1 History of Viruses
- 2.1.2 Current Protection Against Viruses
- 2.2 Worms
- 2.2.1 History of Worms
- 2.2.2 Current Protection Against Worms
- 2.3 Trends for the Future
3 Human Threats
- 3.1 Insider Attacks
- 3.2 Hackers
- 3.3 Phone Phreaks
- 3.4 Trends for the Future
- 3.4.1 Configuration Errors and Passwords
- 3.4.2 Internal Threats
- 3.4.3 Connectivity
- 3.4.4 Information Dissemination
- 3.4.5 Summary
4 Threat Assessment of Malicious Code and Human Threats (NISTIR 4939)
As a participant in the U.S. Army Computer Vulnerability/Survivability Study Team, the National Institute of Standards and Technology has been tasked with providing an assessment of the threats associated with commercial hardware and software. This document is the second and final deliverable under Military Interdepartmental Purchase Request number W43P6Q-92-EW138. This report provides an assessment of the threats associated with malicious code and external attacks on systems using commercially available hardware and software. The history of the threat is provided and current protection methods described. A projection of the future threats for both malicious code and human threats is also given.
Today, computer systems are under attack from a multitude of sources. These range from malicious code, such as viruses and worms, to human threats, such as hackers and phone "phreaks." These attacks target different characteristics of a system, which means that a particular system may be more susceptible to certain kinds of attacks. Malicious code, such as viruses and worms, attacks a system in one of two ways, either internally or externally. Traditionally, the virus has been an internal threat, while the worm, to a large extent, has been a threat from an external source. Human threats are perpetrated by individuals or groups of individuals who attempt to penetrate systems through computer networks, the public switched telephone network, or other sources. These attacks generally target known security vulnerabilities of systems. Many of these vulnerabilities are simply due to configuration errors.
Viruses and worms are related classes of malicious code; as a result they are often confused. Both share the primary objective of replication. However, they are distinctly different with respect to the techniques they use and their host system requirements. This distinction is due to the disjoint sets of host systems they attack. Viruses have been almost exclusively restricted to personal computers, while worms have attacked only multi-user systems. A careful examination of the histories of viruses and worms can highlight the differences and similarities between these classes of malicious code. The characteristics shown by these histories can be used to explain the differences between the environments in which they are found. Viruses and worms have very different functional requirements; currently no class of systems simultaneously meets the needs of both. A review of the development of personal computers and multi-tasking workstations will show that the gap in functionality between these classes of systems is narrowing rapidly. In the future, a single system may meet all of the requirements necessary to support both worms and viruses.
This implies that worms and viruses may begin to appear in new classes of systems. A knowledge of the histories of viruses and worms may make it possible to predict how malicious code will cause problems in the future. To provide a basis for further discussion, the following definitions will be used throughout the report.
- Trojan Horse - a program which performs a useful function, but also performs an unexpected action as well.
- Virus - a code segment which replicates by attaching copies to existing executables.
- Worm - a program which replicates itself and causes execution of the new copy.
- Network Worm - a worm which copies itself to another system by using common network facilities, and causes execution of the copy on that system.
The following are necessary characteristics of a virus:
- replication
- requires a host program as a carrier
- activated by external action
- replication limited to (virtual) system
In essence, a computer program which has been infected by a virus has been converted into a trojan horse. The program is expected to perform a useful function, but has the unintended side effect of viral code execution. In addition to performing the unintended task, the virus also performs the function of replication. Upon execution, the virus attempts to replicate and "attach" itself to another program. It is the unexpected and generally uncontrollable replication that makes viruses so dangerous.
Viruses are currently designed to attack single platforms. A platform is defined as the combination of hardware and the most prevalent operating system for that hardware. As an example, a virus can be referred to as an IBM-PC virus, referring to the hardware, or a DOS virus, referring to the operating system. "Clones" of systems are also included with the original platform.
The term "computer virus" was formally defined by Fred Cohen in 1983, while he performed academic experiments on a Digital Equipment Corporation VAX system. Viruses are classified as being one of two types: research or "in the wild." A research virus is one that has been written for research or study purposes and has received almost no distribution to the public. On the other hand, viruses which have been seen with any regularity are termed "in the wild." The first computer viruses were developed in the early 1980s. The first viruses found in the wild were Apple II viruses, such as Elk Cloner, which was reported in 1981 [Den90]. Viruses have now been found on the following platforms:
- Apple II
- IBM PC
- Macintosh
- Atari
- Amiga
Note that all viruses found in the wild target personal computers. As of today, the overwhelming majority of virus strains are IBM PC viruses. However, as of August 1989, the numbers of PC, Atari ST, Amiga, and Macintosh viruses were almost identical (21, 22, 18, and 12 respectively [Den90]). Academic studies have shown that viruses are possible for multi-tasking systems, but they have not yet appeared. This point will be discussed later.
Viruses have "evolved" over the years due to efforts by their authors to make the code more difficult to detect, disassemble, and eradicate. This evolution has been especially apparent in the IBM PC viruses, since there are more distinct viruses known for the DOS operating system than any other. The first IBM-PC virus appeared in 1986 [Den90]; this was the Brain virus. Brain was a boot sector virus and remained resident. In 1987, Brain was followed by Alameda (Yale), Cascade, Jerusalem, Lehigh, and Miami (South African Friday the 13th).
These viruses expanded the target executables to include COM and EXE files. Cascade was encrypted to deter disassembly and detection. Variable encryption appeared in 1989 with the 1260 virus. Stealth viruses, which employ various techniques to avoid detection, also first appeared in 1989, such as Zero Bug, Dark Avenger, and Frodo (4096 or 4K). In 1990, self-modifying viruses, such as Whale, were introduced. The year 1991 brought the GP1 virus, which is "network-sensitive" and attempts to steal Novell NetWare passwords. Since their inception, viruses have become increasingly complex.
Examples from the IBM-PC family of viruses indicate that the most commonly detected viruses vary according to continent, but Stoned, Brain, Cascade, and members of the Jerusalem family have spread widely and continue to appear. This implies that highly survivable viruses tend to be benign, replicate many times before activation, or are somewhat innovative, utilizing some technique never used before in a virus.
Personal computer viruses exploit the lack of effective access controls in these systems. The viruses modify files and even the operating system itself. These are "legal" actions within the context of the operating system. While more stringent controls are in place on multi-tasking, multi-user operating systems, configuration errors and security holes (security bugs) make viruses on these systems more than theoretically possible. This leads to the following initial conclusions:
- Viruses exploit weaknesses in operating system controls and human patterns of system use/misuse.
- Destructive viruses are more likely to be eradicated.
- An innovative virus may have a larger initial window to propagate before it is discovered and the "average" anti-viral product is modified to detect or eradicate it.
It has been suggested that viruses for multi-user systems are too difficult to write. However, Fred Cohen required only "8 hours of expert work" [Hof90] to build a virus that could penetrate a UNIX system. The most complex PC viruses required a great deal more effort. Yet, if we reject the hypothesis that viruses do not exist on multi-user systems because they are too difficult to write, what reasons could exist? Perhaps the explosion of PC viruses (as opposed to viruses for other personal computer systems) can provide a clue. The population of PCs and PC compatibles is by far the largest. Additionally, personal computer users exchange disks frequently. Exchanging disks is not required if the systems are all connected to a network; in this case large numbers of systems may be infected through the use of shared network resources.
One of the primary reasons that viruses have not been observed on multi-user systems is that administrators of these systems are more likely to exchange source code rather than executables. They tend to be more protective of copyrighted materials, so they exchange locally developed or public domain software. It is more convenient to exchange source code, since differences in hardware architecture may preclude exchanging executables. The advent of remote disk protocols, such as NFS (Network File System) and RFS (Remote File System), has resulted in the creation of many small populations of multi-user systems which freely exchange executables. Even so, there is little exchange of executables between different "clusters" of systems. The following additional conclusion can be made:
- To spread, viruses require a large population of homogeneous systems and exchange of executable software.
Although many anti-virus tools and products are now available, personal and administrative practices and institutional policies, particularly with regard to shared or external software usage, should form the first line of defense against the threat of virus attack. Users should also consider the variety of anti-virus products currently available. There are three classes of anti-virus products: detection tools, identification tools, and removal tools. Scanners are an example of both detection and identification tools. Vulnerability monitors and modification detection programs are both examples of detection tools. Disinfectors are examples of removal tools. A detailed description of the tools is provided below.
Scanners and disinfectors, the most popular classes of anti-virus software, rely on a great deal of a priori knowledge about the viral code. Scanners search for "signature strings" or use algorithmic detection methods to identify known viruses. Disinfectors rely on substantial information regarding the size of a virus and the type of modifications it makes in order to restore the infected file's contents. Vulnerability monitors, which attempt to prevent modification of or access to particularly sensitive parts of the system, may block a virus from hooking sensitive interrupts. This requires a lot of information about "normal" system use, since personal computer viruses do not actually circumvent any security features. This type of software also requires decisions from the user.
Modification detection is a very general method which requires no information about the virus to detect its presence. Modification detection programs, which are usually checksum based, are used to detect virus infection or trojan horses. This process begins with the creation of a baseline, where checksums for clean executables are computed and saved. Each following iteration consists of checksum computation and comparison with the stored value. It should be noted that simple checksums are easy to defeat; cyclic redundancy checks (CRC) are better, but can still be defeated; cryptographic checksums provide the highest level of security.
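The baseline-and-compare cycle described above is straightforward to sketch. Below is a minimal illustration using a cryptographic checksum (SHA-256) from the Python standard library; the target directory and baseline filename are placeholders, not part of any particular product:

```python
# Minimal sketch of checksum-based modification detection, as described
# above: build a baseline of cryptographic checksums for clean files,
# then re-compute and compare on each later run. Paths are placeholders.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(files: list, baseline: Path) -> None:
    baseline.write_text(json.dumps({str(p): sha256_of(p) for p in files}))

def verify(baseline: Path) -> list:
    """Return the files whose contents no longer match the baseline."""
    saved = json.loads(baseline.read_text())
    return [p for p, digest in saved.items() if sha256_of(Path(p)) != digest]

if __name__ == "__main__":
    targets = [p for p in Path("/usr/local/bin").glob("*") if p.is_file()]
    build_baseline(targets, Path("baseline.json"))
    # ...on later runs:
    for changed in verify(Path("baseline.json")):
        print(f"WARNING: {changed} has been modified")
```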
The following are necessary characteristics of a worm:
- replication
- self-contained; does not require a host
- activated by creating a process (needs a multi-tasking system)
- for network worms, replication occurs across communication links
A worm is not a trojan horse; it is a program designed to replicate. The program may perform any variety of additional tasks as well. The first network worms were intended to perform useful network management functions [SH82]. They took advantage of system properties to perform useful action. However, a malicious worm takes advantage of the same system properties. The facilities that allow such programs to replicate do not always discriminate between malicious and good code.
Worms were first used as a legitimate mechanism for performing tasks in a distributed environment. Network worms were considered promising for the performance of network management tasks in a series of experiments at the Xerox Palo Alto Research Center in 1982. The key problem noted was "worm management": controlling the number of copies executing at a single time. This would be experienced later by authors of malicious worms.
Worms were first noticed as a potential computer security threat when the Christmas Tree Exec [Den90] attacked IBM mainframes in December 1987. It brought down both the world-wide IBM network and BITNET. The Christmas Tree Exec wasn't a true worm; it was a trojan horse with a replicating mechanism. A user would receive an e-mail Christmas card that included executable (REXX) code. If executed, the program claimed to draw a Xmas tree on the display. That much was true, but it also sent a copy of itself to everyone on the user's address lists.
The Internet Worm [Spa89] was a true worm. It was released on November 2, 1988. It attacked Sun and DEC UNIX systems attached to the Internet (it included two sets of binaries, one for each system). It utilized the TCP/IP protocols, common application layer protocols, operating system bugs, and a variety of system administration flaws to propagate. Various problems with worm management resulted in extremely poor system performance and a denial of network service.
The Father Christmas worm was also a true worm. It was first released onto the worldwide DECnet Internet in December of 1988. This worm attacked VAX/VMS systems on SPAN and HEPNET. It utilized the DECnet protocols and a variety of system administration flaws to propagate. The worm exploited TASK0, which allows outsiders to perform tasks on the system. This worm added an additional feature: it reported successful system penetration to a specific site. The worm made no attempt at secrecy; it was not encrypted and sent mail to every user on the system. About a month later another worm, apparently a variant of Father Christmas, was released on a private network. This variant searched for accounts with "industry standard" or "easily guessed" passwords.
The history of worms displays the same increasing complexity found in the development of PC viruses. The Christmas Tree Exec wasn't a true worm; it was a trojan horse with a replicating mechanism. The Internet Worm was a true worm; it exploited both operating system flaws and common system management problems. The DECnet worms attacked system management problems, and reported information about successful system penetration to a central site. Several conclusions can be drawn from this information:
- Worms exploit flaws (i.e., bugs) in the operating system or inadequate system management to replicate.
- Release of a worm usually results in brief but spectacular outbreaks, shutting down entire networks.
Protecting a system against a worm requires a combination of basic system security and good network security. There are a variety of procedures and tools which can be applied to protect the system. In basic system security, the most important means of defense against worms is the identification and authentication (I&A) controls, which are usually integrated into the system. If poorly managed, these controls become a vulnerability which is easily exploited. Worms are especially adept at exploiting such vulnerabilities; both the Internet and DECnet worms targeted I&A controls. Add-on tools include configuration review tools (such as COPS [GS91] for UNIX systems) and checksum-based change detection tools. Design of configuration review tools requires intimate knowledge of the system, but no knowledge of the worm code. Another class of add-on tools is the intrusion detection tool. This is somewhat analogous to PC monitoring software, but is usually more complex. This tool reviews a series of commands to determine if the user is doing something suspicious. If so, the system manager is notified. One type of network security tool is the wrapper program.
Wrapper programs can be used to "filter" network connections, rejecting or allowing certain types of connections (or connections from a pre-determined set of systems). This can prevent worm infections by "untrusted" systems. Overlaps in trust may still allow infection to occur (A trusts B but not C; B trusts C; C infects B, which infects A) but the rate of propagation will be limited. These tools do not protect a system against the exploitation of flaws in the operating system. This issue must be dealt with at the time of procurement. After procurement, it becomes a procedural issue. Resources are available to system managers to keep them abreast of security bugs and bugfixes, such as the CERT computer security advisories.
Another class of security tools can be employed to protect a network against worms. The firewall system [GS91] protects an organizational network from systems in the larger network world. Firewall systems are found in two forms: simple or intelligent. An intelligent firewall filters all connections between hosts on the organizational network and the world-at-large. A simple firewall disallows all connections with the outside world, essentially splitting the network into two different networks. To transfer information between hosts on the different networks, an account on the firewall system is required.
Personal computers have been immune to worms because they are single-task systems. The increasing functionality of personal computer operating systems will soon change this. Personal computers will become true multi-tasking systems, and will inherit both the functionality and security vulnerabilities that those systems have exhibited. Multi-user systems have never been attractive virus targets, due to their limited population, low software interchange rates, and because they use some form of access control. The advent of 486-class PCs is likely to change this. In addition to the increased performance of PC-based machines, the UNIX workstation market is growing rapidly, producing high-performance machines at extremely affordable prices. Multi-user systems will be gaining market share, increasing their attractiveness to virus authors. This large homogeneous population of multi-user systems will be an attractive target for both virus authors and worm developers.
Personal computer worms or virus/worm hybrids may become the new threat of the 90s. With a large homogeneous population of systems available, it is conceivable that authors of malicious code will combine the previously disjoint attacks of viruses and worms. An attack consisting of a worm traversing a network and dropping viruses on the individual hosts becomes a startling possibility.
As the functionality of personal computers continues to grow, new types of tools will be required to achieve the same degree of security. Scanners must be supplemented with configuration review tools. Identification and authentication tools (non-existent or neglected on most PCs) will become an important security tool on personal computers. Intrusion detection tools may become applicable to personal computers. Change detection will also play an increased role. Administrators of personal computer networks must become familiar with a new set of practices, tools, and techniques, such as firewalls. They will need to draw upon the world of multi-user systems for this knowledge. As the differences between PC and multi-user environments decrease, the likelihood of these environments facing similar threats will increase.
Viruses will be more likely in the multi-user world; worms will become a threat in personal computer networks.
Insiders, hackers, and "phone phreaks" are the main components of the human threat factor. Insiders are legitimate users of a system. When they use that access to circumvent security, that is known as an insider attack. Hackers are the most widely known human threat. Hackers are people who enjoy the challenge of breaking into systems. "Phreakers" are hackers whose main interest is in telephone systems.
The primary threat to computer systems has traditionally been the insider attack. Insiders are likely to have specific goals and objectives, and have legitimate access to the system. Insiders can plant trojan horses or browse through the file system. This type of attack can be extremely difficult to detect or protect against. The insider attack can affect all components of computer security. Browsing attacks the confidentiality of information on the system. Trojan horses are a threat to both the integrity and confidentiality of the system. Insiders can affect availability by overloading the system's processing or storage capacity, or by causing the system to crash. These attacks are possible for a variety of reasons. On many systems, the access control settings for security-relevant objects do not reflect the organization's security policy. This allows the insider to browse through sensitive data or plant that trojan horse. The insider can exploit operating system bugs to cause the system to crash. The actions go undetected because audit trails are inadequate or ignored.
The definition of the term "hacker" has changed over the years. A hacker was once thought of as any individual who enjoyed getting the most out of the system he was using. A hacker would use a system extensively and study the system until he became proficient in all its nuances. This individual was respected as a source of information for local computer users; someone referred to as a "guru" or "wizard." Now, however, the term hacker is used to refer to people who either break into systems for which they have no authorization or intentionally overstep their bounds on systems for which they do have legitimate access. Methods used by hackers to gain unauthorized access to systems include:
- Password cracking
- Exploiting known security weaknesses
- Network spoofing
- "Social engineering"
The most common techniques used to gain unauthorized system access involve password cracking and the exploitation of known security weaknesses. Password cracking is a technique used to surreptitiously gain system access by using another user's account. Users often select weak passwords. The two major sources of weakness in passwords are easily guessed passwords based on knowledge of the user (e.g., a wife's maiden name) and passwords that are susceptible to dictionary attacks (i.e., brute-force guessing of passwords using a dictionary as the source of guesses).
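To illustrate why dictionary-susceptible passwords fall so quickly, here is a toy sketch of the attack in Python. It deliberately uses a plain, fast hash for clarity; real systems store salted, deliberately slow hashes, and the word list and target password here are invented for the example:

```python
# Toy illustration of a dictionary attack, as described above: hash each
# candidate word and compare it against a captured password hash. The plain
# SHA-256 used here is for clarity only; it is not how passwords are stored.
import hashlib

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

# A "stolen" hash of a weak, dictionary-word password (here: "dragon").
stolen_hash = sha256("dragon")

wordlist = ["password", "letmein", "dragon", "qwerty"]  # tiny sample dictionary

for candidate in wordlist:
    if sha256(candidate) == stolen_hash:
        print(f"Password recovered by dictionary attack: {candidate!r}")
        break
else:
    print("Not in the dictionary - a long, random password would survive.")
```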
Another method used to gain unauthorized system access is the exploitation of known security weaknesses. Two types of security weaknesses exist: configuration errors and security bugs. There continues to be an increasing concern over configuration errors. Configuration errors occur when a system is set up in such a way that unwanted exposure is allowed. Then, according to the configuration, the system is at risk from even legitimate actions. An example of this would be that if a system "exports" a file system to the world (makes the contents of a file system available to all other systems on the network), then any other machine can have full access to that file system (one major vendor ships systems with this configuration). Security bugs occur when unexpected actions are allowed on the system due to a loophole in some application program. An example would be sending a very long string of keystrokes to a screen locking program, thus causing the program to crash and leaving the system inaccessible.
A third method of gaining unauthorized access is network spoofing. In network spoofing, a system presents itself to the network as though it were a different system (system A impersonates system B by sending B's address instead of its own). The reason for doing this is that systems tend to operate within a group of other "trusted" systems. Trust is imparted in a one-to-one fashion; system A trusts system B (this does not imply that system B trusts system A). Implied with this trust is that the system administrator of the trusted system is performing his job properly and maintaining an appropriate level of security for his system. Network spoofing occurs in the following manner: if system A trusts system B and system C spoofs (impersonates) system B, then system C can gain otherwise denied access to system A.
"Social engineering" is the final method of gaining unauthorized system access. People have been known to call a system operator, pretending to be some authority figure, and demand that a password be changed to allow them access. One could also say that using personal data to guess a user's password is social engineering.
The "phone phreak" (phreak for short) is a specific breed of hacker. A phreak is someone who displays most of the characteristics of a hacker, but also has a specific interest in the phone system and the systems that support its operations. Additionally, most of the machines on the Internet, itself a piece of the Public Switched Network, are linked together through dedicated, commercial phone lines. A talented phreak is a threat not only to the phone system, but to the computer networks it supports.
There are two advantages to attacking systems through the phone system. The first advantage is that phone system attacks are hard to trace. It is possible to make connections through multiple switching units or to use unlisted or unused phone numbers to confound a tracing effort. Also, by being in the phone system, it is sometimes possible to monitor the phone company to see if a trace is initiated. The second advantage to using the phone system is that a sophisticated host machine is not needed to originate an attack, nor is direct access needed to the network to which the target system is attached. A simple dumb terminal connected to a modem can be used to initiate an attack. Often, an attack consists of several hops, a procedure whereby one system is broken into and, from that system, another system is broken into, and so on. This again makes tracing more difficult.
Today, desktop workstations are becoming the tool of more and more scientists and professionals. Without proper time and training to administer these systems, vulnerability to both internal and external attacks will increase. Workstations are usually administered by individuals whose primary job description is not the administration of the workstation. The workstation is merely a tool to assist in the performance of the actual job tasks.
As a result, if the workstation is up and running, the individual is satisfied. This neglectful and permissive attitude toward computer security can be very dangerous. This user attitude has resulted in poor usage of controls and selection of easily guessed passwords. As these users become, in effect, workstation administrators, this will be compounded by configuration errors and a lax attitude towards security bugfixes. To correct this, systems should be designed so that security is the default and personnel should be equipped with adequate tools to verify that their systems are secure. Of course, even with proper training and adequate tools threats will remain. New security bugs and attack mechanisms will be employed. Proper channels do not currently exist in most organizations for the dissemination of security related information. If organizations do not place a high enough priority on computer security, the average system will continue to be at risk from external threats. System controls are not well matched to the average organization's security policy. As a direct result, the typical user is permitted to circumvent that policy on a frequent basis. The administrator is unable to enforce the policy because of the weak access controls, and cannot detect the violation of policy because of weak audit mechanisms. Even if the audit mechanisms are in place, the daunting volume of data produced makes it unlikely that the administrator will detect policy violations. Ongoing research in integrity and intrusion detection promise to fill some of this gap. Until these research projects become available as products, systems will remain vulnerable to internal threats. Connectivity allows the hacker unlimited, virtually untraceable access to computer systems. Registering a network host is akin to listing the system's modem phone numbers in the telephone directory. No one should do that without securing their modem lines (with dial-back modems or encryption units). Yet, most network hosts take no special security precautions for network access. They do not attempt to detect spoofing of systems; they do not limit the hosts that may access specific services. A number of partial solutions to network security problems do exist. Examples include Kerberos, Secure NFS [GS91], RFC 931 authentication tools [Joh85] and "tcp wrapper" programs (access controls for network services with host granularity). However, these tools are not widely used because they are partial solutions or because they severely reduce functionality. New solutions for organizations are becoming available, such as the Distributed Intrusion Detection System (DIDS) [L+92] or filtering network gateways. DIDS monitors activities on a subnet. The filtering gateways are designed to enforce an organization's network policy at the interface to the outside network. Such solutions may allow the organization to enjoy most (if not all) of the benefits of network access but limit the hackers' access. The Forum of Incident Response and Security Teams (FIRST), an organization whose members work together voluntarily to deal with computer security problems and their prevention, has established valuable channels for the dissemination of security information. It is now possible to obtain security bug fix information in a timely fashion. The percentage of system administrators receiving this information is still low, but is improving daily. Hackers continue to make better use of the information channels than the security community. 
Publications such as "Phrack" and "2600" are well established and move information effectively throughout the hacking community. Bulletin boards and Internet archive sites are available to disseminate virus code, hacking information, and hacking tools.
Poor administrative practices and the lack of education, tools, and controls combine to leave the average system vulnerable to attack. Research promises to alleviate the inadequate supply of tools and applicable controls. These controls, however, tend to be add-on controls. There is a need for the delivery of secure systems, rather than the ability to build one from parts. The average administrator has little inclination to perform these modifications, and no idea how to perform them. The joint NIST/NSA Federal Criteria project holds the most promise to drive the creation of reasonably secure systems. By building upon the various criteria projects that precede it (the TCSEC, the ITSEC, and the Canadian criteria), this project intends to address security requirements for commercial systems in a meaningful way. The initial version, which will focus on criteria for operating systems, will include extensions/enhancements in integrity, communications, and other areas. Future versions will address criteria for distributed systems.
Extensive connectivity increases system access for hackers. Until standards become widely used, network security will continue to be handled on a system-by-system basis. The problem can be expected to increase if and when the Integrated Services Digital Network (ISDN) is implemented without appropriate security capabilities.
A promising note for the future does exist. Multiple sets of tools do not need to be developed in order to solve each of the potential threats to a system. Many of the controls that will stop one type of attack on a system will be beneficial against many other forms of attack. The challenge is to determine the minimum set of controls necessary to protect a system with an acceptable degree of assurance.
References
[CON91] Computer security: Hackers penetrate DoD computer systems. Testimony before the Subcommittee on Government Information and Regulation, Committee on Government Affairs, United States Senate, Washington DC, November 1991.
[Den90] Peter Denning. Computers Under Attack: Intruders, Worms, and Viruses. ACM Press, 1990.
[GS91] Simson Garfinkel and Eugene Spafford. Practical UNIX Security. O'Reilly & Associates, Inc., 1991.
[HM91] Katie Hafner and John Markoff. Cyberpunk: Outlaws and Hackers on the Computer Frontier. Simon and Schuster, 1991.
[Hof90] Lance Hoffman. Rogue Programs: Viruses, Worms, and Trojan Horses. Van Nostrand Reinhold, 1990.
[HP91] Benjamin Hsaio and W. Timothy Polk. Computer-assisted audit techniques for UNIX. In 14th Department of Energy Computer Security Group Conference, 1991.
[Joh85] Mike St. Johns. RFC 931: Authentication server, January 1985.
[L+92] Theresa Lunt et al. A real-time Intrusion-Detection Expert System (IDES). In Final Technical Report for SRI Project 6784. SRI International, 1992.
[Qua90] John Quarterman. The Matrix: Computer Networks and Conferencing Systems Worldwide. Digital Press, 1990.
[S+89] Eugene Spafford et al. Computer Viruses: Dealing with Electronic Vandalism and Programmed Threats. ADAPSO, 1989.
[S+91] Steven R. Snapp et al. DIDS (Distributed Intrusion Detection System): motivation, architecture, and an early prototype. In Proceedings of the 14th National Computer Security Conference, 1991.
[SH82] John F. Shoch and Jon A. Hupp. The "worm" programs: early experience with a distributed computation. Communications of the ACM, 25(3), March 1982.
[Spa89] Eugene Spafford. The Internet worm program: an analysis. Computer Communication Review, 19(1), January 1989.
[Wac91] John Wack. Establishing a Computer Security Incident Response Capability (CSIRC). NIST Special Publication 800-3, National Institute of Standards and Technology, 1991.
<urn:uuid:cb07e880-cdd9-45b2-bace-b389f154780d>
CC-MAIN-2022-40
https://cybersoft.com/support/white-papers/nist-threat-assessment-malicious-code/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00253.warc.gz
en
0.941545
7,016
2.9375
3
New Training: Implement Source Control with Git
In this 9-video skill, CBT Nuggets trainer Ben Finkel teaches you how to manage your source code history and versioning by using Git. Learn how to use git commands, such as git init, git checkout, git reset, and git revert. Gain an understanding of version control and the git status of files: unmodified, modified, and staged. Watch this new Cisco training.
Watch the full course: Cisco CCNP Automating Cisco Enterprise Solutions
This training includes:
1 hour of training
You'll learn these topics in this skill:
- Understanding Version Control and Git
- Installing Git and Initializing a Repository
- Staging and Committing Files
- Viewing Repository History
- Comparing Changes with Git diff
- Git Checkout and Detached Head
- Git Reset and Revert to Undo Changes
- Untracking and Unstaging Files Using Git
- Summary
What is the Difference Between Git Reset and Git Revert?
The git commands reset and revert share the same purpose: to roll back the commits of a git repository to a previous state. But they perform these rollbacks in fundamentally different ways. When you use git reset, the head of the repository points back to a previous commit. So, if you made four commits to a repository and want to remove the last one, you execute git reset while specifying the third commit. This is a destructive rollback, which means that the fourth commit is now gone forever. For this reason, git reset is often not your best choice for making rollbacks.
As opposed to git reset, git revert performs a nondestructive rollback. It does this by reverting a previous commit but still leaving it in the repository. So, using our previous example, to undo the fourth commit we would execute git revert while specifying the fourth commit. You can also revert a range of commits if you have many to revert.
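To visualize the difference, here is a small conceptual model in Python (not real git) of what each command does to a four-commit history:

```python
# Conceptual model (not real git) of the difference described above:
# reset discards history destructively; revert appends an inverse commit.
commits = ["C1", "C2", "C3", "C4"]

# git reset --hard <C3>: the branch now points at C3; C4 is gone.
after_reset = commits[:3]                  # ['C1', 'C2', 'C3']

# git revert <C4>: history is preserved and a new commit undoes C4.
after_revert = commits + ["revert(C4)"]    # ['C1', 'C2', 'C3', 'C4', 'revert(C4)']

print(after_reset)
print(after_revert)
```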
<urn:uuid:03c4f732-dc0a-4c13-be96-7233ac3cf648>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/new-skills/new-training-implement-source-control-with-git
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00253.warc.gz
en
0.852444
403
2.59375
3
As computers gained in popularity, more and more individuals started writing their own programs. Advances in telecommunications provided convenient channels for sharing programs through open-access servers such as BBS – the Bulletin Board System. Eventually university BBS servers evolved into a global data bank and were available in all developed countries. The first Trojans appeared in large quantities: programs that couldn't self-replicate or spread, but did damage systems once downloaded and installed.
The widespread use of Apple II computers predetermined this machine's fate in attracting the attention of virus writers. It is not surprising that the first large-scale computer virus outbreak in history occurred on the Apple II platform. Elk Cloner spread by infecting the Apple II's operating system, stored on floppy disks. When the computer was booted from an infected floppy, a copy of the virus would automatically start. The virus would not normally affect the running of the computer, except for monitoring disk access. When an uninfected floppy was accessed, the virus would copy itself to the disk, thus infecting it too, slowly spreading from floppy to floppy.
The Elk Cloner virus infected the boot sector of Apple II computers. In those days, operating systems were stored on floppy disks: as a result the floppies were infected and the virus was launched every time the machine was booted up. Users were startled by the side effects and often infected friends by sharing floppies, since most people had no idea what viruses were, much less how they spread. The Elk Cloner payload included rotating images, blinking text and joke messages:
THE PROGRAM WITH A PERSONALITY
IT WILL GET ON ALL YOUR DISKS
IT WILL INFILTRATE YOUR CHIPS
YES, IT'S CLONER
IT WILL STICK TO YOU LIKE GLUE
IT WILL MODIFY RAM, TOO
SEND IN THE CLONER!
Len Adleman first coined the term 'virus' in connection with self-replicating computer programs. On November 10th, 1983, at a seminar on computer safety at Lehigh University, this grandfather of modern computer virology demonstrated a virus-like program on a VAX11/750 system. The program was able to install itself to other system objects. A year later, at the 7th annual information security conference, the phrase 'computer virus' was defined as a program which is able to 'infect' other programs by modifying them to install copies of itself.
The first global IBM-compatible virus epidemic was detected. Brain, which infected the boot sector, was able to spread practically worldwide within a few months. The almost total lack of awareness in the computing community of how to protect machines against viruses ensured Brain's success. In fact, the appearance of numerous science fiction works on the topic only strengthened the panic, instead of teaching people about security.
The Brain virus was written by a 19-year-old Pakistani programmer, Basit Farooq Alvi, and his brother Amjad, and included a text string containing their names, address and telephone number. According to the virus's authors, who worked in sales for a software company, they wanted to gauge the level of piracy in their country. Aside from infecting a disk's boot sector and changing the disk name to '© Brain', the virus did nothing; it had no real payload and did not corrupt data. Unfortunately, the brothers lost control of their so-called experiment and Brain spread worldwide.
Interestingly enough, Brain was also the first 'stealth virus.' When an attempt to read the infected sector was detected, the virus would display the original, uninfected data.
That same year, a German programmer, Ralf Burger, invented the first programs that could copy themselves by adding their code to executable DOS files in COM format. The working model of the program, named Virdem, was introduced by Burger in December 1986 in Hamburg at an underground computer forum, the Chaos Computer Club. Though most of the hackers at the event specialised in attacking VAX/VMS systems, they were still interested in the concept.
<urn:uuid:dc1a9ae2-40af-45c8-bee1-926848bc1041>
CC-MAIN-2022-40
https://encyclopedia.kaspersky.com/knowledge/years-1980s/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00253.warc.gz
en
0.965201
871
3.453125
3
There are three types of 802.11 frames: management, control, and data. Management frames are used to manage the BSS, control frames control access to the medium, and data frames contain payloads that carry the layer 3-7 information. We will focus on the contents of each frame rather than the context of the frame in the frame exchange process; a separate post will follow that covers the various frame exchanges. As a consumer of all my own blog posts, I'll be formatting this post in a way that it can be easily used as a reference and be as searchable as possible. This post covers the information you will be expected to know for the CWNA-107 and CWAP-403 exams about frame types, formatting, and values. As you can see below, the level of knowledge expected for the CWNA exam is much more basic. In the CWAP exam, it is expected that you can identify the frame type, know which information elements (IEs) contain which values, and understand what each value represents.
CWNA-107 Objectives covered:
- 3.2 Identify and explain the basic frame types defined in the 802.11-2016 standard
- 3.2.1 General frame format
- 3.2.2 MAC addressing
- 3.2.3 Beacon frame
- 3.2.4 Association frames
- 3.2.5 Authentication frames
- 3.2.6 Data frames
- 3.2.7 Acknowledgement (ACK) frames
- 3.2.8 Block ACK frames
CWAP-403 Objectives covered:
- 4.2 Identify and use MAC information in captured data for analysis
- 4.2.1 Management, control, and data frames
- 4.2.2 MAC Frame Format
- Frame Control Field
- To DS and From DS
- Address Fields
- Frame Check Sequence (FCS)
- 4.2.3 802.11 Management Frame Formats
- Information Elements
- Association and Reassociation
- Probe Request and Probe Response
- 4.2.4 Data and QoS Data Frame Formats
- 4.2.5 802.11 Control Frame Formats
- Block Acknowledgement and related frames
- 4.3 Validate BSS configuration through protocol analysis
- 4.3.1 Country code
- 4.3.2 Minimum basic rate
- 4.3.3 Supported rates
- 4.3.4 Beacon intervals
- 4.3.5 WMM settings
- 4.3.6 RSN settings
- 4.3.7 HT and VHT operations
- 4.3.8 Channel width
- 4.3.9 Primary channel
- 4.3.10 Hidden or non-broadcast SSIDs
- 4.4 Identify and analyze CRC error frames and retransmitted frames
- 5.2 Analyze QoS configuration and operations
- 5.2.1 Verify QoS parameters in capture files
General Frame Format
802.11 frames consist of three major parts: header, body, and trailer. The CWNA objectives include an understanding of the general frame format. The CWAP exam is all about understanding each frame type, which fields are used, and what each information element (IE) contains information about. We'll cover the basics for now.
The frame header contains information about where the frame is going, the data rate, the cipher suite used to encrypt data frames, and more! It is important to understand each field in the header. The four address fields are source, destination, transmitter, and receiver. The header contents are different for each frame type; the image below shows that some fields may be 0 bytes when not in use, or X bytes. For example, the header of an acknowledgement (ACK) frame only uses one of the four address fields, the receiver address (RA). The other values found in the frame control field of the header that are frequently referenced include:
- DS Status – Indicates the directionality of the frame. Refer to the table below from the 802.11-2016 standard for the possible values and their meaning.
- More Fragments – if set to 1, the frame has been fragmented and has more fragments to transmit
- Retry – if set to 1, the previous attempt to transmit this frame failed. The example below is from a QoS Data frame; therefore, it includes a QoS Control field as well.
The body of an 802.11 frame contains the layer 3-7 information that is encapsulated and, hopefully, protected (encrypted) as well. The body of a frame varies in size depending on the transmission. For example, voice traffic frames will be smaller than those of a file download that increases the TCP window based on the speed/reliability of the end-to-end connection.
The trailer contains the frame check sequence (FCS). This is a 32-bit cyclic redundancy check (CRC) used to validate that the contents of the entire frame have not been tampered with or become corrupted while being transferred over the wireless medium. All values of the frame header and body are run through a calculation; the result is held in the FCS field. If the receiver runs the frame through the same calculation but the result is not the same, the frame is corrupt/damaged. The receiver will discard the frame and not send an ACK frame. The sender knows to retransmit the frame because it did not receive an acknowledgement. This is typically a result of high interference/collisions. Typically, the station that receives a bad CRC will discard the frame instead of forwarding it on to the operating system, so you will not be able to see "bad" frames within protocol analyzers such as Wireshark.
All 802.11 frames fall under one of the three types: management, control, or data. The 802.11ac-2013 standard states that all data frames be sent as QoS data frames. In the header there is a frame control field that contains the values for the type and subtype of the frame. The image below shows the three types of frames. Protocol version will always be 00 to indicate that 802.11 is in use. The type field indicates 0-management, 1-control, or 2-data. The subtype field indicates the type of management, control, or data frame. In our example here we see 8, 11, and 8 in the subtype fields. The management frame is a beacon, the control frame is a request-to-send (RTS), and the data frame is a QoS Data frame.
Management frames are used to manage the BSS. This includes probing, associating, roaming, and disconnecting clients from the BSS. As shown above, management frames use a type of 0 in the frame control field within the frame header.
Stations send association requests to access points (APs) requesting to join the BSS. In this frame, the station sends all its capabilities to the AP; it will only include capabilities that the AP has also advertised in the beacon or probe response frame. The AP responds to the station using an association response frame that includes an association ID (AID). Each station within the BSS has a unique AID.
Stations send reassociation requests to APs that they wish to roam to. The AP responds to the station the same way it does in the association request/response exchange. The primary difference between reassociation and association requests is that the station will indicate the current AP it is connected to in reassociation requests. If the station does not receive a reassociation response, for reasons such as load balancing, it will remain connected to the original AP and search for other APs to roam to. There are also cases where, after leaving a BSS for a short period of time, a station will send a reassociation request to an AP it was recently connected to.
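Since the CWAP exam expects you to read these header bits straight off a capture, here is a minimal sketch (in Python, with sample bytes of my own choosing) of how the two-byte Frame Control field unpacks into version, type, subtype, and the common flag bits:

```python
# Minimal parser for the 2-byte 802.11 Frame Control field, showing how
# protocol version, type, subtype, and the flag bits are packed.
FRAME_TYPES = {0: "management", 1: "control", 2: "data", 3: "extension"}

def parse_frame_control(b0: int, b1: int) -> dict:
    return {
        "version":   b0 & 0b11,            # bits 0-1
        "type":      FRAME_TYPES[(b0 >> 2) & 0b11],  # bits 2-3
        "subtype":   (b0 >> 4) & 0b1111,   # bits 4-7
        "to_ds":     bool(b1 & 0x01),
        "from_ds":   bool(b1 & 0x02),
        "more_frag": bool(b1 & 0x04),
        "retry":     bool(b1 & 0x08),
        "protected": bool(b1 & 0x40),
    }

# Sample values: a beacon (type 0, subtype 8), then a protected QoS Data
# frame headed to the DS (type 2, subtype 8, To DS = 1, Protected = 1).
print(parse_frame_control(0x80, 0x00))
print(parse_frame_control(0x88, 0x41))
```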
As part of the active and passive scanning processes, stations send probe requests with a specific SSID, a wildcard, or no value (null) in the "SSID Parameter Set" field to search for wireless networks. When the field is wildcard/null, the client is requesting any AP nearby to respond with all SSIDs using a probe response frame. When the probe request contains a specific SSID, the client is requesting any AP nearby to respond if it supports that SSID. The probe response frame is a targeted beacon that is sent to the station that is "probing". As you can see below, the probe response frame contains all but 3 of the same fields as beacon frames. The three differences are: the probe response frame does not contain a TIM or a QoS Capability information element, and it carries any information elements requested by the station. Be sure to understand the differences between active and passive scanning for both exams.
APs send beacons at a regular interval called the target beacon transmit time (TBTT) to advertise the SSIDs they service. Beacons contain the configuration of the WLAN, including whether it supports standards such as 802.11k and 802.11r, the required cipher suites and authentication key management (AKM) methods, whether protection mechanisms are required, etc. The presence of certain information elements (IEs) indicates whether the related configuration is present. The figure below shows which fields are mandatory in a beacon frame. Note that this information is in the body of the management frame.
Below is a beacon frame in Wireshark. We can see a timestamp of 316618342401, which is used to keep time synchronized among stations in a BSS. Our beacon interval, also known as the target beacon transmit time (TBTT), is the default of 102.4ms. The required "Capability Info" field is expanded below. The SSID being advertised by the beacon is "Taynouse", and the supported data rates are listed following it. It is important to capture your own beacons and start poking around; the list of optional fields is much longer than the list of required fields. I highly recommend downloading a copy of the 802.11-2016 standard for free here and searching for each of these fields yourself. The CWAP objectives state that you should be able to determine the configuration of a BSS from looking at a decoded BSS frame. I have highlighted the areas of importance below.
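If you want to pull this BSS configuration out of your own captures programmatically rather than clicking through Wireshark, here is a rough sketch using scapy's 802.11 layers; the capture filename is a placeholder, and only a few common element IDs (SSID, supported rates, DS parameter set) are decoded:

```python
# Sketch of pulling BSS configuration out of captured beacons with scapy.
# "beacons.pcap" is a placeholder capture; element IDs: 0 = SSID,
# 1 = Supported Rates, 3 = DS Parameter Set (primary channel). Rates with
# the top bit set are basic (required) rates.
from scapy.all import rdpcap
from scapy.layers.dot11 import Dot11Beacon, Dot11Elt

for pkt in rdpcap("beacons.pcap"):
    if not pkt.haslayer(Dot11Beacon):
        continue
    print(f"BSSID {pkt.addr3}, beacon interval "
          f"{pkt[Dot11Beacon].beacon_interval} TU")
    elt = pkt.getlayer(Dot11Elt)
    while elt is not None:
        if elt.ID == 0:       # SSID (empty info suggests a hidden SSID)
            print("  SSID:", elt.info.decode(errors="replace") or "<hidden>")
        elif elt.ID == 1:     # Supported Rates
            for b in elt.info:
                rate = (b & 0x7F) * 0.5
                kind = "basic" if b & 0x80 else "supported"
                print(f"  {rate} Mbps ({kind})")
        elif elt.ID == 3:     # DS Parameter Set
            print("  Primary channel:", elt.info[0])
        elt = elt.payload.getlayer(Dot11Elt)
```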
APs may disassociate clients for various reasons, including failure to properly authenticate, load balancing or timeout reasons, entering a state of maintenance, etc. The 802.11-2016 standard includes a list of disassociation reasons. When a station is disassociated, it still maintains its authentication. This makes it easier for the client to associate again in the future. The table below is part of table 9-45 showing reason codes for disassociation from the 802.11-2016 standard. In the example below, we can see reason code 8 (LEAVING_NETWORK_DISASSOC): Disassociated because sending STA is leaving (or has left) BSS. Deauthentication frames are used to reset the state machine for an associated client. The authentication process takes place prior to association; therefore, if a station is deauthenticated, it is also disassociated. Deauthentication frames also include a reason code in the body of the frame from the table mentioned above. Know that deauthenticating a client resets their process in the 802.11 state machine back to step 1. Action frames are management frames that trigger an action to happen. The list of management frame subtypes had become exhausted, so instead of creating new management frame subtypes as new technologies required them, the action frame can be used. Unicast action frames are acknowledged like other management frames; a separate Action No Ack variant (shown below) exists for cases where no ACK is expected. Action frames were first introduced in the 802.11h-2003 standard, which also introduced transmit power control (TPC) and dynamic frequency selection (DFS). The 802.11-2016 standard includes action frames for many categories such as spectrum management, QoS, HT, VHT, radio measurements, and many more. The table below, from the 802.11-2016 standard, shows the spectrum management action frames. Below we can see the action frame type of "Action No Ack" and an example frame used to communicate a compressed beamforming report. This action frame is an "add block ack response" (ADDBA) action frame. It is used to set up the block ack policy for the exchange of blocks of QoS data frames. Timing advertisement frames were introduced in 802.11p-2010; this standard describes how Wi-Fi can be used in vehicular environments. This type of management frame is not in use today and is expected to be used to communicate time values to devices that cannot maintain their own timing. Control frames are used to control access to the medium and are used for frame acknowledgement. Control frames only contain a header and trailer, no body. The control frame types bolded in the table below are only used in point coordination function (PCF) based wireless networks. These were never implemented in the real world.
|0100||Beamforming Report Poll|
|0101||VHT/HE NDP Announcement|
|0110||Control Frame Extension|
|0111||Control Wrapper|
|1000||Block ACK Request|
|1001||Block ACK|
|1010||PS-Poll|
|1011||RTS|
|1100||CTS|
|1101||ACK|
|1110||CF-End|
|1111||CF-End + CF-ACK|
Request to Send – RTS Stations send RTS frames to reserve the medium for the amount of time, in microseconds, found in the duration field in the frame header. RTS and CTS frames are very simple. The medium will not be reserved for the station until it receives a clear to send frame response from the access point. I explain the RTS/CTS process in detail in my Wireless Contention Mechanisms post. RTS/CTS are used as a NAV distribution method as part of the virtual carrier sense process. Clear to Send – CTS Frame sent by an AP in response to an RTS frame sent by a station. CTS messages are sent at the lowest mandatory data rate, allowing them to reach all stations in the BSS. They only use the receiver address (RA) field in the header.
The station in the receiver address field is the one that will be transmitting frames. Acknowledgement – ACK ACK frames create a delivery verification method; they are expected after the transmission of data frames to confirm receipt of the frame. If the CRC check fails, the receiver will not send an ACK. If the sender does not receive an ACK, it will retransmit the frame. PS-Poll frames are used in the legacy 802.11-1997 power save method to request frames buffered on the AP while the client was sleeping. Clients include their AID in the Duration/ID field when sending PS-Poll frames. The process is covered in greater detail in my Power Save Methods post. Block ACK / Block ACK Request Introduced in 802.11e-2005, block acknowledgements are used to confirm receipt of a block of QoS data frames. A station will send multiple QoS data frames followed by a block ack request (BAR). The AP will send back a block ack frame that includes a bitmap indicating which frames were received. With this method, only the frames the bitmap marks as not received are retransmitted. This increases overall network efficiency by reducing the number of ACK frames that need to be sent. The block ack below shows a BA Ack Policy of 0, meaning immediate acknowledgement of the transmitted frames is required. Beamforming Report Poll Beamforming report poll frames are sent from the beamformer (the AP) to beamformees (STAs) to request additional feedback about the RF conditions. This frame is sent to the second and subsequent beamformees; it allows the AP to update its steering matrix for sending in MU-MIMO environments. VHT/HE NDP Announcement Null data packet (NDP) announcement frames notify the recipient that an NDP will follow. The figure below shows the frame exchange process. The beamformer (AP) will request that the station send an NDP sounding frame by setting the training request (TRQ) value in the Link Adaptation Control subfield of the HT Control field. The information gathered from the sounding frame can be used to calculate a steering matrix for the purpose of using beamforming for future transmissions to the same station. Control Wrapper Per the IEEE 802.11-2016 standard, the Control Wrapper frame is used to add the HT Control field to other control frames. This is accomplished by "wrapping" (or encapsulating) the original control frame, minus the Duration/ID, Address 1, and the FCS, in a control wrapper frame. We can see below a "Carried Frame Control" value that indicates the subtype value of the control frame being carried. This is how 802.11n HT capability information is added to control frames. Control Frame Extension Added in 802.11ad – Directional Multigigabit (DMG), which defines the use of Wi-Fi in the 60GHz frequency range – control frame extension frames reuse 4 bits of the frame control field (B8-B11) for additional control frames that are used with DMG. The list of additional control frames for DMG can be found in the table below from the 802.11-2016 standard. Data frames are used to transfer information or trigger an event. Not all data frames contain a payload; some are "null data frames" and only contain a header and trailer. The data frame types bolded in the table below are only used in HCF controlled channel access (HCCA) or point coordination function (PCF) based wireless networks. These were never implemented in the real world. This leaves only 4 to pay attention to.
|0000||Data|
|0001||Data + CF-ACK|
|0010||Data + CF-Poll|
|0011||Data + CF-ACK + CF-Poll|
|0100||Null (no data)|
|0101||CF-ACK (no data)|
|0110||CF-Poll (no data)|
|0111||CF-ACK + CF-Poll (no data)|
|1000||QoS Data|
|1001||QoS Data + CF-ACK|
|1010||QoS Data + CF-Poll|
|1011||QoS Data + CF-ACK + CF-Poll|
|1100||QoS Null (no data)|
|1110||QoS CF-Poll (no data)|
|1111||QoS CF-ACK + CF-Poll (no data)|
Simple Data Used when communicating to a non-QoS station. Broadcast/multicast traffic is typically sent as a simple data frame unless the station knows that all stations within the BSS are QoS capable. QoS Data Used when a QoS station transmits to another QoS station. The header in QoS data frames contains a QoS control field that indicates the access category (AC), policy type, and payload type. Null Data / QoS Null Data Used to transmit control information without carrying any data. Some stations may use null data frames to indicate that they are entering power save mode or that they are waking up. Attached is a PCAP file that you can use to apply filters and view the frames for yourself to better understand the frame format and values. The frames that can be found include: association request/response, authentication request/response, probe request/response, 4-way handshake, RTS/CTS, QoS and simple data frames, and more! It also includes captures of the data frames for inspection of layers 3-7. To decrypt the data frames in this capture, open preferences, select IEEE 802.11, select "Edit…" next to Decryption keys, and enter the PSK and SSID as shown below. Below is a list of filters you can apply and the types of frames or frame exchange that will be shown.
|frame.number >= 9250 && frame.number <=9274||Disassociation, Deauthentication, Authentication, Association Request/Response, 4-way handshake (EAPOL), ACKs|
|frame.number == 4505 || frame.number == 4507||Station using action frame to request 802.11k neighbor report and AP responding with report.|
|(wlan.fc.pwrmgt == 1) && (wlan.fc.type_subtype == 0x0024)||Station using null data frame to notify the AP that it is going to sleep.|
|wlan.fc.type_subtype == 0x000a||Station sends disassociation frame to AP with "STA is leaving BSS" reason code. AP sends disassociation frame to STA with "Unknown" reason code.|
|wlan.fc.retry == 1||Shows number of times a frame had to be retransmitted. 2.4% of frames in capture.|
|wlan.fc.type_subtype == 0x000c||AP sends deauthentication frames to STA with reason codes "Unknown" and "Class 3 frame received from nonassociated STA" meaning that the STA transmitted frames prior to association.|
|wlan.fc.type_subtype == 0x0005 || wlan.fc.type_subtype == 0x0004||Shows all probe requests and probe responses.|
|frame.number >= 15946 && frame.number <= 15949||AP sends RTS to STA, AP sends CTS with RA as itself to indicate that it is clear to transmit frames, AP sends QoS data frame to STA, and STA sends a Block ACK to confirm receipt.|
It is very satisfying once you understand how to perform the detective work to troubleshoot a wireless issue that requires protocol analysis. The sheer number of frames and their unique elements may seem overwhelming when studying for the CWAP exam, especially the frames that only show up every so often and aren't obvious in their intent, such as action and null data frames. Practice makes perfect. Real-world experience with over-the-air packet captures and performing protocol analysis goes a long way. For some of the more complex processes, such as NDP sounding, I found it best to focus on the basics.
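If you prefer to script this detective work rather than click through Wireshark, the same kinds of filters translate to a few lines of Scapy. This is a sketch: it assumes Scapy is installed and that a monitor-mode capture file named capture.pcap exists (both hypothetical here):

```python
from scapy.all import rdpcap, Dot11  # pip install scapy

frames = rdpcap("capture.pcap")          # hypothetical capture file
dot11 = [f for f in frames if f.haslayer(Dot11)]

# Equivalent of the wlan.fc.retry == 1 filter (retry is bit 0x08 of the FC flags)
retries = [f for f in dot11 if f[Dot11].FCfield & 0x08]
print(f"{len(retries)} of {len(dot11)} frames are retransmissions")

# Equivalent of wlan.fc.type_subtype == 0x0004/0x0005 (probe requests/responses)
probes = [f for f in dot11 if f[Dot11].type == 0 and f[Dot11].subtype in (4, 5)]
for f in probes[:5]:
    print(f[Dot11].addr2, "->", f[Dot11].addr1)
```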
Many of these frame types have multiple levels of understanding. The next step is to understand the frame exchanges in which these frames are used. I hope these short explanations, visuals, and attached PCAPs help you better understand the purpose of each frame type by showing the format and a decoded frame within Wireshark. I don't believe there is such a thing as "too much practice" for the CWAP exam; perform as many packet captures as you can and try to picture the stations communicating with the AP. IEEE 802.11-2016 Standard
<urn:uuid:7cca1c8e-2010-42f4-8118-030c5fa0e4c2>
CC-MAIN-2022-40
https://howiwifi.com/2020/07/13/802-11-frame-types-and-formats/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00253.warc.gz
en
0.874394
5,368
2.8125
3
2FA or MFA (Two or Multi-Factor Authentication) The two-factor (2FA) or multi-factor authentication (MFA) method uses two or more factors to authenticate a user. It is considered more secure than the conventional single-factor authentication method described in the previous article (Guide to Digital Identity — Part 2). In the digital age, much of our lives happens on laptops and mobile devices, and cybercriminals often attack our digital accounts. 2FA or MFA forms an extra layer of protection that provides a more secure authentication process and helps slow the rate of cybercrime. Two authentication methods, step-up and adaptive authentication, both use 2FA or MFA. Let's start by talking about them. Step-Up Authentication: This method significantly lowers the risk of a hacker accessing your online accounts. It involves asking a user to authenticate themselves using the following factors during login: - First, authenticate using something you know (password). - Then, authenticate with a second factor via something you have (mobile phone, security key) or something you are (biometrics). For example, a banking portal requires you to provide a user ID and password, and then to enter the OTP received on your registered mobile number. In this case, the OTP on your mobile number works as a second factor of authentication. Similarly, another 2FA factor can be used instead of an OTP via SMS. Adaptive Authentication: This method protects users from fraud in cases of unusual account activity. It involves asking a user to authenticate themselves again based on the configured risk profile or the user's typical pattern of using the application. It uses something you have or something you are as the authentication factor. For example, an e-commerce application might require a logged-in user to authenticate themselves in the following scenarios: - Multiple subsequent unsuccessful transaction requests (risk profile). - Bulk order creation that costs a considerable amount (unusual account activity, i.e., the user never created a bulk order in the past). Next, let's talk about the popular types of 2FA or MFA used for step-up and adaptive authentication: 1. Security Key / Hardware Token These are physical devices given to authorized users of a computer system or service for authentication. Hardware tokens are small, portable devices that either store unique cryptographic keys or user biometric information. They can also refer to devices that display a Personal Identification Number (PIN) that changes dynamically at a set frequency. After connecting the hardware token to a laptop or mobile, you can use it for authentication in the following ways: - The application reads the cryptographic key and authenticates you. - You scan your fingerprint on the device for authentication. - You enter the PIN displayed on the device for authentication. This 2FA method can be useful when: - Your target audience doesn't have proper cell phone connectivity or internet on mobile to get an OTP or SMS. - You do not want users to use their mobile phones for authentication for security reasons. Advantages of hardware token-based 2FA: - It does not require internet connectivity to generate tokens. - Secure and reliable, as the devices are designed to perform one task. Disadvantages of hardware token-based 2FA: - Expensive to set up and maintain. - Easily lost or misplaced. 2. OTP (SMS or Voice) OTPs are generated on the server side and sent to the user's mobile number.
OTP generation algorithms are used to create a random, unpredictable, and irreversible sequence of OTPs, which can be delivered via SMS or voice call. The user then enters the received OTP for authentication. This method can be useful when you want to utilize the user's phone number for 2FA, or your target audience doesn't have proper internet connectivity on their mobile devices. Advantages of OTP-based 2FA: - User-friendly, since it is based on an SMS/voice call. - Inexpensive to set up and maintain. Disadvantages of OTP-based 2FA: - Third parties can intercept SMS/voice calls. - A phone is required to receive an SMS/voice call and complete 2FA. 3. Device-Based Authentication In this case, the mobile device itself acts as a token and utilizes particular factors unique to the device. If the device's unique factor and the value stored in the database are the same, the application completes the step-up or adaptive authentication. Step-up or adaptive authentication uses the following unique factors: - International Mobile Equipment Identity (IMEI) number: The IMEI number is unique to each mobile phone; it is readable on the phone itself and can be matched against the value stored in the server's database. It allows the user to identify themselves by that device. - International Mobile Subscriber Identity (IMSI) number: The IMSI is a unique number associated with the SIM card in the mobile phone; it is readable from the SIM itself and can be matched against the value stored in the server's database. It allows the user to identify themselves by that SIM. If the IMEI or IMSI number of the device and the values saved in the application database are the same, then the user is authorized. This 2FA method can be useful for mobile apps that are connected to the user's mobile number, by ensuring additional security. For example, when you set up a payment app on your mobile device, it requests you to set a PIN/password/fingerprint and then requests an SMS from your linked phone number. This message contains the IMSI number of your SIM. Later, you can log into the payment app by just providing the set PIN/password/fingerprint as long as you have the phone number used for 2FA in your device. However, it is possible that the payment app will only allow one active session at a time on the mobile device. Advantages of device-based 2FA: - Once set up, it works in the background, i.e., no user involvement is required until the user changes device or SIM. - Highly secure, as the user account cannot be accessed on any device other than the registered device. Disadvantages of device-based 2FA: - Third parties can intercept SMS messages. - A phone is required to receive SMS and complete 2FA. 4. SMS-Based Authentication In SMS-based authentication, the mobile phone sends user-specific unique identification information to the server via an SMS to authenticate the user. The server then checks the content of the SMS. If the content is correct, the server randomly generates an OTP and sends it to the mobile phone. The OTP can be used within a fixed time interval. This 2FA method can be handy when the application is connected to the user's mobile number, as it ensures additional security. For example, when you set up a financial app on the mobile phone, the login process can be completed in the following two steps: - You enter the PIN/password or scan your fingerprint. - The app requests you to select a phone number with which you have an account registered. Then you need to send an SMS to the app, which acts as the 2FA method for you to log into the account. Advantages of SMS-based 2FA: - User-friendly, since it is SMS based.
- Inexpensive to set up and maintain. Disadvantages of SMS-based 2FA: - Third parties can intercept SMS messages. - A phone is required to send the SMS and complete 2FA. 5. Biometric Authentication Biometric authentication is done using fingerprint, retina, or face recognition. For more details on these biometric authentication types, refer to the biometric authentication section of the previous article (Part 2). This 2FA method is useful when it is necessary to strengthen security and ensure that only the intended user is logging into the application. For example, an employee can log in and perform various operations on the organization portal. However, to mark their attendance, the employee needs to scan their finger or use face recognition. Advantages of biometric-based 2FA: - Separate token generation is not required, as the user is the token. - Multiple options are available, such as fingerprint, retina, and face recognition. Disadvantages of biometric-based 2FA: - Requires additional hardware to read and verify biometric data. - Storing the biometric data raises privacy concerns. - If compromised, biometric data cannot be reset. 6. Authenticator App / Soft Token Authenticator app-based authentication uses a software-generated one-time password, also referred to as a soft token. For this authentication method, the user must download and install a 2FA app on their mobile device. Soft tokens are time-based, i.e., each code expires after the configured time interval. This adds an additional security layer compared to SMS-based OTP. Unlike SMS-based OTP authentication, this method requires a smartphone and an internet connection to install and enroll the app, though once set up it can generate codes offline. This 2FA method can be useful when the target audience does not have reliable internet or cell network connectivity on their mobile devices. A popular authenticator app is the Google Authenticator app. (A minimal sketch of how these time-based codes are generated appears at the end of this article.) Advantages of software token-based 2FA: - It does not require an internet or cell network connection to generate the token. - A single authenticator app can be utilized for multiple applications. - More secure compared to SMS/OTP-based 2FA. Disadvantages of software token-based 2FA: - Requires additional software installation. - Expensive to implement and maintain. 7. Push Notifications In this case, a push notification is sent to the user for authentication. The user can either accept or decline the access request with a single touch. This method eliminates the need to enter an OTP or use biometrics for 2FA. It is also a highly secure 2FA method and helps reduce man-in-the-middle and phishing attacks. Unlike SMS- and OTP-based authentication, this method requires an internet connection and a smartphone. This 2FA method can be useful when the target audience has an internet connection on their mobile device and, for ease of use, you don't want them to type an OTP. Instead, you send them a push notification with the option to accept or deny. Advantages of push notification-based 2FA: - User-friendly, as users can click once to allow or deny authentication. - Secure, as it cannot be used until the mobile device is unlocked. Disadvantages of push notification-based 2FA: - It requires a phone to receive a push notification. The next article will be dedicated to Single Sign-On (SSO) authentication, with information on how it works, SSO types, and various SSO protocols. Stay tuned! Originally published at Medium
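Appendix: to make the soft-token mechanism described above concrete, here is a minimal TOTP sketch following RFC 6238, using only Python's standard library. The secret is purely illustrative; real authenticator apps receive theirs from the service at enrollment, usually via a QR code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over a time counter, dynamically truncated."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # 30-second time steps
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # illustrative secret, not a real credential
```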
<urn:uuid:afd88ec6-17ed-4ebe-bf1c-3a934020ad65>
CC-MAIN-2022-40
https://guptadeepak.com/guide-to-digital-identity-part-3-2fa-or-mfa/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00253.warc.gz
en
0.883241
2,345
3.421875
3
Introduction to ISP & VPN Internet service providers (ISPs) facilitate Internet navigation and transmit all your Internet packets, whereas a VPN creates a secure tunnel in which data is encrypted during transmission. People often confuse the two terms, are unclear about their purpose, and are unsure exactly what is exposed to the Internet service provider (ISP). Let's look at the difference between the two in more detail. What is ISP? An Internet service provider (ISP) is a company that provides Internet access and related services to its customers, such as hosting services, browsing services, live streaming, Internet TV subscriptions, etc. The ISP creates a link between systems and the Internet. Whenever a request is sent from a system to access information on the Internet, it first goes through the ISP and then reaches its destination. Some common examples of ISPs are VSNL, AT&T, Verizon, etc. The Internet access provided by ISPs can be divided into many types, as under: – Dial up Internet access – the oldest form of technology to provide Internet access, using a modem-to-modem connection over a telephone line. The user's system is connected to the Internet via a telephone line and modem. It is very slow and used where broadband is not available. Digital Subscriber Line (DSL) – an advanced version of dial-up Internet access. It allows the Internet and phone connection to operate simultaneously over high frequencies. Wireless Broadband – gives high-speed wireless Internet. This requires a dish placed at a high point and aimed in the direction of the transmitter of the Wireless Internet Service Provider (WISP). Wi-Fi Internet – also called 'wireless fidelity', this provides a high-speed wireless Internet connection through radio waves. It is commonly used in public places such as malls, airports, restaurants, etc. ISDN – stands for Integrated Services Digital Network; it is a telephone-based network that integrates high-quality digital transmission of voice and data on telephone lines. Ethernet – a wired LAN where systems are connected in a limited physical space. Devices communicate using a protocol, and it offers different speeds from 10 Mbps to 10 Gbps. How ISP works? Internet service providers (ISPs) and analytics systems such as Google Analytics capture all online activities performed by users. The ISP stores millions of users' profiles in terms of age, gender, interests, and preferences such as food, drinks, favourite music, movies, political preference, etc. ISPs collect and store the history of visited sites; when a user types a URL into the browser, the ISP can analyse the contents of the packets exchanged. From those packets, all information related to logins with passwords, search history, etc. can be obtained (in case the website is using an unencrypted HTTP connection for authorization). The ISP knows where you are connecting and can read all unencrypted traffic. When questions or concerns arise about the privacy of data passing through the ISP, the VPN (Virtual Private Network) comes onto the scene. When online privacy is a major concern, a VPN is the answer to the problem. A VPN creates a virtual private tunnel between you and itself. Data passing through the tunnel is encrypted. A VPN application masks the actual IP address and replaces it with a virtual IP address. When the system is connected via the VPN tunnel, the traffic passing through the tunnel is not visible to the outside world. How VPN Technology works?
A VPN routes the device's Internet connection through the chosen private VPN server instead of the Internet service provider (ISP), so data transmitted to the Internet comes through a private tunnel. This hides the actual IP address of the system and protects its identity, and since the data is encrypted in transit it will be unreadable if intercepted before it arrives at its final destination. Encryption hides data in such a way that it cannot be read unless a strong, password-like key is available. The device is seen as being on the same local network as the VPN, and the IP address assigned is actually the IP address of the VPN provider's server. (A toy illustration of this encrypt-before-transit step appears at the end of this article.) Key Protocols used in VPN Technology Some of the key protocols used in VPN technology nowadays are: - Point to Point Tunneling Protocol (PPTP) – one of the oldest protocols used on the Internet. - Layer 2 Tunnelling Protocol (L2TP/IPSec) – a combination of PPTP and Cisco's L2F protocol. It is more secure than PPTP; it does not have its own encryption capabilities but uses IPsec, which is a security protocol. - Secure Socket Tunnelling Protocol (SSTP) – created by Microsoft and based on the same SSL/TLS encryption used by websites. It is a very secure protocol. - Internet Key Exchange, version 2 (IKEv2) – a more secure successor to L2TP, made in collaboration between Cisco and Microsoft. It is bundled with IPsec and mainly used on mobiles. - OpenVPN – an open source technology and one of the most popular protocols, widely considered the most secure. Benefits of VPN Technology Some of the benefits of VPN are as under: - It gives a secure connection and protects systems from hackers and other online threats - It allows access to restricted websites - It hides the actual IP address and protects your identity from the outside world - VPNs are significantly cheaper and easy to set up - Ability to safely connect to public networks Comparison Table: ISP vs VPN Below table summarizes the differences between ISP and VPN:
|Definition||Internet service provider (ISP) is a company which provides Internet access and related services to its customers such as hosting services, browsing services, live streaming, Internet TV subscriptions etc.||VPN creates a virtual private tunnel between you and itself. Data passing through the tunnel is encrypted.|
|Features||ISP can see whatever you do, unencrypted traffic||Secure, data is encrypted, scalable, unrestricted access to websites, easier to set up and cheaper|
|Providers||AT&T, VSNL, Verizon, Comcast, Qwest, AOL etc.||Cisco VPN, Express VPN, Nord VPN, IPVanish etc.|
Download the comparison table: ISP vs VPN
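To make the "encrypted tunnel" idea concrete, here is a toy Python sketch of the encrypt-before-transit step using the cryptography library. This is a conceptual illustration only: real VPN protocols such as IKEv2/IPsec or OpenVPN negotiate keys during a handshake and encapsulate entire packets rather than calling a library like this:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In a real VPN the key is negotiated with the server (e.g., via IKEv2),
# not generated locally like this.
key = Fernet.generate_key()
tunnel = Fernet(key)

packet = b"GET /account HTTP/1.1\r\nHost: example.com\r\n\r\n"
ciphertext = tunnel.encrypt(packet)      # all the ISP sees: opaque bytes to the VPN server
plaintext = tunnel.decrypt(ciphertext)   # the VPN server recovers and forwards the request
assert plaintext == packet
```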
<urn:uuid:4cf50c5b-f8b6-474d-89a5-41550fff7780>
CC-MAIN-2022-40
https://networkinterview.com/isp-vs-vpn-know-the-difference/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00253.warc.gz
en
0.912544
1,275
3.40625
3
There are few things as scary as a cyberattack – one of them being a hack that is undetectable. German researchers have recently discovered such a threat. While the exploit has yet to be leveraged maliciously, it still illustrates that anything is possible and cybersecurity must be a top priority for businesses around the world. Undetectable software hack underscores severity of threats Ruhr University computer experts have discovered a way to inject harmful code into unrelated downloads. This means that a file does not have to be malicious in nature to do serious damage. Now that this backdoor has been discovered, it is likely only a matter of time before it is exploited. "Our algorithm deploys virus infection routines and network redirection attacks, without requiring to modify the application itself," the Ruhr University group stated. "This allows to even infect executables with a embedded signature when the signature is not automatically verified before execution." Deception is one of the greatest tools in a hacker's arsenal. By fooling unsuspecting users or altering software that has innocent intentions, cybercriminals can develop all sorts of ways to attack a potential victim. This is why security strategies have to be all-encompassing and layered with several different kinds of protection. Faronics Deep Freeze to the rescue While there are a number of different programs that can be leveraged in this way, the cornerstone of any strategy should be Faronics Deep Freeze. Using a reboot to restore process, employees can wipe their computers of harmful programs without losing any of their settings. Having to reapply system specifications manually is time consuming and prone to human error. But with Faronics Deep Freeze, companies can make sure that when issues inevitably creep up, they are readily handled and downtime is minimal at most.
<urn:uuid:03a7ba75-7cdf-4123-9808-81072a0aa02f>
CC-MAIN-2022-40
https://www.faronics.com/de/news/blog/new-vulnerability-illustrates-need-for-deep-freeze-627
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00253.warc.gz
en
0.946895
357
2.796875
3
Engineers have completed the first "Center of Curvature" test on the primary mirror of NASA's James Webb Space Telescope to measure the shape of the mirror before the telescope goes through a series of launch environment tests. NASA engineers collaborated with technicians from Ball Aerospace & Technologies and the Space Telescope Science Institute to conduct the first optical measurement of Webb's main mirror at the agency's Goddard Space Flight Center in Maryland, NASA said Wednesday. A team of optical engineers used wavelengths of light and a computer-generated hologram to compare and measure changes in the mirror's position or shape through the use of an interferometer. The team will repeat the test once the telescope completes its space launch environment tests. NASA completed the pre-Center of Curvature test after Northrop Grumman received the fifth sunshield layer from NeXolve for integration with the Webb telescope, which is the successor to the agency's Hubble Space Telescope. Northrop built the telescope as the project's prime contractor. NASA collaborated with the Canadian Space Agency and the European Space Agency on the Webb telescope, which will work to generate images of the first galaxies and planets.
<urn:uuid:2e9954e0-ce8f-4772-9f78-bd39fbf9c041>
CC-MAIN-2022-40
https://blog.executivebiz.com/2016/11/nasa-puts-webb-telescopes-primary-mirror-through-1st-center-of-curvature-test/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00453.warc.gz
en
0.903499
234
3.359375
3
Like adults, kids have little choice but to opt in when it comes to the internet: they use laptops and online platforms for school assignments and remote learning, and they use social media to stay connected to friends and family. In fact, many of them are ahead of the curve when it comes to tech. According to the Pew Research Center, children as young as 5 are now engaging with smart devices. Today's parents might reflect on their own behavior as early adopters of the internet, which has since become an ever-present tool. As young students access digital interfaces, parents can equip them with guidelines for painless navigation. Here are five tips to share with them: Kids who've been on nature field trips might be familiar with the idea of leaving a place as you found it. The same can be said for internet activity, including Google searches. Young students can benefit from the knowledge that search history is retained on phones and computers, and if they're logged in with an email address, it can be traced back to them. Not that you should encourage them to cover their tracks or be ashamed of questions they might have for Google. Instead, explain to them that while search engines can be an excellent tool for research, internet activity like searches and web pages they visit is not anonymous and does not vanish once they turn off the computer. This is especially good to note while they are using school computers. Even if they don't have bank accounts or credit cards, kids still have private information that deserves protection. When creating any new account, whether for school or personal use, they will be prompted to create logins. Email, Google Classroom, Zoom, Buncee, Class Dojo, and many other apps require passwords to protect students' coursework and personal information. It's important to impress upon kids the power of a good password and its ability to protect their data. Of course, this also applies to passwords that protect devices like smartphones and laptops, plus all of their social media accounts. To help them manage passwords, add a family password manager to your Dashlane plan. You won't be able to see their passwords (unless they want you to), but you can help ensure that their passwords are strong and their logins are secure. As we mentioned above, it's good practice for students not to save personal information to public computers, like the ones they use at school. Remind them to log out of any accounts on a school computer and not to store login information on shared devices. Misinformation and disinformation campaigns are the bane of social media platforms and online discourse. Many adults fall victim to the campaigns, often administered by nation states and carried out through AI or third parties on Twitter and Facebook. But it's not for lack of vigilance—these strategies used by political actors are increasingly advanced and rampant. Younger students are encountering the world of digital discourse during a fraught time. As posts and ads vie for their attention, remind them of the tools at their disposal.
Starting a dialogue with your kids about the content they’re viewing and how they absorb it can help deter them from acting on these challenges, despite what their classmates might be doing. (For more on whether TikTok is safe for kids, read Dashlane’s article here.) Some threat actors target kids via phishing scams. Remind your children not to click links in emails from unknown senders or texts from unknown phone numbers. If the numbers or addresses look familiar, check them thoroughly to make sure they haven’t actually been changed by a single character to look legitimate. A good rule of thumb for kids to remember is that generally, a personal contact or legitimate organization will not send a link via text or email, or ask for any personal information such as a social security number. VeryWell provides this list of red flag phrases for parents and their children to look out for in emails.
<urn:uuid:ffb419ff-be63-4dc6-b240-a9b55ed50f94>
CC-MAIN-2022-40
https://blog.dashlane.com/5-online-safety-tips-for-kids-as-the-new-school-year-starts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00453.warc.gz
en
0.949991
911
3.046875
3
We're going to start the post with a quick exercise. Where do you live and work? Easy enough, right? Some of you probably can name a street, neighborhood, town, city, or state off the top of your head. Let's take the first question and change a couple of words – whose land do you live and work on? Some of you might already know whose land you live and work on. For those who do not, you can visit https://native-land.ca/ to find more information about the indigenous lands you currently occupy. As we wrap up Native American Heritage Month this week, we are taking some time to give some context around the land acknowledgment included in our recent talks. You can use the resources at the end of the post for your own acknowledgments that go beyond a statement of whose land you're on. Acknowledgment as The First Step LDH lives and works on the unceded, traditional land of the Duwamish People, the first people of Seattle. The sentence above is the start of the land acknowledgment in recent LDH talks. Many of us have encountered similar statements in various events and presentations. Land (or territory) acknowledgments sometimes stop here, naming the peoples whose land we're on. However, this approach lacks the full acknowledgment of how the land became occupied. It also doesn't acknowledge the present-day impact this occupation has on the people. The Duwamish Tribe was the first signatory to the Treaty of Point Elliott in 1855. The Tribe has been denied the rights established in the treaty for over 165 years. The United States Federal Government currently does not recognize the Duwamish Tribe, denying the Tribe the rights and protections of federal recognition. Naming the treaty is important in giving the historical context around the occupation of the land, but equally important is the explicit statement that the treaty has yet to be honored by the federal government. The Duwamish Tribe is not federally recognized, which is important to acknowledge because of its historical impact on the Tribe and its current impact on the Tribe's rights to funding for and access to housing, social services, and education, among other resources and services. The Duwamish People are still here, continuing to honor and bring to light their ancient heritage. Indigenous people are still here. It's easy to let a land acknowledgment stop at the past and not venture into the present. But an acknowledgment of the present has to go beyond education and head into action. Calls to Action A portion of the speaker's fee from the conference will be donated to Real Rent Duwamish. Real Rent serves as a way for people occupying this land to provide financial compensation to the Tribe for use of their land and resources – https://www.realrentduwamish.org/ The Tribe has started a petition to send to our state congresspeople to create and support a bill in Congress that would grant the Tribe federal recognition. The link to the petition is on the slide – https://www.standwiththeduwamish.org/ You are welcome to join me in donating to Real Rent or signing the petition. The second half of the acknowledgment consists of two specific calls to action. Each action provides the opportunity for event attendees to support or advocate for the Duwamish People whose land LDH occupies. Real Rent Duwamish provides financial support and resources for the Tribe through a voluntary land tax.
The petition aims to gather support for a bill granting the Tribe federal recognition, giving the Tribe access to services and resources available to other treaty tribes. If attendees cannot financially donate to Real Rent, they can provide non-financial support through the petition. LDH's acknowledgment focuses on calls to action around solidarity with the Duwamish People. Other land acknowledgments make the additional call for event attendees to research whose lands they occupy through https://native-land.ca/. Clicking on a specific territory will provide a page with resources where attendees can learn more about the Indigenous people whose land they're on. For example, the Duwamish Tribe page on the site also links to ways to support the Tribe. Other calls to action found in land acknowledgments include supporting water protectors, such as those working to stop Line 3. The list below offers some resources you can use to learn not only whose land you occupy but also what you and others can do to be in solidarity with Indigenous people, in your acknowledgments and beyond. - A guide to Indigenous land acknowledgment - Are you planning to do a Land Acknowledgement? - Territory Acknowledgement - Beyond territorial acknowledgments - Land Reparations & Indigenous Solidarity Toolkit - Rethinking Thanksgiving Toolkit - Acknowledging the Land, Building Deeper Relationships (h/t to Sandy Littletree for the link!)
<urn:uuid:f3518bb8-b5e3-480a-a863-6ec62877c612>
CC-MAIN-2022-40
https://ldhconsultingservices.com/2021/11/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00453.warc.gz
en
0.944144
1,026
3.203125
3
BrandPosts are written and edited by members of our sponsor community. BrandPosts create an opportunity for an individual sponsor to provide insight and commentary from their point-of-view directly to our audience. The editorial team does not participate in the writing or editing of BrandPosts. By Janet Morss In the development of artificial intelligence applications, the holy grail is the creation of an artificial neural network that functions like the human brain. This is an elusive goal, because the human brain is an extremely complex organ that functions in flexible and fluid ways that can be difficult to replicate in the world of AI. Today, a team of leading-edge scientific researchers are making breakthroughs in this area by using functional magnetic resonance imaging (fMRI) of the brains of people carrying out various cognitive tasks. The goal is to better understand and create computational models of how the brain works, and then use those models to train artificial neural networks to map images to actions quickly and accurately. For example, having a fully developed computational model of how memory works would make it possible to compare brain activity and to understand which model is playing out in the simulated brain of a patient. With this base, the research team could gain deep insight into the mechanics of memory function in those suffering from age-related brain illnesses, including Alzheimer’s disease and other forms of dementia. This would be a big leap forward for the AI world, according to one of the lead researchers on the project, Dr. Pierre Bellec, an associate professor at the University of Montreal. Dr. Bellec is the scientific director of the Courtois Project on Neuronal Modelling (NeuroMod), which is spearheading the collaborative research effort. “Something the brain does really well is to switch from one context to another,” Dr. Bellec explains in a Dell Technologies case study. “It has very elaborate organization, and specialized networks and subnetworks, and those networks and subnetworks are able to reconfigure dynamically. By contrast, current architectures used by AI researchers are extremely specialized for certain types of tasks, and have a hard time generalizing over different contexts.” The researchers hope that by mimicking the architecture of the human brain, they can develop a more versatile AI model that can generalize over different tasks, much the way the human brain does. To collect the datasets for this ambitious effort, the research team has recruited a small group of volunteers to watch videos, look at images and play video games while they are in an MRI machine. To enable these studies, the research team had to build a new game controller without any metal, printed in 3D plastic with a fiber optic cable connection. The machine allows the researchers to track and record the activity in the brains of the subjects as they carry out their tasks. The research team expects to gather many terabytes of data over the course of the five-year study as each subject will spend around 500 hours in the MRI machine. “Essentially, we are trying to find a new way to integrate activity from human neural networks to help train artificial networks,” Dr. Bellec says. “The hope is that if we manage to do that, we can create computational models of how the brain works. 
And potentially we can train new artificial neural networks that may perform better in some settings than what we have now.” To move this project forward, researchers from the University of Montreal teamed up with researchers from Dr. Alan C. Evans’ lab at McGill University who have extensive experience in high performance computing and work with MRI images that require large memory capacities. They also sought the help of Dell Technologies and Intel, along with the data science and supercomputing resources of the Dell Technologies HPC & AI Innovation Lab in Austin, Texas. The team is using the lab’s Intel-based Zenith cluster, which includes Dell EMC PowerEdge™ servers with Intel® Xeon® Scalable Processors and the Intel® Omni-Path Architecture. A CPU architecture with big memory After testing on a GPU architecture, the team found that a CPU-based model can maintain similar performance — with validation accuracy reaching 99 percent after 10 epochs in distinguishing five types of body movements, and 91 percent after 20 epochs in classifying eight types of visual working-memory tasks. At the same time, the CPU-based model requires much less training time — 20 minutes vs. 3 hours per epoch — when using 10 CPU nodes and two GPU cards, respectively. Considering CPU resources can often be more easily accessed, the project provides a feasible solution for the application of deep neural networks on large-scale neuroimaging data by training the model directly on CPU hubs instead of waiting for other resources. The deluge of data associated with this research effort makes it even more important to have ready access to systems with big memory, which is what the team is getting through the Zenith supercomputer. This all-CPU system is on the Top 500 list of the world’s most powerful HPC machines, and it has been designed to support massively parallel traditional scientific applications as well as emerging machine learning workloads. “Many people are excited about being able to evolve neural networks in ways that are inspired by biology, and it’s increasingly clear that we need a different type of hardware to do that,” Dr. Bellec says. “And that’s what we have with the Zenith cluster in the Dell Technologies HPC & AI Innovation Lab.”
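For readers curious what "training the model directly on CPU hubs" can look like in practice, here is a minimal, hypothetical sketch using PyTorch's distributed data-parallel package with its CPU-oriented gloo backend. The article does not name the team's actual framework, model architecture, or data pipeline, so every name and number below is illustrative:

```python
"""Toy data-parallel CPU training sketch (PyTorch DDP, gloo backend).
Run with: torchrun --nproc_per_node=4 train_cpu.py
"""
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("gloo")  # gloo targets CPU clusters; NCCL would target GPUs

# Stand-in classifier for fMRI volumes; the study's real model is not described here
model = DDP(nn.Sequential(
    nn.Flatten(),
    nn.Linear(1 * 64 * 64 * 48, 128),  # fake 64x64x48 volume, flattened
    nn.ReLU(),
    nn.Linear(128, 8),                 # e.g., 8 working-memory task classes
))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                  # toy loop over synthetic data
    x = torch.randn(8, 1, 64, 64, 48)   # fake fMRI volumes
    y = torch.randint(0, 8, (8,))       # fake task labels
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                     # DDP all-reduces gradients across workers
    opt.step()

dist.destroy_process_group()
```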
<urn:uuid:d6f18375-aa4b-4fbd-ac87-77fe6960bf3c>
CC-MAIN-2022-40
https://www.cio.com/article/193760/accelerating-brain-mapping-with-ai-and-hpc.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00453.warc.gz
en
0.940032
1,131
2.90625
3
It seems that the healthcare system is finally catching up with the rest of the world from a healthcare technology standpoint. Advancements in healthcare IT are aiding in stabilizing costs and increasing access to healthcare treatment. However, these innovations, while much needed, raise serious concerns about data privacy and cybersecurity. Today we'll take a look at how exactly the healthcare system is advancing technologically, and what that means for your data privacy. Technological Improvements to Healthcare While drastic improvements are being made to technology within homes and businesses, for many years it seemed that the healthcare system was stuck with the same outdated methods of doing things. Luckily, recent technological advances in healthcare IT are bringing our antiquated healthcare system into the 21st century and making big improvements to its function and accessibility. Easier Access to Medication One of the biggest holdups when it comes to medication delivery has always been gaining prior authorization, meaning patients cannot gain access to their medication without a prescription from their doctor and insurance approval. This process used to take days or longer to complete, but with the digitization of the prior authorization process, patients no longer have to wait this excruciating amount of time before starting their treatment. Smarter Medical Devices Smart healthcare IT systems such as EMRs allow doctors, nurses, and patients to gain access to essential medical information faster than ever before. EMRs are electronic medical records, and they have completely changed how medical record keeping is done in doctors' offices and hospitals. Not only does this make doctors' appointments faster for the patient, but it also gives healthcare workers access to information that may not have been available to them in the past. Better Emergency Strategies The COVID-19 pandemic, while a tragedy, has taught the healthcare IT world a lot about the essential updates needed in the event of another global emergency. Healthcare organizations now use advanced databases and other online tools to get ahead of potential outbreaks of contagious diseases more quickly and accurately. How Does this Affect Data Privacy? While all of this advancement is wonderful and has so many benefits for healthcare professionals and patients alike, it isn't a perfect system yet. The more the healthcare system turns to technology for personal health information (PHI) storage and sharing, the greater the risk of cyberattack. This has been an inhibitor of advancement in healthcare IT for many years, as cybercriminals are always on the hunt for an individual's personal information, and nothing is more personal than PHI. Therefore, the more healthcare organizations and insurance companies focus on healthcare IT implementation through databases like EMRs, the more they'll have to invest in cybersecurity. Because the issue is so new, there are currently no regulations governing it. There's a push in the U.S. to have the Food and Drug Administration establish clear cybersecurity guidelines that organizations must meet to protect their patients' information. Have you noticed an increase or change in the use of technology at your recent doctor's visits? Leave your thoughts in the comments! Visit our other blogs as we discuss how technology is evolving and what that means for businesses and society today.
<urn:uuid:b82f97f7-e1f9-4041-9859-7a68045c060f>
CC-MAIN-2022-40
https://www.aetechgroup.com/healthcare-tech-improving-care-and-data-privacy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00453.warc.gz
en
0.958645
638
3.046875
3
As DoD Eyes Internet of Things for New Use Cases, Security Concerns Remain The "Internet of Things" isn't exactly a new concept; Kevin Ashton coined the term the Internet of Things as far back as 1999. Now, 15 years later, the Department of Defense (DoD) is eager to exploit it. What is the Internet of Things? Simply put, the "Internet of Things," or IoT for short, refers to the next evolution of the internet, when everyday objects are networked to the web and each other. Smart watches, connected cars, appliances, houses, and more – very soon every physical thing will be accessible through the internet. The use cases for IoT are infinite and varied. The GSA, for example, is working with IBM to monitor the energy use of its facilities. Similar sensor devices can be used to monitor jet engines, or any other structure you can think of! DoD leaders recently gathered at the Technology Training Corporation's IoT symposium - a two-day symposium on sensor technologies and their use by government. Several use cases are already underway, as noted by GovWin's Alex Rossino. For example, the U.S. Southern Command uses IoT technology in its GeoShare program for humanitarian assistance. GovWin also noted other use cases that the defense establishment is investigating, including: - Base Facilities Maintenance – trash pickup, light replacement, food replenishment - Vehicle management – maintenance prediction, location tracking - Secured, smart workplace – presence for workers integrated with facilities management - Logistics and transportation – inventory/tracking, automated assembly/packing, geo-location in supply chain - Robotics – autonomous drones and vehicles, sensor based maneuvering Security Concerns Remain These smart "things" don't come without risk. As we noted late last year in The Internet of Hackable Things, as the number of connected devices rises exponentially, the number of hackable things does too. Consider these scenarios: A man walks into a government agency and unknowingly sends network passwords to hackers with his "smart" shoes. Mr. Jones didn't know that his Pentagon audience also included criminals in another state, who recorded everything he said with his "smart" watch. According to the Federal Times, White House cyber czar Michael Daniel sees the threat as inevitable. "If we thought that doing cybersecurity in a world of wired desktops was hard, now we're going to do it in a world where your coffee maker, your car and your refrigerator are also a threat vector," he said. "That makes the problem just that much more difficult…We want to improve our ability to actually deter those upfront, respond to them when they happen and mitigate any of the effects when they do occur," Daniel said. If agencies are not already focusing on educating their employees about cybersecurity risks, they need to. And if they have not considered mobile device management and endpoint security, they need to.
<urn:uuid:35468640-f281-4fb5-8c4d-935b95bfa13e>
CC-MAIN-2022-40
https://www.dlt.com/blog/2015/02/26/dod-eyes-internet-cases-security-concerns-remain
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00653.warc.gz
en
0.946681
632
2.78125
3
A new project called Gender Shades considers both gender and skin type to measure the accuracy of three commercial face-classification AI systems from IBM, Microsoft, and the Chinese startup Face++. Gender Shades and AI data sets The resulting study shows that these real-world algorithms have significantly lower accuracy when evaluating darker-skinned female faces than any other kind of face. We need to demand greater diversity in the people who build these algorithms, and more transparency about how they work and the technology behind them. Gender Shades is the work of Joy Buolamwini, a researcher at the MIT Media Lab and the founder of the Algorithmic Justice League. She was able to test these major commercial systems by creating a new benchmark face data set rather than a new algorithm. It is a way of targeting bias, which can often remain hidden. The benchmark data set is composed of 1,270 images of people's faces, labeled by gender and skin type. It is the first data set of its kind designed to test gender classifiers while also considering skin tone. The people in the data set are members of the national parliaments of the African nations of Rwanda, Senegal, and South Africa, and of the European nations of Iceland, Finland, and Sweden. The researchers picked these countries because they have the best gender parity in their parliaments, and because members of parliament generally have publicly available pictures that can be used. Facial recognition accuracy with darker- and lighter-skinned faces Overall, the algorithms from IBM, Microsoft, and Face++ report accuracy between 87% and 93%. But these aggregate numbers do not reveal the disparities between light-skinned men, light-skinned women, darker-skinned men, and darker-skinned women. The study found that the algorithms are 8.1% to 20.6% less accurate when identifying female faces than male faces, 11.8% to 19.8% less accurate when identifying darker-skinned faces versus light-skinned faces, and, most strikingly, 20.8% to 34.4% less accurate when recognizing darker-skinned female faces than light-skinned male faces. (A toy sketch of this per-subgroup measurement appears at the end of this article.) IBM had the biggest gap – a 34.4% difference in accuracy when identifying darker-skinned females versus light-skinned males. IBM responded to the research by carrying out a similar study to replicate the results on a new version of its software, which it has yet to release; the company reports far smaller differences in accuracy with the new version and says it has several initiatives in progress to address issues of bias in its algorithms. Microsoft says it is working to improve the accuracy of its systems. Face++ did not respond to the research. The implications of bias in facial recognition systems are especially serious for ethnic minorities as police departments adopt more facial analysis algorithms. The disparity in accuracy for darker-skinned people is an immense risk to civil liberties. When these systems can't recognize darker faces with as much accuracy as lighter faces, there's a higher probability that innocent people will be targeted by law enforcement. This kind of automation enables the same kind of bias that results in police arresting the wrong people.
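The study's core move, reporting accuracy per demographic subgroup instead of one aggregate number, is simple to express in code. Here is a toy Python sketch with fabricated records (this is not the Gender Shades benchmark or its data):

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        hits[group] += int(pred == truth)
    return {group: hits[group] / totals[group] for group in totals}

# Fabricated example records for illustration only
sample = [
    ("darker_female", "male", "female"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]
print(subgroup_accuracy(sample))
# {'darker_female': 0.5, 'lighter_male': 1.0} -- the aggregate (0.75) hides the gap
```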
<urn:uuid:251b4f7e-cb5f-4abc-81ab-f8ebd45a6e01>
CC-MAIN-2022-40
https://areflect.com/2018/02/28/facial-recognition-systems-unfair-in-detecting-gender-shades/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00653.warc.gz
en
0.932623
655
2.578125
3
From caves to lab
Agriculture and livestock played a key role in our evolution as a society. Moving from hunting to farming meant that humans could settle down in one area and invest their resources in cultural, social and technological developments. It was no longer just a matter of survival; it was a matter of wellbeing. Since we stopped being nomads, evolution has been associated with agriculture and livestock production: the more you produced at a lower price, the better. However, that production rhythm is no longer sustainable and its environmental impact is very high. Add the increasing awareness of animal welfare and the many quality issues, and you get researchers in a lab finding solutions. By mid-century, the global protein market will look entirely different from now. On one hand, new sources of proteins such as those from plants (pea, canola or chia), mushrooms or insects will have more prominence. On the other hand, the production of meat at an industrial level will be possible without the need to use animals. Cultured meat, in vitro meat, lab-grown meat, cultivated meat or even synthetic meat… Sounds like a utopia? Let's see what's happening:
Sustainable development: our global purpose
Sustainable development must be considered from the perspective of the triple bottom line: social, environmental and economic factors must all be weighed. Viewed in these terms, cultured meat opens a range of possibilities. We are referring to meat produced by in vitro culture of animal cells.
- Environmental impact: a life cycle assessment (LCA) and techno-economic assessment (TEA) from the Good Food Institute show that by 2030, cultivated meat could have lower overall environmental impacts, a smaller carbon footprint, and be cost-competitive with some forms of conventional meat. However, other voices, such as Margaret Mellon of the Union of Concerned Scientists, point to the energy and fossil fuel needs of large-scale in-vitro meat production in a factory. The results offered by our tool Linknovate allow us to find articles analyzing the cultured meat production system in terms of its environmental footprint.
- Social impact: the rise of the middle class is a reality. As will be discussed at Food 4 Future, the future of food is a contested space, with multiple competing ideas about how the future will evolve. Cultivated meat appears as an opportunity to democratize access to this protein source, covering a demand that grows every day. Companies such as BioTech Foods work to supply the growing global demand for animal protein while addressing the main drawbacks of factory farming: health issues, environmental sustainability, and animal welfare.
- Economic impact: the impact of new companies entering the sector is expected to translate into more universal access to meat. Thus, companies like Future Meat Technologies raised a record-breaking $347 million in Series B funding while reducing the cost of cultivated chicken to $7.70 per pound. This is already close to what is economically viable, and it is far from the €250,000 burger of 2013. According to this McKinsey report, and depending on factors such as consumer acceptance and price, the market for cultivated meat could reach $25 billion by 2030.
If cultured meat is a tool to face the 2030 Agenda, let's meet the companies that are leading this process:
Companies growing meat in the lab
Using our platform, we can easily discover which companies are leading the progress in manufactured food from alternative raw materials.
These companies stand out for their contribution to the development of cultivated meat:
- Future Meat: this Israeli company claims to be able to produce lab-grown meat at an industrial level. It has secured the largest funding round in the laboratory meat sector.
- Biobetter: has developed a tobacco-based platform to enable low-cost mass production of the growth factors needed to create meat in a lab.
- Mosa Meat: this Dutch company is known worldwide for making the first lab-grown hamburger. The $55 million it has raised is part of the process needed to bring its steaks to the table.
- Upside Foods: formerly known as Memphis Meats, it is one of the most promising cultured meat companies.
- BioTech Foods: recently, JBS invested in this first Spanish cell-based company, whose processes are based on tissue growth through in vitro cell culture.
- Eat Just: this company is famous for obtaining approval in Singapore for lab-grown chicken.
As noted in the cited McKinsey report, while most start-ups are focusing first on more popular species and breeds, Eat Just's GOOD Meat and the company Orbillion Bio are exploring other options such as Wagyu, and the company Vow is working to explore more exotic options, such as kangaroo and alpaca. A good way to stay updated is to follow https://www.cellularagriculture.eu/, a coalition of food companies that make meat, poultry, seafood, and ingredients that Europeans love with this innovative process, also known as cellular agriculture.
LAST HURDLES: ETHICS AND BUREAUCRACY
Cultured meat must deal with two additional difficulties in its technological development: ethics and bureaucracy. On one hand, although governments promote the development of this technology at a strategic level, there are not many examples of countries that have approved this type of product. Singapore is the only country that has approved its introduction to the market as of 2022. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) are both working on regulating cultured meat at their own pace. The FDA will probably make advancements sooner, as Europe is very reluctant about the use of genetically modified organisms. On the other hand, if, hypothetically, cultured meat were available in our usual supermarkets, a few issues could still be frowned upon. Despite the clear advantages of using lab-grown meat, it is important to note that its production has been linked to the use of fetal bovine serum, or blood taken from unborn cow fetuses. Also, the by-products generated during production may be harmful and not as environmentally friendly as expected. And what will happen to livestock farmers when this lab meat starts competing with them? In addition, some people are reluctant to consume meat that comes from unnatural sources. In any case, it is a reality that many startups are developing new techniques to solve each of these issues, and this kind of meat will soon be available. Companies like Agrenvec present alternatives to the bioethical issue: using plants (Nicotiana benthamiana), they produce growth factors that may replace animal serum, making an alternative that is vegan in every step of the process. So what would you do? Would you rather eat a hamburger or some cultured meat dumplings?
<urn:uuid:f7d87227-5de0-4c1b-b53e-dbe687f402e0>
CC-MAIN-2022-40
https://blog.linknovate.com/cultured-meat-food-for-future/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00653.warc.gz
en
0.95641
1,388
3.15625
3
Statistics is the science of developing and applying methods for collecting, analyzing, interpreting, and presenting empirical data. Statistics is a highly interdisciplinary field. Research in statistics finds application in essentially all scientific fields, and research questions in those fields motivate the development of new statistical methods and theories. In developing methods and studying the theory that underlies them, statisticians draw on a variety of mathematical and computational tools. There is a lot to understand about statistics, so are you ready to learn something new about it? Several reasons clarify why we need to study statistics: it helps in conducting research effectively, and any researcher needs to know what statistics to use before gathering their data, among many other reasons. This blog covers:
- Definition of statistics
- Types of statistics
- Applications of statistics
- Variance in statistics
- Advantages and disadvantages of variance in statistics
- What is Bayesian statistics?
Definition of Statistics
In simple words, we can say that any raw data, when collected and organized in the form of numbers or tables, is known as statistics. Moreover:
- It makes a set of data more easily understandable.
- It is a branch of mathematics that analyses data and then uses it to solve different types of problems related to the data.
Many people think of statistics as a mathematical science. Statistics helps us read and understand data in a very easy way. Consider a real-life example: suppose you want to find the mean of the marks obtained by each student in a class of 60; that average value is a statistic of the marks obtained. Now, suppose you want to find out how many people are employed in a town of 10 lakh people. We take a survey of 1,000 people, that is, a sample, and based on that we compute the figure, which is the statistic. There are two basic concepts in the field of statistics: uncertainty and variation. There are numerous situations that we encounter in science, or more generally in everyday life, in which the outcome is uncertain. Sometimes the uncertainty exists because the outcome in question has not been determined yet, while in other cases the outcome has already been determined but we do not know it. (Related article: Introduction to Statistical Data Analysis) There are several applications of statistics, such as applied statistics, theoretical statistics, mathematical statistics, machine learning and data mining, statistical computing, and statistics applied to mathematics or the arts. Some places where you can find reliable statistics are Statista, Nation Master, Google Public Data, Gallup, DataMarket, Dyytum, Gapminder, Freebase, SciVerse, and many more. "The number of people who think they understand statistics dangerously dwarfs those who do, and maths can cause fundamental problems when badly used." - Rory Sutherland
Types of Statistics
Now that we have covered the definition of statistics, let's move on to the types of statistics. Statistics makes our work easier and also helps provide a better picture of the work we do in our daily life. (Suggested read: Types of data in statistics) The two types of statistics are descriptive statistics and inferential statistics.
In descriptive statistics, the data are summarized through the given observations. The summarization is done from a sample of the population using parameters such as the mean or standard deviation. Thus, it gives a graphical summary of the data and is used only for summarizing it. Descriptive statistics are applied to data that is already known. It is an approach to organizing, representing, and describing a collection of data using tables, graphs, and summary measures. For instance, the number of people in a city using the internet or television. In simple words, we can say that it is a modest way to describe our data. There are two categories under descriptive statistics:
The Measure of Central Tendency
The measure of central tendency is also known as summary statistics. It is used to indicate the center point or typical value of a sample or data set. There are three popular measures of central tendency.
- The first is the mean, which is the average of all values in a sample set.
- The second is the median, for which the data set is ordered from lowest to highest value and the actual middle value is then found.
- The third is the mode, which is the value repeated most often in the data set.
(Recommended Read: Types of Statistical Analysis)
The Measure of Variability
The measure of variability is also known as the measure of dispersion. It is used to describe variability in a sample or population. There are three popular measures of variability.
- The first is the range, which in simple language means the maximum value minus the minimum value.
- The second is the variance, which measures how much a random variable differs from its expected value.
- The third is the standard deviation, a measure of the dispersion of a set of data around its mean.
Types of Statistics: Descriptive and Inferential Statistics
In inferential statistics, predictions are made from a collection of data in which you are interested. It can be characterized as taking a random sample of data from a population in order to describe and make inferences about the population. Any collection of data that includes all of the data you are interested in is known as the population. Basically, it permits you to make predictions by taking a small sample rather than working with the whole population. Accordingly, in simple words, we can say that it is the type of statistics used to explain the significance of descriptive statistics: once the data has been collected, analyzed, and summarized, we use these details to describe the meaning of the collected data. Several types of inferential statistics are widely used and are fairly simple to interpret: the one-sample test of difference (one-sample hypothesis test), contingency tables and the chi-square statistic, the t-test or ANOVA, bivariate regression, confidence intervals, and more. (Also read: Statistical Data Analysis Techniques)
Variance in Statistics
Variance in statistics is an estimate of the spread between numbers in a data set. That is, it quantifies how far each number in the set is from the mean, and therefore from every other number in the set.
Variance is calculated by taking the differences between each number in the data set and the mean, then squaring the differences to make them positive, and finally dividing the sum of the squares by the number of values in the data set. It is one of the key parameters in asset allocation, alongside correlation. Computing the variance of asset returns helps investors build better portfolios by optimizing the return-volatility trade-off in each of their investments.
Advantages and Disadvantages of Variance
Analysts use variance to see how individual numbers relate to one another within a data set, rather than using broader techniques such as arranging numbers into quartiles. Variance treats all deviations from the mean the same regardless of their direction, and the squared deviations cannot sum to zero and give the appearance of no variability at all in the data. A disadvantage of variance is that it gives added weight to outliers, the numbers that are far from the mean; squaring these numbers can skew the data. Variance is also not easily interpreted, so its users often work mainly with its square root, which gives the standard deviation of the data set.
What is Bayesian statistics?
Nowadays, the term 'Bayesian statistics' gets thrown around a lot. Also:
- It is used everywhere: in social situations, games, and ordinary life, from baseball to weather prediction and forecasting, presidential election polls, and much more.
- It is used in most scientific fields to determine the results of an experiment, whether that be particle physics or drug efficacy.
- It is used in machine learning and artificial intelligence to predict which news report you want to see or which Netflix show to watch.
Bayesian statistics is a particular approach to applying probability to statistical problems. It gives us mathematical tools to update our beliefs about random events in light of new data or evidence about those events. In particular, Bayesian inference interprets probability as a measure of believability or confidence that an individual may have about the occurrence of a particular event. Even more:
- Bayesian statistics gives us strong mathematical methods for combining our prior beliefs and evidence to produce new posterior beliefs.
- Bayesian statistics furnishes us with mathematical tools to rationally update our subjective beliefs in light of new data or evidence.
(Also read: Statistics for Data Science)
In conclusion, statistics is important: you may have noticed that writers usually add statistics to make their points stronger and more credible. It does the job completely and provides an understandable picture of the tasks we perform regularly. Statistical techniques help us investigate various areas, for example, medicine, business, economics, social science, and many more. (Check also: Crash Course for Statistics) Statistics furnishes us with various types of organized data with the assistance of charts, tables, outlines, and diagrams. We hope this detailed discussion on statistics helps you understand the topic better and clears up your doubts about statistics.
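As a minimal illustration of the measures described above, here is a sketch using Python's built-in statistics module; the sample data set and the Bayesian prior/likelihood values are invented purely for demonstration.

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # invented sample data

# Measures of central tendency
print("mean:  ", statistics.mean(data))    # average of all values
print("median:", statistics.median(data))  # middle of the sorted values
print("mode:  ", statistics.mode(data))    # most frequently repeated value

# Measures of variability
print("range: ", max(data) - min(data))       # max minus min
print("var:   ", statistics.pvariance(data))  # mean squared deviation
print("stdev: ", statistics.pstdev(data))     # square root of the variance

# A one-step Bayesian update: posterior = likelihood * prior / evidence.
prior = 0.5            # assumed initial belief that hypothesis H is true
p_e_given_h = 0.8      # assumed probability of the evidence if H is true
p_e_given_not_h = 0.3  # assumed probability of the evidence otherwise
evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / evidence
print("posterior:", round(posterior, 3))  # updated belief after evidence
```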
<urn:uuid:a7aadb6b-cd2f-4b97-b4f8-9bc1e9e9d680>
CC-MAIN-2022-40
https://www.analyticssteps.com/blogs/what-statistics-types-variance-bayesian-statistics
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00653.warc.gz
en
0.929106
2,036
3.1875
3
What are Industrial Switches?
Industrial networking switches are computer networking devices aimed at establishing interconnectivity between industrial devices such as PLCs, HVAC equipment, security cameras, and control room computers or servers. These types of switches are built for mission-critical applications and are designed to withstand extreme temperatures and harsh conditions in a multitude of environments, from factory floors to traffic controls. Typically they feature compact panel mount or DIN rail mount packages with convection cooling rather than fans, and low-voltage DC rather than AC power.
Managed Industrial Switches
Managed switches are oriented towards redundancy, offering real-time targeted communications through built-in protocols that enhance determinism and provide a stable and consistent flow of data. These switches provide QoS (Quality of Service), which can prioritize bandwidth for data subsets, allowing more bandwidth to be allocated through the network to ensure IP data comes in smoothly, obtaining sensor data without interruption while using minimal bandwidth. Managed switches also provide additional protocols like RSTP (Rapid Spanning Tree Protocol), allowing alternate cabling paths to prevent loop situations, which are usually responsible for network malfunctions.
Benefits of Managed Switches
These switches stand out for a vast array of features such as PoE capabilities, extended temperature ranges, DC power, and additional flexible SFP slots with DIN rail mounting options. In short, they add value across a range of deployments. In applications such as an IP security camera backhaul, it is neither practical nor economical to deploy a hub-and-spoke topology in which each node has a direct connection to the core aggregation node. With a ring network topology, it is possible to construct a loop and avoid additional fiber trenching costs.
When and Where to Use Industrial Managed Switches?
Managed switches are suitable for networks that have applications with fast response time requirements, at companies that need to allow engineers to achieve optimal, reliable network performance and maintenance by managing and troubleshooting networks remotely and securely. These switches are robust and appropriate for industrial network settings, made to stand up to harsh conditions like extreme temperatures (-40 to +75°C), vibration and shock, while contributing to a cost-effective, reliable, and secure network. Managed switches should be used on any network backbone switch so that segments of network traffic can be monitored and controlled.
Black Box Industrial Network Switches
Black Box Industrial Network Switches are designed for the far edges of an industrial network, delivering an easy and cost-effective way of integrating legacy networks and eliminating the need for additional configurations and modifications, which reduces deployment time. These switches allow redundancy in mission-critical systems by rerouting traffic over a backup link within milliseconds, overcoming situations like an interruption of traffic control system communication in some infrastructures. Additionally, they are characterized by DIN rail mounting, an effective option that uses a metal rail of a standard type widely used for mounting circuit breakers and industrial control equipment within equipment racks. Our customers are very satisfied with the Industrial Gigabit Ethernet Switches.
Uses include strengthening substations' network connections, effectively increasing electric energy distribution, and monitoring and controlling security camera functions in the field. These switches are extremely easy to install; however, if you should need help, our specialized technical support staff can advise you on individually optimized applications, completely free of charge. Learn more about our Tech Support Center by visiting our Black Box Tech Support Center page. To find out more about our industrial switch solutions, please visit our online store.
<urn:uuid:a02604f0-76e3-41e5-8bfe-5062c3dacfaf>
CC-MAIN-2022-40
https://www.blackbox.com/en-se/insights/blogs/detail/technology/2016/08/26/the-benefits-of-choosing-managed-ethernet-switches
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00653.warc.gz
en
0.918374
718
2.703125
3
Go is an open-source programming language used for general purposes. Go was developed by Google engineers to create dependable and efficient software. Modeled most closely on C, Go is statically typed and explicit. The language was designed by taking inspiration from Python's productivity and relative simplicity, combined with the performance of C. Some problems that Go addresses are slow build times, uncontrolled dependencies, effort duplication, difficulty writing automatic tools, and cross-language development. Go works using 'goroutines', or lightweight processes, which allow additional efficiencies. Go uses a collection of packages for efficient dependency management. Some examples of organizations that use Go include Google, Cloudflare, Dropbox, MongoDB, Netflix, SoundCloud, Twitch, and Uber.
<urn:uuid:09240925-19cd-4a69-87ea-c8149e8e4f11>
CC-MAIN-2022-40
https://www.contrastsecurity.com/glossary/go-language?hsLang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00653.warc.gz
en
0.920939
151
3.421875
3
The attitudes and perceptions around the Internet of Things (IoT) span a wide array of views, from the extreme excitement of consumers and manufacturers to the concerns (bordering on paranoia) of privacy and security professionals. Some see the Internet of Things as the promise of ubiquitous connectivity across all aspects of our lives. Others view it as another in a series of uber technology trends off of which to sell new products and services. Many view the IoT movement as nothing more than marketing fluff to stir up consumer interest in new (and sometimes puzzling) networking capabilities. There is one aspect of the Internet of Things that all of these various factions can agree upon: the long-term success of the trend, and of many of the new gadgets it introduces, will depend largely on the community's ability to deliver a secure platform for the IoT. This reality has many asking the question, "What does a secure network for the IoT look like?" While requirements will vary, there are some common elements that we will explore here.
The two sides of the IoT security coin
There is a two-fold dilemma when it comes to IoT security and securing a wide array of newly networked devices. The first side of this coin is the protection of things, i.e., the risk of the device becoming compromised. Here we have to worry about factors such as a huge increase in unknown vulnerabilities within these devices. Often, for manufacturers of these "things", security takes a back seat to creating buzz around networked capabilities, and to selling. These manufacturers will often have relatively immature software development life cycles, particularly in the area of security. The other side of the IoT security coin is protection from things: the "thing" as the perpetrator of the attack. We see more and more theoretical stories about the rise of "thingbots" and massive systems of compromised devices (some perhaps more realistic than others). These things will often be hard to identify granularly, as they typically share IP addresses and have unfamiliar operating systems. Their rapid proliferation and deployment onto the network, along with the above-mentioned vulnerabilities, suggest many may in fact be prone to compromises that can be exploited by bot herders.
Steps for manufacturers to improve the security of "things"
Consumer technology manufacturers can hardly be blamed for getting caught up in the excitement of the IoT. According to research from the Acquity Group, two out of three consumers expect to have at least one piece of "connected technology" in their homes by 2019. We have all seen the rise of these devices and their many displays in retail stores. In fact, those smart thermostats that draw us in are expected to reach 43% adoption in the next five years, according to that same study. Two things with regard to IoT security remain very unclear. The first is the extent to which delivering a secure network is a critical requirement for consumer adoption of increasingly network-enabled devices. The second is the lengths manufacturers are prepared to go to ensure the security of their devices. Clearly, the IoT movement has many non-technology (or at least non-security) focused companies jumping into the networked world. These organizations need to ensure that they are adopting stringent security practices into product development as part of a Software Development Life Cycle. Applications and new code need to be put through a Quality Assurance (QA) process that tests for security vulnerabilities.
This vulnerability management process also needs to be applied not only to the development of the devices themselves, but also to any networked components, such as databases or systems in the cloud serving and storing data from these devices. Some manufacturers are taking additional steps. In a previous article, we mentioned some vulnerabilities of Tesla cars that have been found and very publicly exposed. What we did not discuss are the steps Tesla has been taking to find any and all possible vulnerabilities within their product. This past June, Tesla formalized its already active and ongoing work with security researchers by launching "Bug Crowd," a bug bounty program initially paying out $1,000 for vetted and validated vulnerabilities presented by independent researchers. In August, it increased the potential payout to $10,000, partly in response to the high-profile vulnerability exposed at the DEF CON conference earlier that month. Some call this mostly a PR move, but it represents an increasingly popular means of leveraging the vast and sometimes mysterious security research community. Manufacturers who elect not to stimulate white hats (or more likely gray hats) to identify vulnerabilities might want to consider the fact that others may already be organizing efforts for them, just with less clear (but likely less altruistic) intentions. More recently, Zerodium announced a $1 million bounty for a fully executable exploit for iOS 9, highlighting the extreme value of vulnerabilities for commonly used devices, a fact that should also not be lost on those less concerned about protecting devices and more concerned about protecting their organizations from IoT devices.
Critical steps in protecting organizations from IoT risks
The variety of new risks and threats posed by a future with billions of networked devices is too numerous to cover comprehensively here, but let's take a look at a couple of clear requirements. With the advent of billions of non-traditional IT devices, accurate device identification will simultaneously become more important and more difficult. The primary tool that has long been used for device and user identification, namely IP addresses, is rapidly declining in its security value. Security teams also need to prepare to match the level of automation now clearly seen on the attack side of the ledger. The idea of "script kiddies" and dark rooms filled with hackers in sweatshirts manually coding attacks against individual adversaries is an antiquated notion. We know from the frequency, polymorphic nature, and rapid response of today's attacks that we face an increasingly automated flow. This flow is one that cannot reasonably be defended against by a team of security professionals (regardless of their capabilities) reacting and manually implementing protective measures. Protection from new, previously unseen attack sources demands investment in automated means of attack detection and coordinated mitigation.
Don't count on regulations to protect anyone
This past spring, the Federal Trade Commission released a report on the pros and cons, benefits and risks of the IoT, based on the results of a workshop held last November where experts gathered to discuss this fast-moving technology trend. Quite correctly, the workshop and the report give considerable focus to the issue of security concerns related to the IoT.
In doing so, the report provides a welcome and needed step forward in both identifying and starting the process of addressing security concerns around a wave of additional devices becoming network-enabled. The report puts tremendous focus on consumers and consumer data protection, privacy, et cetera. Obviously, there are key issues that will need clear security solutions to ensure the protection of Personally Identifiable Information (PII), both to facilitate adoption and to ultimately protect users. However, the report does not go far enough in addressing several other security issues. It also falls short of effectively defining the landscape of "IoT industries," and the overall discussion suggests a narrow view of those needing to participate in the conversation. By no means is this an exhaustive list of the areas that many industries will need to tackle to ensure a secure and safe IoT, but it represents a few particular areas where we see a lack of focus thus far:
- Employee safety: both the opportunities and the challenges of IoT clearly extend into employee work safety (e.g., production facilities and industrial control concerns).
- Critical infrastructure protection: industries such as logistics, mass transit, and electrical and heating utilities are all rushing in IoT technologies.
- Law enforcement and military applications: of course, in the world of drones, one can clearly see where IoT becomes more relevant to these important areas of national commerce and safety.
Don't forget about availability
An important consideration for organizations looking to create a secure network to support IoT adoption is availability. For many years, and for a wide variety of reasons, availability has traditionally taken a back seat to the other elements of the CIA triad (confidentiality, integrity, and availability). One major reason for this is the pressure on organizations to protect consumer data. Most industry-driven security compliance initiatives (such as the Payment Card Industry Data Security Standard, PCI-DSS) similarly center on data confidentiality and integrity. In the past, losing network or application availability has mainly impacted revenue or productivity, and has been viewed as a less severe issue than an actual data breach. However, the movement of organizations to IoT principles means a greater dependency on the network to maintain increasingly critical operations, making availability more and more important and a serious security issue. Regardless of your organization's interests around the IoT, be they monetization of the demand for more and more connected devices, or simply preparation for the impact on the threat landscape, the time has arrived to start taking proactive steps to ensure security. In the end, the full vision of the IoT may or may not come to pass. It may take longer than some predict. What is undeniable is that we are in the midst of an explosion of connectivity, and while the average consumer may not be aware of IoT concepts, they will have expectations with regard to security. Similarly, they will be by and large clueless about the potential impact they (and their new gadgets) have on the threat landscape, and thus cannot be relied upon to maintain security capabilities on these devices. As a result, the burden of protecting organizations from the possible wave of new, larger threats falls to the security operations teams.
<urn:uuid:953de7bb-9b17-4815-a449-adfd0389ddda>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2015/10/29/creating-a-secure-network-for-the-internet-of-things/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00653.warc.gz
en
0.950682
2,076
2.609375
3
Windows Holographic is a Microsoft platform that allows for the combination of real-world elements, holographic computing, and virtual reality. The expansion of this "mixed reality" platform, which is accessible through Windows 10, was announced in late May of 2016 by Microsoft Vice President Terry Myerson at Computex in Taiwan and on the Windows Experience Blog. Myerson invited all of Microsoft's original equipment manufacturers, original design manufacturers, and hardware partners to build PCs, displays, accessories, and mixed reality devices to run on the Windows Holographic platform. This creates new business opportunities, unlocking mixed reality experiences across devices, and for apps as well. Windows Holographic apps can be written today with the confidence that they will run on the broadest set of devices. "Mixed reality" involves creating and using devices that, like us, perceive the physical world and react in accordance with their programming. Myerson described it this way: "Imagine wearing a VR device and seeing your physical hands as you manipulate an object, working on the scanned 3D image of a real object, or bringing in a holographic representation of another person into your virtual world so you can collaborate." Actual holograms like those we see in Iron Man are virtually impossible to create with current technology. Light needs a surface on which it can be projected, and air doesn't cut it. By wearing these devices, the light projects onto the glass and reacts to real-world movements. The devices map out your environment, and digital content can be manipulated as easily as physical objects. People can teleport via holographic representations or travel together as a team. The possibilities are endless! And they'll be ready for mass production: approximately 80 million devices are expected to be made by the year 2020.
<urn:uuid:0f86bf45-53d2-4348-b44a-d7a55b19ea30>
CC-MAIN-2022-40
https://ddsecurity.com/2016/07/14/windows-holographic-introduces-mixed-reality/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00653.warc.gz
en
0.936429
369
2.828125
3
The healthcare sector is progressing rapidly, expanding both its reach and its challenges. With an increased patient-doctor ratio, organizations must find a way to tackle the chaotic situation: a better management tool to handle the workload efficiently. Primitive book-keeping provides no scope for rapidly scanning and locating a particular patient's record; this results in delayed attention to the patient, even in severe cases. With broad adoption of the latest technology in the medical field, it is time that organizations enhance the overall healthcare system. Therefore, healthcare firms should embrace newer technologies that help facilitate better and faster resolutions to patients' problems while extending a scientifically advanced atmosphere. Using Big Data and analytics helps organizations achieve these goals. These are the significant ways in which Big Data can help the healthcare sector flourish:
Patient Health Tracking
It often happens that doctors want to analyze a patient's health history before exploring anything new, but, due to disorganized record-keeping, patients themselves are unable to furnish the health-related documents accumulated over the years. Big Data easily tracks the entire history of the patient's health, including all minor and major operations undergone; it has revolutionized the whole paradigm by introducing statistical analysis that predicts and warns about possible future occurrences. The Internet of Things, aided by Big Data, is a further leap in this revolution. From tracking heartbeats and sugar levels to breathing patterns and distance walked, smart wearables help provide more transparent data that can serve as a basis for medical assistance. Creating a unified database containing citizens' health histories would enable health systems to fetch data in seconds, saving crucial time and human resources. With a patient's data a few clicks away, healthcare firms can obtain the entire history of the patient in seconds, making it easy for both patients and doctors. Apart from saving time, this leads to reduced cost: the hands needed to keep the manual records, the data carrier, the data traveler, and the data analyst would all otherwise be required to put in their working hours. Big Data eliminates these mediating costs as well as the consumed time, resulting in a more efficient healthcare environment. Digitized data and statistical representation not only help analyze the current situation but also assist in making predictions; this gives the healthcare sector an edge over the potential course of certain diseases. The pattern of a disease will help doctors make plans for the patient in advance, which is certainly rewarding in situations where time is everything for the patient. Doctors can operate with better insights concerning the health state of a patient in a customized healthcare strategy. It is known that human error is bound to take place, no matter how much care is put into working with data. The calculations, the sorting, and the interpretive analysis all require precise attention. With increased workloads (or even otherwise) workers may commit errors. Big Data reduces this error rate by employing scientifically and mathematically correct methods, equally robust every time they are applied. Big Data can also be used to sort out unrelated prescriptions added faultily to a patient's record. So, Big Data can take care not just of avoiding errors but also of rectifying them.
Adoption of Big Data in the healthcare sector is not only a problem-solving tool but also a way of growing the operation. What use will expensive equipment and the latest medicines have if they don't have a compatible platform on which to perform? A progressive environment consists of forces that work in cohesion, leading to optimum output, all within the framework of efficient operation. An environment which readily embraces other advancements will show no progress if all improvements are not adequately attended to. Big Data not only eases healthcare procedures but also helps in the advancement of the infrastructure as a whole. Predicting possible disease levels, analyzing and representing data statistically, reducing the doctor-patient gap, and cutting down costs and time are all signs of progressive development. Without the help of Big Data, the healthcare sector would possibly never achieve this goal. There are challenges in implementing Big Data fully in the healthcare sector, but there won't be achievements without starting the process. To fully utilize the wealth of scientific intelligence, using Big Data seems unavoidable. If implementing it introduces so many positives to the sector, then why not apply it?
<urn:uuid:32ad462a-1ff0-4de0-af0e-57aaf19e82b3>
CC-MAIN-2022-40
https://www.idexcel.com/blog/tag/big-data-in-healthcare/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00053.warc.gz
en
0.92221
934
3
3
Machine learning is one of the most prevalent terms in the world of technology now. At its very core, a machine learning algorithm is given a 'learning set' of data and is then asked to use that data to answer a question. For example, you could give a computer a teaching set of photographs, some of which say, 'This is an airplane' and some of which say, 'This is not an airplane'. Thereafter, you can show the computer a series of new photos and it will begin to detect which photos are airplanes. Each photo that it detects, correctly or incorrectly, is added to the set of lessons, and the program effectively becomes 'smarter' and gets better at completing its work over time. It is practically learning; this is how the computer becomes intelligent and how artificial intelligence comes into existence. In this blog, we will look at the top 4 use cases of machine learning and AI.
1. Omnichannel Marketing Personalization
The more you understand about your customers, the better you can serve them with contextual and personalized offerings. You have probably had an experience where you visited an online store and saw a product but didn't buy it, and then watched digital ads on the web for that exact product for several days. This is the omnichannel experience, and omnichannel personalization is just the tip of the iceberg. The emails customers receive, any direct mailings or coupons, and the products which appear as 'recommended' are all designed to drive customers to sales more reliably. Omnichannel personalization has become a cakewalk for brands with the recent developments in machine learning and AI. Data collected, and communication taking place, through chatbots powered by AI and ML are transforming the way brands offer contextual, relevant, omnichannel experiences.
2. Fraud Detection
Machine learning is getting better at identifying potential fraud cases across different fields. PayPal is using machine learning to fight money laundering. The company has tools that compare several million transactions and can precisely distinguish between legitimate and fraudulent transactions between buyer and seller. Machine learning is transforming the way automation is implemented in the industry. With AI, huge amounts of data are collected, disseminated, and analysed so that computers can learn and produce results that target any leakage and hence prevent fraud. Big Data, AI, and ML are transforming the way brands identify fraud and close present loopholes or prevent future emergencies.
3. Natural Language Processing (NLP)
NLP is used in all types of exciting applications across disciplines. Machine learning algorithms can work with natural language on behalf of customer service agents and direct customers to the information they need. It is used to translate obscure legalities in contracts into clear language and to help lawyers scan through large amounts of information to prepare for cases. Similarly, a machine trained to handle information of any kind, for industries of all kinds, has the potential to transform the way brands offer their customers a truly 1:1 personalized experience. With recent developments around NLP, customers are finding the human face of AI pleasing.
4. Smart Devices
According to a survey conducted by IBM, smart cars are expected to hit the road by 2025. Smart cars will not only integrate with the Internet of Things but will also recognize their owners and environment. A smart car may adjust its internal settings (temperature, sound, seat position, etc.)
based on the driver, report and solve problems by itself, drive itself, and provide real-time advice on traffic and road conditions. With 50+ connected features, the MG Hector is one of India's first cars to offer connectivity on the go. And the brain behind it is the revolutionary iSMART Next Gen technology. It combines hardware, software, connectivity, services, and applications to make your driving experience easier, smoother, and smarter. Amazon's Alexa, Apple's Siri, and IBM's Watson are transforming the way users experience artificial intelligence and ML in their daily lives. AI and ML are here to stay and be a part of our daily lives. Espire has been seamlessly transforming the customer experience of our clients through our robust customer engagement hub with foolproof DXM and WCM solutions. We have successfully developed new-age chatbots for our clients powered by exceptional AI and ML capabilities. To move towards machine-learning-led AI solutions, drop us an email at [email protected]
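As a toy illustration of the 'learning set' idea described at the start of this piece (the airplane photo example), here is a hedged Python sketch of a classifier that keeps adding newly labeled examples to its lessons; the two-number 'photos' and all labels are invented for demonstration and stand in for real image features.

```python
def centroid(points):
    """Average a list of feature vectors (our stand-in for photos)."""
    return [sum(xs) / len(points) for xs in zip(*points)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Initial 'learning set': invented 2-number features for labeled photos.
lessons = {
    "airplane": [[9.0, 8.5], [8.0, 9.0]],
    "not airplane": [[1.0, 2.0], [2.0, 1.5]],
}

def classify(photo):
    """Guess the label whose examples' centroid is closest to the photo."""
    return min(lessons, key=lambda label: distance(photo, centroid(lessons[label])))

# New photos arrive; each one, with its true label, joins the lessons,
# so the program effectively becomes 'smarter' over time.
new_photos = [([8.5, 7.5], "airplane"), ([1.5, 1.0], "not airplane")]
for photo, true_label in new_photos:
    guess = classify(photo)
    print(f"photo {photo}: guessed {guess!r}, actually {true_label!r}")
    lessons[true_label].append(photo)  # add to the set of lessons
```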
<urn:uuid:79f502ce-6906-479a-8a93-9c2bcc5539a0>
CC-MAIN-2022-40
https://www.espire.com/blog/posts/top-4-ai-and-machine-learning-use-cases-to-boost-business-returns
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00053.warc.gz
en
0.938045
1,480
2.625
3
The FCC and the Transportation Department are working to establish a transition path to the next generation of 911 services. To date, Next Generation 911 is a goal more than a standard: a collaborative effort by the Federal Communications Commission, the Transportation Department's National Highway Traffic Safety Administration, and the Commerce Department's National Telecommunications and Information Administration. Congress in 2004 authorized the NHTSA and the NTIA to administer a grant program for Public Safety Answering Points (PSAPs) to help upgrade equipment and to oversee research and development to define system architecture and transition plans to a digital, IP-based system. "The Next Generation 911 Initiative has established the foundation for public emergency communications services in a wireless mobile society," the Transportation Department says of the program. Although DOT calls the national 911 system an unqualified success, it is showing its age at 40 years and must evolve to accommodate mobile wireless and voice over IP technology. "The spread of highly mobile, dynamic communications requires capabilities that do not exist today," including connection with a wide variety of technologies and the ability to accurately locate the call. The initiative has developed an NG911 engineering architecture for connections to new technologies, developed emergency call center receiving software, and established testing programs implemented in a number of PSAPs in Washington State, Montana, New York, Minnesota, and Indiana. Systems to provide an IP infrastructure for 911 services are being deployed in some areas, and the FCC has outlined a five-step plan to help the public safety community take advantage of a new generation of services that can incorporate text, video, and data as well as traditional voice calls.
1. Develop location accuracy mechanisms for NG911: The FCC has launched the development of a framework for providing automatic location information in the NG911 environment.
2. Enable consumers to send text, photos, and videos to PSAPs: The FCC has released a notice of proposed rulemaking to accelerate NG911 adoption, addressing practical, technical questions about how to enable text, photo, and video transmission to 911, including how to deliver the bandwidth PSAPs will need.
3. Facilitate the completion and implementation of NG911 technical standards: For NG911 to be effective, technical standards for the hardware and software are needed. The FCC is working with stakeholders to resolve standards issues and facilitate implementation of a standards-based architecture.
4. Develop an NG911 governance framework: Because no single governing entity has jurisdiction over NG911, the FCC will work with state, federal, and other governing entities to develop a coordinated approach to NG911 governance.
5. Develop an NG911 funding model: To assist 911 authorities and Congress in considering funding options, the FCC's Public Safety and Homeland Security Bureau will prepare a cost model focused on the cost-effectiveness of the NG911 network infrastructure.
<urn:uuid:e8582f9e-7082-4bea-ab76-40863be1c5ff>
CC-MAIN-2022-40
https://gcn.com/emerging-tech/2012/07/unified-comm-is-the-next-step-in-911s-evolution/280865/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00053.warc.gz
en
0.906251
588
2.65625
3
It's never easy to set up an FTP server the moment firewalls get involved, and it gets even more difficult once you start using the secure version of FTP, known as FTPS. In this post, we'll talk about the problem you'll usually encounter when your FTPS server is behind a firewall and your client is attempting to perform a file transfer using passive mode, or PASV. Let me explain. When an FTP/S client wants to conduct a data transfer using passive mode, it issues the PASV command. Upon receiving that command, the FTP/S server responds with the server's IP address and the port number to which it wants the client to connect.
Note: The passive port number calculates to: (192 x 256) + 25 = 49177
This shouldn't be a problem for direct connections. But once you have a firewall or NAT router in between, things can get pretty messy. So, let's say we have an FTPS server sitting behind a firewall. Basically, the FTPS server is in an internal network and has an internal IP address assigned to it. The client, which is outside the internal network, is connecting to the FTPS server via the firewall's external IP address. We'll be using the term 'firewall', but this kind of situation applies to NAT routers, reverse proxies, and other routing devices as well. So now, when the FTPS server responds to the PASV command, its response will specify the FTPS server's internal IP address and the port number it will be listening on. What then happens is that when the client, in turn, attempts to connect, it will attempt to connect to that internal IP address. Since the client does not belong to the internal network, it will naturally fail to connect and eventually time out. In addition, if the port number specified in the response has not been opened on the firewall or routing device, that would also cause the connection to fail. Of course, modern firewalls, NAT routers, reverse proxies, and other routing devices are smart enough to address this particular situation. Once they have identified the conversation taking place as FTP and are able to detect the PASV command, they simply assume that the client is going to be connecting back to the FTP server through another port and IP address. They then dynamically open that port to the FTP server in anticipation of the request from the client software. They also modify the response packet to instruct the client software to connect back at the external IP address, not the FTP server's internal IP address. Once they receive the client's request, they then simply make the necessary substitutions. However, when the packets are encrypted with TLS, as in the case of FTPS, the firewall can't examine the packets and so will have difficulty determining which ports to open and which IP addresses to substitute. This is what you do to resolve the problem: in your FTPS server, specify a passive IP address and a passive port range. These settings are going to be used when responding to PASV client requests. The passive IP address should be the external IP address of your firewall, NAT, reverse proxy, or other routing device. The passive port range, on the other hand, should be the range of ports you want the FTPS server to be listening on. For this to work, that range of ports should likewise be opened on your firewall. To configure this on JSCAPE MFT Server, do the following.
Log in to the JSCAPE MFT Server Manager, navigate into the domain on which your FTPS service is running, then go to the Services module and navigate into the FTP/S tab. Specify the external IP address in the Passive IP field. Tick the "Passive port range" check box and specify a passive port range. Once you're done, click Apply. That's it. Your internal FTPS server will now be ready to respond to PASV mode data transfers. Try this out yourself: download the free, fully-functional evaluation edition of JSCAPE MFT Server.
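To make the PASV arithmetic concrete, here is a small Python sketch that parses a passive-mode response and recovers the data port the same way the note above does ((192 x 256) + 25 = 49177). The exact reply string below is invented for illustration, but the format follows the FTP convention of six comma-separated numbers.

```python
import re

def parse_pasv(response):
    """Extract (host, port) from a '227 Entering Passive Mode' reply.

    The reply carries six numbers (h1,h2,h3,h4,p1,p2): the first four
    are the IP address octets, and the data port is p1 * 256 + p2.
    """
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", response)
    if not match:
        raise ValueError("not a PASV response: " + response)
    nums = [int(n) for n in match.groups()]
    host = ".".join(str(n) for n in nums[:4])
    port = nums[4] * 256 + nums[5]
    return host, port

# Invented example reply; p1=192 and p2=25 reproduce the note's math.
reply = "227 Entering Passive Mode (10,0,0,15,192,25)"
host, port = parse_pasv(reply)
print(host, port)  # -> 10.0.0.15 49177
```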
<urn:uuid:6800fc28-dcf8-458d-919c-cbb2d64257e3>
CC-MAIN-2022-40
https://www.jscape.com/blog/setting-up-an-ftps-server-behind-a-firewall-or-nat-for-pasv-mode-data-transfers
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00053.warc.gz
en
0.916377
880
2.53125
3
So you’ve worked hard to develop, implement, and continually improve your organization’s cyber security program. You’ve been successful in obtaining increases in cyber spending that have allowed you to purchase and deploy the most modern security technology. You’re feeling confident and optimistic about the outcome of a recently completed cyber risk assessment, and then you see the report… What are all these risks? How can there still be risk when you’ve thoughtfully implemented the strongest controls available? Don’t worry! You are not alone! Let’s take a look at some of the misconceptions that can lead to this confusion and frustration. Misconception #1: All Risk Is Bad Those of us in the Risk Management industry tend to think of risk only in a negative light. In reality, risk is often quite necessary to allow for innovation, progress, and organizational success. Imagine an educational institution that decides to take every conceivable step to remove all risk from their IT environment. “The internet presents risks – shut down access!” “Data sharing via removable media presents risks – block all storage devices!” “Mobile computing presents risks – take back all laptops!” You can see how attempting to completely eliminate risk could be quite impractical and detrimental to achieving organizational goals. Mature cyber risk management programs will identify unacceptable and acceptable risk, rather than focusing only on the elimination of all risks. Misconception #2: Risk Can Be Eliminated It is tempting to believe that risk can be eliminated through the implementation of strong controls. In reality, there is no way to completely eliminate risk, and as I pointed out above, that’s OK. We couldn’t eradicate risk even if we were willing to suffer the negative consequences. There are several factors that contribute to risk which are important to understand. Here are some basic definitions that will facilitate this understanding: - A threat is any circumstance or event with the potential to do harm or have an adverse impact. - A vulnerability is a weakness that could be exploited by a threat source. - A risk represents the potential for loss or damage when a threat exploits a vulnerability. Risk is often expressed as a function of the likelihood of a threat event’s occurrence and the potential adverse impact should the event occur. - Inherent risk represents the amount of risk that exists in the absence of controls. - Residual risk is the amount of risk that remains after controls are accounted for. OK, so what do all these definitions really tell us? Here are a couple of examples. In order to eliminate the risks related to earthquakes, you would need the power to control the movement of tectonic plates. In order to remove all risk related to a state-sponsored hacker, you would need to be able to persuade them that hacking is bad, or… eliminate them altogether. Of course, I’m being a bit facetious, but I hope I’ve illustrated the point. There are risks that cannot be removed without the power to eliminate associated threats and threat actors. Misconception #3: Performance Assessment = Risk Assessment The objective of many security assessments is to identify the degree to which controls are in place, operating as intended, and producing the desired results. This type of assessment is particularly good at identifying areas of non-compliance with applicable standards and policies. However, if the assessment stops there, it is missing a very important element – risk. Let’s look at an example.
During the course of a security assessment it is determined that a healthcare organization has implemented robust malware detection technology to identify known and unknown attacks. The anti-malware tools are updated with new signatures and behavioral heuristics in real-time, and sensors are placed throughout the organization’s external-facing and internal network. This sounds like a pretty strong control implementation. The organization might assume that they have a fairly low level of malware-related risk and choose to take no additional actions. But what happens when we consider other factors? The healthcare industry creates, processes, transmits, and stores vast amounts of protected health information (PHI). PHI is one of the most valuable data types on the black market and is therefore the target of intense and frequent hacking attempts by well-funded, highly capable malicious actors. To get a better understanding of risk we should take into consideration factors such as the capability, determination, and motivation of potential attackers, as well as the frequency and impact of successful attacks. These characteristics lead us to an estimation of inherent risk. In our example, the inherent risk is likely quite high. Considering this high level of inherent risk, we may determine that a medium level of residual risk remains, despite the strength of the anti-malware control implementation. This presents a potential conundrum. You might be thinking, “The healthcare organization in your example has done everything they can. How are they supposed to respond when they are told that they are still at risk?” There are a few things that an organization in this situation may choose to do. In our example, the organization may: - Place additional monitoring and alerting functionality around their standard anti-malware control implementation, - Increase the ingestion of threat intelligence information related to malware attacks, - Increase staffing for SOC analyst positions, - Require SOC analysts to attend additional training on how to identify and respond to the latest malware attacks, or - Take no action, which is a perfectly acceptable response when all other reasonable steps have been taken. In conclusion, risk is a constant. One of the primary tasks of cyber risk management professionals is to determine how best to respond to risk. Effective risk management requires us to recognize that some risks are not only necessary, but beneficial to success. We must also realize that while it may sound like a worthwhile goal, attempting to completely remove all risk is futile. And finally, the days of getting by with compliance-focused, checklist-style assessments have passed. This is why you need a risk assessment that provides risk-prioritized data that allows you to make informed decisions about what risks are acceptable and what risks must be addressed. Our assessments go well beyond a traditional compliance checklist so that you can see how industry, attack scenarios, and real-time threat data affect your third parties. To learn more about how CyberGRX can help you manage your third-party cyber risk, request a demo today.
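To make the inherent-versus-residual arithmetic in that example concrete, here is a toy scoring sketch in Python. The scales and numbers are entirely hypothetical, and real quantification frameworks are far more nuanced, but the shape of the calculation is the same.

```python
def inherent_risk(likelihood: float, impact: float) -> float:
    """Risk expressed as a function of threat likelihood and impact (0-1 scales)."""
    return likelihood * impact

def residual_risk(inherent: float, control_effectiveness: float) -> float:
    """What remains after controls are accounted for (effectiveness 0-1).
    It only reaches zero for perfect controls, which do not exist."""
    return inherent * (1.0 - control_effectiveness)

# Hypothetical numbers for the healthcare example: capable, motivated
# attackers (high likelihood) going after valuable PHI (high impact).
inherent = inherent_risk(likelihood=0.9, impact=0.95)
residual = residual_risk(inherent, control_effectiveness=0.7)
print(f"inherent={inherent:.2f}, residual={residual:.2f}")
# -> inherent=0.85, residual=0.26: medium residual risk despite strong controls
```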
<urn:uuid:b34c08f8-361a-4045-a982-bc72bc931b19>
CC-MAIN-2022-40
https://www.cybergrx.com/resources/research-and-insights/blog/the-persistent-nature-of-risk-and-why-it-matters
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00053.warc.gz
en
0.946426
1,315
2.765625
3
But it’s important to choose and use your apps carefully. Some apps may be scams or contain viruses. What can you do to keep yourself safe? Any time you install an app, it’ll ask you to allow it permission to access functions of your device — stuff like the camera, location data, and contacts list. But should a fitness app need to use your camera, or a game need to know who you call? Click “Deny” to keep an app from getting certain permissions. Stick to the official sources for your apps. Research before you buy or download, and only install apps from a reputable developer. The Apple App Store and Google Play have standards for what apps they include, and something from the official store is less likely to cause problems for you – but still be cautious! Spot The Scam Check out the reviews and information about the app. If there are a lot of high ratings, but no actual reviews, or if the reviews appear suspiciously similar or low quality, it could be a scam. Also, look for information on the developer – if there’s little information, no responses, and no indication that the developer is supporting the app, think twice (or three times) about installing it. Vaccinate Your Device Make sure all your devices have antivirus and/or antimalware software installed. That way, even if you download a malicious app, or an app you’ve been using for a while becomes a problem, you have another layer of defense to help secure your device. Follow the app safety advice in this infographic from INFOSEC – stay safe and “appy”!
<urn:uuid:1d72ea1e-747d-4d5a-9a4b-b812c5a6d553>
CC-MAIN-2022-40
https://milepost42.com/cybersecurity/dont-worry-be-safely-appy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00053.warc.gz
en
0.917162
352
2.515625
3
While governments and organizations are attacked daily, some attacks leave little, if any, damage. It really depends on the sophistication level of the threat actor, which varies with each hack. However, there can be a lot learned from the level of sophistication an attack brings with it. For instance, in 2013, hackers attacked European governments using a malware called ‘MiniDuke.’ MiniDuke malware was used to target 23 countries worldwide, exploiting a flaw in Adobe software. The North Atlantic Treaty Organization (NATO) was one of the organizations attacked with MiniDuke, but it was not compromised in the end. “This is a unique, fresh and very different type of attack,” said Kurt Baumgartner, a senior security researcher with Kaspersky Lab, according to an article from CNBC. “The technical indicators show this is a new type of threat actor that hasn’t been reported on before.” MiniDuke malware explained Security firm FireEye had found security bugs in Adobe’s Reader and Acrobat software two weeks before the MiniDuke campaign started. FireEye then reported that hackers were infecting systems by circulating PDFs tainted with malicious software, a common strategy. The attackers in this situation combined old malware-writing tactics with the recently discovered vulnerabilities in Adobe Reader to collect geopolitical intelligence from their targets. The PDFs posed as documents about an Asia-Europe Meeting (ASEM) human rights seminar, Ukraine’s foreign policy and NATO membership plans. What they really contained, however, were exploits attacking certain versions of Adobe Reader and bypassing its sandbox, the security mechanism that isolates running programs so that a compromise of one cannot spread to others. Once a system was accessed, a downloader carrying a customized backdoor was dropped onto the victim’s desktop. Cybersecurity expert Boldizsar Bencsath believed the backdoors were installed in organizations the hackers were interested in, so they could continue to collect any information they came across in the future. According to a Reuters article, the MiniDuke attackers’ approach to communicating with infected machines was unique. “The virus was programmed to search for Tweets from specific Twitter accounts that contained instructions for controlling those personal computers. In cases where they could not access those Tweets, the virus ran Google searches to receive its marching orders.” The premade tweets carried specific tags labeling encrypted URLs for the backdoors, giving access to the C2s, which then provided commands and delivered additional backdoors onto the system as encrypted payloads hidden inside GIF image files. Once the images were downloaded onto the system, the attacker could carry out basic actions and execute new malware. Details of the attack MiniDuke struck at more than 20 countries, hitting 59 unique victims. According to Kaspersky Lab’s analysis, “a number of high-profile targets have already been compromised by the MiniDuke attacks, including government entities in Ukraine, Belgium, Portugal, Romania, the Czech Republic and Ireland. In addition, a research institute, two think tanks and a health care provider in the United States were also compromised, as was a prominent research foundation in Hungary.” Russia’s Kaspersky Lab and Hungary’s Laboratory of Cryptography and System Security (CrySyS) said that MiniDuke was designed for espionage, but researchers are still trying to figure out the attacks’ ultimate goal.
Due to the attacks’ sophistication and high-profile targets, experts suspected that a nation-state was behind them. CrySyS identified servers in Panama, France, Switzerland, Germany and the U.S. as the source of the code; however, further examination of the code didn’t reveal any more information about its origin. The combination of techniques used in the MiniDuke attacks stuck out to cybersecurity experts. For example, Eugene Kaspersky, founder and CEO of Kaspersky Lab, said, “I remember this style of malicious programming from the end of the 1990s and the beginning of the 2000s. I wonder if these types of malware writers, who have been in hibernation for more than a decade, have suddenly awoken and joined the sophisticated group of threat actors active in the cyberworld.” Old-school threat actors were able to create complex viruses. The people behind MiniDuke were able to use the same complexity as the old threat actors and add clever social engineering aimed at high-profile organizations, making them very dangerous. MiniDuke may have stopped its campaign or decreased its intensity to stay off the radar for a while, but the threat didn’t stay quiet for long. Since 2013, many variations of MiniDuke have sprung up, such as CozyDuke and CosmicDuke, just to name a few. In 2014, CozyDuke targeted the White House and the U.S. Department of State, and CosmicDuke, which also targeted important organizations, was deemed the “new” MiniDuke. Where one malware family or threat actor falls off, more will come out to take its place. With malware threats and ransomware still on the rise, it is important to stay vigilant and mitigate any vulnerabilities as they come.
<urn:uuid:12047f65-f4cc-43c9-a190-1c916aab0649>
CC-MAIN-2022-40
https://www.industrialcybersecuritypulse.com/facilities/throwback-attack-miniduke-malware-attacks-23-countries/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00053.warc.gz
en
0.965945
1,085
2.765625
3
Wi-Fi. It's a symbol that everyone recognizes, a term that most people are familiar with, yet many business Wi-Fi networks are left unprotected and exposed. Securing your Wi-Fi is about more than avoiding a slower connection; it is an integral part of being proactive in protecting entire internal networks. SECURED VS. UNSECURED If Wi-Fi is unsecured, anyone can access it. Secured Wi-Fi, by contrast, is available only to individuals who have been granted access. If the Wi-Fi is not secure, many individuals have no problem taking advantage of the connection or the information that comes with it. According to a study conducted by Symantec, 25% of individuals surveyed have accessed a Wi-Fi network without the owner's permission, and 8% admitted to guessing or hacking the password. Once someone is connected to an organization's Wi-Fi connection, there is a greater chance that they can gain access to an internal network. In terms of implementation, organizations can increase Wi-Fi security by using a strong password users must type in to access the Wi-Fi. For even more security, businesses can keep Wi-Fi traffic and their internal network separate. That way, even if someone happens to connect to the secured Wi-Fi, they won't have access to internal data. Another highly recommended protective measure is implementing a separate Wi-Fi network for internal staff and anyone else the organization wants to give access, while having a separate guest wireless network for visitors. WHY IT'S IMPORTANT If Wi-Fi isn't secured, someone could accidentally or maliciously access an organization's internal network. Once within an internal network, malicious access could result in stolen data, the shutdown of devices, encryption of data that the organization would have to pay to get unencrypted, installation of keyloggers on the network to steal users' passwords, and the list goes on. These vulnerabilities can be extremely costly, which is why taking the extra step to secure your organization's Wi-Fi network is important to protect the business at large and its users. Wi-Fi security goes beyond a slow connection; it's about decreasing a huge security risk that could lead to a loss of time, money, and confidential data. If you are unsure about the security of your Wi-Fi network, contact an IT team today to ensure the protection of your technology and information. It can make all the difference in protecting your network. Your Wi-Fi network can be compromised, but so can your mobile devices. Educate yourself and your team about the signs that will tell you whether or not your mobile device has been hacked.
<urn:uuid:7f90f7f7-66ca-4288-8d9e-c3a39d192634>
CC-MAIN-2022-40
https://blog.fivenines.com/topic/wi-fi
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00053.warc.gz
en
0.949394
559
2.5625
3
Cyberwarfare is the use of sophisticated cyber weapons (viruses, worms, trojans, etc.) by one nation-state to infiltrate, spy on, and disrupt critical infrastructure systems of another country. In the 21st century, most countries prepare for cyberwarfare digitally no differently than they do for physical warfare. Wargames are practiced by each military organization, focusing on both offensive and defensive capabilities. Cyber espionage is clandestinely conducted by most nation-states, with some receiving more attention than others for their prowess. Cyberwarfare is also practiced by terrorist groups aiming to further their specific goals. In January of 2020 the Department of Homeland Security put out an alert notifying citizens of potential cyber attacks from Iran. This came amid heightened tensions between the US and Iran following a drone strike that took out a notorious Iranian military leader. Some cybersecurity experts put Iran’s cyberwarfare capabilities right behind Russia and China. “Russia and China are Tier 1 cyber aggressors and very close behind them comes Iran, then North Korea. It is often difficult to distinguish between different countries in cyber terms as they probably use proxies in each other’s countries to mask the true originator. The U.S., U.K. and Israel are probably the West’s Tier 1 countries with sophisticated capabilities from both a defensive and offensive perspective.” Iran has hacked numerous government websites, taken down servers of corporate targets, and broken into email accounts of people speaking out against their regime. Their actions seem to be geared toward cyber vandalism, but that doesn’t mean that they aren’t capable of something far more serious. Experts regularly exchange ideas on Iran’s cyberwarfare capabilities. Christopher Krebs, head of the US’s Cybersecurity and Infrastructure Security Agency, warned about various scenarios his agency thinks are within Iran’s capability. He suggested Iran could take over our power grids and shut them down for days or weeks. The stock market could be hacked into, taken offline, or simply manipulated, causing economic turmoil. Iran could take over water supply systems, leading to unsafe drinking water, or even hack into Tesla’s auto-drive feature to take control of the vehicle. These may seem like exceptional hacking events, but increasingly cybersecurity researchers are showing them to be very possible. According to one DHS employee, “Iran is capable, at a minimum, of carrying out attacks with temporary disruptive effects against critical infrastructure in the United States.” Additional Reading: DHS Warns of Potential Cyber Attack from Iran What does this mean for an SMB? SMBs shouldn’t focus on nation-state attacks. However, the steps they take to prevent a breach at their SMB will make them a more difficult target for nation-state attacks as well. SMBs ought to focus primarily on their employees, taking simple measures to improve employees’ online security. 10 Steps every SMB should take to Protect themselves from Cyber Attacks: - Implement the Principle of Least Privilege. Remove administrator rights from employees’ local Microsoft Windows workstations. - Monitor computer systems with Network-based Intrusion Detection Systems to see where data is coming from, going to, and who accesses it. - Implement Data Loss Prevention technologies on your email systems to spot critical and sensitive data leaving your business via email. - Train employees on cybersecurity best practices.
- Phish-test employees to keep them vigilant in their inboxes. - Govern staff with policies to guide behaviors and independent decision making. - Regularly back up all your critical data using the 3-2-1 approach. - Adopt a Password Manager for all employees. - Enable two-factor authentication on all critical Internet-enabled services. - Buy enough Cyber Insurance to cover a catastrophic breach event. Become more aware to become more secure.
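As one concrete illustration of the two-factor authentication step above, here is a short sketch using the third-party pyotp library (pip install pyotp). The account name and issuer are hypothetical, and the flow is simplified; in practice the verification code comes from the user's authenticator app, not from the server itself.

```python
import pyotp  # third-party library: pip install pyotp

# Each user gets a unique secret, stored server-side and enrolled in their
# authenticator app (usually via a QR code built from the provisioning URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleSMB"))

# At login, the user supplies the 6-digit code from their app in addition
# to their password; the server verifies it against the shared secret.
code = totp.now()  # in real life this comes from the user's device
print("valid" if totp.verify(code) else "rejected")
```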
<urn:uuid:f30566a1-9820-43bc-a76d-3277a51d8d78>
CC-MAIN-2022-40
https://cyberhoot.com/cybrary/cyberwarfare/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00254.warc.gz
en
0.939098
779
3.15625
3
Cyber attacks against state and local governments have been dramatically increasing. In 2019 alone, there were 140 ransomware attacks – an average of nearly three per week – targeting public, state and local government and healthcare providers. This is up 65% from the previous year. Just in the past month, four cities in the US were hit with ransomware infections. These cities, including New Orleans and Pensacola, Florida, all had essential government services sabotaged or halted. After the ransomware attack on New Orleans, the mayor was forced to declare a state of emergency. In Pensacola, the sanitation department lost email and telephone systems, internet servers and their online payment system. Earlier in 2019, when an encrypting ransomware attack took Baltimore’s IT systems hostage, the attack froze thousands of government computers and disrupted everything from real estate sales to water bill payments. Even with the help of the FBI, Secret Service and cybersecurity experts, the cost to the city will be astronomical at an estimated $18 million. Cyber attacks against state and local governments show no sign of slowing in 2020. In fact, the Cybersecurity and Infrastructure Security Agency (CISA), a division of the Department of Homeland Security, recently released a statement urging vigilance against cyber attacks and encouraging the adoption of better cybersecurity practices. That warning proved to be extremely timely. Over the past few days, the Texas Department of Information Resources has faced a spike in attempted cyber attacks, with 10,000 attempts to probe their systems occurring every minute. What can state and local governments do to rise to meet this challenge? IT teams working for local agencies are often already making do with too few personnel and a stretched budget. So, while an improved cybersecurity posture is essential in the face of recent threats, it can be hard to figure out where to start. So, start by protecting the most critical assets. Government entities frequently have access to a lot of personally identifiable information and other types of data that would be disastrous if an attacker got their hands on it. If privileged access to this data is kept safe, even in the case of a network breach, the most vital information will stay secure. Privileged access is the gateway to these critical assets, and compromised privileged credentials have played a central role in almost every major targeted attack. That makes it a perfect starting place when it comes to securing state and local government systems against an ongoing tide of cyber attackers. This is why the Center for Internet Security (CIS) lists controlled use of administrative privileges as the fourth Basic CIS Control, behind only inventory and control of hardware and software assets and continuous vulnerability management. Here is how a typical attack works: The cyber attacker starts by establishing a beachhead on an endpoint of the organization that they are aiming to breach. After gaining initial access and establishing persistence, the attacker escalates privileges to gain access to another system that brings them one step closer to their target. From there, the attacker can continue to move laterally until the target is reached, data is stolen, and operations are disrupted – or completely taken over. By protecting the privileged credentials cyber attackers need, Privileged Access Management (PAM) provides security where it’s needed most.
In the face of an onslaught of cyber attacks, state and local governments need more than ever to establish a proactive, sustainable cybersecurity program. Instead of getting overwhelmed, start with Privileged Access Management and keep the most vital assets protected. It’s time to learn more about Privileged Access Management and staying safe from cyber attacks.
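As a toy illustration of where controlled use of administrative privileges starts, here is a sketch in Python that flags accounts holding more privilege than their role requires. The roles, accounts, and privilege model are all hypothetical; a real PAM deployment would pull this from a directory service rather than a hard-coded list.

```python
from dataclasses import dataclass

# Hypothetical roles and the privilege level each one actually needs.
EXPECTED_PRIVILEGE = {"clerk": "user", "analyst": "user", "sysadmin": "admin"}

@dataclass
class Account:
    name: str
    role: str
    privilege: str  # "user" or "admin"

def find_excess_privileges(accounts):
    """Flag accounts holding more privilege than their role requires -
    exactly the credentials an attacker escalates to and moves with."""
    return [a for a in accounts
            if a.privilege == "admin" and EXPECTED_PRIVILEGE.get(a.role) != "admin"]

accounts = [
    Account("jdoe", "clerk", "admin"),       # excess privilege: should be flagged
    Account("asmith", "sysadmin", "admin"),  # expected for the role
    Account("blee", "analyst", "user"),      # fine
]
for acct in find_excess_privileges(accounts):
    print(f"Review: {acct.name} ({acct.role}) holds admin rights")
```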
<urn:uuid:8fb1c3e2-da30-44b9-9626-f50a5911a437>
CC-MAIN-2022-40
https://www.cyberark.com/resources/blog/cyber-attacks-against-state-and-local-governments-surge
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00254.warc.gz
en
0.951364
710
2.96875
3
Who remembers seeing Jurassic Park in theaters in 1993? By now, the dinosaur flick is ancient history, but with Jurassic World hitting theaters today, you might have huge lizards on the brain. What does this have to do with technology? Well, researchers in Kenya are using 3D scanning and printing to preserve fossils, so the real ones can be safely removed from hazardous weather conditions in the Turkana Basin. According to CNet, 3D scanners and printers have been used in many different industries for several purposes. They can be used for construction, making prosthetic limbs, and even recreating crime scenes for forensic specialists. However, one of the more innovative features is using 3D printing technology to create 3D models of fossils. These replicas can be subject to study while the real deals are kept safe and preserved in a museum, where they belong. The scanners in question were created by Artec, the company responsible for the Shapify 3D-printed selfies. As you might expect, the climate of Kenya, where the fossils were located, isn’t exactly hospitable to relics that are millions of years old. The Turkana Basin is particularly well known for its harsh climate. With sunlight that’s powerful enough to give most electronics a run for their money, the laptops that would normally receive the 3D scans from these handheld scanners were rendered useless. To make matters worse, there wasn’t a nearby power source, limiting the amount of time the researchers could spend with the fossils in any given sitting. With the help of two 3D specialists, the team was able to make the batteries last for a whole two days. Over the course of two weeks, the excavation team was able to uncover and replicate a crocodile skull, a full crocodile skeleton, a huge tortoise, and an extinct species of elephant. As you might expect, the harsh conditions heavily damaged the fossils, so it was up to the team to get them to safety as quickly as possible. By replicating the fossils with 3D technology, the scientists were able to study the fossils more in-depth without risking their integrity due to the undesirable environmental conditions. Just like anyone who wants to change the face of the industry, these brave scientists decided to do something differently in order to find a better way to accomplish their goals. People like these innovators are always working toward optimizing the way in which professionals perform their daily functions. This is why ExcalTech provides quality managed IT services. We want to help businesses like yourself better accomplish your goals through the wondrous power of technology. If your business is ready to let go of your fossil-like technology, give us a call at (877) 638-5464. We can help you dust off your business practices so you can get back in the game.
<urn:uuid:68b39c08-a1d3-4f0d-9c24-9895d5a19b7b>
CC-MAIN-2022-40
https://www.excaltech.com/3d-scanners-give-scientist-glimpse-real-jurassic-world/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00254.warc.gz
en
0.967371
569
3.375
3
When devices communicate across the Internet, they send messages to one another. These messages are similar to mail in that they have a ‘to’ address and a ‘from’ address. The ‘to’ address is the IP address of the device it wants to send the message to, and the ‘from’ address is the IP address (and port number) to send any return messages to. One of the rules of private IP addresses is that they should never be seen on the Internet, because theoretically millions of devices could share that IP address, and routing to a private IP address would not work. Therefore, they need to be removed somehow when a device with a private IP is sending messages. Network Address Translation (NAT) is the process of rewriting the addresses on a message before it is transmitted across the Internet, or rewriting it after it has been received. To keep track of everything, when the private IP and port number are overwritten, a record is kept of where the message goes. This is dynamic: the record is created when an outbound message is sent, and after a specified duration, the record is deleted. This means that when a reply is received with the same details, the incoming message can be rewritten and readdressed to the device that initially sent the message. NAT allows numerous devices to share one IP address, and it is the necessary process of rewriting and translating the address (routing) information as messages are sent to and received from the Internet. NAT and VoIP calls VoIP calls primarily consist of two parts: setting up the call, and then transmitting the audio. NAT can cause problems for VoIP calls, the most common of which is one-way audio. The transmitting of audio will happen on different ports than the ones used to set up the call. So if we remind ourselves about how NAT works, when a message is sent out, the router makes a record of the destination IP address and port. When a reply is received from that exact location, it is compared against the record, and the message can then be redirected to the original internal address. This works fine for the first part of a VoIP call (setting up the call). However, when audio transmission begins, it will use a different port. There is no record of a message to this IP address and specific port being used recently, so the router doesn't know where to send the information. So it does the only thing it can do—it ignores it. This means that your VoIP call has been set up without issue, but you won't hear the other person as the audio information doesn't get through.
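Here is a toy simulation of that bookkeeping in Python. The addresses are hypothetical and the model is grossly simplified relative to a real router (real NATs also track the remote endpoint and apply timeouts), but it shows why the unsolicited audio stream gets dropped.

```python
# Toy NAT: maps (internal_ip, internal_port) <-> an external port.
class ToyNAT:
    def __init__(self, external_ip: str):
        self.external_ip = external_ip
        self.next_port = 40000
        self.records = {}  # external_port -> (internal_ip, internal_port)

    def outbound(self, src_ip: str, src_port: int):
        """Rewrite the 'from' address and record where replies should go."""
        ext_port = self.next_port
        self.next_port += 1
        self.records[ext_port] = (src_ip, src_port)
        return self.external_ip, ext_port

    def inbound(self, dst_port: int):
        """Route a reply back in, or drop it if no record exists."""
        return self.records.get(dst_port)  # None means: ignore the packet

nat = ToyNAT("203.0.113.7")
print(nat.outbound("192.168.1.20", 5060))  # call setup: record created
print(nat.inbound(40000))                  # reply routed to ('192.168.1.20', 5060)
print(nat.inbound(40001))                  # audio on a port with no record: None, dropped
```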
<urn:uuid:17eaf005-6ad5-4cbd-910c-f8896ddfc85b>
CC-MAIN-2022-40
https://support.flowroute.com/329740-What-is-Network-Address-Translation-NAT-
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00254.warc.gz
en
0.952876
546
3.6875
4
The fact that Wi-Fi is popularly said to stand for Wireless Fidelity hints at how long Wi-Fi has been around, but it was only in 1999 that the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark, under which most products are sold. Today, Wi-Fi is at the top of the list of must-haves for businesses of all types and sizes. People will simply vote with their feet if good (and usually free) Wi-Fi is not available. But this demand for anytime, anyplace connectivity can mean that some of us are prepared to jump onto Wi-Fi hotspots at cafes, hotels, airports or company guest networks with only a fleeting consideration of security – a fact that has not gone unnoticed by cyber criminals. There are over 300,000 videos on YouTube alone explaining how to hack Wi-Fi users with tools easily found online, according to Tony Evans from Wick Hill. Risks from unprotected Wi-Fi Wi-Fi password cracking – Wireless access points that still use older security protocols such as WEP make for easy targets because these passwords are notoriously easy to crack. Hotspots that invite us to log in by simply using social network credentials are increasingly popular, as they allow businesses to use demographic information such as age, gender and occupation to target personalised content and advertisements. Eavesdropping – Without encryption, Wi-Fi users run the risk of having their private communications intercepted, or packet sniffed, by cyber snoops while on an unprotected network. Rogue hotspots – Cyber criminals can set up a spoof access point near your hotspot with a matching SSID that invites unsuspecting customers to log in, leaving them susceptible to unnoticed malicious code injection. In fact, it is possible to mimic a hotspot using cheap, portable hardware that fits into a backpack or could even be attached to a drone. Planting malware – There are common hacking toolkits to scan a Wi-Fi network for vulnerabilities, and customers who join an insecure wireless network may unwittingly walk away with unwanted malware. A common tactic used by hackers is to plant a backdoor on the network, which allows them to return at a later date to steal sensitive information. Data theft – Joining an insecure wireless network puts users at risk of losing documents that may contain sensitive information. In retail environments, for example, attackers focus their efforts on extracting payment details such as credit card numbers, customer identities and mailing addresses. Inappropriate and illegal usage – Businesses offering guest Wi-Fi risk playing host to a wide variety of illegal and potentially harmful communications. Adult or extremist content can be offensive to neighbouring users, and illegal downloads of protected media leave the businesses susceptible to copyright infringement lawsuits. Bad neighbours – As the number of wireless users on the network grows, so does the risk of a pre-infected client entering the network. Mobile attacks, such as Android’s Stagefright, can spread from guest to guest, even if the initial victim is oblivious to the threat. There are established best practices to help secure your Wi-Fi network, alongside a drive to extend well-proven physical network safeguards to the area of wireless, providing better network visibility to avoid blind spots. Implementing the WPA2 Enterprise (802.1x) security protocol and encryption is a must, while all traffic should, at a minimum, be inspected for viruses and malware, including zero day threats and advanced persistent threats.
Application ID and control will monitor and optionally block certain risky traffic, while web content filtering will prevent unsuspecting users from accidentally clicking a hyperlink that invites exploitation, malware and backdoors to be loaded into your network. The use of strong passwords, which are changed frequently, should be encouraged, along with regular scanning for rogue Access Points (APs) and whitelisting MAC addresses when possible. While WIDS (Wireless Intrusion Detection Systems) are common in many Wi-Fi solutions, they require manual intervention to respond to potential threats. This may be OK for large organisations with IT teams that can manage it; a WIPS (Wireless Intrusion Prevention System), however, is fully automated, which makes it far more attractive to SMEs and organisations such as schools and colleges.
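As a small illustration of the rogue-AP scanning mentioned above, here is a sketch (with made-up scan data) of one common heuristic: flagging a known SSID that suddenly appears from an unfamiliar BSSID, the signature of an 'evil twin' hotspot. In practice the scan results would come from the wireless driver or a survey tool rather than a hard-coded list.

```python
# Hypothetical scan results: (SSID, BSSID) pairs as a wireless survey
# tool might report them.
AUTHORISED_APS = {
    "CafeGuest": {"aa:bb:cc:11:22:33", "aa:bb:cc:11:22:34"},
}

def find_rogue_aps(scan_results):
    """Flag access points advertising one of our SSIDs from an unknown BSSID."""
    rogues = []
    for ssid, bssid in scan_results:
        known = AUTHORISED_APS.get(ssid)
        if known is not None and bssid not in known:
            rogues.append((ssid, bssid))
    return rogues

scan = [
    ("CafeGuest", "aa:bb:cc:11:22:33"),    # legitimate
    ("CafeGuest", "de:ad:be:ef:00:01"),    # same SSID, unknown radio: suspect
    ("NextDoorNet", "12:34:56:78:9a:bc"),  # someone else's network, ignore
]
for ssid, bssid in find_rogue_aps(scan):
    print(f"Possible rogue AP: SSID {ssid!r} from BSSID {bssid}")
```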
<urn:uuid:06b6edeb-d0d7-47b1-88d0-be6d05d57505>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2017/01/05/wifi-secure-hotspot/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00254.warc.gz
en
0.932647
860
2.890625
3
In the previous article, we left off with the basic storage model having its objects first existing as changed in the processor’s cache, then being aged into volatile DRAM memory, often with changes first logged synchronously into I/O-based persistent storage, and later with the object’s changes proper copied from volatile memory into persistent storage. That has been the model for what seems like forever. With variations, that can be the storage model for Hewlett-Packard Enterprise’s The Machine as well. Since The Machine has a separate class of volatile DRAM memory along with rapidly-accessible, byte-addressable persistent memory accessible globally, the program model could also arrange for doing much of its processing in cache and DRAM, only later forcing the changes into persistent memory. So now consider how long it takes to copy log records and then object data from DRAM into persistent memory as opposed to copying the same out to I/O-based disk drives. Simply saying that it is faster is a gross understatement. Still, given this speed difference, is this previously outlined storage model the right model for a system having such persistent memory? Transaction-synchronous log record writes followed by asynchronous data writes are done that way partly because of the speed of that persistent storage. That need not be the storage model for The Machine; DRAM need not be involved in holding the object or the log records at any time. The volatile DRAM space should be used to handle all of the normally transient data, but the persistent objects proper and associated log records, both normally residing in persistent memory, can be byte-addressed and worked on directly. Well, that is, directly by way of the processor’s volatile cache. Persistent memory, though, is not some kind of a magic bullet as it relates to the ACID property requirements of transactions. It is typically not enough just to dump changes into persistent memory from the cache. The basic issue with ACID is that any transaction can require multiple changes to an object to fully execute, and a power failure can occur between any of them. Even if the transaction’s changes had all successfully been made in the cache and most of them had succeeded in being written to persistent memory, a failure at that point still leaves the object in an inconsistent state. Keep in mind that today’s volatile-memory-based program model would have considered the object’s changes complete at the point where the changes were still residing in the cache. It follows that at least the essentials of object recovery based on log records described in the previous article are still required. Assuming such a recovery log with records describing for each transaction the object’s state and changes, does the processing of a persistent memory object’s transaction need to look just like that of an object on disk? Maybe not. Consider: is it reasonable (read that as, would transactions be sufficiently fast) for a transaction’s processing AND persistence to be completely synchronous, as in the following? - An exclusive lock is applied to the object. - A set of log entries is built describing each of the changes to be made. The transaction is finally marked in the log with a record saying “committed.” With these transaction descriptions logged and forced to reside in persistent memory, a power failure after the “commit” would still allow recovery code to (re)build the object.
If, though, a failure had occurred prior to successfully forcing this “committed” entry into persistent memory, recovery after restarting would act as though the transaction had never started. - The object’s changes are subsequently made, first in the processor’s cache, then each forced from there out to the object’s location(s) in persistent memory. (This might mean forcing out as little as one cache line.) Once all of the object’s data blocks reside in persistent memory, the changed object is – at least physically – now visible as being in a consistent state. - Once all of these data blocks are known to reside in persistent memory, and still under the exclusive lock, there is now no need for the transaction’s log entries. The transaction is truly complete. These log entries can be trimmed. (Upon starting the next transaction, it can be quickly known that no recovery is needed.) - The lock is freed. After this point, the changed object is both physically and logically accessible to other transactions. Seems straightforward enough, certainly much faster – and much more reasonable – than attempting the same approach with disk drives as the persistent storage. But is even this fast enough? Can the process of forcing the object out of the cache and into persistent memory be deferred (that is, be handled asynchronously), perhaps allowing such persistent object methods to complete faster? A Quirk in The Machine? Before we go there, I need to (re)observe a bit of a quirk in this context about The Machine. Up to now, in this article, I’ve tried to speak generically about persistent memory, allowing the program model to apply elsewhere (i.e., on HPE’s near-future persistent memory-based competitors) as well. The Machine’s quirk: cache coherency. You will recall in the disk-based storage model that, even if the changed object’s pages are not immediately forced out to disk, the changed object is still visible in volatile DRAM – once the locks are freed – to other transactions, even if those transactions execute on other processors. This is true even if all or parts of that changed object happen to still reside in the processor cache; within a single cache-coherent SMP, any processor (or attached I/O device) can still see the changes. Said differently, the changed object is globally visible by any processor within a cache-coherent SMP even if that change is still only in volatile memory (i.e., cache or DRAM). On The Machine, though, this cache coherency is scoped to the processors of individual nodes; a change in Processor A’s cache is visible to Processor B if Processor B resides in the same node, but a Processor C, residing in another node, is incapable of seeing that cached change. Processor C can see any persistent memory proper, but not necessarily all of the cached data blocks holding changed data from – and so destined to – that persistent memory. In The Machine, a changed object still residing in cache is visible to processors on the same node. For processors throughout The Machine (that is, on other nodes) to see the changed object, that change must reside in persistent memory. Again, the changed object becomes visible to off-node processors after the change successfully makes its way into persistent memory. Even though a transaction on an object is considered to be complete, in The Machine no other node’s processors can see the transaction’s results until the changes are in persistent memory.
So, yes, on The Machine, the actual write-back of changed objects to persistent memory can be handled asynchronously (allowing the object to stay in cache indefinitely) – say, by later having another thread force the object out of the cache – if that object is only scoped to threads limited to a single node. It follows that only synchronous write-backs would be allowed if a persistent object is to be shared by processors residing across The Machine’s node boundaries. This would seem to create an inconsistency in a persistent memory program model; or, more to the point, perhaps what it creates is two views of persistent memory. For example, consider some existing multi-node system which today is also a fully cache-coherent, NUMA-based system. In this, and within each node, let’s replace some of the DRAM with persistent memory. All cache, all DRAM, and all persistent memory is accessible from any processor. Being fully cache-coherent, object changes still residing in any cache are visible from any processor on any node. In such a globally cache-coherent system, the two views of persistent memory need not exist. But, please, don’t get me wrong; we are here talking about a subtlety with the programming model. Even without a fully cache-coherent system, all that it takes to allow a persistent object to become globally visible throughout The Machine is to be additionally aware that the changed object does need to be explicitly pushed out of the cache and into persistent memory. It’s a difference, yes, but still a subtlety. For comparison, ask yourself what it would take to do the same thing in a truly distributed-memory cluster (i.e., one where the nodes are connected via even a high-performance Ethernet or InfiniBand link); big bucks and a lot of smarts go into minimizing that effect. Rather than add this extra level of confusion to the remainder of this article, the program model we will be discussing in the next section assumes full cache coherency. Still, let’s momentarily take a look at post-failure recovery given that we instead (additionally?) used asynchronous writes of object changes into persistent memory. Recovery Based On Asynchronous Writes To Persistent Memory Recall that in the disk drive-based persistent storage model, the transaction was allowed to complete before the actual changes to the object had made their way to disk. The log writes to disk were synchronous with the transaction (i.e., they were done before the transaction completed), but the actual object changes in memory made their way out sooner or later – asynchronously. So, let’s suggest doing the same thing with persistent memory and see what that means. In this model we are requiring synchronously writing log entries to persistent memory before the transaction completes, but we will allow the actual persistent object changes to continue to reside in cache even after the transaction completes and be visible from there. Again, in what follows, we are talking about persistent memory, not disk drives. Unlike the synchronous model, we are not forcing the actual changed object into persistent memory before the transaction’s lock is freed. The beauty of this is that, simply based on normal aging of data blocks in each cache, the changed object tends to make its way back out to persistent memory sooner or later. Tends to, yes, but any data block can also stay in some processor’s cache for who knows how long. So, the object sooner or later becomes persistent, yes; the problem is that you just don’t know when.
Recall, though, that you need the changes described in the recovery log to remain there until you do know. In short, the log can’t be trimmed until you know for sure. Post-failure recovery needs it. But, just as with asynchronous page writes to disk drives, you can know when the object has been flushed from the cache and do asynchronous write-backs. Perhaps more to the point, you can know that the cached object write-backs are done prior to some point in time, not necessarily exactly when. You can have the write-backs be done later by another thread of execution, one separate from the thread(s) doing the program processing. All that that support thread needs is a list of all of the data blocks – data blocks which may or may not still reside in cache – which it is responsible for forcing back out into persistent memory. Once these are successfully written, this thread can then also know that the persistent object’s changed state is now again consistent in persistent memory, at least from the point of view of the transaction for which these changes were made. Knowing this, it can trim the log. Interestingly, the log itself describes the location of those very data blocks for those transactions which are already known to be complete. Changing concepts, we observe that recovery after restarting post-failure is also a bit more complex. Since you can’t know which objects might be in a damaged and inconsistent state at recovery time, all of the logs need to be processed prior to restarting most anything, to ensure that persistent objects really are in a consistent state prior to any access. So we’ve just said that ensuring that an object’s changes have been forced from the cache and into persistent memory – making it durable – is a prerequisite for cleaning up the recovery log. Let’s turn it around and assume that the hardware had quickly aged some transaction’s changed data block out into persistent memory, doing this prior to the point that the transaction log had recorded a “commit” for that transaction in persistent memory. That is, the changed object now resides in persistent memory before the log also says that the transaction had reached a “committed” state. Yes, this can easily happen. Let’s next also say that, prior to that “commit” event, and so also before the lock(s) are freed, a power failure occurs. This object is in a consistent state in persistent memory, but are the results of this transaction durable (in the sense of ACID requiring Durability)? Upon recovery and processing the log, the recovery code will find the transaction in the log, but recovery will not find that it had been committed. As far as this recovery code is concerned, sans commit record, there might have been more changes required to that object to make it consistent. So the recovery code is responsible for restoring the original – pre-transaction – state of the object, backing out the object’s changes that just happen to be already residing in persistent memory. More On The Program Model All of the preceding many words, though, have largely been background for the program model. Of course, the folks working on attempting to abstract all of this know this stuff to the extreme, and likely have recurring bad dreams involving it. We are going to attempt to outline next some of the work being done by Dhruva Chakrabarti and his team at Hewlett-Packard Labs in support of such a persistent-memory programming model.
Let’s start with their notion of a persistent region (PR) in persistent memory. Although there are OSes for which virtual addresses are also persistent (IBM i’s Single-Level Store comes to mind), this program model starts – much like a named file – by having your program first access an object, even in persistent memory, by way of some type of name; a file name in a directory or a global object handle are examples. The name effectively represents a persistent region in persistent memory in which your object resides. In their program model, you - Provide an object name to get an object handle. - Provide an object handle to get a process-local virtual address representing a root into the object. At the bottom of it all, though, this region is really a portion of persistent memory (where every byte is addressable), so this persistent region also represents a contiguous real address space, a physical portion of this persistent memory. Just like tracks on a disk drive used for a file, it exists as something physical, but you don’t really need to worry much about where; you are provided a virtual address to allow you to find and work directly within it. Perhaps you could think, for example, of a persistent region as being used for a named persistent heap. At any moment in time, the storage of this persistent heap could be in use by a large set of persistent objects. Once these objects become freed, their storage returns to the persistent heap for subsequent reallocation. The persistent heap object roots the persistent region such that memory backing both the objects residing in the heap and the freed storage managed by the heap are addressable within this persistent region. Indeed, part of this programming model has, just like today’s volatile memory heap, the ability to have unreferenced objects in the persistent heap be automatically garbage collected. This program model also provides the means of creating a persistent region after having detected that it does not yet exist. The program model also works to provide the expected transaction semantics (i.e., ACID). If a program does not happen to be executing within the bounds of a transaction, you can assume that the object’s state is consistent. As each transaction executes, as though one transaction follows the next, the object’s state effectively transitions from one consistent state to the next. We saw earlier that transactions are actually assumed to be concurrently executing. Such potential concurrent sharing often requires locks. An exclusive lock helps ensure that one and only one thread is updating an object at any moment in time. Such locking is required today even for volatile memory; the locking is protecting the fact that the object is shared, not necessarily that it is persistent. OK, let’s now turn it around. Let’s also now put that object into persistent memory, say on a persistent heap. You have further gone to the trouble of putting a lock around that object, again because it is shared. So, can the program model assume that – with an object in persistent memory and protected via a lock – what you really intended is for that object to be protected as though it were part of a transaction? So, quoting from a paper by members of HPE’s Program Model team, Atlas: Leveraging Locks for Non-volatile Memory Consistency: “For lock-based programs, our goal is to guarantee that durable program state appears to transition from one consistent snapshot to another. We preserve all existing properties of lock-based code.
There is no change to the thread’s memory model and memory visibility rules with respect to other threads. Isolation is provided by holding locks and it remains the responsibility of the programmer to follow proper synchronization disciplines. The only semantics we add is failure-atomicity or durability, only for memory locations that are persistent,” i.e., those within a PR. Within this paper, they go on and say that such consistency is guaranteed only between transactions; while executing a transaction, the state is often inconsistent. Failure, though, can occur at any time. If the failure occurs at a time when no transactions are executing, the object’s persistent state is, upon restart, in a consistent state. If not, it is for such cases that object recovery is needed. So part of the trick is for the program model to provide the minimal hints needed to allow your programs to describe the scope of transactions and the recovery needed upon failure. So: “We thus assume that data structures are inconsistent only in critical sections, and hence treat lock and unlock operations as indicators of consistent program points. We will call program points at which the executing thread holds no lock-objects thread-consistent. If no lock-objects are held by any thread, all data structures should be in a consistent state.” So there is part of it. Recall also that the program model knows at least one more thing about your objects; they reside in either volatile or non-volatile memory, and the program model knows the difference. After all, your program told it which one when it constructed the object. So, at compile time, the program model can know what you consider in need of protection from failure by looking for both what is to be saved in persistent memory AND what you are protecting as a transaction via locks. According to that same paper, Atlas: “A compilation pass instruments synchronization operations and store operations that appear directed to persistent memory. This results in calls to the Atlas runtime library, whereby synchronization operations and stores to persistent memory are tracked in a persistent log.” It would seem to follow that this same instrumentation would include the code necessary for ensuring that the persistent objects, as well as log entries, really have been forced out of the processor cache and into persistent memory. From the paper: - “Log entries become visible in [persistent memory] before the actual stores to the corresponding locations, so that all stores not part of the consistent state can be properly undone. - Log entries become visible in [persistent memory] in order. - Every log modification step is an atomic operation and its effect is visible in [persistent memory] atomically.” You will recall the earlier overview on trimming the log and maintaining a consistent state. Apparently Atlas support is intended to manage it asynchronously: “After a failure, a recovery phase, that is initiated in a programmer-oblivious manner, performs the reconstruction. A helper thread examines [the log] asynchronously and computes a globally consistent state periodically. … Computation of a consistent state renders some entries of [the log] unnecessary, allowing them to be pruned out.” As implied, the log is maintained in persistent memory: “The initialization phase must be called at program start-up. This phase performs two main tasks: - Creation of a process-private persistent region … to hold [the log].
- A helper thread is created to perform consistent state identification and log pruning.”
[Note: A process-private log suggests that this program model is limited to only the threads of individual processes and to cache-coherent environments. You may recall that The Machine maintains cache coherency only amongst the processors of its individual nodes. Persistent memory resides on many nodes and all of it is accessible from any processor on any node, so such objects could reside anywhere and be accessed from anywhere in The Machine, but this program model is not yet to a point where concurrent cross-process or cross-node sharing is supported. This is not to say that cross-process sharing via persistent memory is not possible; it is just handled differently.]
So, What Is To Follow?
Very fast persistent storage. Some of us will look at that and understand that we can save and load our files a lot faster. True. Others of us will look at that and realize that this rather does change things up quite a bit. Anything that we could have created in volatile memory can now – given some constraints on addressing – be created and maintained much more rapidly in non-volatile memory, and we can be sure it is as we left it when the power comes back on. We are going to see this as something rather revolutionary. Indeed, quoting Dhruva Chakrabarti, “The aim is to embed persistence in programming languages so that it is readily available for potentially all data. The approach described in Atlas tries to provide automatic support for most of the additional tasks required, hoping to ease the transition of existing code into the world of persistent memory.” There is, though, still a bit of a gap between the program model being produced to abstract away the warts on such hardware and the world most of us programmers live in. When programming, all we really want to say is that we want an object – name your favorite object – to be constructed, to do all the things that we do with it today in volatile memory, and, oh by the way, we want that object to reside in persistent memory and be failure-atomic. Easy to say, right? Fortunately, with some work from the open-source community, it can be relatively easy. I had earlier referred to a dictionary object, an object class today supported in a number of different languages. Many of us have used dictionary objects and have been glad that others have done the enabling of such an object for us. The semantics of a dictionary really are easy to use; we don’t much care how it is actually implemented, as long as it is fast. By now many of you can picture what it would take to enable even a dictionary object for persistence. Most of you reading this also know that enabling that one object for persistence is really just the tip of the iceberg. There are scads of objects out there. That is why the folks at HPE have been asking the open-source community for their involvement. As much as I honor the HPE folks for investing big time in The Machine and for getting out ahead of this technology, they know something else as well: they will not be the only company developing systems like The Machine. When they ask for assistance, they know that the open-source community is not particularly interested in developing single-vendor software. But HPE, and all of the companies that will be following soon in their wake, know that an entire software stack, a complete solution, is needed before such systems really take off.
When the hardware hits the market – and soon it will be hardware from a number of vendors – they all know that the software needs to be available as well.
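To make the lock-based failure-atomicity idea concrete, here is a minimal, purely illustrative Python sketch. It is not the Atlas API (Atlas instruments compiled C/C++ code against real non-volatile memory); the file-backed “region”, the class name and the log format are all stand-ins of my own. Only the shape of the mechanism follows the paper: an undo-log entry is persisted before each store, the log is pruned at lock release (a consistent point), and a non-empty log found at restart is rolled back.

import pickle
import threading

class FailureAtomicDict:
    """Toy persistent dictionary with Atlas-style, lock-delimited
    failure atomicity. 'Persistent memory' here is just a pickle file."""

    def __init__(self, path):
        self.path = path
        self.lock = threading.Lock()
        try:
            with open(path, "rb") as f:
                self.data, self.undo_log = pickle.load(f)
        except FileNotFoundError:
            self.data, self.undo_log = {}, []
        self._recover()

    def _persist(self):
        # Stands in for flushing cache lines plus a persist barrier;
        # this toy ignores torn writes, which real NVM code must not.
        with open(self.path, "wb") as f:
            pickle.dump((self.data, self.undo_log), f)

    def _recover(self):
        # A non-empty log means we crashed inside a critical section:
        # undo the logged stores, newest first, to restore consistency.
        for entry in reversed(self.undo_log):
            if entry[0] == "restore":
                self.data[entry[1]] = entry[2]
            else:                       # the key did not exist before
                self.data.pop(entry[1], None)
        self.undo_log.clear()
        self._persist()

    def store(self, key, value):
        # Record how to undo the store, and persist that log entry first.
        if key in self.data:
            self.undo_log.append(("restore", key, self.data[key]))
        else:
            self.undo_log.append(("delete", key))
        self._persist()                 # log visible before the store
        self.data[key] = value
        self._persist()

    def __enter__(self):                # lock acquire: critical section
        self.lock.acquire()
        return self

    def __exit__(self, *exc):           # lock release: consistent point,
        self.undo_log.clear()           # so the log can be pruned
        self._persist()
        self.lock.release()

with FailureAtomicDict("balances.db") as d:
    d.store("alice", 90)
    d.store("bob", 110)
# A crash between the two stores would be undone at the next start-up.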
<urn:uuid:49ed350d-7250-431f-bbd7-f0fba7892d70>
CC-MAIN-2022-40
https://www.nextplatform.com/2016/04/25/first-steps-program-model-persistent-memory/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00454.warc.gz
en
0.945561
5,110
2.9375
3
‘Chew with your mouth closed’ ‘Say please and thank you’ ‘Don’t interrupt – wait your turn to speak’ How many times have you issued those commands? Me – thousands! Teaching our kids manners and safety is something many parents just do automatically. We provide the hot tips as we go about our daily lives. But how many of us extend our advice to cyber issues? Recent research by McAfee entitled Tweens, Teens and Technology highlighted that our kids are embracing technology with great gusto. Not only are our tweens (8-12 year olds) using three to four internet-enabled devices and spending 1.5 hours per day online, but 67% of them are using a social media website such as Skype, Facebook, Club Penguin and Instagram. Being a good cyber citizen and smart operator online is absolutely critical in our digital age. So instilling online safety messages into our children should be one of our biggest priorities as parents in the 21st Century. Here are my top 5 online etiquette rules that will help your child become a good cyber citizen.
1. Treat others online as you would like to be treated. If you or your kids are ever in doubt about how to handle an online situation – always revert to this rule. The right course of action will become crystal clear.
2. Double-check before you hit ‘send’. Pay attention to typos, grammar and most importantly tone – these all help to create an impression of you online aka your digital reputation.
3. Don’t say something online that you wouldn’t say to someone’s face. If you have an issue with someone, don’t raise it online. In person is always best.
4. Understand that you will never agree with everyone online. There is a polite way of sharing your opinion online without attacking or abusing others. Harassing or attacking others online aka ‘flaming’ is not acceptable at all. Not only will you lose online friends but it will damage your online reputation real fast!
5. UNDERSTAND WHEN TO USE CAPS. Typing in caps means you are shouting. It is OK to use a word here or there but don’t do it all the time. It is aggressive and hard on the eyes.
Next time your kids ask to go online (or you find them online), why not take the opportunity to throw in a few of the above netiquette tips. And if they roll their eyes – ignore them. You wouldn’t let them visit a friend’s house without a timely reminder to use their manners – there is no difference! Whether it is offline or online, good manners are essential.
<urn:uuid:0dc7a7f6-5e19-4882-ab4a-496be36a606f>
CC-MAIN-2022-40
https://www.mcafee.com/blogs/family-safety/netiquette-teaching-kids-online-manners/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00454.warc.gz
en
0.920755
598
2.703125
3
Russia’s 2021 antisatellite weapon test is creating a huge number of near-misses between space debris and active satellites. The target satellite was destroyed by a surface-launched ASAT missile in November, creating a debris cloud eventually thought to be made up of 1,500 pieces. SpaceNews reports, according to space situational awareness company COMSPOC, that the November test is causing so-called “conjunction squalls” of thousands of close approaches, or conjunctions, with satellites over just a few days. “In the first week of April, in that week alone, there will be 40,000 conjunctions that we predict purely from that one event,” said Travis Langster, vice president and general manager of COMSPOC, during a panel at the 24th annual FAA Commercial Space Transportation Conference Feb. 17. In November, Russia destroyed the defunct Cosmos 1408 satellite, which had been orbiting at an altitude of approximately 485 kilometers. According to COMSPOC, as the debris orbits precess, they overlap the orbits of remote sensing satellites going in the opposite direction. “When they sync up, you have the perfect storm: they’re in the same orbit plane but counter-rotating, crossing each other twice an orbit, again and again,” said Dan Oltrogge, director of integrated operations and research at COMSPOC. Those squalls last for several days until the orbits precess out of sync. The company said it had noticed thousands of conjunctions – approaches within 10 kilometers – in January as the debris cloud passed by a series of satellites operated by Planet. It is predicting a peak of 14,000 a day in April and another conjunction squall six months later. Other companies, including SpaceX with its Starlink constellation, are likely to see a large increase in conjunction alerts as well. Oltrogge warned that the conjunction squalls may overwhelm space situational awareness (SSA) systems and make it difficult for operators to identify other potential collisions. “The SSA systems, legacy, and commercial, are all going to get hammered by this,” he said. “If you want to find a needle in a haystack, get rid of the hay. This is adding a lot of hay.” Cosmos 1408 was a Soviet ELINT (Electronic and Signals Intelligence) satellite launched in 1982 with a mass of 2,200kg. The Russian military acknowledged the ASAT test: “On November 15, the Defense Ministry of Russia successfully conducted a test, in which the Russian defunct Tselina-D satellite in orbit since 1982 was struck,” it said in a statement. Despite rebuke from the US government and others, Russia said the test and subsequent debris “did not and will not pose any threat to orbital stations, satellites and space activity.” COMSPOC said the US Defense Meteorological Satellite Program weather satellite DMSP 5D-3 F18 (USA 210) was the satellite most at risk following the original test – along with several other DMSP machines – but that risk will reduce as debris de-orbits. The company said the bulk of the debris is expected to de-orbit within three years. Last month the Space Debris Monitoring and Application Center of the China National Space Administration (CNSA) issued a warning of an “extremely dangerous” close encounter between the Tsinghua Science satellite and one of the pieces of debris created by the ASAT test.
Space traffic causing road rage
Even beyond weapons use, the increased number of satellites is posing issues for space agencies and operators.
A number of companies and agencies responded to the Federal Communications Commission (FCC) about SpaceX’s proposal for a second-generation Starlink constellation with 30,000 satellites, saying the plans could increase collision risk. “With the increase in large constellation proposals to the FCC, NASA has concerns with the potential for a significant increase in the frequency of conjunction events and possible impacts to NASA’s science and human spaceflight missions,” read NASA’s response. “NASA wants to ensure that the deployment of the Starlink Gen 2 system is conducted prudently, in a manner that supports spaceflight safety and the long-term sustainability of the space environment.” NASA also warned such a large fleet could impact launch windows for space missions as well as ground-based astronomy. Companies including Amazon (via its Kuiper subsidiary), ViaSat, Kepler Communications, OneWeb, and its investor EchoStar wrote letters against the proposals. Starlink’s constellation already outnumbers those of all its rivals, and the proposal would grow its fleet by a factor of ten. “An NGSO (non-geostationary orbital) constellation of this size and complexity presents a unique risk to other systems through its potential to (i) undermine spectrum access by other NGSO FSS (fixed-satellite service) systems in the Ku- and Ka-bands, and (ii) negatively impact the orbital environment by increasing the collision risk for all operators – including SpaceX – operating in and traversing through low Earth orbit,” OneWeb said. SES accused SpaceX of taking a “ready, fire, aim” approach to its second-generation system.
<urn:uuid:6eeac842-b336-41f0-a053-59d77c5c78a0>
CC-MAIN-2022-40
https://www.datacenterdynamics.com/en/news/russian-asat-test-creating-thousands-of-conjunction-alerts-for-satellite-operators/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00454.warc.gz
en
0.951934
1,103
2.53125
3
All over the world, societies and industries rely on the supply of energy and water. For energy, and particularly electricity, in the past two centuries this has come from industrial-scale generation based on fossil fuels. In the second decade of the new century, it has become a matter of urgency to address challenges never seen before, writes Therese Cory, a senior analyst at Beecham Research. These challenges are as significant as they are numerous, and they must be addressed if smart utilities are to become a reality and play their part in the realization of the dream of smart cities and smart living. Key challenges today include:
- National and international regulations driving the replacement of fossil fuel generation with cleaner, renewable methods of generating electricity to reduce CO2 emissions.
- The distributed nature of renewable energy, leading to a vast increase in the number of assets needing to be managed in real time.
- Secure monitoring of energy flows for management and financial settlement purposes as households and businesses become prosumers.
- Flat customer demand in developed economies and rising demand in developing economies.
- The balancing of supply and demand to make electricity grids more efficient.
- The rapid growth in grid-scale storage for demand and frequency response applications.
- The electrification of transport, resulting in millions of batteries being connected to the grid and time-shifting of peak demand.
None of these challenges can be met without wholesale application of IoT technology. Energy utilities cover a wide range of services – in Figure 1 we identify the broad range of sectors and activities, from energy generation through to distribution and consumer use. The outer layers of the chart offer examples of applications of relevance to these sectors. Many will be common to all the sectors, such as asset management, troubleshooting and predictive maintenance. Whilst the way in which analytics is performed depends on the context of the application, the same data collected at source should, where possible, be utilised for all these applications, rather than the same data being collected many times over – provided that this data is accurate and has not been corrupted. Figure 2 illustrates the components of the energy supply chain, from generation through transmission, distribution and finally consumption and billing. Parallel to these activities are the data analytics necessary for billing and performance measurement. More recently, consumer-based generation, such as community microgrids, solar panels and ‘behind the meter’ storage, has begun to be added to the demand/supply equation. The figure shows that connectivity is key for all parts of the supply chain, from transmission and distribution through to final analytics. Connectivity goes beyond the connection itself.
It includes:
- Hardware and components – silicon components, modules, sensors, devices and others for the gathering of data
- Connectivity services enabling the exchange of data between and across IoT platforms for managing the devices, analysing the data and enabling application development
- Data storage, management and analytics providers, and solution integrators
Connectivity in the electric power industry poses several challenges, including:
- Large volumes of data
- Proprietary legacy IT systems
- The need for enhanced security because of connectivity with external systems
- The differing lifespans of utility assets and connectivity technologies
Different types of connectivity technologies are proving useful for different purposes. They include power line communications and radio-frequency (RF) mesh technologies such as Wi-Fi and Zigbee, in combination with wireless wide area network (WAN) technologies for backhaul. Various types of cellular technologies are also used, including 3G/4G, with low-power cellular such as narrowband IoT (NB-IoT) to come, as well as unlicensed wide-area wireless technologies such as LoRa and Sigfox. LoRa is proving a favourable choice for many applications to do with energy monitoring and metering. Proponents cite its long battery life and its in-building and in-ground capabilities, along with its low costs. The technology also supports geolocation for scenarios in which it is important to know where the assets are and their status. One example of this is the Belgian operator Proximus, which has added NB-IoT connectivity to its network and intends to greatly increase its LoRa footprint. More details can be read here: https://www.proximus.com/en/news/proximus-launches-nb-iot-network-support-digitalmeters
How IoT applications contribute to grid reliability and resiliency
The applications depicted in Figure 1 all start with the collection of data from connected elements of the grid and generation assets connected to it. Different devices collect different types of data, including voltage, frequency, power loading, switch status, temperature, vibration and other parameters of interest to engineers that best describe the condition and workings of the network.
Why asset tracking is instrumental
Asset tracking is part of the Industrial IoT (IIoT). Instrumenting assets is key to tracking them and avoiding the occurrence of stranded assets, an issue particularly associated with telecoms networks. Electrical equipment is typically inspected on site by powerline technicians, sometimes aided by helicopters fitted with cameras. These inspections identify the problems that could result in power failures, fires or explosions. In addition, with the increasing shift to renewable energy, more advanced drone technologies are needed to monitor the new connections that link renewable sources to the standard electric grid. Drones are also being used to examine the grid in remote parts of Europe – the drones send back data that allows technicians to create virtual models of sections of the grid. Energy-related IoT smart devices, including meters, inverters, appliances and thermostats, provide utilities with measurement data that can be used for asset performance, usage, deployment and optimisation. However, this data can only be made actionable and intelligent for utility operations if it can be processed and presented in near real time. Predictive maintenance is common to many industries. It entails comparing a given part of the network’s real-time status with a history of faults.
Analysis of the data in real time can show whether failure is likely to occur soon, so that the part can be identified and replaced before it fails. This improves grid reliability by reducing downtime. New ways of handling data are continually being found to bring new perspectives and understanding to data collected from utility networks. As an example of this, the Weather Company, owned by IBM, has created a predictive model which combines big data and machine learning to provide utilities with safe and efficient outage management processes. Predictive maintenance and other IoT applications also form part of an iterative cycle of continuous improvement – a concept well understood in the manufacturing industry. As new data is collected and added to the expert database, more is learned about the workings of various parts of the network and grid – intelligence that it was not possible to garner previously. In this way, predictive maintenance becomes precision maintenance, enabling an ever-growing understanding of detailed grid workings. At the consumer end of the supply chain, smart meters are being installed at premises all over Europe, as per an EU mandate. Smart metering is a subset of the smart grid, which itself evolved from automated meter reading (AMR). In addition to getting timely and accurate readings of customer use, smart metering also offers opportunities for supplying data to a range of useful applications upstream and downstream – demand response, home energy management, prosumer integration and, in the future, more and more electric vehicle charging and storage.
Customer relationship management
Customer relationship management (CRM) is becoming more of an imperative thanks to demands from regulators and customers unwilling to put up with poor service. With the expected rise in customer requirements, utilities must become more efficient and do more with less, yet keep customers supplied and happy with good service and reliable billing information.
Major changes are coming to the traditional grid
Electric vehicle (EV) charging is set to become an issue as more people buy and charge their vehicles at their businesses and residences. The electric grid could be at risk of overloading from domestic charging, which will require additional monitoring of the pressure on the grid. The growing number of electric vehicles could constitute a drain on the grid, but EVs could also serve as reservoirs of power for homes and businesses. According to Bloomberg, cumulative passenger EV sales worldwide were set to hit four million in August 2018. In response to this, the French car maker Renault, for example, has signed agreements with key players in the European energy markets. It has formed ventures in partnership with EDF, Total and Enel with the aim of developing a smart electric ecosystem to promote the large-scale take-up of electric cars.
Renewables added to the mix
A renewed focus has been placed on renewable energy since the United Nations announced that major changes would be needed across the world to limit the effects of climate change. Consistent with this, it has recommended a 45% cut in carbon emissions by 2030. For its part, the UK government has announced its intention to phase out coal power by 2025. In a further sign of change, in 2018 Scottish Power became the first major UK energy firm to cease fossil fuel generation in favour of wind power, having sold off its remaining gas and hydro stations. The company plans to invest more in other renewable energy sources, including sunlight, tidal and wave energy, increasing its total renewable capacity.
The company plans to invest more in other renewable energy sources, including sunlight, tidal and wave energy, increasing its total renewable capacity IoT and Water resiliency There is also an urgent requirement to achieve water resiliency worldwide considering the increased frequency and severity of droughts, as well as population growth in urban areas and corresponding demand for more water supplies. The International Energy Agency claims that more than 34% of pumped water is lost as non-revenue water because of tampering, theft, meter errors and faulty distribution networks. Other factors driving greater investment in water and wastewater infrastructures include ageing infrastructures, and the increasing complexities in maintaining and managing water and wastewater facilities. Supervisory control and data acquisition (SCADA) systems are proving useful in obtaining data from remote devices such as valves and pumps and offer remote control from a host software platform. There have been calls for the UK government to compel water companies to introduce compulsory metering, using smart meters. It is considered that the water industry should collectively be aiming to reduce leakage by 50% by 2040, rather than 2050. Thames Water, the country’s largest water company, has launched several projects in 2018 to examine its pipelines and detect leaks across its 20,000-mile underground network. These include a fleet of drones, aeroplane and satellite monitoring to send back state of the art images to reveal leaks from above ground. The company aims to achieve a leakage reduction target of 15% between 2020 and 2025; previous targets have not been met, resulting in penalties from the regulator. The use of low power connectivity is also finding applications in water metering and management as well as power. In Spain, FACSA is to deploy LoRa-enabled smart water meters in its smart city project, following a successful pilot. Telstra is also reported to be introducing NB-IoT for water monitoring in Australia. A joined-up future with smart cities? Smart energy and smart grid solutions are being explored and designed today to support and enable the objectives outlined in this report. In addition to addressing current challenges, the long term IoT vision is to tie in power supply with smart city projects, joining up all services such as public lighting with power generation. This might employ a generic connectivity technology supporting the entire interlinked system of systems. However, all of this is contingent on a highly secure network resistant to cyberattack – another key element requiring continuous improvement. For more expert articles and blogs, go to https://www.iot-now.com/blogs/
<urn:uuid:e9d644ec-db0b-4474-b16b-9a1d94ab28f0>
CC-MAIN-2022-40
https://www.iot-now.com/2022/05/06/121111-7-iot-utility-applications-that-will-unlock-the-smart-city-dream/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00454.warc.gz
en
0.948016
2,382
2.828125
3
You can use curl to see the HTTP headers behind an email link or webpage and check whether it is going to send you to a known bad page. If you want to see the contents of the page without risk of infection, just curl the http:// address without parameters (base64-encoded content is a good sign of an infected page, but common sense will prevail). And if you want to get the headers from an HTTP request using curl instead of your browser – perhaps to troubleshoot HTTP response codes, SSL certificates or other website problems – then this is a technique that will help you:
1. curl is standard on Linux. On Windows, install Cygwin (http://cygwin.com) and curl; Cygwin has most of the Unix binaries available to run on Windows, and is free.
2. Use curl as explained below.
Examples of Using curl to See HTTP Headers
Example of looking at “proxy”, but it could be any URL you are unsure about:
$ curl -sI http://proxy/
HTTP/1.1 302 Found
Location: http://proxy/?cfru=aHR0cDovL3Byb3h5Lw
Cache-Control: no-cache
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Proxy-Connection: close
Connection: close
Content-Length: 1097
http://proxy does a temporary redirect (302) in this example to http://proxy/?cfru=aHR0cDovL3Byb3h5Lw. 1,097 bytes are sent and no cookies are dropped. The following example is a program I wrote to manage shortcuts for myself; it uses easy mnemonics on clickable.biz to help me remember complicated or frequently used URLs. Bit.ly is a good example of a service that does this too. This curl HTTP header example shows how to go through a local proxy to see what the redirector service is doing:
$ curl -x proxy:80 -U james http://clickable.biz/keywords -Is
Enter proxy password for user 'james':
HTTP/1.1 302 Found
Date: Thu, 30 Sep 2010 23:14:45 GMT
Server: Apache
Location: https://adwords.google.com/select/KeywordToolExternal?forceLegacy=true
Content-Type: application/octet-stream
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
The curl HTTP headers show that clickable.biz/keywords goes directly to the Google keyword tool link as a 302. A standard 301 redirect looks like this:
$ curl -x proxy:80 -U james http://google.com
Enter proxy password for user 'james':
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
The same 301 redirect URL, but headers instead of content:
$ curl -x proxy:80 -U james http://google.com -Is
Enter proxy password for user 'james':
HTTP/1.1 301 Moved Permanently
Location: http://www.google.com/
Content-Type: text/html; charset=UTF-8
Date: Thu, 30 Sep 2010 23:20:20 GMT
Expires: Sat, 30 Oct 2010 23:20:20 GMT
Cache-Control: public, max-age=2592000
Server: gws
X-XSS-Protection: 1; mode=block
Content-Length: 219
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
curl can be used for many reasons – and curl is much more than what I’m showing here. Showing HTTP headers with curl is simply one good use that helps you investigate questionable emails from Aunt Mildred about “click here for your virtual card shorturl.go/1771” type stuff, or “you HAVE to watch this movie: http://bit.ly/funny”. Normally you’d either delete the email, or take a chance and hope you don’t get infected… now you don’t have to guess. You can see what the actual content is going to be before you click it and load up the infections, because you can see the HTTP headers using curl. wget can be used in a similar fashion; however, this article is about using curl to get HTTP headers!
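If you would rather script the same check than run it by hand, the header fetch is easy to reproduce programmatically. Here is an illustrative Python sketch using only the standard library; the function name, hop limit and printout format are my own choices, not part of curl. It roughly mirrors what "curl -sIL" shows (follow redirects, print headers at each hop):

import http.client
from urllib.parse import urljoin, urlparse

def show_headers(url, max_hops=5):
    # Print the status line and headers for a URL, following up to
    # max_hops redirects, roughly what "curl -sIL" displays.
    for _ in range(max_hops):
        parts = urlparse(url)
        conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                    else http.client.HTTPConnection)
        conn = conn_cls(parts.netloc)
        path = (parts.path or "/") + ("?" + parts.query if parts.query else "")
        conn.request("HEAD", path)
        resp = conn.getresponse()
        print("HTTP status:", resp.status, resp.reason)
        for name, value in resp.getheaders():
            print(f"{name}: {value}")
        location = resp.getheader("Location")
        conn.close()
        if resp.status not in (301, 302, 303, 307, 308) or not location:
            return
        url = urljoin(url, location)    # handles relative Location values
        print("--> following redirect to", url)

show_headers("http://google.com")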
<urn:uuid:4b94dbc2-8f2e-45f6-926c-de5b4ca67564>
CC-MAIN-2022-40
https://digitalcrunch.com/debian/use-curl-or-wget-to-see-http-headers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00654.warc.gz
en
0.786948
988
2.703125
3
Our daily lives revolve around the internet more than ever, and with that come risks. Cyberattacks have become an increasing problem, and data breaches are the most common form of cyber crime. Experts estimate that there’s a new victim every two seconds. Despite this, data breaches aren’t hard to protect yourself from; it just takes some care and skepticism. Here’s everything you need to know about data breaches, including how they work, how to protect yourself, and what to do if you’re hacked.
What is a Data Breach?
A “data breach” is a general term for any time that someone accesses electronic data or information that they’re not supposed to. The simplest example of a data breach is a hacked email account. If someone gets your email password and logs into your account, they’ve breached your data. Hackers also target bank accounts by attempting to gain access to your credit card information, social security number, or even your online banking password, any of which can wreak havoc on your finances. Things get trickier if it’s the servers of a major company that get breached. The customers can’t do anything to protect themselves here; the fault lies with the company they trusted with their personal data. If you use a password manager, you might occasionally get a notification saying that a password of yours was included in a data breach. This doesn’t necessarily mean that your accounts have been hacked; more likely, your password was included in a massive company leak. A great many breaches stem from social engineering scams, such as phishing attacks, where a user gets tricked into giving up their passwords to a scammer they think they can trust. Some breaches even happen accidentally; maybe a company stores user passwords on a public website without realizing. No matter the cause, when it comes to your personal cybersecurity, there are a few best practices you should follow to protect yourself.
How to Protect Yourself from a Data Breach
It’s never been easier to access tools that make you a much more difficult target. You can protect yourself against data breaches and hacks in the same ways that you protect against most cyber crimes: be proactive, be unique, and be skeptical. The best time to worry about cybersecurity is before you’re ever in danger. This means making a security plan and sticking to it. If you have data stored online that you can’t risk losing, make backups of it. This might mean taking screenshots, downloading documents, using cloud storage, and moving data onto an external hard drive. There are many advantages to data backup and recovery; the more backups you have, the safer you are. Keep an eye on your finances. Cybersecurity experts recommend signing up for a credit monitoring service that keeps track of any suspicious activity in your credit report. Computer users should make sure they have a good antivirus program installed. Enable two-factor authentication on your devices, websites, apps, and all other accounts. It’s a simple but powerful way to lock strangers out of your data. Enterprise users and companies should invest in a good firewall, keep a dedicated cybersecurity team on retainer, and perform regular “vulnerability tests” to see how strong their defenses really are. Make sure you have a cyber insurance policy that can keep you safe in the event of a hack. Keep all your devices updated to ensure you have the latest security patches that will help keep you safe against new threats.
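Backups only help if the copies are intact, which is easy to verify with checksums. A small illustrative sketch follows; the directory names and function names are hypothetical:

import hashlib
from pathlib import Path

def checksum(path, algo="sha256", chunk=1 << 20):
    # Hash a file in chunks so large backups don't have to fit in memory.
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_backup(original_dir, backup_dir):
    # Yield files whose backup copy is missing or differs from the original.
    for src in Path(original_dir).rglob("*"):
        if src.is_file():
            dst = Path(backup_dir) / src.relative_to(original_dir)
            if not dst.is_file() or checksum(src) != checksum(dst):
                yield src

for damaged in verify_backup("Documents", "Backup/Documents"):
    print("re-copy:", damaged)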
Most websites only ask for a single username and password combo to log in. This means that if you have an easy-to-guess password, or use the same password on multiple websites, it’s incredibly easy to break into your account. You’ll want to use a different password for each of your separate accounts. To create a strong password, follow these rules:
- It should be long (at least 12 characters)
- Use upper and lowercase letters, numbers, and symbols
- Don’t use common words or phrases like “password,” or personal details like your birthday
Cybersecurity experts recommend using a password manager, which will create incredibly strong passwords for all your apps and then automatically enter them when you need them. This lets you keep your data secure without needing to remember dozens of different passwords. Backups are important, updates are important, and passwords are crucial. However, all the time you spend keeping yourself safe doesn’t mean anything if you don’t apply common-sense skepticism. If you receive an email from someone you don’t know asking you to download an attachment, you probably know not to do it. But what if you get a text, seemingly from your bank, warning about fraud on your account? Or a private message from a friend asking you to click a “hilarious” link? These types of scams are designed to prey on users who aren’t thinking about what they click, or who completely trust that they’re protected. Don’t click links if you don’t know exactly where they’re taking you. When you get a suspicious email or text, ask yourself: “Was I expecting to receive this? Do I know the sender? Is it even important?” If something seems too good to be true, it probably is. If you’re not sure, directly contact your bank, or your friend, or whoever is claiming they know you, and ask.
What to do if your Data is Breached
What if someone does manage to slip past your defenses and access your accounts? How do you recover and repair the damage? In the aftermath of a data breach, you have to keep calm and keep your common sense. Lots of scammers are trained to strike at people who have just been scammed by someone else, hoping to take advantage of their desperation. Keep your guard up and stay skeptical. Ideally, you’ll want to figure out how many accounts were hacked and change all their passwords. If you use a password manager, change your master password too. Triple-check your financial records, and if anything seems off, don’t hesitate to freeze your accounts and credit. If you’re a company that’s been breached, get in touch with your cyber insurance team to report the breach, along with your in-house legal and IT teams (which, for our clients, would be us). While you might be tempted to delete everything that the hacker saw, you shouldn’t do it. If you do decide to get law enforcement involved, deleting too much data can count as destroying evidence.
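The password rules above translate directly into code. Here is an illustrative checker; the rule set and the tiny COMMON list are only examples, since real checkers test candidates against large breach corpora:

import string

COMMON = {"password", "123456", "qwerty", "letmein"}  # tiny stand-in list

def password_problems(pw):
    # Return the rules a candidate password fails; an empty list passes.
    problems = []
    if len(pw) < 12:
        problems.append("shorter than 12 characters")
    if not any(c.islower() for c in pw) or not any(c.isupper() for c in pw):
        problems.append("needs both upper- and lowercase letters")
    if not any(c.isdigit() for c in pw):
        problems.append("needs a number")
    if not any(c in string.punctuation for c in pw):
        problems.append("needs a symbol")
    if pw.lower() in COMMON:
        problems.append("appears on a common-password list")
    return problems

print(password_problems("password"))       # fails several rules
print(password_problems("Tr0ub4dor&3x!"))  # [] -- passes every rule above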
<urn:uuid:aff83fdf-6074-4c17-a4e7-e2fbccfab55a>
CC-MAIN-2022-40
https://www.bvainc.com/2021/12/22/how-to-protect-yourself-from-data-breaches-and-what-to-do-if-your-data-is-compromised/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00654.warc.gz
en
0.944141
1,434
3.234375
3
RAID (a redundant array of independent disks) is a form of data management that spreads your data across multiple drives. With a RAID, you can use multiple drives, but your computer will recognize the RAID as one disk volume. There are various reasons why you may want to use RAID:
- Data Recovery: You can spread your data, along with recovery (parity) information, across multiple drives so that in the case of a drive failure you will be able to easily and quickly recover your data. To do this, you should use RAID 5.
- Backup: You can mirror your drives with RAID 1 so that you can remove one copy for safekeeping, preferably in a storage place off-site.
- Speed: For speed, you should choose RAID 0. The data added to a CRU enclosure will be split into parts and spread across all drives in the enclosure at the same time, which increases the speed of your data transfers. Your computer will see a single volume with roughly the capacity of all the drives in the enclosure.
Introduction to common RAID levels
| RAID level | Description | Pros | Cons |
|---|---|---|---|
| RAID 0 (striping) | Combines two or more hard drives together and treats them as one large volume. For example, two 250GB drives combined in a RAID 0 configuration create a single 500GB volume. RAID 0 is used by those wanting the most speed out of two or more drives. | Because the data is split across both drives, the speed of data reading and writing increases as more disks are added. | Every drive has a limited lifespan, and each disk adds another point of failure to the RAID. Every disk in a RAID 0 is critical – losing any of them means the entire RAID (and all of the data) is lost. |
| RAID 1 (mirroring) | Mirroring creates an exact duplicate of a disk. Every time you write information to one drive, the exact information is written to the other drive in your mirror. Important files (accounting, financial, personal records) are commonly backed up with a RAID 1. | This is the safest option for your data. If one drive is lost, your data still exists in its complete form, and takes no time to recover. | Your investment in data safety increases your drive costs, since each RAID 1 volume requires two drives. |
| RAID 5 (parity striping) | A common RAID setup for volumes that are larger, faster, and safer than any single drive. Your data is spread across all the drives in the RAID along with information that will allow your data to be recovered in case of a single drive failure. At least three drives are required for RAID 5. No matter how many drives are used, an amount equal to one of them will be used for the recovery data and cannot be used for user data. | You can lose any one disk and not lose your data. Just replace the disk with a new one. | Your investment in data safety increases your drive costs, since at least three drives are needed. |
Hardware vs Software RAID
RAID can be implemented in hardware, in the form of special disk controllers that are typically built into a multi-drive enclosure, or in software, with an operating system module that takes care of the housekeeping required for data to be written properly to the disks used in the RAID configuration. The Windows, Mac OS X, and Linux operating systems all offer the ability to create a RAID configuration without any additional software. The drawback to using your operating system or other software to create a RAID is that it adds to the computational load on your computer, which will likely slow your computer’s performance. Using a hardware RAID system, in an external drive enclosure or an expansion card installed in the computer, will not slow down your computer’s performance.
How do I RAID?
You need at least two hard drives. Some RAID levels require at least three disks, and some need four or five. You’ll want to buy matching drives for your RAID, so plan accordingly. If you attempt to RAID disks of different sizes together, most RAID methods treat each of the disks as if it were the same size as the smallest disk in the RAID. You will also need a way to RAID your drives together, whether via hardware or software. Many CRU drive enclosures include RAID capability that can be configured on the enclosure itself, so you don’t need additional software. However, CRU does provide the CRU Configurator, compatible with many of our RAID enclosures, which sends SMS and email notifications when a drive fails, allows you to view or update your device’s firmware, and lets you configure what kinds of events cause your enclosure’s audible alarm to sound. Want to learn more? Contact us with your questions; we’d be happy to speak with you.
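The capacity rules above come down to a few lines of arithmetic. This sketch applies the smallest-disk rule and covers only the three levels discussed here; the function name and units are my own:

def usable_capacity(level, disk_sizes_gb):
    # Usable volume size for RAID 0, 1, or 5, treating every disk as
    # if it were the size of the smallest (the common RAID rule).
    n, smallest = len(disk_sizes_gb), min(disk_sizes_gb)
    if level == 0:
        return n * smallest            # striping: all space, no redundancy
    if level == 1 and n == 2:
        return smallest                # mirroring: one disk's worth
    if level == 5 and n >= 3:
        return (n - 1) * smallest      # one disk's worth holds parity data
    raise ValueError("unsupported level/disk count for this sketch")

print(usable_capacity(0, [250, 250]))       # 500 GB, no fault tolerance
print(usable_capacity(1, [250, 250]))       # 250 GB, survives one failure
print(usable_capacity(5, [250, 250, 250]))  # 500 GB, survives one failure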
<urn:uuid:b024d95c-9b16-4171-a7aa-9eecbf9014c2>
CC-MAIN-2022-40
https://www.cru-inc.com/data-protection-topics/understanding-raid/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00654.warc.gz
en
0.921106
1,015
3.078125
3
Facts about Web Hacking
Verizon Business conducted a 2009 study of 90 Web data breaches. The results of this study were presented in the Data Breach Investigations Report (DBIR) and included the following facts and figures:
- 285 million data records were exposed in the 90 data breaches, the equivalent of 9 exposures each second. This significantly exceeds the combined 230 million records exposed in the previous 5 years of this study.
- Organized crime was responsible for 90% of all compromised data records that were used in a crime.
- 74% of data breaches were initiated by external attacks.
- 64% of data breaches are enabled by a combination of events. Hacking, malware, SQL injection and other forms of attack may all come into play in a single data breach.
Matthijs van der Wel, Verizon Business Security Solutions forensics manager, described a typical data breach scenario. “The end user makes a mistake. The attacker takes advantage of some mistake committed by the victim company, hacks into the network, perhaps using an SQL injection attack, and installs malware on a system to collect data.” Reported in PC World, April 18, 2009.
The 2009 Web Hacking Incident Database (WHID) Annual Report includes these facts:
- Web 2.0 sites are the primary target for hackers. 19% of all attacks target these sites.
- Website defacement takes place in 28% of web attacks.
- Loss of sensitive data takes place in 26% of web attacks.
- Changes to website content take place in 19% of web attacks.
The most common attack methods are:
- SQL injection. Query commands are typed into Web input fields or URLs in order to access internal data or plant malware that will infect site visitors.
- Cross-site scripting. Allows malicious code or data to be injected from another site and executed in visitors’ browsers, exposing the risk of data breach.
The top motivations for Web application hackers:
- Website defacement, which results in unauthorized changes to Web applications, is the top motivation for hackers. This includes changes to the appearance of a website as well as the planting of malware (malicious code). Website malware is replacing malicious e-mail as the distribution vehicle for virus infections and Trojans.
- Ideological defacement is the next top motivation for website attacks. Hackers change the appearance of websites to reflect their own beliefs, usually either political or religious in nature. This may or may not result in monetary loss, but is still dangerous because it reveals website vulnerabilities.
The most targeted categories of hacked Web applications:
- Social networking sites such as Twitter and Facebook were the most attacked category of websites in 2009. The motivation was malware injection and ideological defacement.
- Retail, media, technology and Internet-related organizations were the next most-attacked category of Web applications. This includes e-commerce websites, retail shops, ISPs (internet service providers) and search engines. The motivation of attacks in this category is often theft of secure data.
- Law enforcement, government, political and financial websites saw a drop in the incidence of attacks in 2009. This most likely reflects improved security measures being taken by these organizations.
The Data Breach Investigations Report summarizes its findings with this statement: “While researchers are exploring ever more advanced attacks such as CSRF, hackers are still successfully exploiting the most basic application layer vulnerabilities such as SQL injection or information left accidentally in the open.” The unfortunate reality is that some of the most frequently visited Web applications – those that perform retail and e-commerce functions – are still not protected against the most common and well-known attack methods.
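SQL injection, the attack method listed first above, comes down to mixing untrusted input into query text. A minimal, self-contained demonstration (SQLite and the toy schema are used purely for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "' OR '1'='1"

# Vulnerable: user input is concatenated straight into the query text,
# so the quote characters rewrite the query's logic.
unsafe = f"SELECT secret FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())    # leaks every row: [('hunter2',)]

# Safe: a parameterized query treats the input as data, never as SQL.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())   # returns []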
<urn:uuid:b6ca0b22-189e-4d9f-beeb-e8c66b6ae260>
CC-MAIN-2022-40
https://www.acunetix.com/blog/articles/web-hacking-facts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00654.warc.gz
en
0.933675
763
2.921875
3
The advent of cloud computing has brought many exciting changes to companies’ IT strategies. One aspect of the cloud that is frequently overlooked, however, is energy efficiency. On the face of it, one might expect cloud computing to be more energy efficient than the alternative. But is it really? Let’s take a quick look at the three drivers behind increased energy efficiency in cloud environments. First and most obvious is economies of scale. It’s not rocket science to understand that fixed costs are best spread across a greater quantity to bring down per-unit cost. Similarly, conducting a benchmarking exercise to measure Power Usage Effectiveness (PUE) entails significant fixed costs in devoting resources to counting equipment and measuring individual devices’ power consumption. There are certainly economies of scale to be gained in doing this for a larger datacenter than for a smaller one. The second driver of energy efficiency in cloud environments results from the abstraction of the physical and virtual layers in the cloud. A single physical server running multiple server images obviates the additional power load from purchasing additional physical servers. Also, if a virtualized environment incorporates redundant server images on different physical boxes, then individual boxes do not need multiple power supplies. The failure of one machine becomes a non-issue when redundancy is built in. Finally, a datacenter serving cloud clients will have more users from more disparate places, each with different needs. This means that system loads will be more evenly spread throughout each day (and night), which enables the datacenter to average higher system loads and thus more efficient utilization of equipment. Everest Group research shows that individual servers in a cloud datacenter experience three to four times the average load of those in an in-house datacenter. By now it should be clear that a large cloud datacenter has distinct energy efficiency advantages over a smaller, in-house datacenter. But there are corresponding energy drawbacks to cloud migration that may not be immediately apparent. First, as processing and storage shift to the cloud, energy usage for data transport increases. This is primarily from the routers transporting the data over the public Internet; their power use increases with the throughput and frequency of access to remotely stored data. Also, in a SaaS, PaaS, or simple cloud storage scenario, frequent data access can cause data transport alone to account for around 60 percent of the overall power used in storing, retrieving, processing, and displaying information. At this point, the efficiency advantages gained by the three drivers cited above may be lost to the extra power required to move the data between the user and the cloud datacenter in which it is stored or processed. It is true that migration to the cloud can yield significant gains in energy efficiency for certain applications. However, for applications involving high transaction volumes, an in-house datacenter can provide better energy efficiency. As power prices become increasingly important in determining datacenter operating costs, energy efficiency will play a greater role in companies’ cloud strategies.
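A quick back-of-the-envelope calculation shows why that 60 percent figure can erase the cloud’s efficiency gains. The percentages are the article’s; the model itself is only an illustration:

def total_power(compute_units, transport_fraction):
    # If transport accounts for `transport_fraction` of total power,
    # then total = compute / (1 - transport_fraction).
    return compute_units / (1.0 - transport_fraction)

compute = 1.0                          # power spent on the actual work
total = total_power(compute, 0.60)     # the frequent-access cloud scenario
print(total, total - compute)          # 2.5 total units, 1.5 on transport
# Under this fixed 60% assumption, the cloud datacenter would need to be
# 2.5x as efficient per unit of work before heavy remote access stops
# wiping out its advantage.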
<urn:uuid:6f6785e3-326f-4d12-a39d-21f893dc3bf5>
CC-MAIN-2022-40
https://www.everestgrp.com/tag/energy-efficiency/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00654.warc.gz
en
0.922718
604
3.046875
3
An established, clandestine network of compromised computers could become the launching pad for a superworm that would have a massive impact on the Internet. The malware network was created by an unpublicized Trojan – a malicious program that poses as a benign one – called Sinit, which has already infected hundreds of thousands of computers, according to a report released Monday by Clearswift, a UK-based maker of software for managing and securing communications. Sinit has created an underground peer-to-peer network that removes the single point of failure often targeted by law enforcers to shut down malware networks, the company explained in a statement. With Sinit, there is no central server that can be shut down. Each infected host becomes part of a peer-to-peer network through which additional Trojans can spread.
Great Deal of Malice
“It’s spooky in the sense that it seems to have the potential for a great deal of malice,” Greg Hampton, Clearswift vice president for U.S. marketing, told TechNewsWorld. “How it will be used is still unclear, so we don’t want to raise any false alarms.” “The reason why Sinit is quite concerning is that it opens up a port on a machine, much like opening a window in your house,” Sharon Ruckman, senior director for security response at Symantec, told TechNewsWorld. Through that open window, she explained, a hacker can filch a computer’s network information, perform remote tasks on the computer, capture keystrokes and download more malware onto the machine. “It opens up a machine to anyone to come in and do whatever they want,” she said. According to the Clearswift report, the network has been used to hijack modems and run up the phone bills of unwary victims. But Clearswift said that, curiously, “the potential for much broader abuse remains as yet untapped.”
Superworm in the Works
That broader abuse includes the spread of a superworm that could move rapidly and exponentially through the Internet, Hampton said. “It could start and stop before anyone had a chance of doing anything,” he noted. “Whatever damage it did would be done in a hurry.” The reason it could replicate so quickly is that it wouldn’t require human intervention, explained Steven Sundermeier, vice president for products and services at Central Command, an antivirus software maker in Medina, Ohio. The superworm – should one be released – would use a network of compromised machines to replicate itself from machine to machine, as we would see with a magnified version of the Slammer worm. “The danger of these fileless infectors is the fact that they can replicate so fast,” he told TechNewsWorld. Although superworms have the potential to carry out massive mischief, not everyone believes that potential will be exploited by virus writers. “It’s a buzzword that people like to throw out there,” Joe Stewart, a senior security researcher at LURHQ, a managed-security provider headquartered in Myrtle Beach, South Carolina, told TechNewsWorld. “Whether we’ll see one, I’m not sure. “What we’re seeing more now than people writing things just to be malicious or writing things to prove a concept is writing malware to make a profit,” Stewart continued.
“If there’s profit in writing a superworm, someone will do it pretty soon.” Stewart cited several money-grabbing schemes used by malware scribblers: spammers using infected machines to distribute their messages and avoid being shut down; spammers using infected machines to host their own Web sites; modem and browser hijacking; and denial-of-service attacks to impair the operations of competitors or extort money from individuals. Writing malware for financial gain will be a growth business in 2004, according to Central Command’s Sundermeier. “We’re anticipating an increase in the creation of Internet worms — maybe in collaboration with spammers or hackers — in order to have some sort of financial gain,” he said. “In the past, viruses were written for the virus writer’s own notoriety,” he continued. “Now we’re seeing kind of a scary trend toward writing virus code and replication in order to ruin the livelihood of Internet users.”
<urn:uuid:50195c48-fe04-4e74-9776-15f32fa53767>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/secret-trojan-network-could-produce-superworm-32602.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00654.warc.gz
en
0.946621
953
2.515625
3
Researchers have found what they believe is a previously undiscovered botnet that uses unusually advanced measures to covertly target millions of servers around the world. The botnet uses proprietary software, written from scratch, to infect servers and corral them into a peer-to-peer network, researchers from security firm Guardicore Labs reported on Wednesday. The botnet, which Guardicore Labs researchers have named FritzFrog, has a host of other advanced features, including:
- In-memory payloads that never touch the disks of infected servers
- At least 20 versions of the software binary since January
- A sole focus on infecting secure shell (SSH) servers that network administrators use to manage machines
- The ability to backdoor infected servers
- A list of login credential combinations, used to suss out weak passwords, that is more “extensive” than those seen in previous botnets
Administrators who don’t protect SSH servers with both a strong password and a cryptographic certificate may already be infected with malware that’s hard for the untrained eye to detect.
<urn:uuid:71ee457a-187a-4bbf-86d3-f20c8af0aa87>
CC-MAIN-2022-40
https://informationsecuritybuzz.com/expert-comments/experts-insight-on-fritzfrog-botnet-targeting-millions-of-servers-including-government-agencies-and-banks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00654.warc.gz
en
0.943273
220
2.6875
3
Latin may be the most famous “dead” language, at least in that it isn’t widely spoken beyond its use by the Catholic Church, but it is not lost to the ages. Linguists have been warning that nearly all human languages – save for the most commonly spoken tongues – could become extinct within the next 200 years. Of the 7,000 languages currently spoken around the globe, 50 percent will not survive the century, according to the Endangered Languages Project, which aims to preserve these languages by uploading video, audio and even text files of rare dialects to a central website. The project is a collaboration between the First Peoples’ Cultural Council, the Institute for Language Information and Technology at Eastern Michigan University, and Google’s philanthropic arm, Google.org. A Google spokesperson was not immediately available to comment for this story. While First Peoples’ and ILIT have been looking to preserve these dying languages, having Google on board could help bring additional attention, along with its technology. “Google has very carefully approached a number of leaders in the fields of endangered languages and technology, and they’ve been incredibly respectful of those native speakers and cognizant of the threat that these languages could be lost in our lifetime,” said Peter Brand, manager of FirstVoices for the First Peoples’ Cultural Council. “To have such an important corporation’s not-for-profit arm take on the responsibility to raise the awareness of this issue and offer their technology is a very important contribution to the preservation of these indigenous languages as well as their documentation and revitalization,” he told TechNewsWorld. It therefore can’t help but raise awareness globally when a company such as Google takes on this responsibility, he added. It also gets Google some good PR in the process. “Projects like these have two purposes,” said Rob Enderle, principal analyst of the Enderle Group. “One is to improve a company’s image – and Google, as the new ‘evil empire,’ definitely needs that at the moment. The other is so you can acquire and retain a key resource, and at the core of the man/machine interface is language.” To have a few language experts on staff could significantly improve Google’s speed in creating a man/machine interface breakthrough that could bypass Apple and Microsoft, Enderle told TechNewsWorld.
Google’s Return to Do-Gooding
While Google has famously been tied to the “don’t be evil” mantra for years, the company has lately been refocusing on its core business. Whether that is evil is left to debate, but philanthropic efforts like the Endangered Languages Project certainly don’t hurt its business as long as they don’t become a distraction. In fact, when they’re done right, the company can do good and gain business opportunities in the process. “The thing I’ve always liked and respected about Google is the company’s willingness to stretch the boundaries of conventional IT,” said Charles King, principal analyst for Pund-It.
“From a purely engineering perspective, computing is mostly about moving or storing digitally encoded information in efficient, robust and elegant ways.” Even in a longer or broader view, technology is still about facilitating the sharing of information, which makes virtually anything communications-related the purview of open-minded IT vendors, added King. “Will Google ever build a commercial business around their work saving endangered languages? That seems unlikely,” he said. “However, it’s easy to imagine how what they learn from those efforts might enlighten or enrich any number of other company projects and initiatives.” Old Languages and New Technology The other aspect of this is how technology can be used to save old languages that might not even have words for the very technology being used to save it. “That situation is becoming rarer than you think,” said Anthony Aristar, professor of linguistics and co-director of ILIT. In New Guinea, for example, in addition to the traditional grass huts that seem to be a throwback to another time, Aristar also saw a satellite dish. “You would think these speakers are very isolated — but even there, modern technology has intruded,” he told TechNewsWorld. The same technology that appears to apply pressure to those languages is now being used to save them. “If we can preserve this data digitally and make sure it isn’t lost, then this can be a very useful project to maintain diversity,” said Aristar. Already, technology has been created that could aid this process, including an app from FirstVoices that allows “chatting” in indigenous languages. But what really needs saving are those languages that are hardly being spoken. “While there are certainly indigenous people who are living close to their original lifestyle, there are many who are living in the same ways as non-indigenous people around them,” said Brand. “We’re looking to save those languages that aren’t spoken much at all. There are those who only use their indigenous language today maybe to speak to their pet.”
<urn:uuid:13caefaa-c40f-444a-a86f-0079501e8c6e>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/google-embarks-on-language-rescue-mission-75448.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00654.warc.gz
en
0.962601
1,186
3.046875
3
Geolocation’s Role in Politics In the 2012 U.S. elections, geolocation played a pivotal role in helping politicians reach groups of voters through geotargeted online ads in swing states. Fast forwarding to today and traveling “across the pond,” we’re seeing The Liberal Democrats, a British political party, forging its own political history through geolocation. The Liberal Democrats have deployed NetAcuity Edge™ hyperlocal IP geolocation technology to create locally relevant website content for voters. It will be the first political party in the world to harness the power of location, automatically determining visitors’ locations at a sub-regional level, to accurately serve website content relating to their local candidate. Additionally, voters are served details of local campaign messages and policies alongside the party’s national communications. This approach increases local engagement and raises awareness of key regional issues and policies, creating a relevant experience that instantly engages voters. “Viewing web content that is more personal and less generic can really impact the way people vote, and small margins can make a huge difference to the final result. In this election in particular, we’re going to see a lot more marginal seats, so those few extra votes can really make the difference,” explained Bess Mayhew, Head of Digital Communications for the Liberal Democrat party. The party expects between four and five million visits to its website in the run-up to the general election – and given the main purpose of the site is to influence voters – the relevance of homepage content will make a significant difference in gaining marginal seats. Candidates across the full spectrum of politics, from international to U.S. state and gubernatorial seats all the way to the Presidential race all recognize the importance of connecting with voters in ways that are meaningful. Read the full press release to find out more about the role geolocation can play in politics. We would love to learn more about your specific use case. Please contact one of our experts to discuss how we can best address your unique needs.
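To make the mechanism concrete, here is a minimal Python sketch of how a site might select homepage content by a visitor's resolved region. The region names, prefix table, and function names are illustrative assumptions for this example, not NetAcuity's actual API.

```python
# Hypothetical stand-in for a commercial IP-geolocation lookup such as
# NetAcuity Edge; a real deployment would call the vendor's resolver.
REGION_BY_PREFIX = {
    "81.2.69.": "Sheffield Hallam",   # illustrative mapping only
    "92.40.1.": "Cambridge",
}

CANDIDATE_CONTENT = {
    "Sheffield Hallam": "Meet your local candidate and key local policies.",
    "Cambridge": "Local campaign news for Cambridge voters.",
}

NATIONAL_CONTENT = "The party's national campaign message."

def resolve_region(ip):
    # Crude prefix match standing in for sub-regional geolocation.
    return next((r for p, r in REGION_BY_PREFIX.items() if ip.startswith(p)), None)

def homepage_content(ip):
    region = resolve_region(ip)
    # Fall back to the national message when the visitor can't be located.
    return CANDIDATE_CONTENT.get(region, NATIONAL_CONTENT)

print(homepage_content("81.2.69.142"))   # local Sheffield Hallam content
print(homepage_content("198.51.100.7"))  # national fallback
```

In practice the lookup would run server-side on each request, so the personalization happens before the page is rendered.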
<urn:uuid:dc7d9f8a-2ada-47f4-a7b8-599cd1f61079>
CC-MAIN-2022-40
https://www.digitalelement.com/geolocations-role-in-politics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00654.warc.gz
en
0.929678
441
2.890625
3
The Multi-Factor Factor (or How to Manage Authentication Risk) As we debate the necessity of various authentication factors, particularly for passwordless projects, it’s good to take a step back and remember how we got here. There are three key types of authentication: The 3 Key Types of Authentication 1. “Something you know,” otherwise known as a “shared secret.” This used to be something you memorized, but it turns out that fallible organic storage is not that great for storing complex character strings that now number in the hundreds (you ARE using unique passwords for every account, right? … Right?). 2. “Something you have,” meaning something that can’t be possessed by more than one entity at a time. This could be something that is too difficult to copy or generate independently, that is tied to storage and can’t be removed, or that exists as a unique physical item (such as a hard token or a key). 3. “Something you are,” referring to an attribute that is physically unique to an individual, such as a fingerprint, a palmprint, a retinal pattern, a gait, a typing pattern, or even a heartbeat. Each of these factors comes with a downside: “Something you know” = “Something you forgot,” or “Something that someone beat out of you.” A shared secret that is guessed or derived … is not a secret any more. Worse yet, it can be silently stolen without anyone noticing. But it’s also the cheapest factor, in the sense that it can be created, changed, expanded, distributed and used without having to buy any extra technology. If you need to identify someone more definitively, you ask them for information they’re not likely to forget, such as the name of the street they grew up on. But any of that historical information is increasingly available on the Internet, or can be tricked out of the user through phishing or social media “quizzes.” Another downside to “something you know” is that it may appear to be cheap in terms of technology, but in terms of support cost — help desk time when someone forgets a username or password, or can’t log in for another reason — it can be more expensive than a better-designed factor that is harder to get wrong. This is why we’re working on the journey to a passwordless future. “Something you have” = “Something you lost,” or “Something you broke.” One of the biggest threats today is SIM theft, in which an attacker manages to steal an assigned mobile phone number so that they can receive SMS authentication codes. This is nefarious because once again, it can be stolen silently; the victim still has the physical phone but may not realize that the number has been assigned to someone else until it’s too late. Hard tokens that generate codes can run out of battery in a few years; they’re also unwieldy to carry around if you have several of them for different accounts. Generally speaking, if a user loses the “something you have,” the fallback is “something you know,” which we’ve just discussed above. “Something you are” = “Something that aged” At least in my case; gait analysis for me would lose its baseline every time I had an arthritis flare-up. The other problem with biometrics is that you can’t change your retinal patterns or fingerprints if the records of them are stolen. Covid has revealed some problems with biometrics. For example: If you’re wearing a mask, FaceID doesn’t work; shared fingerprint readers aren’t sanitary these days. 
But biometrics are extremely convenient as a factor because you can’t forget them, you can’t leave them behind in the taxi, and chances are good that nobody can steal the originals without you noticing (water glasses in spy movies aside). But what risks are these authentication factors actually trying to address? Let’s list some out. Risks of Authentication 1) Someone is trying to log in at the user’s machine with the real user’s username and password. 2) The real user walked away from their unlocked machine and now an attacker is trying to use it. 3) Someone is remotely connected to the user’s machine and is trying to pretend to be the user sitting at that machine. 4) Someone is trying to log in with the real user’s username and password from a different system (such as a compromised machine in a botnet). 5) The real user is trying to log in, but the machine is compromised and could be used to steal the username and password, or plant malware. 6) The real user is trying to log in from one location, but someone else is also trying to log in as that user from a different location. 7) Someone has gained access to the real user’s username, password, and second factor (such as a hard token or phone number for receiving SMS texts), and is trying to log in from a different device. 8) Someone is listening in on the network stream and trying to hijack the user’s session in progress. When we do threat modeling, we come up with these sorts of attacks and more. CISOs often run through a whole laundry list of possible attacks in their head whenever they’re looking at a new proposal. Then they have to pick the controls that address as many of the risks as possible. For example: Controls for Authentication Risks A 2FA factor that is physically separate from a user’s laptop would protect against 1), 2), 3), 4), and 6) listed in the previous section — assuming that the user has that factor with them and doesn’t leave it near the laptop. A session timeout, requiring reauthentication, is often used to protect against 2), 3), and to some extent 8). Marking a laptop as trusted, bound specifically to the user, is used to prevent 4), 6), and 7). Ensuring that the network connection is encrypted all the way between the user and the application protects against 8). Using a biometric for authentication is intended to protect against 1), 2), 3), 4), 6), and 7), but that’s assuming that the user isn’t under duress (being forced by an attacker to supply it). Checking the user’s device for security state and any evidence of compromise is meant to protect against 2), 3), and 5). Using a second factor such as a U2F key, that requires a physical response from the user to activate, also protects against 3) and 5); it proves that the user is actually present and intends to authenticate. Set Policy Controls As Guardrails For added protection, set policy controls, using other factors, as guardrails. Factors such as location (either by GPS or IP address) can help to narrow down the vectors of attack if, for example, you never expect a user to try to authenticate from anyplace other than a certain network or geographic region. But we know that IP addresses aren’t foolproof — all you have to do is gain access to a system on the “right” network. So these can’t be the sole authentication factors to rely on. Think of these more as a narrowing function: you are blocking more attacks right from the outset, leaving fewer to sift through and validate. 
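As a rough illustration of such a guardrail, here is a minimal Python sketch that rejects authentication attempts from unexpected networks or countries before any stronger factor is evaluated. The trusted network, allowed countries, and function names are assumptions for the example, not Duo's implementation.

```python
import ipaddress

# Illustrative policy data; a real deployment would manage these centrally.
TRUSTED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]
ALLOWED_COUNTRIES = {"US", "CA"}

def passes_guardrails(src_ip, geoip_country):
    """Narrowing check run before any authentication factor is challenged."""
    ip = ipaddress.ip_address(src_ip)
    on_trusted_network = any(ip in net for net in TRUSTED_NETWORKS)
    # Location narrows the attack surface, but must never be the sole factor.
    return on_trusted_network or geoip_country in ALLOWED_COUNTRIES

# Attempts failing the guardrail are blocked outright, leaving fewer
# suspicious logins for the stronger factors (and analysts) to validate.
print(passes_guardrails("203.0.113.7", "DE"))   # True: trusted network
print(passes_guardrails("198.51.100.9", "RU"))  # False: blocked early
```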
As you can see, there are layers upon layers of defense that you can build to try to address the most common risk scenarios. But you also have to take into account the downsides of each factor when designing the solution. If you have an endlessly changing roster of 30 people using the same point of sale system, you can’t register a biometric or phone app for each of them, make each of them log in and out of accounts if they are rushing to serve a line of customers, or make them all share a hard token. The modern enterprise ends up with a portfolio of factors, deployed where they work the best and where they address the right risks. We’ve learned a lot this year about assumptions we made when choosing the original authentication factors for an organization — factors that stopped working so well when we became physically separated from other people. As we make plans for the future state of authentication, it helps to go back to first principles and update the above lists for a flexible outcome. Try Duo For Free Sign up for our free 30-day trial and see how easy it is to get started with Duo and secure your workforce, from anywhere and on any device.
<urn:uuid:99949c9e-a7e1-4705-9d1a-4d1c7a851f40>
CC-MAIN-2022-40
https://duo.com/blog/the-multi-factor-factor-or-how-to-manage-authentication-risk
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00054.warc.gz
en
0.948716
1,838
2.609375
3
Understanding the Security Triad When studying for just about any security-related exam, including Security+, CASP, SSCP, and CISSP, you’ll come across confidentiality, integrity, and availability as three core concepts that are commonly referred to as the security triad, and their importance cannot be overstated. You might also hear the trio referred to as the CIA security triad or the AIC security triad. Confidentiality ensures that data is only viewable by authorized users. In other words, the goal of confidentiality is to prevent the unauthorized disclosure of information. Loss of confidentiality indicates that unauthorized users have been able to access information. If there is a risk of sensitive data falling into the wrong hands, it should be encrypted to make it unreadable. This includes encrypting data at rest and data in motion. Data at rest is any data stored as a file on a hard drive, mobile device, or even a USB flash drive. Data in motion is any data traveling over a network. AES is the most common symmetric encryption protocol used to encrypt data at rest. SSH, IPsec, SSL, and TLS are some common encryption protocols used to encrypt data in motion. Additionally, data should be protected with access controls to enforce confidentiality. The goal of integrity is to verify that data has not been modified, and loss of integrity can occur through unauthorized or unintended changes. Integrity is commonly enforced by controlling data to prevent it from being modified, and by using hashes. Hashing algorithms such as MD5, HMAC, or SHA1 can calculate hashes to verify integrity. A hash is simply a number created by applying the algorithm to a file or message. No matter how many times you calculate a hash, it will always be the same when calculated on the same data. However, if the data changes and you recalculate the hash, the hash will be different. Hashes are calculated at different times and then compared to each other to verify that integrity has been maintained. For example, if you calculate a hash on a file on Monday and it is 123456, and then you recalculate the hash on Wednesday and it is still 123456, you know that the data is the same. However, if you calculate the hash on Friday and it is 459459, you know that the data is no longer the same because the two hashes (123456 from Monday and 459459 from Friday) are different. The goal of availability is to ensure that data and services are available when needed, and it often addresses single points of failure. You can increase availability by adding fault tolerance and redundancies such as RAID, clustering, backups, and generators. HVAC systems also increase availability.
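To make the integrity check described above concrete, here is a minimal Python sketch that computes and compares file hashes. It uses SHA-256 from the standard library rather than the older MD5/SHA1 algorithms named in the text, and the file name is a hypothetical example.

```python
import hashlib

def file_hash(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Monday: record the baseline hash.
baseline = file_hash("report.docx")

# Friday: recompute and compare; a mismatch means integrity was lost.
if file_hash("report.docx") == baseline:
    print("Integrity verified: the data has not changed.")
else:
    print("Integrity lost: the data was modified.")
```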
<urn:uuid:246f9878-811a-43ca-bcc8-f74dd1083ebe>
CC-MAIN-2022-40
https://blogs.getcertifiedgetahead.com/confidentiality-integrity-availability-security-triad/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00054.warc.gz
en
0.933593
693
3.34375
3
The COVID-19 pandemic has prompted the United States government to explore new voting technologies, since voting in person at the polls could prove to be a “super spreader” event that leads to thousands of new coronavirus infections and ultimately, deaths. There is a newer but highly secure technology that some have proposed as a solution that would allow Americans to vote remotely. That technology is blockchain. Using blockchain for voting may seem like a great idea in concept, but there are some obstacles that will need to be overcome. So let’s examine a blockchain solution for voting, how blockchain technology works and why the government may still pursue blockchain for voting in the future — once the technology has been refined and adapted for this unique process. The Problems of Voting by Mail in U.S. Elections Voting by mail has always been an option for those requesting absentee ballots, for example. But the November 3, 2020 Presidential election is expected to push the already-struggling USPS to the limit. In recent months, President Donald Trump controversially appointed a new Postmaster General, Louis DeJoy — a longtime Republican fundraiser and donor to the President’s campaigns. Postmaster General DeJoy promptly proceeded to make a number of controversial changes within the USPS, such as removing dozens of high-volume mail sorting machines and banning overtime, along with prohibiting the practice of making additional mail runs to accommodate surges in mail volume. These changes were quite concerning, as they left the public (and politicians) questioning whether the USPS could handle an onslaught of millions of ballots. Courts have already intervened to compel the USPS to restore the recently-removed high-volume mail sorting machines. The USPS was also ordered to pre-approve overtime for the two weeks surrounding election day in an effort to ensure that all ballots are delivered and counted in a timely manner. Courts have ordered that all ballots are prioritized in a manner equal to or better than USPS First Class Mail. If we put USPS-related issues aside, we still have the issues of alleged mail-in voter fraud. President Trump has alleged that mail-in voting is prone to fraud, despite an abundance of evidence to the contrary. In fact, there are many states with fairly large mail-in voting programs which have reported extraordinarily low rates of fraud. But even with evidence that the current mail-in programs work well, combined with the voter’s ability to track their ballot’s movement through the system, some fear that voter confidence may already be adversely impacted. This has prompted some to suggest the usage of a fairly new and very reputable technology: blockchain, which has long been used as the foundation of cryptocurrency transactions involving Bitcoin (amongst others). Using Blockchain for Voting: How Would it Work? Using blockchain for voting would work in a fairly straightforward manner. Using a decentralized and distributed network of computing and data storage resources, blockchain allows data to be transferred and recorded in a safe, reliable manner. The records, called “blocks,” are linked or “chained” together using cryptography technology. This interlinking mechanism is an essential element of blockchain, as you cannot edit a single record without impacting an interlinked record. Blockchain data is distributed across the entire system, so a piece of data doesn’t “live” in a single location. 
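As a rough illustration of that chaining, here is a minimal Python sketch in which each block carries the hash of its predecessor, so altering any record breaks verification of every later block. It is a teaching toy under simplified assumptions, not a production voting ledger.

```python
import hashlib, json, time

def block_hash(block):
    # Hash everything in the block except its own stored hash.
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(votes, prev_hash):
    block = {"timestamp": time.time(), "votes": votes, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                        # record was edited in place
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                        # link to predecessor broken
    return True

genesis = make_block(["ballot-001"], prev_hash="0" * 64)
chain = [genesis, make_block(["ballot-002"], genesis["hash"])]
print(verify_chain(chain))        # True
chain[0]["votes"] = ["tampered"]  # any edit invalidates the chain
print(verify_chain(chain))        # False
```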
This chaining and distribution make it essentially impossible to modify blockchain records without detection. The decentralized, distributed nature of blockchain also means it’s impervious to outages. The US could theoretically create a blockchain-based voting system, with an online portal that would allow for in-person voting at the polls or remote voting. Each voter would theoretically be provided with a login — perhaps their social security number — which would be used to create a blockchain record that would include the individual’s votes. This sort of system would be highly secure and efficient. What’s more, the costs of an election could be dramatically reduced since you would have the ability to centralize voting in a single platform that would not require any sort of manual ballot handling or counting. But as wonderful as this all sounds, we’re just not ready to use blockchain for voting on a national scale. There are some major obstacles that we’ve yet to overcome, at least in time for the 2020 election. Why Isn’t the US Using Blockchain for Voting in the 2020 Election? First and foremost, the onset of the COVID-19 pandemic was sudden and unpredictable. Unfortunately, even if we had identified the need for an alternative to in-person voting at the very start of the pandemic, we still wouldn’t have had sufficient time to build a one-of-a-kind online voting framework using blockchain technology. This sort of platform would require extensive engineering and testing to ensure its security, reliability and integrity. We just didn’t have sufficient time to build and refine a blockchain voting system in time for the November 3, 2020 election. Blockchain technology has already been tested in a number of different industries, such as the healthcare industry, where it was trialed for medical record storage. In fact, blockchain voting was trialed in 2017 and 2018. It was found that while the actual blockchain infrastructure was robust, the voting terminals — made by several different manufacturers — were vulnerable to attacks. If the data that’s inputted into the blockchain system is invalid, then you’ve successfully undermined the entire system. The Problem of Voter Authentication and Identity Verification It’s very easy to impersonate another individual when using an online interface. This poses a major obstacle for the implementation of a blockchain voting technology. Brazil is one nation that’s actively pursuing the use of an Ethereum blockchain for its 145 million registered voters. All was going well until they encountered a major problem that arose from using blockchain for voting: voter authentication / verification. The democratic process is easily undermined if you cannot verify a registered voter’s identity. So, Brazil is tasked with developing a tamper-proof system that allows users to authenticate their identity at the time when the vote is being cast. The Problem of Scalability and Speed Bitcoin has seen noticeable and fairly significant slowdowns in transactions as values have spiked. These times of high demand took a noticeable toll on the cryptocurrency’s blockchain infrastructure. This is a bit concerning because election day will bring millions of transactions over a fairly brief period of time. A delay in processing could be problematic since results would be expected at the end of the day. “It’d be incredibly difficult to scale. There’s a statistic on Bitcoin transactions that [the blockchain] can only handle seven transactions per second. 
If you [use this technology] in an election, it might be possible in a voting context if you’ve got thousands rather than millions [of voters].” – Catherine Hammon, digital revolution knowledge lawyer at Osborne Clarke, told ZDNet. To put that figure in perspective: at seven transactions per second, recording one transaction apiece for an electorate the size of Brazil’s 145 million registered voters would take roughly 240 days, a back-of-the-envelope figure that underscores the problem. Clearly, scalability is an issue, but it’s theoretically possible that blockchain voting could work at a smaller, more localized scale, where there are thousands of voters instead of many millions. Perhaps multiple stand-alone blockchain frameworks would be preferable to a single national voting blockchain. But there are still some challenges surrounding identity verification / authentication for voters. If we can resolve these issues, then using blockchain for voting could offer some major benefits such as the creation of tamper-proof voting data that could not be altered. Hammon explained to ZDNet, “Once the data about the voting is on the blockchain, it’s locked down, it can’t be changed, you can add up the count and see it’s correct. That’s really valuable…It isn’t a cure-all remedy for [electronic] voting, but there are many ways in which it does help with some of the problems.” Clearly, now isn’t the right time to introduce blockchain for voting, but you can still leverage online technology to register to vote and to request a ballot by mail. And blockchain is very well-suited to other applications and usages. Blockchain is just one of the many innovative technologies that we work with here at 7T. We specialize in digital transformation through emerging technologies, as we integrate cutting-edge solutions into virtually every development project. From mobile app development, to custom software projects such as CRM platforms or ERP development, we’re ready to deliver collaborative, multi-phased software development services. 7T has offices in Dallas, Houston, Chicago, and Austin, but our clientele spans far beyond Texas and the Midwest. If you’re ready to harness the power of a custom software platform and today’s most innovative technologies, contact 7T today.
<urn:uuid:73fd3950-da3d-4728-86b1-cb3c6289c93b>
CC-MAIN-2022-40
https://7t.co/blog/use-blockchain-for-voting-in-us-elections/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00054.warc.gz
en
0.948934
1,861
2.953125
3
Not all cybersecurity threats and attacks occur on hardware and software components. Instead, humans are also vulnerable to social engineering attacks, a kind of cyber-attack. Social engineering psychologically manipulates people to trick them into performing actions or revealing sensitive information. As said before, humans are more prone to making mistakes than hardware or software such as antivirus programs. Disgruntled or poorly trained employees often commit mistakes and deliberately or inadvertently allow threat actors to penetrate a corporate network. For example, if you are not aware of phishing emails, you might open one and install a piece of malware onto your computer. Many people open spam emails and download malicious attachments or open the infected links inside the emails. Since social engineering is a human-based attack and not all humans are equally trained against social engineering attacks, there is a strong chance that social engineering will prove to be a major threat for organizations in 2020. The year 2020 started in an unexpected way: with the WHO's announcement of the coronavirus pandemic, many citizens, once calm and reasonable, found themselves bombarded by false news and alarmism. This type of scenario is ideal for social engineering attacks, as they aim to emotionally destabilize the victim. While baiting attacks tend to decrease, since they rely on physical devices and people are leaving home less, others like phishing, pretexting and quid pro quo tend to surge. These three attack types can easily exploit government aid and benefit programs, as well as legitimate marketing campaigns (and are already doing so), to increase the number of victims this year: with advertisements considered trustworthy appearing in various media, the work of persuasion that the attacker needs to perform becomes easier. This, coupled with the fact that improvements in security algorithms have forced hackers to look for alternative avenues of attack, makes social engineering attacks the main threats for the current year. In addition to exploiting the emotional destabilization that has been affecting millions of people concerned about their own health, as well as that of their families, attackers benefit from the lack of preparation for home-office work that was practically imposed on many who never imagined operating this way. This kind of operational unpreparedness makes employees even more likely to access malicious links or provide information to malicious agents, since their computers were often previously configured to perform only specific activities. In this way, the psychological situation, coupled with the technical unpreparedness of a good part of the population, ends up helping social engineering attacks become the main threats of 2020. Despite being largely neglected, employee preparation for security in digital media is essential, since more than 40% of failures and attacks in this area are associated not with the technology itself, but with people and the way data, information and systems are used in organizations. In other words, these are techniques that involve little or no technology, belonging to one of humanity's oldest categories of scams, and they are responsible for losses that often run into the billions worldwide. Undoubtedly, social engineering attacks are very dangerous and can have serious consequences for organizations in 2020. 
However, if organizations take appropriate security measures, such as deploying a security suite like a SIEM and/or SOAR, their chances of withstanding such attacks improve considerably. Logsign SIEM is the next-generation security tool that can be your first line of defense against social engineering attacks. It raises alerts as soon as any malicious activity is detected. Logsign SIEM allows incident responders to respond quickly and remediate the incident.
<urn:uuid:d453d6f9-314f-4f79-b672-e77a44796b14>
CC-MAIN-2022-40
https://www.logsign.com/blog/why-social-engineering-are-major-threats-in-2020/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00054.warc.gz
en
0.967209
742
3.25
3
What You Need To Know About the HIPAA Security Rule In this day and age of rampant cybercrime, protecting a patient’s electronic health information is of the utmost importance. But, how do you know if the protections are adequate? Well, that’s where the HIPAA Security Rule comes in. What is the difference between the privacy and security of health information? With respect to health information, privacy is defined as the right of an individual to keep his/her individual health information from being disclosed. This is typically achieved through policy and procedure. Privacy encompasses controlling who is authorized to access patient information, and under what conditions patient information may be accessed, used and/or disclosed to a third party. The HIPAA Privacy Rule applies to all protected health information. Security is defined as the mechanism in place to protect the privacy of health information. This includes the ability to control access to patient information, as well as to safeguard patient information from unauthorized disclosure, alteration, loss or destruction. Security is typically accomplished through operational and technical controls within a covered entity. Since so much PHI is now stored and/or transmitted by computer systems, the HIPAA Security Rule was created to specifically address electronic protected health information (ePHI). Now, the HIPAA Security Rule isn’t extensive regarding the regulatory text. However, it is quite technical. It is the codification of specific information and technological best practices and standards. The HIPAA Security Rule mainly requires the implementation of three key safeguards, that is, technical, physical, and administrative. Other than that, it demands certain organizational requirements and the documentation of processes, as it is with the HIPAA Privacy Rule. Developing the necessary documentation for the HIPAA Security Rule can be complex, compared to the requirements of the HIPAA Privacy Rule. Healthcare providers, especially smaller ones, need to be given access to HIT (Health Information Technology) resources for this purpose. Having said that, the HIPAA Security Rule is designed to be flexible, which means covering all the required aspects of security shouldn’t be tough. There is no need for leveraging specific procedures or technologies. Organizations are allowed to determine the kind of resources necessary for ensuring compliance. The Security Rule applies to covered entities and their BAs All covered healthcare entities and their respective BAs (Business Associates) are subject to the HIPAA Security Rule. So, if you’re a covered healthcare provider who makes use of a vendor that has access to ePHI, you need to execute a BAA, or Business Associate Agreement. A BAA dictates how ePHI will be used, protected, and disclosed. In the case of a breach, both the BA and the covered healthcare provider will be liable to penalties. There are three key areas where measures need to be taken As established earlier, the HIPAA Security Rule requires providers to implement security measures that prevent the theft of ePHI. ePHI should only be accessible to authorized personnel, meaning improper access must be prevented. The HIPAA Security Rule categorizes the necessary safeguards into three levels. First, we have the ‘Physical Safeguards,’ where efforts must be made to protect the data on a physical level. 
This means providers need to implement robust security measures such as security systems, surveillance systems, window locks, door locks and so on. Physical access to computers and servers must be monitored, and unauthorized access must be prevented. Policies that regulate the use of mobile devices inside the premises and hardware/software removal should be put in place. Next, we have the Administrative Safeguards, which relate to the procedures and policies that a healthcare provider must put in place to prevent breaches. These safeguards must spell out rules for data maintenance, roles and responsibilities, documentation processes, and training requirements. In short, Administrative Safeguards ensure that the Physical Safeguards are implemented appropriately. Finally, we have the Technical Safeguards, which relate to the technological aspects of data protection and security. The goal here is to establish technical standards that are necessary to ensure the protection of ePHI, as well as its organization. According to the Department of Health and Human Services, healthcare providers need to strike a balance between vulnerabilities to ePHI and identifiable risks, their own capabilities, the cost of protective measures, and the scale. Always conduct a risk analysis To ensure proper compliance with the HIPAA Security Rule, healthcare providers must carry out risk analysis. This entails an assessment of possible threats, vulnerabilities, and risks to the ePHI stored in the providers’ servers. The chosen methodology or approach can vary. However, it must include scope analysis, methods used to collect data, identification of vulnerabilities or threats, the likelihood of a breach, the level of risk, and reviews/updates on a periodic basis. You are required by law to comply with the HIPAA Security Rule All healthcare providers must comply with the HIPAA Security Rule. This is required by Federal Law. Failure to comply can lead to severe penalties and fines. Civil penalties start at $25,000 and go up to $1.5 million per year. Criminal penalties also exist for unauthorized use or access of ePHI, as well as the sale of ePHI. The penalties range from exorbitant fines to imprisonment. Fines can go up to $250,000, while prison sentences can go up to ten years. Patients must be notified if a breach occurs Healthcare providers are required to alert patients if and when a breach occurs. If the breach affects over 500 patients, the provider must notify the Secretary of the HHS and the media. Protecting ePHI is a must and a never-ending process. However, it is the only way providers can ensure that their patients are protected, and they aren’t liable for damages.
<urn:uuid:ae17383e-a4b6-4209-853b-f74f0362f44c>
CC-MAIN-2022-40
https://luxsci.com/blog/5-things-you-need-to-know-about-the-hipaa-security-rule.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00254.warc.gz
en
0.928629
1,195
2.515625
3
The evolution of Android malware has made incontestable progress in the last few years and it often follows in the footsteps of PC-based malware, except that it happens at an accelerated pace. Often, malicious apps gain control of a system in several steps, using different modules. There is typically only one initial module which, once it gets executed, either drops some embedded modules or downloads other modules and installs them to achieve its full range of mischievous behavior. On the Android platform, users have limited visibility when installing packages, especially when side-loading (i.e. manually installing packages or installing from non-official app markets). This is why most Android malware that includes other malware does this embedding in the simplest way: they simply include the payload either as a raw resource or as an asset in their own package. This used to be the case for PC-based malware when simple Trojan horse programs (often called “Droppers”) were simply attaching other malicious executable files as resources or appending them to their main executable. However, this process soon changed and malware authors started to use encryption to obfuscate those embedded modules in an attempt to slow down discovery and detection by security products. It looks like the same thing is currently happening with Android malware. For instance, Android.Gamex is using a trivial encryption (byte XOR with 0x12) to hide a package in assets/logos.png. Here is an encrypted block from that file: And here is that block after decryption: The actual code to decrypt the data is quite small: The availability of strong cryptographic functions in the standard Android API makes them an equally easy-to-implement option for obfuscation. For instance, Android.Pris uses DES encryption to hide its payload in assets/config1. In the following image we can see what the decryption code looks like: Certainly, it is more complicated than the XOR decryption in the preceding example, but it is still very easy to code and provides a much stronger encryption. We can see in the above example that DES decryption is used with the secret key 19821208. Not a particularly strong key, but it does the job. This key actually looks like a date; it could be anything from the author’s birthday to the day that his favorite pet died. The threat is employing the same decryption scheme for downloaded packages and for command-and-control (C&C) server traffic. Strangely enough, we have seen the code from this threat reused in some droppers for Android.Jsmshider, but even though some embedded packages are encrypted with DES, the package for Android.Jsmshider is stored as a plain unencrypted raw resource. I guess malware writers have to cut corners sometimes to meet deadlines! Today’s mobile processors provide enough computational power for industry-standard encryption, and these strong encryption schemes have become a standard component of the operating environment. It seems only natural to expect this trend of encrypting components to consolidate for mobile malware, but Norton Mobile Security is not fooled by these techniques and is able to detect these threats.
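For illustration, the trivial XOR scheme described for Android.Gamex can be reproduced in a few lines of Python. The 0x12 key and the assets/logos.png path come from the article; the output file name is a hypothetical choice for this sketch.

```python
def xor_decrypt(data, key=0x12):
    # XOR is symmetric: the same function both encrypts and decrypts.
    return bytes(b ^ key for b in data)

with open("assets/logos.png", "rb") as f:      # obfuscated embedded module
    payload = xor_decrypt(f.read())

with open("decoded_payload.apk", "wb") as f:   # recovered package (hypothetical name)
    f.write(payload)
```

The DES case works the same way conceptually, just with a standard cipher and a hard-coded key in place of the single XOR byte.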
<urn:uuid:9c1528e9-afe7-48d1-b235-06b9a1c33a3c>
CC-MAIN-2022-40
https://dataprotectioncenter.com/general/obfuscating-embedded-malware-on-android/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00254.warc.gz
en
0.927239
648
2.609375
3
It was only a matter of time before machine learning transformed the world of chip design. Cadence Design Systems, which makes design tools that engineers use to create chips, is using it to make chip engineers far more productive with its Cerebrus Intelligent Chip Explorer machine learning tool. Automating chip design (electronic design automation, or EDA) has been evolving for decades, with a hierarchy of tools all operating at different levels of abstraction. Cadence got started in 1988 to use the benefits of computing to design the next generation of computing chips. But it has become increasingly hard for engineers to keep up with the intricate designs for chips that have billions of on-off switches dubbed transistors. The process of design has become like trying to keep track of all of the ants on the planet. And with machine learning, Cadence Design Systems has been able to add an extra layer of automation on top of the design automation tools that engineers have been using for many years, said Kam Kittrell, senior product management group director in the Digital & Signoff Group at Cadence, in an interview with VentureBeat. The results are pretty awesome. With machine learning, the company can get 10 times better productivity per engineer using the design tools. And they can get 20% better power, performance, and chip area improvements. That’s a huge gain that could ultimately make each chip more affordable, reliable, and faster than it otherwise would have been, Kittrell said. That could mean billions of dollars saved. This kind of productivity gain is necessary as Moore’s Law, the metronome of the chip industry, has begun to slow. The law predicts that chip performance doubles every couple of years, but lately the gains from moving to a new generation of manufacturing have been limited, as we’re entering miniaturization technology on the atomic level and that is running into barriers from the laws of physics. Meanwhile, with billions of transistors per chip, engineers who worked on chips a few generations ago, such as 28-nanometer chips, can barely function with the requirements for chip design of today’s modern 7-nanometer chips, where the width between circuits is seven billionths of a meter. “These are three-dimensional puzzles,” Kittrell said. Enter machine learning With compounding pressure to deliver new chips quicker than ever before, engineers have to become more efficient. The answer is through machine learning, Kittrell said. Just like today’s “intelligent” consumer devices provide users with information at their fingertips, machine learning automates chip design processes so that engineers can complete projects “intelligently”, faster and with fewer mistakes. Machine learning also creates a level engineering playing field, whether you’re an established semiconductor player, a company outside the industry who has brought chip design in-house or a small start-up. “There have been some refinements over time for chip design, but it’s been basically the same way. And so it’s been getting more and more complicated for an engineer to take a chip through to completion,” Kittrell said. “For example, someone who may be very good at building chips at 28 nanometers will have a huge learning curve to do a five-nanometer chip, today. The technology has changed so much.” Cerebrus doesn’t replace the flow of tools and how humans interact with the tools. But it works as a driver’s assistant, Kittrell said.
“Power, performance, and area are always the key objectives that anyone drives whenever they’re making a chip,” Kittrell said. “It has to be manufacturable. But after that, there’s a squeeze on power and performance and area. And so we use reinforcement learning in our Cerebrus tool. It controls the tool and does experimentation for the engineer in order to find the best solution.” Machine learning isn’t threatening the jobs of chip engineers, who are more sought after than ever, Kittrell said. Rather than replace them, machine learning has become an engineer’s “helper”, reducing the learning ramp-up time and doing many traditional engineering tasks automatically. “This is an example where it improves the productivity of the engineer, while also delivering a better power-performance scenario,” Kittrell said. Cerebrus uses unique machine learning technology to drive the Cadence RTL-to-signoff implementation flow. This is where the engineer designs at a level of abstraction where he or she can understand the logical flow of electrons through a chip. Cadence’s existing, earlier tools would take the logical flow and convert it to the physical layout of the chip. The logical level is the register-transfer level (RTL), and it is converted, via the final sign-off tools, into the actual placement and routing of wiring throughout the chip. There are often multiple ways to implement a logical design in a physical layout, and optimizing that can save a lot of material, energy, and costs. An engineer can handle this part of the design in one pass. But Cerebrus can take another run through it and improve the results. The engineer delivers the final design in a database format dubbed GDSII, and then it’s off to manufacturing. “There’s always a push to find a way to optimize for power, performance, and area. This can take a lot of time in the design process. And this is where Cerebrus can help. It can take anything within the RTL-to-GDSII flow and do experiments. “You don’t have to spend a lot of time training a model upfront in order to get started. Right from the beginning, Cerebrus can start doing searches based on your vector and your design, and within a few runs can find a better solution,” Kittrell said.
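The Python sketch below shows the general shape of such automated design-space exploration: try candidate tool settings, score the resulting power, performance, and area, and keep the best. It is a generic random search with synthetic stand-in metrics, illustrating the idea only; Cadence's actual reinforcement-learning approach and tool interfaces are proprietary, and every name here is a hypothetical placeholder.

```python
import random
from collections import namedtuple

Result = namedtuple("Result", "power delay area")

def run_flow(params):
    # Placeholder for a real RTL-to-GDSII run; returns synthetic metrics
    # deterministically derived from the chosen parameters.
    rng = random.Random(str(sorted(params.items())))
    return Result(rng.random(), rng.random(), rng.random())

def cost(result):
    # Simple weighted PPA objective: lower is better.
    return result.power + result.delay + result.area

def explore(param_space, trials=20):
    best_params, best_cost = None, float("inf")
    for _ in range(trials):
        params = {k: random.choice(v) for k, v in param_space.items()}
        c = cost(run_flow(params))
        if c < best_cost:
            best_params, best_cost = params, c
    return best_params, best_cost

space = {"effort": ["low", "medium", "high"], "target_util": [0.6, 0.7, 0.8]}
print(explore(space))
```

A learning-based explorer would replace the random sampling with a policy that steers later trials toward promising regions, which is what lets it converge in a few runs rather than an exhaustive sweep.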
It enables distributed computing, with better on-premises or cloud-based designs. Satoshi Shibatani, a customer at Renesas, said in a statement that automated design flow optimization is critical for making products quickly, and he said Cerebrus has improved design performance by more than 10%. So his company is adopting it for its latest projects. Samsung vice president of design technology Sangyun Kim also said that Samsung Foundry used the Cerebrus tool and saw an 8% power reduction in its chip, and it had 50% better timing, which improved overall performance. It’s taken a while for machine learning to impact chip design, but it’s hard to find an industry that it won’t impact.
<urn:uuid:fd4e2dec-409c-4411-a930-09da5a316cc5>
CC-MAIN-2022-40
https://www.businessmayor.com/cadence-design-systems-launches-cerebrus-machine-learning-for-chip-design/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00254.warc.gz
en
0.942043
1,784
2.78125
3
Disaster Recovery is an essential component of every business plan, providing a framework that prepares for the worst in the event of a natural or man-made disaster. Within Information Technology: Disaster recovery (DR) is an area of infrastructure and security planning that aims to minimize the impact of significant negative events within your organization. A disaster recovery plan is a structured document that instructs your staff on what to do in the event of significant, unplanned incidents. Disaster recovery plans enable businesses to maintain, replace, or resume mission-critical functions following a disaster. Is Your Data Center Disaster-Proof? With hundreds of Delta Air Lines flights recently grounded in the wake of a power outage, companies of all sizes started taking a hard look at their own IT operations. At the top of the list is an assessment of disaster recovery plans. More than likely, the overall damage to Delta has yet to be assessed. According to the Wall Street Journal, the carrier not only is facing millions of dollars in lost revenue; it is suffering a major blow to its brand reputation as one of the nation’s leading international airlines. Companies the size of Delta will survive. Organizations any smaller are not as likely to quickly get back on course after a significant setback. For anyone operating a business in the Midwest, the threat of snowstorms, tornadoes, hail, and floods is constantly at the forefront of any emergency management plan. Indiana, for example, has had more than 18,200 weather extremes during a 60-year period leading up to 2010, with a good portion of them involving those weather events. Earthquakes? Unlike California, which had 1,987 earthquakes with magnitudes of 3.5 or above during that period, Indiana only experienced one significant earthquake since 1950. That earthquake, which measured at a magnitude of 4, struck in 1984. However, as you reassess your company’s emergency management plan, you’d be wise to consider the worst possible scenario involving an earthquake. According to the Federal Emergency Management Agency, the Midwest could face the “highest economic losses due to a natural disaster in the United States” because of its location on the New Madrid Seismic Zone, one of the most active faults in the nation. More than seven years ago, FEMA had issued the warning for the states of Alabama, Arkansas, Illinois, Indiana, Kentucky, Missouri, Mississippi, and Tennessee, pointing to a study that showed an earthquake with a 7.7 magnitude is possible. Some scientists had reported the event could happen within 50 years. Hurricanes, floods, hackers or any other natural or human-made disruptions can pose a threat to your data centers with little or no warning at all. Therefore, having a solid Disaster Recovery plan is no longer a choice – unless you want to start over from scratch. Most organizations today have a secondary data center or external wholesale colocation backup facilities, but having a planned Disaster Recovery colocation blueprint is critical for assessing the various risks affecting data center reliability and functionality. The DR blueprint should include careful risk assessments that illuminate the impact of a disaster on direct users, stakeholders, and others, facilitating efficient prioritization and scoring of risks. Business Continuity Plans vs. Disaster Recovery Plans Neither big nor small organizations can afford downtime in their data center operations. 
The losses due to downtime have jumped to almost $138,000 on an hourly basis in 2012, which is three times more than the loss calculated in the initial days of data center usage way back in 2004, when the figures were just around $42,000 an hour. Taking the downtime factor out of your operational formula is impossible because, even if you have the necessary resources, you still must deal with Mother Nature. The recent examples of Hurricane Sandy and the devastating Japanese earthquake are testament to this fact. Therefore, ensuring business continuity is vital, and this calls for radical steps. But you should not confuse a business continuity plan with a disaster recovery plan, since these are two different concepts that ensure data protection. Disaster recovery is just a piece of the bigger business continuity plan. The aim of disaster recovery is to restore your data after an unexpected disaster in your data center facility. On the other hand, business continuity combines all efforts and managerial principles that are involved in making sure that key business functions like IT, resource management, and finance are not impacted in the event of a disaster. Without a business continuity plan, it would be very difficult for organizations to get things back to normal. As IBM pointed out in 2011, nearly 43 percent of companies experiencing a big data loss never reopened their business when disaster recovery alone was not enough to undo the damage. So, foresight and planning for inevitable threats and determining the impact of disasters on your facilities are quite important to ensure continuity. In other words, your data center operations need to have a business continuity plan if they are to survive significant downtime threats. How Effective is Your Data Center’s Disaster Recovery Plan? More than likely you’ve invested the resources, time, and cash to insure your business properly in the event of a fire, PR disaster, or other emergency/catastrophe. When it comes to data centers, it’s important to implement a sound disaster recovery plan as part of that equation. According to recent statistics, data center outages cost companies in the United States a total of $700 billion every year. And the typical data center can expect to lose about $9,000 a minute because of an unplanned data center outage. Fire, storm, or flood can immobilize a business, especially when servers and other IT equipment are vulnerable to damage. The best way to protect your data and essential equipment is to perform a disaster recovery risk assessment, which can guide you in determining the types of disasters that could hit your business, the solutions that may be required, and the overall expenses you’ll need to recover. As part of your disaster recovery (DR) plan, it’s also important to consider backup contingencies, commonly called redundancies, for your essential IT infrastructure. Consider the following points sound reminders for a secure, quick, and efficient DR plan: - Assess the impact on business – Risks can vary with industry, geography, and various other factors, but there are four general categories: Financial loss, operational commotion, reputation damage, or regulatory penalties. The assessment should answer two major questions in the event of a disaster: how much data would be affected and what would be the resultant financial loss, and how soon are you required to resume operations? - Prioritization plans. 
As with any company, some areas of your operations will require more immediate data recovery attention than others. Determine which areas are a priority for getting back online based on your company’s objectives in ensuring that clients receive services. - Take inventory. Review your equipment, both hardware and applications, taking note of what will need to be replaced in the event of damage during an event. This ensures that components can be quickly addressed by contacting the vendor for replacements or solutions to help your company recover. - Assess your downtime tolerance. What is your level of dependence on your servers? Are you operating primarily online? Or are you operating a service agency with limited use of technology for specific functions? Depending on your answer, you may be better able to determine your recovery point objective and recovery time objective. - Categorize your functions. Determine which applications are a priority under your disaster recovery plan. One category should include applications that are critical and must be addressed urgently in the event of a disaster. Other categories can range from several hours to several days in priority, based on how critical they are to the success of your operations. - Team communications and having a trained team on standby – While the goal is to have 99.995% uptime, running a data center means planning for disaster, and that requires a recovery plan in the event of something catastrophic, whether digital or natural. Two of the most common occurrences a data center may experience are a facility becoming too hot or too cold, which will affect operations. Both of these situations can cause damage to supporting hardware, and they each require different techniques to remedy the situation. What most people don’t think about is that different types of disasters call for different staffing requirements. Recovery in a hot site means all your supporting hardware infrastructure and data are available in the event of a disaster, making it easier to perform the recovery process. In this situation, almost everything is already set up to perform a recovery procedure, so you may only need a few staff members to be available during this type of disaster. Having a large support staff during recovery in a hot site isn’t always needed; a few experts will suffice for the operation. Recovery in a cold site means that there might not be any infrastructure or data, which requires more “all hands on deck.” A large supporting staff is needed to meet the requirements for a recovery in a cold site. This type of disaster can also be costlier due to the time and hardware it takes to resolve the issue. Although each of these scenarios requires different staffing, there needs to be a disaster recovery plan in place for a variety of scenarios, so your staff knows what to do when disaster hits. Knowing how the staff will get to the site promptly and what type of accommodations they will need to make are important factors when coming up with a plan. When choosing the right IT staff for different scenarios, make sure you know which professionals you will be able to call on immediately after a disaster. - Establish an emergency plan. Your employees are the most valuable assets of your company. Keep them trained on what to do in the event of an emergency, updating them regularly on safety drills to perform during tornadoes and other extreme weather. 
If you haven't done so already, incorporate earthquake drills. Numerous businesses throughout the Midwest have already performed a massive earthquake drill in connection with the U.S. Department of Homeland Security. For tips on performing these drills, see the checklist for the Great Central U.S. ShakeOut.
- Assign roles of responsibility. Identify the members of your team who will be contacted for different roles and at different times. Critical team members should be contacted immediately to handle specific responsibilities under your disaster recovery plan. Remember that employees come and go: you could be in a disaster situation and find that a previously identified employee is no longer with the company. Frequent testing will uncover these issues. Also, identify any third-party consultants who are critical to your plan.
- Establish a communications plan. How do you plan to get the word out to employees on next steps in the wake of a natural or human-made disaster? Outline those guidelines in your plan. Under challenging circumstances, how will employees receive information on where to go next? If systems are down, including phone and the internet, you may need to establish alternative methods of communication. You can also distribute the protocol in writing beforehand. Train employees on how to access those details.
- Train your employees to ensure adoption. Engage your employees in disaster recovery plan testing. It's not enough to test your disaster recovery plan every couple of years. Going through the exercise may be disruptive, but it could be the key to keeping your operations viable when disaster does strike.
- Do we have a plan for safety? Safety measures cover more than the hardware and appliances at your location – any staff member who is unaware of a sudden disaster or of the safety plan can be a liability during a crisis. Make sure that your disaster and recovery plan is circulated among all the personnel at your data center so that everyone knows what to do.
- Do we have a trained team on standby? You can't expect all personnel to know how to react to a disaster. It is important to have an experienced, trained team ready to take control. A lead team member who can get to the data center even in harsh natural conditions is invaluable in an emergency.
- Technology risks – For proper data restoration, the original backup data should be relevant, validated, and free of errors. Issues like pulling data from old versions or legacy systems, version control problems, and the like should be avoided.
- Have we planned for power backup? It is unlikely that a data center lacks backup power altogether, but it is important to check how long the backup would last in case of a prolonged outage. You may also need to identify priority servers, as well as inactive ones, so that you can distribute the backup power efficiently.
- Have we planned for data redundancy or backup? Geographical redundancy of your data is vital, and with the advent of the cloud, you no longer need to depend on multiple physical locations to achieve it. If your data is backed up regularly and accessible from the cloud, you are on safe ground.
- Always be prepared – Is your structure disaster-ready? In most cases, your most valuable resources are stored in a well-insulated and safe structure, but think about the implications a natural disaster such as a flood or storm might have on your surrounding areas.
Any small repair that is left unattended, such as a blocked roof drain, could have major impacts during an unexpected natural disaster. Remember, you may need to physically relocate, ramp up operations in an emergency, and account for transport, power outages, and more at any time. So, be ready with sufficient, efficient, and quick team members.
- Assess your physical space. Next to your employees, protecting your physical investment is a top priority. Get an assessment of how secure your building will be in the event of flooding, tornadoes, and other natural disasters. If you're planning to move into a new office or warehouse space, make sure you understand how well it was built to withstand strong winds, floods, and other extreme weather. After Superstorm Sandy in 2012, several businesses in New York realized that basements aren't ideal locations for servers: floodwater poured into the basements of several office buildings, disabling servers along with the diesel fuel pumps needed to run backup generators. Because of the storm, data center representatives started to re-examine the scope of their disaster plans, considering more severe storms as potential risks. Don't wait until the next disaster to assess whether your servers are stored in a secure environment. If they are in a basement, for example, research the feasibility of moving them to ground level or higher. If that's not possible, consider moving your servers to a safer offsite location. Also, consider hiring a professional consultant to help you make the right choice.
- Basic operations. Where will your employees work if a disaster destroys your office building, or you lose power for a prolonged time? After a weather-related disaster, road conditions and other hazards may mean working remotely is the only option. But if it's safe for employees to travel, having a disaster recovery hot site – with servers, laptops, and office space – can allow you to continue running your business. A cold site, one that offers only office space, can be a viable alternative if you're able to effectively install the equipment you need to resume operations. It may be a less expensive option overall, especially in the event of an extended outage.
- Develop a backup plan. Whether a natural disaster or another event cripples your business, it's wise to protect your mission-critical data by setting up a secondary support infrastructure that allows you to continue your operations seamlessly. Colocation centers can not only provide automated backup and recovery services, but also a location for your employees to work if your business location is inaccessible.

Steps to Choosing Colocation for Disaster Recovery

When creating a disaster recovery (DR) plan for your company, you have numerous options. One of your main objectives is moving data to a location where it can be kept secure while ensuring that your team can monitor the data, or at least its security. While you could build an additional data center offsite, another viable alternative, as many companies have discovered, is using colocation as a cost-saving and effective solution. However, using a third-party service provider requires plenty of research on your part to ensure that you have the right fit. Here are some of the things you should consider:
- Analyze the location. Is it far enough away from your facility to ensure that you won't face the same challenges if a natural disaster strikes?
Also, check to see whether the location is in an area with a low risk of natural hazards like flooding.
- Ask about experience. You want to know whether the provider has experience working with clients like yours, in both size and challenges. Also check its track record for compliance and industry standards, as well as its ratings for uptime. Don't forget to ask for referrals.
- Understand all fees. Make sure the company provides an extensive explanation of costs, fees, and policies. Have your legal team or an attorney review the contract beforehand.
- Ask about other services. Do you need experts outside of your company to manage your IT assets? Ask whether the provider offers this. Some colocation providers offer a wide range of services, while others focus on specific areas and needs. Analyze your needs and choose a provider that best meets them.

Disaster Recovery Tests and Updates

Conduct semi-annual tests of your DR plans, depending on the changes taking place within your environment, to refine them constantly. With major floods hitting states across America, including Oregon, Illinois, Missouri, and Tennessee, scientists are now predicting there's plenty more to come over the next five years, putting residents, businesses, and other organizations at risk. That's why experts are increasingly concerned about the need for data centers to put disaster recovery plans in place to minimize disruptions.

"Sea-level rise has already doubled the chances of extreme flooding in locations around the U.S., and that will only accelerate in the coming decades," says Benjamin Strauss, vice president for climate impacts and director of the Program on Sea Level Rise at Climate Central. Strauss said that by 2020, coastal areas in states like Florida and Louisiana could be significantly affected by sea-level rise.

More than likely, you already have a disaster recovery strategy in place. However, the process does not end with making sure you have an effective plan to protect your company against significant loss. It also includes developing steps to test, review, and adjust as necessary, to ensure that you're not at risk of overlooking flaws in the DR plan. Here are five crucial steps to make sure that your operations can successfully recover in the event of flooding or another critical event.
- Appoint employees to a planning committee. Designate employees and key managers to be part of a DR planning committee that reviews and updates your DR strategy. Make sure you have a team that represents different areas of your organization to address unique needs.
- Perform a risk assessment. One of the top priorities of the planning committee is to conduct a risk assessment that covers a variety of scenarios, including the inability to access data or to communicate with one another. Be exhaustive in determining the risks, as well as the costs associated with each scenario.
- Prioritize functions. Obviously, not all functions are created equal. Determine which departments and department functions are a priority for recovery after a disruption. Rank them in order of importance and include that ranking in your recovery plan.
- Assess your backup plans. One of the key components of a disaster recovery plan is developing backup resources. Many companies choose to build a backup plan through a third party to avoid the expense of building out the facility or purchasing the equipment themselves.
- Test and test again.
With your DR plan finalized, it's important to regularly test and review it for areas that may need updating. Also, make sure that you keep a printed copy in a safe area outside of your facility that you can reach in the case of a disruption.

Update Your Disaster Recovery Plan by Answering These 2 Questions

Every business, whether large, medium, or small, should have a comprehensive disaster recovery plan that guides employees on how to recover in the event of a breach caused by a natural or human-made event. Most of us realize that. But when was the last time you reviewed your plan and identified any gaps that may require tweaks or significant changes? Whether you're just now developing a disaster recovery plan or at a stage where you need to review it, start the process by asking these two questions:
- How fast do you need to recover after an event?
- What do you need to recover quickly?

By outlining the answers to those questions, you can start implementing a strategic plan based on your specific needs and requirements, according to Alex Carroll, who discusses the important elements of a disaster recovery plan in this video. Carroll said company officials need to bring in numerous team members to take a granular look and determine which areas are more critical than others. The solutions could vary extensively depending on which functions of your company need to be recovered immediately; these will, as a result, be the most expensive recovery projects in your plan. You may determine that other data can wait a week before it is required to get your company back to normal, or to a "return to operations," as it is described in the industry. Companies may also want to determine whether they need work group recovery — another location that allows employees to physically gather and keep working in the event of a disaster or other event that prevents them from working at the company.
- Determine the cost. If your business has experienced growth since you last reviewed your disaster recovery plan, it's time for an update. A solid plan should consider what is required to get back to service as quickly as possible, as well as the expenses involved with different disaster recovery scenarios.
- Look beyond nature. With cyber attacks increasing, your threats have expanded beyond acts of nature like floods and earthquakes. Factor in how malicious attacks can impact your systems and outline a plan for recovery.
- Identify your key people. It happens in every business: people come and go, and some move up in the company. Make sure your plan reflects the key employees who will be critical to addressing issues with the data center in an emergency. Make sure all contact information is current, even if you are dealing with the same list of employees.
- Retrain employees. Review your disaster recovery procedures to make sure the instructions on what to do in the wake of an event still make sense. Once the plan is solid, remind employees of their roles. Consider it a tornado drill: everyone needs a reminder.

Floods, tornadoes, severe storms, earthquakes, and hail were among the natural disasters hitting the United States in 2016. Some regions of the country, of course, fare better than others when it comes to the damaging effects of Mother Nature. But companies globally must deal with the increasing threats caused by cyber-attacks. No one is immune.
These criminal acts have impacted everyone from small companies to big names such as Wendy's, the FBI, Verizon, LinkedIn, Citibank, and the Democratic National Convention, according to The Heritage Foundation. With threats coming from all sides, companies need to focus on developing effective disaster recovery plans — ensuring that they're updated and regularly tested to minimize the fallout caused by disastrous events. Doing so could be the difference between your company being down for days, even weeks, and being able to fully recover and operate within an hour or two of an event crippling your system. Entrusting your facility operations to a reliable data center service provider like Lifeline Data Centers can help you considerably reduce downtime concerns in your operations. Take a quick look at our services and our client testimonials.
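To make the downtime figures cited earlier in this piece concrete, here is a minimal sketch of the arithmetic behind a business-impact assessment. The per-minute cost comes from the statistic quoted above; the outage durations are illustrative placeholders, not benchmarks.

    # Rough downtime-cost estimate for a business-impact assessment.
    # Inputs are illustrative assumptions; substitute your own figures.

    COST_PER_MINUTE = 9_000  # USD, the per-minute average cited above

    def downtime_loss(minutes: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
        """Return the estimated revenue loss for an outage of the given length."""
        return minutes * cost_per_minute

    # Compare a few candidate recovery time objectives (RTOs).
    for rto_minutes in (15, 60, 480):  # 15 min, 1 hour, 8 hours
        print(f"RTO {rto_minutes:>3} min -> estimated loss ${downtime_loss(rto_minutes):,.0f}")

Running the same calculation against each function you categorized earlier is a quick way to justify which recovery projects deserve the biggest budget.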
<urn:uuid:58d32b6c-9bae-421b-8b8a-2ecd5d3e9eaf>
CC-MAIN-2022-40
https://lifelinedatacenters.com/disaster-recovery/effective-disaster-recovery-plan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00254.warc.gz
en
0.949359
4,910
2.75
3
A BankInfoSecurity article ranked cybersecurity as one of the top five IT skills in demand right now. It's easy to see why – from high-profile attacks like the Flame virus, which many experts believe had the backing of an entire country, to more targeted attacks designed to break into individual bank accounts.

The importance of cybersecurity was also confirmed by a San Francisco Chronicle article written by Suzanne Spaulding, deputy undersecretary for the National Protection and Programs Directorate, and Mark Weatherford, deputy undersecretary for cybersecurity at the U.S. Department of Homeland Security. According to Spaulding and Weatherford, everyone has a role to play in improving online security. Many businesses are shifting to a heavier focus on security, but more may need to be done. Efforts must be made to improve the ability of organizations to share threat information and to increase the supply of professionals with cybersecurity expertise.

The University of Washington recently developed a card game designed to address the cybersecurity skills gap. According to a Phys.org article, "Control-Alt-Hack" is a tabletop card game that puts players in the role of security experts working for the fictional company "Hackers, Inc." The article highlighted comments from Yoshi Kohno, an associate professor at the university.

"The target audience is 15- to 30-year-olds with some knowledge of computer science, though not necessarily of computer security," the article stated. "The game could supplement a high school or introductory college-level computer science course, Kohno said, or it could appeal to information technology professionals who may not follow the evolution of computer security."

While the game is designed mostly for fun, in order to generate interest in security, some of the content is based on real-world security vulnerabilities, Phys.org reported. In-game characters also have different skill sets, such as social engineering or "software wizardry."

Do you think more should be done to generate interest in cybersecurity? Would you enjoy going up against digital cyber criminals in games like "Control-Alt-Hack"?
<urn:uuid:41522289-bdb4-4645-bfe7-08b2bd7a9be5>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/security-experts-design-game-to-generate-cybersecurity-interest
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00254.warc.gz
en
0.959546
435
3.125
3
Digital-to-analog converter bridges the gap between the internet and electronic hardware

Researchers at the George Washington University and the University of California, Los Angeles, have developed and demonstrated for the first time a photonic digital-to-analog converter that operates without leaving the optical domain. Such novel converters can advance next-generation data-processing hardware, with high relevance for data centers, 6G networks, artificial intelligence and more.

Current optical networks, through which most of the world's data is transmitted, as well as many sensors, require a digital-to-analog conversion, which links digital systems synergistically to analog components.

Using a silicon photonic chip platform, Volker J. Sorger, an associate professor of electrical and computer engineering at GW, and his colleagues have created a digital-to-analog converter that does not require the signal to be converted in the electrical domain. It thus shows the potential to satisfy the demand for high data-processing capability while acting on optical data, interfacing with digital systems, and performing in a compact footprint, with both short signal delay and low power consumption.

"We found a way to seamlessly bridge the gap that exists between these two worlds, analog and digital," Sorger said. "This device is a key stepping stone for next-generation data processing hardware."

Read the study, "Electronic Bottleneck Suppression in Next-Generation Networks with Integrated Photonic Digital-to-Analog Converters," here (https://onlinelibrary.wiley.com/doi/epdf/10.1002/adpr.202000033).

This work was funded by the Air Force Office of Scientific Research (FA9550-19-1-0277) and the Office of Naval Research (N00014-19-1-2595 of the Electronic Warfare Program).
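For readers unfamiliar with what a digital-to-analog converter actually computes, the sketch below shows the standard transfer relation for a generic N-bit DAC. This is the textbook electronic formula, not the photonic implementation described in the study; the values are illustrative.

    # Generic N-bit DAC transfer function: V_out = (D / 2**N) * V_ref.
    # Illustrative only; the photonic device above operates on optical signals.

    def dac_output(code: int, n_bits: int, v_ref: float) -> float:
        """Map an n_bits-wide digital code to an analog output voltage."""
        if not 0 <= code < 2 ** n_bits:
            raise ValueError("digital code out of range for the given bit width")
        return (code / 2 ** n_bits) * v_ref

    # An 8-bit DAC with a 1.0 V reference: code 128 lands at mid-scale.
    print(dac_output(128, n_bits=8, v_ref=1.0))  # 0.5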
<urn:uuid:a0070d6a-340f-4ab2-bf40-e0e14cd6152c>
CC-MAIN-2022-40
https://www.ecmconnection.com/doc/researchers-create-novel-photonic-chip-0001
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00254.warc.gz
en
0.883379
379
2.828125
3
The deteriorating climate, depletion of natural resources, and other environmental issues have made sustainability very important. Thanks to the advancement of technology and improved network connectivity, people have become more aware of environmental problems and other issues of global concern. As a result, there is pressure on business owners, leaders, and administrators to incorporate sustainable practices into their businesses.

Lately, businesses have been embracing sustainable practices like installing appliances that consume less energy, educating employees about the relevance of sustainable practices, encouraging recycling programs, and more. Many businesses are also switching suppliers for greener options. Thus, they are adopting all viable solutions that can turn their business into a more sustainable one. One such solution that can accelerate sustainability is AI.

As we know, AI is transforming the way we do business. From data analysis software that processes large volumes of unstructured data quickly, to improved forecasting and faster predictions, AI enables businesses to create products based on customers' needs. Now, let us discuss how AI can accelerate sustainability in business.

AI for personalised journeys

The success of a business depends on your ability to market products in a way that captures audience attention. But if you need to grab the attention of conscious customers, you will have to highlight the brand's sustainability initiatives. With AI, you can promote your business based on the needs of each customer. Businesses can send personalised marketing emails and notifications that will grab the attention of conscious customers.

AI to make data-driven decisions

To stay profitable and save money, businesses and manufacturers need to know how many products they can expect to sell. Once they know that number, they can manufacture products to match demand, reducing waste and optimising resource usage. The majority of businesses leave a huge carbon footprint and consume a lot of resources and energy to make their products. But once you have the correct data, you can understand the demand for the product and manufacture it only as needed. This reduces the number of unsold items and the amount of carbon emitted.

But how can you achieve this with AI? Conundrum, a UK-based technology company, states that with proper investment in AI you will be able to forecast sales and predict demand based not only on post-sales analysis but also on events like weather forecasts and cultural events. Forecasting demand with AI helps a business know what consumers want, so companies can make items based on those needs. Thus, businesses will be able to make data-driven decisions, manufacturing products as required. This also helps reduce waste, save resources, and minimise expenses.

AI to manage inventories

Inventories form a major part of any business. For a business to run efficiently, everything from manufacturing to product distribution must be managed carefully. But managing inventories is a time-consuming and tedious task. By applying AI, you can optimise inventory management and reduce the chances of mistakes. With AI, inventory management becomes pre-planned, automated, and demand-driven.
It will reduce waste and boost profit.

AI to lower energy bills

One of the biggest expenses of any business is energy. By deploying AI in your office buildings and factories, you can effectively cut your energy bill. AI can monitor and collect information on the building's energy consumption. Based on that evaluation, AI will manage energy consumption and reduce use during peak hours. By analysing data, AI can predict problems such as bottlenecks and detect equipment failures, enabling businesses to come up with a plan to mitigate the issues.

To reduce energy bills, it is also advisable to compare business gas and electricity prices through Business Energy UK. By comparing the bills, you can see whether you are getting the best deal in the market. If not, you can switch suppliers and energy tariffs through the site. If you are trying to transform your business into a sustainable one, you can also find suppliers who generate energy from greener sources through Business Energy UK.

AI-based model imagery

If you are in the fashion industry, you will be aware of the high carbon emissions caused by photoshoots. The lighting and sampling of products waste energy and add to your business expenses. Along with this, the pollution and waste due to packaging, shipping, and excess inventory also need to be considered. Here, you can use an AI-based virtual dressing room to solve these problems. With a virtual dressing room, shoppers can try outfits on a model that resembles them. They can mix and match dresses and visualise how they will look in them. This helps customers buy a dress that they love, and that fits them, without stepping out of their houses. And since there is less chance of a poor fit, customers will return fewer products, reducing the waste caused by repackaging. AI also enables retailers to use the technique for fashion photoshoots, saving time, resources, and expenses, and reducing the carbon footprint.

AI to boost the conservation of natural resources

The operation of any business consumes a lot of natural resources. By introducing AI into water resource management, industries can use water sustainably. They can incorporate satellite imagery, machine learning, and sensors into their operations to understand real-time water loss and misuse. This helps companies see where water usage is highest and devise water management strategies to curb excess use. For instance, if you are in the irrigation industry, AI can streamline work by identifying the amount of water present in the soil and calculating the water demand. This reduces the wastage of water resources.

AI to detect the sources of pollution

With IoT and AI, businesses can identify the sources of pollution. They can analyse air quality in real time and pinpoint the source of pollution at an early stage. Businesses and industries get insight into the issue and can come up with plans to mitigate it.

AI can accelerate sustainable options

One of the major drawbacks of sustainable energy is that one cannot predict the amount of solar or wind energy available on a particular day.
But there are now various weather forecasting models with cognitive self-learning capabilities on the market that can help you forecast the amount of renewable energy available on a particular day. Thus, AI can overcome one of the biggest drawbacks of using renewable energy sources.

AI is revolutionising the way we do business and is transforming all industries. It is helping them mitigate many issues and is also accelerating sustainability, which is important both for the growth of business and for the health of the planet.
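As a toy illustration of the demand-forecasting idea discussed above (not Conundrum's actual models, which are far more sophisticated and fold in signals such as weather and cultural events), a simple moving-average baseline looks like this; the sales history is made up:

    # Minimal demand-forecast baseline: predict next period's demand as the
    # average of the last k observed periods. Real AI forecasters add many
    # more signals (weather, events, promotions); this is only a sketch.

    def moving_average_forecast(history: list[float], k: int = 3) -> float:
        """Forecast the next value from the mean of the last k observations."""
        window = history[-k:]
        return sum(window) / len(window)

    weekly_units_sold = [120, 135, 128, 150, 160]  # hypothetical sales history
    forecast = moving_average_forecast(weekly_units_sold)
    print(f"Plan production for ~{forecast:.0f} units to limit unsold stock")

Even this crude baseline captures the sustainability logic of the section: produce closer to expected demand, and less ends up as waste.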
<urn:uuid:3a584d8a-e3ec-4382-904a-de364e2b7b62>
CC-MAIN-2022-40
https://em360tech.com/tech-article/ways-which-ai-can-accelerate-sustainability-business
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00254.warc.gz
en
0.951216
1,417
2.703125
3
The concept of emotional artificial intelligence, or 'emotion AI,' conjures up visions of humanoid robots in customer service roles, such as the lifelike 'receptionist' welcoming guests at a Tokyo hotel. A number of companies have added emotion recognition to their personal assistant robots so they too can have more humanlike interactions. But humanoid robotics is just one of many potential uses for emotion AI technology, says Annette Zimmermann, research vice president at Gartner.

"By 2022, 10% of personal devices will have emotion AI capabilities"

Tech giants, as well as smaller startups, have been investing in emotion AI for over a decade, using either computer vision or voice analysis to recognize human emotions. Many of these companies started with a focus on market research, analyzing and capturing human emotions in response to a product or TV commercial. Commercial deployments are slowly emerging in virtual personal assistants (VPAs), cars, call centers, robotics and smart devices.

Gartner predicts that by 2022, 10% of personal devices will have emotion AI capabilities, either on-device or via cloud services, up from less than 1% in 2018. "We will continue to find many new and exciting uses for emotion AI technology over the coming year," Zimmermann says. "However, smaller providers will need to focus their efforts on a limited number of applications and industries, instead of trying to be everything to everyone."

New uses are evolving quickly

In the past two years, emotion AI vendors have moved into completely new areas and industries, helping organizations to create a better customer experience and unlock real cost savings. These uses include:

- Video gaming. Using computer vision, the game console/video game detects emotions via facial expressions during the game and adapts to them.
- Medical diagnosis. Software can help doctors with the diagnosis of diseases such as depression and dementia by using voice analysis.
- Education. Learning software prototypes have been developed to adapt to kids' emotions. When the child shows frustration because a task is too difficult or too simple, the program adapts the task so it becomes less or more challenging. Another learning system helps autistic children recognize other people's emotions.
- Employee safety. Based on Gartner client inquiries, demand for employee safety solutions is on the rise. Emotion AI can help to analyze the stress and anxiety levels of employees who have very demanding jobs, such as first responders.
- Patient care. A 'nurse bot' not only reminds older patients on long-term medical programs to take their medication, but also converses with them every day to monitor their overall wellbeing.
- Car safety. Automotive vendors can use computer vision technology to monitor the driver's emotional state. An extreme emotional state or drowsiness could trigger an alert for the driver.
- Autonomous cars. In the future, the interior of autonomous cars will have many sensors, including cameras and microphones, to monitor what is happening and to understand how users view the driving experience.
- Fraud detection. Insurance companies use voice analysis to detect whether a customer is telling the truth when submitting a claim. According to independent surveys, up to 30% of users have admitted to lying to their car insurance company in order to gain coverage.
- Recruiting. Software is used during job interviews to understand the credibility of a candidate.
- Call center intelligent routing.
An angry customer can be detected from the beginning and routed to a well-trained agent who can also monitor in real time how the conversation is going and adjust.
- Connected home. A VPA-enabled speaker can recognize the mood of the person interacting with it and respond accordingly.
- Public service. Partnerships between emotion AI technology vendors and surveillance camera providers have emerged. Cameras in public places in the United Arab Emirates can detect people's facial expressions and, hence, understand the general mood of the population. This project was initiated by the country's Ministry of Happiness.
- Retail. Retailers have started looking into installing computer vision emotion AI technology in stores to capture demographic information and visitors' moods and reactions.

However, barriers to adoption remain. A recent Gartner consumer survey revealed that there are still considerable trust issues around emotion AI technologies; that is, users feel less comfortable with emotion AI via camera capture compared to voice analysis. "Providers need to convince us that our emotion data is safeguarded and only used in an anonymized way to train other systems by implementing transparent data management policies," cautions Zimmermann.
<urn:uuid:f54dee79-31aa-4b37-b926-0471b9b90a8b>
CC-MAIN-2022-40
https://www.gartner.com/smarterwithgartner/13-surprising-uses-for-emotion-ai-technology
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00254.warc.gz
en
0.94245
933
2.59375
3
In this article, we are going to review how to schedule and run tasks in the background automatically at regular intervals using the crontab command. Dealing with frequent jobs manually is a daunting task for system administrators; such tasks can be scheduled and run automatically in the background, without human intervention, using the cron daemon on Linux or Unix-like operating systems.

[ You might also like: How to Create and Manage Cron Jobs on Linux ]

For instance, using the cron daemon you can automate Linux system backups, schedule updates, synchronize files, and much more. Cron runs scheduled tasks from the command line; you can also use online tools to generate cron jobs. Cron wakes up every minute and checks for scheduled tasks in the crontab. Crontab (CRON TABle) is the table where such repeated tasks are scheduled.

Tips: Each user can have their own crontab to create, modify, and delete tasks. By default, cron is enabled for all users; however, we can restrict users by adding entries to the /etc/cron.deny file.

A crontab file consists of one command per line, with six fields separated by spaces or tabs. The first five fields specify when to run the task; the last field is the command.

- Minute (holds values between 0-59)
- Hour (holds values between 0-23)
- Day of month (holds values between 1-31)
- Month of the year (holds values between 1-12 or Jan-Dec; you can use the first three letters of each month's name, e.g., Jan or Jun.)
- Day of week (holds values between 0-6 or Sun-Sat; here too you can use the first three letters of each day's name, e.g., Sun or Wed.)
- Command – The /path/to/command or script you want to schedule.

1. List Crontab Entries

List the scheduled tasks of the current user with the crontab command's -l option.

# crontab -l
00 10 * * * /bin/ls >/ls.txt

2. Edit Crontab Entries

To edit crontab entries, use the -e option as shown below. This opens the scheduled jobs in the vi editor. Make the necessary changes, then save and quit by typing :wq, which applies the changes automatically.

# crontab -e

3. List Scheduled Cron Jobs

To list the scheduled jobs of a particular user, such as tecmint, use the -u (user) option together with -l.

# crontab -u tecmint -l
no crontab for tecmint

Note: Only the root user has full privileges to see other users' crontab entries. Normal users can't view others'.

4. Remove Crontab Entry

Caution: crontab with the -r parameter removes all scheduled jobs without confirmation. Use the -i option if you want to be prompted before a crontab is deleted.

# crontab -r

5. Prompt Before Deleting Crontab

The -i option prompts for confirmation before deleting the user's crontab.

# crontab -i -r
crontab: really delete root's crontab?

6. Allowed Special Characters (*, -, /, ,)

- Asterisk (*) – Matches all values in the field, i.e., any possible value.
- Hyphen (-) – Defines a range.
- Slash (/) – Defines a step or increment within a range; for example, */10 in the first field means every ten minutes.
- Comma (,) – Separates items in a list.

(A combined example using these characters appears at the end of this article.)

7. System-Wide Cron Schedule

A system administrator can use the predefined cron directories, such as /etc/cron.d, /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly.

8. Schedule Jobs for a Specific Time

The job below deletes empty files and directories from /tmp at 12:30 am daily. When a job is defined in a system-wide crontab, as here, you need to include the user name to run the command as; in the example below, the job runs as the root user.

# crontab -e
30 0 * * * root find /tmp -type f -empty -delete

9. Special Strings for Common Schedules

- @reboot – The command runs once, when the system reboots.
- @daily – Once per day; you may also use @midnight.
- @weekly – Once per week.
- @yearly – Once per year; the @annually keyword also works.

To use these, replace the five time fields of the cron entry with the keyword.

10. Multiple Commands with Double Ampersand (&&)

In the example below, command1 and command2 run daily.

# crontab -e
@daily <command1> && <command2>

11. Disable Email Notifications

By default, cron sends mail to the user account executing the cron job. If you want to disable this, add your cron job as in the example below: appending >/dev/null 2>&1 to the command redirects all of its output to /dev/null.

# crontab -e
* * * * * /path/to/command >/dev/null 2>&1

Conclusion: Automating tasks helps us perform our work in better ways: error-free and efficiently. You may refer to the crontab manual page for more information by typing the 'man crontab' command in your terminal.
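As promised in section 6, here is a combined illustration of the special characters. The scripts and schedules below are placeholders, not recommendations; adjust the paths and times to your own jobs.

# Step values (/): run a health check every ten minutes.
*/10 * * * * /path/to/health-check.sh

# Ranges (-): run hourly from 9 am to 5 pm, Monday through Friday.
0 9-17 * * 1-5 /path/to/business-hours-task.sh

# Lists (,): run a backup at 2:30 am on Sundays and Wednesdays.
30 2 * * 0,3 /path/to/backup.sh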
<urn:uuid:34f90ea0-1d28-455d-b540-aaab72ea7a55>
CC-MAIN-2022-40
http://dztechno.com/11-cron-scheduling-task-examples-in-linux/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00254.warc.gz
en
0.770063
1,351
2.625
3
Back in April, Instagram introduced a data export feature to comply with the General Data Protection Regulation (GDPR), a regulation in European Union law intended to protect citizens' privacy. But, in an ironic twist, the way in which Instagram implemented its data export feature inadvertently leaked some users' passwords. The Facebook subsidiary notified affected users via e-mail in November.

Instagram's "Download Your Data" feature worked correctly if a user clicked on the Submit button after entering their password. However, if the user had instead pressed the Return or Enter key to submit their password, the site reportedly put the user's password in plaintext in the URL of the resulting page. That's a bad thing, because it means that in some very specific circumstances, it may have been possible for unauthorized parties to discover affected users' passwords.

Instagram has since fixed the bug, so it's safe to use the Download Your Data feature now.

Who Had Access to Leaked Passwords?

Instagram's server logs unintentionally collected these passwords in plaintext, so the company is reportedly deleting any passwords found in those logs. Neither Instagram nor its parent company Facebook has responded to our inquiries about who may have had access to the affected logs or whether the log sanitization has been completed.

Historically, it was much worse for a password to be found in a URL, because it meant that any "man in the middle" (anyone in between your browser and the server to which you are connecting) could see the complete address. But now that most sites use HTTPS, only the domains—for example, instagram.com—are visible to in-between parties, not the full URLs.

Well, at least that's true in most cases. If you or someone with access to your device has installed a special root certificate authority and configured your system to always trust it, then a man in the middle (MITM) who possesses the matching private key could even potentially see complete HTTPS URLs you've accessed.

Example of an explicitly trusted, non-default root CA

The practice of leveraging an MITM certificate is common on enterprise or school networks for the purposes of monitoring employee or student activity. If a victim of the Instagram bug had such a certificate installed, then it's possible that their password may have been stored in plaintext in their organization's Web filter logs, too. Malware like OSX/MaMi can also engage in MITM attacks.

Of course, anyone with direct access to a victim's device could also search the browser history to look for Instagram URLs that may contain a password. If you were unfortunate enough to have used a publicly shared computer to download your Instagram data, anyone who used the computer after you could have gotten your password unless you remembered to delete your browsing data. (For this and a plethora of other reasons, users should avoid logging into any accounts when using public kiosks at places like hotels, libraries, labs, or Internet cafés.)

How Many Users Were Affected?

An Instagram spokesperson has stated that "a very small number of people" were affected by the bug, which was "discovered internally"—in other words, not reported to Instagram by a third party. Thus it seems plausible that very few people outside of Instagram were aware of the bug until after Instagram's disclosure to affected users.
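Instagram hasn't published the technical details, but the behavior described above is consistent with a form falling back to a GET-style submission, which places every field, password included, into the query string, where web servers routinely log it. The generic illustration below uses made-up field names and a placeholder domain, not Instagram's actual parameters:

    # Why a GET-style submission leaks secrets: form fields end up in the URL,
    # which servers typically write to their access logs. Field names here are
    # hypothetical.
    from urllib.parse import urlencode

    fields = {"username": "alice", "password": "hunter2"}

    leaky_url = "https://example.com/download/request?" + urlencode(fields)
    print(leaky_url)
    # https://example.com/download/request?username=alice&password=hunter2
    # A POST request would carry the same fields in the request body instead,
    # keeping them out of URLs, server logs, and browser history.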
What To Do If You Might Have Been Affected

If you're not sure whether your Instagram account was affected, check your e-mail; you should have received notification from Instagram around November 15. If you didn't receive an e-mail but you did use the data export feature before that date, it wouldn't hurt to change your Instagram password and delete your browser history just to be safe. If you've used the same password on other sites, you'll want to change your password elsewhere too and avoid reusing passwords across multiple sites in the future.

How Can I Learn More?

We discussed this topic on episode 59 of the Intego Mac Podcast. Each week we bring you engaging discussion of the latest Apple security news, so be sure to subscribe to the podcast to ensure you don't miss any episodes. You'll also want to subscribe to our e-mail newsletter and keep an eye here on The Mac Security Blog for updates.

CloudGuard root CA screenshot credit: Patrick Wardle. Man in the middle diagram image credit: Nasanbuyn (CC BY-SA 4.0) and Apple; modified by Joshua Long. Girl holding CHANGE YOUR PASSWORD sign based on image by Kasuga~enwiki (CC BY-SA 3.0).
<urn:uuid:cc506612-6690-452e-b286-6b1beaf55794>
CC-MAIN-2022-40
https://www.intego.com/mac-security-blog/did-instagram-leak-your-password/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00454.warc.gz
en
0.93653
987
2.53125
3
Passwords have been the long-time guardians of our personal lives and data ever since the dawn of the internet. Though passwords might still retain most of their relevance, they are not robust enough to protect today's digital economy.

Passwords are not good enough anymore (at least on their own)

The recent wave of high-profile security breaches has made us rethink online security. A few lines of code is all it takes to expose millions of login credentials across the globe. Today, it is not enough just to keep changing your passwords regularly. Accounting for a stunning 61% of breaches, credentials are the primary means threat actors use to hack their way into an organization, according to the 2021 Data Breach Investigations Report.

Relying solely on passwords is like keeping the keys to your sensitive data under your doormat. You might as well leave the door open. You need an additional means to secure your data: an extra layer of security that can come in handy when the key to your data is in the wrong hands. This is where multi-factor authentication (MFA) and two-factor authentication (2FA, sometimes also written as TFA) are vital.

What 2FA is, and why your organization needs it

2FA is a second way to verify your identity online and ensure that only you have access to your information. In addition to your conventional password, 2FA prompts you to prove your identity. The additional level of verification is usually a time-based one-time password (TOTP) generated by an authenticator app, or a physical factor such as a fingerprint or a face unlock.

But when did passwords become useless? According to a Wired article, passwords in the 1960s were as vulnerable as they are today. In 1961, MIT enabled its students to log in to a massive time-sharing computer called CTSS with unique passwords for every student. In what would become the first data breach in the history of computing, MIT students figured out a way to hack into the computer, access all the passwords, and print them at will. Though secure passwords might not have been a priority in the early days of computing, the story of our modern-day enterprises is no different.

The true cost of compromised passwords

In May 2021, Colonial Pipeline, the largest fuel pipeline in the United States, had to stall fuel deliveries in 12 states for several days due to a cyberattack. The extent of the damage forced the US Environmental Protection Agency to announce an emergency fuel waiver to ease gasoline shortages. In the end, the CEO of Colonial Pipeline agreed to pay the $4.4 million ransom, all because of a single compromised password.

The amount of ransom paid should come as no surprise, since the average cost that companies shelled out in 2021 was close to $4.24 million per incident, a 17-year high. Cyber mishaps have led governments across the world to enforce stricter cyber hygiene measures. The US federal government, for instance, urged agencies in an executive order to adopt MFA. Regulatory bodies have emphasized the need for MFA as well, with PCI and NIST being the notable ones. If you're thinking that regulations and compliance are the only reasons to enforce MFA in your enterprise, you might want to think again.

Benefits of MFA

Delivering an extra layer of security, MFA can block over 99.9% of account compromise attacks, according to Microsoft as reported by ZDNet.
If that isn't reassuring enough, here are a few additional compelling reasons to utilize 2FA:

Peace of mind

In an enterprise setup, 2FA gives sysadmins peace of mind, since it ensures that an account cannot be accessed even if the password falls into the wrong hands. Enabling 2FA puts you at the helm of your data and gracefully compensates for weak passwords.

Weed out human errors

Password-related mistakes form a major chunk of the threats arising from human error. Using 2FA removes the need to remember complex passwords or write them down on sticky notes. Even if you use a password that's not easy to crack, it's easy for cybercriminals to test thousands of stolen passwords against something that's only privy to you, like your bank login.

Handle multiple accounts with ease

The convenience of online life has made us open multiple accounts to do almost everything imaginable. More accounts mean more passwords and passphrases. This can also lead to reusing the same password on multiple websites. While the bad habit of recycling passwords might be hard to stop, adding an extra layer of security like 2FA gives you the comfort of convenience and security at the same time.

Protect your reputation

This goes without saying. The inability to enforce MFA leaves the door open for attackers to access sensitive corporate data relying solely on credentials, scarring the organization's reputation. Having that additional level of authentication makes a world of difference.

Enabling 2FA in Desktop Central

Enabling 2FA in Desktop Central is a frictionless process. From the console, navigate to Admin > Security Settings. Under Secure Login, select Enforce Two Factor Authentication.

2FA is probably the simplest way to secure your enterprise against a vast multitude of cyberattacks, from phishing and credential stuffing to brute-force and man-in-the-middle (MITM) attacks. It is high time MFA became a core part of your enterprise security.
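For readers curious how the time-based one-time passwords mentioned earlier actually work, here is a minimal RFC 6238-style sketch using only the Python standard library. It is a toy for understanding the mechanism, not a production authenticator, and the secret shown is a well-known demo value:

    # Minimal TOTP (RFC 6238) sketch: derive a 6-digit code from a shared
    # secret and the current 30-second time window. For learning only.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // step                 # current time window
        msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app would show now

Because both sides derive the code from the shared secret and the clock, a stolen password alone is useless without the second factor.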
<urn:uuid:637290dc-a29a-464b-8e45-b2007a996730>
CC-MAIN-2022-40
https://blogs.manageengine.com/desktop-mobile/desktopcentral/2022/01/19/why-you-need-two-factor-authentication-more-than-ever.html?utm_source=Zcampaigns&utm_medium=nlmail-news&utm_campaign=ME-Newsletter-COMMON&utm_term=february2022
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00454.warc.gz
en
0.94579
1,138
3.015625
3
There are several components that comprise an effective business network. Of these, one of the most important is the security of the network. Policies, strategies, and techniques have to be put in place to protect the users as well as the data stored or transmitted within a business network. Over the past several years, there has been a significant increase in the number of cyberattacks and data breaches, driven by the growth in digital transactions by businesses. Cybercriminals devote a considerable amount of time and effort to devising strategies that can be used to attack and penetrate business networks to steal or corrupt their data. In the first half of 2019, there were 3,800 disclosed data breaches, representing a 54 percent increase over the first half of 2018.

To ensure that your business does not fall victim to a data breach or other types of cybercrime, you need to engage the services of skilled network security specialists, such as those at Cyber Sainik, to develop an effective network protection strategy. One of the tasks carried out by security specialists is identifying the various threats that your business network may be susceptible to and developing solutions to counter them. Discussed in further detail below are some of the network threats and attack strategies commonly used by cybercriminals to compromise business networks.

TOP THREATS TO NETWORK SECURITY

1) Malware/Ransomware: When a network is compromised by cybercriminals, one of the actions they may take is to introduce malware or ransomware into the system. These are malicious bits of code that corrupt data in a variety of ways, depending on the nature of the code. Some malware encrypts all the data within the network, thereby rendering it useless; this type of malware is known as ransomware. With ransomware, the cybercriminals possess the decryption keys and hold the encrypted data hostage until a ransom is paid, after which the data is decrypted. Other types of malware may steal data out of a network (known as data exfiltration) or may even erase the data outright.

2) Botnets: With botnets, cybercriminals are not interested in the business data when the network is compromised. Rather, they are interested in end-user devices such as desktop computers and laptops that are used to connect to the network. These end-user devices are hijacked and remotely controlled by the cybercriminal, often without the knowledge of the end-user. The cybercriminal typically hijacks hundreds or thousands of individual end-user devices, known as zombies. These zombies are then used in a variety of cyberattacks, one of the most popular being Distributed-Denial-of-Service (DDoS) attacks, whereby heavy traffic is directed at a server until it becomes overwhelmed and eventually crashes.

3) Computer Viruses: Viruses are small computer programs that infect devices connected to a network, thereby corrupting them. When a virus infects a system, it immediately begins to replicate and spread to other devices within the network. The replication and spread of the virus continue until either all the connected devices have been infected or the network security administrator takes action to contain it. On infected computers, viruses corrupt and destroy core systems and processes, rendering them inoperable.
4) Phishing Attacks: Phishing is one of the most common network attack strategies used by cybercriminals to compromise business networks and steal sensitive or confidential information. With phishing, users within a network are sent emails containing links with malicious code embedded. When an unsuspecting user clicks on the malicious link, the malicious code is released into the network, where it can wreak significant havoc. In other instances, clicking on the malicious link may lead to a fake site where the user is prompted to provide personal information; this information is then used by the cybercriminal for illegal activities.

5) Trojan Horses: Trojan horses are similar to phishing lures in that they are designed to fool unsuspecting users into clicking or downloading them. In addition to being embedded in links within emails, Trojan horses may also masquerade as legitimate files or folders. When these are downloaded, malware is released onto the device, where it can perform a variety of actions such as monitoring keystrokes and hijacking the computer's webcam, among other things.

6) Rootkits: Rootkits are among the most dangerous and destructive network attack strategies used by cybercriminals. With rootkits, cybercriminals take advantage of network vulnerabilities to install programs that give them administrator-level privileges. These are often very well hidden and difficult to detect. Once a rootkit is installed, the cybercriminal has unrestricted access to the entire network and can execute a host of illegal activities such as keylogging, corrupting core files, and disabling antivirus solutions.

7) SQL Injections: These are network attack strategies that target the databases and database server within a network. With SQL injections, cybercriminals use malicious SQL code to penetrate the database. The malicious SQL code can be used to obtain the account credentials of other users, or to alter or even delete data stored within the network database, depending on the nature of the code. (A minimal illustration appears at the end of this article.)

8) Cryptojacking: Cryptojacking is when cybercriminals hijack end-user devices and use them to mine cryptocurrency. Cryptomining requires a lot of CPU resources, so cybercriminals use a variety of methods such as phishing and Trojans to recruit more devices for this purpose. With cryptojacking, the user is often unaware that the CPU has been hijacked; sometimes the only indicator is a device that runs slower than normal.

9) Advanced Persistent Threats: Also known as APT attacks, this type of network threat differs from the others because it takes place over a lengthy period. After penetrating a network, the cybercriminal installs malware in a location where it can stay undetected for a long duration; some malware can stay hidden for months, even years, without detection. From its hidden location, the malware is able to siphon sensitive information to sites outside the network.

At Cyber Sainik, we provide the skills and the services needed to ensure that your business network remains fully secure from all sorts of network threats. With our cloud-based Security-as-a-Service (SECaaS) solutions, monitored by our 24x7 security operations center (SOC), you can rest assured that your network will have round-the-clock protection. Contact us today to learn more about our security solution and to get started.
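As promised under item 7 above, here is SQL injection in miniature, contrasted with the standard defense of parameterized queries. It uses Python's built-in sqlite3 module with a hypothetical table and columns:

    # SQL injection in miniature (hypothetical schema). The vulnerable version
    # splices user input into the SQL string; the safe version uses parameters.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

    attacker_input = "nobody' OR '1'='1"

    # VULNERABLE: the injected OR clause returns every row in the table.
    rows = conn.execute(
        f"SELECT * FROM users WHERE name = '{attacker_input}'"
    ).fetchall()
    print("vulnerable query returned:", rows)

    # SAFE: the driver treats the input as a literal value, matching nothing.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (attacker_input,)
    ).fetchall()
    print("parameterized query returned:", rows)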
<urn:uuid:8ee0fe73-1827-4cef-8f7f-d3c4a5d5ae43>
CC-MAIN-2022-40
https://cybersainik.com/what-are-the-greatest-threats-to-network-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00454.warc.gz
en
0.935783
1,334
2.640625
3
JSON Web Token (JWT) is an open standard for securely transmitting information between two parties. A JWT can be signed with a secret key using a suitable algorithm. When systems exchange confidential data, the signed token identifies the user without passing private credentials back and forth. JWT is a technology widely used by app development services to secure APIs. An API, or application programming interface, enables communication between two applications. APIs are used in iPhone applications, which an iPhone app development company can design for you. When you use an application and get a response from it, that exchange happens through this programming interface. APIs therefore need to be secured, which can be done with the help of JWT, and app development services can help you do so.

What is API Authentication?

An API is a software interface that mediates interaction between a client and a server. It accepts and responds to protected requests from users and clients. APIs must be well equipped to ensure safety and to check the authenticity of the clients that try to access data. API authentication is the procedure of verifying the identity of the clients accessing resources on the server. API authentication is a must for an iPhone application, which an iPhone app development company can design for you. It authenticates and certifies the users who access the server for information.

Benefits of securing API

APIs in your mobile applications are authenticated with the help of app development services. API management and authentication help your mobile applications work securely. JSON Web Tokens carry encoded confidential data to provide that security.

- Create customized authorization servers. Personalized authorization servers help manage API access for various client applications and different customer types.
- Create custom scopes and claims. You can map these claims to a user's profile and maintain them in the user directory.
- Adhere to compliance requirements. Stay safe and secure with API authentication.

JWT for apps helps with API authentication. It certifies the clients and users who access the server, and confidentiality and privacy are maintained throughout.

- Personal credentials are not passed around or exchanged. Personal information is encoded into tokens, and the tokens are exchanged. The JWT carries the payload for the user context, so your personal credentials are never leaked or reused; an encoded token stands in for them.
- You can manage API access with proper rules and compliance. Specifying the particular conditions under which actions are allowed gives much clearer and more precise control over API keys.
- Enjoy high-quality API access at any time. With the right app development services, you get reliable access to your API keys. Real-time applications use the JWT to record every communication between client and user.
- Standards changes and automatic updates are applied to your API through the JWT API platform.
- All users in the application are authenticated and their identities verified. Such a measure prevents data theft and misuse.
Only certified and registered users will have access to data and communication.
- Tokens issued by the JWT system are unique. Each user receives a different token, and each login requires a fresh one.
- When your application is secured with unique tokens, access is denied to anyone who could harm it. Restricted, certified access protects both your application and its data.
Ways JWT helps in securing an API
- JWT provides a mechanism for sharing secured information across different security domains in real-time applications, whenever two parties exchange data through an API.
- JWT strengthens the trust relationship between the two parties sharing data through the API, allowing the API to transfer only verified, secured data.
- JWT asserts identity, establishing trust between the two communicating parties and enabling secure interactions within applications.
- JWT makes it easy to create and use tokens: establish trusted entities, then fully control access to services, data, and resources. The tokens let the API identify the right identities in encoded form.
- JWT supports quotas and throttling. Quotas on the number of API calls can be enforced, and more calls than expected signal abusive behavior, so abnormal API usage can be detected and prevented.
- Development teams (for example, Node.js teams) can identify vulnerabilities. The operating system, drivers, API components, and network are monitored continuously, and sniffing tools are used to register and detect weak spots.
How JWT works to secure an API
- The user signs in to the client app with their login credentials.
- Once the credentials are verified, the application's API generates a JWT and signs it with the API's secret key, guaranteeing secure communication and data exchange.
- The API returns the token to the client application.
- After the client app receives the JWT, it verifies the token's authenticity and then presents it on every subsequent request, so users never have to resend their personal credentials.
- The JWT returned by the API is an encoded string divided into parts, each carrying vital data. The header describes the type of token and the signing algorithm used, in line with security standards.
- The payload carries the claims the users need, generally as standard key-value pairs; this is the part of the JWT the API actually consumes.
- Tokens can encode permission levels: normal users may only review information, while highly privileged users can edit data or issue payments.
- Every interaction that passes through the API must first be secured with a JWT.
- Client applications decode the token when they receive it and validate its source before proceeding. This step ensures the content has not been altered and is safe to use.
A minimal sketch of issuing and verifying such a token appears below.
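Here is a minimal, hedged sketch of the issue-and-verify flow described above, using Python and the PyJWT library. The secret key, claim names, and expiry window are illustrative assumptions, not part of any specific product.

```python
# Sketch only: issuing and verifying a JWT with PyJWT (pip install PyJWT).
import datetime
import jwt  # PyJWT

SECRET_KEY = "replace-with-a-strong-random-secret"  # assumption: loaded from secure config

def issue_token(user_id: str, role: str = "user") -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "sub": user_id,                               # subject: whom the token identifies
        "role": role,                                 # custom claim for permission level
        "iat": now,                                   # issued-at
        "exp": now + datetime.timedelta(minutes=30),  # expiry
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError or jwt.InvalidTokenError on failure,
    # so tampered or stale tokens are rejected before any data is served.
    return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])

token = issue_token("alice")
print(verify_token(token)["sub"])  # -> alice
```

In a real API, `verify_token` would run in middleware on every request, and the `role` claim would gate actions such as editing data or issuing payments.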
Applications face constant threats and hacking attempts, but an API secured with JWTs keeps the application and its confidential data protected. Adding JWT security to your API is affordable and effective, and JWT scales well for heavy-load web applications, making it one of the best available technologies for delivering a secured API.
<urn:uuid:6a382ab8-efd0-4a9a-9522-76db4ddc87c1>
CC-MAIN-2022-40
https://www.appknox.com/blog/how-jwt-helps-in-securing-your-api
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00454.warc.gz
en
0.923894
1,444
3.140625
3
Since our way of life depends on computer infrastructure and the digital technology that operates it, cybersecurity has become the talk of the town. It has been reported that Americans worry far more about identity theft or theft of their financial details than they do about being shot or otherwise injured. In a 12-month period between 2013 and 2014, the FBI revealed that a staggering 519 million financial records were stolen by hackers in the US alone. The financial impact of these attacks is, of course, enormous; cybercrime is estimated to have cost the global economy more than $445 billion.
Large data breaches, and even the vulnerabilities that might lead to them, have been highly publicised in recent years. The issue of cybersecurity is becoming increasingly important to governments, businesses and individuals alike. However, while people have certainly heard of the term cybersecurity, many are confused about what should be done. So what can you do to prevent coming under a cybersecurity attack?
- Automatic software updates: Many software programs connect and update automatically to defend against known risks. While this might be irritating, having up-to-date protection is very important.
- Get a firewall: An efficient firewall will block most viruses, malware and similar threats.
- Maintain your computer: Keep your security software, web browser and operating system updated.
- Filter for spam: Spam emails can carry malicious software and phishing scams, some aimed directly at businesses. A good spam filter will block most of it, making your email system safer.
- Scan all devices: Scan any devices such as USB sticks to ensure no viruses or malware are introduced.
- Use a web vulnerability scanner: Regularly scan your website with a web vulnerability scanner to detect vulnerabilities that would allow hackers into your site.
What cybercriminals are particularly after is people's personal and financial data; the bigger the customer database, the greater the prize for them. Website owners therefore need to make sure that their website can withstand attacks. Hackers use a variety of methods, including:
- SQL injection, which modifies SQL queries in order to gain access to data in the database (a toy demonstration appears at the end of this article).
- Cross-site Scripting (XSS), whereby a hacker executes malicious scripts in your visitors' browsers.
- Cross-Site Request Forgery (CSRF), a malicious exploit of a website whereby unauthorized commands are transmitted from a user that the website trusts.
- Distributed Denial of Service (DDoS), where a website is overloaded with requests in an attempt to make a machine or network resource unavailable to its intended users. This can also distract the owners of the website while a separate attack is carried out.
Once a website is already built, the best way to check whether it is vulnerable to any of these attacks is to run a web vulnerability scanner, such as Acunetix WVS, which identifies all variants of the possible vulnerabilities and offers advice on how to fix them.
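As promised above, here is a small, self-contained demonstration of the SQL injection idea, written in Python with the standard-library sqlite3 module. The table, column, and input values are invented purely for illustration.

```python
# Toy demonstration of SQL injection and its standard fix (parameterized queries).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # attacker-controlled value

# VULNERABLE: string concatenation lets the input rewrite the query logic.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # dumps every row: [('alice', 's3cret')]

# SAFE: a parameterized query treats the input strictly as data, never as SQL.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(safe.fetchall())  # -> [] (no user is literally named "' OR '1'='1")
```

A scanner like the one described above probes for exactly this class of flaw from the outside; fixing it in code means using parameterized queries everywhere.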
<urn:uuid:c2c416ec-c958-4591-a44d-f470200062fb>
CC-MAIN-2022-40
https://www.acunetix.com/websitesecurity/cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00454.warc.gz
en
0.941848
593
3.296875
3
DataOps, an umbrella term, refers to an organization's collection of technical practices, workflows, cultural norms, and architectural patterns surrounding how it integrates its data assets into its business goals and processes. Each organization's data pipelines are therefore likely to be configured differently; in general, however, DataOps efforts intend to enable four capabilities within the company:
1. Rapid innovation, experimentation, and testing, to deliver data insights to users and customers.
2. High data quality with extremely low error rates.
3. Synchronized collaboration across teams of people, environments, and technologies.
4. Orchestrated monitoring of data pipelines to ensure clear measurements and transparent results.
What Is A Data Pipeline Or Data Factory?
Data pipelines are a key concept in DataOps and encompass all the data processing steps needed to rapidly deliver comprehensive, curated data to data consumers. They are sometimes referred to as data factories, to lend tangibility to data assets and to reinforce the idea that data is a raw material to be processed. Data engineers may architect multiple data pipelines in their DataOps designs: pipelines between data providers and data preparers, and pipelines that serve data consumers. For example, one pipeline may flow application data into a data warehouse, another may move data from a data lake to an analytics platform, and further pipelines may feed back into themselves simply for processing, like sales data flowing back into a sales platform.
Technological and business conditions have only over the last decade set the stage for DataOps as a recognized emerging set of practices, developing out of older practices like DevOps. A formalized framework of best practices has yet to coalesce. Many believe that DataOps is still moving through the early stages of the hype cycle, though the marketplace has already welcomed many vendor solutions, including end-to-end DataOps platforms.
What Are The Foundational Methodologies Of DataOps?
DataOps stems from processes historically grounded in DevOps. Today, three schools of thought make up its main foundational principles: Agile Development, DevOps, and Lean Manufacturing.
Agile Development: A popular software development process, agile methods in DataOps are founded on rapid development cycle times and highly collaborative environments. Agile teams work in short cycles, called 'sprints', which may last from a day to a few weeks. Quick cycle times allow the team to reevaluate priorities often and make course corrections based on the needs of users. In practice, teams can hunt down bugs and roll out fixes faster (potentially within hours of discovery), and can deliver improved features as they are imagined instead of bundling updates and fixes into a single release that arrives late to market. The flexibility and speed of agile methods give developers an ideal framework for rapid development around growing DataOps assets.
DevOps (Development Operations): DevOps, a building block of DataOps, relies on the automation of repetitive tasks, like testing code, to produce an environment of continuous development and to accelerate the build lifecycle. DevOps practices allow teams to communicate and collaborate easily, release fixes and updates faster, and reduce time-to-resolution.
Although DataOps borrows liberally from the ideas of DevOps, DevOps is a process among technical roles on development teams, whereas DataOps serves not only technical roles but also non-technical ones, like data consumers, inside and outside the organization.
Lean Manufacturing: Lean Manufacturing is a set of principles and methods that aim to reduce waste while maintaining productivity in manufacturing settings. These principles have proved portable, and DataOps borrows the methods that suit it well, such as statistical process control (SPC), which has been applied to data factories to great effect. For data errors, SPC allows the data flowing through the DataOps pipeline to be tested for validity at every stage: input tests catch data-supplier inconsistencies, intermediate tests verify data integrity throughout processing, and output tests catch final data errors before data passes downstream to other consumers. (A toy version of such a test appears at the end of this article.)
To maximize data as an enterprise asset, DataOps takes a holistic approach focused on improving communications, integrations, and automation. Fundamentally, DataOps is founded on people, processes, and technology. DataOps platforms provide end-to-end data control, encompassing everything from data ingestion to analytics and reporting, whereas DataOps tools each target one of the six capabilities of DataOps:
Meta-Orchestration: The capability to orchestrate complex data pipelines, toolchains, and tests across teams, locations, and data centers.
Testing and Data Observability: The ability to monitor applications and production analytics, and to validate new analytics before deployment.
Sandbox Creation and Management: The capability to create provisional self-service analytics environments, and the tools to iterate on new ideas created in those sandboxes.
Continuous Deployment: The ability to develop, test, and deploy to production environments.
Collaboration and Sharing: An end-to-end view of the entire analytic system, fostering greater collaboration and sharing.
Process Analytics: The ability to measure analytics processes in order to understand weaknesses and improvements over time.
What Is A DataOps Framework?
At a small scale, companies can improve their DataOps processes and accuracy with specialized data tools that strengthen their pipelines. The overarching mission of DataOps, however, is an organization-wide culture change that puts data first and drives to maximize all data assets. The following framework elements help organizations think holistically about their people, processes, and technology:
1. Enabling Technologies: Use technologies such as IT automation and data management tools.
2. Adaptive Architecture: Deploy systems that allow for continuous integration (CI) and continuous deployment (CD).
3. Intelligent Metadata: Technology that automatically enriches incoming data.
4. DataOps Methodology: A game plan for deploying analytics and data pipelines while adhering to data governance policies.
5. Culture and People: Cultivation of an organizational ethos that values data and aims to maximize data assets.
What Is The DataOps Process?
DataOps is not DevOps, but DataOps processes have benefited significantly from DevOps, one of its foundational methodologies. DevOps contributes two capabilities that enable Agile development within DataOps: continuous integration (CI) and continuous deployment (CD).
Agile methods demand quick development times in the form of sprints; historically, though, running tests and deploying were manual steps, which made the process slow and error-prone. CI and CD capabilities automate exactly those time-consuming and risky parts of the workflow. DataOps adopts the CI and CD concepts in its data preparation and design, enabling the same agile thinking while automating processing in the DevOps manner, so that for data users the data factories seem to disappear.
Common stages for a DataOps workflow are:
1. Sandbox Management: Creating an isolated environment for experimentation.
2. Development: Designing and building apps.
3. Orchestration: Two orchestration stages occur; the first orchestrates a representative copy of data for testing and development, and the second orchestrates the data factory itself.
4. Test: The testing stage targets code rather than data (in the orchestration stages, testing the data itself is the primary task).
5. Deploy: As in DevOps, after successful code tests the code is deployed to production.
6. Monitor: Monitoring occurs at every stage, with particular attention to the end of the data factory, so that data exits pristine and honest.
What DataOps Roles Are There?
A defining characteristic of DataOps is the number of roles that interact with and contribute to the accumulation, processing, and use of a company's data assets. At the extreme, companies whose data assets are their main value proposition have the most immediate need to understand the people engaging with proprietary information. DataOps roles can be broadly classified as data suppliers, data preparers, and data consumers.
Data Suppliers: Data suppliers are the ultimate data owners, like database administrators, responsible for data management, processing, and user access control in a company's DataOps.
Data Preparers: Because of the ever-growing complexity of DataOps, a middle layer of roles is developing between data suppliers and data consumers. Data engineers build the pipelines that refine raw data into usable, valuable, and monetizable data. Data curator is an emerging role that starts from consumers' needs and optimizes DataOps content accordingly, giving the business the context it needs to enhance final data assets. Another role emerging from heightened data governance requirements is the data steward, responsible for developing company data governance policies and ensuring compliance.
Data Consumers: Data consumers receive the final data output and form the largest group interacting with DataOps assets. Several roles have emerged: data scientists apply data to solve business problems, data citizens are frontline workers in need of real-time information, and data developers need accurate DataOps as they build business applications on top of those pipelines.
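To ground the SPC-style data test promised earlier, here is a small, self-contained Python sketch. The control-limit rule, sample values, and thresholds are illustrative assumptions rather than part of any particular DataOps product.

```python
# Sketch of a statistical-process-control check for one pipeline stage:
# flag incoming values that fall outside the control limits learned from
# historically "good" batches.
import statistics

def out_of_control(values, mean, stdev, sigma=3.0):
    """Return the values outside mean +/- sigma * stdev (the control limits)."""
    lower, upper = mean - sigma * stdev, mean + sigma * stdev
    return [v for v in values if not (lower <= v <= upper)]

history = [10.1, 9.8, 10.0, 10.2, 9.9, 10.05]          # baseline from past batches
mean, stdev = statistics.mean(history), statistics.stdev(history)

incoming = [10.0, 9.95, 14.7]                          # 14.7 is an injected error
bad = out_of_control(incoming, mean, stdev)
if bad:
    print(f"Quarantine batch: out-of-control values {bad}")  # fires on 14.7
```

In a real pipeline, a check like this would run as one of the input, intermediate, or output tests described above, quarantining a batch instead of silently passing bad data downstream.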
"FirstName": "First Name", "LastName": "Last Name", "Email": "Business Email", "Title": "Job Title", "Company": "Company Name", "Phone": "Business Telephone", "LeadCommentsExtended": "Additional Information(optional)", "LblCustomField1": "What solution area are you wanting to discuss?", "ApplicationModern": "Application Modernization", "InfrastructureModern": "Infrastructure Modernization", "DataModern": "Data Modernization", "GlobalOption": "If you select 'Yes' below, you consent to receive commercial communications by email in relation to Hitachi Vantara's products and services.", "EmailError": "Must be valid email.", "RequiredFieldError": "This field is required."
<urn:uuid:b34937db-d6dc-4502-b505-a5725a202224>
CC-MAIN-2022-40
https://www.hitachivantara.com/en-in/insights/faq/what-is-dataops.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00454.warc.gz
en
0.908579
2,291
3.0625
3
Biometrics is a pretty straightforward term: bio = life and metrics = measurement. Therefore, the definition of biometrics is simply the measurement of life. More specifically, biometrics is the measurement of the biological characteristics unique to each individual in order to confirm identity. Though the primary function of biometrics is identification and access control, some biometric technology functions as a method of surveillance among groups of people. The primary appeal is that biological traits allow quick and accurate identification of people without over-reliance on complicated passwords, overused PINs, or easily misplaced security tokens. Biometric authentication is quick, convenient, and more secure than outdated password systems, and it may even replace antiquated password requirements soon.
Examples of Biometrics
Biometrics have been used for centuries as a means of identification. A signature on a contract, fingerprints in a database, and DNA collected from a crime scene are all examples of biometric identification. Now, thanks to advances in technology, biometric identification and authentication can come in many forms. We can refine the definition of biometrics by breaking it down into two subgroups, physiological and behavioral, each of which requires different technologies and standards.
Physiological Biometrics Examples
Physiology refers to characteristics of the body, which vary significantly from person to person. Fingerprints, facial recognition, ear shape, and hand geometry are just a few examples of physiological biometrics. Others include:
- Iris recognition: Measures features surrounding the iris
- Retina recognition: Measures veins in the back of the eye
- Vein recognition: Scans hands or fingers for veins using specialized infrared systems
Understanding Behavioral Biometrics
Behavioral biometrics refers to the measurement of behavioral characteristics like gestures and voice patterns. This can include the way a user holds his phone, how he swipes his screen, or which shortcuts he uses to access his favorite apps. Behavioral biometrics is an ongoing process, collecting dozens upon dozens of data points and combining them into useful bits of information. The process is very different from physiological biometrics, which only requires a single scan to confirm identity. This makes behavioral biometrics significantly harder to hack and protects users from data breaches that might otherwise be hard to recover from.
To be clear, behavioral biometrics cannot replace traditional passwords. Rather, behavioral biometrics improves security by analyzing a user's behavior against expected behavioral patterns. For example, a financial institution may collect many months' worth of data from a single user to help identify and flag fraud concerns; if a behavior is flagged, the bank may restrict access to certain functions or log the user out altogether. (A toy sketch of this kind of check appears at the end of this article.)
Growing Prevalence of Biometric Technology
Biometric technology is seeing steady growth in popularity, with an average CAGR of almost 20 percent. At this rate, biometrics will be worth $60 billion by 2025! Contributing to this boom is the expansion of biometrics from sectors like government, security, and transportation into a much broader scope of applications, including healthcare, finance, electronics, hospitality, fitness, retail, and entertainment. Of course, with the growing prevalence of biometric technology comes increased concern.
After all, biometric data can still be stolen (though it is significantly harder to steal) and is extremely difficult to replace. Companies that utilize biometric technology without a substantial emphasis on data security not only risk brand reputation and financial loss; they also risk compromising valuable personal information.
Maintaining Biometrics Security Standards
Biometrics have the potential to make life easier in a million different ways: increasing security, speeding user access, identifying health concerns, and offering customized entertainment suggestions, to name a few. But with the technology comes great responsibility, too. That's why the FIDO Alliance developed a set of protocols for biometric technology testing. The organization aims to ensure that biometric standards are upheld and to promote widespread adoption of biometric technology. FIDO explains that standard passwords are not secure, which puts both individuals and society as a whole at risk of fraud and other malicious attacks.
Last year, FIDO announced the roll-out of its new project, FIDO2. Together with the World Wide Web Consortium (W3C), the international organization that sets the standards for web-based technologies, FIDO2 aims to bring the secure FIDO protocol into popular browsers and operating systems. Perhaps the best way to describe FIDO2 is 'FIDO for browsers' and, in some cases, 'FIDO for mobile.' Popular browsers aligning themselves with FIDO2 include Chrome, Edge, and Mozilla's Firefox.
Biometric technology is growing substantially and may soon be part of every household and public space. To protect consumer information and privacy, biometric technology, and the software that stores it, should always be tested by an accredited third party.
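As a concrete illustration of the behavioral flagging described earlier, here is a deliberately simple Python sketch. The feature (keystroke intervals), baseline values, and the three-sigma threshold are all assumptions for the example; production systems combine dozens of signals.

```python
# Toy behavioral-biometrics check: compare a session's keystroke cadence
# to the user's historical baseline and flag large deviations.
import statistics

baseline = [0.18, 0.21, 0.19, 0.22, 0.20]   # seconds between keystrokes, past sessions
session = [0.45, 0.50, 0.48]                # current session

mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
z = abs(statistics.mean(session) - mu) / sigma

if z > 3.0:  # assumed risk threshold
    print("Behavior deviates from profile: require step-up authentication")
```

A bank applying this idea would not lock the account on one odd signal; as described above, it would restrict sensitive functions or ask for additional verification while the score stays elevated.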
<urn:uuid:3aac38ce-f6e2-4aae-8312-bf7b6fe1c8f8>
CC-MAIN-2022-40
https://www.ibeta.com/definition-of-biometrics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00654.warc.gz
en
0.915737
1,020
3.5625
4
Four Types Of IT Maintenance That You Need To Know About
These days, maintenance is essential to ensure the security of your company's software and systems. Regular IT maintenance is crucial because it keeps the IT system operating effectively. When considering the different types of IT maintenance, keep two aspects in mind. The first is that maintenance covers both hardware and software, the components that most influence your system's operation. The second is that several types of maintenance can run at the same time; for instance, corrective maintenance steps in when preventive maintenance fails to anticipate a problem. Let's look at the four types of IT maintenance.
Predictive IT Maintenance
This type of maintenance is carried out to diagnose possible failures in your system and to avoid them before they occur. It works primarily through monitoring of your computer systems: one or more technicians track the operation of equipment and systems using monitoring software, which watches metrics such as battery levels, CPU temperature, and many others. A minimal sketch of this kind of monitoring check appears below.
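As a concrete, hedged illustration of the monitoring that predictive maintenance relies on, here is a short Python sketch using the psutil library (pip install psutil). The alert thresholds are assumptions chosen for the example, not industry standards.

```python
# Sketch of a predictive-maintenance health check: poll basic system
# metrics and raise alerts when they cross assumed thresholds.
import psutil

alerts = []
if psutil.cpu_percent(interval=1) > 90:
    alerts.append("CPU load sustained above 90%")
if psutil.virtual_memory().percent > 90:
    alerts.append("RAM usage above 90%")
if psutil.disk_usage("/").percent > 85:
    alerts.append("Disk nearly full")

battery = psutil.sensors_battery()  # None on machines without a battery
if battery is not None and battery.percent < 15 and not battery.power_plugged:
    alerts.append("Battery critically low")

for alert in alerts:
    print("MAINTENANCE ALERT:", alert)
```

A real monitoring agent would run checks like these on a schedule, log the readings, and notify a technician before a trend turns into a failure.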
Preventive IT Maintenance
Compared with predictive maintenance, preventive IT maintenance is performed far more frequently. It is carried out to prevent possible errors and improve the functioning of your system, and experience has repeatedly shown it to be useful in many respects: it can detect weak points in your system before they affect its functions, and it decreases the number of system failures and repairs. For software, this type of maintenance includes operations like creating backup copies, clearing hard disk space and RAM, and scanning the computer with antivirus software. Note that there are two kinds of preventive maintenance, known as 'active preventive maintenance' and 'passive preventive maintenance.'
Corrective IT Maintenance
Corrective maintenance applies when predictive and preventive maintenance have not been able to avoid an error. For instance, a hardware breakdown requires immediate repair or replacement before the system can function again. This type of maintenance is crucial for solving the problem, determining the cause before it affects other parts of the system, and preventing it from happening again.
Evolutionary IT Maintenance
Evolutionary IT maintenance is not intended to prevent or correct errors but to develop the computing resources that are available. Technology is developing rapidly, which means the tools are constantly changing. Evolutionary maintenance aims to ensure that computer systems do not become outdated but stay current, offering customers the best technology choices possible within a company's or organization's capabilities. Depending on requirements, this form of maintenance can involve anything from software updates to full equipment or device replacements.
How Do You Know What's Best For You?
So far, we've looked at the most common forms of IT maintenance. As you would expect, given the difficulty of this kind of work, it is normally left to professionals such as system managers or specialist companies that provide maintenance services to businesses, professionals, and individuals.
Complex IT maintenance tasks also involve company security. For this reason, many businesses outsource to a professional IT support company with a team of experts who have the required experience and skills. IT maintenance security also involves regular system and software backups, guaranteeing the safe storage of your company's important information. All relevant IT documentation needs to be immediately accessible for successful backup and recovery in case of a major server failure or a cyber-attack. A decrease in the reliability of your computing functions would hurt employee morale, as activities performed on malfunctioning software get interrupted, so make sure your software is kept correctly up to date.
You can handle your IT maintenance in a variety of ways, including:
Hire an in-house IT expert
This is only an option for bigger businesses that can bear the high compensation that seasoned in-house IT experts command. Depending on the servers' size and use, a single IT professional can rarely provide all the required IT support; in that case you'll have to recruit a team, at a higher cost.
Delegate the IT maintenance duties to current workers
This can appear to be a more cost-effective option for small companies, but it may lead to inadequate performance both in the employee's original job and in the maintenance tasks themselves.
Outsource the IT maintenance
This avoids burdening other workers while remaining a cost-effective solution for small companies. You'll also benefit from the expertise and experience of a committed, specialist team.
To sum up: to function properly, computers, like any other electronic device, need proper maintenance. The importance of technology in modern life and business makes IT maintenance a necessity, and the complexity of computer equipment means that its maintenance requires particular attention. With that in mind, consider checking out 1-800 Office Solutions. We are ready to support you with all your IT maintenance needs, so don't hesitate to contact us at any time and for any need.
<urn:uuid:04e44696-73a3-4cd1-9f13-c76cfa273bbe>
CC-MAIN-2022-40
https://1800officesolutions.com/four-types-of-it-maintenance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00654.warc.gz
en
0.934
1,127
2.625
3
Hybrid sheep embryo with human stem cells
Working towards growing transplantable human organs inside animals, Stanford researchers have successfully created hybrid sheep embryos containing human stem cells, in which one in every 10,000 cells was human. The work is another step towards organ transplantation from organs grown inside animals; the same research team previously made human-pig embryos.
Bruce Whitelaw, a professor of animal biotechnology at Scotland's Roslin Institute, described the production of embryos containing human cells as a remarkable step for genetic science.
To create the hybrid embryos, researchers introduced human stem cells into sheep embryos, which were allowed to develop for 28 days. The hope is that the stem cells could eventually grow to replace a missing organ. Dr. Pablo Ross of the University of California said that although one in 10,000 cells in these sheep embryos is human, that is still not enough to generate an organ.
Organ development in sheep and pigs is very similar to humans
The researchers say that organ development in sheep and pigs closely resembles that in human beings, but there are several advantages to using sheep embryos: for a pig they typically transfer 50 embryos to one recipient, while with sheep they transfer only four embryos to one recipient. This means fewer embryos are needed for one experiment.
Using animals as hosts for developing human organs for transplantation could shorten organ waiting lists; in the UK last year, almost 500 people died while waiting for a transplant. Because the patient's own cells could be used in the procedure, the resulting organs would be genetically compatible with the patient receiving them. The researchers say they are investigating all alternatives to provide organs to people in need.
<urn:uuid:bffac8e5-c81c-4298-a0f0-f78638d5a5a2>
CC-MAIN-2022-40
https://areflect.com/2018/02/28/geneticists-made-hybrid-sheep-embryo-with-human-stem-cells/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00654.warc.gz
en
0.942266
332
3.375
3
Google Chrome is a web browser developed by Google. It used the WebKit layout engine until version 27 and, with the exception of its iOS releases, from version 28 onward uses the WebKit fork Blink. It was first released as a beta version for Microsoft Windows on September 2, 2008, and as a stable public release on December 11, 2008. Net Applications has indicated that Chrome is the third-most popular web browser by user base, behind Internet Explorer and Firefox. In September 2008, Google released the majority of Chrome's source code as an open source project called Chromium, on which Chrome releases are still based. Notable components that are not open source are the built-in PDF viewer and the built-in Flash player.
Chromium is freely available in the Kali repositories; Google Chrome, however, is not. There are a few different ways to install and run Google Chrome in Kali Linux.
Installing Google Chrome in Kali Linux
There are two ways to install Google Chrome:
- Download and install the .deb package from Google.
- Add the Google Chrome repositories (we are skipping the Debian repository).
There are many posts in search results about adding the Debian repositories to install Google Chrome in Kali Linux. Readers should note, however, that:
- Option 1 (downloading and installing the .deb package) involves a manual download and will probably not offer automatic updates via apt-get update/upgrade.
- Kali does not recommend adding unsupported repositories such as Debian's.
- By adding Debian repositories you are likely to break something in your Kali installation.
- You will end up updating packages that are incompatible or untested in Kali Linux (such as a new kernel).
- Lastly, why add a whole Debian repository when you can get Google Chrome straight from Google without jeopardizing your Kali installation?
Running Google Chrome in Kali Linux
You have the following choices for running Google Chrome in Kali Linux:
1. Run Google Chrome as a standard user in Kali Linux:
- Create a standard non-root user and run Google Chrome. See: How to add or remove a user (standard user/non-root) in Kali Linux?
2. Run Google Chrome as the root user in Kali Linux:
- By modifying the /opt/google/chrome/google-chrome file.
3. Run Google Chrome as a standard user while logged in as root in Kali Linux:
- Run Google Chrome using gksu.
- Run Google Chrome using sux.
Of these three options, option 1 is the most secure, but it somewhat defeats the purpose of using Kali Linux in the first place. Option 2 is something of a security risk, but then again, you are probably already running IceWeasel, Firefox, Opera or who knows what other browser while signed into your Google account. Option 3 is probably the best way to go about it. Follow Part 2 of this guide, How to Install Google Chrome in Kali Linux? – Part 2 – Installation, for installation instructions.
<urn:uuid:aa7648ad-3448-4d40-a910-9fcfeccee562>
CC-MAIN-2022-40
https://www.blackmoreops.com/2013/12/01/install-google-chrome-kali-linux-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00654.warc.gz
en
0.881059
618
2.5625
3
Your company has started to use artificial intelligence (AI), but are you effectively managing the risks involved? AI is a new growth channel with the potential to boost productivity and improve customer service. However, particular management risks need to be assessed in cybersecurity. Start by considering current AI trends to put this risk in context.
Why Is AI an Emerging Cybersecurity Threat?
Artificial intelligence is a booming industry right now, with large corporations, researchers, and startups all scrambling to make the most of the trend. From a cybersecurity perspective, there are a few reasons for concern, and your threat assessment models need to be updated based on the following developments.
Early Cybersecurity AI May Create a False Sense of Security
Most machine learning methods currently in production require users to provide a training data set. With this data in place, the application can make better predictions. However, end-user judgment is a major factor in determining which data to include. This 'supervised learning' approach is open to compromise if hackers discover how the supervised process works: in effect, hackers could evade detection by mimicking safe code.
AI-based Cybersecurity Creates More Work for Humans
Few companies are willing to trust their security entirely to machines. As a result, machine learning in cybersecurity has the effect of creating more work. WIRED magazine summarized this capability as follows: "Machine learning's most common role, then, is additive. It acts as a sentry, rather than a cure-all." As AI and machine learning tools flag more and more problems for review, human analysts will need to review this data and decide what to do next.
Hackers Are Starting to Use AI for Attacks
Like any technology, AI can be used for defense or attack. Researchers at the Stevens Institute of Technology have demonstrated that fact: in 2017, they used AI to successfully guess 25% of a set of LinkedIn passwords after analyzing 43 million user profiles. In the hands of defenders, such a tool could help educate end users about weak passwords; in the hands of attackers, it could be used to compromise security.
The Mistakes You Need to Know About
Avoid the following mistakes, and you're more likely to have success with AI in your organization.
1. You Haven't Thought Through the Explainability Challenge
When you use AI, can you explain how it operates and makes recommendations? If not, you may be accepting (or rejecting!) recommendations without being able to assess them. This challenge can be mitigated by reverse engineering the recommendations the AI makes; one simple probe of this kind is sketched below.
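To make the reverse-engineering idea concrete, here is a hedged Python sketch using scikit-learn's permutation importance: perturb one input feature at a time and watch how much the model's score drops, revealing which inputs actually drive its recommendations. The model and synthetic data are assumptions chosen purely for illustration.

```python
# Sketch: probing a black-box classifier with permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")  # features 0 and 2 dominate
```

If a model's decisions turn out to hinge on a feature that makes no business sense, that is exactly the kind of unexplainable behavior this mistake warns about.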
2. You Use Vendor-provided AI Without Understanding Their Models
Some companies decide to buy or license AI from others rather than building the technology in house. As with any strategic decision, there is a downside to this approach: you cannot blindly trust a vendor's suggestion that AI will be beneficial. You need to ask tough questions about how the systems protect your data and what systems the AI tools can access. Overcome this challenge by asking your vendors to explain their assumptions about data and machine learning.
3. You Don't Test AI Security Independently
When you use an AI or machine learning tool, you need to entrust a significant amount of data to it. To trust the system, it must be tested from a cybersecurity perspective; for example, consider whether the system can be compromised by SQL injection or other hacking techniques. If a hacker can compromise the algorithm or data in an AI system, the quality of your company's decision making will suffer.
4. Your Organization Lacks AI Cybersecurity Skills
To carry out AI cybersecurity tests and evaluations, you need skilled staff. Unfortunately, there are relatively few cyber professionals who are competent in both security and AI. Fortunately, this mistake can be overcome with a talent development program: offer your cybersecurity professionals the opportunity to earn certificates, attend conferences, and use other resources to increase their AI knowledge.
5. You Avoid Using AI Completely for Security Reasons
Based on the previous mistakes, you might assume that avoiding AI and machine learning entirely is a smart move. That might have been an option a decade ago, but AI and machine learning are now part of every tool you use at work. Attempting to minimize AI risk by ignoring this technology trend will only expose your organization to greater risk. It is better to seek proactive solutions that leverage AI; for instance, you can use security chatbots such as Apollo to make security more convenient for your staff.
6. You Expect Too Much Transformation from AI
Going into an AI implementation with unreasonable expectations will cause security and productivity problems. Resist the urge to apply AI to every business problem in the organization: such a broad implementation would be very difficult to monitor from a security point of view. Instead, take the low-risk approach of applying AI to one area at a time, such as automating routine security administration tasks, and then build from there.
7. Holding Back Real Data from Your AI Solution
Most developers and technologists like to reduce risk by setting up test environments. It is a sound discipline and well worth using. When it comes to AI, however, this approach has its limits. To find out whether your AI system is truly secure, you need to feed it real data: customer information, financial data, or something else. If all this information is held back, you will never be able to assess the security risks or the productivity benefits of embracing AI.
Adopt AI with an Eyes-Wide-Open Perspective
There are certainly dangers and risks associated with using AI in your company. However, these risks can be monitored and managed through training, proactive management oversight, and avoiding these seven mistakes.
<urn:uuid:c11989ba-d409-46ca-86fb-56b3c55fd06c>
CC-MAIN-2022-40
https://www.avatier.com/blog/the-top-7-most-common-cybersecurity-mistakes-made-with-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00654.warc.gz
en
0.932174
1,174
2.625
3
Are unpatched vulnerabilities leaving you at risk? Injection attacks account for almost half of all attack types, with hackers exploiting vulnerabilities in operating systems and applications to penetrate networks and databases. Download this whitepaper and learn:
- How attackers are using injection attacks to achieve a variety of nefarious goals;
- The prominent types of injection attacks and vulnerabilities;
- Steps you can take to help protect your systems and data.
<urn:uuid:e4761ec4-7044-4d5a-a3cc-f3c6d126416b>
CC-MAIN-2022-40
https://www.bankinfosecurity.com/whitepapers/what-you-need-to-know-about-injection-attacks-w-3867
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00654.warc.gz
en
0.915851
93
2.515625
3
- October 31, 2018
- Posted by: Kerry Tomlinson, Archer News
- Categories: Archer News, Cyberattack, Hacking, Industrial Control System Security, Ransomware
Your beer can be hacked. But a nearly 200-year-old brewery is taking on new strategies to protect your pilsner.
Back in 1842, workers hauled chunks of ice from rivers and lakes into miles of cellars at the Pilsner Urquell brewery in the Czech Republic. There, they started the pilsner brewing trend, still alive almost two centuries later. But now, this old-style brewery is digital, which means it's open to what you might call a cy-beer attack. Computers help turn out 120,000 bottles per hour. Unless something stops them.
Barrels inside the Plzensky Prazdroj brewery. Image: Plzensky Prazdroj
In October 2017, the brewing computers mysteriously showed the "blue screen of death" and shut down. "During three days, every morning at 4:00," said Ondrej Sykora, who works to protect the Plzensky Prazdroj (Pilsner Urquell) brewery in the town of Plzen. "It was just for, let's say, five to ten minutes," he added. Not long, but long enough to possibly leak thousands of dollars away.
They finally tracked down the source: a worker had plugged in a USB drive that let malware seep into the brewing system. He or she is not the first employee to bring down the house with a USB drive. A critical systems worker in the Middle East tried to watch La La Land on company computers last year and accidentally downloaded malware.
Plzensky Prazdroj brewery in Plzen, Czech Republic. Image: Plzensky Prazdroj
Was the brewery worker watching a movie? "We don't know exactly what," said Sykora. "Probably yes, because it's the standard situation." So standard that they must protect the systems not only from outside hackers, but from their own people making errors, he said.
Sykora spoke at the Center for Industrial Cybersecurity summit in Madrid in October 2018, sharing his work to help others running factories and power plants. Plzensky Prazdroj monitors its systems for signs of attack, separates networks so bad guys can't jump from one to another, and checks for security gaps, among other strategies, he said. Not to keep the original Pilsner's 176-year-old recipe safe from spies, but to keep the beer flowing. "To prevent all of our technologies against unplanned shutdowns. That's the main thing," Sykora said. "We want to be ready, to let's say, protect, and also if something happened, to very quickly start again the processes."
Tanks inside the Plzensky Prazdroj brewery. Image: Plzensky Prazdroj
His co-workers joke that they don't need cyber protection. "They just say, 'We are the brewery, not the nuclear power station,'" he said with a chuckle. But these days, cyber attackers can cost companies millions of dollars, stopping production, damaging equipment, even hurting people. "We have to think about it. We have to maybe think a little bit more effective about it," Sykora said.
Too good to hack?
Plzensky Prazdroj's brews are so popular that every other beer drunk in the Czech Republic comes from the brewery, according to Sykora. He'd like to believe cyber attackers will stand down in the name of good beer. "We cannot protect or prevent everything," he said with a smile. "I just want to tell them, 'Don't do it because you destroy the beer, maybe what you like.'"
Pilsner Urquell in its traditional rounded mug. Image: Archer News Network
Probably Whisky Drinkers
Unfortunately, some cyber crooks may prefer a different drink.
They’ve already attacked a couple of breweries in the past five years. Hackers stole passwords and other customer info from another 19th-century brewer, Marston’s, in Britain in 2013. Sykora has some guidance for other breweries that are starting their cybersecurity journey now: learn about what kinds of attacks may come your way. “If you don’t know for what you fight or against what you fight, you cannot design a good solution,” he said. “First, to talk, to study, for example, some last attacks,” he explained. “Then think about how to how to protect.” Main image: Pilsner Urquell in a traditional mug. Image credit: Archer News Network
<urn:uuid:2403b379-f686-47ce-81ab-b96cf198bc40>
CC-MAIN-2022-40
https://archerint.com/lift-a-glass-to-cy-beer-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00054.warc.gz
en
0.92563
1,100
2.65625
3
AI: The Good, The Bad and The Ugly
From Siri to Alexa to Watson, we are living in an AI world; it understands when we ask it to play our favorite song, and it knows what book we will want to read next. AI can recognize the face of an individual, and the distinctive look of a cancer cell. It understands concepts and context. It can write a song that sounds like the Beatles. From the weather to the shop floor to robotic surgery, AI's massive processing power far outstrips the human brain in computational ability, and it is progressing in conventionally "human" areas like strategic thinking, judgement and inference.
What if the right AI technology falls into the wrong hands? Are we right to be frightened? Leading thinkers like Stephen Hawking, Elon Musk and Bill Gates have declared their deep fear that AI, created by humans, could end up controlling us. I can tell you that all the world-changing gifts of AI, provided in large part by open collaboration on its development, can be exploited by the criminal hacking community. These black hats can do it whilst enjoying the same access as the white hats, those committed to the values of open source. So yes: AI can easily be twisted to become a cybercriminal's most valuable ally, leveraging its very "intelligence" and self-sufficiency as a partner-in-crime unlike any ever seen before.
So how can we ensure that AI applications are used in ways that benefit individuals and society? How can we tell whether the AI software underlying many of the platforms we use aims to emulate the good guy's brain or the criminal mind, or whether it started off well-intentioned and was perverted along the way?
The Basic Characteristics of AI
Let's start with a short primer on the fundamental nature of AI as it relates to the world of cybersecurity. On the simplest level, there are two predominant types of applications: supervised AI and unsupervised AI.
Supervised AI applications are pre-programmed offline, through a fully managed process, and then released to do their job. These applications are typically used for the automatic identification of certain images and data, such as faces in photos and certain kinds of structured and unstructured text and context, via training sets. They are trained by exposure to large, relevant data sets that allow them to generalize their classification capabilities. Siri and Alexa, for example, use Natural Language Processing (NLP) and speech recognition (speech-to-text translation), while Watson uses mainly NLP to answer questions. These applications get smarter and smarter all the time, which is precisely what makes Hawking et al. so nervous.
Unsupervised AI applications are generally also pre-trained, but can learn from the environment and adapt accordingly, creating new patterns on the fly. These applications study and process their environment to create new classes of behaviors, then adapt, independently, to better execute various decision-making functions, mirroring human thinking patterns and neural structures. Examples include applications able to learn an individual's text message or email style, browsing behavior and interests. Facebook and Google employ this approach to study user behaviors and adjust their results (and adverts) accordingly.
When Things Get Ugly: AI's Malicious Potential
Both kinds of AI application, supervised and unsupervised, can be used maliciously in different ways.
Actors looking to do harm may use supervised AI applications to target confidential or embarrassing data, or any data that can be used for ransomware. Imagine a phone infected with malware that has been "trained" to identify and retrieve potentially compromising texts, photos or voice messages. Unsupervised AI applications can do this too, and can also mimic human behaviors. For example, an unsupervised AI application could imitate a manager's style of writing and use it to direct one of the manager's employees to download malware files, make a shady transaction, or relay confidential information.
The risks and dangers are enormous. Nevertheless, the security industry is not adequately discussing how to prevent AI from being abused by hackers. When Bill Gates and Stephen Hawking warned about AI turning on humans, they were not talking about the risk of cybercriminals helping to "set AI free", but perhaps they should be. One such attack was carried out in Israel, in which Waze was hacked to report fake traffic jams. If an unsupervised AI application were to perpetuate these kinds of bogus reports unabated, we'd see real-life chaos on the roads.
Six Principles for Preventing the Abuse of AI
Always map AI objects in your environment – Know where all AI software objects and applications exist in your environment (which servers, endpoints, databases, equipment/accessories, etc.). This is not a trivial task; it forms the basis for all other methods of preventing AI from being abused by cybercriminals. To do this, we need new methods of analyzing code and pinpointing mathematical evidence of AI that we would not find in regular code (regression formulas, usage of specific optimized linear algebra libraries, etc.).
Do an AI vulnerability assessment – There are currently various tools and procedures that help evaluate possible vulnerabilities or weaknesses in a product or its code, but they have not yet been adapted to the era of AI code and solutions, and they should be. Such tools would begin, for example, by looking for basic API flaws, with the goal of finding holes in the AI application that could potentially be exploited.
Understand intent – Using any software with AI should be accompanied by a thorough understanding of that AI's capabilities. The AI application vendor should provide an AI spec that describes the potential of the AI code, listing, for example, whether it can adapt and the types of data it can classify (voice, text or traffic classification). This will help indicate the risks associated with the AI application and give a sense of what could happen if the AI were used in the wrong way. If the vendor does not provide this spec, the AI software should not be deployed.
Identify data content type – After understanding which AI objects are in our environment and what their capabilities are, we should determine what type of data exists virtually "nearby" these AI objects, meaning what data is within the AI's reach. This knowledge lets us assess which data the AI can reach and whether the AI object is really necessary in that environment.
Monitor APIs – Establish control over the API activity in your environment by identifying the types of APIs with which the AI object can potentially integrate. In principle, API activities should be logged and can even be wired up to send out alerts. If, for example, Siri was used to convert speech into an email or text messages, or if an app used the device's graphics processing unit (GPU) to retrieve data, you should be alerted to it, or at least have a record of it. A minimal sketch of this kind of audit logging follows.
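As a concrete, hedged sketch of the API-monitoring principle above, here is a small Python example: a decorator that logs every call an AI component makes and raises an alert when the call rate exceeds an assumed quota. The function name and the 10-calls-per-minute rule are illustrative assumptions.

```python
# Sketch: audit-log every API call an AI component makes and alert on
# unusually high call rates (a simple quota/throttling check).
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
call_times = []  # timestamps of recent audited calls

def audited(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        now = time.time()
        call_times.append(now)
        logging.info("AI component invoked %s", func.__name__)
        recent = [t for t in call_times if now - t < 60]
        if len(recent) > 10:  # assumed quota: 10 calls per minute
            logging.warning("Possible abuse: %d calls in 60s", len(recent))
        return func(*args, **kwargs)
    return wrapper

@audited
def text_to_speech(text):
    """Placeholder for a real API the AI object might integrate with."""
    return None
```

Wrapping sensitive APIs this way gives you the log trail and alerting the principle calls for, without changing the APIs themselves.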
Train employees – Companies need to teach their employees about AI chicanery. Just when people are learning not to open spear-phishing emails, along comes AI, which can operate at new levels of deceit. Imagine you're having an email exchange with someone, and they write that they will send you a resume. A half-hour later it shows up. Who wouldn't open that? Who would suspect it was a malicious chatbot?
The AI revolution holds both great promise and, potentially, great peril. Like all new technologies, it is vulnerable to exploitation by the unscrupulous. That is why AI cybersecurity, developed and deployed by those deeply cognizant of our unnatural neural foe, is a key imperative for the 21st century.
<urn:uuid:90308319-5a9d-46d9-a4f5-2eade93d3178>
CC-MAIN-2022-40
https://empow.co/ai-the-good-the-bad-and-the-ugly/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00054.warc.gz
en
0.937262
1,603
2.890625
3
Nowadays, there are so many devices that it can be challenging to remain secure while browsing the internet, shopping online, or communicating with co-workers. Businesses need to implement strong security strategies to protect themselves from malicious actors. Hackers are finding innovative ways to gain unauthorized access to company computer systems, but one of the simplest is through an unsecured Wi-Fi network. One U.K. survey suggests that 72% of companies that suffered a data breach found that their network had been infiltrated through an unsecured wireless device. Companies worldwide and in every industry need to focus on protecting their wireless devices and networks to prevent hackers from exploiting their vulnerabilities.
Let's learn about some of the issues associated with using an unsecured network, the vulnerabilities that crop up, and how you can secure your network to keep hackers at bay.
Issues With Unsecured Wi-Fi Networks
Unsecured Wi-Fi networks, as the name suggests, are Wi-Fi networks that anyone can connect to and that lack security features such as a username and password requirement. Secure networks, by contrast, require users to log in with an email, agree to legal terms and conditions, enter a password, or register an account with the provider.
Companies often use various security technologies and techniques to protect their Wi-Fi networks. These include routers with firewalls and pre-programmed security features, antivirus software, and public and private access control methods. Connecting to an unsecured Wi-Fi network is certainly more convenient, but users should be aware that some data may be exposed when using these networks. When a Wi-Fi network is not secure, it opens up several vulnerabilities that hackers can exploit. Let's explore some of them so you know what to expect if you plan to use unsecured networks.
Vulnerabilities Caused by Unsecured Networks
Below are some of the most common risks and vulnerabilities associated with using Wi-Fi networks that lack basic security features:
● Piggybacking: A hacker can access your internet connection even from 150-300 feet away, opening your network to unintended users.
● Evil twin attacks: Malicious actors create another network meant to impersonate yours, and as people connect to this false network, the attackers can steal their data.
● Unauthorized access: Hackers can gain access to files you've unintentionally made available to those who connect to your network.
● Wardriving: A specific kind of piggybacking where hackers drive through cities and towns with a Wi-Fi-enabled device to search for unsecured networks.
● Wireless sniffing: Hackers use sniffing tools to capture sensitive information, including usernames, passwords, and credit card numbers.
● Shoulder surfing: A simple yet dangerous issue: attackers spy over your shoulder in public areas to steal personal or identifiable information.
All of these potential issues, among others such as cyberattacks or data breaches, are risks you open yourself up to when you fail to secure your wireless network.
Secure Your Wi-Fi Network to Protect Your Business
The Federal Trade Commission (FTC) has a website dedicated to helping people and businesses secure their personal or company Wi-Fi networks. We will focus on how companies can improve their cybersecurity by securing their wireless network.
Here are some FTC suggestions:
● Change pre-set router passwords and keep the router's software up to date.
● Enable full-disk encryption for laptops and mobile devices that connect to your network remotely.
● Turn off the automatic connection setting on company smartphones.
● Use updated antivirus software on any company devices connected to your network.
● Secure remote access to your network.
And here are some final tips to maintain a secure network:
● Before letting devices connect to your network, ensure they meet your security standards.
● Warn your employees about the vulnerabilities that come with using unsecured wireless networks.
● Help employees understand current cybersecurity practices.
● Always provide new employees with cybersecurity training.
● Encourage staff to use strong, unique passwords.
● Create a VPN for employees to use when connecting to your network remotely.
● If you offer Wi-Fi to guests at your business, do not let them connect to your business network.
By now you should understand why securing your wireless network matters if you want to avoid falling victim to hackers and other cybercriminals.
Avoid Using Unsecured Wireless Networks
While they are more convenient than a properly secured Wi-Fi network, unsecured wireless networks threaten your company's security. Take preventive steps now to reduce the chance of experiencing a cyber incident. A quick way to see what your devices are exposed to is to scan for open networks around you, as the sketch below shows.
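As a practical complement to these tips, here is a minimal audit sketch that lists nearby Wi-Fi networks and flags any advertising no security. It is illustrative only: it assumes a Linux host where NetworkManager's "nmcli" command is available, other platforms need different tooling, and SSIDs containing colons would need extra escape handling.

# List nearby Wi-Fi networks and flag unsecured ones.
# Assumes Linux with NetworkManager's nmcli; illustrative sketch only.
import subprocess

out = subprocess.run(
    ["nmcli", "-t", "-f", "SSID,SECURITY", "dev", "wifi", "list"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    ssid, _, security = line.partition(":")
    if security in ("", "--"):
        print(f"UNSECURED: {ssid or '<hidden>'}")
    else:
        print(f"secured:   {ssid or '<hidden>'} ({security})")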
<urn:uuid:48bf56b8-e189-4ad1-bfed-6e3f35b409d8>
CC-MAIN-2022-40
https://www.drchaos.com/post/how-unsecured-wi-fi-networks-lead-to-vulnerability
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00054.warc.gz
en
0.936426
1,008
3
3
Core Network Layer: Explained
In telecommunications, the core network is the central element of a network, providing services to the customers who are connected by the access network. The core network delivers a number of services, but one of its main functions is to route telephone calls across the PSTN (public switched telephone network).
In telecommunication networks, the term "core" is typically used by service providers to refer to the high-capacity communication facilities that connect primary nodes. A core/backbone network provides the paths for exchanging information between different sub-networks.
Core/backbone networks usually have a mesh topology that provides any-to-any connections among devices on the network. Many service providers operate their own core/backbone networks, which are interconnected with one another. Some large enterprises also run their own core/backbone networks, typically connected to the public networks.
The devices and facilities in core/backbone networks are switches and routers, such as the Ericsson AXD 301 and Siemens EWSD. The trend is to keep core devices fast but not particularly "clever," placing the intelligence and decision-making in the access and edge areas of the network. Technologies used in core and backbone facilities are data link layer and network layer technologies such as SONET, DWDM, ATM, and IP. For enterprise backbone networks, Gigabit Ethernet or 10 Gigabit Ethernet are also often used.
Core networks typically provide the following functionality:
Aggregation: Core nodes offer the highest level of aggregation in a service provider network. (Aggregation covers the various methods of combining multiple network connections in parallel to increase throughput beyond what a single connection could sustain, and to provide redundancy should one of the links fail.)
Authentication: Equipment within the core network decides whether a user requesting a service from the telecom network is authorized to use it within that network.
Call control/switching: Call control or switching functionality decides the future course of a call based on processing of the call signalling. For example, based on the called number, the switching function may route the call towards a subscriber within the operator's own network, or, with number portability now prevalent, towards another operator's network. (A toy sketch of this kind of routing decision follows below.)
Charging: Core network equipment handles the collation and processing of charging data generated by various network nodes.
Service invocation: The core network performs service invocation for its subscribers. Service invocation may happen through an explicit user action (e.g. call transfer) or implicitly (e.g. call waiting). Note, however, that service execution may or may not be a core network function, as third-party networks and nodes may take part in the actual execution.
Gateways: Gateways are present in the core network to reach other networks; a gateway's functionality depends on the type of network it interfaces with.
Physically, one or more of these logical functions may exist simultaneously in a given core network node.
Carritech stocks a large range of core network products and parts; for more information, browse our stock.
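To make the switching decision above concrete, here is a toy Python sketch of longest-prefix-match routing on a called number. The prefix table and route names are invented for illustration; real call control involves full signalling protocols and is far richer than this.

# Toy core-network call control: route a call by the longest matching
# prefix of the called number. Prefixes and route names are invented.
ROUTES = {
    "4420": "on-net subscriber switch (London)",
    "44":   "national gateway",
    "1":    "international gateway (NANP)",
    "":     "default route / reject",
}

def route_call(called_number: str) -> str:
    """Return the route for the longest prefix matching the number."""
    best = ""
    for prefix in ROUTES:
        if called_number.startswith(prefix) and len(prefix) > len(best):
            best = prefix
    return ROUTES[best]

print(route_call("442079460000"))  # on-net subscriber switch (London)
print(route_call("15551234567"))   # international gateway (NANP)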
<urn:uuid:5ca59dc3-52b2-4d80-883f-4ca7df2fc3bf>
CC-MAIN-2022-40
https://www.carritech.com/news/core-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00054.warc.gz
en
0.923698
734
3.5625
4
Even as the IT community observed World Password Day on May 5, it is surprising and alarming that organizations keep suffering data breaches caused by the misuse of passwords. The point of World Password Day is to prompt organizations to revisit, reassess, and rethink their password management policies.
On an individual level, we treat our email and social media passwords as sensitive enough to protect our personal information; nobody likes to see their privacy breached. On a larger scale, organizations likewise require a robust password management policy to secure confidential databases, cloud resources, critical financial assets, and overall access to IT resources. A single credential breach or instance of unauthorized access can wreak havoc on an organization, both financially and reputationally.
Here are some password-breach incidents, and the catastrophes that followed, from the first quarter of 2022. They show that we are still leaving loopholes when it comes to managing and protecting the passwords that guard sensitive and confidential business information.
- A data analytics firm in the USA exposed the data of almost 198 million voters due to unprotected passwords, which were misused by a hacker group
- More than 214 million social media users' details at a European agency were exposed because the database allowed password-less access, which was abused by an insider
- A risk and compliance startup in the APAC region suffered a data breach after passwords were compromised by unauthorized, unknown users
Why are passwords vulnerable?
To answer this million-dollar question, we must examine the procedures IT security teams follow to manage passwords in their organizations. Even with privileged passwords, organizations often take a lackadaisical attitude towards secure management. More than 80% of data breach incidents happen due to poor privileged password protection practices (no randomization of passwords, no frequent password changes, and so on). A single data breach can cost an organization millions of dollars, yet the security measures many organizations take to prevent such catastrophes are minimal or nonexistent; they still fail to protect all the passwords in the enterprise network.
There are several reasons behind password vulnerabilities:
- Users tend to choose simple passwords (e.g. names, dates of birth, the current calendar year) so they can memorize them easily. Predictability makes these passwords highly vulnerable.
- Users keep an Excel sheet, or sometimes a plain Word file, of all their passwords for convenience, or write them down somewhere for easy access. Neither practice is secure: both pave the way for malicious actors to misuse the passwords at will.
- Passwords shared through email carry a high risk of misuse. If an employee leaves the organization and someone else gains access to their mailbox, that person gets unwanted access to the passwords; if the mailbox is hacked, the passwords likewise lose their confidentiality.
- The greatest risk probably lies with shared passwords. A single password shared among multiple end users can be disastrous, because if the account is compromised, the actual culprit may never be identified. In the worst case, it can even cost the organization ownership of the account.
In a vast IT infrastructure, there are thousands of privileged accounts whose credentials grant access to all the sensitive and confidential business information. Ideally, these credentials should sit behind a robust security mechanism that ensures data security and privacy. However, organizations often underestimate their importance, which leads to the vulnerabilities described above. Adding to the woes, hacking techniques grow more sophisticated by the day, and the number of passwords for accounts, systems, and databases keeps multiplying. Against this backdrop, organizations must take adequate measures to ensure secure password management practices.
How do you ensure password protection?
ARCON, a global thought leader in IT risk-prevention solutions, advocates that enterprises implement adequate password protection techniques. This is one of the most crucial areas of IT security: it protects information assets from malicious and unauthorized access. For privileged passwords, extra preventive measures are mandatory, since they are the gateways to all confidential business information.
The vulnerability of passwords is most evident in a shared and distributed environment. If privileged accounts or credentials are shared by multiple users, information assets are prone to breaches. Organizations must therefore ensure that privileged accounts are resistant to password hacks. ARCON's flagship Privileged Access Management (PAM) solution offers a robust password vault engine that rules out unauthorized access and password abuse. This automated engine ensures that:
- Privileged passwords are stored securely with AES-256 encryption. It creates a centralized, secure repository of passwords for multiple systems, so that no password can be duplicated by anyone under any circumstance.
- Passwords are automatically and frequently rotated and randomized, enforcing the prerequisites of a strong password policy. This creates a virtual fortress that stops unauthorized users from reaching sensitive information at any time.
- Privileged password vaulting helps IT administrators adopt robust privileged access management practices and supports forensic analysis, establishing who did what with which password.
Strong passwords are the safety locks that protect the treasure trove of business information from theft. ARCON | PAM's Password Vault adds a real-time security layer around credentials, ensuring authorized access to critical systems and mitigating data breach threats.
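To make the randomization and encryption ideas concrete, here is a minimal sketch. It is not ARCON's implementation: it simply shows a strong random password being generated with Python's standard "secrets" module and stored under AES-256 (AES-256-GCM via the third-party "cryptography" package).

# Illustrative sketch only, not ARCON's implementation. Generates a
# strong random password and encrypts it with AES-256-GCM.
import secrets
import string
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def random_password(length: int = 24) -> str:
    """Build an unpredictable password from a mixed character set."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A 256-bit key; in a real vault this would live in an HSM or KMS,
# never alongside the ciphertext.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

password = random_password()
nonce = secrets.token_bytes(12)  # standard 96-bit GCM nonce
ciphertext = aesgcm.encrypt(nonce, password.encode(), None)

# Decryption, e.g. when a privileged session is authorized:
assert aesgcm.decrypt(nonce, ciphertext, None).decode() == password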
<urn:uuid:9ce9d0db-a10b-4a85-97a6-35efc628820f>
CC-MAIN-2022-40
https://arconnet.com/blog/the-key-to-critical-it-resources
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00054.warc.gz
en
0.929291
1,147
2.671875
3
The secret to designing your sound system is to start at the ears. The design of an IP paging system begins with the sound level required for people to hear an announcement; once we know the right sound level, we can define everything else.
The sound level we hear is affected by the background noise, the distance from the IP speaker, and whether we are indoors or outdoors. We need different paging equipment for an office building than for a noisy manufacturing plant, and we select different IP speakers and amplifiers for inside a classroom than for the hallways of a school. This article describes how to calculate the speaker choice, the power required at each speaker, and the best place to mount the speakers.
How Sound is Measured - The Decibel
Sound level is measured in dB, or decibels. This is a relative measurement that starts at 0 dB, the lowest sound level a healthy ear can hear, and it expresses the relative difference between sound levels.
Sound must travel through a medium such as air or water to be heard; that is why you can't hear anything in a vacuum. Sound creates slight changes in the pressure of the medium, known as sound pressure changes. The decibel level is a measure of this sound pressure level in air, measured at a specific frequency.
Two types of specification help us define the right IP paging system: the frequency response of human hearing, which defines what we can hear, and the amplifier and speaker frequency curves, which define what our paging system can provide.
What We Can Hear
Most people can hear sound at 1,000 Hz, but may hear nothing above 20,000 Hz. This variation in perceived sound level across frequencies is captured by weighting curves, which relate frequency (Hz) to perceived sound level (dB); there are A, B, C, D, and Z weighting curves. A-weighting (dBA) was the first in common use and is the one specified by the international standard IEC 61672.
What the Paging System Can Provide
The audio quality delivered by an IP paging system is determined by the amplifier and the speaker's frequency response. The speaker frequency response curve defines what the speaker is capable of providing: it shows the speaker's output level (dB) across a range of frequencies. When selecting a speaker, we examine this curve to determine whether it is suitable for voice or for music.
Although there are several ways to measure sound, in practice we use a simplified measure: the dB level at a certain frequency. A speaker's specification defines the sound level (in dB) produced 1 meter away, using 1 watt of power, at a specific frequency.
Sound at the Ear
The normal sound level of a person talking is about 60 dB. At this level we can easily hear the person, but outside, with traffic noise, we may not. Two factors affect how we hear something: the background noise, and whether the location is indoors or outdoors. Indoors, sound seems louder because it reflects off the surrounding walls and ceiling; outdoors, the sound dissipates and is harder to hear. As a rule of thumb, provide a sound level about 10 dB above the background noise. To learn more about sound levels, see our article, What is The Right Sound Level for Your Paging Speakers.
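The decibel scale for sound pressure is logarithmic relative to the 20 micropascal threshold of hearing. The short sketch below applies the standard acoustics formula SPL = 20 * log10(p / p0); the example pressures are chosen to land on the levels discussed in this article.

# Sound pressure level in dB relative to the 20 uPa hearing threshold.
import math

P0 = 20e-6  # reference pressure in pascals (0 dB, threshold of hearing)

def spl_db(pressure_pa: float) -> float:
    return 20 * math.log10(pressure_pa / P0)

print(round(spl_db(0.02)))  # ~60 dB, normal conversation
print(round(spl_db(0.2)))   # ~80 dB, 10 dB above a 70 dB background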
Sound Levels at a Distance
Sound levels decrease as we move away from the source. The inverse square law defines the level at a given distance: sound intensity falls off with the square of the radius from the source (1/r²). The law holds best in an ideal environment, where sound propagates equally in all directions and there are no reflective surfaces.
Rather than computing the inverse square law directly, we use a simple rule of thumb: the sound level decreases by 6 dB for every doubling of the distance from the speaker. For example, a 90 dB level at 1 m drops to 84 dB (90 - 6 = 84) at 2 m, and to 78 dB at 4 m.
Selecting the Speaker and Amplifier
Suppose we want to hear a page from a speaker that is 8 meters (26 ft.) away. Here is how we calculate the sound level required at the speaker.
Suppose we are in a busy building lobby with a background level of 70 dB. The listener needs a sound level of 80 dB to hear above the background noise (10 dB above background). Knowing the level required at the listener, we can work back to the speaker, 8 meters away, and select the speaker and the amplification power needed to achieve it.
As we move closer to the speaker, the sound level increases. At half the distance (4 m) the level is 6 dB higher, or 86 dB; at half of that (2 m) it is another 6 dB higher, or 92 dB; and at 1 m it is 6 dB higher again, so the speaker must provide 98 dB at 1 m. (Applying the doubling rule three times adds 18 dB, which matches the exact value 20*log10(8) of about 18 dB.)
Now that we know the required level, we can select the speaker and the amplifier power. As mentioned, speakers are specified by their dB level at 1 meter with 1 watt of power. For example, the wall speaker, model PBC6XT72K, provides 91 dB at 1 watt, 1 meter. Since we need 98 dB, we must increase the amplification provided by our IP amplifier; in this application we have selected the IP7-SE8 network-attached amplifier from Digital Acoustics, which lets us adjust the power delivered to the speaker.
Increasing the power to the speaker increases the sound output. The rule of thumb is that doubling the power raises the level by 3 dB: at 2 watts the speaker puts out 94 dB (91 dB + 3 dB), and at 4 watts, 97 dB. That is still 1 dB short of our 98 dB target, so we step up to about 5 watts (10*log10(5) gives roughly 7 dB of gain, for 91 + 7 = 98 dB).
In summary, the speaker for this application is model PBC6XT72K, the IP7-SE8 amplification power should be set at about 5 watts, and the speaker on the wall is 8 meters from where the sound must be heard. To learn more, take a look at our article, Selecting the Right Speakers for Your Paging System. It reviews the different speakers and where they should be placed.
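The worked example can be reproduced with the exact formulas behind the two rules of thumb: the level falls by 20*log10(distance) dB with distance and rises by 10*log10(power) dB with amplifier power. This sketch is a design aid only; real rooms add reflections and losses that the formulas ignore.

# Reproduce the worked example: watts needed for a listening level
# at a distance, given a speaker's dB-at-1W-at-1m sensitivity.
import math

def required_amp_watts(background_db: float, margin_db: float,
                       distance_m: float, sensitivity_db: float) -> float:
    target_at_ear = background_db + margin_db                    # 70 + 10 = 80 dB
    needed_at_1m = target_at_ear + 20 * math.log10(distance_m)   # ~98 dB at 8 m
    gain_db = needed_at_1m - sensitivity_db                      # ~7 dB over 91 dB/1W/1m
    return 10 ** (gain_db / 10)

print(f"{required_amp_watts(70, 10, 8, 91):.1f} W")  # ~5.1 W, matching the ~5 W setting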
Real World Considerations
In the real world, we may need to adjust the volume from the speaker: some people will be further from the speaker, others too close. Be careful, because someone standing 1 meter from the speaker will hear a level of about 98 dB. That may be too loud in some situations, since repeated exposure to sound above 85 dB can cause hearing damage and should be avoided. The best solution is to use more speakers placed closer to the listeners; for example, instead of one speaker on a wall, we could use a number of speakers in the ceiling.
In most cases, paging systems are used for voice announcements. If we want to support high-quality music, we may select a different paging system. A system that supports a wider audio range will cost more, so we need to make sure we understand our objectives.
We select the speaker and the power required based on the sound level we want people to hear. Starting from the speaker specifications, we use the distance calculation and the speaker power to adjust the sound level.
If you need help selecting your IP paging system, please contact us at 800-431-1658 in the USA, or at 914-944-3425, or just use our contact form.
<urn:uuid:01dc95b2-8a90-4655-a02e-17b3b7ef9c1f>
CC-MAIN-2022-40
https://kintronics.com/design-ip-paging-system/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00054.warc.gz
en
0.923634
1,699
3.71875
4