Dataset schema:
- text: string, 234 to 589k characters
- id: string, 47 characters
- dump: string, 62 classes
- url: string, 16 to 734 characters
- date: string, 20 characters
- file_path: string, 109 to 155 characters
- language: string, 1 value
- language_score: float64, 0.65 to 1
- token_count: int64, 57 to 124k
- score: float64, 2.52 to 4.91
- int_score: int64, 3 to 5
Types of Topologies in Network Environments
This content is from our CompTIA Network + Video Certification Training Course. Start training today!
Topologies are something a networking professional uses when setting up an environment, so it's important to define what exactly a topology is. A topology is defined as how the physical media links the network nodes together. From a human perspective, a topology describes how a user gets to Google, Bing, or whatever they're trying to reach on the internet. What mechanisms or technology allow this to occur?
Types of Topologies: Let's examine some of the options. Early technology gave us the Bus Topology. A bus is a network setup where all the devices attach to the same Local Area Network (LAN) thicknet cable. One of the downsides of a bus topology is that when one node transmits, every device connected to the bus reads the transmission. In other words, all the machines have to listen and then decide whether they care about the message; if they don't, they discard it. Today, a bus topology is not a very efficient use of CPU cycles or bandwidth.
Ring topology is self-explanatory from the diagram above: data travels around the ring. The diagram shows a dual ring, which is common because if there is a break or outage, users can continue transmitting packets on the other ring. If there's a dual ring and I have an outage, I can fix the issue while connectivity continues, so at no time do users lose access to the network. In fact, that's one of the rules of troubleshooting: restore connectivity quickly and try to avoid downtime.
With a Star Topology, there is a difference between the physical layout of your network and its logical layout. It's unlikely that you'll walk into any boardroom or cubicle farm and see a network laid out exactly the way it is in the diagram above. In fact, if you could peel the lid off a room and hover over it in a balloon, it wouldn't really look like a star. Physically it doesn't look like a star, but notice what gives it away: the single point of failure.
Single Point of Failure (SPoF): A single point of failure is bad from the standpoint of a network admin because if something breaks and your single point of failure goes down, so does everybody connected to it.
If you see a network diagram like this, think about the layout: there may be widely disparate geographic regions, far apart, with many connections. In fact, this could be a sample CompTIA Network + exam question. It would be presented as a scenario: "You're the network admin for this company and we have five sites. We want to connect them all. What should we choose?" FULL MESH. Now the question is, "How many total connections do I need for a full mesh?" Option one is to count them: you would need to count every place where a line on the diagram meets another line or circle. It would be easy to skip a connection or accidentally count one twice. Here's a better way to find the answer: you just need to know a simple formula. For a full mesh, take the total number of nodes (circles) times the number of nodes minus one, divided by two. For this example, we have 5 nodes: 5 × (5 − 1) is 5 × 4, which is 20, and 20 divided by 2 equals 10. I didn't have to count anything. Those are the various topology types that you need to be familiar with. Until next time….
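As a quick check on that formula, here is a minimal Python sketch (an illustration added here, not part of the original course material) that computes the number of links needed for a full mesh of any number of sites:

def full_mesh_links(nodes: int) -> int:
    # n * (n - 1) / 2 point-to-point links are needed to fully mesh n sites.
    return nodes * (nodes - 1) // 2

print(full_mesh_links(5))  # 10, matching the five-site exam scenario above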
<urn:uuid:417ae6c5-b00f-47f0-85c6-8b818e5b2a5d>
CC-MAIN-2022-40
https://www.interfacett.com/blogs/types-of-topologies-in-network-environments/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00238.warc.gz
en
0.901878
1,120
3.40625
3
What started as a version control system for code has become a platform for ideas and collaboration. Every morning, the first waking task most humans perform is checking email or the latest updates on their social media accounts. For developers, that initial daily fix is GitHub, the social coding platform that has captured the hearts of millions of hackers and tech enthusiasts around the world. The social network for professional developers and everyday hackers aims to bring distributed, open collaborations to the world, one repository at a time, and it's beginning to find its way into government. Founded in 2008 by P.J. Hyett, Chris Wanstrath and Tom Preston-Werner, the San Francisco-based company claims 6 million people have created more than 13 million repositories to date on its platform. With an ever-growing population of users, an aggressive expansion of features and more than $100 million in venture capital funding, GitHub is going beyond just a tool for the tech elite and is poised to be Silicon Valley's next big public offering. The term "GitHub" is a riff on the open-source distributed version control system Git developed in 2005 by Linux kernel creator Linus Torvalds. Other VCS platforms include Subversion (created by CollabNet and now part of Apache Software Foundation's ecosystem) and Perforce. Years before Git, SourceForge was the first centralized platform that allowed developers to share code and manage software development projects. Although GitHub hosts millions of open-source projects, the company retains all rights to the platform code, which is built using Ruby and Erlang. Open-source alternatives include Git, GitLab and Gitorious, each of which offers the ability to download the source code and use it freely on internal development environments, similar to GitHub Enterprise but without the associated licensing fees. In many ways, the rise of open source in government in recent years is a direct correlation with GitHub's growth and its attractiveness to influential early-adopter agencies, including NASA, the Federal Communications Commission and the Consumer Financial Protection Bureau (CFPB). Today, government and civic hackers' open-source accomplishments are synonymous with shiny new GitHub repositories. But GitHub isn't government's first foray into version control and code repositories. The Defense Department launched Forge.mil, a centralized platform available only to DOD collaborators, in 2009. From a technical perspective, rather than being a unified code base, Forge.mil is an integration of the content management system Drupal and the proprietary version control system CollabNet. GitHub's big government breakthrough came in May 2012 at TechCrunch Disrupt, when U.S. CIO Steven VanRoekel, alongside Chief Technology Officer Todd Park, introduced the Obama administration's Digital Government Strategy, which calls for agencies to "participate in open-source communities." At the conference, VanRoekel announced that the White House would begin publishing to GitHub, and officials subsequently released the complete code base of the "We the People" online petition platform. The larger wave of government GitHub adoption came with the implementation of the digital strategy, and since then, federal agencies have been on a social coding spree. Today, more than 300 government agencies are using the platform for public and private development. 
Cities (Chicago, Philadelphia, San Francisco), states (New York, Washington, Utah) and countries (United Kingdom, Australia) are sharing code and paving a new road to civic collaboration. Civic-focused organizations -- such as the OpenGov Foundation, the Sunlight Foundation and the Open Knowledge Foundation -- are also actively involved with original projects on GitHub. Those projects include the OpenGov Foundation's Madison document-editing tool touted by the likes of Rep. Darrell Issa (R-Calif.) and the Open Knowledge Foundation's CKAN, which powers hundreds of government data platforms around the world. According to GovCode, an aggregator of public government open-source projects hosted on GitHub, there have been hundreds of individual contributors and nearly 90,000 code commits, which involve making a set of tentative changes permanent. Getting started on GitHub is similar to the process for other social networking platforms. Users create individual accounts and can set up "organizations" for agencies or cities. They can then create repositories (or repos) to collaborate on projects through an individual or organizational account. Other developers or organizations can download repo code for reuse or repurpose it in their own repositories (called forking), and make it available to others to do the same. Collaborative aspects of GitHub include pull requests that allow developers to submit and accept updates to repos that build on and grow an open-source project. There are wikis, gists (code snippet sharing) and issue tracking for bugs, feature requests, or general questions and answers. GitHub provides free code hosting for all public repos. Upgrade offerings include personal and organizational plans based on the number of private repos. For organizations that want a self-hosted GitHub development environment, GitHub Enterprise, used by the likes of CFPB, allows for self-hosted, private repos behind a firewall. GitHub's core user interface can be unwelcoming or even intimidating to the nondeveloper, but GitHub's Pages package offers Web-hosting features that include domain mapping and lightweight content management tools such as static site generator Jekyll and text editor Atom. Notable government projects that use Pages are the White House's Project Open Data, 18F's /Developer Program, CFPB's Open Tech website and New York's Open Data Handbook. Indeed, Wired recently commented that the White House's open-data GitHub efforts "could help fix government." Although GitHub, in its nascency, is largely about code, innovative agencies are expanding its use beyond 1s and 0s and capitalizing on its collaboration features for public engagement on procurement requests for proposals, legal code, website hosting and even solicitation of general suggestions. For example, in its /feedback repo, the National Archives and Records Administration asks: "Do you have feedback, ideas or questions for the U.S. National Archives? Use this repository's Issue Tracker to join the discussion." Canada published its front-end common look-and-feel framework, the Web Experience Toolkit, to GitHub. And late last year, San Francisco posted the city's legal code to its GitHub account "in technologist-friendly formats that can power new applications that enhance understanding, improve access and lead to new insights around the laws." In addition, Philadelphia used GitHub to publish its open-government plan, API standards (forked from the White House's standards) and even a request for proposals for a mobile application. 
In a retrospective post on his personal blog, former Philadelphia Chief Data Officer Mark Headd said, "By the end of the submission process, we had received nine high-quality responses from local vendors -- a number that far outstrips the number of responses received for similar [miscellaneous purchase order] projects and three times the number of responses required." GitHub's true measure of civic success will be when government projects are regularly repurposed by others, and there are early indications that the future of government code will be forked. New Zealand made history by downloading the United Kingdom's front-end templates for the beta version of its new website. To date, the White House Web API standards have been forked 111 times, and Mexico recently forked Project Open Data. Perusing USAxGITHUB, an up-to-the-minute feed of government GitHub activity created by former White House Presidential Innovation Fellows Adam Becker and Ben Balter (the latter of whom now serves as GitHub's government evangelist), one sees the future of government development: a transparent look into every public issue with the ability to jump in and collaborate at will. You could literally watch government technology unfold in real time. Fundamentally, GitHub is a platform where people can share and repurpose ideas and information. In the not-too-distant future, it could be your code repository, collaboration platform, content management system and Web host all neatly wrapped into one website. As GitHub continues to expand its offerings and build a business beyond just sharing code, agencies should be watching. New ways of collaborating with the public are almost certain to emerge.
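For readers who want to explore those fork counts themselves, here is a small, hypothetical Python sketch (the organization name is only a placeholder; any public GitHub organization works) that uses GitHub's public REST API to list an organization's public repositories and how often each has been forked:

import requests

ORG = "whitehouse"  # placeholder: substitute any public GitHub organization

resp = requests.get(f"https://api.github.com/orgs/{ORG}/repos", params={"per_page": 50})
resp.raise_for_status()

for repo in resp.json():
    # forks_count reports how many times a repository has been forked for reuse.
    print(f"{repo['name']}: {repo['forks_count']} forks")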
<urn:uuid:b9aace3e-6b60-4694-a150-6606e33c3418>
CC-MAIN-2022-40
https://fcw.com/2014/07/github-a-swiss-army-knife-for-open-government/255435/?oref=fcw-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00238.warc.gz
en
0.924676
1,721
2.65625
3
Cars are an essential part of our everyday life, and are crucial to transporting thousands of commuters on a daily basis through busy towns and cities, and from one country to another. With “smart” vehicles playing such a major role in our day-to-day lives, it’s no surprise that semi- and fully-autonomous transportation and the potential for driverless cars have become hot topics. Countries such as the United Kingdom, France, and Switzerland are already testing forms of autonomous cars on public roads. According to Gartner, driverless vehicles will represent approximately 25 percent of the passenger vehicle population in use in mature markets by 2030. While highways full of driverless cars may be a shining vision of the future for some, from a hacker’s perspective they represent yet another opportunity to wreak havoc. Driven home by the rise in increasingly sophisticated cyberattacks and data breaches over the past several years, ensuring driver safety from cyberthreats has become a major development focus and challenge in the automotive and security industries. A driverless car is a very advanced mode of transportation, possibly even without a readily available steering wheel. Driverless cars have considerably more electronic components than “traditional” cars, and rely on sensors, radar, GPS mapping, and a variety of artificial intelligence to enable self-driving. These new guidance and safety systems must be integrated into the electronic onboard systems already present in modern-day vehicles, connect wirelessly to the manufacturer, and probably even offer third-party services via the Internet. And that’s where the problems begin: with hackers remotely accessing a vehicle and compromising one of its onboard systems, resulting in a range of risks from privacy and commercial data theft to actual physical risks to people and property. Here are some attacks that are likely to be targeted at highly connected and autonomous cars:
Privilege escalation and system interdependencies: Not all systems and in-car networks will be created the same. Attackers will seek vulnerabilities in lesser-defended services, such as entertainment systems, and try to “leap” across intra-car networks to more sensitive systems through the integrated car communications systems. For instance, a limited amount of communication is typically allowed between an engine management system and an entertainment system to display alerts (“Engine fault!” or “Cruise Control is Active”), and that channel can potentially be exploited.
System stability and predictability: Conventional, legacy car systems were self-contained and usually came from a single manufacturer. As new autonomous cars are developed, they will very likely need to include software provided by a variety of vendors – including open source software. Information technology (IT), unlike industrial control systems such as legacy car systems, is not known for predictability. IT systems, in fact, tend to fail in unpredictable ways. This may be tolerable if it is just a matter of a website going down until a server reboots. It is less acceptable in the event of a guidance system being degraded even slightly when an adjacent entertainment or in-car Wi-Fi system crashes or hangs.
Also expect to see known threats adapted to this new target, expanding from common Internet platforms like laptops and smartphones to IoT devices like an autonomous car. For instance: Ransomware is certainly on the rise on PCs and mobile phones. But driverless cars represent an almost ideal target.
Imagine the following scenario: a hacker uses the in-car display to inform the driver that his car has been immobilized and that a ransom must be paid to restore the vehicle to normal operation. While a laptop or tablet may be restored relatively easily with potentially no damage, assuming backups are available, a car is a very different story. The owner may be far from home (the ransomware could be programmed to launch only when the car is a predetermined distance from its home base). Naturally, few dealerships would be familiar with resolving this sort of problem, and specialist help would most likely be required to reset affected components. The cost of recovering from such an attack is expected to be very high, and recovery will likely take time. In the meantime, the vehicle may have to be towed. So the question is, how large a ransom demand should we expect to see? Estimates are that it is likely to be significantly higher than for traditional computer ransomware, but probably less than the related repair costs, so that the car owner is tempted to pay. Perhaps a more attractive target for hackers is collecting data about you through your car. Driverless cars collect massive amounts of data, and know a lot about you – including your favorite destinations, your travel routes, where you live, how and where you buy things, and even the people you travel with. Imagine a hacker who knows that you’re traveling far from home selling that information to a criminal gang, which then breaks into your home or uses your online credentials to empty your bank account. That last risk exists because your driverless and connected vehicle is likely to become a gateway for any number of electronic transactions, such as automatic payment of your daily morning coffee, or parking charges, or even repairs. With sensitive information stored in the car, it becomes another attack vector for obtaining your personal information. And with RFID and Near Field Communication (NFC) becoming commonplace in payment cards, accessing their details through your car would be another way to capture data about you and your passengers. And last but not least, there are legal and authenticity issues. Can we consider the location data of the car authentic? That is, if your car reports you opened it, entered it, and traveled to a particular location at a certain time of the day, can we really assume everything happened as recorded? Will such data hold up in court? Or can this sort of data be manipulated? This is an issue that will need to be addressed. Similarly, if cars contain software from several different providers, and spend the day moving from one network to another, who is accountable or liable for a security breach and the resulting losses or damage? Was it a software flaw? Was it negligent network management? Was it on-board user error or lack of training? So, the question becomes, how do we secure autonomous cars?
<urn:uuid:28f5ebf5-f5b0-410b-8a7a-0fe4e6686c37>
CC-MAIN-2022-40
https://techmonitor.ai/technology/software/driverless-cars-new-cybersecurity-challenge
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00238.warc.gz
en
0.957897
1,276
3.078125
3
The nation's critical infrastructure—the assets that power our homes, bring water to our faucets, and move people and goods from place to place—has grown frighteningly vulnerable to new kinds of threats. To protect these essential systems, federal IT leaders and infrastructure managers need to continue down the path of digital transformation, placing special emphasis on data management, analytics, and improved portfolio management. If the steady drip of headlines on the subject has you worried about the nation’s vulnerability to infrastructure disruption, you’re not wrong to be concerned. The Colonial Pipeline attack sent ripples of pain along the entire East Coast, and its impacts pale in comparison to what could result from an assault on positioning, navigation, and timing (PNT) systems like GPS, or an electromagnetic pulse (EMP) attack meant to take down a major electrical grid. While these types of threats escalate in scope and sophistication, the attack surface itself is expanding beyond the surly bonds of Earth with the recent push to classify satellites, sensors, command and control systems, and other space assets as critical infrastructure. Thankfully, government agencies are taking meaningful steps to respond. The Department of Energy launched a 100-day collaborative plan with the electricity industry and the Cybersecurity and Infrastructure Security Agency (CISA) to enhance industrial control system and supply chain cybersecurity. For its part, the Department of Homeland Security (DHS) Science & Technology Directorate recently released PNT resources and algorithms to thwart attempts at GPS spoofing. The Space Policy Directive-5 gives DHS and CISA lead roles in enhancing the nation’s cyber defenses for key systems involved in such critical services as global communications, navigation, and weather monitoring. Moves like these are admirable and necessary, but we must go further. To that point, let’s take a closer look at how and why critical infrastructure has become so vulnerable.
<urn:uuid:0078d56e-856a-4b34-b4fe-cac252af8522>
CC-MAIN-2022-40
https://www.boozallen.com/insights/critical-infrastructure/digital-innovation-key-to-securing-us-infrastructure.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00238.warc.gz
en
0.922564
379
2.625
3
- 22 January, 2020 California utilities should have used digital twin technology instead of power shutoffs Northern California’s proactive power outages were not necessary last fall. Digital Twin technology can predict utility line failures and turn off power in milliseconds to avoid the potential of sparks igniting the surrounding area. Digital twin technologies are gaining traction across industries and use cases. Initially devised as a means of monitoring assets and production settings in manufacturing, this technology has quietly seeped into other verticals like hospitality, construction, and building management and soon, electricity delivery. The premier problem digital twins will solve is predicting power grid failure, which would alleviate the social, economic, and political issues that resulted from efforts to reduce the incidence and degree of catastrophes, property loss, and deaths stemming from downstream effects of power grid failure—such as recurring wildfires. Digital twins can allay these concerns because they’re based on real-time signals from a comprehensive set of factors that could be indicative of power grid woes related to environmental, meteorological, or technology concerns. Moreover, they can deliver accurate predictions for each of these factors well in advance of failure—in some cases as much as 28 days. Read the full article at PowerGrid International.
<urn:uuid:bfc51eec-a455-453a-878e-4f1ccabf79e2>
CC-MAIN-2022-40
https://allegrograph.com/articles/california-utilities-should-have-used-digital-twin-technology-instead-of-power-shutoffs/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00438.warc.gz
en
0.941615
253
2.75
3
The internet is a valuable, strategic resource, but it harbors a number of risks, especially for users who are most unaware of the dangers that may lurk in the depths of the web. In this article, we focus upon the risks present for the youngest users and see how FlashStart’s DNS cleanbrowsing software can ensure their safety anytime, anywhere. 1. Together for a better internet February 8 was Safer Internet Day, the international day for a more secure internet. The event is held annually and aims to promote safer use of the internet and new technologies for children and young people around the world. The target audience is, therefore, a young audience, because educating internet users about the more responsible usage of online resources today can contribute to a better internet tomorrow. Indeed, the mantra of the initiative is “Together for a better internet.” The size of the network on which the event is based is quite impressive. Created in 2004, Safer Internet Day can now count on a wide range of promoters, from the European Centers for Internet Safety to the Global Safer Internet Day Committees, to associations and organizations of varying sizes. This network also includes the European Commission, which has been active for some time now with various projects which aim at building a safer internet on the continent and, above all, at creating policies, at the community level, for the management of the web, with an eye on a safer future. >> FlashStart’s DNS cleanbrowsing tool is totally in the cloud and can be easily activated → Try it now 1.1 Measures in Europe In May this year, a couple of months after Safer Internet Day, the Commission launched the New European Strategy for a Better Internet for Girls and Boys. The strategy starts with the assumption that the internet “is a great place to learn, play, share, watch, connect, express oneself,” and has the three-fold objective of: » Making the digital world safe by ensuring that young people see only age-appropriate contents, do not feel uncomfortable, sad, or embarrassed by the content to which they are exposed on the web, and are protected from online bullying and hate speech. » Having young people develop the skills, knowledge, and support needed to learn how to use the internet safely, even when adult supervision is not present, knowing how to distinguish between what is true and false and between trustworthy and untrustworthy parties. » Enable younger people to have their say and, therefore, to safely express their opinions on issues which matter to them, using the appropriate channels. 1.2 The strategy in America The focus on younger children online in the United States has been moving forward since the year 2000, when the U.S. Congress passed the Children’s Internet Protection Act (CIPA). The CIPA requires that all public schools and libraries which receive discounts for internet access through a state program must show that they have an internet safety policy supported by technology and, specifically, by protective measures aimed at blocking access to contents that are considered obscene or dangerous. >> FlashStart’s DNS cleanbrowsing software protects from a vast array of threats and blocks access to malicious sites → Start your free trial now 2. The risks for the youngest users The above-mentioned efforts by the European Commission and the U.S. Congress, along with the desire to focus specifically on younger children, illustrate how pervasive the issue is. 
Exposing young children to dangers on the web and, especially, to online content that can adversely affect their mental health and psychological development is a widespread problem. These risks are also linked to specific physiological reasons. Indeed, psychologist and psychotherapist Giuseppe Lavenia, who specializes in technological addictions, points out that “During adolescence, brain development is not yet complete, and there is no effective communication between the various brain regions. Therefore, emotions can emerge quickly and intensely without the ability of executive functions which are present in the developing prefrontal cortex to regulate them. This is why adolescents are often governed by action rather than reflection and by emotion rather than reason. This, in itself, already clearly exposes them physiologically to greater risks because of their reduced sense of limits and danger.” >> FlashStart is the DNS cleanbrowsing software that blocks 80 million unwanted sites every day → Activate your “Free trial” now. 3. Preventive measures: DNS cleanbrowsing training and software Given the potential dangers to development linked to the exposure of inappropriate online contents, and given the lower sensitivity of younger people to risks in general, it is essential to devise preventive measures that will allow them to have a peaceful and fruitful internet experience. The prevention and awareness campaigns that many agencies are spearheading are important and can have a positive effect, especially in school settings, where young people meet with each other under adult supervision. However, unfortunately, raising awareness among children is not enough, as the problem has a very broad range. This requires the help of technology, which is, therefore, both the cause and a good solution to the problem. In this regard, the use of DNS cleanbrowsing software has become one of the most popular practices for limiting exposure to unwanted contents which can have equally undesirable effects on the growth and development of young people. >> FlashStart is the web filtering tool distributed in 109 countries → Activate it; it is free for fifteen days. 4. FlashStart: DNS cleanbrowsing which protects young people With a history of more than two decades in online content filtering, FlashStart offers a DNS cleanbrowsing tool suited to meet the online safety needs of young and old alike. 4.1 FlashStart: a complete tool for all-around safety FlashStart’s DNS cleanbrowsing software is a comprehensive tool. In fact, it can be set up to block access to every type of content imaginable: » Dangerous contents: the FlashStart tool automatically blocks all contents deemed to be hazardous to the security of the PC and the data it contains or the networks connected to it. In fact, attempts to access sites carrying malware, Trojans, ransomware, viruses, sites related to phishing attempts and online fraud are blocked. To learn more, read our related article. » Unwanted contents: these are the contents that we discussed above, so primarily sexual and violent contents, but also online gaming platforms and sites related to illegal trafficking. In this post, we discuss, in detail, how web filter services work. » Distracting content: this category includes all audio and video streaming platforms, online shopping sites, and all social networks that are now an integral part of children’s lives and often affect their choices and actions. 
Click here for more information about sites that cause distractions and how to block them. » Other contents: the network administrator can decide for himself/herself to add specific sites that he/she does not want users to access by creating his/her own personal blacklists. >> If you have already activated FlashStart, read this guide which explains how to extend Blacklists for internet DNS filtering. 4.2 FlashStart: DNS cleanbrowsing suitable for the youngest users When a user types in the name of the site he or she wants to reach, FlashStart’s DNS cleanbrowsing performs a super-quick check in the FlashStart cloud to see if this site is part of the blocked ones, either automatically by the tool or as an add-on by the network administrator. FlashStart is suitable for young people because its controls are extremely fast. Thanks to the global anycast network on which it is based, it provides total redundancy with multiple data centers operating at major national interchange points. This allows for near-zero latency, an element that counts FlashStart among the top ten global DNSs and makes it suitable for young users who want everything immediately. When a user tries to access a site that is part of a category blocked by the administrator, access is blocked, and a warning message appears on the screen. What if users think that they can bypass the protection? FlashStart is extremely secure: the only way to bypass the protection is by entering a specific password in the tool’s control panel, which is accessed only with restricted credentials. 4.3 FlashStart: active and up-to-date protection, anytime, anywhere Finally, Flashstart uses a mix of artificial intelligence and machine learning in order to constantly scan the internet, searching for new threats, and, therefore, to guarantee continuously updated protection. Artificial intelligence is a system of algorithms which allows a machine to imitate processes which are typical to the human mind, such as reasoning and learning, in order to be able to make reasonable decisions. Flashstart’s artificial intelligence examines up to 200 thousand sites every day, supports twenty-four languages, and recognizes ninety categories of sites based upon content. You can take advantage of FlashStart’s AI-based, DNS, cleanbrowsing tool anywhere: » When installed at the router level, FlashStart provides protection and security for all devices connected to the network. » When installed at the individual device level via the ClientShield application, FlashStart guarantees protection of the user’s pc, tablet, or smartphone wherever the user is, continuously filtering the content he or she wants to access. >> FlashStart is a leader in competitiveness → Ask for prices You can activate the FlashStart® Cloud protection on any sort of Router and Firewall to secure desktop and mobile devices and IoT devices on local networks.
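As a generic illustration of the DNS-level check described in section 4.2 (this is a simplified Python sketch, not FlashStart's actual implementation, and the category lists and domain names are invented), a filtering resolver can compare each requested domain against category blocklists before answering:

# Hypothetical category blocklists; a real service maintains these in the cloud.
BLOCKLISTS = {
    "malware": {"malware-example.test"},
    "adult": {"adult-example.test"},
    "streaming": {"video-example.test"},
}

# Categories the network administrator has chosen to block.
BLOCKED_CATEGORIES = {"malware", "adult"}

def resolve(domain: str) -> str:
    for category, domains in BLOCKLISTS.items():
        if category in BLOCKED_CATEGORIES and domain in domains:
            # Answer with a block page instead of the real address.
            return f"BLOCKED ({category}): show warning page"
    return "ALLOWED: forward to a normal DNS lookup"

print(resolve("malware-example.test"))
print(resolve("example.org"))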
<urn:uuid:217fd799-6ca9-4f18-a14f-c4f70e12f6d8>
CC-MAIN-2022-40
https://flashstart.com/flashstart-the-dns-cleanbrowsing-software-that-guarantees-the-online-safety-of-young-people/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00438.warc.gz
en
0.931136
1,974
3
3
BLOCKCHAIN TECHNOLOGY has become the internet sweetheart of those looking to bring security to transactions via an incorruptible, distributed electronic ledger. Businesses, though, aren’t yet taking full advantage of its disruptive potential. Will the Internet of Things (IoT) be just the use case blockchain has been waiting for? Blockchains first emerged as the underpinning of cryptocurrencies such as bitcoin. Many businesses have now started to adopt the concept of protected, fully auditable digital ledgers shared across a peer network of PCs to instill a high level of trust into other processes, such as supply chains. WalMart, for example, has adopted a blockchain for its produce supply chain, recording and tracking every movement from farm to market. Yet Walmart’s use case has not had the impact that many pundits predicted. Research from Gartner reveals that even by 2021, 40 percent of the projects currently using permissioned blockchains will not be taking advantage of the technology’s unique aspects, such as the concept of decentralization, where a ledger is distributed across multiple systems and no one system can be used to corrupt all the transactions held across the blockchain. “CEOs will have seen articles about blockchain or heard that their competitors are using it, so they’ll call up Gartner and say, ‘Just tell me how it works and how I can use it,’” says Avivah Litan, Gartner vice president and distinguished analyst. That’s an indication that business executives just don’t get blockchain. She adds, “Inquiring enterprises can’t think of use cases that really need blockchain.” IoT Enters the Equation Perhaps it comes down to a lack of imagination. Consider the exponential growth of the IoT market. Businesses are deploying sensors, controllers, and other devices into their supply chains, and embedding IoT technology into critical systems such as environmental controls or transportation systems. Simply put, IoT creates data, and that data is used to advance business. However, businesses need to know they can trust that data, as well as audit and potentially share it—requirements that scream for an immutable record placed in an incorruptible ledger. “Blockchain technology is a natural extension of IoT that allows corporations to extend the value of their connected devices by adding the capability to transact and participate in the machine economy,” says Allison Clift-Jennings, CEO of Filament, a developer of blockchain systems for IoT. Adding blockchains to IoT solutions offers additional benefits as well. For instance, Gartner research continues to show that security is the biggest area of technical concern for organizations deploying IoT systems. That’s a viable use case for blockchain technology, which can ensure that information reported by IoT devices isn’t corrupted or altered, preventing fraud. “Blockchain supports the security of IoT with a secure and scalable framework among devices,” Clift-Jennings says. However, many organizations still find identifying specific IoT-related deployment opportunities for blockchain somewhat challenging. Adopters, experts say, must start by identifying IoT use cases that will benefit from a blockchain, such as sensors and devices that are responsible for maintaining compliance or contractual requirements. The blockchain can secure the data recorded and enhance trust in any transactions those devices are involved in. 
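To make the idea of a tamper-evident ledger concrete, here is a toy Python sketch (an illustration only: not a real blockchain, not tied to any platform named in this article, and with no networking or consensus) in which each IoT sensor reading is chained to the previous one by a hash, so altering an earlier record invalidates everything written after it:

import hashlib
import json

def _block_hash(reading, prev_hash):
    payload = json.dumps({"reading": reading, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, reading):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"reading": reading, "prev_hash": prev_hash,
                  "hash": _block_hash(reading, prev_hash)})

def is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != _block_hash(block["reading"], block["prev_hash"]):
            return False  # a record was altered after being written
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the chain linkage was broken
    return True

ledger = []
add_block(ledger, {"sensor": "temp-01", "value": 21.4})
add_block(ledger, {"sensor": "temp-01", "value": 21.9})
print(is_valid(ledger))               # True
ledger[0]["reading"]["value"] = 99.0  # tamper with an earlier reading
print(is_valid(ledger))               # False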
In the past, building a blockchain could involve demanding tasks like deploying underlying infrastructure and building a distributed, decentralized network. However, blockchain as a service is an emerging market with offerings such as Amazon Managed Blockchain, Microsoft’s Azure Blockchain, and SAP’s Leonardo Blockchain. And companies like Filament are paving the way for blockchain-enabled IoT devices, creating different opportunities for channel players. For IoT integrators looking to build on the potential of blockchain, partnering with those blockchain-as-a-service providers may be a good starting point in what is sure to be a disruptive market.
<urn:uuid:a1e9c7d4-d1f6-4969-8aac-5012bce6159d>
CC-MAIN-2022-40
https://www.channelpronetwork.com/article/when-iot-and-blockchain-intersect
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00438.warc.gz
en
0.943508
819
2.53125
3
The typical organization stores data in multiple locations, including hard drives, virtual servers, physical servers, flash drives, and databases. There are equally many ways to move this data, including through VPNs, wirelines, and wireless. This modern reality raises the question of whether it is possible to protect sensitive data. What is Data Loss Prevention (DLP) Anyway? Data loss prevention (DLP) refers to a combination of software, techniques, strategies, and processes to prevent the loss, misuse, or unauthorized access and transmission of sensitive data. More commonly, DLP is used to refer to software and other technologies that protect sensitive data. This protection may include filtering data streams, controlling endpoint activities, and monitoring data stored in the cloud. Organizations implement data loss prevention for many crucial reasons, including: Data Privacy Obligations: Today, every organization that collects, stores, and uses sensitive customer data is accountable for safeguarding this data. Regulations and standards such as GDPR, HIPAA, and PCI-DSS enforce compliance with the set-out data protection measures. Companies wanting to avoid hefty fines and other penalties implement DLP to guarantee compliance with regulators. Cyber Security Threats: Many organizations face increasing internal and external threats to sensitive data. Corporate espionage and malicious attacks are on the rise, forcing organizations to take a security-first approach to handling data. Data loss prevention helps to identify sensitive data and protect it from internal and external attacks. Data Visibility: It is hard to track the movement of data within an organization. DLP offers a 360-degree view of how data moves through the various networks, endpoints, and the cloud. This visibility helps identify how each employee interacts with data. Bring Your Own Device (BYOD) Policies: BYOD helps increase workforce mobility, cut down the cost of business-owned devices, and increase efficiency and productivity. However, these devices pose a security threat, including employees unwittingly sharing sensitive information via their personal laptops, mobiles, or tablets. DLP brings these devices into the fold, securing all data leaving the organization regardless of the device. How Data Loss Prevention Works Data loss prevention is a simple concept with two main objectives. The first objective is to identify sensitive data that needs protection. The second objective is to protect sensitive data from loss. However, there are technical details that go into accomplishing these two seemingly simple objectives. Identifying Sensitive Data Data exists in different states within any given infrastructure. These states include: Data in use: Refers to data that's actively making its way through the IT infrastructure, such as in RAM, CPU registers, or cache memory. This includes data in the process of being generated, updated, amended, viewed, or erased. Data in motion: Refers to data being moved or transferred from one place to another via a network. An example of data in motion is downloading files from a web browser to a local app. Data in motion is also referred to as data in flight or data in transit. Data at rest: Refers to data collected in one place, such as databases, data lakes, file systems, or the cloud. DLP solutions deploy agent programs to comb through data in different states and locations. During this process, the software can identify sensitive data using various techniques such as rule-based matching.
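As a rough illustration of rule-based matching (a toy Python example, not any particular DLP vendor's detection logic; real products add checksums and context to reduce false positives), a scanner might flag text containing patterns that look like Social Security or credit card numbers:

import re

# Illustrative patterns only; production DLP rules are far more robust.
PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b\d{16}\b"),
}

def scan(text: str):
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((label, match.group()))
    return findings

print(scan("SSN 123-45-6789 paid with card 4111111111111111"))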
With rule-based matching, the agents use familiar patterns to locate sensitive information that matches specific rules. For example, nine-digit numbers may indicate to the program that the file contains social security numbers, while 16-digit numbers may indicate credit card numbers. Other techniques that DLP solutions use to find sensitive data include: - Exact file matching - Database fingerprinting - Partial document matching - Statistical analysis Protecting the Sensitive Data Once the DLP solution finds the sensitive data, it takes active measures to protect the information. To this end, the DLP software monitors data to identify violations. It is first necessary to come up with DLP policies and procedures. The policies and procedures essentially tell your software what to do when it identifies a security breach or violation. In the event of a violation, the solution implements security controls, such as blocking data transfer, cutting off network access, popping up a warning, or sending an alert to the network administrator. Let's take a look at a few real-world examples of how data loss protection works. Example 1: Omni American Bank The Omni American Bank is based in Ft. Worth, Texas, and has been in business for more than 60 years. The bank has a customer base of more than 86,000 and holds more than $1 billion in assets. The bank already had several data security solutions in place, including a URL filter and content filtering. The bank's Chief Information Officer noted that the organization needed more visibility into data in addition to boosting data security. The solution was simple. The bank's CIO looked to Websense, a leading DLP software provider. As a result, network administrators now have a full view of where the data is stored, where it is going, and every touch-point in-between. Administrators know precisely who is using the data and for what purpose. This visibility allows administrators to control to whom employees can send information and who can view the information. One challenge that administrators face is securing data coming into the organization. Omni American Bank's DLP system blocks employees from replying directly to emails containing sensitive information. For example, a customer might send an email containing their social security number. The standard procedure is to create a new email so that it doesn't include sensitive information. If a rep hits the "reply" button, the software blocks the outgoing message and sends an alert to the information security department. Additional benefits that Omni American Bank enjoys from its DLP solution include: - Securing all data points, including USBs and CDs - Comprehensive reporting and documentation for its compliance programs - Identifying potential data leaks quickly - Centrally managed data security Example 2: US Department of Veterans Affairs In 2006, a data analyst working for the Department of Veterans Affairs had his laptop and external hard drive stolen during a home burglary. Home invasions are hardly unusual, but this particular one had a twist. The hardware contained personal information belonging to 26.5 million US military veterans and military personnel. This personal information included social security numbers, names, birthdates, and other personally identifiable information (PII). After a much-publicized nationwide search and a $50,000 reward, the FBI retrieved the laptop. Thankfully, the thieves had not accessed the information.
Still, this didn’t stop the people whose information was on the laptop from filing a class-action lawsuit. After a prolonged three-year legal battle, the US Department of Veterans Affairs was forced to pay out $20 million. The lesson here is plain: bring-your-own-device arrangements can be a severe security threat if not appropriately addressed. Part of data loss prevention is securing personal devices that employees bring to the workplace. In this incident, the department’s network administrator or IT service provider might have been able to wipe out the sensitive data or shut down the device remotely had there been a comprehensive DLP policy in place. Better yet, the employee shouldn’t have been able to store such sensitive information on their personal device. How to Get Started With Data Loss Prevention Implementing a data loss prevention program is crucial to your organization’s data security. Even so, the process doesn’t have to be complicated. Here’s what you can do to get started with data loss prevention: Step 1: Analyze and Categorize Data The obvious place to start when rolling out a data loss prevention plan is data discovery. Data discovery lets you identify the type of data you collect, where it is located, and how it is accessed, used, and shared within the organization. A data discovery tool automatically finds data in different locations, including on-premise, cloud, and hardware storage. You can then collect all this data in one place, so it’s easy to manage. Next, perform data classification. This means categorizing the data you collect. This step helps you identify the data that you need to protect and why. Typical data categories include: - Payment Card Information (PCI) - Personally Identifiable Information (PII) - Customer Information - Public Domain Information - Internal-only Information - Intellectual Property Once again, you can use a data classification tool to help you in this step. These tools use a variety of methods to identify and categorize different types of data. Finally, tag the data appropriately so you can track how the data is used. Step 2: Identify Regulatory Compliance Requirements With your data neatly categorized and tagged, you can now identify the DLP regulatory compliance requirements relating to your industry. This will help you to set the baseline for your data loss prevention policy and procedures. For example, if your business processes credit cards, you are bound by the PCI-DSS. Companies in the healthcare industry are required to comply with HIPAA regulations. Common data protection regulations and standards include the GDPR, HIPAA, and PCI-DSS mentioned above. Remember that regulatory compliance is just a baseline for your data loss protection needs. This step is just to make sure you don’t unknowingly breach your industry’s regulations. Such a breach could result in hefty fines, loss of customer trust, and reputation damage. You’ll also need to identify other essential data assets to protect, such as intellectual property, growth strategy, financial reports and information, and strategic planning information. Think about the consequences if a specific piece of information were to be leaked. This should help you figure out where to focus your data security. Step 3: Develop a Data Loss Protection Policy Your data loss prevention policy sets the framework for how your organization handles sensitive data. This policy also helps create a repeatable process for securing sensitive data.
Some of the features of a successful policy include: - How data is classified - Critical data to protect - Clearly defined roles for employees involved in data loss prevention - Criteria for vetting data loss prevention solutions vendors - Data loss prevention success metrics - Behaviors that put sensitive data at risk Keep your policy simple at the start. Start with the most critical data you want to secure and create your policy around it. You can slowly start to build on your policy when you begin to see success. Step 4: Choose a Data Loss Prevention Solution The key to rolling out your data loss prevention policy lies in DLP solutions. These solutions help to automate the process. There are two main types of DLP solutions. The option you choose depends on the nature of your business and your data security needs. Integrated DLP Solutions: These solutions are generally designed for purposes other than data loss prevention but are adapted to add some DLP functionality. Though not always, integrated DLP solutions tend to focus on a single data security area such as device control software, email security solutions, and secure web gateways. Enterprise DLP Solutions: Enterprise DLP solutions take a more comprehensive approach to data security. These products cover a wide range of network protocols, including email, HTTP, HTTPS, and FTP traffic. Enterprise DLP is often complex and expensive but offers very effective DLP capabilities. These systems also often replace other management interfaces by providing a unified management console. While Enterprise DLP solutions are comprehensive and attractive, they aren’t always the obvious choice. For example, many organizations already use multiple security technologies, including firewalls, antivirus software, identity management, secure web, and email gateways, and IT asset management. Integrated DLP solutions easily integrate with existing technologies. This helps you make the most use of the existing technologies and save costs on more expensive Enterprise solutions. Important features to consider when choosing your DLP solution include: - Content analysis helps you to analyze data to find sensitive information. - Data in its life cycle where the solution can handle data in its various states, including in motion, at rest, and in use. - Policy management to create and enforce data security policies. - Admin management offers a central interface where administrators can manage the entire solution. - Real-time analytics that sends out notifications and analytics for sensitive data in real-time. Step 5: Automate Your Data Protection Policy If you picked the right product in the previous step, you can now start rolling out your data protection policy. The best part about DLP technology is it allows you to automate the data loss protection process. This process involves setting up rules for what employees can’t do with sensitive data. These rules tell your DLP solution what to do when it encounters a violation. The software may revoke sharing, send notifications and alerts, quarantine information, un-sanction an application, or temporarily suspend a user account. Depending on your industry, you can start with relatively loose restrictions and tighten the restrictions incrementally. You may want to take the opposite approach in a strictly regulated industry such as healthcare. In this case, start with rigid rules and slowly open up access if needed. Finally, train your team on your data loss prevention policy. 
It is often said that your data security is only as good as your weakest link. Employees that understand their obligations complement your DLP solution. How to Protect Your Data with Real-time Access Control Nira is a real-time access control system that provides visibility and management over who has access to company documents in Google Workspace, with more integrations coming soon. Contact us for a demo, and we’ll review your current setup or help you implement a real-time access control system for the data you already have.
<urn:uuid:98b55ae3-9c25-430e-a628-ab3aedebbc10>
CC-MAIN-2022-40
https://nira.com/data-loss-prevention/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00438.warc.gz
en
0.913545
2,876
3.4375
3
The ASC 2019 Student Supercomputer Challenge (ASC19) is underway, as more than 300 student teams from over 200 universities around the world tackle challenges in Single Image Super-Resolution (SISR), an artificial intelligence application, during the two-month preliminary round. Teams are required to design their own algorithms and train AI models using PyTorch, then use supercomputers to restore 80 blurred images back to high-resolution ones in the shortest possible time while meeting set standards for similarity. Simple, effective and easy to use, PyTorch has quickly gained popularity in the open source community since its release and has become the second most frequently used deep learning framework. Super-Resolution (SR) is a visual computing technology that has received great attention in recent years, aiming to recover or reconstruct low-resolution images into high-resolution ones. As deep learning techniques, especially Generative Adversarial Networks (GAN), are introduced into SR research, this technology can be widely used in satellite and aerospace image analysis, medical image processing, compressed image/video enhancement and other applications. GAN can bring more and finer texture details and make pictures look more delicate, real and natural to the naked eye. Therefore, it has become a hot area of research in the field of image SR. The difficulty of the SR task lies in the fact that the participating students, who mostly major in computer science and math, have to study extensive papers on SR technology and deep learning within two months to design their algorithms, complete AI model training on the supercomputer system, and optimize their algorithms continuously. Furthermore, to meet the requirements and obtain better results, the students must take into consideration the trade-off between the distortion parameter and the perception parameter. A paper published in CVPR 2018 described an interesting “paradox” in SR technology: the higher the similarity between the restored or reconstructed high-resolution image and the original image, the worse the definition observed by the naked eye, and vice versa. The reason behind it is the difference in the focuses of the distortion parameter and the perception parameter. Ding Wenhua, Academician of the Chinese Academy of Engineering, expressed a hope that the SR challenge can help students lay a solid foundation for deep learning, model training and optimization and promote the application of SR technology in more scenarios. Cheng Jian, representative of the supporting organizations of the SR challenge and researcher at the Institute of Automation, Chinese Academy of Sciences, said that the development of AI has led to a surge in demand for computing: training an image classification model requires exascale floating-point operations, while fast, large-scale image SR requires even more computing. He also noted that hopefully the SR task can drive college students to better combine supercomputing with artificial intelligence and provide new ideas for SR technology application. The ASC Student Supercomputer Challenge is the largest student supercomputer hackathon in the world, aiming to promote exchange and development among young supercomputing talents across countries and regions, improving their application level and research capabilities, and taking advantage of the driving force of supercomputing to promote technological and industrial innovation.
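For readers unfamiliar with what such a model looks like in code, below is a minimal, hypothetical PyTorch sketch (an SRCNN-style network far simpler than the GAN approaches the teams actually explore; the layer sizes are arbitrary) showing the basic shape of a super-resolution model that refines an already-upscaled image:

import torch
import torch.nn as nn

class TinySR(nn.Module):
    """A minimal SRCNN-style network: refine a bicubically upscaled image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # x: a low-resolution image already resized to the target resolution.
        return self.body(x)

model = TinySR()
upscaled = torch.rand(1, 3, 128, 128)  # dummy batch of one RGB image
print(model(upscaled).shape)           # torch.Size([1, 3, 128, 128])

A real entry would train such a network (or a GAN generator) on pairs of low- and high-resolution images, balancing the distortion and perception measures discussed above.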
The preliminary round of ASC19 has begun, with more than 300 university teams tackling challenges for CESM, the Community Earth System Model for studying climate change; SR, single image super-resolution; and HPL and HPCG, internationally accepted HPC benchmarks. The top 20 teams in the prelims will move on to the finals at Dalian University of Technology from April 21st to 25th. Learn more about ASC at https://www.asc-events.org/
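As a rough illustration of the kind of model the preliminary-round teams train, a minimal SRCNN-style super-resolution network in PyTorch might look like the sketch below. This is not the ASC19 reference code; the layer sizes, the pixel loss, and the image dimensions are all assumptions made for the example, and a GAN-based solution would add an adversarial loss term on top.

# Minimal SRCNN-style super-resolution sketch in PyTorch (illustrative only).
# Assumes 3-channel images already upscaled to the target size by interpolation.
import torch
import torch.nn as nn

class TinySRNet(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),        # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        # x: low-resolution image already resized to the target resolution
        return self.body(x)

if __name__ == "__main__":
    model = TinySRNet()
    lr_batch = torch.rand(8, 3, 128, 128)   # fake "blurred" inputs
    hr_batch = torch.rand(8, 3, 128, 128)   # fake ground-truth images
    criterion = nn.L1Loss()                 # pixel loss; a GAN adds an adversarial term
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    optimizer.zero_grad()
    loss = criterion(model(lr_batch), hr_batch)
    loss.backward()
    optimizer.step()
    print("one training step done, loss =", float(loss))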
<urn:uuid:b464171b-e5ad-4a64-b325-f30bddee0637>
CC-MAIN-2022-40
https://www.hpcwire.com/2019/01/24/asc19-teams-tackle-single-image-super-resolution-challenge-featuring-pytorch-and-gan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00438.warc.gz
en
0.934924
719
3.171875
3
Whenever a computer user accesses the internet, records of their activity are automatically stored on their PC. This information might include the keywords they have searched for, pages they have visited and even the data they have entered into online forms. For users wishing to keep certain activities away from prying eyes, the most popular internet browsers have now introduced ‘In Private’ browsing which offers users a way to stop the most obvious traces of activity from being stored. Microsoft Internet Explorer 8 introduced this in the form of ‘In private’ browsing, while Google Chrome offers an ‘Incognito’ mode and the soon to be released Mozilla Firefox 3.1 offers ‘Private Browsing’. In Internet Explorer 8, the feature works by either preventing information from being created or by automatically deleting it once the user has finished browsing. For example, ‘history’ entries, which keep a record of the pages visited, are not created and form data and passwords are not stored. However, cookies, which provide websites with certain site-specific information about the user such as shopping cart contents, are stored as ‘session cookies’ meaning that they are cleared when the browser is closed. In Google Chrome, ‘Incognito’ mode offers similar features, as web pages visited and files downloaded are not recorded, and all cookies are deleted when the Incognito window is closed. While such features may afford a degree of privacy from other users, both Microsoft and Google have stressed that the modes are not designed to hide user activity from computer forensic experts or security specialists. In fact, after a round of testing, Dutch computer forensic expert Christian Prickaerts deemed the privacy afforded by Internet Explorer 8’s ‘In private’ browsing feature to be purely ‘cosmetic’, and warned that it should not be confused with anonymous web surfing. In fact, the security offered by ‘In private’ mode is mainly aimed at local level internet information, so that data regarding a user’s internet activity may still be stored by the visited websites, the Internet Service Provider or the network administrator in the case of internet cafes or corporate workspaces. In addition, while entries are not made into the browser’s ‘History’ file, details of web pages visited are still left intact in other areas of the computer’s registry and ‘cache data’, which includes images and other information that IE stores to speed up browser times, is also left untouched. Such data is usually easily accessible with computer forensic software, even if it has been deleted manually. For home users then, the ‘In Private’ feature offers a useful way to keep information private from other users who are unlikely to deliberately pry, but for internet café users the feature should not be considered to offer significant additional security. As such, it should not be considered a replacement for other forms of internet security and the same level of caution should be exercised with regard to the type of data accessed from public locations. For businesses running corporate networks, it is important to ensure that systems administrators are aware of the feature, since ‘In Private’ browsing may remove the more obvious traces of wrongdoing. But for the time being at least, the feature offers no significant barrier to a successful investigation if computer misuse is suspected.
<urn:uuid:e163d867-b319-48e4-944a-fdb3591f8738>
CC-MAIN-2022-40
https://www.intaforensics.com/2010/09/24/how-private-is-in-private-browsing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00438.warc.gz
en
0.941468
684
2.71875
3
Excess sugar takes a heavy toll on the body and makes it more prone to other diseases. The human body is not made to consume excessive amounts of sugar. Fructose in particular acts as a hepatotoxin and is metabolized directly into fat, which can cause a whole host of problems with far-reaching effects on your health. As a general recommendation, keep your total fructose consumption below 25 grams per day, including fructose from whole fruit, and avoid other sources of fructose, particularly processed foods and beverages like soda. Fructose fools your metabolism by turning off your body's appetite-control system. Moreover, eating too much sugar causes a barrage of symptoms known as classic metabolic syndrome. These include weight gain, abdominal obesity, decreased HDL and increased LDL cholesterol levels, elevated blood sugar, elevated triglycerides and high blood pressure. Your liver also metabolizes sugar the same way it metabolizes alcohol, as both serve as substrates for converting dietary carbohydrate into fat. This promotes insulin resistance, fatty liver and dyslipidemia.
<urn:uuid:384f8f40-a347-4ab6-9826-13de199278f2>
CC-MAIN-2022-40
https://areflect.com/2018/07/04/todays-health-tip-sugar-impact-on-your-body/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00438.warc.gz
en
0.918756
207
3.125
3
According to the Ponemon Institute, more than 1 in 4 companies will experience at least one cyberattack incident in the next two years. And with an increase in work-from-home business models, the remote workforce is a prime target for opportunistic thugs. One more downer: most companies have unprotected data and poor cybersecurity practices in place, making them vulnerable to data loss. Malware is the overarching term to label most of the types, categories, and threat levels of these attacks. As the name implies, malware has made cybercrime a very profitable business for hackers. iTBlueprint has compiled a list of the most common malware definitions so you can know what you’re up against and how to protect yourself. 1. Ransomware Ransomware is malicious software that holds your computer for ransom. Disguised as a legitimate file, ransomware infects your systems, blocking or encrypting access to files with threats to publish or delete that information if a ransom is not paid. In 2020, COVID-19 created a breeding ground of ransomware hounds—at an increase of more than 150%—looking to take advantage of the chaos. Remember WannaCry? The attack hijacked computers running Microsoft Windows, demanding a ransom paid in Bitcoin. Risk modeling firm Cyence estimated the potential cost of this attack to be about $4 billion. 2. Viruses Like a virus in the human body that attaches itself to cells and replicates, a computer virus attaches to software, reproducing when the software is run. And like COVID, it’s most often spread through sharing, in this case software or files between computers. Once the program is run, the virus can go about stealing passwords or data, logging keystrokes, corrupting files, or even taking over the machine. 3. Rogue Apps As a result of remote working, mobile devices—from smartphones to tablets—have become essential tools for employees. In addition to weekly Webex team calls, employees are using them to access sensitive company files and SaaS applications. This makes them perfect in-roads for devious exploitation using rogue apps. Rogue apps are counterfeit mobile apps designed to mimic trusted brands or apps, but they carry a malware payload. Unaware users install the app, leaving the door open for hackers to steal sensitive information, such as credit card data or login credentials. 4. Phishing Phishing attacks use emails, looking like they’re from reputable companies (FedEx, Microsoft, your bank), to bait victims into clicking a link that takes them to a non-legitimate website or downloading an infected attachment to steal financial or confidential information. Phishing attacks account for more than 80% of reported security incidents, costing about $17,700 every minute. Spear-phishing is more targeted, aimed at an individual or group and usually appearing to come from a trusted source. Whaling is when hackers impersonate senior management, such as a CEO or CFO, to leverage their authority to gain access to sensitive data or money. 5. Baiting Another phishing lure is baiting. The email or text includes something to entice the victim to act, such as a free download of some kind, or it looks like it was accidentally sent to them with a link to “Confidential Information.” Once they click on it, their computer or device becomes infected and allows the hacker to infiltrate the network. 6. Pretexting Pretexting is old school, relying on person-to-person interactions. Instead of starting with an email or text, the victim receives a phone call from someone impersonating a fellow employee, IT representative, or vendor. 
The impersonator asks questions that trick the victim into providing confidential or other sensitive information. This information is used to get into and move freely throughout your network. Cybercriminals even outsource pretexting to call centers. It’s such a problem that the Gramm-Leach-Bliley Act (GLBA), known for improving financial data security, has made pretexting illegal. 7. Distributed Denial of Service (DDoS) A Distributed Denial of Service (DDoS) attack uses botnets (botnets are next) to crash a company’s web server or online system by overwhelming it with data. The most common kind of DDoS attack is the flood, which sends a massive amount of traffic to the targeted victim’s network, consuming so much bandwidth that users are denied access. Protocol attacks go after the network to exploit any weaknesses in protocols. Application layer attacks target web servers, web application platforms, and specific web-based applications with a goal to crash the server completely. Industry experts predict that by 2023, DDoS attacks worldwide will escalate to 15.4 million. 8. Botnets Not always considered malicious, bots can actually be very helpful, such as when search engines use them to crawl the internet and index pages of information for our searches. When bad bots come together, they create a botnet that can carry out attacks against websites and even IoT devices. Run as a payload for another form of malware or through a contaminated file downloaded by the user, botnets can spread to other machines. In 2016, the Mirai botnet infected IoT devices, such as thermostats, webcams, home security systems, and routers. Using the internet connection from roughly 100,000 IoT devices, the botnet launched a DDoS attack on the company that manages the connections between forbes.com domain names and the server that hosts the forbes.com website. This resulted in thousands not being able to connect to a variety of websites, bringing some businesses to a standstill. 9. Rootkits Rootkits are sneaky, since they wait to strike by opening the door for attackers to gain administrator-level access to systems without your knowledge. Once inside, they can do almost anything they want with the system, including recording activity, changing system settings, accessing data, and mounting attacks on other systems. And what makes rootkits so insidious is that many can hide out in the open disguised as necessary files. 10. Trojan Horses Named after a very dangerous wooden horse from Greek mythology, Trojan Horses hide harmful code inside a harmless-looking file to create backdoors that allow attackers unauthorized access to share your financial information, passwords and other sensitive materials with criminals. The banking Trojan Grandoreiro took advantage of the COVID-19 crisis using fake websites. Grandoreiro was just one of the scams hackers used to play on people’s pandemic fears. Hiding in videos on fake websites that promise to provide vital information about the virus, the Trojan downloads a payload on the victim’s device when they play the video. 11. Spyware Exactly as the name implies, spyware is software that spies on you and gathers information about you without your consent. Though some spyware is benign, such as cookies that monitor shopping habits, other types of spyware have been used to steal intellectual property and highly classified information as a form of corporate espionage. Spyware spreads by piggybacking on another piece of software or a file.
12. Data Leakage Data leakage, also known as low and slow data theft, is the unauthorized sharing of data from within a company to an outside recipient. With the increase in mobile device usage and people working from home, data leakage has become a significant problem for many organizations. Whether leaked via a gap in security caused by a cyberattack, by mismanagement, or by accident when a multi-tasking employee shares something they shouldn’t, data leakage can cost companies a lot—from declining revenue to a tarnished reputation or massive financial penalties to crippling lawsuits. No need to be alarmed. iTBlueprint’s full-spectrum security solutions deliver unmatched protection—from core to cloud. Nothing slips past iTBlueprint’s team of specialists. Experienced in modern attack vectors, we combine sandboxing techniques, behavior best practices, and a range of tools and services for protection onsite or in the cloud. Hackers aren’t going to stop anytime soon, so let iTBlueprint fill in those gaps with a range of services to monitor, alert, mitigate and resolve attacks when they happen. Find out how we can help defend your business from cyberattacks.
<urn:uuid:8812795c-f93f-4a41-8b34-5fae5537a14c>
CC-MAIN-2022-40
https://itblueprint.ca/the-dirty-dozen-common-malware-definitions-you-should-know-to-protect-yourself/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00438.warc.gz
en
0.930227
1,726
2.578125
3
Whatever Happened to XML Schemas? Early in the growth of XML as a data format, even before the widespread adoption of Web Services, one of the most popular and heated debates was on how best to represent the structure and syntax of data in an XML document. Commonly known as XML schemas, a wide range of proposals emerged for how to best indicate which elements were required in an XML document, as well as the nature, repetition, and hierarchy of those elements. The goal of these formats was simple: provide an easy way of defining the requirements of an XML document, and then validating those documents against those requirements so that two unrelated parties can reliably exchange and process XML documents. Most of the XML schema proposals hinged on the general assessment that the traditional way of detailing schema, the Document Type Definition (DTD), was too arcane, limiting, and cumbersome to use. After much hassle, it seemed that the W3C XML Schema (informally called WXS) was the approach that won out. But since the declaration of WXS as the winning standard, the conversation about the importance of XML schemas has died down, begging the question of the importance of XML schemas or even their necessity in system-to-system interchange. Basically, why are people not talking about XML schemas as much anymore? Is it because schemas are no longer important, or is it simply that we’ve moved on to more complex issues now? Validating the XML Payload: Is it Necessary? There are really two parts to answering the question about the relevance of XML schemas: their importance to the data that companies exchange and care about (the message payload), and their relevance to the interfaces to access the data (specifically, Web Services). Remarkably, each application of XML schemas is leading to different patterns of adoption and value to enterprise users. When applying XML schemas to message payloads, companies use XML schemas as a way of verifying that incoming XML documents are valid enough to be processed by their automated systems. Clearly, such validation is important because systems aren’t capable of coping with data that are not properly structured or that they can’t easily parse and understand. The XML syntax of schema languages like those specified in the WXS is meant for machines to easily consume, parse, and transform messages that follow that syntax, by using standard XML technologies. Yet, the XML basis of these schema languages are also their downfall, because XML is verbose: it consumes considerable processing power, bandwidth, and storage space. In addition, developing an XML schema in an XML language is an incredibly complex proposition. While other formats, such as the REgular LAnguage description for XML Next Generation (RELAX NG) Compact Syntax, emerged to solve the development difficulties of WXS, creating and debugging XML schema documents still requires significant experience and training. However, as we detailed in an earlier ZapFlash, evidence shows that most companies are not even making use of the basic validation benefits of XML schemas. Instead, companies that are using schemas at all are using them only during the testing and debugging phases of their projects, and turning off runtime validation during the production phase. Why? 
Because validating XML documents every time they are parsed eats up too much processing power, developing XML schemas is too complex for iterative, cross-organization implementations, and many validity issues (such as whether or not a requested item is in a given database) can’t be resolved within a single document anyway. Given these serious implementation challenges, it’s not entirely clear that XML schemas are even necessary for handling basic document validity issues. That role might more clearly fall to the Service providing the XML document itself. Given that, how can XML schemas be used not just to validate the payload, but provide added value to a Service interface? XML Schemas: Necessary for the Service Contract If XML schemas are not being used by most companies to make sure that data are provided in a format that their systems can handle, what will be the mechanism companies will use to guarantee validity, interoperability, and semantic integration? How about the Service interface itself, specified in a Web Services Description Format (WSDL) document, and all the other pieces of Web Services technology that define the Service contract? The creators of WSDL smartly realized that they needed to provide a way to represent what Services can expect as inputs and outputs (what you might call the contract semantics) in a way that systems could automatically process. However, it is difficult to make any strict data typing assumptions in the Service interface without jeopardizing the loose coupling of the interface. To put it simply, if one Service provider demands that a given input be a string of a certain size, and therefore all Service consumers comply with that demand, when the Service interface changes, all the consumers break. Such behavior is clearly not loosely coupled. Thus, the developers of WSDL perceived XML schemas as a way to share Service semantics without tightly coupling the requirements into the Service implementations. As a result, WSDL does not introduce a new type definition language, but rather supports WXS as its canonical type system. The WSDL creators also aimed to dodge the bullet of recommending a single XML schema format, thus allowing for the use of other type definition languages via the extensibility of XML. Through the use of XML schemas, WSDL can allow Services to utilize a wide range of primitive types (such as strings, integers, and other data types) as well as complex structures such as enumerations and regular expressions, lists, unions, extension and restriction of complex types, and more. Since the validation of the interface happens only upon Service discovery and binding, there is little overhead in the actual transactions themselves. Validation thus happens upon Service negotiation, and as long as the Service requester complies with the requirements of the Service interface, there is no need to validate every single message that flows through the interface. However, there is a more important reason to consider the use of XML schemas at the Service interface. Too many developers are thinking about building Services from the bottom-up. As ZapThink has discussed numerous times, to gain all the advantages of a Service-Oriented Architecture, namely loose coupling, coarse granularity, and asynchrony, you must conceive of Services separate from the technologies that implement them. Basically, you can’t be thinking of Java’s data types when you create a Service interface. 
Instead, there must be some data typing system that exists separate from the implementations that underlie the Services. XML schemas are the answer to this loose coupling need. By thinking of Service interfaces in terms of schema data types, you can escape the pitfall of only including those data types that a given implementation can support. Many features of XML schema, such as lists, unions, disjunctions and restricted complex types, are not present in many programming languages, but might be a requirement of a given Service. If IT shouldn't be an impediment to meeting the needs of business requirements, why should the data types of some arcane language? Starting from the WSDL Service definition, Service producers and consumers can share and enforce the same vision of message content, using the robust typing of XML schema. Validation at the Service Interface Increasingly, ZapThink is seeing that the Service interface is performing a greater number of functions. At first, this interface simply provided the details that Service requesters can use to identify and bind to Service functionality. Now, however, the Service contract takes on a wider variety of functions. Validation of the business payload is one of those functions that the Service interface can handle. Likewise, security, management, process, and quality-of-service decisions and actions can also happen at the Service interface. But, in order to handle all these needs, users need a means to specify the requirements of the Service interface in an application-neutral language. And so, while XML schemas may no longer be the center of our daily discussion, they still remain the center of how we can achieve the basic goals of loosely coupled, asynchronous, and coarse-grained integration.
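To make the payload-validation discussion concrete, here is a small sketch of validating an XML document against a W3C XML Schema in Python using the lxml library. The schema and the order document are invented for illustration and are not taken from this article; the point is simply that validation can be switched on at test time and off in production, as described above.

# Illustrative only: validating an XML payload against a WXS schema with lxml.
from lxml import etree

SCHEMA = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" type="xs:string"/>
        <xs:element name="quantity" type="xs:positiveInteger"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

DOCUMENT = b"""<order><item>widget</item><quantity>3</quantity></order>"""

schema = etree.XMLSchema(etree.fromstring(SCHEMA))
payload = etree.fromstring(DOCUMENT)

if schema.validate(payload):
    print("payload conforms to the schema")
else:
    # In practice this check might run only during testing, as noted above.
    print("invalid payload:", schema.error_log)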
<urn:uuid:595676a2-d88f-4020-af04-8f19655ae13b>
CC-MAIN-2022-40
https://doveltech.com/innovation/whatever-happened-to-xml-schemas/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00638.warc.gz
en
0.926312
1,692
2.515625
3
In the next five years, schools will begin to redesign learning spaces to accommodate more immersive, hands-on activities, and begin to rethink how schools work in order to keep pace with the demands of the 21st century workforce and equip students with future-focused skills, according to The NMC/CoSN Horizon Report: 2016 K-12 Edition, published by the New Media Consortium and the Consortium for School Networking. The report examines trends that will emerge in education technology over the next five years. Long-term Trends: Driving Ed Tech Adoption in K-12 Education for Five or More Years - Redesigning Learning Spaces–As education shifts from a teacher-centric model to being more student-focused, classrooms will need to be adjusted. Rather than listening to lectures from teachers, students will become increasingly engaged in active learning through interactive lessons and collaborative exercises. The report explains that learning spaces of the future might include more flexible spaces, with movable walls and mobile furniture. Students will be able to move easily from small group discussions with a teacher to practicing coding on a computer or working with a group of classmates. There will also be an increased focus on environmental sustainability and social awareness, according to the report. - Rethinking How Schools Work–The report explains that there is a movement toward changing the current classroom paradigm and shifting the entire school experience. Methods such as project, competency, and challenge-based learning require that students be able to move from one learning activity to another more organically, removing the limitations of bell schedules. Midterm Trend: Driving Ed Tech Adoption in K-12 Education for the Next Three to Five Years - Collaborative Learning–As with redesigning learning spaces, collaborative learning calls into question the teacher-centered learning model. The report explains that collaborative learning will shift knowledge diffusion from solely being teacher-to-student and will now have students learning from and with each other. “The approach involves activities that are generally focused around four principles: placing the learner at the center, emphasizing interaction, working in groups, and developing solutions to real problems,” the report says. - Deeper Learning Approaches–Rather than simply focusing on rote memorization, a deeper approach to learning asks students to engage in critical thinking, problem-solving, collaboration, and self-directed learning. The report highlights strategies including problem-based learning, collaborative group work, internships, and longer-term assessments such as portfolios or exhibitions, as ways to achieve deeper learning. Short-term Trend: Driving Ed Tech Adoption in K-12 Education for the Next One to Two Years - Coding as Literacy–By 2020 there will be 1.4 million computing jobs, but only 400,000 computer science students to fill them, according to a recent projection from Code.org. With that statistic in mind, it’s no wonder that schools are emphasizing coding in younger grades. President Obama’s administration has also placed a heavy emphasis on coding in school, especially targeting female and minority students. Obama’s Computer Science for All initiative aims to equip K-12 students with the computational thinking skills they will need to be active participants and creators in a digital world. 
According to the report, states will receive $4 billion in funding and school districts $100 million to expand training programs for teachers as well as access to high quality instructional materials. - Students as Creators–A shift is taking place in schools all over the world as learners are exploring subject matter through the act of creation rather than the consumption of content, the report claims. Rather than asking students to simply read a book or listen to a lecture, teachers are attempting to engage students by asking them to become the subject matter expert. As students become more accustomed to creating content in their free time–photos on Instagram, videos on Snapchat, etc.–teachers want to engage students on their level. Many educators believe that honing these kinds of creative skills in learners can lead to deeply engaging learning experiences in which students become the authorities on subjects through investigation, storytelling, and production, the report explains. The report also highlights the challenges facing the education industry, as well as new technologies that will shake things up over the next five years.
<urn:uuid:2d957139-f5fa-4e18-b728-9fa8d0d94d7e>
CC-MAIN-2022-40
https://origin.meritalk.com/articles/redesigned-learning-spaces-collaborative-learning-on-the-horizon-for-k-12/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00638.warc.gz
en
0.952543
864
2.640625
3
Printer Lingo: What Does Collate Mean In Printing Unless you work in the printing industry, then you might not be familiar with the term collate printing. That is completely normal and understandable. If you find yourself wondering about any of these questions: - What does collate mean in printing? - What does it mean to collate copies? - Is there such a thing as uncollated copies? (Spoiler Alert: There is!) - What is collate printing used for? - When to collate vs. when to not? - Can all printers collate copies? You are in the right place. Let’s get started! 1. What Does Collate Mean in Printing ? The term “collate” means to collect, accumulate and combine. Once everything is collected, it is then assembled in a specific order of sequence. 2. What Does it Mean to Collate Copies? In printing lingo, collate is often used to mean “collate copies.” That means that instead of printing individual papers, the printer “accumulates” these documents together to create a complete set. The next time you are printing a document, check out the print preview page. There, there will be an option to print collated copies. 3. Is there such a thing as uncollated copies? Yes! Unless you work for a printing or publishing firm, you will most likely print uncollated copies for personal use. For example, if you were typing out a ten-page essay, you would print them out separately. Each of those ten sheets of paper would not be combined together to make any sort of set, that is what it means to print uncollated copies. 4. What is collate printing used for? Collate printing is mostly used to print out sets of documents. Books are a big one for example. As well as: - Instruction manuals 5. When to collate vs. when to not? Collate copies are mainly used for color copies because the copies can be put together and assembled without being bound together. More expensive printers even allow collating print copies, hole-punching them, and/or stapling them. However, unless you make some sort of colored manuscripts or catalogs, you will most likely not be printing collated copies. 6. What are the Most Common Types of Binding for Collate Copies? - Saddle-stitched printing is perfect if you are trying to collate smaller collate booklets. Books, catalogs, and magazines with less than 100 pages would normally use this kind of binding. - Saddle stitching works by printing the document on both sides, arranging the documents in order, folded in half, and then stapling through the folds. 2. Perfect bound: - Perfect bound book printing is one of the most common collated printing methods, especially for a paperback. - This method is inexpensive, but you certainly would not be able to tell. Books that are perfectly bound are very durable and look very professional. - Perfect bound is the perfect method if your book is on the longer side, over one hundred pages. - This is also the preferred method for heavier books such as yearbooks and directories. - They are also a cheaper alternative to hardcover books. 3. Spiral and Wire-O: - Spiral book binding uses a plastic coil to hold your book together. You probably have seen or used a book bound this way for school. - Wire-O binding also uses a plastic coil, but it is more professional-looking. - If you want a more colorful and fun binding experience, go with spiral binding as they offer many different color coils. The Wire-O method, however, only comes in black. 7. Can all printers collate copies? The short answer: yes. 
Collating is not a special feature that you will need to pay extra for. Almost all standard printers will offer this function. 8. Do Printing Stores Offer this Service? Yes. If you want to get your brochures, books, magazines, or any kind of document collated but do not own a printer, you can do this at your local print store. Check out these printing services: - Office Depot - Office Max - Collating is just a fancy term for assembling and organizing. - Almost all printers offer this option, but if not, you can check out your local print shops or convenience stores to use their printers. - Most people only collate their pages if they put together a book, manuscript, or directory where they need the pages to print out in order. - To access collate printing mode, open your computer's print settings; there, it will give you the option to collate or not.
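To make the collated versus uncollated distinction concrete, here is a small illustrative Python sketch (not from the original article; the page and copy counts are made up) that prints the sheet order a printer would follow in each mode.

# Illustrative sketch: page order for collated vs. uncollated printing of a
# 3-page document when 2 copies are requested.
def print_order(pages: int, copies: int, collate: bool) -> list:
    if collate:
        # Complete sets, one after another: 1, 2, 3, 1, 2, 3
        return [page for _ in range(copies) for page in range(1, pages + 1)]
    # All copies of each page together: 1, 1, 2, 2, 3, 3
    return [page for page in range(1, pages + 1) for _ in range(copies)]

print("collated:  ", print_order(3, 2, collate=True))    # [1, 2, 3, 1, 2, 3]
print("uncollated:", print_order(3, 2, collate=False))   # [1, 1, 2, 2, 3, 3]

In other words, collated output repeats complete sets of the document, while uncollated output groups all copies of the same page together.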
<urn:uuid:f6b8da1e-ffc8-43e3-a952-01fe471746a8>
CC-MAIN-2022-40
https://1800officesolutions.com/collate-mean-in-printing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00638.warc.gz
en
0.926254
1,057
3.359375
3
As denial of service attacks based on hash-flooding are not a new topic, Jean-Philippe Aumasson and Martin Boßlet started with an introduction to the subject. Hash tables are commonly used to store any array-based information, such as the data sent in a GET or POST request towards a website. Instead of relying on the parameter name for the array index, a hash gets generated and stored for performance reasons. If an attacker is able to generate several parameter names resulting in the same hash, the effort to search a given value in the hash table degrades from linear time (O(n)) to quadratic time (O(n²)). As an example, a 2 MB POST request containing specially crafted data takes roughly 10 seconds to process on a recent machine, as ~40 billion string compares need to be performed. Such attacks aren’t new, the CCC having featured this topic back in 2011 as well. As a fix after these attacks, improved hash algorithms called MurmurHash2 and MurmurHash3 were released, involving better hash generation and the introduction of some random values in the calculation. Jean-Philippe applied differential cryptanalysis to these algorithms and discovered they were still vulnerable to the same root issues. Another hash algorithm, CityHash, developed by Google, was also found vulnerable to the same issues. Armed with the theoretical knowledge on how to perform such an attack brought by Jean-Philippe, Martin decided to have a look at the Rails implementation to see if it was exploitable in real conditions. A first attempt to exploit this in a POST request failed due to encoding issues and finally due to size limitations implemented in the framework. Is Rails therefore safe? No, because other features such as the JSON parser use a vulnerable hash table implementation, and a demo showed us how submitting a request containing 2^11 (2,048) chosen values took ~5 seconds to execute, while 2^14 (16,384) chosen values took two minutes, close to the defined timeout of Rails. Facetious people might argue that this is just (yet another) example of Ruby being slow compared to other languages, so what about Java? A submission of 2^14 non-colliding values was handled by Java in 0.166 seconds, but 2^14 colliding values took 9 seconds… What is the solution to fix this issue once and for all? Implement submission size limits as was done for POST requests within Rails? While this may sound appealing at first look, it turns out to be unrealistic, as many user-influenced values may rely on hash tables. Instead of fixing all usage scenarios involving hash tables, let’s fix the algorithm generating the hashes. This is where Jean-Philippe and his new algorithm SipHash step in again. SipHash is based on diffusion and confusion with 4 rounds and was recently implemented in Ruby by Martin. Oracle, alerted on September 11th, had not yet answered the researchers at the time of the conference. Update – the following timeline of events, which happened after the conference, illustrates the novelty of the presentation we had on 08.11.2012: On 13.11.2012, CERTA issued an advisory http://www.certa.ssi.gouv.fr/site/CERTA-2012-AVI-643/CERTA-2012-AVI-643.html based on the Ruby security bulletin of 09.11.2012, crediting the authors for the discovery and the fix of the issue in this language: http://www.ruby-lang.org/en/news/2012/11/09/ruby19-hashdos-cve-2012-5371/. 
On 23.11.2012, oCERT issued an advisory covering the affected software and the researchers' contact with Oracle and other vendors: http://www.ocert.org/advisories/ocert-2012-001.html
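The quadratic blow-up the speakers described is easy to reproduce with a toy hash table. The following Python sketch is purely illustrative (it is not the Ruby, Java, or SipHash code from the talk) and uses a deliberately weak hash function so that attacker-chosen keys all land in a single bucket.

# Toy demonstration of hash-flooding: with a weak, predictable hash, an
# attacker can choose keys that all collide, so n inserts cost O(n^2)
# comparisons instead of O(n).
import time

class WeakHashTable:
    def __init__(self, buckets=1024):
        self.buckets = [[] for _ in range(buckets)]

    def _hash(self, key: str) -> int:
        # Deliberately bad hash: only the first character matters, so keys
        # that share a first character always land in the same bucket.
        return ord(key[0]) % len(self.buckets)

    def insert(self, key, value):
        bucket = self.buckets[self._hash(key)]
        for i, (k, _) in enumerate(bucket):   # linear scan of the chain
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

def timed_insert(keys):
    table = WeakHashTable()
    start = time.perf_counter()
    for k in keys:
        table.insert(k, None)
    return time.perf_counter() - start

n = 5000
spread_keys = [chr(33 + i % 90) + str(i) for i in range(n)]   # ~90 different first chars
colliding_keys = ["a" + str(i) for i in range(n)]             # all hash to one bucket
print("spread over buckets:", timed_insert(spread_keys), "seconds")
print("single bucket      :", timed_insert(colliding_keys), "seconds")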
<urn:uuid:13d2580a-14bc-4a16-aad7-521d39472069>
CC-MAIN-2022-40
https://blog.compass-security.com/2012/12/asfws-hash-flooding-dos-reloaded-attacks-and-defenses/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00638.warc.gz
en
0.916939
806
2.65625
3
McKinsey, in its report on the impact of AI on the world economy, projected that by 2030 AI is expected to gradually add 16% (about US$13 trillion) to global economic output. AI is expected to contribute around 1.2% to annual productivity growth between 2020 and 2030, says the McKinsey report, which is based on simulation models of the impact of AI at the national, sector, organizational, and worker levels. The report focuses on the adoption of five broad categories of AI technologies, including NLP, RPA, and advanced ML. It was based on a survey of around 3,000 firms and on economic information from various organizations, including the World Bank, the United Nations, and the World Economic Forum. Prominent companies and government organizations are prioritizing AI and ML, as rising effectiveness and productivity are enabling exponential growth of the worldwide economy. Most nations have barely started to contemplate their AI future, even though the largest economies in the world launched their AI initiatives in 2017 and 2018. Various components influence the AI-driven efficiency of nations, including innovation, labor automation, and new competition. Micro-level factors, such as the pace of AI adoption, and macro-level factors, such as a nation's global connectedness or labor-market structure, also contribute to the impact. The pace of AI-driven growth will by no means be linear; its contribution to growth is expected to be several times higher by 2030 than in the early years. AI adoption is likely to follow an S-curve pattern: a modest beginning, because the expenses and investment in deploying these technologies are enormous, followed by acceleration driven by increasing competition and innovation-led improvement in complementary capabilities. After a certain level of AI adoption, globally optimized value chains will make digital technology adoption cheaper, faster, and easier, and further integration across products and services will become far easier thereafter. Clearly, this will influence the growth of independent global platforms for the exchange of goods and services. Today, the most significant near-term challenge for ML to become part of the transformation journey is sustained, sharp innovation. The world currently needs to focus on technologies like AI to increase creativity and flexibility and to drive strong problem-solving skills. AI is perceived as the fundamental driver of future development, innovation, productivity, competitiveness, and job creation for the 21st century. It is now the responsibility of business leaders and policy-makers to take measurable actions to address the challenges and to support data scientists, researchers, and business analysts for complete AI inclusion in the ecosystem.
<urn:uuid:0e12540a-754e-4e8d-b7d0-b0e7478512ef>
CC-MAIN-2022-40
https://enterprisetalk.com/featured/artificial-intelligence-era-transforming-the-global-economy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00638.warc.gz
en
0.947878
555
2.703125
3
According to U.S. Government forecasters, the Atlantic hurricane season is expected to be much busier than usual. They’ve predicted that we’ll have as many as six major storms during the season. On May 23rd, The National Oceanic and Atmospheric Administration (NOAA) said there’s a 70 percent chance that between 13-20 storms will form in the Atlantic Ocean this season. They believe that 11 of these storms could strengthen and intensify, turning them into hurricanes with winds of 119 KM/hour or higher. Whether or not these storms make landfall over the U.S. is anyone’s guess. Mother Nature can be unpredictably fierce. NOAA explained that the reason hurricane season may be more threatening than normal is because average waters in the Caribbean Sea and the Atlantic are warmer this year. This along with other atmospheric conditions create a prime environment for hurricanes to form and strengthen. In 2012, there were ten Atlantic hurricanes, including Hurricane Sandy, which devastated the New Jersey and New York coasts in October. The hurricane season officially begins on June 1st, and lasts until November 30th. NOAA has released this announcement to ensure Americans are fully prepared for potentially devastating and life-threatening storms. The “life” of businesses is threatened as well. It’s time to ensure all backup systems are running optimally, electronic information is stored offsite in a storm-sheltered data warehouse and that all disaster recovery systems are in place. Call us today before you need a data recovery service. Call today to book your complimentary backup review with our technology professionals.
<urn:uuid:2bb4a5c4-7297-4e71-9153-a27c17d5332d>
CC-MAIN-2022-40
https://integrisit.com/us-government-forecasters-predict-a-threatening-atlantic-hurricane-season/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00638.warc.gz
en
0.960637
332
2.875
3
The COVID-19 pandemic has had a profound impact on education, bringing about a sudden boom in remote and online learning. While the transition has forced many schools to implement innovative solutions, it has also revealed stark vulnerabilities in their cybersecurity strategies, which is especially concerning given that schools have become a new target for cyber criminals. A big problem is that even before the pandemic, cybersecurity hasn’t been a priority in education. A lack of funding and skilled personnel has meant that schools have basic system set-up errors or leave old issues unpatched. Now, in the mass digital movement, these gaps can be even more damaging, and schools are quickly realizing that they need the knowledge and updated technological infrastructure to continue virtual learning securely in the long-term. Here’s why cybersecurity is evolving in education, and what schools can do to keep up: A new landscape, new threats The education industry is so attractive to cyber criminals because of the volume of data it holds: staff and student information, alumni databases, supplier details, and research data – all extremely valuable. Cybercriminals also know that as schools embrace digitalization, there will be a number of opportunities to exploit the move because many institutes run on legacy systems that aren’t equipped for modern, sophisticated threats. In fact, during the pandemic, the UK National Cybersecurity Center issued a specific warning about the heightened number of ransomware attacks aimed at universities. These attackers steal or delete data from users’ systems and then render the system or computer inaccessible, while demanding financial compensation in exchange for access and the data returned. Currently, some of the most common ransomware infection vectors are through Remote Desktop Protocol (RDP), vulnerable software and hardware (typically from a third party vendor), and phishing emails that trick users into sharing sensitive information. Another issue is that students are increasingly using personal devices to connect to school networks, and these are more likely to compromise systems, as they create multiple entry points that make it easier for hackers to gain access. Systems are essentially only as strong as their weakest point, and personal devices are often not compliant with system protocols and protections, so can render entire networks vulnerable. That said, there are ways that schools can safely operate in the new digital landscape without being so exposed to these emerging threats. Investing in education One of the most effective ways to boost cybersecurity in education is by adopting a proactive mentality, rather than a reactive one. Schools cannot afford to wait until an attack happens to put processes in place to defend themselves. Instead, they need to create a “cyber curriculum” that informs everyone – IT teams, teachers, and students alike – about staying secure online. This curriculum should include documentation that people can refer to at any time, guiding them on the risks and warning signs of cyber attacks, as well as best practices for smart online use. Likewise, the curriculum should include on-demand training courses, current cybersecurity news and trends, and the contact information for the people who are responsible for taking action if the network is compromised. At the same time, IT admins need to be conducting regular penetration tests and appoint a “red team” to expose possible vulnerabilities. 
This team should test the school’s system under realistic conditions and without warning, so as to identify weaknesses that may not be immediately obvious. By running such tests, schools can then develop an incident response plan to manage recovery and mitigation if the need arises. IT admins should additionally be accountable for reviewing all third-party vendors, backing up systems, and enacting entitlement reviews to assess network permissions. It’s worth noting that all these duties should be scaled according to the impact of the pandemic: if the school system is supporting three times its usual capacity, all cybersecurity measures need to perform at the same level. Establishing cyber maturity, for good The digital transformation in education is set to be a long-term change, and schools need cybersecurity processes and technology that actively evolve with the “new normal” cyber sphere. Implementing vulnerability management, patching procedures, multi-factor authentication, anti-virus software, and disabling scripting environments and macros are all solid techniques to stay protected. Encompassing these solutions, Unified Endpoint Management (UEM) can be a powerful element of any cybersecurity strategy. UEM adds a greater layer of security to all devices used in education – whether laptops, tablets or phones – as it enables schools to manage the complete lifecycle of all endpoints and applications over the air and in real-time. Schools therefore have optimal visibility over device usage and can utilize mobile threat detection to thwart any potential attacks. Remote view enables lecturers and IT staff to view students’ screens and guide them through steps – meaning risky behavior is curbed before it can manifest into a larger issue. Meanwhile, the remote wipe option removes information from devices and prevents sensitive data leaks if a device is lost or stolen. Not to mention, USB, tethering, and Bluetooth can all be restricted on UEM devices to reduce data transfer breaches. UEM also offers the functionality to block unwanted websites or select secure-only websites and prevent any other URLs being viewed. On top of that, all data and apps have 360-degree protection, while passcodes, device, and disk encryption provide physical device security, blocking any unauthorized access to the school network. UEM admins can also create open-in policies that stop content or applications being opened from unmanaged sources, so if a student or teacher tries to access materials from a compromised device, that action is halted. The COVID-19 crisis has brought to light shortcomings in the education sphere when it comes to cybersecurity; however, it also presents an opportunity for schools to integrate solutions that can better protect systems and students now, and in the future. Acknowledging the new climate, striving for in-depth cybersecurity knowledge, and building cybersecurity maturity with comprehensive tools such as UEM are the core building blocks for schools to shield against growing digital threats. Just like a virus, jeopardized cybersecurity arises from infections in a system, and the best way to curb the outbreak is to control it as early as possible. From this point onwards, schools must take cybersecurity seriously and continuously evolve security measures to be and stay healthy.
<urn:uuid:b0fdadf2-6903-4386-93b1-386ec6e0f361>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2020/12/23/education-cybersecurity-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00638.warc.gz
en
0.946955
1,281
2.734375
3
When you are studying Cisco and access-lists you will encounter the so-called Wildcard Bits. Most CCNA students find these very confusing so I'm here to help you and explain how they work. Let's take a look at an example access-list:
Router#show access-lists
Standard IP access list 1
10 permit 192.168.1.0, wildcard bits 0.0.0.255
20 permit 192.168.2.0, wildcard bits 0.0.0.255
30 permit 172.16.0.0, wildcard bits 0.0.255.255
Access-lists don't use subnet masks but wildcard bits. This means that in binary a "0" will be replaced by a "1" and vice versa. Let me show you some examples: subnet mask 255.255.255.0 would be 0.0.0.255 as the wildcard mask. To explain this I need to show you some binary. The first octet of the subnet mask (255.255.255.0) in binary is 11111111; as you can see, all bits are set to 1, making the decimal number 255. The same octet with wildcard bits is 00000000. If you want the wildcard-equivalent you need to flip the bits: if there's a 1 you need to change it into a 0. That's why we now have the decimal number 0. Let me show you another subnet mask…let's take 255.255.255.128. What would be the wildcard-equivalent of this? We know the 255.255.255.X part so I'm only showing you the .128 part. That's the last octet of our subnet mask, so let's flip the bits: 10000000 (decimal 128) becomes 01111111, which is decimal 127, so the wildcard-equivalent of 255.255.255.128 is 0.0.0.127.
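For readers who want to check their own conversions, here is a small illustrative Python sketch (not part of the original lesson) that flips every bit of a dotted-decimal subnet mask; flipping all eight bits of an octet is the same as subtracting it from 255.

# Illustrative sketch: compute the wildcard mask for a dotted-decimal subnet mask.
def wildcard_from_mask(mask: str) -> str:
    """Return the wildcard mask obtained by flipping every bit of the subnet mask."""
    return ".".join(str(255 - int(octet)) for octet in mask.split("."))

if __name__ == "__main__":
    for mask in ("255.255.255.0", "255.255.0.0", "255.255.255.128"):
        print(mask, "->", wildcard_from_mask(mask))
    # Expected output:
    # 255.255.255.0   -> 0.0.0.255
    # 255.255.0.0     -> 0.0.255.255
    # 255.255.255.128 -> 0.0.0.127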
<urn:uuid:03312331-a97c-470b-acc1-7858554413d9>
CC-MAIN-2022-40
https://networklessons.com/cisco/ccie-routing-switching/wildcard-bits-explained
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00638.warc.gz
en
0.774471
524
3.375
3
What Is Structured Data? What Is Unstructured Data? |Structured data is typically stored in tabular form and managed in a relational database (RDBMS). Fields contain data of a predefined format. Some fields might have a strict format, such as phone numbers or addresses, while other fields can have variable-length text strings, such as names or descriptions. Structured data might be generated by either humans or machines. It is easy to manage and highly searchable, both via human-generated queries and automated analysis by traditional statistical methods and machine learning (ML) algorithms. Structured data is used in almost every industry. Common examples of applications that rely on structured data include customer relationship management (CRM), invoicing systems, product databases, and contact lists. |Unstructured data includes various content such as documents, videos, audio files, posts on social media, and emails. These data types can be difficult to standardize and categorize. Unstructured data often consists of data collections rather than a clear data element—for example, a document with thousands of words addressing multiple topics. In this case, the document’s contents cannot easily be defined as one entity. Generally, tools that handle structured data cannot parse unstructured documents to help categorize their data. Unstructured data is manageable, but data items are typically stored as objects in their original format. Users and tools can manipulate the data when needed; otherwise, it remains in its raw form—a process known as schema-on-read. Structured Data Pros and Cons Pros of structured data: - Easy to use for business users—structured data can be used by business users who understand the subject matter related to the data. It is useful for entry level users with access to basic tools like Excel, and can be even more useful for power users familiar with SQL or business intelligence (BI) tools. - Extensive tools support—structured data is several decades old and most data management and analytics tools support it. There is a huge variety of RDBMS, data analytics, and big data management tools for structured datasets. - Instantly usable—structured data can be used, with no further processing, by a variety of business processes. For example, customer data in structured form can be visualized and manipulated by a CRM system. Cons of structured data: - Data preparation—data often needs to undergo complex transformations before it can enter a flexible data store. - Not flexible—structured data requires users to create schema data definitions in advance. It is difficult to change the structure over time, and because there is a fixed, predefined structure, data can only be used for its intended purpose. This limits the use cases that can be served by structured data. - High overhead—structured data is often stored in data warehouses, which can store structured data at large scale and enable fast access for user queries. A data warehouse is a complex system requiring significant resources to run, operate and maintain. - Complex data structures—as organizations grow, the number of databases, tables, and fields grows exponentially. It becomes difficult to manage structured data, and it is common to have overlaps between datasets, redundant data, and stale or low quality data. Unstructured Data Pros and Cons Pros of unstructured data: - Native format—unstructured data can be stored in its native format until needed, with no pre-processing. 
- Flexible—unstructured data can be used for many different purposes and can contain a much wider variety of data, including textual data, images, videos, and source code. - Low overhead—unstructured data can be stored and processed at much lower cost using elastically scalable data lakes. Cons of unstructured data: - Lack of visibility—it is difficult to tell what is stored in a data lake and whether the data is useful. Data lakes can turn into “data swamps” with large amounts of data, which is not useful for the organization, yet incurs costs to store and manage it. - Requires advanced analytics—there is typically a need for data science skills and advanced algorithms to analyze and extract insights from unstructured data. This also means it is not useful for most business users, who do not have the skills to perform advanced analytics. - Requires dedicated tools—retrieving and processing unstructured data requires specialized tooling and expertise. Structured Data vs. Unstructured Data: Key Differences The following elements differentiate structured and unstructured data. Usually, structured data is in the form of numbers and text, presented in standardized, readable formats. XML and CSV are the most popular formats. In structured data models, the data format is predetermined. On the other hand, unstructured data often comes in various shapes and sizes. It does not conform to a predefined data model and stays in the native (original) formats. Examples include video (e.g., WMV, MPW) and audio files (e.g., MP3, WAV). Structured data follows a predefined relational data model describing the relationship of data elements. Unstructured data does not have a set data model but can have a hidden structure. Organizations store structured data in relational databases. Data warehouses help centralize large volumes of stored structured data from different databases. Organizations store unstructured data in raw formats, not in databases. Data lakes can store large amounts of unstructured data. Structured data typically resides in a relational database, arranged in tables with rows and columns. Labels specify the data types. A table’s schema consists of the data column and type configuration. Relational databases process data using SQL, an easy syntax for users to read. Unstructured data often resides in a non-relational (NoSQL) database. This database type stores multiple data models without tables—this is usually a document, wide-column, graph, or key-value database. It can process large data volumes and handle high loads. A NoSQL database contains collections of documents that resemble rows but don’t use a tabular schema, so there can be different data types in the same collection. The non-relational model enables faster queries. Searchability and Ease of Use Structured data is usually easier to search and use, while unstructured data involves more complex search and analysis. Unstructured data requires processing to understand it, such as stacking before placing it in a relational database. Structured data is older, so there are more analytics tools available. Standard data mining solutions cannot handle unstructured data. Quantitative vs. Qualitative Structured data is quantitative, meaning that it has countable elements. It is easier to analyze by classifying items based on common characteristics, investigating the relationships between variables, or clustering the data into attribute-based groups. 
Unstructured data is qualitative, meaning the information it contains is subjective, and traditional analytics tools and methods can't handle it. For example, customer feedback on social media can generate data in text form, requiring advanced analytics to process it. Techniques include splitting and stacking data volumes into logical groupings, data mining, and pattern detection.
Protecting Structured Data and Unstructured Data with Imperva
Imperva Data Security Fabric protects all data workloads in hybrid multicloud environments with a modern and simplified approach to security and compliance automation. Imperva DSF's flexible architecture supports a wide range of data repositories and clouds, ensuring security controls and policies are applied consistently everywhere.
<urn:uuid:29a8e3a0-0208-4d05-9b9d-dcf543775fc3>
CC-MAIN-2022-40
https://www.imperva.com/learn/data-security/structured-and-unstructured-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00638.warc.gz
en
0.901615
1,591
3.65625
4
What does SCADA stand for? SCADA stands for Supervisory Control and Data Acquisition. It's a computer system for gathering, analyzing, and processing data in real time. Such systems were first used in the 1960s, and as the evolution of SCADA continues, systems are becoming more efficient and more valuable to their enterprises than ever. The SCADA industry was born out of the need for a user-friendly front-end to control systems containing PLCs.
What is the meaning of PLC SCADA? A Programmable Logic Controller (PLC) is an industrial controller that a SCADA supervisory system can monitor and manage remotely. Together, SCADA software and PLCs allow remote monitoring and control of an amazing variety of devices in industrial plants, including water and gas pumps, track switches, and traffic signals.
One of the key processes of SCADA is the ability to monitor an entire system in real time. This happens via data acquisition: collected data such as meter readings and the statuses of sensors. These data points are communicated at standard intervals depending on the system. Besides being used by the RTU, the data is also displayed to a human, who can interface with the system to override settings or make changes when needed.
Modern systems have many data elements called points. Each point is a monitor or sensor, and these points can be either hard or soft. A hard data point is an actual monitor or sensor; a soft point is an application-derived or calculated value. Data elements from hard and soft points are usually stored and logged to create a timestamped history.
In essence, a SCADA application has two elements: the process or machinery being monitored, and a network of intelligent devices that interfaces with it through sensors and control outputs. Throughout this article, I'll also cover other concepts related to the application of this system.
There are three main elements to any system: the RTUs, the communications, and the HMI. Each RTU collects real-time data at a site. Communications bring that information from the various plant sites (or regional RTU sites) to a central location; they can also return instructions to the RTU. Communication within a plant is conducted over data cable, wire, or fiber-optic cable, while regional systems most commonly utilize radio. The HMI is a PC system running powerful graphic and alarm software programs. The HMI software displays this information in an easy-to-understand graphical form, archives the data received, transmits alarms, and permits operator control as required.
Now that the initial question of "What is SCADA?" has been answered, the next step is to look at the way this system operates as a network. A SCADA network consists of one or more Master Terminal Units (MTUs). These are utilized by staff to monitor and control a large number of Remote Terminal Units (RTUs). The MTU is often a computing platform, like a PC, which runs specialized software. The RTUs are typically small devices that are hardened for outdoor use and industrial environments.
As we saw earlier, there are several parts to a working system. A system usually includes signal hardware (input and output), controllers, networks, a user interface (HMI), communications gear, and software. Altogether, the term SCADA refers to the entire central system, which usually monitors data from various sensors that are either in close proximity or off-site (sometimes miles away).
A SCADA system performs four basic functions: data acquisition, data communication, data presentation, and control. These functions are carried out by several kinds of SCADA components, including sensors and control relays, RTUs, master units, and the communications network. There are five phases in creating a functional system, and a complex SCADA system can take real effort to configure. Once configured, however, it is usually much easier to operate.
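To illustrate the data-acquisition side described above, here is a minimal, hypothetical Python sketch of an RTU-style polling loop. The point names, values, and alarm threshold are invented for illustration; a real RTU would read field devices over a protocol such as Modbus or DNP3 rather than generate random numbers.

```python
import random
import time
from datetime import datetime, timezone

def read_hard_points():
    # "Hard" points: stand-in for real meter/sensor reads from field hardware.
    return {"pump_pressure_psi": random.uniform(40, 60),
            "flow_rate_gpm": random.uniform(100, 140)}

def derive_soft_points(hard):
    # "Soft" point: a calculated value derived from the hard points.
    return {"pressure_high_alarm": hard["pump_pressure_psi"] > 55}

history = []  # timestamped log: the basis for trends, history, and audits

for _ in range(3):                          # poll at a fixed scan interval
    hard = read_hard_points()
    soft = derive_soft_points(hard)
    sample = {"ts": datetime.now(timezone.utc).isoformat(), **hard, **soft}
    history.append(sample)                  # archive every scan
    if soft["pressure_high_alarm"]:
        print("ALARM for the HMI:", sample) # what an operator would see
    time.sleep(1)                           # stand-in for the real scan rate
```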
Modern SCADA systems are an extremely advantageous way to achieve industrial process monitoring and control. They are great for small applications, such as climate control, and they can also be used effectively in large applications, such as monitoring and controlling a nuclear power plant, an oil and gas plant, or a transit system. SCADA systems can use open-standard communications protocols. Smaller systems are very affordable and can be purchased as complete systems or mixed and matched with specific components. Large systems can also be created with off-the-shelf components. SCADA software can be easily configured for almost any application, removing the need for custom software development. As demonstrated in this knowledge base, building the right system to monitor your network isn't simple. It's easy to spend too much money on unnecessary features and capacity, but we can help you improve efficiency in ways you probably haven't thought of. It's hard to learn everything you need to know and still perform your daily job. We can help you plan your SCADA implementation, with expert consultation, training, and information resources. DPS telemetry gear is built with the capabilities and capacity you need. We're committed to helping you get the best monitoring system for your specific needs. You need to see DPS gear in action. Get a live demo with our engineers. Download our free SCADA tutorial, an introduction to SCADA from your own perspective. Have a specific question? Ask our team of expert engineers and get a specific answer! Sign up for the next DPS Factory Training! Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring. Reserve your seat today.
<urn:uuid:3329f42e-5776-47a4-997e-016efffadf9d>
CC-MAIN-2022-40
https://dpstele.com/scada/knowledge-base.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00038.warc.gz
en
0.947484
1,091
3.328125
3
HPE Focusing on the Edge of Internet of Things
In a recent story for Mashable, Hewlett Packard Enterprise discussed its vision for the Internet of Things (IoT). This comes on the heels of Discover London 2015, HPE's largest customer event, with over 10,000 IT professionals in attendance. At the show, HPE announced IoT systems and connectivity solutions that will enable customers to more efficiently collect, process, and analyze IoT data. We've covered IoT in the past, and the news that HPE is beginning to seriously roll out systems that will allow IoT to become a reality is welcome. IoT is the idea of outfitting the physical world with connections to the internet in order to allow the collection of data and the streamlining of processes. This will be accomplished in countless ways depending on the physical object and the industry it operates in. The big problems, however, come first with providing the bandwidth to essentially connect the entire world to the internet, and second with analyzing the towering amount of data that will be collected by IoT. It sounds like HPE is tackling the latter issue. HPE explains that there are three key elements to IoT:
- Devices – Rather than just smartphones and servers, in IoT, "devices" can include everything from household items like thermostats and trash cans to industrial systems such as wind turbines and medical equipment.
- Data – The real value in IoT is harnessing the data from these connected devices and using the insights for better decision making.
- Connectivity – IoT requires a reliable network connection that allows data to travel seamlessly from point A (where it is collected) to point B (where it is processed and analyzed).
The article goes on to explain the difference between computing at the core and computing at the edge. Computing at the core means collecting data and sending it to the core of the system, usually a data center, for analysis. Central to HPE's IoT strategy, computing at the edge is a recent idea that involves moving data acquisition and management to the edge of the network, outside of the data center. This allows faster access to data using less bandwidth. Data is sorted immediately so that only relevant data is sent to the core for analysis. Ultimately, moving computing power to the edge will result in smoother, more efficient, and more effective analysis of the vast amount of data that IoT will produce. This is the crux of HPE's plans for the future of its role in IoT.
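As a rough illustration of the edge-versus-core idea, here is a small, hypothetical Python sketch of an edge gateway that filters sensor readings locally and forwards only the relevant ones to the core. The sensor names, values, and threshold are invented; HPE's actual products work very differently, and this only shows the general pattern of sorting data before it leaves the edge.

```python
# Raw readings arriving at an edge gateway (values are made up for illustration).
readings = [{"sensor": "turbine-7", "vibration_mm_s": v} for v in (2.1, 2.3, 9.8, 2.2, 11.4)]

VIBRATION_LIMIT = 7.5  # assumed alert threshold

def filter_at_edge(batch):
    """Keep only the readings worth sending to the core for deeper analysis."""
    return [r for r in batch if r["vibration_mm_s"] > VIBRATION_LIMIT]

to_core = filter_at_edge(readings)
print(f"collected {len(readings)} readings, forwarding {len(to_core)} to the data center")
```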
<urn:uuid:ce9111d0-7086-4c5f-9082-9632129ef3d5>
CC-MAIN-2022-40
https://www.iotplaybook.com/article/hpe-focusing-edge-internet-things-0
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00038.warc.gz
en
0.921245
527
2.578125
3
Of all the emerging technologies, quantum computing is the one that excites, frustrates, and makes techies nervous in equal measure. In 2019, Google's quantum computer completed a calculation in under four minutes. It's a task that would have taken the world's most powerful computer 10,000 years to achieve, making it around 158 million times faster than the fastest supercomputer. The potential of these advances in technology encouraged brands from Samsung to BMW to explore the art of the possible by embracing quantum computing. As a result, some scientists are thinking even bigger about how quantum could tackle climate change and transform the planet where we all reside. However, many also warned about the hype surrounding an emerging technology built on future promises. Although the full potential of quantum computers might still be decades away, we need to identify the dangers and the opportunities waiting on the horizon. For example, while much of the world is discussing how the Ukrainian war might kickstart a new nuclear weapon threat, academics are becoming more concerned with the possibility of a quantum arms race that will transform warfare rather than our world. Russia, India, Japan, the European Union, and Australia are all carrying out significant quantum research and running development programs. But the US and China have the most skin in this quantum arms race, with the prize being protecting or decrypting their opponent's emails, financial records, and state secrets. Ultimately, whoever has the most processing power will secure an advantage over their rivals, whether it be unhackable communications or sophisticated cyberweapons. One of the biggest fears is that quantum computing could break the classical encryption that underpins financial stability and the global economy. Cryptography is something that we all take for granted in everything from messaging apps to online banking. But if quantum lives up to its future promises, we could have the genuine problem of it breaking cryptography within the next decade. For example, if intelligence services were equipped with sufficiently powerful quantum computers, in theory, they would be able to break 2048-bit RSA encryption in under 8 hours. In comparison, the same task would take the world's fastest supercomputers around 300 trillion years to overcome with traditional brute-force methods. Many are warning of a modern Y2Q moment waiting on the horizon that businesses need to prepare for now. With everything from toasters to smart cities connected to the internet, the logistics of transitioning the whole world to new post-quantum algorithms that are resistant to quantum computing cyberattacks is no small feat. As end-to-end systems become more deeply interconnected, businesses must secure and safeguard their entire critical cyberinfrastructure. Unfortunately, traditional threat detection software will be incapable of going toe to toe with, or even identifying, malicious attacks aided by quantum computers. But rather than compare today's defenses with future attack methods, it's time to prepare by building new solutions.
Planning for a quantum future is already underway
For a quantum future to live up to its hype, we will need to see significant leaps forward in quantum physics, systems engineering, and computational science. In addition to technology, the road ahead will be built with human skills and knowledge united in writing the future roadmaps for quantum computing.
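To see why simply doubling key sizes is not considered a fix, here is a rough, purely illustrative Python comparison of how the cost of the best known classical factoring attack (the General Number Field Sieve) grows with RSA key size, versus the roughly polynomial scaling usually quoted for Shor's algorithm. The formulas ignore constant factors, qubit counts, and error correction entirely; the only point is the shape of the two curves.

```python
import math

def gnfs_cost(bits):
    """Heuristic sub-exponential cost of the classical GNFS attack on a `bits`-bit modulus."""
    n = bits * math.log(2)                      # ln(N)
    return math.exp((64 / 9) ** (1 / 3) * n ** (1 / 3) * math.log(n) ** (2 / 3))

def shor_cost(bits):
    """Rough polynomial scaling of Shor's algorithm, about (log N)^3 quantum operations."""
    return bits ** 3

for bits in (1024, 2048, 4096):
    print(f"RSA-{bits}: classical ~1e{math.log10(gnfs_cost(bits)):.0f} ops, "
          f"quantum ~1e{math.log10(shor_cost(bits)):.1f} ops")
```

Doubling the key size adds tens of orders of magnitude to the classical estimate but barely moves the quantum one, which is why the migration to post-quantum algorithms, rather than bigger RSA keys, is the path being planned.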
Banking and the entire financial landscape are already preparing for a future where quantum computers will inevitably break many of today's encryptions. Household names such as J.P. Morgan, Wells Fargo, Barclays, and Goldman Sachs are already preparing to bolster their defenses to protect against future quantum threats while also unlocking opportunities at the same time. Elsewhere, HSBC is on a three-year collaboration with IBM as it prepares to make a quantum leap of its own. Sure, businesses of all sizes will need to prepare for the overhaul of their entire information security to ensure systems are quantum-safe. But much like Samuel Beckett's Waiting for Godot, we all find ourselves waiting for quantum computing, not knowing if it will actually arrive. Behind the scary headlines is the reality that preparing for the migration to new cryptographic standards is inevitable and already underway. The saddest part of this modern digital tale is how little the human race has progressed as we get dragged back to our primitive roots. We have an opportunity for industries such as manufacturing to revolutionize their processes and leverage new opportunities. We should also be focussing on using tech to achieve goals such as providing clean water or building sustainable cities and communities. But we have sadly retreated to an old-fashioned game of defense and attack. Assuming quantum computing delivers on its future promises, rather than hitting the panic button, we will need to brace ourselves for a flurry of software updates and upgrades while frustratingly muttering, same as it ever was.
<urn:uuid:aef3cb38-d017-4978-b90a-0b405d7dc81e>
CC-MAIN-2022-40
https://cybernews.com/security/dangers-of-quantum-computing-from-new-style-warfare-to-breaking-encryption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00038.warc.gz
en
0.942535
946
2.90625
3
Factory robots with AI technology can help customers save costs and improve efficiency in a number of ways. In the future, machine learning and AI will enable robots to self-learn and self-adjust, enabling improved performance. Through machine learning and AI, we see opportunities to further develop human-robot collaboration and to make robots more autonomous, within set parameters. The robot should be self-learning or self-optimizing in the long term. It's not about copying human abilities. We want to enable robots to work in unstructured environments. For this, it is essential that the robots recognize specific patterns, for example, labels on bottles , and allow them to correct errors independently. In the future, robots can learn new tasks from other robots as well. Swaminathan Ramamurthy, GM of OMRON Automation Centre & Robotics at OMRON Asia Pacific, listed how AI makes industrial robots more efficient and smarter. - Ability to sense and respond: The AI-enabled robots are well equipped with "Part Agnostic" know-how. They can pick objects for which there is no predetermined trajectory and a specified spatial location in the working space. Machine learning makes them do so. This also leads to a better and informed incoming parts inspection during an assembly process. - Ability to move around autonomously: The modern robots equipped with AI are more autonomous and precise in their navigation around complex and unpredictable environments. They can sense the obstacles in their path and re-plan the route accordingly. - Ability to adapt to the changes around: Embedded AI in the servo controller helps robots adapt to changes in environments and payloads seamlessly. - Optimization of processes: With AI, manufacturers can now take full control of repair costs and breakdowns by gaining access to real-time data from sensors delivering much better reliability. Subrata Karmakar, President of Robotics and Discrete Automation at ABB India, said that his company already offers several applications that combine robotics and machine learning. These include using AI to enable robots to sense and respond to their environment, inspect and analyze defects, and optimize processes autonomously. For instance, robots equipped with vision sensors can use AI to identify objects regardless of their position. In autonomous process optimization, solutions like ABB's robotic paint atomizer enable real-time smart diagnostics and paint quality optimization. By monitoring the condition of critical variables such as acceleration, pressure, vibration, and temperature, the atomizer reduces internal waste during color changes by 75 percent. It reduces compressed air consumption by 20 percent. "ABB and Silicon Valley AI start-up Covariant have a partnership to bring AI-enabled robotics solutions to market, starting with a fully autonomous warehouse order fulfillment solution," Karmakar continued. "Today, warehouse operations are labor-intensive, and the industry struggles to find and retain employees for picking and packing. While robots are ideally suited to repetitive tasks, they lacked the intelligence to identify and handle tens of thousands of constantly changing products in a typical dynamic warehouse operation. ABB's partnership with Covariant is part of our strategy to expand into new growth sectors such as distribution and e-commerce fulfillment and leverage the scaling potential in these fields. 
We identified a significant opportunity for robotic solutions across a broad range of applications, including logistics, warehousing, and parcels." The ability to understand the environment with relatively inexpensive sensor technology is one of the evolving capabilities in this industry. And this is where AI is of paramount importance. Stefan Nusser, Chief Product Officer at Fetch Robotics, explained that this enables robots, like their own, to recognize things on a factory or warehouse floor and act accordingly. "That is the area where AI and computer vision progress is helping us," Nusser said. "Similarly, for example, for a robotic arm, computer vision is what allows it to pick. Some companies in this space are looking at automating the picking process. The general trend here is just understanding the location and position of objects in the environment, which allows the robot to interact with it."
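The picking workflow Nusser describes can be sketched at a very high level in Python. Everything here is hypothetical: detect_objects stands in for a trained vision model, and the labels, confidence threshold, and pose values are invented purely to show the sense-decide-act loop, not any vendor's actual software.

```python
import random

def detect_objects(camera_frame):
    """Stand-in for an AI vision model: returns object class, confidence, and a grasp pose.
    A real system would run a trained network on the camera image here."""
    return [{"label": "bottle", "confidence": random.uniform(0.5, 0.99),
             "pose_xyz": (0.42, -0.10, 0.05)}]

def plan_pick(detections, min_confidence=0.8):
    # Only act on detections the model is confident about; otherwise ask for a re-scan.
    for d in detections:
        if d["confidence"] >= min_confidence:
            return {"action": "pick", "target": d["label"], "at": d["pose_xyz"]}
    return {"action": "rescan"}

frame = object()                      # placeholder for a camera image
print(plan_pick(detect_objects(frame)))
```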
<urn:uuid:44e38107-032d-492f-89b9-f9570b4d4e40>
CC-MAIN-2022-40
https://www.asmag.com/showpost/32034.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00038.warc.gz
en
0.93839
841
3.21875
3
Email Phishing Protection Is Essential To Protect Your Organizational Network From Phishing Attacks
Why deploying email phishing protection should be a top priority: Phishing is among the oldest forms of cyberattack and dates back to the 1990s. However, it is still widespread, and adversaries are developing increasingly sophisticated forms of this cyber attack. It is worth noting that around a third of all breaches involve phishing. Phishing attacks use sophisticated social engineering techniques to dupe victims into parting with information and finances, and email phishing protection is vital for all organizations' well-being.
What Is A Phishing Email?
Verizon's 2019 Data Breach Investigations Report (DBIR) showed that 94% of malware was delivered via email. In a phishing email attack, an adversary contacts their victim through an email while pretending to be a legitimate organization. Attackers use social engineering techniques to manipulate employees or individuals into taking actions that compromise sensitive information. Users are led to take an action, such as clicking on a malicious link or downloading an infected attachment. Such actions cause a loss of financial data, passwords, and Personally Identifiable Information (PII).
What Do Phishing Emails Do?
Phishing emails can compromise unsuspecting victims in a large number of ways. Adversaries are coming up with better ways to target victims thanks to well-made templates and tools that they can purchase right off the shelf. Here are a few things a cybercriminal could do with confidential data obtained using phishing.
- Steal usernames and passwords
- Gain access to internet banking and cause financial loss
- Apply for new credit cards and request new PINs
- Make purchases online using stolen credentials
- Abuse a user's Social Security Number, thereby causing harm to their reputation
- Resell information to other parties on the dark web
Some Famous Phishing Email Examples
Several phishing email scandals have been impactful enough to make world news. Some of the more well-known phishing email examples are:
- Hillary Clinton Email Scandal: In this phishing email attack, presidential candidate Hillary Clinton's campaign chair John Podesta gave away his Gmail password.
- Celebrity Private Images Leaked: The intimate photos of several prominent celebrities were leaked due to a series of successful phishing attacks dubbed "Celebgate."
- Ukrainian Power Grid Attacks: Malware delivered by spear-phishing attacks caused three energy providers in Western Ukraine to lose power, affecting several hundred thousand citizens. The attacks on the power plants were suspected to have been carried out by state actors.
How Do You Identify a Phishing Attack Online?
Identifying phishing attacks requires training and alertness on the part of vulnerable employees. Although organizations may employ some of the best phishing protection software, these tools stop only about 99% of phishing emails, and the few emails that do land in an inbox can still cause tremendous harm. Here are some of the common traits in phishing emails that users should be on the lookout for.
- Requests Confidential Information: A phishing attack may request confidential info through an email.
- Uses Urgent Language: Phishing emails often use urgency and scare tactics to convince the victim to part with information.
- Grammatical Errors: Phishing emails often have spelling mistakes in an effort to bypass word filters.
- Incorrect Addresses: Such emails often have spelling mistakes in the domain names of the sender, or the email addresses use a different TLD (top-level domain), such as .info instead of .com.
- Lack of Customization: Most phishing emails lack personalized greetings or customized information.
Types of Phishing Attacks You Can Include in Your Phishing Prevention Plan
The different types of email phishing attacks are given below:
- Common Email Phishing: The most well-known kind of phishing attack, such attacks steal information via emails that seem to be genuine. Such attacks are not targeted.
- Spear Phishing: Spear-phishing attacks are highly targeted and well-researched. The attacks are usually focused on executives in organizations, public figures, or other high-value targets.
- Clone Phishing: In such attacks, the adversaries clone a legitimate email and insert malware into it, replacing existing links with malicious ones. The adversaries take control of a person's email and infect the contacts of the victim.
- Business Email Compromise: Such emails appear to come from someone associated with an organization and request employees to take urgent action, such as purchasing gift cards or wiring money.
- Whaling: Whaling is a type of spear phishing where adversaries impersonate extremely high-value targets to mislead employees. It is also popularly known as CEO fraud.
How to Stop Phishing Emails: Email Phishing Protection Techniques
The following techniques can protect organizations from phishing emails.
- Avoid Clicking Links: Users must not click on links in promotional emails; instead, they should go to the website and avail themselves of promotions directly.
- Don't Provide Confidential Information: Refrain from providing personal and confidential information in response to any unsolicited request.
- Use Spam Filters: A high-quality email spam filter should be used to deal with phishing in both inbound and outbound emails.
- Use Sites With HTTPS: Users should never divulge data on sites without HTTPS in the web address. Checking for the "lock icon" in the web browser is recommended.
- Call and Verify: It is always advisable to call the organization that is seemingly seeking details and seek confirmation. Call official numbers, not those provided in the email.
- Check Grammar and Punctuation: Emails with poor grammar are usually fraudulent.
- Have a Strong Password Policy and MFA: A strong organization-wide password policy, along with multi-factor authentication, can prevent several scams.
How To Mitigate Phishing Attacks For Your Organization?
Organizations should set up incident response plans to counter phishing attacks and mitigate the fallout caused by them. An efficient incident response plan necessitates that analysts segregate useful information from noise and gain intelligence that they can act upon from user-reported emails. Phishing analysis tools, which IT security teams can adopt, can automate this process and quarantine suspicious emails without disrupting the email environment.
Final Words – Awareness is the Best Phishing Protection
When it comes to phishing, awareness is the best protection. Phishing is directed at employees, and although they are often the weakest link in cybersecurity, properly trained employees can be an organization's greatest asset. Having the right software solutions to complement such training is necessary for superior email phishing protection.
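As a toy illustration of the traits listed above, here is a small Python sketch that scores an email against a few of them (suspicious TLD, urgent language, raw-IP links, generic greeting). The word lists, TLDs, and weights are invented for illustration only; real email filters rely on far richer signals such as SPF/DKIM/DMARC results, URL reputation, and machine learning models.

```python
import re

URGENT_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}
SUSPICIOUS_TLDS = (".info", ".top", ".xyz")          # example list, not exhaustive

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Toy heuristic score: higher means more suspicious."""
    score = 0
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain.endswith(SUSPICIOUS_TLDS):              # unexpected top-level domain
        score += 2
    text = f"{subject} {body}".lower()
    score += sum(1 for w in URGENT_WORDS if w in text)  # urgency and scare tactics
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):  # links pointing at raw IPs
        score += 3
    if "dear customer" in text:                        # no personalization
        score += 1
    return score

print(phishing_score("alerts@secure-bank.info",
                     "URGENT: verify your account immediately",
                     "Dear customer, click http://203.0.113.7/login to act now"))
```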
Join the thousands of organizations that use DuoCircle Find out how affordable it is for your organization today and be pleasantly surprised.
<urn:uuid:5d3bb6eb-edc0-4244-9816-88596984a034>
CC-MAIN-2022-40
https://www.duocircle.com/content/email-phishing-protection
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00238.warc.gz
en
0.916245
1,395
2.890625
3
In addition to the more traditional methods of searching for virus signatures, like virus ‘mask’ matching, there also exists a range of detection technologies capable of recognizing the latest, unknown, malicious programs. The quality of these new technologies helps to raise the overall security level provided by each individual product. Such proactive protection methods include heuristic technologies for detecting malicious code and also behavior blockers. Now and again, manufacturers of antivirus programs try to invent some innovative piece of technology that would solve all of the problems discussed so far in one hit. They are seeking to develop a kind of panacea that could protect every computer from every type of malevolent attack, once and for all. They try to ‘proactively’ protect the user by seeking to be able to detect and delete viruses and other emergent malware, even before it is created and launched on an unsuspecting world. Unfortunately, this well-intentioned quest remains unfulfilled. Universal solutions can only be applied to generic problems, and computer viruses just don’t play by the rules. They are not the product of some well-documented process, but originate in the often sophisticated mind of the hacker. Viruses follow constantly changing paths which are largely dependent upon the aims and desires of those that inhabit the darker side of the digital world. Let’s look at how a behavior blocker differs in detection methodology from a more traditional signature-based antivirus solution. They use two very different approaches to virus detection with the intention of arriving at the same end. Signature detection compares a program’s code with the code of known viruses, looking for a positive match. A behavior blocker monitors the launch and operation of programs to ensure that they conform to expected rules and blocks them if they appear suspicious or obviously malicious. Both methods have their own advantages and disadvantages. On the plus side, a signature scanner is guaranteed to trap any ‘beast’ that it recognizes. The minus being that it may well miss the ones that are not familiar to it. Staying on the minus side, there are innumerable antivirus databases and this can push up the use of system resources considerably. Behavior blockers are advantageous as they are able to detect malicious programs, even those that they are unfamiliar with. However, it can easily miss well-known variants of malware, as the behavior of modern viruses and Trojans is so unpredictable that no one set of rules can ever encompass everything. Another downside of behavior blockers is that every once in a while they can throw up false positives, as even legitimate programs can behave in unexpected ways. Thus occasionally a behavior blocker will miss a malicious program and block the operation of a legitimate one. A behavior blocker has one more inherent drawback and that is its inability to get to grips with some of the newer viruses. Let’s take as an example Company X, which has developed a behavior-blocking program called AVX capable of trapping 100% of all current viruses. How would hackers react to this? Surely they would invent an altogether different way of infecting the system, invisible to AVX. The AVX antivirus will then need to update its behavior recognition rules. So Company X issues updates. Then more updates, and more again, as the hackers and virus-writers constantly find new ways around the updates. 
Finally we end up with a signature scanner again, where the signatures take the form of 'behavior' instead of 'fragments of code'. The above scenario also encompasses the heuristic analyzer, another proactive protection method aimed at monitoring a program's launch and operational behavior and stopping it if it appears malevolent. As soon as such anti-virus technologies start to seriously thwart the hackers, preventing them from attacking their victims, a new set of virus technologies emerges that is geared towards avoiding heuristic protection methodology. As soon as a product that features advanced heuristics and behavior-blocking technology becomes popular, it stops being effective. Thus these newly-invented proactive technologies tend to have a very limited shelf-life. Whilst amateur hackers may take weeks or months to bypass new proactive technologies, the more experienced among them may find a way around them in hours or even minutes. As effective as it is, a behavior blocker or heuristic analyzer requires constant improvement and updating. It should be remembered that adding a new signature to an antivirus database takes just minutes, whereas finalization and testing of proactive protection technologies is much more time-consuming. In actual fact, the speed with which virus signatures can be added to databases and released in the form of updates is often considerably faster than the speed with which updated solutions can be issued for similar proactive technologies. This has proven to be the case in many email and network worm epidemics, as well as in relation to spyware and other criminal software. All this does not mean, of course, that proactive protection methods are useless. They do their job and are capable of blocking a great deal of unsophisticated malware developed by relatively unskilled hackers. Therefore they can be considered a worthwhile addition to traditional signature scanners, but should not be wholly relied upon in isolation.
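The trade-off between the two approaches can be shown with a deliberately simplified Python sketch. The byte signature, rule names, and observed actions below are all made up; real engines match millions of signatures and far subtler behavioral rules, but the asymmetry is the same: the signature scanner misses anything it has never seen, while the behavior rules can catch an unknown sample the moment it acts suspiciously.

```python
# Signature approach: match known byte patterns (fast, but only catches what it knows).
KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef": "Trojan.Example.A"}   # made-up signature

def signature_scan(file_bytes: bytes):
    return next((name for sig, name in KNOWN_SIGNATURES.items() if sig in file_bytes), None)

# Behavior approach: flag suspicious *actions*, even from unknown programs.
SUSPICIOUS_ACTIONS = {"modify_boot_record", "inject_into_process", "mass_encrypt_files"}

def behavior_check(observed_actions):
    hits = SUSPICIOUS_ACTIONS & set(observed_actions)
    return f"blocked (rules hit: {sorted(hits)})" if hits else "allowed"

print(signature_scan(b"harmless-looking bytes"))              # None -> unknown malware slips by
print(behavior_check(["open_file", "mass_encrypt_files"]))    # caught without any signature
```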
<urn:uuid:f2dd596e-1bca-4b40-bb93-926f04afea32>
CC-MAIN-2022-40
https://encyclopedia.kaspersky.com/knowledge/new-technologies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00238.warc.gz
en
0.949754
1,036
2.84375
3
Cyber Tip of the Day - Public Computers - Cybersecurity Awareness Training Public computers that offer you Internet access are high-risk spaces where your personal and sensitive information can and will get stolen. Here are some tips to protect yourself: - Disable “Autocomplete” and “Save Password” features in the browser before you start browsing. - Consider using private or incognito mode in browsers that makes sure nothing is stored on the public machine. - Do not use toolbars and don’t click on ads that appear on the sites you visit. - Be wary of anyone trying to snoop over your shoulder. Avoid entering sensitive information such as credit card numbers or financial details while using public computers. - Don’t leave your machine unattended while you’re using it. - Clear the temporary internet files and browsing history when you finish using the public computer. - Remember to logout of websites when you’re done working. Don’t just close the browser or type another site address in the URL. - If possible, avoid using public computers such as those in hotel lobbies or public libraries altogether. You never know who’s been using them and what’s on them. Be Smart. Be Aware. Be Secure. ERMProtect. Get a curated briefing of the week's biggest cyber news every Friday. Turn your employees into a human firewall with our innovative Security Awareness Training. Our e-learning modules take the boring out of security training. Intelligence and Insights
<urn:uuid:df3b5fd0-66f0-4294-af22-1a009042c7a9>
CC-MAIN-2022-40
https://ermprotect.com/blog/cyber-tip-of-the-day-public-computers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00238.warc.gz
en
0.846303
325
2.921875
3
Whether you are buying something from an online store, reading your email in the browser, checking your account balances, or uploading photos and videos to social media, most websites require an individual username and password when accessing their services. This raises various problems.
What's with ALL the Passwords?
Using the same password for all the websites you access is a bad idea and horribly insecure. If we ran a quick check on the "Dark Web" for your email address, it would likely show that hackers already know the one password you have been using forever. So the only other option is multiple passwords, which can easily go beyond the limits of our feeble human brains to keep track of. Alternatively, people start creating a list, typically typed up and saved on the computer; if a hacker gets into the computer, then all the passwords are theirs too. So the next step is to find a secure way of storing and backing up these passwords, not to mention trying to make them easy to use.
Wrangle Them Passwords!
That is the job of password management, done by a small piece of software known as a password manager. It takes the complexity down to remembering the one password needed to open the software, and it tracks the rest from there. The good ones have the ability to generate passwords for you, store them in connection with the website you are visiting, auto-fill the password fields on the websites when you visit them again, and back up your passwords to the cloud, all with strong security and encryption to keep the hackers out of your business. If your company is still typing passwords into a list, or worse, has a paper list, then contact us for assistance migrating to a password manager.
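To show the basic mechanics, here is a hypothetical Python sketch of the core of a password manager: derive a key from one master password, generate a random secret per site, and store the whole vault encrypted. It assumes the third-party cryptography package is installed; the master password, site names, and iteration count are placeholders, and a real product adds secure storage, syncing, and browser integration on top of this.

```python
import base64
import json
import secrets
import string

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_master(master_password: str, salt: bytes) -> Fernet:
    # Stretch the one password the user remembers into an encryption key.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return Fernet(base64.urlsafe_b64encode(kdf.derive(master_password.encode())))

def generate_password(length: int = 20) -> str:
    # Every site gets its own long random secret the user never has to memorize.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

salt = secrets.token_bytes(16)
vault_key = key_from_master("correct horse battery staple", salt)   # placeholder master password
vault = {"example.com": generate_password(), "bank.example": generate_password()}
encrypted_vault = vault_key.encrypt(json.dumps(vault).encode())      # what gets stored and backed up

print(json.loads(vault_key.decrypt(encrypted_vault))["example.com"])
```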
<urn:uuid:0d637ca6-3a46-4d41-8ea4-5d14cf337270>
CC-MAIN-2022-40
https://www.farmhousenetworking.com/cloud-services/password-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00238.warc.gz
en
0.940197
344
2.703125
3
When it comes to ushering in the next-generation of computer chips, Moore’s Law is not dead, it is just evolving, so say some of the more optimistic scientists and engineers cited in a recent New York Times article from science writer John Markoff. Despite numerous proclamations foretelling Moore’s Law’s imminent demise, there are those who remain confident that a new class of nanomaterials will save the day. Materials designers are investigating using metals, ceramics, polymeric and composites that organize via “bottom up” rather than “top down” processes as the substrate for future circuits. Moore’s Law refers to the observation put forth by Intel cofounder Gordon E. Moore in 1965 that stated that the number of transistors on a silicon chip would double approximately every 24 months. The prediction has lasted through five decades of faster and cheaper CPUs, but it’s run out of steam as silicon-based circuits near the limits of miniaturization. While future process shrinks are possible and 3D stacking will buy some additional time, pundits say these tweaks are not economically feasible past a certain point. In fact, the high cost of building next-generation semiconductor factories has been called “Moore’s Second Law.” With the advantages of Moore’s Law-type progress hanging in the balance, semiconductor designers have been forced to innovate. A lot of the buzz lately is around “self assembling” circuits. Industry researchers are experimenting with new techniques that combine nanowires with conventional manufacturing processes, setting the stage for a new class of computer chips, that continues the price/performance progression established by Moore’s law. Manufacturers are hopeful that such bottoms-up self-assembly techniques will eliminate the need to invest in costly new lithographic machines. “The key is self assembly,” said Chandrasekhar Narayan, director of science and technology at IBM’s Almaden Research Center in San Jose, Calif. “You use the forces of nature to do your work for you. Brute force doesn’t work any more; you have to work with nature and let things happen by themselves.” Moving from silicon-based manufacturing to an era of computational materials will require a concerted effort and a lot of computing power to test candidate materials. Markoff notes that materials researchers in Silicon Valley are using powerful new supercomputers to advance the science. “While semiconductor chips are no longer made here,” says Markoff referring to Silicon Valley, “the new classes of materials being developed in this area are likely to reshape the computing world over the next decade.”
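The "doubling roughly every 24 months" observation is easy to express as a quick calculation. The sketch below is a naive Python projection from an invented one-billion-transistor starting point; it simply compounds doublings and ignores all the physical and economic limits the article discusses.

```python
def projected_transistors(start_count, start_year, target_year, doubling_period_years=2):
    """Naive Moore's Law projection: double the count every `doubling_period_years`."""
    doublings = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

# Example: project forward from a hypothetical 1-billion-transistor chip in 2010.
for year in (2010, 2014, 2020):
    print(year, f"{projected_transistors(1e9, 2010, year):.2e} transistors")
```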
<urn:uuid:04330ef7-45ac-4a72-81f4-c2e681f7e913>
CC-MAIN-2022-40
https://www.hpcwire.com/2014/01/10/moores-law-post-silicon-era/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00238.warc.gz
en
0.935885
561
3.625
4
For far too many people, data centres are like utilities – few people think about them unless they go wrong. However, change is afoot and data centres are being thrust into the spotlight. The already increasing demand for power and space has soared over the last two years, fuelled by a surge in digital data to facilitate connections for businesses in a home-working environment, for video streaming, increased social media activity, and downloaded content for home entertainment. This, coupled with the fast-growing trend for businesses to move their applications into the cloud, means that estimates that the data centre market could grow by 15 per cent a year between now and 2024 are very likely to prove accurate. However, data centres and society's appetite for all things digital come at a cost. According to the International Energy Agency, data centres account for 1 per cent of total electricity usage worldwide and approximately 2 per cent of greenhouse gas emissions. As for the future, recent predictions state that the energy consumption of data centres is set to account for 3.2 per cent of total worldwide carbon emissions by 2025, and they could consume no less than a fifth of global electricity. And, by 2040, storing digital data is set to create 14 per cent of the world's emissions. This being said, we shouldn't be overly alarmed, because data centres are at the frontier in the fight against climate change and are already making great strides to mitigate their impact on the environment. Despite these statistics, data centres are already far more energy efficient than previous models of computing. Large, efficient data centres are indeed the most effective way of providing for modern computing's massive demands; one of the most efficient ways to deliver a unit of computing (energy per compute unit) is to put it in a large, modern, advanced data centre on a cloud platform. Whilst a laser focus on sustainability is absolutely key to the future of the data centre sector, it is not new. For a long time, the industry has recognised the need to develop more efficient facilities with lower and lower Power Use Effectiveness (PUE) and Water Use Effectiveness (WUE) designs to ensure the right services are delivered to customers at the right cost. These key focuses have driven a more sustainable outcome for the industry overall. Data centre providers continue to innovate, but also to broaden their view of sustainability - from the carbon impact of the physical construction of the buildings, right through to how natural resources can be used, such as rainwater harvesting, aquifers to access natural water resources, and even living walls on the exterior of facilities. The industry is working relentlessly to ensure facilities are operated and maintained sustainably – working towards a smarter, cleaner way of ensuring advancements in technology continue to move society forwards whilst upholding the highest environmental and sustainability standards. Developments in power and cooling have also enabled greater data centre efficiency, and today's data centre providers are at the forefront of deploying some of the most sustainable buildings across the globe.
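PUE, mentioned above, is a simple ratio and easy to compute. The monthly figures in this Python sketch are invented purely for illustration; real facilities measure these loads continuously and typically report annualized values.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Use Effectiveness: a value of 1.0 would mean every watt goes to IT gear."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly figures for one site.
it_load = 1_000_000          # kWh consumed by servers, storage, and network
overhead = 380_000           # kWh for cooling, lighting, and power-distribution losses
print(f"PUE = {pue(it_load + overhead, it_load):.2f}")   # ~1.38 in this example
```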
As customers’ demands grow and increase their data centre space requirements, the industry is both duty-bound and regulated (the EU Commission set a “green deadline”, noting that the industry “should become climate neutral by 2030”) to lead innovations in how to make facilities as energy efficient as possible. Maximising efficiency has always been a mainstay of the leading data centre providers, not only for the commercial benefits it passes on to customers, but also for minimising environmental impact. The continued growth of the global data centre market is being driven by an explosion of cloud and internet services. However, this exponential growth in data traffic comes at the cost of significantly higher energy demands. Because of this, providers are increasingly committing to using 100% carbon-zero energy – powering sites solely with truly renewable energy from wind, solar and tidal sources. Another way providers address their environmental impact is to increase the efficiency of cooling systems, which currently account for approximately 40 per cent of a data centre’s energy consumption. This is where the industry is improving sustainability by deploying innovative technologies such as independent fresh-air cooling and low-energy indirect evaporative air solutions. Whilst cooling is a vital part of keeping data centres operational, an Uptime Institute report estimated that in the US alone nearly 12.5 billion kWh would be wasted through over-cooling and improper airflow management in data centres. This points to a wider trend of energy waste in the sector, including “zombie servers” and a significant amount of retired equipment being sent to landfill rather than recycled. To tackle this, providers should invest in comprehensive recycling schemes and use highly efficient UPS (uninterruptible power supply) systems which have the ability to hibernate parts of the system when they’re not being used. We in the technology industry know that we are in the early stages of a global digital and data transformation, with data centres and networks at its core. It will become increasingly common for sophisticated everyday tasks to be carried out by computers using AI, without human intervention. Market demand has been growing year-on-year and will continue to do so for the foreseeable future as more devices connect to the internet and more data than ever is produced. And there is no getting away from the fact that data centres are the lifeblood of modern civilisation – without them, society simply couldn’t function in this digital age. Whilst this is exciting, it is also a huge responsibility and challenge for providers to keep their data centres safe, secure, available, and, of course, sustainable.
<urn:uuid:88508737-1575-4552-9ba6-63427e98cf32>
CC-MAIN-2022-40
https://digitalisationworld.com/blogs/56903/data-centres-are-they-really-sustainable
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00238.warc.gz
en
0.947023
1,151
2.640625
3
Today more and more healthcare facilities, both hospitals and private practices, are moving from paper files to electronic medical records. The benefits of electronic medical records are numerous for both patients and healthcare providers. Benefits of Electronic Medical Records With electronic records, communication between doctors and patients is improved by leaps and bounds. Each party is able to access a full medical history whenever they want; patients can access the records from home, whenever they need to. Electronic health records contain significantly fewer errors than paper records, according to experts. Ease of Follow Up With electronic records, doctors have an easier time tracking their patients care, especially when they are referred to specialists or other healthcare practitioners. Everything is recorded in the same electronic record. They can then easily follow up with their patients and with the other doctors. Test results can be seen online, without a doctor having to make phone calls to another office. In cases of emergency, doctors can easily access a patient’s electronic health records. In casualty situations, like natural disasters, doctors can use EHRs to get a more accurate picture of a patient’s medical history quickly. In this type of situation, patients are not likely to bring their paper medical records along, so when a healthcare professional has access to electronic records, it can help speed the process of care along, perhaps saving lives. In our quickly transforming, highly digital world, EHRs can be accessed by Smart phones, tablets, laptops and more. Your records are available anywhere you are. Electronic medical records are much more secure than paper records. Electronic records are stored within secure databases; they will never be lost or misfiled. And, these records are always backed up for extra protection. Electronic records can improve a patient’s quality of care, too. They will help prevent oversights and errors such as incorrect prescriptions or related issues. With EHRs, patient records are updated in real time. There is no downtime; there is no need for anyone to type in a doctor’s notes. These records can be accessed by multiple users all at once, too, so multiple physicians can access them at once. This allows for an easy exchange of information between locations, even different countries. Electronic medical records can benefit both patients and healthcare providers. They are safe, secure and they even take up less space. Patients and doctors can access their records whenever and wherever they want.
<urn:uuid:eb326a99-1eb3-47d8-ba6c-82af2db20a28>
CC-MAIN-2022-40
https://blog.mesltd.ca/the-benefits-of-electronic-medical-records
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00438.warc.gz
en
0.948553
518
2.6875
3
No one wants to be on Santa’s naughty list, but temptation—especially when there’s something to gain—often gets the better of us. The proof? Barely a day passes without new headlines reporting another cyber attack, policy violation, or data breach. Secretly, we breathe a sigh of relief that it happened to someone else, but most of us know that we’ll all eventually feel the impact in some capacity. So who’s responsible for the security of an IBM i server and DB2 database? IBM builds the hardware and develops the operating system, application vendors and internal programmers write the business applications, and users maintain the corporate data. Who shoulders the responsibility for its integrity and its protection? After more than a decade of working as a security specialist, my answer is a resounding “EVERYONE!” System users, such as programmers and administrators, should be managed by a combination of their profile settings and good auditing controls. Application users should be managed by the applications. Ultimately, however, users will do whatever applications allow, not necessarily only what they’re designed to do. And it’s programmers who build those applications, so we have to talk about how to build a more secure application. The goal of this article is to share 13 important development considerations that will empower programmers to enhance the security of our data rather than undermine it. Personal experience has taught me that documentation is a dirty word in the programming community. It hampers our simple desire to code. But documentation is also the map that guides future travellers. Without it, every enhancement and bug fix begins with a time-consuming discovery task and increases the likelihood that the application could be broken by the change. Before a single line of code is written, planning documents should identify operational requirements such as these: - Application libraries - Profile ownership - Authorization lists - Authority schemes—public and private - Runtime attributes Documentation should also be embedded in the application programs. Comments should be clear, concise, and in plain English (or your language of choice). In other words, do not simply repeat what any programmer can read from looking at the code. 2. Segregate Libraries When building an application, many programmers lump all of the objects into a single library, as it seems simpler on the surface. However, programs and files typically have different security requirements, and having them cohabitate can make securing them more complicated. Segregating non-data objects and data objects into different libraries permits library-level security to control access. 3. “Softcode” Library Lists I was taught that hard-coding a library name into a program was a cardinal sin, but we’ve learned that search path manipulation is an easy way to wreak havoc and even circumvent security. An acceptable balance may be found by storing library names in data objects, such as data areas, so that they cannot be altered by users but can be changed without requiring modifications to the application code. 4. Own Your Objects Every object has an owner, and if you’ve ever had to delete a user’s profile, then you understand why the programmer shouldn’t retain ownership. Objects should never be owned by a group profile, especially one that consolidates the application users; each member would indirectly own the objects, giving them excess privileges. 
Instead, create a profile that has only one purpose in life: to own the application objects. Just remember, if new objects are created during the execution of the application, the program should also establish the correct ownership and authorities. And try to avoid assigning ownership to IBM-supplied profiles because if anyone—including IBM—makes a change to the profile, it might have an unpredictable impact on the application. 5. Design the Database Take time to classify the data in order to identify how stringent the access control should be. For a simple scheme, consider public, semi-public, and private. The database should be normalized to prevent information redundancy, and I recommend encrypting fields that are classified as private. Strong encryption functions are inherent within IBM i, and third-party providers can assist if code modification isn’t possible. Journaling is a popular technique that originated with application recovery and now has uses for replication and for auditing. Consider collecting before and after images of data changes in critical files and archiving that journal data according to corporate policy and compliance mandates. 6. Read the Menu Application menus were a solid approach to security back when terminal displays were the only way to access data. However, they fall flat on their face now that other interfaces exist that can facilitate direct access to the database. There’s nothing wrong with using menus to improve the user experience and to restrict user movement, but don’t rely on them to police data access. When a user steps outside the constraints of a menu, other controls such as command line restrictions, object-level security, and exit programs become even more critical. 7. Good Code While it seems obvious, write good code! My development career exposed me to some of the most horrendous applications I could imagine. Unwieldy programs with unmanageable top-down design, missing—or worse, inaccurate—comments, and redundant functions. Programs that are modular, concisely coded, and well-documented are easier to maintain and easier to review for unauthorized code modifications. ILE constructs make this easy, and you should be taking advantage of them. 8. Public and Private Authority IBM i contains a unique concept known as *PUBLIC. This is the default level of access granted to a user who hasn’t been assigned a more-specific access level. Many open configurations on servers running IBM i can be traced to the fact that IBM ships the default *PUBLIC authority level as *CHANGE, which is sufficient to read, change, and delete data in a file. Public authority is determined when the object is created from a parameter on the associated create command. By default, the command defers to a library attribute and that, in turn, defers to the QCRTAUT system value. I recommend setting the appropriate explicit value on the library as it allows different defaults to be used for each application library. The popular “deny by default” data access model calls for the public to be excluded, and the application users—or better yet, user groups—with a demonstrable need are granted private authority. 9. Authorization Lists Granting authority to hundreds or even thousands of individual objects can be an overwhelming administrative chore. Authorization lists allow many objects to be addressed as a single entity and are a useful mechanism. A significant benefit of using lists is that authorities can be maintained while the associated objects are in use. 
This can be very beneficial in a 24×7 shop where obtaining object locks can be a monumental task. Just remember that *PUBLIC authority will come from the individual object unless you specify otherwise. 10. Adopted Authority and Profile Swapping Two techniques are available to allow a program to run with different authority than the user who invoked it. Instead of granting authority to individuals or even groups, authority can be elevated temporarily while the program is running, alleviating the requirement to grant authority to the users themselves. In the case of adoption, a runtime attribute on the program object controls the inheritance. Both special and private authority is culled from the owner of the program, which is beneficial if following the recommendation of using a unique owning profile. Profile swapping allows a user to assume the identity of another and is accomplished via calls to several easy-to-use IBM-supplied APIs. Both approaches have pros and cons, and more detailed information can be found using your search engine of choice. Overall, these are two of the most useful functions in a security-conscious programmer’s toolbox. 11. Cover Your Exits IBM i contains a registry of exit points, which are almost like subroutines in the OS’s functions. When a function is invoked, the OS can pass control temporarily to the assigned exit program—if one exists—to perform any ancillary task before the OS resumes control and processes the request. There are dozens of exit points in several different categories, but about 30 of them pertain to network access and should be considered critical. While object-level security remains effective during these requests, the OS can only enforce one setting across all interfaces, so a user who can change data on a green-screen can also change data through an FTP or ODBC connection. While these exit programs can be written in-house, regulatory compliance typically frowns upon self-policing, so I strongly recommend evaluating a commercial solution to selectively process and audit user requests. Preventing data leaks is a requirement shared between exit programs and object-level security. 12. Plan for High Availability Surprising to some, system availability is often a requirement of regulatory compliance. Keeping systems running in the face of a technological disaster is a popular business initiative and one that continues to grow in popularity. When designing a new application, consider the expectation that the application may need to be replicated. A side-benefit of journaling for HA purposes is the audit and recovery trail that can be built into the application. 13. Query and Other Tools When designing a security infrastructure, consider how the application data will be accessed. Programs can use adoption to temporarily access secured files, but other tools may not have that luxury. Placing the invocation command for a native tool (e.g., WRKQRY) inside a CL “wrapper” can allow it to take advantage of the same technique. Consider authorizing some generic profiles to the authorization list to allow users to access data in a limited capacity. For example, grant *USE access to a profile named READONLY and then swap to that profile if a user needs to use ODBC or Query to view the data. If data is encrypted, native reports may have to be authored if plain-text viewing or reporting is required. In my capacity as an auditor, I discover vulnerabilities on hundreds of servers every year. 
In the vast majority of cases, these servers are poorly configured and are running applications that rely heavily on legacy security mechanisms or have no security considerations at all. Ironically, many of those same organizations blame IBM for allowing users to extract data, instead of taking responsibility and investing in the design of a secure application. Programmers are the linchpin when establishing secure applications. Regardless of whether they’re developing internal solutions or commercial products, acknowledge the risk posed by not having their cooperation, train them in security functions, and work to obtain their buy-in regarding the fact that securing an app is one of the most critical aspects of securing data and maintaining compliance. Reprinted with permission of MC Press, LLC, www.mcpressonline.com. All rights reserved.
<urn:uuid:af72ae4f-8c00-42b5-9850-ac3e0aca9986>
CC-MAIN-2022-40
https://www.helpsystems.com/resources/articles/writing-secure-ibm-i-application-and-13-ways-check-it-twice
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00438.warc.gz
en
0.928104
2,250
2.5625
3
Pushing the number of transistors it can cram onto silicon, chip giant Intel announced a step forward in its advance to the 65-nanometer manufacturing process, a move toward further miniaturization to boost chip efficiency and performance. In a sign it is on course to begin manufacturing computer chips with the new process — and maintain the Moore's Law paradigm of doubling the number of transistors on a chip roughly every two years — Intel said it had built functional static random access memory (SRAM) chips with more than half a billion transistors using the new process at a facility in Hillsboro, Oregon. Analysts said the 65-nanometer mark — even smaller than the recently introduced 90-nanometer manufacturing technology — is a significant milestone on the Moore's Law "highway." But the development also highlights the increased pressure on chipmakers to keep up with Intel's formula for chip advancement, which is now pushing physical limits. "We're getting into a period where the law of diminishing returns is starting to kick in," said Mercury Research president Dean McCarron, referring to Moore's own predictions that size limitations would become a roadblock in 2012. "We won't be able to get smaller. The technology will have to change to quantum transistors or something like that." Intel said transistors in its new 65-nanometer process will have gates measuring only 35 nanometers, 30 percent smaller than the on-off switches in the 90-nanometer process. Intel said about 100 of the gates could fit inside the diameter of a human red blood cell. The company, which has led the way to 90-nanometer manufacturing but has stumbled in doing so, said the new technology also addresses power and heat issues with a "second generation of strained silicon" to cut power leakage and "sleep transistors" that shut off current flow during downtime. "Intel has been actively working on the power and heat dissipation challenges faced by the semiconductor industry," said Intel senior vice president and general manager of Intel's technology and manufacturing group Sunlin Chou. "We have taken a holistic approach by developing solutions that involve systems, chips, and technologies, and include innovations on our 65nm technology that will go beyond simply extending prior techniques." IDC program manager Shane Rau said that Intel's announcement is an indicator of the progress toward real volume production using the 65-nanometer process. Rau said the company is on track to roll 65-nanometer manufacturing into volume next year and is sensitive to the technical issues around the transition. However, Rau said it is becoming more difficult to keep up with Moore's Law. "The technical challenges, such as heat dissipation, increase as the industry advances to smaller and smaller process technologies," Rau said. Gartner research vice president Martin Reynolds agreed, saying that while keeping up with Moore's Law entails investment and complexity, it is a requirement for the industry. "Moore's Law has always been hard, requiring enormous investments to stay on track, but it is the only way to maintain the replacement cycles on which the semiconductor industry so depends for its revenue," Reynolds said. "The economic issues are the challenge, but Intel continues to beat them.
As long as the value can be extracted, the industry will keep up.” Learning from Last Time Analysts agreed that the difficulty of shifting to smaller transistors and larger complexity showed when Intel transitioned to the 90-nanometer process. “We saw Intel struggle with power consumption on 90 nanometer,” Reynolds said. “The 65-nanometer process will force circuit and system design changes that deal with these issues. Circuit techniques used for mobile products will appear across the range.” McCarron said the “bumpy transition” to the 90-nanometer process demonstrated it is getting harder to maintain Moore’s Law. However, McCarron added that is the reason Intel is laying out its roadmap and getting to the 65-nanometer process in stages, such as the announced SRAM chips. Rival Right There As Intel has pushed its own manufacturing efficiency and processor performance, rival AMD has remained a steady second, capitalizing on some of Intel’s missteps with increased market share and performance advantages. McCarron said that although AMD might be moving more quietly, the company also is working on the transition to 65-nanometer technology and is partnering with IBM to do so. “AMD announced for revenue shipments, but it will take them several quarters to catch up with Intel,” Reynolds said. “In terms of 90-nanometer maturity, Intel has opened the gap against other manufacturers, in part because of its continued investment through the downturn. But others will benefit from Intel’s work, and will catch up.”
<urn:uuid:582582fc-fe69-4d6f-bfac-2896037367ba>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/intel-builds-new-chips-with-65-nanometer-process-technology-36256.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00438.warc.gz
en
0.9464
1,048
2.90625
3
A Guide to State Government Cybersecurity
The sheer volume of data held by government entities continues to grow exponentially. Like every public or private organization, states must balance the need for security and protection of their citizens' personal data with the ease of use and convenience of their services. Cybercriminals are taking note.
Last November, Louisiana Gov. John Bel Edwards declared a state of emergency following a cybersecurity attack on state government servers. The state was forced to activate its cybersecurity response team following a ransomware shutdown that caused an outage of state websites and emails, affecting many services citizens rely on. The cyber team decided to take what it deemed "extreme emergency protective measures," including completely halting all server traffic. The emergency declaration allowed several of the state's agencies, including the Office of Motor Vehicles, Department of Transportation and Development and the Department of Revenue, to waive fees and fines resulting from citizens not being able to meet filing deadlines and the like. The attempted ransomware may not have earned the cybercriminals a payout, but it brought the state's ability to serve its citizens to a screeching halt. And these types of attacks are happening more often than we may realize. Certain facets of state governments make them some of the most attractive targets for cybercriminals.
Lack of Budget
Traditionally, states aren't making appropriate investments in cybersecurity. A National Association of State Chief Information Officers/Deloitte cybersecurity survey found that a lack of budget has been the number one concern of state-level chief information security officers (CISOs) every year since 2010. The majority of states spend only 1 to 2 percent of their IT budgets on cybersecurity, and nearly half of states do not have a cybersecurity budget that is separate from their IT budget. In contrast, federal-level agencies and private sector organizations generally spend between 5 and 20 percent of their IT budgets on cybersecurity. Of course, state governments are aware that cybersecurity is a pervasive issue. But the sheer volume and variety of attack techniques now requires consistent investment — both in personnel and in resources — to stay ahead of the bad actors.
Large Attack Surface
Local and state governments usually exist in federated structures, meaning data flows from centralized sites but individual departments still retain autonomy. While these structures lead to operating efficiency, one weak link easily compounds into something more vulnerable. Federated structures also create challenges in standardizing and enforcing policies, such as employee phishing awareness and training programs. Yet human error remains the most exploited vulnerability in technological environments. Legacy systems that have not been updated, due to budget constraints or employee reluctance, may also stand out as a weak link in the infrastructure. And the criticality of these systems makes them ever more attractive to cybercrooks.
Lengthy Supplier and Third-Party Lists
Inbound email attacks continuously evolve in sophistication. Vendor email compromise is becoming a way for cybercriminals to gain access to local governments. Attackers start by hacking into a supplier's email, then silently sit and read through all the messages that come through the vendor's inbox. Eventually, they will join legitimate email threads and attempt to divert government funds to private bank accounts.
Even if a state government has the technologies and systems to protect its own attack surface, it can still become the victim of a cybersecurity attack because an external vendor's account was taken over by cybercriminals. Taking into account the sheer volume of third-party vendors that governments must interact with, this creates a sobering reality for state IT workers.
Mass-produced phishing attacks are often flagged by traditional security products, but these newer categories escape detection. Email attacks like those coming from third parties manage language and intent better than ever, successfully convincing users that the email they're replying to is legitimate. Instead of sending attachments that can be analyzed, scanned and deemed malicious, today's attackers prefer to play a waiting game, sending multiple messages with no real purpose except to gain the recipient's confidence. Then the attack comes their way. In today's IT environment, visual scrutiny and phishing awareness cannot fully protect us. States must partner to embrace a more holistic approach to analyzing email content, context, and metadata.
Lingering Media Coverage
Research suggests that state governments are less likely to pay ransom after being affected by cyberattacks than private sector organizations. However, government attacks receive more media coverage more quickly than other compromises — perhaps because they affect private citizens. The excessive media attention turns into a brutal cycle, with more attackers now focused on vulnerable government systems, which leads to more attempted attacks.
Link to Cyber Insurance
State governments have had limited IT budgets and aging legacy systems for decades, and ransomware itself is not new, so what has changed? The recent increase in cyber insurance may play some role. That growth has been driven by two factors. First, transferring cyber risk to an insurer can be a cost-effective strategy in today's IT world. Second, the market is proving a lucrative one for insurers. While other areas of insurance are flat, cyber insurance remains a profitable, if uncertain, segment. Loss ratios are half of those of traditional property and casualty policies.
More cyber insurance policies paying out more ransoms is part of the issue, along with poor defenses and the criticality of services. By attacking states, cybercriminals are successfully extracting more money more often. For example, in the second quarter of 2019, governments that chose to pay ransoms ended up paying 10 times more than their commercial counterparts. (Graphic: when they choose to pay, governments pay 10 times more than private-sector companies in ransomware attacks.) This appears to create a dynamic where the most vulnerable government organizations are paying more than better-protected ones, thanks to cybersecurity insurance policies.
A Checklist for Cybersecurity
Control the Human Element
Creating a security-conscious workforce is critical, as the majority of attacks can still be attributed to human error.
- Establish a password policy that includes routine password changes and strong password selection, and train employees never to write passwords down or reuse them.
- Implement data usage controls that can block risky actions like uploading information to the web, sending email messages to unauthorized email addresses and saving data to external drives.
- Monitor IT processes for complexity; overly complicated procedures encourage users to look for shortcuts. Keep ease of use in mind when updating processes.
- Implement ongoing training and education, keeping up with the latest attack techniques.
- Implement best-in-class procedures, such as two-factor authentication and password managers.
Document and manage hardware and software
Employees are naturally drawn to the latest devices and useful downloads. To effectively protect your environment, you have to know what's included.
- Take inventory and document all devices that could access your IT network, including personal ones.
- Automatically install software updates and security patches on all computers, using inventory tools to keep records.
- Detect and expel any unauthorized devices from the network or any devices running unauthorized software.
Protect private data
Focus on safeguarding sensitive data and also protecting data that is critical to continuity. Create maps of data and systems, and limit access only to those employees who are required to have it.
- Restrict access to those who need it to perform their jobs.
- Follow the same protocol for physical access.
- Monitor all user access to the network, and record authentication errors and unauthorized access.
- Restrict administrative privileges and carefully manage those who have them.
Understand Emerging Threats
Protecting state government networks is complex because of the long list of services they offer. Stay educated and aware of how threats are changing and anticipate the updates needed to stay current.
- Prioritize your budget to stay current with sophisticated threats and focus on contingency plans for responding.
- Regularly research and consult with well-informed cybersecurity intelligence.
- Join information sharing and analysis centers to take part in sharing information and jointly protecting networks and data. Specialized centers have been developed for public sector environments.
Implement 24×7 Real-Time Visibility
Your IT team must have cybersecurity operations that respond to threats and risky activity immediately.
- Manage and identify vulnerabilities and monitor and detect threats.
- Prioritize what needs updating and patching in a methodical fashion based on severity of risk.
- Create a detailed, documented response plan that goes beyond prevention.
Analyze Audit Logs
Without maintaining and monitoring audit logs, potential attacks could be overlooked, welcoming additional intruders and potential technological disasters.
- Look beyond what's required for audit purposes. The bad guys do.
- Record and examine log activity, then analyze the resulting log information (a minimal log-review sketch follows this checklist).
- Continuously monitor for threats, creating an audit trail when an incident occurs.
- Conduct regular risk assessments to find weak links in your environment.
- Be ready to report. Use managed vulnerability assessment services to understand your risk profile and IT security posture.
Ransomware continues to be the fastest-growing threat for state governments. Housing a second set of your data ensures continuity of services and enables recovery in the event of a system failure or natural disaster.
- Consider cloud and physical backup solutions, accounting for the frequency of data changes in your backup schedule.
- Keep processes flexible and secure to provide quick access to data. Provide a recovery solution that allows applications to return online seamlessly.
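To make the audit-log step above concrete, here is a minimal, illustrative sketch in Python. It assumes an sshd-style authentication log; the file path, log format, and alert threshold are placeholders rather than recommendations for any particular product.

```python
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"   # placeholder path; sshd-style format assumed
THRESHOLD = 10                   # arbitrary alert threshold for repeated failures

# Matches lines such as: "Failed password for invalid user admin from 203.0.113.7 ..."
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\d{1,3}(?:\.\d{1,3}){3})")

def failed_logins_by_source(path: str) -> Counter:
    """Count failed login attempts per source IP address."""
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for ip, attempts in failed_logins_by_source(LOG_PATH).most_common():
        flag = "ALERT" if attempts >= THRESHOLD else "ok"
        print(f"{ip:<15} {attempts:>5} failed attempts [{flag}]")
```

In practice, a SIEM or a managed monitoring service would do this continuously and correlate events across many log sources; the point is simply that once logs are collected and actually reviewed, patterns such as repeated authentication failures are easy to surface.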
Consider Added Complexities of COVID-19 Response
Along with the standard checklist for cybersecurity, the global pandemic of COVID-19 has added additional complexities to how we think about cybersecurity. Because of safety concerns, many state workers are now working remotely, outside the traditional network that IT protects. We must take some special considerations into account as we fend off a surge in attacks. It's critical to consider the increase in cyberthreats for remote workers. Here are some additional tips to consider:
- Security must be a team effort. That's especially true now, when individuals must step up their personal security posture as they work from home.
- Leverage VPNs as much as possible, if your network can handle it. If that's not possible for everyone, determine what other secure connection options, like remote desktops, are available.
- Constantly remind users never to trust anything until they verify the source, including apps on mobile devices, ads, maps and browser plugin downloads.
- Use two-factor authentication for everything.
- Utilize encryption for sensitive communications and document sharing.
- Encourage better password management. Usernames and passwords are even easier to steal now that employees work from home networks.
- Security training is more critical than ever. Use the routines that would be followed in the office to continue educating work-from-home employees, including regular reminders on how to spot scams and fake websites.
- Be prepared for changes in employee behaviors as they change workspaces, including printing more sensitive documents than usual, saving data on home machines or sharing computers without logging off work sites. Provide guidelines on how to best prepare employees to handle sensitive information at home.
- Set up and monitor secure channels and procedures for third parties and supply chains.
- Create an emergency plan that assumes the team is out of the office. Ensure your IT staff can easily and effectively collaborate in an emergency.
Partnering For Complete Security Services
More than ever, consider linking arms with the experts to keep the bad guys at bay. Advantage Technologies offers cybersecurity management, along with network design and monitoring and full-service technical support. It all starts with an onsite technology assessment. We send one of our professional technology consultants to evaluate your specific needs and develop your custom security solution. Schedule your confidential assessment online or call us at 877-723-8832 to see how we can help improve cybersecurity for your organization.
<urn:uuid:ae2b1a3f-c89b-4b4c-9864-0e655247e054>
CC-MAIN-2022-40
https://www.getadvantage.com/state-government-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00438.warc.gz
en
0.932917
2,445
2.59375
3
IBM's New Quantum Computer Is Double the Size of China's Jiuzhang 2 (FierceElectronics)
Computing giant IBM announced that it has built the world's largest superconducting quantum computer, called Eagle, a press statement reveals. The new machine is larger than Google's Sycamore as well as China's Jiuzhang 2. In October, researchers at the University of Science and Technology of China (USTC) in Hefei announced that their quantum computer Jiuzhang 2 worked using 60 qubits, and that it was a staggering 10 million times faster than Google's Sycamore quantum computer. Now, IBM's new Eagle processor will more than double the size of Jiuzhang 2 by using 127 qubits to solve problems. IBM's 127-qubit Eagle processor is now, theoretically speaking, the most powerful quantum computer in the world, though it has yet to be put through its paces. Unlike Google and China's USTC, IBM hasn't published an academic paper detailing tests conducted on its quantum computer to demonstrate its performance. Qubit count is also not the be-all and end-all when it comes to quantum computing power. The Jiuzhang 2, for example, had a total of 66 qubits, and it was 10 million times faster than Google's 54-qubit Sycamore due, in part, to its use of light photons.
"Quantum computing has the power to transform nearly every sector and help us tackle the biggest problems of our time," said Dr. Darío Gil, Senior Vice President, IBM and Director of Research. "The arrival of the Eagle processor is a major step towards the day when quantum computers can outperform classical computers for useful applications," he continued.
<urn:uuid:29863d11-6508-42fd-b24b-a67329832d85>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/ibms-new-quantum-computer-is-double-the-size-of-chinas-jiuzhang-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00438.warc.gz
en
0.939419
390
2.625
3
- 1 October, 2018
Is Deep Learning Too Superficial?
Deep learning has been broadly acclaimed for its advanced pattern recognition abilities that are primed for working with data at scale. It can detect non-linear patterns involving quantities of variables that would be difficult, if not impossible, for humans to untangle. Deep Learning's strengths are recognizing patterns and translating them into predictions with high accuracy rates.
The Deep Learning Dilemma
Critiques of Deep Learning take issue with how its achievements are produced. Deep Learning needs an inordinate amount of training data to generate reliable results. Moreover, a significant portion of its training data requires annotations for labeled outputs of the model's objectives. Although there are methods for reducing the data requirements of Deep Learning (such as transfer learning, sketched briefly below), there are a number of tasks for which such massive amounts of labeled training data just aren't available. An even greater limitation is that Deep Learning models don't necessarily understand what they're analyzing. They can recognize patterns ideal for Natural Language Processing or image recognition systems, but are somewhat restricted in their ability to understand a pattern's significance and, to a lesser extent, draw inferences from them.
Read the Full Article
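As a concrete illustration of the transfer-learning idea mentioned above, the sketch below freezes a pretrained image backbone and retrains only a small classification head, which is one common way to reduce the amount of labeled data required. It assumes PyTorch and torchvision (0.13 or later) are installed; the class count, learning rate, and training loop are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # placeholder: number of labels in the new, small dataset

# Start from a network pretrained on a large, generic dataset (ImageNet).
# (torchvision >= 0.13; older versions use models.resnet18(pretrained=True))
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a small head for the new task;
# only these weights are trained, so far less labeled data is needed.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Illustrative training step (a labeled train_loader is assumed to exist):
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```

None of this removes the deeper limitation discussed above: the model still recognizes patterns without understanding them.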
<urn:uuid:37269095-c9dd-4cb9-9b38-6249441de17d>
CC-MAIN-2022-40
https://allegrograph.com/articles/is-deep-learning-too-superficial/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00438.warc.gz
en
0.929076
250
3.0625
3
SpaceX has a chance at beating the aerospace giant Boeing to be the first private company to fly humans into orbit. This post has been corrected. The Space Exploration Technology rocket factory is a large, white hangar-like building near Los Angeles international airport, with a parking lot filled with late-model motorcycles and Tesla electric cars. The vast metal structure once churned out 737 fuselages for Boeing. When you get through the front doors, past security and a cubicle farm stretching the width of the building, there it is: Science fiction being wrought into shape, right in front of you. Right in front of all the workers, too. The company’s two-floor cafeteria is practically on and overlooking the manufacturing floor. Designers and accountants can eat lunch watching technicians build space capsules and rocket stages. There’s a lot to see: Rockets, like good suits, are bespoke objects, hand-made to order; a SpaceX tour guide says much of the work is too precise for robotic assembly. Visitors can’t snap pictures—the technology is considered sensitive to national security. An enormous robot encircles a carbon-fiber shell that enfolds a satellite mounted on top of a rocket, using sonic waves to test for invisible imperfections. Human workers align nine rocket engines in an octagonal frame before they are installed into the enormous aluminum tube; others use a crane to lift a large panel and move it between workspaces. Even higher overhead, the first Dragon space capsule to leave the atmosphere and come back again hangs as a trophy. Mounted beside it is a project still in development: An enormous metal arc, one leg of a landing tripod large enough for a rocket. Last month, NASA said it would pay SpaceX its largest single contract ever , $2.6 billion, to shuttle US astronauts up to the International Space Station (ISS). It’s one of two companies that will build vehicles to replace the discontinued space shuttle and return the US to the list of spacefaring nations. The other, SpaceX’s frequently testy competitor Boeing, will do the same job but at more than half again the cost—some $4.2 billion. In fact, SpaceX has a chance at beating the aerospace giant to be the first private company to fly humans into orbit. This is an enormous milestone for the firm, and also its most dangerous task so far. But building cost-effective space vehicles gives SpaceX a chance to save US space efforts from their own torpor. Despite successes in planetary science, like the Curiosity rover on Mars, NASA’s manned space program has been floundering. The first plan to replace the space shuttle was cancelled; a new effort to send people to explore the solar system is behind schedule and over budget, to the point where it may be unfeasible . Even the basic effort of getting astronauts up to the ISS—real estate in which the US has invested $75 billion—has been outsourced to Russia . In the private sector, the US, once the leader in satellite launches, now lags behind European and Russian competitors. An existing joint venture between Lockheed Martin and Boeing, the United Launch Alliance (ULA), is using engines bought from a Russian state company until 2017. And as China and India show their prowess to catch-up to the advanced economies with cost-conscious space stations and Mars probes of their own, a US side-bet on commercial space companies has now become the most likely way for the US to get off the ground. 
When NASA officials first got involved with SpaceX eight years ago, they thought they were hiring a temp worker for scut work—a so-called “space taxi” while the government focused on higher aims. But now the commercial project may be NASA’s best hope for getting humans into space. A vanity project on a multi-planetary scale When Elon Musk founded SpaceX in 2002, it was, at best, a millionaire’s flight of fancy. He had made his fortune from tech startups Zip2 and PayPal, and was still two years away from starting Tesla, the electric-car firm. Musk, as he will gladly tell you , has a vision: Colonize Mars and make humans a multi-planet civilization. He sees it as insurance against a global catastrophe that leads to human extinction. Per Musk, the only sensible policy in this universe is redundancy. Newly wealthy and with time on his hands, he concocted a scheme to send a greenhouse full of plants to Mars as a kind of grand gesture, but couldn’t find any cost-effective rocket to send it there, even on a multi-millionaire’s budget. He did find people like Tom Mueller, a frustrated engineer at the conglomerate TRW’s aerospace division, who was building a rocket engine for fun in his garage. That—the largest liquid-fueled engine ever built by an an amateur—turned out to be the earliest version of the Merlin, which powers SpaceX’s rockets. Musk also met Hans Koenigsmann, a German engineer who became the company’s fourth technical employee, at a rocketry club launch in the Mojave desert. “My German accent helps in presentations,” Koenigsmann says . “When I say, ‘This will work,’ it is more convincing than other accents for some reason.” Musk decided to start a company to provide the service he couldn’t find—an affordable ticket to Mars. Successful tech entrepreneurs love starting space companies: Jeff Bezos (Amazon), Paul Allen (Microsoft), Larry Page and Eric Schmidt (Google), and Richard Branson (Virgin) are all involved in firms dedicated to space tech. Most are seen, to varying degrees, as vanity projects. “So many of his friends advised him not to do SpaceX,” Luke Nosek, who helped build PayPal with Musk, told Quartz. Nosek is now a member of SpaceX’s board of directors. Finding a partner in crime Just as Musk’s company was beginning to approach the space business with a clean slate, NASA was, too. The impending expiration of the space shuttle program, which flew US astronauts and cargo into orbit from 1981 to 2011, prompted a scattered response in the US space agency. In 2005, the Bush administration launched the first successor program, Constellation, intended as a ticket to both the ISS and the moon. The cost was originally estimated at $97 billion; it would eventually be cancelled in 2009. But Mike Griffin, the aerospace engineer who became the top NASA administrator in 2005, had a bit of an unusual background: He was a former president of In-Q-Tel, the CIA’s in-house venture-capital fund for national security tech. And like Musk, he saw space travel as a key to the future of humanity. He just thought it was a job for NASA, not the private sector. With so much money going into Constellation, Griffin decided to spend $500 million on a commercial space program, outside of the traditional NASA contracting approach, in the hopes of producing a cheap way to service the orbital distraction while NASA focused on grander aspirations. 
This commitment had his top staff wondering if he saw the ISS—at a total cost of $150 billion, the most expensive single object ever built by mankind, but of relatively limited scientific and economic value—as “a huge rat hole we’re just throwing money down.” Advising Griffin was a physicist and venture capitalist, Alan Marty. One of the first things he did was write a two-page book report on Clayton Christensen’s classic Silicon Valley tome The Innovator’s Dilemma and distribute it to senior NASA executives. At Marty’s insistence, NASA’s attorneys were able to exploit a loophole created by the slapdash nature of the agency’s original founding. In the panic after the Soviet Union got to space first with Sputnik, the White House had demanded a civil space agency fast, and to avoid missing any opportunities, a young attorney had added a kind of universal action clause (section 203, sub-section a, part 5) to the 1958 law that founded NASA. “You know how Sherwin-Williams [Company] paint covers the world?” NASA general counsel Michael Wholley said . “He basically said, ‘If I’ve forgotten something, use this.’” And so in 2006 Griffin and his colleagues came up with a system to sort-of invest in two companies, SpaceX and Rocketplane Kistler, to develop space transit. There would be no sharing of equity or intellectual property, but also no guarantee of payment before technological and financial milestones were reached. “I knew enough about the federal government to know that if you invested money and you got none of your money back, everybody would get angry,” Marty said. “But it also turns out that if you invest money and you get five times your money back, everybody gets angry too, because then you’re competing with the private sector.” Rocketplane Kistler would eventually be dropped from the program, eventually flaming out in bankruptcy after failing to raise enough money from New York hedge funds and pension investors it targeted just as the economic crisis began. SpaceX, on the other hand, would eventually collect $396 million from NASA while contributing $454 million of outside capital, including an initial $100 million of Musk’s own money in 2006. The company’s outside fundraising strategy was simple: Turning to Musk’s deep-pocketed friends in Silicon Valley, who were more willing than hard-pressed New York financiers to take a flyer on something new. There was also an attractive quirk of the satellite launch business: Customers pre-pay to build their rocket. That meant if the company could prove its concept in a successful test, the company wouldn’t need to raise another round of working capital, protecting early investors’ stakes from dilution. What does SpaceX actually make? Rockets are marvelous pieces of technology. They seem to rise in fairly stately fashion when you watch them launch, but to reach orbit they must fly 7.7 km per second or about 18,000 miles per hour, nearly 25 times the speed of sound in air. Nothing else made by man goes that fast with people in it. Rockets are mostly fuel—for SpaceX, $200,000 worth of kerosene and liquid oxygen—with an almost delicate metal skin, mostly aluminum. Musk once asked an investor to imagine his 64 meter (224 ft) rocket, shrunk down to the size of a Coca-Cola can: The walls of the tiny explosive would be many times thinner than the drink in your hand. It is easier and cheaper to use solid-fuel engines. 
That would typically make them the first choice of the company’s chief designer—also Musk—but for the fact they are harder to control once ignited. For safety’s sake, more complex liquid-fueled rockets are the standard for taking people to space. The engines are spidered with metal capillaries that use the vessel’s own chilled fuel as coolant to keep the 3-D printed nozzle from melting in the wash of its own exhaust. Human flight was always the standard to which Musk’s associates say he aspired; and so the Merlin was the first new liquid-fueled rocket engine to fly in the United States since the 1990s. Rockets headed for space typically have two stages. The first stage provides the massive thrust to get into space; then it’s discarded, and the second stage glides the payload to its final destination in orbit. A satellite, encased in a custom-made carbon fiber fairing, or a Dragon space capsule full of cargo—someday, passengers—perches on top of the rocket at launch. SpaceX’s first rocket prototype, the Falcon 1, used one Merlin engine in its first stage. There are nine in the Falcon 9 rocket that is the company’s main product. And there will be 27 in the putative Falcon Heavy, as yet unrealized, for massive cargo—and trips to Mars. Falcon, Merlin, Kestrel, and Dragon: Not the Victorian virtues—Enterprise, Endeavor, Discovery—honored by the space shuttles they replace, nor competitor NASA’s classical Atlas, Orion, Apollo, and Saturn. SpaceX’s machines were made by people who read pulp fantasy novels as children, or the paperback science fiction of Musk’s childhood in Pretoria, South Africa. The only thing that matters is cost Regardless of its inspirations, the company was forced to adopt a prosaic initial goal: Make a rocket at least 10 times cheaper than is possible today. Until it can do that, neither flowers nor people can go to Mars with any economy. With rocket technology, Musk has said , “you’re really left with one key parameter against which technology improvements must be judged, and that’s cost.” SpaceX currently charges $61.2 million per launch. Its cost-per-kilogram of cargo to low-earth orbit, $4,653, is far less than the $14,000 to $39,000 offered by its chief American competitor, the United Launch Alliance. Other providers often charge $250 to $400 million per launch; NASA pays Russia $70 million per astronaut to hitch a ride on its three-person Soyuz spacecraft. SpaceX’s costs are still nowhere near low enough to change the economics of space as Musk and his investors envision, but they have a plan to do so (of which more later). The secret to the low cost is relatively simple, at least in principle: Do as much as possible in-house, in an integrated manufacturing facility, with modern components; and avoid the unwieldy supply chains, legacy designs, layers of contractors, and “cost-plus” billing that characterized SpaceX’s competitors. Many early employees were attracted to the company because they wanted to avoid the bureaucracy of the traditional aerospace conglomerates. “I guess I would call them bureaucratic integrators, people at large entities integrating other people’s technologies,” says Scott Nolan, who joined the company out of college as an early employee and is now a partner at Founders Fund. 
“SpaceX was the first real tech startup in that space developing their whole platform from the ground up, questioning everything.” But there’s a reason for everything to be the way it is, and the reason the dominant aerospace contractors were slow-moving behemoths of paperwork is that their prime customers—indeed, the prime customers of the entire space world—are governments. From SpaceX’s point of view, much of the blames lies with cost-plus contracting, the common government strategy of hiring companies to do work and paying their expenses plus a guaranteed profit margin. “When you do that, your engineering force is brain-dead,” Nosek says. “The incentive structure destroyed their ability to create true innovation.” At first, the partnership between a government agency used to paying whatever bill it was presented with and a start-up bent on cutting every corner it could find went much as you might imagine. “For every NASA person you put on my site, I’m going to double the price,” Musk warned Marty. The company made hatch handles out of parts for bathroom stall latches to save $1,470, and found that using racing-car safety belts to strap in astronauts was more comfortable and less expensive than custom-built harnesses. It used live people inside a full-size model to make sure that astronauts could move about the cargo capsule, rather than computer simulations. SpaceX executives disdained NASA’s love of acronyms and documentation. Employees took pride in working late on Friday nights. “They’d say, ‘Well, we could go buy this from this vendor, but it’s like $50,000. It’s way too expensive, it’s ridiculous. We could build this for $2,000 in our shop,'” said Mike Horkachuck , the NASA official who was the primary liaison with the company. “I almost never heard NASA engineers talking about the cost of a part.” But NASA did bring a focus to the company, one that other space start-ups often seem to lack. And, just as important, it brought early revenue. ‘”Okay, we have a schedule, we have milestones, we have funding,’ which was obviously key as well at the time, being such a small company,” Giger said of the NASA award. “That really gave us focus.” “I think we brought them up from being a little 100-man company, if that, to what they are today,” Horkachuck said. “Early on, [NASA] was what was keeping the lights on in the company. They’ve just evolved into less of a hobby shop and more of a real aerospace company that’s building production rockets.” But SpaceX always thought of itself as a tech firm, and its clashes with NASA often took a form computer developers—or anyone familiar with the troubled roll-out of healthcare.gov —would recognize as generational. SpaceX followed an iterative design process, continually improving prototypes in response to testing. Traditional product management calls for a robust plan executed to completion, a recipe for cost overruns. “We weren’t just going to sit there and analyze something for years and years and years and years to the nth degree,” said David Giger , a SpaceX engineer. “SpaceX was built on ‘test, test, test, test, test.’ We test as we fly. We always say that every day here, ‘Test as you fly.'” That was to become painfully evident in its first attempts at flying.
<urn:uuid:d26897a6-b945-4ad6-bb40-16f74047c6b0>
CC-MAIN-2022-40
https://www.nextgov.com/emerging-tech/2014/10/how-spacex-leapfrogged-nasa-and-became-serious-space-company/96995/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00438.warc.gz
en
0.958385
3,763
2.71875
3
In this article we are going to look at the UK General Data Protection Regulation, or rather, how the General Data Protection Regulation (GDPR) will affect how data protection is dealt with in the UK. The GDPR takes effect from 25 May 2018, so it's important that businesses and organisations are well prepared for its implementation by then. Hopefully, the information will help your business or organisation to better understand how the GDPR will work in the UK.
GDPR – the basics
For anyone who is not aware of what the GDPR is, here is a basic explanation. The GDPR is intended to provide a level of consistency concerning the way data protection is addressed in EU states. It also provides an improved level of control for EU citizens surrounding their personal data. It's important to note that GDPR does not just apply to businesses and organisations that are based within the EU. It applies to any business or organisation that is involved with the processing of the personal data of EU citizens. Currently, the GDPR applies to UK companies as UK citizens are EU citizens. Although this will no longer be the case following Brexit, GDPR will probably still apply to the majority of businesses and organisations in the UK. This is because they will likely still be involved in processing the personal data of people from other EU countries.
What happens at present?
Currently, each state which is a member of the EU has its own data protection laws, which it operates under the 1995 Data Protection Directive. In the UK, these rules are detailed in the Data Protection Act of 1998, and data protection is overseen by the Information Commissioner's Office (ICO). The GDPR will bring changes to the way personal data is handled in the UK. These changes will be detailed in the Data Protection Bill which has already been written.
The new data protection bill in the UK
This data protection bill will be used to implement the changes that have been brought about by the GDPR. Effectively, this is the UK General Data Protection Regulation. The bill has already been published, on 14 September 2017, but it only becomes law once it has passed through both Houses of Parliament. Like many other countries, the UK has added some exemptions to the GDPR as part of this bill. These exemptions help to protect certain professional roles, such as journalism and anti-doping agencies. One of the additions to the bill deals with the anonymisation of personal data. This addition states that researchers who find that they can actually identify an individual, or individuals, from data that has been anonymised must report their findings to the ICO. This bill is not yet law, but once it has passed through both Houses it will be. At this point the previous Data Protection Act will be repealed. This needs to happen before the GDPR becomes law, in order to ensure that UK businesses and organisations comply with the requirements of the GDPR.
UK General Data Protection Regulation – non-compliance
It's vital that UK businesses and organisations comply with the requirements of the GDPR and the new data protection bill. We will take a look at some of the requirements of the GDPR soon, but let's first examine what can happen if a business or organisation fails to comply. The full range of potential fines and sanctions has yet to be defined, but the maximum potential fine is 20 million Euros or 4% of annual turnover, whichever is higher.
In reality, it's unlikely that this maximum fine will ever be imposed, but there will still be severe consequences for non-compliance. Like every Data Protection Authority (DPA), the ICO will have some leeway to decide on the fines and sanctions it imposes. But it will still be expected to liaise with other DPAs. This liaison is required in order to maintain a level of uniformity regarding how data protection is dealt with throughout the EU.
Does GDPR make it necessary to always have consent?
Many businesses and organisations are under the false impression that it's always necessary to have consent in order to process data. This is not the case. Consent is only one of the legitimate reasons for processing personal data; others include the need to process data for the completion of a contract between the data controller and the data subject and the need to process personal data in respect of ongoing legal action. However, if consent is the legitimate reason that is being used for processing data then there are rules that need to be followed (a brief illustration of recording per-purpose consent appears at the end of this article).
- Consent must always be given freely and be informed.
- Consent must be given for each different reason for which data is processed. Requests for consent cannot be hidden away in other terms and conditions for the business or organisation.
- Action needs to be taken by the data subject in order to give consent. This means that methods of getting consent such as pre-checked tick boxes are no longer legitimate.
In addition to clarifying the situation with regards to consent, the GDPR also solidifies the rights of EU citizens when it comes to how their personal data is dealt with.
The rights of EU citizens
The GDPR gives several rights to EU citizens, regarding the holding and processing of their personal data. These rights include:
- The right to be informed – this means that businesses and organisations need to inform individuals about what their data is being used for.
- The right of access – as with many of the rights, this right currently exists, but the GDPR changes it. Subject Access Requests (SARs) now need to be responded to within one month rather than the previous 40 days. The other major change is that businesses and organisations cannot charge for the service, except when requests are unreasonable or repeated on a regular basis.
- The right to have mistakes rectified – this means that individuals can ask for mistakes in personal data to be corrected.
- The right to be forgotten – individuals can ask for personal data to be deleted. It's important to note that businesses and organisations do not necessarily have to comply with such requests if they have a legitimate reason to continue processing the data.
- The right to restrict processing – this applies when an individual requests that you stop processing their personal data. It applies to the processing only and you can still retain the data if you have a legitimate reason to do so.
- The right to data portability – this means that businesses and organisations must provide individuals with a copy of the data held, in a machine readable format. What this means in practice is that it will be easier for individuals to forward data to other third parties.
- The right to object – individuals have the right to object to their personal data being processed for legitimate interest or public interest, for direct marketing or for scientific research.
Processing must be stopped unless the business or organisation can show that it has a legitimate reason for processing that overrides the rights and freedoms of the individual, or that processing is required in reference to legal claims.
- Rights surrounding automated decision making and profiling – the GDPR sets strict rules for this area. They include having a legitimate reason for profiling and informing the individual of how their information is being used.
You can see that there are plenty of requirements that need to be complied with when it comes to the UK General Data Protection Regulation. But how do businesses and organisations make sure that they comply?
Responsibility for compliance with the GDPR
For the first time, with the GDPR, data controllers are not solely responsible for the protection of data. Individuals can now take action against both data controllers and data processors if they feel the GDPR has not been complied with when it comes to the processing of their personal data. This makes it even more essential that everyone involved in the processing of personal data complies with the stipulations of the GDPR.
The role of the Data Protection Officer
The rules surrounding whether or not a business or organisation needs to have a data protection officer (DPO) in place are slightly open to interpretation. The GDPR states that a business or organisation needs to have a DPO if it is involved in processing that requires large-scale, systematic monitoring of individuals, or if it processes certain types of sensitive personal data. For many businesses or organisations it may make sense to have a DPO in place anyway. The DPO needs to have extensive knowledge of the GDPR and needs to know how to plan and implement an effective data protection process. An individual with this type of knowledge could be invaluable in helping a business or organisation comply with the stipulations of the GDPR. It's important to note that DPOs do not need to have formal qualifications for the role. The GDPR only stipulates the knowledge that is necessary. It's up to the business to ensure that the DPO has the right credentials for the role, and that they operate completely independently, without pressure or influence from the business.
Businesses and organisations should also ensure that everyone who works there, and is involved with the handling or processing of personal data, understands the rules of the GDPR, as well as the implications of non-compliance. Providing this knowledge is an important part of any business or organisation ensuring that it complies with the UK General Data Protection Regulation.
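As a small illustration of the per-purpose consent rules described earlier, here is a hypothetical sketch of how consent might be recorded so that it can later be demonstrated, checked and withdrawn. It is not legal guidance; the purpose names and fields are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ConsentRecord:
    """One freely given, informed consent for one specific purpose."""
    subject_id: str
    purpose: str                      # e.g. "email_newsletter" (hypothetical purpose name)
    given_at: datetime
    withdrawn_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

@dataclass
class ConsentLedger:
    records: List[ConsentRecord] = field(default_factory=list)

    def give(self, subject_id: str, purpose: str) -> ConsentRecord:
        # Consent is captured per purpose and requires an explicit action;
        # nothing is pre-ticked or buried in unrelated terms and conditions.
        record = ConsentRecord(subject_id, purpose, datetime.now(timezone.utc))
        self.records.append(record)
        return record

    def withdraw(self, subject_id: str, purpose: str) -> None:
        for record in self.records:
            if record.subject_id == subject_id and record.purpose == purpose and record.active:
                record.withdrawn_at = datetime.now(timezone.utc)

    def has_consent(self, subject_id: str, purpose: str) -> bool:
        return any(r.subject_id == subject_id and r.purpose == purpose and r.active
                   for r in self.records)

ledger = ConsentLedger()
ledger.give("subject-123", "email_newsletter")
print(ledger.has_consent("subject-123", "email_newsletter"))   # True
ledger.withdraw("subject-123", "email_newsletter")
print(ledger.has_consent("subject-123", "email_newsletter"))   # False
```

A real system would also need to record what the individual was told at the time, retain the records for audit, and link withdrawal to an actual halt in processing.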
<urn:uuid:88e9cc73-c4ec-4a97-85f2-cf0ee61d71d5>
CC-MAIN-2022-40
https://www.compliancejunction.com/uk-general-data-protection-regulation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00638.warc.gz
en
0.944214
1,884
2.65625
3
We’ve all had some experience in Microsoft Word, perhaps the most popular program in the Office Suite (many would argue). But many still don’t realize that there are quite a few hidden features in Word that, when learned, will help make you into a master of the globally-instituted document composition platform. Here are 10 key ways to master your use of Microsoft Word and make your working life that much more enjoyable. - Enjoy the use of more of Word’s symbols as you type. Normally, when you are typing in a Word doc you see a lot of empty space between the words and lines, but there is a lot more going on than what is visible. If you want to see what you’re missing in terms of helpful formatting symbols, Go to File, Options, then Display, then Always Show These Formatting Marks on the Screen. Under that heading, you will see a list of options that will allow things like paragraph signs and dots marking the amount of space between words to become visible: - How many ways can you format a paragraph? The answer is: There are many ways to format paragraphs, and you can easily master this and take your Word authorship to a new level. By allowing the paragraph symbol to be shown (as in step 1), this will allow you to copy over the formatting along with the text to wherever you want to next paste that text. - Know Thy Word sections. Learn to organize your Word docs better by utilizing the different breaks found in the use of sections. Access the Breaks portion on the Page Layout menu, and see your document as Microsoft Office sees it. By setting up your Word doc in sections, you can independently format each section and attain a level of mastery over your document not otherwise found. - Master the use of Styles. You can create style templates in Word which can be used again and again for future documents. For example, if you write a lot of memos, you can create a style template for memos, and so on. You can go to Design >> Themes for some good style ideas. - Format your document prior to writing. Formatting your doc prior to beginning the writing of it is a good idea, so you can get a well-formed idea of the format before commencing the actual writing part. Many of us have experienced the frustration of wording a document only to have to format and perhaps reformat it in a different setting because we didn’t establish (and save) the formatting from the get-go. - Customize your paste options. You can control how MS Office pastes your text by clicking on the Office logo (the button at the top left of the screen), going to Word Options, then to Advanced. You should then see a Cut, Copy, and Paste option that lets you configure customized options. This will do things like disable hyperlinking when pasting, along with other handy things to make your use of Word more enjoyable. - Use fully justified formatting. This is perhaps one of the better-known Word formatting options – fully justified formatting will give you equally-aligned margins without the ragged edge on the right side that’s so commonplace in writing. It appeals to those who want a tidy, clean, and perhaps more professional look to their text, though “there’s no arguing taste” but with the beholder (or writer) in this case. Nevertheless, if you want to access this option, click the Office logo >> Word Options >> Advanced, then expand the Layout Options and set fully justified formatting there. - Hide the Ribbon. This is another common option used by Word aficionados. 
For those who get a bit too distracted by the visual busy-ness of their ribbon toolbar, there is a shortcut to hiding it: Click CTRL+F1. Do it again to make it reappear. - Clear all formatting. Here’s one many may not know of: The Clear All Formatting option, which does exactly what it says. This will give you a chance to clear the formatting slate and start over again. Select however much text you want to clear, and click the button that looks like the letter A holding an eraser right beneath References on the main ribbon interface. - Spike your copy and pasting. Here’s a special way to copy and paste that allows you to copy from different places in a document and then paste them all together elsewhere. The CTRL+F3 command will allow you to cherry-pick the various places in your doc and put them all together in another area, or new document. The spike-pasted text will also display where the original cuts were, for comprehensive editing purposes. Talk to a Software and Office Specialist If you need further help with Microsoft Office programs like Word, you can speak to a specialist at Hammett Technologies, which is a proven leader in providing IT consulting and software support in Washington, DC or Baltimore. Contact us at (443) 216-9999 or send us an email at [email protected] today, and we can help you with all your questions or needs.
<urn:uuid:54747791-4670-4d07-8d3d-5761530d99e7>
CC-MAIN-2022-40
https://www.hammett-tech.com/10-ways-to-master-your-use-of-microsoft-word/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00638.warc.gz
en
0.914485
1,050
2.5625
3
February 1, 2017 | Written by: Avi Alkalay, IBM Cloud Advisor
Object storage is a relatively new option for data storage, optimized for general binary or unstructured data, often multimedia. It has gained a lot of importance in the last few years due to the exponential growth of audio, video and images on the web, and thanks to the huge growth of mobility and social networks. When developers create new apps, they must decide where and how to store data. Structured data (such as name, date, ID, and so on) will still be stored in regular SQL or NoSQL databases. Images and other binary data that pass through the app will have a better home with object storage, thanks to the following benefits:
- Ease of use. Each object gets a unique ID and an HTTP URL that can be publicly accessible. Data read and write operations are very simple and may be performed directly by the user's browser, via representational state transfer (REST), without having to go through the control of the server app. The app is released from the rigid structure of database tables and file system hierarchy. (A brief example of these operations appears later in this article.)
- Scalability. Unlike classical storage using files and tables, the infrastructure to store objects doesn't grow in complexity when data grows. Object storage can grow quickly, without limits.
- Agility. File systems and databases are complex and require constant care by the sysadmin or database administrator. Thanks to the simplicity of objects, the developer or app owner doesn't have to depend on these professionals, which eliminates bottlenecks in the app's evolution. A developer has more freedom to change an app without the help or blessing of the infrastructure team. This agility aspect is what makes object storage so attractive for modern apps.
Object storage is not a substitute for older storage methods such as file systems or databases. Rather, it complements them with new features. There are also inaccurate comparisons between object storage and block storage. These are very different things that solve different problems. Objects are used by programmers at the application level, while block storage is a concern of the infrastructure architect. There is no relation between the two. To compare object and block storage is like comparing cars and tires.
Object storage is being used in place of physical tapes and libraries in backup solutions. Maintaining the physical integrity of tapes and robots and transporting them to other locations requires a lot of logistical work. It is precisely the elimination of these logistics that makes object storage-based backup so attractive. You don't even have to change your backup solution. Object storage can be easily integrated and plugged into what you already have.
Object storage with IBM
IBM Cloud Object Storage focuses on hybrid agility, which means you can have your objects in your own data center, protected by your own firewall, in the public cloud or a mix of both. In the public cloud, object storage that uses the S3 and Swift protocols can be activated in the IBM Bluemix catalog. Behind the firewall in one's own data center, there are software options that can be used with an organization's own, low-cost hardware. There are also integrated options with high-performance hardware. It also enables transparent hybrid architectures to help organizations find the best cost and benefit balance between their own data centers and public or private clouds.
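To illustrate how simple those read and write operations are in practice, here is a minimal sketch against an S3-compatible endpoint using Python and the boto3 library. The endpoint URL, bucket name, keys and credentials are placeholders, and IBM Cloud Object Storage also offers its own SDKs, so treat this purely as an illustration of the general pattern.

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible object store.
cos = boto3.client(
    "s3",
    endpoint_url="https://s3.example-object-store.com",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

BUCKET = "my-app-media"   # placeholder bucket name

# Write: the object is addressed by a key; there are no tables or directories to manage.
with open("photo.jpg", "rb") as image:
    cos.put_object(Bucket=BUCKET, Key="users/42/photo.jpg", Body=image)

# Read: the same key retrieves the object over HTTP.
response = cos.get_object(Bucket=BUCKET, Key="users/42/photo.jpg")
data = response["Body"].read()
print(f"Retrieved {len(data)} bytes")
```

Because the interface is just HTTP plus a key, the application code stays the same whether the bucket lives on-premises or in the public cloud, which is the hybrid flexibility described above.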
Here's a short list of takeaways about object storage:
- It does not replace more traditional methods of storage, but complements them with a lot of agility.
- It can be used to simplify and reduce the cost of existing and new backup solutions.
- IBM Cloud Object Storage helps organizations keep consistently integrated data in their own data centers, the public cloud, or both.

Learn more about IBM Cloud Object Storage. Shutterfly has nearly 150 petabytes of images within its site. Hear how they use IBM Cloud Object Storage to manage that collection, and continue to grow it, live at IBM InterConnect.
When most people think of "compatibility", they're not thinking about computer software programs. But they honestly should be! Some types of compatibility have nothing to do with how people or animals get along. Software compatibility testing, for example, helps determine whether or not a software program or application is capable of working on different environments, operating systems, hardware, other software, or mobile devices.

Most in-house QA teams will test their software on all of their available equipment, but this doesn't cover all of the types of systems on the market. Consumers who can't run a particular piece of software on their systems may tarnish a company's reputation with negative online reviews.

Purpose of Compatibility Testing

Compatibility testing helps ensure that your customers can install and run their purchased software. This type of testing determines how well the overall operation of the system works with your software. Compatibility testing ensures the following:
- The software can install and function on multiple environments
- Variances in screen size, resolution, and operating systems do not corrupt the software
- The minimum specs required to run the software are established
- The software is tested against various hardware interfaces like graphics cards, headphones, and different RAM types

This type of testing helps ensure that your software works on different versions of operating systems, different types of computers and software programs, and different network environments. This helps ensure that a user can run the software on many types of user configurations without annoying glitches.

How Software Compatibility Testing Works

When a QA team runs a compatibility test, the software will be tested on different hardware systems under many different conditions. For example, the QA team will test your software on:
- Different browsers like Firefox, Chrome, Internet Explorer, and Safari
- Operating systems, including different versions of Windows, Chrome OS, iOS, and Linux
- Various levels of computing capacity
- Hardware peripherals
- Multiple versions of system software

Why Conduct Compatibility Testing

A software program that doesn't work correctly on a majority of systems poses a risk to both finances and reputation. You won't be able to sell the software to all users, therefore drastically reducing profits. Plus, users who do buy the software only to find out that it doesn't work may never purchase products from your company again. Businesses that encounter compatibility problems too late will have to try to correct the issue after the product is already released, which will result in costly recalls and repairs. Performing compatibility tests before the product is released can help your product avoid many pitfalls.

If your in-house QA team ever needs help with the workload, consider outsourcing your software compatibility testing needs to a professional QA lab like iBeta. Contact us to learn more.
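In practice, much of this matrix can be expressed as parameterized automated tests. The sketch below is a simplified illustration using pytest; the environment list and the launch_app() stub are hypothetical stand-ins for a real QA harness that would drive actual machines, devices, or emulators.

```python
# Simplified sketch of a compatibility matrix as parameterized tests.
# launch_app() is a hypothetical stub; a real harness would target actual systems.
import pytest

SUPPORTED = {("windows-11", "chrome"), ("windows-10", "firefox"), ("macos-13", "safari")}

def launch_app(target_os: str, browser: str) -> int:
    """Pretend to start the app; return 0 on success, 1 on failure."""
    return 0 if (target_os, browser) in SUPPORTED else 1

@pytest.mark.parametrize("target_os,browser", [
    ("windows-11", "chrome"),
    ("windows-10", "firefox"),
    ("macos-13", "safari"),
    ("ubuntu-22.04", "chrome"),   # this row fails, flagging a compatibility gap
])
def test_app_starts(target_os, browser):
    assert launch_app(target_os, browser) == 0, f"App fails on {target_os}/{browser}"
```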
The Air Force Research Laboratory has built a supercomputer driven in part by several hundred Sony PlayStation 3 consoles, reports Warren Peace in Stars and Stripes.

AFRL has assembled 336 PlayStation 3s in a cluster and, together with off-the-shelf graphics processing units, created a supercomputer nearly 100,000 times faster than high-end computer processors currently on the market. It's not the first time the gaming console has been harnessed for more serious uses. The University of Massachusetts Dartmouth, for example, built a powerful computer out of eight PlayStations in 2007 to study black holes.

The technology concept is made possible by the console's Cell processor, which was designed to integrate with other Cell processors to multiply processing power and crunch numbers. As a result, the Air Force researchers were able to use that power to run applications such as back-projection synthetic aperture radar image formation, high-definition video image processing, and neuromorphic computing, a method of replicating human nervous systems. Mimicking humans helps the machine recognize images during tasks such as target recognition, the officials said.

Even though it replicates some attributes of a supercomputer, the arrangement falls considerably short of high-end supercomputers, AFRL officials said. For starters, the way the consoles connect online or to each other is relatively slow compared to regular supercomputing setups. Thus, the researchers are limited in what types of programs they can efficiently run on the PS3 supercomputer, known as the 500 TeraFLOPS Heterogeneous Cluster. The system is located at AFRL's Affiliated Resource Center in Rome, N.Y.

The system, which uses mostly off-the-shelf components, is a relatively inexpensive and green machine, the officials said. It uses 300 to 320 kilowatts at full speed and about 10 percent to 30 percent of that in standby mode, whereas most supercomputers use 5 megawatts. What's more, much of the time the cluster will only be running the nodes it needs and will be shut down when not in use.

The team that built the supercomputer has ordered 1,700 more consoles to augment the existing cluster's power. The additional PlayStation 3s were ordered through the Defense Department's High Performance Computing Modernization Program.

"Supercomputers used to be unique with unique processors," said Richard Linderman, AFRL's senior scientist for advanced computing architectures. "By taking advantage of a growing market, the gaming market, we are bringing the price performance to just $2 to $3 per gigaflops."

For more on military supercomputers, see "DARPA ponders taking supercomputing to the extreme."
Help Net Security writes that security threats related to IoT and connected devices within healthcare environments have remained sorely under-addressed, despite increased investments in healthcare cybersecurity. Data shows that 53% of connected medical devices and other IoT devices in hospitals have a known critical vulnerability. Additionally, a third of bedside healthcare IoT devices, the devices patients most depend on for optimal health outcomes, have an identified critical risk. If exploited, these vulnerabilities could impact service availability, data confidentiality, or patient safety, with potentially life-threatening consequences for patient care.

- IV pumps are the most common healthcare IoT device and carry the lion's share of risk: IV pumps make up 38% of a hospital's typical healthcare IoT footprint, and 73% of those have a vulnerability that could jeopardize patient safety, data confidentiality, or service availability if it were to be exploited by an adversary.
- Healthcare IoT running outdated Windows versions dominates devices in critical care sectors: Devices running versions older than Windows 10 account for the majority of devices used by pharmacology, oncology, and laboratory departments, and make up a plurality of devices used by radiology, neurology, and surgery departments, leaving patients connected to these devices vulnerable.
- Default passwords remain a common risk: The most common IoMT and IoT device risks are connected to default passwords and settings that attackers can often obtain easily from manuals posted online, with 21% of devices secured by weak or default credentials.
- Network segmentation can reduce critical IoMT and IoT risk: Network segmentation can address over 90 percent of the critical risks presented by connected medical devices in hospitals and is the most effective way to mitigate most risks presented by connected devices.
D-Wave's quantum computer leverages quantum dynamics to accelerate and enable new methods for solving discrete optimization, sampling, material science, and machine learning problems. It uses a process called quantum annealing that harnesses the natural tendency of real-world quantum systems to find low-energy states. If an optimization problem is analogous to a landscape of peaks and valleys, for example, then each coordinate represents a possible solution and its elevation represents its energy. The best solution is the one with the lowest energy, corresponding to the lowest point in the deepest valley of the landscape. Read more on quantum annealing.
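To make the energy-landscape picture concrete, here is a minimal sketch that simply enumerates every state of a tiny binary optimization problem and picks the lowest-energy one. The coefficients are made-up illustration values, and this is plain brute force rather than D-Wave's hardware or API; an annealer is useful precisely because exhaustive search stops being feasible as the number of variables grows.

```python
# Minimal sketch: find the lowest-energy state of a tiny binary optimization problem
# by brute force. Coefficients are arbitrary; a real problem would be far larger.
from itertools import product

h = {0: -1.0, 1: 0.5, 2: 0.8}                    # per-variable biases
J = {(0, 1): 1.2, (1, 2): -0.7, (0, 2): 0.3}     # pairwise couplings

def energy(state):
    e = sum(h[i] * state[i] for i in h)
    e += sum(c * state[i] * state[j] for (i, j), c in J.items())
    return e

best = min(product((0, 1), repeat=3), key=energy)
print("lowest-energy state:", best, "energy:", round(energy(best), 3))
```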
This is one of those little things that make life much, much easier. Imagine the scenario: you go to eject a USB disk drive or USB flash drive and Windows tells you that it is still in use. How do you track down which application is using the drive?

Eject, Eject, Eject

In the example below, the USB flash drive E: will be ejected. Or at least, we will try... Alas, no! There is a problem ejecting the USB mass storage device.

Who, or What, Is Using the Drive?

Previously this would mean using Sysinternals Process Explorer or handle.exe to figure out which process had a lock on the file. Windows 10 reports this in the System event log as Event ID 225, so there is no need to run additional tools!

In the example below, Microsoft Word from Office 365 Professional Plus had a file open on the E: drive with pending changes that had not been saved. When trying to eject the USB drive, the executable and its Process ID (PID) are recorded in the event. The user then knows which application is using the USB drive and can take the appropriate action. This could be to finish off the task, save the changes to the file, and close down the application.

Interestingly enough, when testing this, Windows 10 allowed the drive to be ejected even though Word was running. There had to be unsaved changes to prompt the above behaviour.
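If you want to pull those entries without opening Event Viewer, the built-in wevtutil tool can query the System log for Event ID 225. The sketch below just wraps that call from Python; it assumes a Windows machine with wevtutil on the PATH (it ships with Windows), and the exact event text varies between Windows builds.

```python
# Minimal sketch: list recent "device in use" events (Event ID 225) from the System log.
# Assumes Windows with the built-in wevtutil.exe available on PATH.
import subprocess

result = subprocess.run(
    [
        "wevtutil", "qe", "System",
        "/q:*[System[(EventID=225)]]",  # XPath filter for Event ID 225
        "/f:text",                      # human-readable output
        "/c:5",                         # only the five most recent matches
        "/rd:true",                     # newest first
    ],
    capture_output=True, text=True, check=False,
)
print(result.stdout or result.stderr)
```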
What is the Best Intercom? The first intercoms were developed in the late 1800s. They used speaking tubes to talk to other people in the building. Business organizations were the first to use these intercoms. The office manager could reach his secretary or other people in various departments in the building. The invention of the telephone in the early 1900s made it easier to connect everyone. By the 1950s, the telephone connected people all over the world. The intercoms used the same technology as the phone system. In the 1970s, the analog intercoms made it easy for the manager to talk to the accounting department or the manufacturing floor. Today’s Apartment Intercoms provide so much more capability. Digital IP intercoms that attach to the network were made available in the 1990s. It made it much easier to install intercoms around schools, offices, and hospitals. The latest intercom systems use smartphone apps and cloud servers to enable direct communication all over the world. This article provides the pros and cons of different intercom systems. Analog Intercom Systems for Apartments The analog intercoms had wires going from the intercom panel in the lobby to each of the apartments. Analog intercoms use electrical signals with voltage and variable current levels to transport the sound signal. A low-voltage and current are used to power the system. The intercoms are available as two-wire and four-wire systems. Two-wire intercoms provide voice communication in one direction at a time, so are considered “half-duplex” systems. Multiple intercoms can be daisy-chained so that many people can listen and talk. This is called a “party-line” system. To talk to the other person, you need to push the “talk” button. Pros: The two-wire systems use less wire than the four-wire systems so it can save money in cabling. This is a simple system. Because this is a hard-wired system, it is considered to be more secure. Cons: The early implementations of this intercom system didn’t allow you to select the person. It was primarily used to connect to one or many people. Some systems use routers to select the person. It allows talking in only one direction at a time, and the user must push a button to talk. It doesn’t support video. The four-wire system provides communication in both directions, so it is called a “full-duplex” system. In this system, people at both ends can talk and hear at the same time. The Plain Old Telephone (POTS) analog telephone system is an example of a four-wire system. Pros: This is a full-duplex system that allows simultaneous talking in both directions. There is no “talk” button required. These systems are available with a “master” station in the lobby that includes a button for each remote intercom you would like to talk to. Cons: It doesn’t support video. It uses more wiring than the two-wire system. Wires must be connected from the lobby panel to the intercoms in the apartments. Digital Intercom Systems The digital intercom systems connect intercoms (and telephones) using special digital protocols. Voice over IP (VoIP) and SIP are examples of protocols used to carry voice signals. These systems provide much more control than the analog systems. The Ethernet network connects many different intercoms (or telephones) together. The addressing schemes allow a single telephone with buttons to address and connect to another telephone on the network. Intercoms without buttons (like the Digital Acoustic intercoms) are assigned to a server. 
Many intercoms are connected over the network to a single control point like a computer or apartment intercom. The central computer (or panel) can be used to select and communicate with one intercom at a time. These centralized systems are usually used in an office environment where there is a guard or concierge at a desk in the lobby. To learn more about how IP intercoms work, take a look at our article, How Network Attached Amplifiers and IP Intercoms Work. Pros: It uses the network infrastructure to communicate from the central station to all the intercoms on the network. It can support IP video. Cons: It requires a separate IP camera at each intercom station to include video. Apartment House Intercoms The apartment door entry system in the lobby allows you to select the person or apartment you would like to contact. The classic system includes a one or two-line display that allows the visitor to scroll through a list of apartments. Once they select the apartment, they can push the call button to notify the tenant that they are in the lobby. The latest lobby intercoms include a large touch-screen panel. This multi-tenant intercom system provides a wireless connection from the lobby intercom station to an app on your smartphone. The new systems have large LCD screens that allow you to see many of the residents in the building. You can select 7-inch or 10-inch panels. The larger displays provide a better user interface and make it much easier to find the right person. The lobby panel includes a camera so the person at the door can be seen as well as heard. These touch-panel systems can be used in office buildings, hospitals, and other large multi-tenant organizations. The systems use a cloud server that directs the communication from the lobby panel to the smartphone apps. The app allows the user to see and talk to the person at the door. They can release the lock in the lobby using their smartphone. To learn more about how cloud-connected intercoms work, see our article, How IP Intercoms Communicate with Smartphones. Pros: They provide the most advanced communication capability. The new systems allow you to talk and see the person at the door using an app on your smartphone. The panel can connect to your smartphone as long as you have a cell connection. No wiring is needed between the lobby station and the apartment. Cons: Requires a monthly subscription fee that supports the cloud server connection. Summary of Apartment Intercoms Intercoms have evolved from speaking tubes to analog electrical systems, to digital IP systems, and finally to wireless digital IP systems. There are still many analog intercom systems in use today. The latest intercoms include large touch-screen displays and communicate the cell network to the smartphone of the person in the apartment. The system uses cell communication so you can contact the person no matter where in the world they are. If you need help selecting the right apartment intercom system, please contact us at 800-431-1658 in the USA, or at 914-944-3425 everywhere else, or use our contact form.
While a number of different types of firewalls exist, two of the most important steps in the evolution of the firewall are the introduction of the stateful firewall, invented by Check Point founder and CEO Gil Shwed in 1993, and the transition from the traditional data center firewall to the next-generation cloud firewall.

One of the major milestones in the development of early firewalls was the transition from stateless to stateful firewalls. The original, stateless firewalls were not designed to store any information about a particular connection from one packet to the next. This meant that they were capable of catching obvious attacks but missed more sophisticated and subtle ones.

A stateful firewall, first offered by Check Point, stores some information throughout the life of a connection. This enables it to detect more subtle anomalies. For example, a DNS response packet without a corresponding request could indicate an attempted DNS spoofing or amplified Distributed Denial of Service (DDoS) attack. These sorts of attacks would be invisible to a stateless firewall that assumed any inbound DNS response was the result of a valid request.

A more recent and major stage in the evolution of the firewall was the transition from traditional firewalls, designed to protect on-premises data centers, to the cloud or "next-generation" firewall, which is capable of securing modern, cloud-based infrastructure against the current cyber threat landscape.

Traditionally, firewalls were deployed at the network perimeter and performed traffic filtering based upon IP addresses, port numbers, and protocols. These network firewalls were typically designed as standalone appliances that could identify and block attacks targeting anything within the network perimeter. Since organizations operated their own data centers on-site and controlled their network infrastructure, this was a workable approach to network security.

A next-generation firewall incorporates additional features above and beyond those of a traditional firewall, including application inspection, threat prevention, and integrated threat intelligence. A cloud firewall is a next-generation firewall that is designed to protect the modern network, which includes cloud-based infrastructure. Instead of protecting a defined network perimeter that no longer exists in the modern network, cloud firewalls are deployed in cloud environments and protect an organization's cloud-based applications from attack, wherever they are located.

The main difference between traditional firewalls and a next-generation firewall is that a modern firewall provides a range of features above and beyond simple port- and protocol-based traffic inspection. A modern firewall should include certain core functionality that is essential to effectively protecting an organization against cyber threats:

Modern companies face a sophisticated and evolving cyber threat landscape and are protecting growing and diverse network environments. A modern firewall should enable security teams to monitor and manage security across their entire network from a single console.

The modern firewall should provide protection against both basic and advanced cyber threats. This requires both core prevention technology, such as anti-virus, anti-malware, and anti-phishing protection, and the ability to ingest threat intelligence feeds and use them to identify more sophisticated attacks.
Different applications, systems, and users within an organization's network require varying permissions, levels of access, and security policies. A modern firewall should be capable of identifying the application, system, or user associated with a network flow and applying specific security policies based upon this information.

Almost every enterprise is using a multi-cloud deployment, and the majority of these have hybrid cloud infrastructure. A cloud firewall should be capable of enforcing consistent security policies across an organization's entire network infrastructure while enabling the security team to manage these policies from a single console.

Traditional, hardware-based firewalls do not scale well, making it difficult and expensive for an enterprise to adapt to changing conditions. Modern firewalls should be capable of leveraging cloud technology to rapidly scale to meet the evolving needs of the business that they protect.

Firewalls have undergone a number of stages in their evolution from the original, stateless firewall to the modern, cloud firewall. While earlier iterations of the firewall were capable of protecting an organization against the threats of their day, only a next-generation firewall is capable of providing adequate protection against the modern cyber threat landscape.

Using a firewall without the five core features of a next-generation firewall jeopardizes your organization's network security. To learn more about these core features and how to make the right choice when selecting a firewall to protect your company and network against cyberattacks, check out this guide. And of course, you're welcome to contact us or schedule a demo to learn why Check Point firewalls are the best choice for securing your network.
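As a toy illustration of the stateless-versus-stateful distinction described earlier, the sketch below admits a DNS response only if a matching outbound request was recorded first. It is deliberately simplified: packets are reduced to a few fields, and a real firewall tracks far more connection state than this.

```python
# Toy sketch of stateful inspection: a DNS response is allowed only when a matching
# outbound request (same client, server, and transaction ID) was seen first.
outstanding = set()   # (client, server, txn_id) for requests currently in flight

def outbound_dns_request(client, server, txn_id):
    outstanding.add((client, server, txn_id))

def inbound_dns_response(client, server, txn_id):
    key = (client, server, txn_id)
    if key in outstanding:        # response matches a tracked request
        outstanding.discard(key)
        return "ALLOW"
    return "DROP"                 # unsolicited response: possible spoofing/amplification

outbound_dns_request("10.0.0.5", "8.8.8.8", 0x1A2B)
print(inbound_dns_response("10.0.0.5", "8.8.8.8", 0x1A2B))  # ALLOW
print(inbound_dns_response("10.0.0.5", "8.8.8.8", 0x9999))  # DROP
```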
Increasingly, modern technologies are helping people’s secrets move into the public domain. There are many such examples, from massive leaks of personal data to the online appearance of private (and even intimate) photos and messages. This post will leave aside the countless dossiers kept on every citizen in the databases of government and commercial structures — let’s naively assume that this data is reliably protected from prying eyes (although we all know it isn’t). We shall also discard the loss of flash drives, hacker attacks, and other similar (and sadly regular) incidents. For now, we’ll consider only user uploads of data on the Internet. The solution would seem simple — if it’s private, don’t publish it. But people are not fully in control of all of their private data; friends or relatives can also post sensitive information about them, sometimes without their consent. The information that goes public might be close to the bone, quite literally. For example, your DNA might appear online without your knowledge. Online services based on genes and genealogy, such as 23andMe, Ancestry.com, GEDmatch, and MyHeritage, have been gaining in popularity of late (incidentally, MyHeritage suffered a leak quite recently, but that’s a topic for a separate post). Users voluntarily hand over a biomaterial sample to these services (saliva or a smear from the inside of the cheek), on which basis their genetic profile is determined in the lab. This can be used, for example, to trace a person’s ancestry or establish genetic predisposition to certain diseases. Confidentiality is not on the agenda. Genealogical services work by matching profiles with ones already in their database (otherwise, family members will not be found). Users occasionally disclose information about themselves voluntarily for the same reason: so that relatives also using the service can find them. An interesting nuance is that clients of such services simultaneously publish the genealogical information of family members who share their genes. These relatives might not actually want people to track them down, especially based on their DNA. The benefits of genealogical services are undeniable and have resulted in more than a few happy family reunions. However, it should not be forgotten that public genetic databases can be misused. At first glance, the problem of storing genetic information in a public database might seem contrived, with no practical consequences. But the truth is that genealogical services and biomaterial samples (a piece of skin, nail, hair, blood, saliva, etc.) can, under certain circumstances, help identify a person, without so much as a photograph. The reality of the threat was highlighted in a study published in October in the journal Science. One of the authors, Yaniv Erlich, knows firsthand the ins and outs of this industry; he works for MyHeritage, which provides DNA analysis and family tree services. According to the research, roughly 15 million people to date have undergone a genetic test and had a profile created in electronic form (other data indicate that MyHeritage alone has more than 92 million users). Focusing on the United States, the researchers predicted that public genetic data would soon allow any American with European ancestry (a very large proportion of those so far tested) to be identified by their DNA. Note that it makes no difference whether the subject initiated the test or whether it was done by a curious relative. 
To show how easy DNA identification really is, Erlich’s team took the genetic profile of a member of a genome research project, punched it into the database of the GEDmatch service, and within 24 hours had the name of the owner of the DNA sample, writes Nature. The method has also proved useful to law enforcers, who have been able to solve several dead-end cases thanks to genealogical online services. How the DNA chain unmasked a criminal This past spring, after 44 years of unsuccessful searching, a 72-year-old suspect in a series of murders, rapes, and robberies was arrested in California. He was fingered by genealogical information available online. Lab analysis of biomaterial found at the crime scene resulted in a genetic profile that met the requirements of public genealogical services. Acting as regular users, the detectives then ran the file through the GEDmatch database and compiled a list of likely relatives of the criminal. All of the matches — more than a dozen in all — were rather distant relatives (none closer than a second cousin). In other words, these people all had common ancestry with the criminal tracing back to the early nineteenth century. As described by the Washington Post, five genealogists armed with census archives, newspaper obituaries, and other data then proceeded to move from these ancestors forward in time, gradually filling in empty slots in the family tree. A huge circle of distant but living relatives of the perpetrator was formed. Discarding those who did not fit the age, sex, and other criteria, the investigators eventually homed in on the suspect. The detective team then followed him, got hold of an object with a DNA sample on it, and matched it against the material found at the crime scene many years before. The DNA in the samples was the same, and 72-year-old Joseph James DeAngelo was arrested. The case spotlighted the main benefit of genealogical online public services over the DNA databases of law-enforcement agencies from the viewpoint of investigators. The latter databases store information only on criminals, whereas the former are full of noncriminal users who cast a virtual net over their relatives. Now imagine that a person is wanted not by the law, but by a criminal group — maybe an accidental witness or a potential victim. The services are public, so anyone can use them. Not so good. DNA-based searches using public services are still fairly niche. Besides creating genetic profiles, a more common way for well-meaning friends and relatives to inadvertently reveal your whereabouts to criminals, law-enforcement agencies, and the world at large is through the ubiquitous practice of tagging photos, videos, and posts on social media. Even if no ill-wishers are looking for you, these tags can cause embarrassment. Let’s say a carefree lab technician decides to upload photos from a lively staff party and tags everyone in it, including a distinguished professor. The photos immediately and automatically pop up on the latter’s page, undermining his authority in the eyes of students. A careless post such as this could well lead to dismissal or worse for the person tagged. By the way, any information in social networks can readily form the missing link in the type of search described above, using the public databases of genealogical services. How to configure tagging Social networks allow users to control tags and mentions of themselves to varying degrees. 
For example, Facebook and VK.com let you remove tags from photos published by others and limit the circle of people who can tag you or view materials with tags of you. Facebook users can keep the photos they upload from being seen by friends of people tagged in them, and the VK.com privacy settings let users create a white list of users allowed to view photos with tagged individuals. Curiously, Facebook not only encourages users to tag friends through hints generated by face-recognition technology (this feature can be disabled in the account settings), but also helps to control their privacy: The social network sends a notification if that technology spots you in someone else’s pic. As for Instagram, this is what it has to say on the matter: All people, except those you have blocked, can tag you in their photos and videos. That said, the social network lets you choose whether photos with you tagged appear on your profile automatically or only after your approval. You can also specify who can view these posts in your profile. Despite these functions offering partial control over where and when you pop up, the potential threats are still numerous. Even if you slap a ban on people tagging you in pictures, your name (including a link to the page) might still be mentioned in the description or comments on a photo. That means that the photo is still linked to you, and keeping track of such leaks is near impossible. With friends like these Friends and relatives aren’t the only ones who might give away your secrets to third parties. Technologies themselves can also do it, for example, because of the peculiarities of the recommendations system. VK.com suggests friending people with whom users have mutual friends in the social network. Meanwhile, the Facebook algorithm is far more active in its search for candidates, sometimes recommending fellow members of a particular group or community (school, university, organization). In addition, the friend-selection process employs users’ contact information uploaded to Facebook from mobile devices. However, Facebook does not disclose all of the criteria by which its algorithm selects potential friends, and sometimes you may be left guessing about how it knows about your social connections. How does this relate to privacy? Here’s an example. In a particularly awkward case, the system recommended unacquainted patients of a psychiatrist to each other — and one of them even divined what they had in common. Health-related data, especially psychiatric, is among the most sensitive there is. Not many would voluntarily agree to it being stored on social media. Similar cases were cited in a US Senate Committee appeal to Facebook following the Senate hearing in April 2018 on Facebook users’ privacy. In its response, the company did not comment on cases involving patients, listing only the abovementioned sources of information for its friend-suggestion algorithm. The Internet already stores far more social and even biological information about us than we might imagine. And one reason we can’t always control it is simply that we don’t know about it. With the advance of new technologies, it is highly likely that the very concept of private data will soon become a thing of the past — our real and online selves are becoming increasingly intertwined, and any secret on the Internet will be outed sooner or later. 
However, the problem of online privacy has been raised lately at the level of governments worldwide, so perhaps people can still find a way to fence themselves off from nosy outsiders.
Network Video Recorder (NVR)

The Network Video Recorder, also known as the NVR, is another essential element of any IP camera system. Connected to the same IP network, the NVR can be installed virtually anywhere in your building. The NVR allows you to record and store video on a hard drive, snap images, and transmit them to your computer or remote device for live and recorded viewing. Network Video Recorders usually have multiple channels for inputting security camera feeds and are an all-in-one place for combining feeds and keeping a comprehensive eye on everything your surveillance covers. NVRs and DVRs may be placed on a shelf or desk, wall-mounted, or mounted behind a false wall.

NVRs differ from DVRs mainly in that they record video from IP cameras, while DVRs record analog video to a digital format. Standard DVR recorders use coaxial cables, while many NVRs connect through Ethernet cables, such as Cat5e or Cat6.

Which is Better, DVR or NVR?

At the core, both NVRs and DVRs are responsible for video recording. DVR stands for Digital Video Recorder, whereas NVR stands for Network Video Recorder. The difference between NVR and DVR is how they process video data. DVR systems process the video data at the recorder. In contrast, NVR systems encode and process the video data at the camera, then stream it to the NVR recorder, which is used for storage and remote viewing.

As DVRs and NVRs handle the video data differently, they require different types of cameras. Most NVRs are used with IP cameras, whereas DVRs are used with analog cameras. It's important to note that a DVR-based system is a wired security system, whereas NVR systems can be wired or wireless.

DVRs with coaxial cables generally have image quality that deteriorates after around 300 feet. With an NVR system, you can get around this by using a PoE extender, PoE injector, or PoE switch to extend cables over long distances while maintaining high image quality.

NVRs offer high flexibility: connected to the same IP network, they can be installed virtually anywhere in your building. Since NVRs use a software program to automatically record video in a digital format, they can easily transmit data over computer networks and even stream security footage remotely to a mobile device in real time. NVRs are also typically newer and more advanced systems that offer higher video quality, compatibility with more cameras, and more flexible features.

Installing a DVR is the best bet for business security systems with existing coaxial wiring and analog cameras. For commercial security camera systems starting from scratch, NVRs are a great choice, offering higher-resolution IP cameras and remote access to video feeds.

DVR Security System – Pros & Cons

Advances in analog high definition within the last five years have reduced the gap in resolution between the two systems. You'll probably notice that DVR-based security systems are priced lower than NVR systems. The lower price point is an attractive advantage of DVR systems, but what are the tradeoffs? To answer this, we need to break down each of the components of a DVR system.

Camera Type – Analog

The cameras used by a DVR system must be analog security cameras, better known as CCTV cameras. Most of the cost savings found by using a DVR system are due to the camera. While you can mix and match cameras in your property security system, there is less flexibility in the type of cameras you can use with DVR systems.
In a DVR system, the analog cameras stream an analog signal to the recorder, which processes the images. The advantage of this system is the reduced complexity required of the camera compared to an NVR system.

Cable – Coaxial BNC Cable

The camera connects to the DVR recorder via a coaxial BNC cable. Although the use of coaxial cable may not seem significant, it does have some limitations:
- As the coaxial cable doesn't provide power to the camera, two wires are included within one covering – a power wire and a video wire. The wires separate at each end to serve their separate functions. As such, you'll need to install your DVR recorder near a power outlet.
- The size and rigidity of coaxial cables can make installation more challenging. Coaxial cable is wider in diameter than the Ethernet cables used with NVR systems, making it more challenging to run lines in tight spaces. Coaxial cables also tend to be more rigid, compounding this problem.
- However, if your property has existing coaxial connections from a previous security system, you can use the same cable to connect your new system.
- Standard coax cables do not support audio. A variant with an added RCA connection is needed. Still, a DVR has a limited number of audio input ports, so only a small number of cameras can record audio.
- The image quality on the coaxial cable will begin to degrade after about 300 ft/90 m, limiting the ability to extend your security presence outward. Lower-quality cable will result in signal loss at shorter distances.

DVR recorders rely on a hardware chipset known as an AD encoder, which is responsible for processing the raw data streaming from the camera into legible video recordings. DVR systems also have different requirements when it comes to the recorder. Specifically, in a DVR system, the user must connect every camera directly to the recorder. In comparison, an NVR system only requires that each camera connects to the same network. Also, in a DVR system, the recorder doesn't provide power to the cameras. Each camera connection will need a splitter that supplies power to enable the cameras to function.

DVR security systems are less flexible than their NVR counterparts in terms of camera type and mounting options. Whereas NVR-based systems can integrate wired and wireless security cameras, DVR systems can only use wired security cameras. DVR systems also have less flexible mounting solutions because routing coaxial cable can be more difficult in tight situations. A power outlet is required for each camera.

Image & Audio Quality

As we've discussed, in DVR systems the cameras transmit analog video via the coax cable directly to the recorder, and images are processed at the recorder level. The analog signal results in a lower-quality image compared to NVR systems. Coaxial cables also don't natively transmit an audio signal, and DVR recorders usually have a limited number of audio input ports.

NVR Security System – Pros & Cons

NVR security camera systems incorporate the newest technology to provide an enhanced, feature-rich security system. Also known as PoE security camera systems, NVR-based systems are more flexible and complex than DVR systems.

Camera Type – IP Camera

As NVR systems process the video data at the camera rather than on the recorder, the cameras in NVR systems are much more robust than their DVR counterparts. NVR systems use IP cameras, which are standalone image-capturing devices. IP cameras each have a chipset capable of processing the video data, which is then transmitted to a recorder.
Unlike analog cameras, IP cameras are typically all capable of recording and sending audio and video. The more powerful hardware on IP cameras also enables improved smart functionality and video analytics, such as facial recognition.

Cable – Ethernet

Like DVR systems, NVR systems connect the camera to the recorder. However, how they connect is entirely different. NVR systems use standard Ethernet cables, such as Cat5e and Cat6, to transmit data. Professional installers prefer Ethernet cables due to a number of advantages compared to coaxial cables:
- Ethernet cable powers the camera using Power over Ethernet (PoE). That means your camera needs only one cable to carry video, audio, power, and control, eliminating the need for the messy splitters a DVR system requires.
- Ethernet cable tends to be easier to route and terminate because it is thinner and has a smaller connector, allowing for less drilling.
- Ethernet is cheaper than coaxial cable and much more readily available, making cable replacement or system expansion more accessible and affordable. Many modern homes and businesses are being built wired for Ethernet, making installation even easier.
- An added advantage of Ethernet cable is that every camera on the system can transmit audio, since Ethernet can send audio data natively.
- For wireless systems, cables do not need to run between every camera and the recorder; the devices only need to be on the same wireless network. Installation is more straightforward and cleaner as multiple cables aren't required.
- Despite a shorter maximum Ethernet cable length of 328 ft (100 m), network switches can extend the total distance without impacting image quality.

Unlike a DVR system, the recorder in an NVR system doesn't process video data. That step is completed at the camera before it is transmitted. NVR recorders are only used for storing and viewing the footage.

NVR systems are inherently more flexible because security cameras don't necessarily have to be physically connected directly to the recorder. Instead, IP cameras only have to be on the same network. As such, you could feasibly have cameras around the world on the same network, all connecting to your NVR, which can then be viewed as one comprehensive system.

Image & Audio Quality

As NVR recorders receive a pure digital signal from the cameras, video quality is better than a DVR at the same resolution. In addition, as Ethernet cables carry audio, all cameras with microphones can record audio to the NVR.

An NVR makes it easy to record video surveillance footage, but you will need connected hard drives to store this footage. Choosing the right amount of storage for your surveillance camera installation can seem like a confusing gamble, but it doesn't have to be. It's simply a matter of multiplying the length of video you need to keep by the bitrate and resolution your cameras shoot at, as the rough calculation sketched below shows. When recording 4K security camera video, this can end up being a large number requiring terabytes of storage. For lesser archival needs, you can usually get away with much less.
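As a rough back-of-the-envelope version of that calculation, the sketch below multiplies bitrate, camera count, and retention period to estimate storage. The bitrate and camera figures are made-up examples; real numbers depend heavily on codec, resolution, motion, and whether you record continuously or only on motion events.

```python
# Rough sketch: estimate surveillance storage from bitrate, camera count, and retention.
# Example values are placeholders; real bitrates vary with codec, resolution, and motion.
def storage_terabytes(bitrate_mbps: float, cameras: int, hours_per_day: float, days: int) -> float:
    seconds = hours_per_day * 3600 * days
    total_megabits = bitrate_mbps * seconds * cameras
    total_megabytes = total_megabits / 8
    return total_megabytes / 1_000_000   # decimal terabytes

# e.g. eight 4K cameras at ~16 Mbps each, recording 24/7 for 30 days
print(round(storage_terabytes(16.0, 8, 24, 30), 1), "TB")   # about 41.5 TB
```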
What is Random Number Generation (RNG) and how does it affect video games? Here's everything you need to know about random number generators (RNGs) and how they're utilised in games.

Have you ever felt daunted by the sheer quantity of acronyms in the gaming world? There's a lot to remember, from genres to technical jargon. What is RNG, for example? We will define RNG in the context of video games in this post. We'll look at what RNG means, look at various examples, and see how it relates to speedrunning.

What Is Random Number Generation (RNG)?

The term "random number generator" refers to a computer programme that generates random numbers. A random number generator is a device or algorithm that generates numbers by chance. RNG, in game terms, refers to events that do not repeat themselves every time you play.

While it may appear to be straightforward, computers have difficulty generating random numbers. This is because computers are programmed to obey instructions, which is the polar opposite of randomness. Giving instructions on how to choose something randomly is an oxymoron, so you can't just order a machine to "come up with a random number."

Pseudo-RNG and True RNG

As a result, if you want to generate a truly random number with a computer, you'll need to use a hardware random number generator. To generate random numbers, this method employs minute physical phenomena such as electronic noise. Because no one can forecast or know what is going on with these undetectable events, they are as random as possible. This type of RNG is essential in security-centric systems, which is why it's used in so many encryption schemes. It would be a major problem if someone could figure out how the system generated "random" integers for an encryption protocol.

With game RNG, though, this isn't an issue. Many programs, especially games, employ pseudorandom number generation for increased speed and reproducibility. Pseudo-RNG generates a random number using an algorithm (think of it as a formula) that performs mathematical operations on a seed (beginning) value. It's critical to select a seed that's as random as possible to get different results each time. A simple example would be to use the current milliseconds as a seed and then conduct the following operation on it:

int rand = (a * milliseconds + b) % c

Because the same seed produces the same outcome every time, this isn't truly random. But it'll do for video games.

In gaming, RNG is used in a variety of ways. Now that you're familiar with RNG's mechanics, let's examine some examples of RNG in games to understand how it functions.

In loot-focused games like Destiny, Borderlands, and Diablo, RNG plays a significant part. The reward you receive when you unlock a treasure chest or beat an adversary is not always the same. Because the system chooses it at random each time, you can get lucky and acquire a super-rare item right away, or you might get a low-level piece of armour over and over.

Of course, loot drops aren't fully random, in order to keep the game balanced. Developers set up procedures to keep you from receiving the best weapon in the game from the first treasure chest you open. Each game handles this differently, for example by limiting the items you get depending on your player level. One of the reasons video games are so addictive is the urge to constantly acquire better loot.

Using RNG to Calculate Chance Percentages

RNG is used in many games to determine the probability of a specific event occurring.
This is very prevalent in role-playing games (RPGs). When you attack in JRPGs like Persona 5 or Chrono Trigger, for example, you could get a critical hit, which deals extra damage. This occurs at random in many games, though you can boost your chances by using specific items. RNG also controls how often wild Pokémon battles occur and which creatures you encounter in a Pokémon game.

Similar examples can be found in games such as Super Smash Bros. Mr. Game & Watch features a move called Judge that, when used, displays a number from one to nine. The value is determined by RNG each time the move is used, with the exception that you cannot get the same number twice in a row.

RNG-Based Procedural Generation

The random number generator is at the heart of procedural generation, a popular gaming trend. The term "procedural generation" refers to the method of developing game content using an algorithm rather than by hand.

Minecraft and Spelunky are two well-known games that use procedural generation. These games use a seed value to create distinct worlds, ensuring that each player has a different experience each time they play. As with other types of RNG, game developers impose constraints so that worlds aren't generated wholly at random. In Minecraft, for example, you won't discover random floating ground blocks above an ocean.

Randomness in Speedrunning

There's a chance you've heard of RNG in video games when it comes to speedrunning. Because speedrunners want to finish a game as quickly as possible, they put in a lot of practise time to learn the game inside and out. Understandably, RNG adds an element of uncertainty to speedruns.

RNG isn't necessarily negative in speedruns, even though it frequently causes problems. Because it's easier to improve your time with a little luck, having some variation between runs can make the game more enticing.

RNG for Minor to Moderate Speedruns

RNG can be largely irrelevant to a run at times. The precise location of foes in a room, or whether you receive a critical hit in a typical encounter, will have little impact on your overall time.

RNG can also cause severe slowdowns in other situations. In Super Mario Sunshine, for example, the boss battle against King Boo involves spinning a slot machine with five different results. You must first match three pineapple images to produce a variety of fruits before you can injure him. After that, toss a pepper at him to set his tongue on fire, then hit him with any other fruit you can find. With favourable RNG, the peppers will emerge in a convenient spot, allowing you to finish the fight swiftly. However, if you have bad luck, you may have to spin the roulette wheel several times and waste time.

Random Number Generation That Breaks Runs

In other games, getting unlucky with the RNG can completely derail a run. Banjo-Kazooie is a good example of this. You enter a quiz show called Grunty's Furnace Fun near the end of the game, where you are tested with questions concerning your trip. The game's villain, Gruntilda, is the subject of one type of square on the board. Brentilda, her sister, will provide you with the answers to these questions if you speak with her during the game.

On each replay, however, the right answers to these questions are different. Because speedrunners don't want to waste time conversing with Brentilda, they must make educated guesses at this stage. This is entirely dependent on chance; if they choose the incorrect answer too many times, they may die and lose a significant amount of time.
For a speedrun, having this RNG-dependent portion at the end of the game is frustrating, because the runner has no choice but to hope for the best. Despite this, as shown in the video below, speedrunners frequently devise ways to circumvent these hurdles.

Manipulation of the Random Number Generator

As previously stated, the pseudo-RNG in video games isn't genuinely random, because the same seed can be used to repeat the same outcomes. In many games the seed is tied to an internal timer, which can be tough to manipulate. Other games, however, are considerably easier to tinker with.

Golden Sun, a Game Boy Advance RPG, is an excellent illustration of this. The rewards you receive for completing a battle are determined by the actions you took throughout that battle. This means that if you fight the same foes with the same techniques every time, you'll get the same drops every time.

Emulation tools can be used by speedrunners to analyse a game and see if they can alter the RNG. They can then take advantage of it by ensuring a desired outcome or reducing RNG-related time waste. To the average player who doesn't understand the seed, on the other hand, this type of RNG is as good as random.

You now have a better understanding of the role of RNG in gaming. We've discussed what RNG is, how it influences games, and how it relates to speedruns. In video games, too much randomness can lead to feelings of despair or futility, resulting in "gaming weariness" or "gaming burnout." If you're experiencing this, you might want to look at ways to deal with gaming tiredness and burnout. Now is the time to look at some of our favorite video game speedruns.
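To make the seed idea from the manipulation section concrete, here is a minimal sketch of a seeded pseudo-RNG in the style of the (a * milliseconds + b) % c formula quoted earlier. The constants are arbitrary illustration values, not those used by any particular game or library; the point is simply that the same seed reproduces the same "random" sequence.

```python
# Minimal sketch of a seeded pseudo-RNG (linear congruential style).
# Constants are arbitrary illustration values; real generators choose them carefully.
class PseudoRNG:
    def __init__(self, seed: int):
        self.state = seed            # e.g. the current time in milliseconds

    def next_int(self, upper: int) -> int:
        a, b, c = 1103515245, 12345, 2**31
        self.state = (a * self.state + b) % c
        return self.state % upper    # map the raw state into the desired range

run1 = PseudoRNG(seed=987654321)
run2 = PseudoRNG(seed=987654321)
print([run1.next_int(100) for _ in range(5)])  # same seed ...
print([run2.next_int(100) for _ in range(5)])  # ... identical "random" sequence
```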
Find out more about how to learn and use the advanced vocabulary word "efficacy" as part of your advanced English vocabulary builder goals: watch https://youtu.be/2Ot3GqurIfU

A basic definition of efficacy is the degree to which something succeeds in producing a desired effect or result. Learn a new word today: efficacy, and its adjective form, efficacious.

To help you communicate more effectively, add Grammarly's free online writing assistant to your desktop to write with confidence and ensure that your grammar, spelling, and punctuation are mistake-free. To learn more about what Grammarly can do for you: https://grammarly.go2cloud.org/SH2i1
Humans can retain just 7 (plus or minus 2) items in short-term memory. Yet Homer’s Odyssey and Iliad were recited long before they were written down. We carry on conversations that take place over hours. We write long essays. We deliver speeches. We rocket to the moon. We perform hours-long operas. If we carry only 5-9 items in our short-term memory, how is that even possible? This limit was explored and quantified in the late 1950s by George A. Miller of Harvard University. As a consequence of his research, he was able to identify the 5-9 capacity range in his descriptive paper, The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information (1956). His “Magical Number” findings became known as Miller’s Law. Psychologists and neurologists have discovered a wide range of strategies and techniques humans have developed to circumvent the limitation of 5-9 items. One adaption is that we break things into chunks. A home phone number of (555) 374-6377 might look like a random string of 10 items. But not to our brains. It breaks this into three easy-to-remember chunks: the area code, the three-digit prefix, and the four-digit number. By “chunking,” we get around the Magical Number constraint. Humans do amazing things that defy biological constraint. And technology like Entefy provides the tools to do even more. The next time you’re singing along to your favorite song, look for signs of how your brain is using the Magical Number to help you remember all of those lyrics. Watch the video version of this enFact here.
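As a toy illustration of how chunking cuts the number of items to hold in mind, the little sketch below regroups the ten digits of that phone number into the three familiar chunks; it is only a demonstration of the idea, not a model of human memory.

```python
# Toy illustration of chunking: ten separate digits become three memorable groups.
digits = "5553746377"
chunks = [digits[:3], digits[3:6], digits[6:]]
print(f"({chunks[0]}) {chunks[1]}-{chunks[2]}")           # (555) 374-6377
print(len(digits), "items regrouped into", len(chunks), "chunks")
```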
<urn:uuid:f35123f9-ebac-42d7-86cb-0d340e9e2a5e>
CC-MAIN-2022-40
https://www.entefy.com/blog/magic-number-7-more-or-less/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00239.warc.gz
en
0.945407
365
3.078125
3
In July of 2016, a refrigerator truck packed with explosives detonated next to a crowded apartment block in Baghdad’s Karrada neighborhood. The blast killed 323 people and was one of the worst Vehicle–Borne Improvised Explosive Device (VBIED also known as car bombs) attacks ever recorded. On May 30, 2017, a VBIED in a tanker truck ripped through the embassy quarter of Kabul, killing more than 150 people. Several embassies, including those of Germany and France, sustained damage despite the presence of blast protection structures. In recent years, several massive VBIEDs (also known as car bombs) have been thwarted by local security forces throughout hotspots in the Middle East and Asia, and by U.S. coalition forces in Afghanistan. VBIEDs continue to pose a real and evolving threat to even the most secure compounds. (At least 90 people were killed and around 400 were injured in a suicide bombing. The blast hit close to the German embassy, and not far from Afghan government buildings. Courtesy of CBS Evening News. Posted on May 31, 2017) The Explosives Division (EXD) of the Department of Homeland Security’s (DHS) Science and Technology Directorate (S&T) has taken measures to address this threat directly. EXD’s Homemade Explosives (HME) program conducts Large–Scale VBIED testing to mitigate the threat posed by massive car bombs and to ensure such attacks do not occur in the U.S. This program is part of S&T’s Homeland Security Advanced Research Projects Agency. Recently, S&T EXD conducted a series of explosives tests with varying charge sizes to learn more about mitigating these threats, based on the size and composition of the explosive device. These large-scale explosives tests, conducted at Fort Polk, Louisiana, brought together the HME preparation expertise of the U.S. Naval Surface Warfare Center’s (NSWC) Indian Head facility and the live fire testing capability of the U.S. Army Corps of Engineers’ Engineering, Research, and Development Center in Vicksburg, Mississippi. “Due to the wide variety of types of and materials used to make improvised explosives, we often must use simulations to model the behavior of large scale events,” according to HME Deputy Program Manager Dave Hernandez. “When current methods are no longer effective, we have to conduct controlled real-life events to discover new ways of combatting emerging trends in explosives.” The data from the Fort Polk tests will allow us to understand the damage that different types of HME mixes can inflict. Such information on large-scale detonations could not be accurately calculated before these tests were conducted. This information will facilitate the development of new mitigation techniques for larger-scale explosions. The results of these tests will also be detailed in a report for stakeholders and archived for future reference and distribution by the program office. “The S&T HME Characterization program informs the explosives community on current material threats, explosive characteristics, and any potential data enabling mitigation measures such as the development of detection technologies and support those responsible for safety and security in the blast communities in order to prepare for, and prevent, such an attack in the United States”, said HME Program Manager Elizabeth Obregon. 
“The information generated from this testing will aid the Department of Defense and law enforcement communities by revealing data on the impact of a large–scale VBIED, enabling better protection for vulnerable targets.” “As the HME threat is constantly changing, a continued effort in this area is required in order to provide timely information to those organizations conducting analysis and acquisitions,” Obregon concluded. Reflecting the DHS “Unity of Effort” goal, these tests included participants from the National Ground Intelligence Center, ERDC, U.S. Navy, Combating Terrorism Technical Support Office, ATF, the Defense Threat Reduction Agency, the Department of State, and other U.S. agencies. The program has also gained visibility within the blast mitigation and effects community as well as the international community.
<urn:uuid:06cac8b7-42b1-499a-ae3b-67e647cea833>
CC-MAIN-2022-40
https://americansecuritytoday.com/dhs-works-mitigate-large-scale-car-bomb-attacks-us/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00239.warc.gz
en
0.949198
855
2.765625
3
Installers and integrators working with intruder alarm systems will be aware of the Grading system set out in BS EN 50131-1. The Grading approach allows alarm systems to be categorised based upon the level of threat they are likely to face. It is important to remember that the lowest Grade device used sets the level for the entire system. Therefore, a system using Grade 3 components, but which has one Grade 2 device, can only achieve Grade 2. It is therefore important that all devices meet Grade 3, including all detectors.

Grade 3 detectors are required to be able to signal masking attempts. Anti-masking technology is used to detect any attempts to mask the lens of the detector while the system is unset. Masking may be due to a deliberate attempt to block the field of view, thus allowing intruders to go undetected when the system is set. Alternatively, detectors can be masked accidentally; for example, boxes in a warehouse might be stacked in front of the device, preventing it from operating correctly. When a masking attempt is detected, a masking fault signal is sent to the control panel. As a result, the system cannot be set until the masking fault is cleared. This ensures the system operates as expected whenever it is set.

Masking takes many forms. Deliberate masking attempts can include the detector lens being covered with various materials to prevent the detection of infrared activity, or might include spraying or painting the lens to prevent correct operation. Covering a detector lens while the system is unset is a rare occurrence but is addressed because Grade 3 is specified for medium to high-risk sites, where 'intruders are expected to be conversant with the alarm system and have a comprehensive range of tools and portable electronic equipment'.

When detecting masking, there are two technologies which can be used. The first, microwave, uses the Doppler effect to detect the presence of a mask. It does this by creating a microwave bubble in front of the detector unit. Microwave detection does detect masking and is a common approach as it is a relatively low-cost implementation for manufacturers. However, microwave isn't always consistent when used close to the source. This can result in missed masking attempts, especially if the intruder uses one of the more subtle masking methods. The other technology is active infrared detection. This is more accurate and detects most masking attempts. It carries a higher cost for manufacturers but can be susceptible to white light interference.

To deliver the best level of anti-mask detection, Texecom's new Capture Grade 3 detectors use a combination of active IR using a transmitter and receiver pair, and microwave detection when sensing masking attempts. This ensures high levels of accuracy without the false alarms which can occur when the individual technologies are used alone.
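Two of the points above lend themselves to a small illustration. The Python sketch below is purely hypothetical (it is not vendor firmware, and the "both channels must agree" rule is an assumption about one plausible way to combine the technologies, not Texecom's published logic): it shows the "lowest Grade sets the system Grade" rule and how requiring two independent detection channels to agree suppresses single-sensor false alarms.

```python
def system_grade(device_grades):
    """Per BS EN 50131-1, a system can only claim the grade of its lowest-graded device."""
    return min(device_grades)

def masking_fault(ir_mask_detected, microwave_mask_detected):
    """Assumed combination rule: raise a masking fault only when both the active-IR
    and microwave channels agree, so a spurious event on one channel alone is ignored."""
    return ir_mask_detected and microwave_mask_detected

print(system_grade([3, 3, 2, 3]))      # -> 2: a single Grade 2 device caps the whole system
print(masking_fault(True, False))      # -> False: single-channel event, no fault raised
print(masking_fault(True, True))       # -> True: both channels agree, fault signalled to the panel
```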
<urn:uuid:6a51ab5f-bcb3-4dde-9f64-4340ef78b032>
CC-MAIN-2022-40
https://benchmarkmagazine.com/getting-the-right-performance-from-anti-masking-detection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00439.warc.gz
en
0.926574
578
2.921875
3
Given the global public debate over online privacy and mounting concerns that big tech giants are tracking and monitoring children, the attention of lawmakers in both the U.S. and Britain is shifting to children's privacy online. In the U.S., Senator Ed Markey is leading a bipartisan and bicameral push to update the Children's Online Privacy Protection Act (COPPA) in order to make it harder for tech giants to collect personal data and location information from children younger than age 15. And in the UK, the Information Commissioner's Office (ICO) has published a 16-point draft code of practice for children's privacy. Together, these two moves represent a fundamental shift in the debate over online privacy and positive momentum in the move towards stronger data protection laws.

Children's privacy and COPPA 2.0

Citing the need for stronger and more effective protections online, Senators Ed Markey (Democrat – Massachusetts) and Josh Hawley (Republican – Missouri) have introduced new bipartisan legislation that updates the Children's Online Privacy Protection Act (COPPA), one of the few pieces of federal privacy legislation in the United States. Calling this new legislation for protecting children an "accompanying bill of rights" to COPPA, the two senators have set out new rules that will make it much harder for tech companies to track teens and young children. The original COPPA was authored back in 1998, during the early days of the modern Internet. This "COPPA 2.0" updates the legislation for the 21st century and the emergence of new innovations like Facebook and other social media platforms that rely on the collection of personal information.

First and most importantly, this new bicameral COPPA 2.0 legislation will extend privacy protections for the first time to teenagers aged 13 to 15. Tech companies will not be able to collect personal information from these teens without their consent. Moreover, the legislation will prohibit tech companies from collecting personal information from children under 13 without parental consent. According to Senator Markey, tech companies are getting so sophisticated at getting kids to use their products and services that there need to be much stronger safeguards built into the online experience to avoid children's personal data falling into the wrong hands.

One innovation within the legislation is a so-called "Eraser Button" that follows through on the "right to be forgotten." The Eraser Button makes it possible to delete all personal information with a single click. Another innovation is the establishment of a new Youth Privacy and Marketing Division at the Federal Trade Commission (FTC), which will be responsible for making sure that tech companies are not tracking adolescents' every move online, and that they are not improperly targeting young children with their advertising. A new Digital Marketing Bill of Rights will limit the collection of young children's personal information and, in fact, will ban all targeted advertising aimed at kids. Marketing directed at children will no longer be tolerated.

This new legislation also makes an effort to regulate the use of Internet-connected toys and devices. It will prohibit the sale of Internet-connected devices for children or minors unless they adhere to "robust" cyber security standards. Moreover, packaging for these toys and devices must clearly state what type of data is being collected and how it is being used.
In today’s 24/7 digital world, the reality is that kids are using the Internet to do homework, chat with friends and play games. And, until recently, big tech companies were leveraging this fact in order to collect data on children and minors, much as they collect data on adults. The youngest are bombarded with marketing messages and potentially deceptive practices that encourage them to turn off privacy settings. The new legislation will put an end to this practice. Not surprisingly, then, this new piece of legislation has the full support of privacy advocacy organizations, such as the Center for Digital Democracy. Children’s privacy and new UK child protection guidelines In the UK, too, attention is turning to the world of children’s privacy. As called for by the Data Protection Act of 2018, the UK Information Commissioner’s Office (ICO) has published a 16-point draft code of practice related to children’s privacy that builds on the framework of existing data protection laws. The 16 standards will be under public consultation until May 31, and a final version of this code of practice is expected to come into effect by the end of the year. The 16 standards represent a sort of “best-in-class” approach to dealing with children’s privacy. As such, these standards cover not only online services such as social media platforms, but also the world of connected toys and digital devices. In general, they take a very strong stance on children’s personal data. For example, all makers of connected devices should assume that their products will be used by children, unless they are able to integrate specific age authentication features directly into the product. Many of these standards are common sense, such as the suggestion that all children’s privacy settings should come with a default setting of “high privacy.” With the goal of protecting children and ensuring children’s privacy, the ICO suggests that there should be no third-party data sharing and that all geolocation tracking features should be turned off. In general, online services must meet very rigorous standards when it comes to children’s privacy. The goal is to meet and exceed all standards of the European General Data Protection Regulation (GDPR), the touchstone data protection act that went into effect in 2018 and that provides for stiff penalties for improperly processing personal data. One potentially controversial aspect of the new standards involves the clamping down on so-called “nudge techniques” that are designed to get children to share more of their personal data than they would like, or to encourage them to turn off data privacy settings. As CNBC highlighted, a strict interpretation of this nudge term would include the “like” button found on most social media platforms. After all, every “like” is a new data point used by a company to develop a sophisticated profile of a user. This could potentially present a thorny problem for Facebook, which might have to disable the “like” button across wide swathes of the platform. Protecting children is the new priority Clearly, around the world, data privacy has become such a mainstream topic of discussion that just about everybody can agree that devices or online platforms accessed by children need to take into account children’s privacy. In 2019 and beyond, look for other nations to follow the lead of the U.S. and UK and take steps to beef up their data protection laws to protect the interests of the child.
<urn:uuid:462b2ece-7982-4340-ade5-637647917dac>
CC-MAIN-2022-40
https://www.cpomagazine.com/data-protection/lawmakers-in-the-u-s-and-uk-turn-their-attention-to-childrens-privacy-online/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00439.warc.gz
en
0.945068
1,391
2.625
3
A hoax (to trick into believing or accepting as genuine something false and often preposterous) is the word we use for a fake warning. Since hoaxes are not only annoying and confusing, but sometimes even potentially harmful, they deserve some attention. Hoaxes started out as emails, and the idea was the same as the one behind chain letters. I used to call them "lazy viruses" since they depend on the receiver to spread the hoax further among their contacts. Nowadays hoaxes are most active on social media, especially Facebook. This has considerably increased the speed with which they spread.

How can we recognize a hoax?

The first thing that should ring an alarm bell is the request to forward the message to all your friends. A hoax will always say that it is important and try to convince you that it is not a joke, usually by claiming that someone knowledgeable that they know (but you don't) confirmed the information in the hoax. In a mail you can tell by the number of forwards (lots of Fwd: in the title or >>> in the body) how many times a message has been forwarded before it reached you. Many of those forwards will tell you that the message is no longer fresh and the claimed problem not that acute.

Here are a number of points in the content that will help you recognize hoaxes. Not all of these have to be true, but usually most will be.

- A successful hoax will first try to get you interested, then reveal some kind of threat and then ask you to do something about it. The threat can be aimed at you or your computer, but it can also aim to make you feel bad for not participating or look stupid for not grabbing the given opportunity.
- The cure will always include you forwarding the message to everyone you know.
- A specific date when something happened or when the counter-action needs to be done is hardly ever mentioned; usually vague terms like "yesterday" or "last Friday" are used instead.
- A source where you can countercheck the alleged subject is never given, or only in vague terms, like "Adobe is still working on a solution", or "Fox News reported it was terrible".

What is the goal?

Sometimes it is just the fun of playing a trick on someone, but hoax mails could be used by spammers to gather email addresses. Since many people do not use the BCC option when forwarding these mails, and the mail itself often travels in circles, spammers would be able to obtain many valid email addresses. Another goal could be to hurt certain people. There are many hoaxes on Facebook claiming that you should not add certain people to your friend list because you would get infected. These claims are obviously untrue. This type of warning should not be reposted.

And sometimes a hoax is spread to make a point. Consider for example the supposed move of "The Pirate Bay" to North Korea. They announced this was a hoax a few days later with the statement: "We've hopefully made clear (once again) that we don't run TPB to make money. A profit hungry idiot (points at MAFIAA with a retractable baton) doesn't tell the world that they have partnered with the most hated dictatorship in the world. We can play that stunt though, cause we're still only in it for the f*in lulz and it doesn't matter to us if thousands of users disband the ship. We've also learned that many of you need to be more critical. Even towards us. You can't seriously cheer the "fact" that we moved our servers to bloody North Korea." Their point was that they felt they were threatened in the "free world". They went to great lengths to make this hoax look real.
They even made it look like their IP was in North Korea and changed their logo.

Pirate Bay logo with the Korean flag on the sail

How can hoaxes be harmful to your computer?

The most well-known example was a hoax that circulated for years about the Bugbear virus. It urged people to remove the file JDBGMGR.EXE, a normal Windows file that happened to have an icon shaped like a bear. Using this method more serious damage could be done if the named file was more crucial than this Java debugger. And the fun for the source of the hoax would be the complaints on the internet by users who had compromised their own system. Besides, what is the point of warning your contacts about a virus in this way? Advise them to install a decent AV plus Malwarebytes Anti-Malware and to keep those updated. That helps a lot more. So if you have received a warning about a virus from someone, please do not send it to other people just like that. Make sure that it is a legitimate warning, or otherwise contact someone who may be able to give you further advice on the message and the (possible) virus. This way we will work towards eradicating hoaxes from the Internet.

Hoaxes can even be harmful to your health. For example, I recently read a hoax providing false information about removing ticks, advising people to use soap and water rather than tweezers. This hoax is a perfect example of how they resurface after a while. I saw it in English at the beginning of 2011 and came across it in Dutch a few days ago. Personally, I don't ask the poster to remove it. I think it is better to add a warning that it is a hoax. And if the poster doesn't, then I will, especially in the cases where the hoax can potentially do harm.

Summary: don't just forward anything without checking the validity of the statement. If you investigate the matter and you are still not sure, it is better to refrain from spreading it further. If there is validity behind the information, spreading it to your friends via a new and fresh e-mail or post, along with sources, could alleviate any concerns they have about its legitimacy.
<urn:uuid:91bbd32c-12ec-4271-b040-7ddc6ea60615>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2013/03/hoaxes
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00439.warc.gz
en
0.973447
1,235
3.125
3
(Sponsored Content) By its very nature, high performance computing is conflicted. On the one hand, HPC belongs on the bleeding edge, trying out new ideas and testing new technologies. That is its job, after all. But at the same time, HPC centers have to compete for budget dollars and they have to get actual science done, and that means they also have to mitigate risk just like every other kind of IT organization. What this has meant, historically, is that HPC centers have had a rich tradition of experimentation that is not unlike – but which definitely pre-dates – the fast-fail methodology espoused more recently and more famously by the big hyperscalers that dominate the Internet with their services. Whether HPC or hyperscale, the effect is the same during a technology transition. When an idea works, it is quickly tweaked across many different architectures and vendors and before you know it, the way HPC is done radically and swiftly changes. To simplify quite a bit, this is how the industry moved from proprietary vector supercomputers to parallel clustered RISC/Unix NUMA machines to massively parallel systems based on generic X86 processors running Linux and the Message Passing Interface (MPI) to allow nodes in clusters to share data and therefore scale workloads across ever larger compute complexes. It is also how we have come full spiral back around again to GPU-accelerated vector compute engines, which mix the best of the serial CPU and the parallel GPU to create a hybrid that can push the performance envelope for many simulation and modeling applications. The first time that AMD broke into the bigtime with server processors, back in the early 2000s with its “Hammer” family of Opteron CPUs, HPC customers were on the front end of the adoption wave because of the substantial benefits that the Opteron architecture offered over the Intel Xeon processors of the time. The Opteron chip had fast HyperTransport pipes to link CPUs together in NUMA shared memory clusters and to hook peripherals, such as high-speed interconnects from Cray, into their compute complexes. Opteron was also architected from the ground up to have 64-bit memory addressing (thus breaking the 4 GB memory barrier of the 32-bit Xeons of the time). And it was designed to scale up inside the socket, with multiple cores on a die to drive up performance per socket when Moore’s Law hit its first speed bump. Every chip maker hits its own speed bumps, and there are no exceptions. In the mid-1990s, Intel was working to extend the Xeon line with 64-bit memory addressing when it decided instead to partner with Hewlett Packard to create the Itanium chip which had a radically different architecture that, in many ways, was superior to the X86 architecture that had come to dominate the datacenter. We could go on at length about the Itanium strategy and products, but suffice it to say that Intel’s bifurcated strategy of keeping Xeons at 32-bits and pushing Itanium for 64-bit processing and memory addressing left a big gap for AMD Opterons to walk through right into the datacenter. And while the Itanium enjoyed some success in HPC, thanks in large part to Hewlett Packard and SGI, it was the Opterons that took off, eventually powering a quarter of the bi-annual TOP500 rankings of supercomputers. History never precisely repeats itself, but it does spiral around in a widening gyre. 
Now, some ten years after AMD ran into issues with its Opteron line, it is Intel that is struggling – in this case, both with architectures for HPC systems (the Knights family that was aimed at HPC and AI workloads has been shut down, as has been Itanium finally) and with its vaunted manufacturing capability to push the Moore’s Law curve stalled due to substantial delays in rolling out its 10 nanometer wafer etching processes. And if all goes according to plan, AMD looks to capitalize on Intel’s woes with its next generation Epyc processors, codenamed “Rome,” due in the middle of this year. It has taken more than six years to get to this point for AMD, which exited the server field of combat a decade ago and which has spent the last two years talking up (and selling) its first generation Epyc processors, formerly codenamed “Naples,” and building credibility with public cloud, hyperscaler, enterprise, and HPC customers alike. To re-establish trust with both the OEMs and ODMs who design and build servers and the customers who buy them takes time, and that means setting performance and delivery targets and hitting them. To its credit, AMD has thus far kept to the roadmap it laid out two years ago with the Epyc processors, and confidence in these chips is building just as the company is preparing its assault on the HPC space. A lot of things, not the least of which being the 7 nanometer manufacturing processes from Taiwan Semiconductor Manufacturing Corp, which is now etching AMD’s server CPU and GPU processors, are coming together to give AMD a very good chance to revitalize its HPC business. “Last year was an important time in the ramp of Epyc,” Daniel Bounds, senior director of data center solutions at AMD, tells The Next Platform. “From cloud builders and hyperscalers to high performance computing, the value of Epyc caught hold and helped those organizations raise the bar on the performance they can and should expect from a datacenter-class CPU. There is such a stark difference in the economies of scale compared to what the competition can bring. For HPC, there are two camps. The first deploys memory bandwidth-starved codes, which is where the current generation of Epyc really shines giving customers a massive advantage either with the capability of more performance per rack unit or equivalent level of performance with less gear. With codes like CFD and WRF, you can really think about hardware configuration differently and increase the effectiveness of every IT dollar spent. The second camp has been split between taking advantage of current offerings like the Epyc 7371, which has a killer combo of high frequency cores at very reasonable price points or waiting for the next generation 7 nanometer Epyc product due out mid-year which we expect will bring 4X the floating point performance.” The jump in floating point math capability is going to be substantial moving from the current generation of Epyc to the next generation, as we discussed in detail last November when AMD revealed some of the specs of its future CPU. The vector units in the next generation Epyc cores will be twice as wide, at 256-bits per unit, and there will be twice as many cores on the die as the first gen parts, at 64 cores. 
So, at the same clock speed, the next generation Epyc products should be able to do four times the floating point work as the first generation Epyc chip, which was already able to stand toe-to-toe with the “Skylake” Xeon SP-6148 Gold and Xeon SP-6153 Gold processors that are commonly used in the price-conscious HPC sector. This last bit is the important thing to consider, and it gives us a baseline from which we can compare the first generation and the next generation Epyc chips from AMD to the current Skylake and newly announced “Cascade Lake” processors from Intel. Most HPC shops cannot afford the heat or the cost of the top bin Skylake Xeon SP parts from Intel, and they tend to use low-cost parts where memory capacity and NUMA scaling are not as important as in those top bin parts. Here is how AMD stacks up a pair of its top bin first gen Epyc 7601 processors, with 32 cores running at 2.2 GHz each, against a pair of the mainstream Skylake Xeon SP 6148 Gold processors on the SPEC integer and floating point tests, which is the starting point in any comparison: At list price, the performance increase is basically neutralized by the difference in price of the machines. The Intel Xeon SP 6148 Gold processor has a list price of $3,072 when bought in 1,000-unit quantities, while the AMD Epyc 7601 costs $4,000 a pop in those same 1,000 unit quantities. But list price is just a ceiling on the price that HPC centers pay for the large number of processors they deploy in their compute clusters – a starting point for negotiations. AMD and its partners can – and do – win deals by being competitive with street prices for processors for HPC iron. There are a lot of ways to dice and slice the processor lineups from AMD and Intel to try to gauge performance and price/performance. Another way to do it is to try to get two processors that have something close to the same core counts, clock speeds, and raw double precision floating point performance. In this case, AMD likes to compare its Epyc 7371 chip, which is essentially an overclocked Epyc 7351 that just started shipping in the first quarter of 2019, which has 16 cores running at 3.1 GHz (29 percent higher than the base frequency of the Epyc 7351), to Intel’s Xeon SP-6154 Gold processor which has 18 cores running at 3 GHz. The Epyc 7371 started sampling last November, and it is specifically aimed at workloads where single-threaded performance is as important as the number of threads per box, and wattage is less of a concern because of the need for faster response on those threads. Think electronic design automation (one of the keys to driving Moore’s Law, ironically), data analytics, search engines, and video transcoding. In any event, AMD Epyc 7371 and Intel Xeon SP-6154 Gold chips have essentially the same SPEC floating point performance, and the AMD chip, at $1,550, costs 56 percent less than the Xeon in this comparison, which has a list price of $3,543. Of course, SPEC is not a real application, but just a microbenchmark that illustrates theoretical peak performance for a given processor. It is a good starting point, to be sure – akin to table stakes to play the game. To make a better case for the Epyc chip, AMD pit the top bin Epyc 7601 against Intel’s top bin Skylake Xeon SP-8180M, which has 28 cores running at 2.5 GHz, running the Weather Research and Forecasting (WRF) weather simulation using the CONUS 12 kilometer resolution dataset. 
Here is how a single node running WRF stacked up: Each server running the WRF code had a pair of the AMD and Intel processors, and the AMD machine delivered 44 percent more performance, due to the combination of more cores and more memory bandwidth. In this case, the Xeon SP-8180M has a list price of $13,011, which is 3.25X as expensive as the Epyc 7601, so the price/performance advantage is definitely to AMD on this one. Walking down the Skylake bins will lower the price, but it will also lower the performance, so there is no way for Intel to close this gap until it launches “Cascade Lake,” and shortly thereafter, AMD is expected to be able to quadruple its floating point performance with the next generation Epyc, potentially burying even the two-chip Cascade Lake AP module on performance and price/performance. We won’t know for sure until both products are announced. But the game is definitely afoot. AMD has also done a series of comparison benchmarks using the ANSYS Fluent computational fluid dynamics simulator on single nodes and across multiple nodes equipped with Skylake Xeon and Epyc processors. In these tests, AMD compared two socket machines using the middle bin Xeon SP-6148 Gold (20 cores at 2.4 GHz) to the middle bin Epyc 7451 (24 cores at 2.3 GHz) and the new HPC SKU, the Epyc 7371 (16 cores at 3.1 GHz). The Epyc 7451 delivers more performance than the Epyc 7371, but at $2,400 each compared to $1,550, it also costs more, too. Take a look: It is a bit peculiar that not all of the ANSYS Fluent tests were performed on the Epyc 7451 in the chart above. But it almost doesn’t matter since the Epyc 7371 costs less and still competes with or beats the Xeon SP-6148 and costs 49.5 percent less per CPU. This more interesting HPC comparison is one that AMD did for the ANSYS Fluent simulation of a flow through a combustor with 71 million cells, a fairly dense simulation, across 1, 2, 4, 8, and 16 nodes with these three different processors. The nodes with the Epyc 7371 processors bested the performance of those with the Xeon SP-6148 Gold, and shot the gap between these Xeon chips and the more expensive and more cored Epyc 7451 processors. The incremental cost of the higher bin Epyc chip is greater than the performance gain, and this would also be true if you jumped from the Epyc 7451 to the Epyc 7601. This is just the nature of chip manufacturing – higher bin parts are more rare and therefore cost more to make, hence they have a higher price – whether it is the Xeon or Epyc lines. Or pretty much any other processor, for that matter. The question you have to ask is who ramps the prices fastest as you climb the bins. Incidentally, all of the results shown above are using the open source GCC compilers on both the Intel Xeon and AMD Epyc processors. The HPC Ramp Takes Time Given the pressure that HPC centers are always under to get the most compute for the buck or euro or yen or renminbi, you might think that the current generation Epyc processor would have taken off in HPC already. But the HPC ramp has taken time as software vendors have tuned their code for the Epyc architecture. The 32-core Epyc chips pack four eight-core chiplets on a die, which are linked together using PCI-Express transports and the Infinity Fabric protocol (a kind of new-and-improved HyperTransport), in a single socket to look like a single compute element. 
These chiplets present NUMA regions in the socket to the operating system kernels, and that can be an issue for certain kinds of workloads, such as a virtual machine running on a hypervisor that spans two or more chiplets in a socket or multiple chiplets across a pair of sockets. But with capacity-class supercomputers that have lots of users and codes that run on CPUs and that cannot be accelerated (or haven't been accelerated by GPUs because of the hard work involved), HPC centers usually just stack up MPI ranks, one per core, and off they go a-scaling. Those latencies across the chiplets inside of a multicore module don't matter as much because the HPC workload is not trying to span two or three or four chiplets at once. But supercomputers tend to be on a three or four year upgrade cycle, and a lot of customers, who first started hearing about the current generation Epyc processors with any detail in 2016, took a wait and see attitude, especially after many thought AMD burned them by leaving the field all to Intel back in 2009. (It is hard to quantify how much the lack of competition in the X86 server processor space added to the cost of compute, but it could be easily billions of dollars.) Here is the fun bit. The next generation Epyc processors, which will use a mix of 7 nanometer cores and 14 nanometer I/O and memory controller chips, will be better suited to HPC because of architecture tweaks, both in the floating point units and in the Infinity Fabric within the socket, at exactly the time when Intel is starting to run out of gas on its 14 nanometer process used for the prior two generations of Xeon chips and also for the Cascade Lake and future "Cooper Lake" Xeon SP processors. For all of the talk about application scalability, a lot of HPC jobs only need 64 cores or 128 cores, and with the next generation Epyc processors, what that means is that one or two sockets of aggregate compute will do the trick for running most jobs – a very interesting development indeed. This could mean an uptick in the deployment of single-socket server nodes in the HPC sector, just like The Nikhef National Institute for Subatomic Physics in Amsterdam has done with the current generation of Epyc using a Dell EMC PowerEdge R6415 cluster. "Without sharing the number, there was a big, big chunk of our overall 2018 volume that was for one socket servers," says Bounds. "The way we think about it is: Why wouldn't everybody want to build off a single socket platform if there are licensing as well as operational, power, and cooling benefits? If it is a better and more performant solution, why wouldn't you do that?" And the answer was something interesting: there's still a notion that you need a two-socket server because it is redundant. Yeah, we know. A regular two-socket server is not a redundant machine, just a scaled one. You lose one processor, you lose the whole machine. But apparently a lot of people in the IT biz don't know the difference between NUMA shared memory and a Stratus or Concurrent fault tolerant machine from days gone by. "With the next generation Epyc processor, you combine the fact that we have introduced this idea to the market and we anticipate functionally doubling if not quadrupling what you can get out of a socket – integer and floating point, respectively – and that is going to continue to grow the adoption of single socket," says Bounds.
And there is a very good chance that HPC shops will be as enthusiastic about the next generation Epyc chips as they once were with the Opterons, no matter what Intel does with Cascade Lake and its next-in-line “Cooper Lake” – and maybe even the 10 nanometer “Ice Lake” CPUs coming in 2020, too.
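To put some arithmetic behind the "4X the floating point performance" claim made earlier in this piece, here is a back-of-the-envelope Python sketch. The number of FMA units per core is an assumption made purely for illustration (these are not official AMD figures), but the point of the sketch is that the 4X ratio falls out of doubling both the core count and the vector width at a fixed clock, regardless of what that assumed constant is.

```python
def peak_dp_gflops(cores, ghz, vector_bits, fma_units=2):
    """Rough peak double-precision GFLOPS: 64-bit lanes per vector unit,
    two FLOPs per lane per FMA, an assumed count of FMA units per core."""
    lanes = vector_bits // 64
    flops_per_cycle = lanes * 2 * fma_units   # multiply + add per lane, per FMA unit
    return cores * ghz * flops_per_cycle

gen1 = peak_dp_gflops(cores=32, ghz=2.2, vector_bits=128)   # first-gen Epyc: 32 cores, 128-bit vectors
gen2 = peak_dp_gflops(cores=64, ghz=2.2, vector_bits=256)   # next-gen Epyc: 64 cores, 256-bit vectors
print(gen1, gen2, gen2 / gen1)   # the ratio is 4.0 whatever fixed FMA count is assumed
```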
<urn:uuid:0491a3ad-4aa6-4482-8ac3-7337dfbeba46>
CC-MAIN-2022-40
https://www.nextplatform.com/2019/04/04/back-to-the-hpc-future-with-next-generation-amd-epyc-processors/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00439.warc.gz
en
0.954688
3,880
2.609375
3
An Intrusion Detection System (IDS) is a network monitoring solution that detects and alerts on suspicious network traffic. The relevant personnel can investigate the alerts to determine whether they need further attention. An Intrusion Detection System can be either host-based or network-based.

Host-Based IDS (HIDS)

A host-based Intrusion Detection System is installed on endpoints, as opposed to being installed on the network perimeter. A HIDS serves to protect the endpoints from internal and external threats by monitoring the traffic flowing to and from them, as well as monitoring internal processes and event logs. With a HIDS, visibility is limited to the device it is installed on.

Network-Based IDS (NIDS)

A network-based Intrusion Detection System is designed to monitor the traffic flowing through the entire network and is usually installed just behind the firewall, on a dedicated machine somewhere in the network. A NIDS has visibility into all network traffic, although it has limited visibility into the activity that takes place on the endpoints. A NIDS will analyze the contents and metadata of the traffic using deep packet inspection (DPI) in order to determine whether the traffic is malicious or not. A NIDS provides more context than a HIDS and is thus able to detect more sophisticated threats. Naturally, the best option would be to use both a HIDS and a NIDS in tandem, or even use a unified threat management solution that integrates multiple threat management tools into one system.

How Do Intrusion Detection Systems Detect Threats?

There are essentially three methods that IDS solutions use to detect potential threats, which are as follows:

Signature-based detection. This method identifies threats according to their signature, which is similar to a fingerprint. Each time a new threat has been identified, a signature is generated and added to a list of threats, which the IDS will use as a reference. Given that signature detection is based on known threats, it rarely produces false positives, although the downside is that it can't detect zero-day vulnerabilities.

Anomaly-based detection. This method uses machine learning algorithms to establish a baseline of what would be considered "normal" behavior. Using this baseline, the IDS can alert on traffic patterns that deviate from it beyond a certain threshold. Unlike signature detection, anomaly detection is able to detect unknown (zero-day) threats. The downside of this approach is that it is not particularly accurate, and thus tends to produce a lot of false positives/negatives.

Hybrid detection. This method is essentially the best of both worlds, in that it uses both signature-based and anomaly-based detection to identify a broader range of threats, with fewer false positives/negatives.

Intrusion Detection Systems vs Firewalls

Both intrusion detection systems and firewalls are designed to protect your network from malicious traffic; however, an IDS solution doesn't actually do anything after a threat has been identified. Essentially, it is up to the administrator to investigate the potential incident and respond accordingly. A firewall, on the other hand, can block traffic based upon predefined rules.

Intrusion Detection Systems vs Intrusion Prevention Systems

An Intrusion Prevention System (IPS) is very similar to an IDS, only, as with a firewall, an IPS is able to actively block identified threats. In fact, next-generation firewalls (NGFWs) go a step further and integrate IDS and IPS functionality into one system.
It’s likely that we will see such integrations becoming the norm in future threat management solutions. However, it should be noted that perimeter security is not as relevant as it once was. These days, with IT environments becoming increasingly more distributed, organizations are shifting their focus to more data-centric methods of keeping their network secure, which includes monitoring access to privileged accounts and sensitive data, in real-time.
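As a recap of the detection methods described above, here is a minimal, hypothetical Python sketch (not any vendor's detection engine): a signature check against a small list of known-bad patterns, a crude anomaly check against a traffic baseline, and a hybrid rule that alerts if either fires. The signatures, threshold, and traffic numbers are made up for the example.

```python
KNOWN_BAD_SIGNATURES = {b"DROP TABLE", b"/etc/passwd", b"\x90\x90\x90\x90"}

def signature_match(payload: bytes) -> bool:
    """Signature detection: match against known-bad patterns
    (few false positives, but blind to zero-day threats)."""
    return any(sig in payload for sig in KNOWN_BAD_SIGNATURES)

def anomaly_detected(bps: float, baseline: float) -> bool:
    """Anomaly detection: alert when traffic volume deviates far from a learned baseline
    (catches unknown threats, but prone to false positives)."""
    return bps > 3 * baseline

def hybrid_alert(payload: bytes, bps: float, baseline: float) -> bool:
    return signature_match(payload) or anomaly_detected(bps, baseline)

print(hybrid_alert(b"GET /etc/passwd HTTP/1.1", bps=500.0, baseline=400.0))    # True (signature hit)
print(hybrid_alert(b"GET /index.html HTTP/1.1", bps=5000.0, baseline=400.0))   # True (anomalous volume)
```

In an IDS the result would raise an alert for an administrator to investigate; in an IPS or NGFW the same decision could be wired directly to a blocking action.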
<urn:uuid:a881e2f9-b156-4497-bf15-07b31c719acb>
CC-MAIN-2022-40
https://www.lepide.com/blog/what-is-an-intrusion-detection-system/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00439.warc.gz
en
0.946152
808
3.109375
3
Failover is the ability to seamlessly and automatically switch to a reliable backup system. Either redundancy or moving into a standby operational mode when a primary system component fails should achieve failover and reduce or eliminate negative user impact. A redundant or standby database server, system, or other hardware component, server, or network should be ready to replace any previously active version upon its abnormal termination or failure. Because failover is essential to disaster recovery, all standby computer server systems and other backup techniques must themselves be immune to failure. Switchover is basically the same operation, but unlike failover it is not automatic and demands human intervention. Most computer systems are backed up by automatic failover solutions. What is Failover? For servers, failover automation includes heartbeat cables that connect a pair of servers. The secondary server merely rests as long as it perceives the pulse or heartbeat continues. However, any change in the pulse it receives from the primary failover server will cause the secondary server to initiate its instances and take over the operations of the primary. It will also send a message to the data center or technician, requesting that the primary server be brought back online. Some systems instead simply alert the data center or technician, requesting a manual change to the secondary server. This kind of system is called automated with manual approval configuration. Storage area networks (SAN) enable multiple paths of connectivity among and between data storage systems and servers. This means fewer single points of failure, redundant or standby computer components, and multiple paths to help find a functional path in the event of component failure. Virtualization uses a pseudomachine or virtual machine with host software to simulate a computer environment. This frees failover from dependency on physical computer server system hardware components. What is a Failover Cluster? A set of computer servers that together provide continuous availability (CA), fault tolerance (FT), or high availability (HA) is called a failover cluster. Failover clusters may use physical hardware only, or they may also include virtual machines (VMs). In a failover cluster, the failover process is triggered if one of the servers goes down. This prevents downtime by instantly sending the workload from the failed component to another node in the cluster. Providing either CA or HA for services and applications is the primary goal of a failover cluster. CA clusters, also called fault tolerant (FT) clusters, eliminate downtime when a primary system fails, allowing end users to keep using services and applications without any timeouts. HA clusters, in contrast, offer automatic recovery, minimal downtime, and no data loss despite a potential brief interruption in service. Most failover cluster solutions include failover cluster manager tools that allow users to configure the process. More generally, a cluster is two or more servers, or nodes, which are typically connected both via software and physically with cables. Some failover implementations include additional clustering technology such as load balancing, parallel or concurrent processing, and storage solutions. Active-Active vs Active-Standby Configurations The most common high availability (HA) configurations are active-active and active-standby or active-passive. These implementation techniques both improve reliability, but each achieves failover in a different way. 
An active-active high availability cluster is usually composed of at least two nodes actively running the same type of service at the same time. The active-active cluster achieves load balancing, preventing any one node from overloading by distributing workloads across all the nodes more evenly. This also improves response and throughput times, because more nodes are available. The individual settings and configurations of the twin nodes should be identical to ensure redundancy and seamless operation of the HA cluster. Load balancers assign clients to nodes in a cluster based on an algorithm, not randomly. For example, a round robin algorithm evenly distributes clients to servers based on when they connect.

In contrast, although there must be at least two nodes in an active-passive cluster, not all of them are active. Using a two node example again, with the first node in active mode, the second will be on standby or passive. This second node is the failover server, ready to function as a backup should the primary, active server stop functioning for any reason. Meanwhile, clients will only be connecting to the active server unless something goes wrong. In the active-standby cluster, both servers must be configured with the very same settings, just as in the active-active cluster. This way, should the failover server need to take over, clients will not be able to perceive a difference in service.

Obviously, although the standby node is always running in an active-standby configuration, actual utilization of the standby node is nearly zero. Utilization of both nodes in an active-active configuration approaches 50-50, although each node is capable of handling the entire load. This means that if one node in an active-active configuration consistently handles more than half of the load, node failure can mean degraded performance. With an active-active HA configuration, outage time during a failure is virtually zero because both paths are active. However, outage time has the potential to be greater with an active-passive configuration, as the system needs time to switch from one node to the other.

What is a SQL Server Failover Cluster?

A SQL server failover cluster, also called a high-availability cluster, makes critical systems redundant. The SQL failover cluster eliminates any potential single point of failure by including shared data storage and multiple network connections via NAS (Network Attached Storage) or SANs. The network connection called the heartbeat, discussed above, connects two servers. The heartbeat monitors each node in the SQL failover cluster environment constantly.

What is DHCP Failover?

A DHCP server relies on the standard Dynamic Host Configuration Protocol, or DHCP, to respond to client broadcast queries. This network server assigns and provides default gateways, IP addresses, and other network parameters to client devices automatically. DHCP failover configuration involves using two or more DHCP servers to manage the same pool of addresses. This enables each of the DHCP servers to back up the other in case of network outages, and to share the task of lease assignment for that pool at all times. However, dialogue between failover partners is insecure, in that it is neither authenticated nor encrypted. In most organizations securing this traffic is unnecessarily costly, because DHCP servers typically exist within the company's secure intranet. On the other hand, if your DHCP failover peers communicate across insecure networks, security is far more important.
Configure local firewalls to prevent unauthorized users and devices from accessing the failover port. You can also protect the failover partnership from accidental or deliberate disruption by third parties by using VPN tunneling between the DHCP failover peers. What is DNS Failover? The Domain Name System (DNS) is the protocol that helps translate between IP addresses and hostnames that humans can read. DNS failover helps network services or websites stay accessible during an outage. DNS failover creates a DNS record that includes two or more IP addresses or failover links for a single server. This way, you can redirect traffic to a live, redundant server and away from a failing server. In contrast, failover hosting involves hosting a separate copy of your site at a different datacenter. This way no data is lost should one copy fail. What is Application Server Failover? Application server failover is simply a failover strategy that protects multiple servers that run applications. Ideally these application servers should themselves run on different servers, but they should at least have unique domain names. Application server load balancing is often part of a strategy following failover cluster best practices. What is Failover Testing? Failover testing is a method that validates failover capability in servers. In other words, it tests a system’s capacity to allocate sufficient resources toward recovery during a server failure. Can the system move operations to backup systems and handle the necessary extra resources in the event of any kind of failure or abnormal termination? For example, failover and recovery testing will assess the system’s ability to power and manage multiple servers or an additional CPU when it reaches a performance threshold. This threshold is most likely to be breached during critical failures—highlighting the relationship between security and resilience and failover testing. Does Avi offer a Cloud Failover Strategy? A 3-node Avi Controller cluster can help achieve high availability, provides node-level redundancy for the Avi Controller and optimizes performance for CPU-intensive analytics functions. Find out more about Avi Controller cluster high availability failover operation here.
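As a small illustration of the active-passive pattern described earlier, here is a hypothetical Python sketch of a watchdog that polls the active node's health endpoint and, when the "heartbeat" fails, promotes the standby and points traffic at it. The node addresses and polling interval are made up, and a real failover cluster would use dedicated heartbeat links and a cluster manager rather than a loop like this.

```python
import time
import urllib.request

NODES = {"active": "http://10.0.0.1/health", "standby": "http://10.0.0.2/health"}  # hypothetical addresses

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Treat any non-200 response, timeout, or connection error as a failed heartbeat."""
    try:
        return urllib.request.urlopen(url, timeout=timeout).status == 200
    except OSError:
        return False

def watchdog():
    current = "active"
    while True:
        if not is_healthy(NODES[current]):
            # Failover: promote the other node and redirect clients to it.
            current = "standby" if current == "active" else "active"
            print(f"heartbeat lost -> failing over to the {current} node")
        time.sleep(5)   # heartbeat interval

# watchdog()  # in a real deployment this would run continuously, e.g. in a daemon process
```

The same polling-and-switch idea underlies DNS failover: when the health check fails, the record is updated so that clients resolve to the surviving node's address instead.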
<urn:uuid:2d22c304-cae0-40fd-9c00-1cba8cb45952>
CC-MAIN-2022-40
https://avinetworks.com/glossary/failover/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00639.warc.gz
en
0.907184
1,773
3.46875
3
The possibilities of the Internet of Things (IoT) are seemingly endless, and its applications extend beyond being used by homeowners and commercial businesses. Although there are many similarities in the way IoT protocols are implemented, there are some differences that set them apart when they are used in a commercial setting or a residential setting. When most people think about the features of IoT, the first trait that comes to mind is connectivity. Most people dwell on the convenience of interconnectivity between devices. This focus is not necessarily a bad thing, but it often stops people from seeing how they can implement IoT in a bid to make their physical security smarter and more efficient. Once you stop to ponder the possibility of smarter security, you begin to wonder how IoT is accomplishing this. Most executives have a basic grasp of IoT, and are actively seeking to introduce greater levels of automation into their companies, but they overlook the ways this can impact security. Today's article will take a look at the way IoT protocols can be implemented alongside physical security, and explore whether or not this makes physical security smarter.

How Can IoT Be Implemented Alongside Physical Security?

Every commercial space that values its business, data, and clients focuses on physical security. At their core, physical security measures are merely meant to protect the interests of your company. Some basic physical security measures include the use of high-security commercial door locks, surveillance cameras, commercial safes, access control systems, etc. In order to effectively implement IoT protocols alongside some of the measures that were listed, company executives should have an intimate understanding of the way their company works, the security goals they wish to accomplish, and the benefits or limitations of the physical security measures. IoT can either aid or hinder physical security efforts, but this all depends on how the automation solutions are implemented. In most cases, IoT protocols will serve as an auxiliary measure that either relays or monitors environmental data and actions that revolve around your physical security. For instance, IoT-enabled access control devices will give business owners more control over who has access to their company, and it will also allow them to monitor the way employees and clients navigate a space or facility. This makes it much easier for businesses to compartmentalize their security and set up various levels of security access. In a sense, you could say that pairing access control devices with IoT made them smarter, or extended their reach. You would be correct in making this assumption, but how far does this relationship go? Let's take a look at some of the ways IoT can potentially make physical security smarter:

1. Access To Real-time Security Alerts

IoT enables security devices to deliver real-time security alerts, which can positively change the landscape of physical security protocols. The presence of IoT means that data about a particular physical environment can be processed and relayed much faster. If access control devices are able to communicate with surveillance cameras and other security deterrents, it means that companies will receive security alerts right when they happen. Without the presence of IoT connectivity, an unauthorized individual could easily bypass locks meant to restrict access, and employees and executives would be none the wiser until it was far too late.
If someone knows how to pick locks, they can potentially gain access without any alarms being raised. However, with IoT being paired with physical security measures, the scope of physical security broadens and becomes more effective. This is because IoT devices serve as intermediaries that take on the role of processing and relaying data.

2. Digital Trail of Security Events

IoT devices can negate the limitations of physical security measures which, in turn, helps make them much smarter. Security consists of passive and active measures, and the coexistence of each of these elements is what helps keep security effective. However, it is often very difficult for some of these measures to keep a log of security events. Even if you have the best keypad door locks installed in an office complex, most of them afford only a limited audit trail. Once your security measures are paired with IoT-enabled devices that bolster interconnectivity, you will have almost unlimited access to a digital trail of security events. This might not sound like something that companies should be interested in, but it is. As I pointed out earlier on, security is about passive and active measures. One of these passive measures often involves tracking down security events that might have been logged after a security breach or other related emergency. Having access to this data helps companies catch culprits and refine the layout of their security to ensure that nothing similar occurs again.

3. Geofencing Capabilities

Implementing a structure of IoT devices also allows companies to take advantage of geofencing capabilities, something which many industries are gradually implementing because they understand the benefits it brings. Imagine that you were able to have all the locks in your facility talk to one another, and also relay information to surveillance cameras and other devices. What does that give your company? An interconnected barrier that becomes rather difficult to penetrate if you do not have access. A geofence is essentially a geographical fence or barrier, and it is often used by mobile app developers to target customers in specific locations. However, companies who pair physical security measures with IoT devices can take advantage of the benefits of geofencing. This measure will allow you to create a barrier around your company, which allows you to meet physical security threats head-on by detecting breaches in a more efficient manner.

The future of physical security will most likely involve IoT. With the Internet of Things industry slated to grow exponentially over the next few years, it will begin to pervade every facet of modern society, and this includes security. I think that IoT can make physical security smarter, but this does not mean that IoT security itself should be let off the hook. Security is all about layers, so even if IoT enhances your physical security measures, companies should still strive to implement cybersecurity protocols that keep them covered on all fronts.

Written by Ralph Goodman, author at United Locksmith
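As a footnote to the geofencing point above, here is a small hypothetical Python sketch: a circular geofence around a facility, with a great-circle distance check that could trigger an alert when a tracked badge or device reports a position outside the perimeter. The coordinates and radius are invented for the example, and a production system would use a proper geospatial library rather than a hand-rolled haversine.

```python
from math import radians, sin, cos, asin, sqrt

FACILITY = (40.7128, -74.0060)   # hypothetical facility coordinates (lat, lon)
RADIUS_M = 200.0                 # geofence radius in metres

def haversine_m(a, b):
    """Great-circle distance between two (lat, lon) points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def inside_geofence(position):
    return haversine_m(FACILITY, position) <= RADIUS_M

print(inside_geofence((40.7129, -74.0061)))   # True: within the perimeter
print(inside_geofence((40.7300, -74.0060)))   # False: outside the fence -> raise a security alert
```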
<urn:uuid:a2b1fdd1-2ce4-437b-8072-b2d1dc2670c9>
CC-MAIN-2022-40
https://www.iotforall.com/iot-physical-security-technology
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00639.warc.gz
en
0.958115
1,247
2.578125
3
When a computer connects to a network in an office for example, a world of working possibilities opens up to employees: sharing documents, messaging, instant access to data in other systems… In an office, the issue of security is not very problematic, as we are talking about an enclosed, generally closely-monitored environment. Similarly, when you connect a computer to the Internet, the working possibilities are the same: share documents, access to information in any kind of computer, send e-mails to anywhere in the world. However, the Internet is always looked upon without considering its dark side. Since an Internet connection can provide access to information in a myriad computers, so maybe these computers can also access the information you have. Generally, the process is not that simple, although for many hackers it can just take a few minutes, nothing more. Some basic safety measures can prevent the Internet connection from becoming a problem instead of an advantage. The first thing to consider is that the main danger of the Internet at this moment is the proliferation of too many viruses. Even seemingly innocuous actions can carry with them the risk of infection. Have you received an e-mail? Well, with the infamous virus ” I love you” you couldn’t even trust a love letter. You want to browse the Web? There are many Internet servers with viruses in their pages (and not always unintentionally). Thinking of installing the latest free game everyone is talking about? It might contain a lovely Trojan Horse that, like its namesake, opens the door to attackers. Besides the virus issue, which will be dealt with in more detail in another article in this series, while you browse the Internet, you actually receive lots of information in the form of files that stay in your computer. Most of these files pose no threat (see the section on cookies), but it’s not always the case. For example, a web page might need to install a component on your computer to be displayed correctly. This can be a Java Applet (a small file with a program that carries out a certain action on your PC, like displaying an animation, a special effect or some web page feature), or an ActiveX control. These elements must be ‘signed’, that is, they must incorporate an authentication system guaranteeing that the content is the same at source and at destination. You should always reject an item whose signature is no longer valid or comes from an unreliable site, as it could have disastrous effects. The much-maligned cookie. A cookie is just a little bit of information about you that Internet servers store on your PC in order to know you have visited a certain web page and make things easier for you. For example, if you visit a sport newspaper’s web page to see the latest score from the Arsenal (or Real Madrid) match, the page will store that data in your PC in the form of a cookie, so that the next time you enter the page it will display information about your favorite team automatically. However, these ‘harmless’ cookies could pose a threat. If somebody looks at the cookies in your PC, they will know which sites you have visited, learning, for example, the kind of music you like, your hobbies or even your sexual tendencies. These files can reach your computer in several ways. As previously explained, one of them is almost unintentional and results just from browsing itself, whereas on other occasions it is users themselves that bring files into their computers. 
Consider a file download: an updated driver for a video card, a list of the closest drugstores, the proceedings of an ornithology congress, a new game level, and so on. In these cases it is you who are asking for information to enter your computer, and you who are exposing yourself to danger. Files can enter your computer in many ways. Not only can you choose to download a file (through a link on a web page or an FTP session), but you can also receive files from other people through a chat session. Moreover, some programs contain vulnerabilities that allow files to be downloaded without the user ever being informed. To sum up, keep close control over what enters your PC. Tightening your browser's security settings takes only seconds and will save you a lot of trouble. Installing a personal firewall, by contrast, is complex, expensive and best left to expert users. If you want to stop dangerous files from reaching your PC, install an antivirus program on your computer, let it update itself once a day and relax.
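As a rough illustration of the cookie mechanism described earlier, the snippet below uses Python's standard http.cookies module to build the kind of small named value a server asks the browser to remember. The cookie name, value and lifetime are invented for the example.

```python
from http.cookies import SimpleCookie

# A server-side sketch: remember the visitor's favourite team.
cookie = SimpleCookie()
cookie["fav_team"] = "Arsenal"
cookie["fav_team"]["max-age"] = 30 * 24 * 3600   # keep it for 30 days

# This is the header the web server would send to the browser;
# the browser stores it and sends it back on later visits.
print(cookie.output())   # e.g. "Set-Cookie: fav_team=Arsenal; Max-Age=2592000"
```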
<urn:uuid:6d028c7c-b110-49cc-8efe-0af022aaae48>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2002/09/23/security-online/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00639.warc.gz
en
0.937825
915
3.015625
3
Northrop Grumman's Astro Aerospace subsidiary has completed the preliminary design review of a radar antenna reflector technology the business developed for a joint satellite project between NASA and the Indian Space Research Organization. Northrop said Tuesday the AstroMesh reflector will be integrated onto the NASA-ISRO Synthetic Aperture Radar (NISAR) satellite, which will operate at L-band and S-band frequencies to offer scientists a detailed view of Earth. The company added that Astro Aerospace is now ready to proceed with the detailed design and fabrication phase of the AstroMesh initiative. NISAR is designed to observe Earth's ecosystem disturbances, ice-sheet dynamics and natural hazards such as earthquakes, tsunamis, volcanic eruptions and landslides, as well as the planet's other complex processes. Northrop noted the satellite, which is scheduled for launch in 2021, will help researchers study the crust and evolution of Earth. The NISAR project follows the deployment and spin-up of the Jet Propulsion Laboratory's Soil Moisture Active Passive satellite in 2015.
<urn:uuid:9e9bbd91-6e37-4dac-ac21-002c0369ee5c>
CC-MAIN-2022-40
https://blog.executivebiz.com/2016/08/northrop-subsidiary-reviews-design-of-radar-antenna-reflector-for-joint-nasa-india-satellite/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00039.warc.gz
en
0.891949
221
2.53125
3
Although the vast majority of research on the gut microbiome has focused on bacteria in the large intestine, a new study—one of a few to concentrate on microbes in the upper gastrointestinal tract—shows how the typical calorie-dense western diet can induce expansion of microbes that promote the digestion and absorption of high-fat foods. Several studies have shown that these bacteria can multiply within 24 to 48 hours in the small bowel in response to consumption of high-fat foods. The findings from this work suggest that these microbes facilitate production and secretion of digestive enzymes into the small bowel. Those digestive enzymes break down dietary fat, enabling the rapid absorption of calorie-dense foods. Concurrently, the microbes release bioactive compounds. These compounds stimulate the absorptive cells in the intestine to package and transport fat for absorption. Over time, the steady presence of these microbes can lead to over-nutrition and obesity. “These bacteria are part of an orchestrated series of events that make lipid absorption more efficient,” said the study’s senior author, Eugene B. Chang, MD, the Martin Boyer Professor of Medicine and director of the NIH Digestive Diseases Research Core Center at the University of Chicago Medicine . “Few people have focused on the microbiome of the small intestine, but this is where most vitamins and other micronutrients are digested and absorbed.” “Our study is one of the first to show that specific small-bowel microbes directly regulate both digestion and absorption of lipids,” he added. “This could have significant clinical applications, especially for the prevention and treatment of obesity and cardiovascular disease.” The goals of the study, published April 11, 2018 in the journal Cell Host and Microbe, were to find out if microbes were required for digestion and absorption of fats, to begin to learn which microbes were involved, and to assess the role of diet-induced microbes on the digestion and uptake of fats. The study involved mice that were germ-free, bred in isolated chambers and harboring no intestinal bacteria, and mice that were “specific pathogen free (SPF),” meaning healthy but harboring common non-disease causing microbes. The germ-free mice, even when fed a high-fat diet, were unable to digest or absorb fatty foods. They did not gain weight. Instead, they had elevated lipid levels in their stool. SPF mice that received a high-fat diet did gain weight. This diet quickly boosted the abundance of certain microbes in the small intestine, including microbes from the Clostridiaceae and Peptostreptococcaceae families. A member of Clostridiaceae was found to specifically impact fat absorption. The abundance of other bacterial families decreased on a high-fat diet including Bifidobacteriacaea and Bacteriodacaea, which are commonly associated with leanness. When germ-free mice were subsequently introduced to microbes that contribute to fat digestion, they quickly gained the ability to absorb lipids. “Our study found that, at least in mice, a high-fat diet can profoundly alter the microbial make-up of the small intestine,” Chang said. “Certain dietary pressures, such as calorie-dense foods, attract specific bacterial strains into the small intestine. These microbes are then able to allow the host to digest this high-fat diet and absorb fats. That can even impact extra-intestinal organs such as the pancreas.” “This work has important implications in developing approaches to combat obesity,” the authors conclude. 
This includes decreasing the abundance or activity of certain microbes that promote fat absorption, or increasing the abundance of microbes that may inhibit fat uptake. “I would say the most important takeaway overall is the concept that what we eat—our diet on a daily basis—has a profound impact on the abundance and the type of bacteria we harbor in our gut,” said Kristina Martinez-Guryn, PhD, lead author of the study, and now an assistant professor at Midwestern University in Downers Grove, IL. “These microbes directly influence our metabolism and our propensity to gain weight on certain diets.” Although this study was very preliminary, she added, “our results suggest that maybe we could use pre- or probiotics or even develop post-biotics (bacterial-derived compounds or metabolites) to enhance nutrient uptake for people with malabsorption disorders, such as Crohn’s disease, or we could test novel ways to decrease obesity.” More information: “Small intestine microbiota regulate host digestive and absorptive adaptive responses to dietary lipids,” Cell Host and Microbe (2018). DOI: 10.1016/j.chom.2018.03.011 Journal reference: Cell Host and Microbe Provided by: University of Chicago Medical Center
<urn:uuid:3c48d6da-59bf-44c8-8ad0-57c1bcf037d7>
CC-MAIN-2022-40
https://debuglies.com/2018/04/12/specific-bacteria-in-the-small-intestine-are-crucial-for-fat-absorption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00039.warc.gz
en
0.946868
1,023
3.265625
3
A journey through the ages

We are currently in the midst of what is commonly referred to as the Fourth Industrial Revolution. This has seen a shift from the traditional engineered mechanical technologies of the 19th and early 20th centuries to digital technology and all the innovation this has engendered. So how has that impacted policing and, more importantly, how can digital data play a part in helping to fight crime and benefit the general public?

1990s in-house designed systems – limited reporting ability, siloed data

Policing has had to undergo significant changes in the last 30 years to keep up with the increasingly complex technologies that are now a part of everyday life. It's worth briefly considering just how far forces have had to come on that journey, from analogue radios and paper-based filing systems to today's increasingly sophisticated crime-fighting tools that deploy mobile technology.

Prior to the 1990s, information was largely stored on paper-based filing systems. Officers from that era will recall manually typing or handwriting crime reports on triplicated carbon paper, with copies sent to various filing systems. Only small amounts of data were entered by data clerks into individual forces' own mainframe computers. That effort would produce what would be considered, in today's terms, the most basic statistical information covering crimes and road accidents. Nationally, information was shared across the UK's law enforcement agencies via the Police National Computer, which covered nominal records relating to criminal history, wanted and missing notifications, as well as vehicle and property (such as machinery and heavy plant) records. In addition, Scottish forces also had their own nominal-based criminal records system1, but compared with today's digital capability the ability to share data more generally across any of the UK forces was otherwise quite restricted. Forces often relied on files being copied and physically transferred between agencies.

The development and deployment of ICT systems during the early 1990s saw a huge increase in the appetite of Chief Constables to oversee the rollout of new computer systems across forces. With that came an increase in the variety of data collected by police forces. Old-fashioned manual collator techniques were replaced with shiny new intelligence systems. Station logbooks to record incidents were replaced with electronic versions. Custody ledger books were replaced by custody systems providing accountability in terms of managing people in police custody. Systems to record evidence and lost and found property replaced the numerous books designed to record such items. All of these solutions helped improve the accountability and security of data and marked the beginning of data standardisation. Many systems enabled officers to enter the data themselves, meaning users had access to records more quickly than when they had to request limited searches from a clerk. The main issue was that many systems were designed in-house with the primary function of storing data; few allowed the joining up of data, which remained siloed within its own system. A person in a system for recording missing persons would need to be manually checked against another system for recording warrants. Indeed, it became a skill in itself to query and navigate the multiple systems a force might have.
2000s - accountability and the implementation of the National Intelligence Model Police chiefs came under increasing pressure to be accountable for their expenditure and needed to target resources where and when they were required to cut crime and reassure the public. As home computing and the internet took off, policing took a while to catch up. However, when a new model for intelligence led policing - the National Intelligence Model (NIM) - was launched in the early 00s, policing reform was well underway. It was finally recognised that the rich picture of data that organisations had collated over the last 10+ years was not being used to its maximum benefit. The need to identify and focus on key priorities drove an essential requirement to get data out of the siloed systems and have it inform the strategic and tactical direction of UK policing. Commercial off the shelf products The dawn of the new millennium brought yet more sophisticated technology. Most forces moved from local control rooms to larger divisional or force-wide control rooms with increasingly sophisticated toolsets that integrated call data with maps and resource deployment boards. Some systems adopted a ‘golden nominal’ concept, which provided a single view of a person throughout their involvement with the police, be this from missing persons or accidents, to custody systems and criminal justice services. The availability of improved data sources provided forces with opportunities for better insight into the demand for and use of their resources. Some of the early adopters of commercial off the shelf products began to realise that, although the user interface and data storage seemed to fit the bill, the reporting capabilities of their systems was very limited. This resulted in a new specialism within policing and led to the rise of the analyst who could exploit new digital technology such as data reporting tools and geographic information systems to mine data and join up the siloed information. They could then produce an array of maps and reports on crime and intelligence subjects, and they became instrumental in the desire to make sense of the data captured on a daily basis. 2020s - data migration and business intelligence solutions Policing has developed an appetite for improvement and strives to deliver better value against a background of budget pressures. Digital policing strategies across the UK have now replaced many of the disparate systems implemented over the last 20 years with modern systems that allow integration with mobile technologies. This not only releases officers from desks but enables faster data input and access, increasing insight into demand, trends and performance. However, this process of upgrading systems can present other challenges, such as understanding which data is helpful in fighting against crime and which is superfluous. Guidance and legislation have changed since these early systems came into being and the requirement to be compliant with Part 3 of the Data Protection Act 2018 and guidance from the Information Commissioner brings a desire to take across only data that is necessary and proportionate. Many of the old systems had no weeding capability, and data maintenance to address data quality issues was often seen as a luxury forces could do without. This is where specialists in data migration can help. 
Working as a data partner enables those skilled in data science to extract what is relevant, transform it into a suitable format accepted by the new data destinations, and load it into the new systems with the minimum of disruption to business-as-usual processes. A successful migration of legacy data is of vital importance, as it removes the need to keep legacy datasets accessible and ultimately reduces costs and risk for the organisation.

Back to the future

It's virtually impossible to implement digital transformation overnight, but with thorough preparation, communication and planning, it can be successfully achieved. That process of change will bring benefits and insights from the data that organisations already have and will continue to cultivate. Improving data quality will allow data integration with internal and external datasets and provide an advanced capability to intelligently fight crime whilst delivering accountable policing. The reality is that technology will continue to advance but, if we consider how far we have come and develop solutions that carry the valuable data along on that journey, organisations will gain long-term benefit from meaningful data.

Whilst predicting the future can never be an exact science, the last year has seen increasing demand for instant information, and people have become more interested in what the data actually means. People who had never previously looked at spreadsheets or charts have become armchair statisticians. That demand is unlikely to diminish, and the appetite for instant access to, and analysis of, data is expected to increase. If the data is in good health, it will enable forces to use it as an asset to help achieve their aims and objectives of preventing and detecting crime and keeping communities safe.

1. The Scottish Criminal Records Office database, now the Criminal History System.
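As a loose sketch of the extract, transform and load approach described in the data migration section above, the following Python fragment moves a hypothetical legacy CSV export into a new database. The file name, field names and schema are all invented; a real migration would also apply weeding rules, data-quality checks and audit logging.

```python
import csv
import sqlite3

def extract(path):
    """Pull rows out of a legacy export (here, a simple CSV dump)."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(row):
    """Keep only the fields the new system needs, in its expected shape."""
    return {
        "record_id": row["ID"].strip(),
        "offence": row["OFFENCE_DESC"].title(),
        "recorded_on": row["DATE"],   # would be normalised to ISO 8601 in practice
    }

def load(rows, db_path="new_system.db"):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS crime_records "
                "(record_id TEXT PRIMARY KEY, offence TEXT, recorded_on TEXT)")
    con.executemany("INSERT OR IGNORE INTO crime_records VALUES "
                    "(:record_id, :offence, :recorded_on)", rows)
    con.commit()
    con.close()

# Example run (the export file is hypothetical):
# load(transform(r) for r in extract("legacy_export.csv"))
```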
<urn:uuid:e4f92e80-eabf-4903-b63e-42d6ccd82e0b>
CC-MAIN-2022-40
https://www.capita.com/our-thinking/evolution-of-data-policing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00039.warc.gz
en
0.965449
1,575
2.921875
3
The term “cloud services” refers to a wide range of services delivered to companies and customers over the Internet. These services make it easy for your employees to access various business applications and information. Whether or not they're aware of it, most employees use cloud services throughout the workday, from checking email to collaborating on documents. Cloud computing vendors and service providers fully manage the services they deliver. You don't have to host applications on your own servers; instead, they can be accessed from the servers of a cloud service provider. You will need to decide how to leverage the cloud: a public cloud, a private cloud, or both.

Services that a provider makes available to numerous customers over the web are called public cloud services. The providers mentioned throughout this article all deliver cloud-based services in this way. Cloud services are valuable because they let organizations share resources at scale, which allows them to offer employees more capabilities than they could afford if they had to rely on their own physical servers and storage.

Services that a provider does not make generally available to corporate users or subscribers are referred to as private cloud services. The private cloud model is a common method of providing web hosting services. Data and applications are provided through the organization's own internal infrastructure, so sensitive information and intellectual property stay under the organization's control. Companies dealing with sensitive data often use private clouds to leverage advanced security protocols and extend their resources. These companies include hospitals, banks, insurance firms, and pharmaceutical companies.

A hybrid cloud solution combines a private cloud with public cloud services. This model is often used when an organization needs to store sensitive data in the private cloud while employees still need to access applications and resources in the public cloud for everyday communication and collaboration. Proprietary software is used to connect the multiple cloud services.

There are three basic types of cloud services:

1. Software as a Service (SaaS)
The most widely recognized type of cloud service is Software as a Service, or SaaS. This broad category encompasses a variety of services, including file storage and backup, web-based email, and project management tools. Examples of SaaS providers are Dropbox, Google Workspace (formerly G Suite), Microsoft Office 365, Slack, etc.

2. Infrastructure as a Service (IaaS)
Infrastructure as a Service supplies the underlying infrastructure for running your workloads in the cloud. Providers such as Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure let you run your web applications and databases without installing and maintaining the hardware yourself. These service providers maintain all of the storage servers and networking equipment, and they may also offer additional services such as load balancing, firewalls, and web application security. Many well-known SaaS providers run on IaaS platforms, including AWS, Google Cloud, Microsoft Azure, and VMware Cloud.

3. Platform as a Service (PaaS)
The platform-as-a-service, or PaaS, model lets developers build applications on a shared, provider-managed infrastructure.
Platform as a Service (PaaS) is an on-demand service that provides a platform and operating system, and programming language so that software developers can create their cloud-based applications. Many IaaS vendors offer PaaS capabilities, including the examples listed above. Key advantages of using cloud services include: Because the cloud service provider supplies all necessary resources and software, a company can save money by not investing in resources and additional staff. In addition, this makes it easy for you to scale your business as your users’ needs change. Whether that means adding more licenses to support a growing staff or expanding and enhancing the applications themselves, you’ll get all of your work done in no time. Some cloud services are provided on a monthly or annual subscription basis, eliminating purchasing on-premises software licenses. By using this cloud-based computing model, organizations can access software, storage, and other services without investing in the underlying infrastructure or handling maintenance and upgrades. Cloud services give companies the ability to purchase services on an on-demand basis. There are a couple of ways to end an app. First, if the business is still active, but there is no longer a need for that particular application, the business can cancel the subscription and shut down the service. With cloud computing becoming more prevalent, its applications are expanding as well. On-premises software deployments will continue to simplify how organizations deliver mission-critical apps and data to the workforce. In just a few short years, cloud-based services have transformed the way people work and businesses operate. There are many different cloud-based services to choose from, from application delivery to desktop virtualization solutions and beyond. When using AppViewX ADC, it’s easy to deploy virtual machines, application software, or your data. It’s what works best for your business. You may be looking for a way to keep business-critical apps in your private cloud or move them to multiple public cloud services. Still, you may be worried about the security and performance of these different cloud environments. To support many hundreds or thousands of people, organizations need to be able to do so securely—even if those people are in other locations.
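To make the IaaS model above a little more concrete, here is a minimal sketch using the AWS SDK for Python (boto3) to launch a single virtual machine. It assumes AWS credentials and a default region are already configured, and the AMI ID shown is only a placeholder.

```python
import boto3

# Assumes AWS credentials and a default region are configured
# (for example via environment variables or ~/.aws/config).
ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```

The point of the example is that the compute, storage and networking behind that one call are owned and maintained by the provider; the customer only rents capacity on demand.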
<urn:uuid:8c5cb162-28bf-4d32-9cb1-510c9d481907>
CC-MAIN-2022-40
https://www.appviewx.com/education-center/cloud-services/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00039.warc.gz
en
0.940706
1,102
2.90625
3
Containers sound so simple. We know what the word means: It’s something that you use to hold stuff. Just do a Google image search: The top visual explainer is a shipping container. This translates reasonably well to a software context: A container is still essentially something that we put stuff in; in this case, the “stuff” is an application’s code as well as everything that code needs to run properly. Simple enough, right? Software containers can nonetheless be a bit challenging to explain, particularly if the audience isn’t technical and doesn’t understand certain fundamentals about how software gets built and operated. [ Read also: What's a Linux container? ] So we asked several experts to take a stab at explaining containers in the plainest possible terms. Their definitions will give you a solid basis for explaining containers to others, whether your audience is technical or not. Gordon Haff and William Henry, technology evangelists, Red Hat: Haff and Henry offer a good container primer in their eBook, From Pots and Vats to Programs and Apps: How Software Learned to Package Itself. The book certainly gets technical at times, yet Haff and Henry ultimately define containers in wonderfully straightforward fashion, relying little on terms that require a software development or systems administration background. You don’t even need to know what an operating system is, for example. Here’s how they explain containers: “Imagine you’re developing an application. You do your work on a laptop and your environment has a specific configuration. Other developers may have slightly different configurations. The application you’re developing relies on that configuration and assumes specific files are present. Meanwhile, your business has test and production environments which are standardized and have their own configurations and their own sets of supporting files. You want to emulate those environments locally as closely as possible, but without the work of recreating the server environments manually. So, how do you make your app work across these environments, pass quality assurance, and get your app deployed without massive headaches, rewriting, and break-fixing? The answer: Containers. The container that holds your application also holds the necessary configurations (and files) so that you can move it from development, to test, to production – without nasty side effects.” Ed Featherston, VP and principal cloud architect at Cloud Technology Partners: Like Haff and Henry, Featherston says it often helps to explain the historical context for containers, including the virtualization boom. Think of this as the “why.” For example, for all of virtualization’s benefits, it came with costs, both literal and figurative. As Featherston has written: “Containers try and find a balance between isolating the applications virtually without requiring the overhead and licensing issues of bringing along virtual hardware and a guest OS that virtual machines require.” Eva Tuczai, engagement & product Manager, advanced engineering, Turbonomic: “Containers solve the packaging problem of how to quickly build and deploy applications. They’re akin to virtual machines, but with two notable differences: they’re lightweight and spun up in seconds; and they move reliably from one environment to another (what works on the developer’s computer will work the same in dev/test and production).” “In the digital era, applications are the business – speed and innovation are creating winners and losers across all industries. 
The beauty of containers, and why organizations are moving in this direction, is that they dramatically speed up development.” [ Related read: 5 Kubernetes success tips: Start smart. ] Amir Jerbi, CTO and co-founder at Aqua Security: “Containers are a way for developers to easily package and deliver applications, and for operations to easily run them anywhere in seconds, with no installation or setup necessary. They enable this by embedding all the code needed in the container and using a process called a container engine to run the containers atop an operating system such as Linux or Windows. Containers also make it easy to implement microservices architectures, and when used in such environments they also make the application very easy to scale up or down, because every component can be run on-demand, even for a few seconds or minutes, and then be shut down equally rapidly.” [ Read also: Microservices and containers: 6 things to know at start time. ] Colby Dyess, director of cloud marketing at Tufin: “Containers are similar to virtual machines, with two key differences. First, virtual machines package application code and an entire operating system. Containers package merely the code and the essential libraries required to run the code. Second, virtual machines provide an abstraction layer above the hardware, so a running instance can execute on any server. By contrast, a container is abstracted away from the underlying operating environment, enabling it to be run just about anywhere: servers, laptops, private cloud, public cloud, etc. These two characteristics free developers to focus on application development, knowing their apps will behave consistently regardless of where they’re deployed.” [ Need to explain Kubernetes, too? Get our Kubernetes glossary cheat sheet for IT and business leaders. ]
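To ground these definitions, here is a minimal sketch using the Docker SDK for Python. It assumes Docker Engine is running locally and the third-party docker package is installed; the image and command are just a small public example.

```python
import docker

# Connect to the local Docker Engine (assumes Docker is installed and running).
client = docker.from_env()

# Run a one-off command inside a tiny container; the image is pulled if needed.
output = client.containers.run("alpine:latest", "echo hello from a container")
print(output.decode().strip())

# The same image runs identically on a laptop, a server, or a cloud VM,
# which is the portability the definitions above describe.
```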
<urn:uuid:5da37306-9f9e-433f-bcf9-0319e7e42f75>
CC-MAIN-2022-40
https://enterprisersproject.com/article/2018/8/how-explain-containers-plain-english?sc_cid=70160000000h0axaaq
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00039.warc.gz
en
0.924112
1,125
2.78125
3
This is a depressing, mostly unsatisfying review of the challenge that global warming poses. Its author considers additional, global-level solutions, but you may not be fully persuaded by the solutions as framed in this text. No matter. You must read this book anyway. Sedjo's credentials are impressive (he shared in the Intergovernmental Panel on Climate Change's Nobel Peace Prize), but the writing doesn't have the readability and narrative integrity of, say, an Elizabeth Kolbert, who also writes in this space. As a result, when Sedjo claims that the Paris Agreement is inadequate, the argument seems at once both unsurprising and alarming. You read on, but doggedly so. It's mandatory homework. Study, or doom many species of flora and fauna to extinction. And that's just the beginning. He argues that greenhouse gas reductions are insufficient, and that the proposed metrics under the Paris Agreement fail to cover important topics such as geoengineering, urban adaptation to alternative sources, and reductions in global reflectivity (albedo) using aerosols. Sedjo's argument studies the 1815 eruption of Mount Tambora as a possible example. Aside: for a sample academic take, see papers such as Achmad Djumarma Wirakusumah and Heryadi Rachmat 2017 IOP Conf. Ser.: Earth Environ. Sci. 71 012007. The reflectivity of landscapes such as grasslands, snow and prairie, he argues, should be considered along with other methods, even though, as he says in an interview, it "has little policy relevance." For urban areas, he says, the discussions tend to cover open space and tree planting while ignoring albedo effects. Sedjo's general thread, or "Plan B," is to pursue greenhouse gas reduction a la the Paris Agreement while also giving serious consideration to other, much bigger meta-approaches. The index, footnotes and references in this book were only partially satisfactory. (To learn why, read on.) RECOMMENDED: Get this book anyway, despite these limitations. The topic is too important to ignore "what else we need to be doing," even if the arguments are only partially persuasive.

This nonfiction genre: Awkward at best

This sort of book awkwardly straddles genres between the TLDR essay and the academic text. The classic TLDR essay is the sort you'll find in Harper's, the Atlantic, the New Yorker, and many others. The genre is characterized by lightweight citations, if any citations are provided at all, and an editorial stance that everything must be explained without external references (i.e., assume a lazy readership). Instead of citing papers, the author mentions the researcher's affiliation, as if that makes the work more credible. Footnotes, if provided, may not be inline. Yes, you can read endnotes at the end of a chapter, but good luck connecting them all to the relevant claims in the body of the work. The academic genre solves these problems, but likely cites more references than most readers have access to, and will often cite concepts and principles that are highly domain specific, which means readers had better have Wikipedia open and ready. If forced to choose, pick the academic genre. The long-form essay format (taken to book form, as is the case with Surviving Global Warming) serves certain rhetorical purposes and is a more accessible story format, but is ultimately unsatisfying for discerning readers, especially for challenges as dire as global warming. This may serve Amazon's bookseller purposes (more books!), but your intellectual task will feel incomplete.
<urn:uuid:c925e236-faf0-449c-8448-b61e10e7592d>
CC-MAIN-2022-40
https://knowlengr.com/category/resource-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00039.warc.gz
en
0.924373
763
2.78125
3
HIPAA & The Privacy Law HIPAA Protects Patient Privacy Over the last 30 years, patient records shifted from paper to electronic formats and digital records. The digitization of individual medical histories also meant there was an increase for more individuals to gain access and use personal health data to treat their patients. It also meant more opportunity to have bad actors misuse and disclose sensitive and personal health data of individuals. Health care providers, understanding their relationships with their patients, and the amount of trust it requires, have always kept to a long-standing tradition to protect patient privacy. Before developing the HIPAA regulations, previous legal protections and regulations to protect patients at the federal, tribal, state, and local levels were deficient and ineffective. Privacy concerns for patient data were addressed with the development of the HIPAA regulations. HHS developed the privacy standard to give basic protections to patients while still balancing public health and safety needs. The passing of the HIPAA regulations also required employers to offer health insurance coverage (COBRA) to employees who are voluntarily leaving their jobs or even fired. The legislation developed national standards for electronic health care transactions. These regulations help to ensure patient privacy while also improving their access to quality medical care. Medical records can be shared instantly in an emergency or other health situation, versus waiting days or even until the doctors can converse about their health needs. This instantaneous exchange system allows doctors immediate access to their patient’s records in emergency rooms, thus facilitating better health care. The Privacy Law – What is it? As part of HIPAA, Congress added additional regulations in 2000 and updated them in 2003. HHS developed rules that protect the privacy and security of specific health information. Two Rules were enacted that are known as The Privacy Rule and The Security Rule. The Privacy Rule is more formally known as Standards for Privacy of Individually Identifiable Health Information. The Privacy Rule set national standards for the protection of specific health data. The Security Rule is a set of federal rules for protecting detailed health data stored or transferred in electronic form. Both regulations work together to protect patients. The Security Rule puts into operation the protections that are contained in the Privacy Rule by defining the challenges that must be overcome by standardizing technical and non-technical safeguards for the patient’s health data. Another part of HHS, the Office for Civil Rights or OCR, holds the responsibility for enforcing both the Privacy and Security Rules through voluntary compliance and severe civil and criminal penalties for those who violate patient security. For any time of personal data, companies and health providers alike face risk management procedures. Risk management involves steps to protect the data they store from being breached or hacked. The Administrative Safeguards are requirements in the Security Rule that demand covered entities or health providers to enact risk analysis procedures as part of their security management. HIPAA addresses risk analysis and risk management by encouraging and promoting specific security measures that are considered reasonable and appropriate. For health providers, risk analysis impacts the implementation of safeguards. 
Risk analysis procedures include, but are not limited to, the following activities:
- Health providers must take time to evaluate the likelihood and impact of potential risks.
- Providers should be prepared to enact appropriate security measures that address all the potential risks identified in the risk analysis.
- Security measures should be documented, including the rationale for any measures adopted.
- Providers should not only implement security measures but also provide and maintain continuous, appropriate security protections for their data.
- Risk analysis is an ongoing process and should be done regularly.
- Covered entities should regularly review access and security incidents.
- Providers should periodically evaluate the effectiveness of their implemented security measures. These reviews should also include regular assessments of new and current potential risks to patient data.

Who Does the Law Impact?

HIPAA regulations were enacted to be a multi-tiered approach to improving the health insurance and data system. The laws impact nearly everyone, with different rules depending on whether you are on the providing or receiving end of the healthcare system.
- HIPAA assists those individuals who have group insurance plans through their employers or unions.
- The regulations also help those who are self-insured by employers.
- The rules impact healthcare workers. This includes all health system workers, from the cleaning staff to administration and physicians, and changes how patients are approached and how their personal data is handled.
- HIPAA affects insurance companies, healthcare providers, clinics, therapists, and patients.
- ALL healthcare employees and organizations that use, store, maintain, or transmit patient data must maintain compliance with HIPAA regulations.

Congress enacted HIPAA to ensure patient privacy, reduce fraud, and improve the healthcare data system. From time to time, new regulations or controls are added to the rules so that they remain effective. The government has estimated that following the rules could save healthcare providers billions of dollars annually. For providers, the prospect of significant penalties for non-compliance is a strong incentive to learn how to prevent security risks and to properly maintain their data. HHS's additional education and resources help providers learn to keep their data secure and save money. Through the available resources and the help of privacy professionals, providers and other organizations can put their focus on their profit margin, as they no longer have to fear being continually audited.

Protecting Data – The Rules

The Privacy Rule was created to protect certain information that providers use and disclose to other parties for the patient's health care. The data that is required to be kept safe from disclosure or breaches is called Protected Health Information, or PHI. This is identifiable health information that is transmitted, stored, or shared through electronic media or other networks. PHI is data that relates to:
- The past, present, or future physical or mental health, health status, or condition of an individual patient;
- The provision of health care to an individual; or
- Information regarding payments for providing health care to an individual.
De-identified data does not require privacy protections, and the Privacy Rule does not cover these data sets. De-identifying data is processed by an adequately qualified statistician or privacy professional that uses analytics to lower the risk to data sets by substantially limiting the data. The elimination of data and making sure specific details are missing, removed, or encrypted reduces the ability for bad actors or breaches that would allow recombination of data points to determine a person’s identity. These combinations are generally done through the ‘safe-harbor method’ in which the business or its cleared employee de-identifies data by removing 18 identifiers. The business or covered practitioner would no longer have the actual original data. It protects the health care provider as well as the patient. In some instances, clinical research and other activities may find that working with de-identified data has limited value. What are the 18 data points that are removed for de-identifying data through the safe-harbor method? All HIPAA 18 identifiers must be removed before the data is stored. As listed by the HHS, these data points are: - Address (all geographic subdivisions smaller than a state, including street address, city county, and zip code) - All elements (except years) of dates related to an individual (including birth date, admission date, discharge date, date of death, and exact age if over 89) - Telephone numbers - Fax number - Email address - Social Security Number - Medical record number - Health plan beneficiary number - Account number - Certificate or license number - Vehicle identifiers and serial numbers, including license plate numbers - Device identifiers and serial numbers - Web URL - Internet Protocol (IP) Address - Finger or voice print - Photographic image – Photographic images are not limited to photos of the face. - Any other characteristic that could uniquely identify the individual Automated Redaction Saves How do companies get ahead of the curve? Removing, redacting, or de-identification of data can take hours if done manually by humans. These rules could put the cost of data protection out of reach of most small health care providers or physicians. Using an intelligent redaction system such as CaseGuard can help companies meet all their goals for privacy protection and many other beneficial uses to grow your consumer base. Automated redactions software is used to locate and redact sensitive information automatically. CaseGuard, an automated redaction software, works on all digital media, including documents, databases, images, audio, and video files. It works by using artificial intelligence and machine learning to create a smooth process of removing restricted identifiers in electronic data. It is designed to fit or integrate with most of today’s business software, hardware, and data management systems. It can be automated to redact personally identifiable information at the time the data file is being created, or it can be used to remove data from previous records, recordings, or images. Automating the redaction process using artificial intelligence makes the entire redaction process far more manageable, efficient, and accurate. Accuracy is essential as human error, leaving out a single frame in video content can reveal an individual’s identity. It can be far more critical than merely a breach of someone’s identity when concealing data from law enforcement agencies. For example, one lost frame can cost an officer or other informant their lives. 
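To illustrate the basic idea behind stripping identifiers, here is a deliberately simplified Python sketch that masks just three of the eighteen HIPAA identifiers in a block of text. Real safe-harbor de-identification must cover all eighteen identifiers across documents, images, audio and video, which is why dedicated redaction tools are used in practice; the sample note and patterns below are invented for the example.

```python
import re

# Patterns for just three of the eighteen identifiers; a real safe-harbor
# process must address all of them, across every file format.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Patient SSN 123-45-6789, phone 555-867-5309, contact jane@example.com"
print(redact(note))
# Patient SSN [SSN REDACTED], phone [PHONE REDACTED], contact [EMAIL REDACTED]
```

Note that names, dates and free-text details would still remain in this toy example, which is exactly why automated, purpose-built redaction is preferred over simple pattern matching.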
For all companies, having the assurance of total accuracy and complete process of the redaction of sensitive data protects their data assets and the company’s well-earned reputation. The cost savings for companies comes with a reduced workload that requires personnel hours. What once took several hours to do a complete redaction now can be done in minutes. More benefits to the company’s bottom line come with all the additional features included in CaseGuard’s redaction software. Elements that contain translations, transcription and captioning of video data make the redactions system so much more than a data privacy system. Imagine how much further your company’s message will go if your social media posts and video content can be translated into 32 different languages or captioned at the push of a button. Growth. Data protection. Security. Leadership. Companies that choose CaseGuard to protect their data lead the market in their industries. Leadership matters. The benefits provided by CaseGuard’s intelligent redaction software are features that have made CaseGuard the number one leader in privacy and redactions systems.
<urn:uuid:7290035a-517a-4d39-b403-542a9175e87f>
CC-MAIN-2022-40
https://caseguard.com/articles/hipaa-the-privacy-law/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00039.warc.gz
en
0.931371
2,280
3.03125
3
Virtualization provides the bridge between how information technology services are delivered in the current data center environment to how those same services and applications are delivered in a cloud environment. Understanding virtualization is critical to understanding cloud computing. Cloud computing is the model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). It features rapid provisioning with minimal management effort or service provider interaction. Cloud computing is composed of three service models, four deployment models, and five essential characteristics. Virtualization involves the sharing of physical computer components and includes a logical abstraction (a software representation) of computer physical assets (i.e., network interfaces, operating systems (OSs), disk drives, motherboards, memory, etc.). In other words, virtualization is the use of software to enable hardware to support multiple instances of operating systems, networks, and storage arrays at the same time. The three primary types of virtualization are network virtualization, server virtualization (operating systems), and storage virtualization: Network virtualization – Software enables available hardware (routers, switches) and bandwidth to be split into channels, and then logically associated with servers. Virtualization, in this case, disguises the true complexity of the network by breaking it up into smaller, more manageable pieces. Server virtualization – Another term used for server virtualization is server consolidation, and illustrates the more traditional concepts of virtualization, using the hardware more efficiently. To accomplish this, the physical components of a server, such as processors, memory, and network interface cards, are abstracted into a single unit. Server virtualization logically breaks up a server into smaller, more manageable units, just as network virtualization logically breaks up a network into smaller, more manageable pieces. Storage virtualization – Storage area networks (SAN) have done most of this work for us. The basic virtualization concept in effect here is the pooling of all physical storage and making it appear as a single unit that can be managed from a central console. By virtualizing the physical assets, the virtualized assets can be viewed as independent building blocks with less structure than the original physical asset. The virtualized assets now become independent building blocks that can be mixed and matched to suit a specific need. In short, the virtualized computer system will share the physical assets of the original system architecture but can be easily configured to address the specific needs of multiple service requirements. From the instant that the first computer ENIAC was born on February 14, 1946, the effort to better utilize hardware resources began. In the 1950s, as the evolution of computers proceeded, the need to have “supervisors” or “operating systems” was borne of the need for a simpler user interface; the concepts of multitasking and multithreading were thus invented. By the 1960s, these concepts were becoming reality, and the concept of virtualization was born. By 1972, the first commercially available virtual operating system was released. The adoption of virtualization was slow until the first cloud arrived in the early 1990s: the Internet. 
By 1999, Salesforce.com (a cloud computing company founded by four former Oracle executives and headquartered in San Francisco) implemented a solution to deliver enterprise applications through a website, which was an early example of cloud computing. With the introduction of Salesforce.com, software as a service (SaaS) became a reality. Amazon Web Services In 2002, Amazon Web Services (AWS) was launched. This service provides customers with the ability to virtualize their storage, network, and computing. This is Infrastructure as a Service (IaaS). Amazon became a cloud services provider. Not sitting on their laurels, Amazon took this static structure to the next level when they launched their Elastic Compute Cloud (EC2) in 2006, which allowed organizations to lease computing systems to run their own applications. Google Browser Applications In 2009, Google and other service providers started offering browser-based enterprise applications. This was made possible by the advances in browser technology and the increased popularity and processing power of personal computing devices, including small laptop computers, smartphones, and tablets. These services empower us to use our computing devices as extensions of ourselves, optimizing and informing our work and play. Virtualization: A Modern Resource Sharing Virtualization leverages the fact that at any given moment a computer system is rarely using 100% of its available resources. Since some resources are not being used, they can be allocated or assigned to do other tasks, thereby increasing productivity. Virtualization decouples the traditional one-to-one relationship between applications and the physical hardware on which they run, enabling efficient resource sharing.
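The resource-sharing argument above rests on the observation that servers rarely run at full utilisation. As a small illustration, the following Python sketch (using the third-party psutil package) samples current CPU and memory usage to show the idle headroom that virtualization puts to work; actual numbers will vary by machine.

```python
import psutil  # third-party: pip install psutil

# Sample current utilisation; on most machines both numbers sit well
# below 100%, which is the headroom that virtualization exploits.
cpu_used = psutil.cpu_percent(interval=1)        # percent over a 1-second sample
mem_used = psutil.virtual_memory().percent

print(f"CPU in use: {cpu_used:.0f}%  (idle headroom: {100 - cpu_used:.0f}%)")
print(f"RAM in use: {mem_used:.0f}%  (idle headroom: {100 - mem_used:.0f}%)")
```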
<urn:uuid:dd888896-19b1-44e6-a0aa-553ad9f3cf5c>
CC-MAIN-2022-40
https://electricala2z.com/cloud-computing/virtualization-in-cloud-computing-virtualization-types/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00039.warc.gz
en
0.939379
979
3.515625
4
Health information is among the most sensitive categories of data. Collected in high volume and stored in systems that are often vulnerable, health data has become an attractive target for malicious outsiders. The rise in attacks has led to a string of high-profile data breaches the world over, with healthcare companies bearing the reputational, legal and financial consequences in their aftermath. The healthcare sector reported the highest cost per data breach of any industry in 2019. According to the Ponemon Institute and IBM Security’s Cost of a Data Breach Report, the cost of a data breach for healthcare institutions was approximately $6.45 million, 65% more than the average cost per breach. The long-term impact of data breaches means that companies continue to pay for them years after they took place, with highly regulated industries such as healthcare seeing high costs continue even three years after a data breach has occurred. It is therefore in the interest of healthcare organizations to protect their data and avoid the disastrous consequences of a data breach. But what are the key data security concerns they face and how can they address them? Let’s take a closer look! One of the biggest contributors to data breaches are employees themselves. Whether it’s carelessness on their part as they work with sensitive data or their susceptibility to phishing or social engineering attacks, insiders often pose the highest risk to personal information. It is therefore essential that employees receive adequate training that educates them on the best practices of handling sensitive data and the importance, both regulatory and reputational, of following them. Training is especially effective when it comes to outsider threats that target individuals through infected attachments, links to malicious websites, or seemingly legitimate requests for sensitive information. Employees are often unaware of the existence of such practices and, once informed, are vigilant about them, reducing their chances of being tricked into sharing sensitive data or infecting their work devices or the company network. When it comes to human error, however, training is less effective as it implies an unconscious mistake made by an employee. This means it can happen to anyone, regardless of how well-informed they are on the dangers of data breaches. Employees feeling the pressure of a deadline or who are simply feeling tired or unwell, can easily send an email to the wrong person or publish a document publicly. In these sorts of cases, solutions like Data Loss Prevention (DLP) tools can support healthcare companies to keep sensitive data secure. Through predefined policies, they can monitor and control the transfer of sensitive data to ensure it is not sent through unauthorized channels such as messaging apps, third-party file transfer websites, or cloud services. DLP solutions, through their sensitive data monitoring options, can also help healthcare providers identify which employee practices need to be corrected, supporting them in building effective training that addresses real-world situations employees face. Data Protection Legislation Specialized legislation such as the Health Insurance Portability and Accountability Act (HIPAA) in the US and the General Data Protection Regulation (GDPR) in the EU makes the protection of health information mandatory by law and puts the burden of responsibility squarely on organizations’ shoulders. Noncompliance comes with significant fines. 
For example, depending on the level of perceived negligence at the time of a HIPAA violation, a healthcare company can be required to pay as much as $1.5 million per year for each violation. Compliance is, therefore, a key concern for many healthcare companies. They must research which legislation applies to them and the requirements that they are obligated to follow. Auditing is an essential part of any compliance efforts as are data discovery tools. Organizations must, first of all, find out what data they are collecting falls under the incidence of laws such as HIPAA, where it is being stored on their network, and how it is being used by their employees. This can easily be done through DLP tools that come with predefined policies for legislation such as HIPAA or GDPR. By using them, companies do not have to go through the trouble of defining what sensitive data means in the context of such laws but can use policies already verified for compliance use. Data Breach Response No data protection strategy is foolproof. Even the strictest policies can sometimes prove insufficient. For example, when a new system vulnerability is exploited before it’s patched or an employee is targeted by a very convincing social engineering attack. For these possibilities, no matter how small, healthcare companies must be prepared to deal with a data breach. This can be done through a data breach response plan. By preparing for the eventuality of a data breach, when a security incident occurs, employees will already know what is expected of them and how they can best mitigate its consequences which leads to quick reaction times that are critical when it comes to dealing with a data breach. A data breach response plan also helps companies save money. According to the 2019 Cost of a Data Breach Report, organizations that already had an incident response team in place and extensively tested their data breach response plan saved over $1.2 million when they were breached. Third-Party Security Practices Many healthcare companies work with contractors and while they themselves might have strong data protection strategies in place, these third parties may not. Legislation like HIPAA and GDPR restrict how personal information can be shared with third parties, with organizations collecting the data still liable in the face of the law in case a data breach occurs. This means that, should a vendor suffer a data breach, fines would be issued not only to the party responsible for it but also the data controller who had an obligation to protect the data it collected. Healthcare companies must therefore verify that any contractors they work with have data protection policies in place that align with their own cybersecurity strategies, ensuring a satisfactory level of protection for any data that would be transferred to them. Download our free ebook on Data Loss Prevention Best Practices Helping IT Managers, IT Administrators and data security staff understand the concept and purpose of DLP and how to easily implement it.
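Returning to the predefined-policy idea discussed above, here is a minimal sketch of how a DLP-style content scan might flag sensitive identifiers before an outbound transfer is allowed. The patterns, channel names, and blocking rule are illustrative assumptions, not the actual rule set of any particular DLP product.

import re

# Illustrative patterns only; real DLP products ship much richer, validated rule sets.
POLICY_PATTERNS = {
    "US Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Medical record number (hypothetical format)": re.compile(r"\bMRN-\d{6,8}\b"),
    "Payment card number (13-16 digits)": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

ALLOWED_CHANNELS = {"corporate_email", "approved_sftp"}   # assumed policy whitelist

def scan_outbound(text, channel):
    """Return policy-violation messages for one outbound transfer (empty list = allowed)."""
    findings = [name for name, pattern in POLICY_PATTERNS.items() if pattern.search(text)]
    if findings and channel not in ALLOWED_CHANNELS:
        return ["Blocked on '{}': matched {}".format(channel, name) for name in findings]
    return []

# Example: patient details pasted into an unapproved messaging channel
print(scan_outbound("Patient MRN-0042317, SSN 123-45-6789", "personal_webmail"))

A real deployment would also log each finding, which is what lets the same tool report which employee practices need correcting, as noted above.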
How to Password Protect a ZIP File Today, data compression is an integral part of the workplace. Just think about it. It's likely you have received or sent out a ZIP file sometime this week. What makes ZIP files so ubiquitous is their ability to carry more data at faster speeds than simply sending individual files. Essentially, ZIP is a type of file format that is used to compress and archive multiple files together into a single location or file with the .zip extension. While ZIP files might be great for transferring data quickly and easily, they are not famous for their security. But there's a way to up the security of ZIP files and the way, as you might've guessed, is through passwords. Today we're looking at how you can password protect a ZIP file to ensure that the file is protected. Even if it happens to land in the wrong hands, its contents will remain secure. How to password protect a ZIP file on Windows The following instructions apply to most popular and active Microsoft Windows operating systems such as Windows 7, Windows 8, Windows 10, and Windows 11. To password protect a ZIP file on Windows, first you will need to download a free and open-source file compression application called 7-Zip. Once you have downloaded 7-Zip, you need to install it, as you will use it to password protect the ZIP file. So without further ado, here's how you password protect a ZIP file with the help of 7-Zip: - Select the File or Folders that you want to compress into a ZIP file. - Right-click and in the drop-down menu locate 7-Zip. - Select Add to archive. - In the menu window, locate the Archive format section and select zip. - Locate the Encryption section in the menu window. - Type in the password that you wish to use into the Enter password field. - Retype the password in the Reenter password field. - Click OK. That's it. Now your ZIP file is password protected and is safe from prying eyes. How to password protect a ZIP file on macOS Password protecting a ZIP file on macOS is, in essence, a very similar process to how you password protect a ZIP file on the Windows operating system. However, instead of 7-Zip, macOS users should download Keka — which is an equivalent app to 7-Zip — and use it to compress and password protect files. - Once you have downloaded the Keka app, you need to install it and launch it. - Close the Preferences window. - In the smaller window, enter your password and repeat it. - Now drag and drop the Files or Folders that you want to compress on top of the Keka app window. - Select the location where you want to save the new ZIP file and click Compress. That's it. You're all done. How to open a password protected file To open a password protected ZIP file, all you need to do is double-click it and enter the correct password. Once you enter the password, the ZIP file will be decompressed and you will be able to view its content. Note that this applies to both Windows and macOS computers. Store and securely share passwords with NordPass Passwords are a fact of life. We use them every day for protecting our online lives. Here's a quick fact: an average internet user has to handle around 100 passwords. That is a huge number. But that's where NordPass comes in. It's a secure and intuitive password manager. With NordPass, you can safely store and access passwords, credit card details, personal information, and notes. Additionally, you can securely share passwords with other NordPass users with a Premium Plan.
You should use the NordPass secure item sharing feature as a quick and easy way to share passwords that protect your ZIP files. Thanks to the NordPass Autosave and Autofill features, you will no longer need to type your login credentials. Advanced security features such as Password Health and Data Breach Scanner allow you to identify whether any of your passwords are vulnerable or if your data has ever been compromised in a breach. These days, a password manager is an essential security tool for any person who wishes to protect their data and identity online. Subscribe to NordPass news Get the latest news and tips from NordPass straight to your inbox.
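For readers who want to script the Windows 7-Zip steps described earlier (for example, to protect a nightly export), the same operation can be driven from 7-Zip's command-line tool. This is a sketch under stated assumptions: the 7z executable is installed and on the PATH, the flags shown match the 7-Zip version in use, and the file names are placeholders.

import subprocess

def make_encrypted_zip(archive_name, files, password):
    """Create a password-protected ZIP with the 7-Zip command-line tool.

    'a' adds files, '-tzip' selects the ZIP format, '-p' sets the password,
    and '-mem=AES256' requests AES-256 instead of the weaker legacy ZipCrypto.
    """
    # Note: secrets passed on a command line can be visible to other local processes.
    cmd = ["7z", "a", "-tzip", "-p" + password, "-mem=AES256", archive_name] + list(files)
    subprocess.run(cmd, check=True)

# Example usage with placeholder paths
make_encrypted_zip("reports.zip", ["q3_report.xlsx", "notes.txt"], "a-long-unique-passphrase")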
Normal skin contains a patchwork of mutated cells, yet very few go on to eventually form cancer and scientists have now uncovered the reason why. Researchers at the Wellcome Sanger Institute and MRC Cancer Unit, University of Cambridge genetically engineered mice to show that mutant cells in skin tissue compete with each other, with only the fittest surviving. The results, published today in Cell Stem Cell suggest that normal skin in humans is more resilient to cancer than previously thought and can still function while a battle between mutated cells takes place in the tissue. Non-melanoma skin cancer in humans includes two main types: basal cell skin cancer and squamous cell skin cancer, both of which develop in areas of the skin that have been exposed to the sun. Basal cell skin cancer is the most common type of skin cancer, whereas squamous cell skin cancer is generally faster growing. There are over 140,000 new cases of non-melanoma skin cancer each year in the UK. However, every person who has been exposed to sunlight carries many mutated cells in their skin, and only very few of these may develop into tumours. The reasons for this are not well understood. For the first time, researchers have shown that mutated cells in the skin grow to form clones that compete against each other. Many mutant clones are lost from the tissue in this competition, which resembles the selection of species that occurs in evolution. Meanwhile, the skin tissue is resilient and functions normally while being taken over by competing mutant cells. Professor Phil Jones, lead author from the Wellcome Sanger Institute and MRC Cancer Unit, University of Cambridge, said: “In humans, we see a patchwork of mutated skin cells that can expand enormously to cover several millimetres of tissue. But why doesn’t this always form cancer? Our bodies are the scene of an evolutionary battlefield. Competing mutants continually fight for space in our skin, where only the fittest survive.” In the study, scientists used mice to model the mutated cells seen in human skin. Researchers focussed on the p53 gene, a key driver in non-melanoma skin cancers. The team created a genetic ‘switch’, which when turned on, replaced p53 with the identical gene including the equivalent of a single letter base change (like a typo in a word). This changed the p53 protein and gave mutant cells an advantage over their neighbours. The mutated cells grew rapidly, spread and took over the skin tissue, which became thicker in appearance. However, after six months the skin returned to normal and there was no visual difference between normal skin and mutant skin. The team then investigated the role of sun exposure on skin cell mutations. Researchers shone very low doses of ultraviolet light (below sunburn level) onto mice with mutated p53. The mutated cells grew much faster, reaching the level of growth seen at six months in non-UV radiated clones in only a few weeks. However, despite the faster growth, cancer did still not form after nine months of exposure. Dr. Kasumi Murai, joint first author from the Wellcome Sanger Institute, said: “We did not observe a single mutant colony of skin cells take over enough to cause cancer, even after exposure to ultraviolet light. Exposure to sunlight continually created new mutations that outcompeted the p53 mutations. We found the skin looked completely normal after we shone UV light on the mice, indicating that tissues are incredibly well-designed to tolerate these mutations and still function.” Dr. 
Ben Hall, senior author from the MRC Cancer Unit, University of Cambridge said: “The reason that people get non-melanoma skin cancer is because so much of their skin has been colonised by competing mutant cells over time. This study shows that the more we are exposed to sunlight, the more it drives new mutations and competition in our skin. Eventually the surviving mutation may evolve into a cancer.” More information: Kasumi Murai et al. (2018) Epidermal tissue adapts to restrain progenitors carrying clonal p53 mutations. Cell Stem Cell. DOI: 10.1016/j.stem.2018.08.017 Journal reference: Cell Stem Cell search and more info website Provided by: Wellcome Trust Sanger Institute
Vital biometrics—the measurement of a user’s internal body functions, like heart rate, ECG and blood oxygen level—play a unique and increasingly prominent role in mobile identity tech. Also known as physiological biometrics, these indicators are most often found in wearable devices for the purpose of fitness tracking. But the same capabilities that enable those functions also make vital biometrics a valuable tool in healthcare, and particularly remote care. Wearable devices are increasingly being used to let patients record and transmit important health data without the need to go to the hospital in-person, and while many of these devices are specifically designed for healthcare purposes, even some consumer-facing wearables like the Apple Watch now have sufficiently advanced biometric capabilities to allow them to be integrated into remote care systems. Innovations in vital biometrics are also opening the door to authentication applications. Researchers have found that even physiological biometrics like heart rate can produce patterns that are unique to an individual, and some pioneering firms have started to design wearables that can actually leverage these vital signs to continuously confirm the identity of the wearer.
U.S. Consortium Tackling ‘Error Factor’ in Quantum Computing (Optics.org) Researchers in a U.S. consortium led by the Department of Energy’s Fermilab, in Chicago, say they are moving closer to solving one of the biggest challenges posed by quantum computing: the “error factor.” They hope their work will help to open pathways to the high hopes for quantum computing that researchers have been pursuing for decades. A federal grant of $115 million is funding work at Fermilab – a leading player in research on the peculiar behavior of qubits as a computational resource – and the other institutions in the consortium, called the Superconducting Quantum Materials and Systems Center, or SQMSC, to advance quantum computing. “If we don’t deliver error correction, there will be no computing,” says Bane Vasic, a University of Arizona professor of electrical and computer engineering. “No communications. No nothing, without error correction.” Vasic is director of the University of Arizona’s Error Correction Laboratory, and an architect of cutting edge codes and algorithms used in communications industries and data storage. Error in the quantum realm is any unwanted behavior that alters your information. “For example, in conventional computing an alpha particle could hit the silicon,” Vasic said. “It could destroy or flip your bit.” “This era is like what happened when electrical engineering emerged within physics,” said Vasic. “Now quantum is everywhere. Now that the theory is established, engineering challenges need to be solved to translate it into reality. The concepts of quantum computation are very exciting and beautiful.”
Virlock ransomware was first seen in 2014 but in September of 2016 was found to be capable of spreading across networks via cloud storage and collaboration apps. This strain uses a variety of threat techniques. So how does Virlock work? Unlike most ransomware, this strain not only encrypts selected files but also converts them into polymorphic file infectors, just like a virus. In addition to the file types we are used to seeing targeted, Virlock also infects binary files. This means Virlock weaponizes every single file it infects: any user who opens an infected file will in turn infect and encrypt all of their own files. Consider an example. User A and User B are collaborating through the cloud storage app Box, using a folder called "Important". Both users have some of the files within the folder synced to their own machine. User A gets infected with Virlock ransomware on their machine, encrypting all their files, including the files which are synced to Box. So, Virlock also spreads to the cloud folder and infects the files stored there, which in turn get synced to User B's machine. When User B then clicks on any of the files in the shared folder, the infected Virlock file is executed, and the rest of the files on User B's machine become infected, in turn becoming Virlock file infectors just like a virus. This example is by no means limited to just User A and User B. Because each infected file becomes an infector, the infection will spread to all users that are collaborating within the same cloud environment. Consider the User-File collaboration graph below of a typical enterprise, with users shown as black dots and files as blue dots. All the users within the red boundary could get infected with Virlock ransomware within minutes. Virlock asks the victim for a payment in Bitcoin to decrypt their files; however, unlike other ransomware strains, Virlock's ransom screen appears to be an anti-piracy warning from the FBI. The ransom note claims pirated software has been found on the infected machine and threatens the victims with prison and a $250,000 fine if they don't pay a $250 'first-time offender' fine. This is an old social engineering tactic that has been used by cybercriminals for years in an attempt to scare victims into just paying the ransom quickly. Threat actors count on ransom payments to continue upgrading their infrastructure. The problem with Virlock is that, even if a victim pays the ransom, how can they be sure every file is completely restored? If just one file is left infected, the entire network is still vulnerable, and how many organizations would pay a ransom twice to recover from the same infection? Our guess is not many. The spread of an infection like Virlock counts on users not paying attention. Yet another reason to make sure all of your users have proper security awareness training!
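To illustrate the collaboration-graph idea described above, here is a small sketch that models users and shared files as a bipartite graph and walks outward from one infected user, the way Virlock's file-infector behavior spreads through synced folders. The graph data is invented for illustration and is not from any real incident.

from collections import deque

# Hypothetical collaboration graph: which shared files each user has synced locally.
user_files = {
    "user_a": {"important/budget.xlsx", "important/plan.docx"},
    "user_b": {"important/plan.docx", "hr/policy.pdf"},
    "user_c": {"hr/policy.pdf"},
    "user_d": {"archive/readme.txt"},          # shares nothing with the others
}

def reachable_users(patient_zero):
    """Breadth-first walk: every file an infected user syncs becomes an infector,
    and every user who later opens an infected file becomes infected in turn."""
    file_users = {}
    for user, files in user_files.items():
        for f in files:
            file_users.setdefault(f, set()).add(user)

    infected, queue = {patient_zero}, deque([patient_zero])
    while queue:
        user = queue.popleft()
        for f in user_files[user]:
            for other in file_users[f]:
                if other not in infected:
                    infected.add(other)
                    queue.append(other)
    return infected

print(reachable_users("user_a"))   # user_a reaches user_b via plan.docx, then user_c via policy.pdf

In this toy graph only user_d stays outside the "red boundary", because no shared file connects that user to the infected cluster.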
Disclaimer: Building strong cryptosystems is way harder than breaking them. Props to the system author for going for it and publishing the system – it’s a nice and simple design which is useful for learning cryptographic concepts. I hope this post helps inform future designs! I recently heard about the Sarah2 pen-and-paper cipher, touted as a “strong pen-and-paper cipher” and posted publicly here: http://laser-calcium.glitch.me/. The cipher is quite simple in order to make it amenable for pen-and-paper use. When I first heard about the cipher, I immediately started thinking of ways to break it, and a few hours later I had two functional attacks that fully compromise the key with relatively small numbers of chosen-plaintext messages (in the best attack, less than 2000 4-character messages). This writeup explains the technical details of the attack. Because the cipher is quite simple, the attack is similarly simple to explain, and I hope it makes for a good illustration of a neat cryptanalytic technique. The Sarah2 Cipher Sarah2 encrypts strings of characters from the English alphabet, plus the “_” character for spacing and padding, for a total of 27 letters. The secret key is a permutation of all possible letter pairs (an S-Box, in cryptography parlance), mapping every pair of letters to another random pair. Thus, there are (27*27)! = 729! possible secret keys. This equates to a 5886.98-bit key – quite a bit larger than the tiny 128-bit keys commonly used in AES. Because the cipher operates on pairs of letters, the input must be of even length, and is padded (with a “_” character) if it is not. Then, encryption proceeds as a series of rounds. In each round, every (non-overlapping) pair of characters is passed through the secret S-Box to yield a new pair of characters. Then, the characters are permuted using a kind of “inverse shuffle”, where the characters in the odd-numbered positions (1, 3, 5, …) are collected and then concatenated with the characters in the even-numbered positions (2, 4, 6, …), so that for example the string “12345678” would become “13572468”. This shuffle breaks up all the pairs to provide diffusion in the cipher. The permutation is omitted in the final round as it is unnecessary (it would add no security and would make the algorithm harder to execute on pen and paper). Decryption simply inverts these steps – the characters are passed through the S-Box in reverse and shuffled each round. Here’s a worked example to make things clearer. The secret S-Box we’ll use is To use this, find the row for the first character, then the column for the second. For example, encrypting “cf” would yield “bm”. Now let’s take a plaintext, like “ attack_at_dawn” and encrypt it through one round We first translate each pair “at”, “ta”, “ck”, “_a”, … and translate via the table, to get “ fmjheqgzzmxohd“. Then, we “inverse shuffle” this to get “ In the next round, we translate each pair “fj”, “eg”, “zx”, “hm”, … to get “ cchvcsxwmkjmnh“. Then inverse shuffle to get “ chcxmjncvswkmh“. You get the hang of it. Visually, on a four-character plaintext: So, this is a simple cipher indeed. 
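Because the round structure is fully specified above, it is easy to express in code. The following is a minimal sketch (not the cipher author's reference implementation) of key generation, the inverse shuffle, and multi-round encryption; the seeded random key is purely for demonstration.

import random
import string

ALPHABET = string.ascii_lowercase + "_"            # the 27-letter alphabet

def make_key(seed=None):
    """The secret key: a random permutation of all 27*27 = 729 letter pairs."""
    pairs = [a + b for a in ALPHABET for b in ALPHABET]
    images = pairs[:]
    random.Random(seed).shuffle(images)
    return dict(zip(pairs, images))

def inverse_shuffle(text):
    """'12345678' -> '13572468': odd-position characters first, then even-position ones."""
    return text[0::2] + text[1::2]

def encrypt(plaintext, sbox, rounds):
    msg = plaintext if len(plaintext) % 2 == 0 else plaintext + "_"   # pad to even length
    for r in range(rounds):
        # substitute every non-overlapping pair through the secret S-box
        msg = "".join(sbox[msg[i:i + 2]] for i in range(0, len(msg), 2))
        if r != rounds - 1:                        # the final round skips the permutation
            msg = inverse_shuffle(msg)
    return msg

key = make_key(seed=1)
print(encrypt("attack_at_dawn", key, rounds=4))

Decryption runs the same loop in reverse, using the inverted S-box and undoing the shuffle between rounds.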
The author makes three recommendations about the number of rounds to use, based on the attack model: log2n rounds: “I’m only encrypting a handful of messages, and adversaries will never get a chance to make me encrypt anything they choose.” log2n+2 rounds: “I’m encrypting tons of messages, but adversaries will still probably not get a chance to make me encrypt anything they choose.” 2 x log2n rounds: “My adversary can encrypt and decrypt things using my key, but they don’t know the key.” My expectation is that this provides a fairly strong level of security, even against attacks involving a fair amount of computational power.http://laser-calcium.glitch.me/ Let’s Do The (Electric) Slide Attack The author notes that, after log2n rounds, a one-letter difference will have propagated through to all ciphertext characters. If you play with the cipher a bit, you might feel like it’s pretty strong; after all, if you encrypt attack_at_dawn you might get attack_at_down would yield the completely different wmhkpqfpspcrcb – and that’s after just 4 rounds. Going for the highest-security 2 x log2n rounds (8, in this case) seems like it should be very secure, indeed, especially if the S-Box is properly random. And, indeed, the author explicitly explains that 2 x log2n “provides a fairly strong level of security, even against attacks involving a fair amount of computational power”. However, there’s a super-powerful attack which will break this cipher no matter how many rounds are used! The key insight is that this cipher, at its core, is the same round operation applied a whole bunch of times. If we add a final permutation onto the encryption operation (which we can do, because the permutation doesn’t depend on the key), an n-round Sarah2 encryption operation straightforwardly becomes n iterations of the substitution+permutation round. Let’s call the round function F, such that the encryption operation (with extra permutation) is F applied n times: Fn. Suppose that we have the ability to encrypt anything we want but no access to the key (the third scenario outlined above): that is, for any message a of our choosing, we can encrypt it to get b = Fn(a). Now suppose that, somehow, we’re able to find (or guess) c such that c = F(a). In that case, we can ask for the encryption of c, which is d = Fn(c) = Fn+1(a) = F(Fn(a)) = F(b). This seemingly simple relationship is actually extremely powerful – if we know a single relation c = F(a) of the round function, we can obtain a second relationship d = F(b) which enables us to learn more about the round function. And with that second relationship, we could obtain a third, and so on and so forth. Note that these are relationships on one round of the cipher – not on the full multi-round encryption function. This will allow us to attack the round function directly – no matter how many rounds the encryption uses. This technique is known as a slide attack, since it visually looks like “sliding” the cipher across itself: So, if we can encrypt anything we want, we can effectively reduce this n-round cipher down to a single round. Is one round of Sarah2 secure? Unfortunately, no. A single round only performs a single substitution through the secret S-Box, and a simple fixed permutation. Now, the only problem left is to guess a single relationship c = F(a). Luckily, we can just apply brute force. If the input a consists only of repeats of a single character pair (e.g. ababababab), one round will only use one entry of the S-Box. 
Attack 1 – Long Messages This leads us to attack idea #1: what if we encrypted a really long message, say, 10000 underscores (_)? Since the message has only a single character pair, we know that the first substitution will turn it into xyxyxyxy....xyxyxy for some xy, and then the first permutation will turn it into xxxxxxxx...yyyyyyyy. So, we could also encrypt every xxxxxxxx...yyyyyyyy for each of the 729 (27*27) pairs of characters x, y. How do we know which one is the right match? If we tack on an extra permutation step to the first encryption (of ___...___), then the resulting ciphertext will be just one substition step away from the encryption of the correct xxxx...yyyy. The ciphertexts are basically random, but for long-enough messages they must contain duplicate pairs (by the Pigeonhole principle). For each candidate pairing of ciphertexts, we can check to see if the duplicates line up – if they don’t, then there’s no way they could be separated by a single substitution step. For example, in the example below the encryption of ___…___ happens to start with the duplicate pair “ei ei” (although in general the duplicate pairs might occur anywhere in the text). The encryption of xx…yy starts with ddxh…, which is impossible since “ei” could not map to both “dd” and “xh”. But, encrypting the correct kk…zz produces a ciphertext that starts with “gu gu”, which is plausible given the “ei ei” in the original ciphertext. So, by leveraging these duplicate pairs, we can get two ciphertexts that only differ by a single substition step. Since the messages are very long, we expect that almost all possible letter pairs exist, from which we can easily reconstruct the whole table. With 10000 characters (5000 letter pairs), the probability that the ciphertext does not contain all 729 letter-pair combinations is (728/729)^5000 = 0.001; that is, with 99.9% probability the ciphertext contains every possible letter pair and therefore we get the entire secret S-Box. This only requires us to encrypt at most 730 messages of 10000 characters, or around 7,300,000 characters. Attack 2 – Short Messages We can also pull off the slide attack even if we restrict ourselves to short messages. Let’s restrict ourselves to 8-character (4 letter pair) messages. What we’ll do is obtain the encryptions of every message of the form abababab and every message of the form aaaabbbb – a total of 1431 messages (skipping duplicates of the form aaaaaaaa). Let’s let A(ab) denote the encryption of “ abababab” with an extra permutation (as with the long message attack), and B(ab) denote the encryption of “ aaaabbbb“. Let S denote the secret substitution function; for example, S(abcd) = fwit for the S-box above. Then, for any letter-pair X, we have S(A(X)) = B(S(X)) – that is, A(X) and B(S(X)) are related through a single substitution (using the slide attack). All we need to do is figure out the pairings between A and B messages. As with the long message attack, we can do this by looking for messages with duplicated letter pairs. For example, if A(ic) = cfcftlvc and B(gh) = bmbmsmlc, and no other B messages have this pattern (the same pair twice, followed by two different pairs), then we can be certain that S(ic) = gh, S(cf) = bm, S(tl) = sm and S(vc) = lc. Once we learn a new pair, such as S(cf) = bm, we can look at A(cf) and B(bm) to get even more pairs, and this way iteratively derive the entire key. 
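The pairing step in Attack 2 is easy to mechanize: fingerprint each ciphertext by which of its letter-pair positions repeat, then match an A-ciphertext to the unique B-ciphertext with the same fingerprint. Here is a small sketch of that fingerprint function, following the cfcftlvc / bmbmsmlc example above (the third string in the docstring is an invented counterexample).

def pair_fingerprint(ciphertext):
    """Describe a ciphertext by the repetition pattern of its letter pairs.

    'cfcftlvc' -> (0, 0, 1, 2)   # pairs cf, cf, tl, vc: the first two positions repeat
    'bmbmsmlc' -> (0, 0, 1, 2)   # same pattern, so it could be the S-box image
    'ddxhhhtv' -> (0, 1, 2, 3)   # different pattern, so it cannot be the partner
    """
    pairs = [ciphertext[i:i + 2] for i in range(0, len(ciphertext), 2)]
    first_seen = {}
    pattern = []
    for p in pairs:
        first_seen.setdefault(p, len(first_seen))
        pattern.append(first_seen[p])
    return tuple(pattern)

assert pair_fingerprint("cfcftlvc") == pair_fingerprint("bmbmsmlc")

Once a fingerprint match is unique, each aligned pair of letter pairs yields one S-box entry, exactly as described above, and the newly learned entries point to further A/B pairs to compare.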
The probability that any A message has no duplicated letter pairs is (728/729)^4 = 0.9945, so the probability that every A message has no duplicate pairs is 0.9945^729 = 0.018. Therefore, this particular attack strategy succeeds over 98% of the time. We could improve this by using longer messages (e.g. with 16 characters it would succeed 99.9% of the time). We could also use a more brute-force approach where we guess pairings, propagating the resulting new pairings and continuing until we get a full key or hit a contradiction and backtrack. This attack only requires 1431*8 = 11448 characters to be encrypted, or around 54,434 bits of information. Given that the key itself is around 5887 bits in size, this equates to only needing around 9 bits of information per bit of key – not a bad ratio! The theoretically optimal attack would require observing 5887 bits of encrypted (or decrypted) data, so this is pretty close. Improving the Cipher As we can see, the slide attack is a pretty clever attack that works whenever you iterate the same function over and over again in a cipher. So, one of the best ways of dealing with this is simply to add a little bit of per-round variability. The AES cipher and its predecessor DES both use the concept of subkeys – the main key is used to derive a series of subkeys using a key schedule, and each round uses a different subkey. This way, each round function is slightly different, which is enough to defeat the slide attack. In Sarah2, this might be achieved by using the S-Box to derive some per-round modifications to use. As an illustrative (and perhaps not very strong) example, you could interpret a letter pair like “kz” to mean a Caesar shift (mapping k to z, l to a, m to b, etc.), and apply one Caesar shift per round according to a particular entry in the S-Box (e.g. the “__” entry for round 1, the “aa” entry for round 2, etc.). The LC4 pen-and-paper cipher uses a similar secret S-Box, albeit a much smaller one (6×6). It encrypts characters one at a time (a stream cipher), and moves the entries of the S-Box around after encrypting each character. This ensures that the S-Box changes over the course of encryption rather than remaining static. A similar approach of shifting the entries in the Sarah2 S-Box after each round could help mitigate the slide attack. Summary and Conclusions Above, I’ve outlined two attacks on Sarah2 which reveal the entire secret key, under the assumption that the attacker can encrypt messages of their choice. While these attacks don’t break the cipher if the attacker can only passively observe messages, they should be taken as a strong indication that the Sarah2 cipher is not safe or secure. The slide attack principle underpinning these attacks means that the number of rounds is immaterial; these attacks work even if you were to do thousands of rounds, demonstrating that “confusion and diffusion” are not the only requirements for building solid cryptosystems. The aim of this post has been to illustrate a practical slide attack on a simple, understandable cipher, and I hope it serves as a useful tool for learning more about this type of attack. The attacks aren’t hard to implement – I strongly recommend implementing them yourself if you want to understand the slide attack better! For reference, my implementations in Python can be found in this GitHub repo.
By Sean Lauziere, MPA Emergency professionals are trained and prepared to deal with every unexpected situation imaginable. 9-1-1 serves as the incident command during times of emergency. They dispatch resources to the scene like first responders who are given essential information, follow procedures and work as a team. What is Self-Dispatch? Self-Dispatch is when any person shows up to an active scene without being sent by 9-1-1. Self-Dispatching is seen most often in two different scenarios: One example of self-dispatching occurs when an off-duty public safety professional like a police officer responds to an incident. They might hear something over the radio or be in the area of an active situation and go to the scene of the incident without knowing crucial details or specific protocols for the situation. While they have high level information like the location, problems arise when safety professionals decide to self-dispatch. Solo missions lead to increased risk for everyone involved. The other scenario is when everyday citizens or even off-duty emergency professionals decide to self-dispatch. They may hear information about an incident in real-time on the news, through a friend or read an update on social media. It may seem heroic, but acting on this unreliable information unnecessarily adds to the chaos and danger involved for all. Lack of Information When 9-1-1 takes a call, the dispatcher enters all the information they receive into their system to share with the first responders sent to the scene. The critical information they collect and share ranges depending on the incident, but includes life-saving facts like location within a building, personal information and details of what’s happening on the scene. The physical and mental health of any citizens involved in the initial call are also important factors for dispatched resources to know when assessing risk. Self-dispatchers don’t have access to this critical information. By reacting to information overheard on the radio or news, they are missing the full picture and are walking into the situation blind. Police cars 20 miles from an incident will receive an alert about it. Cars from several different towns may start heading towards the danger, but they are often on different radio bands and not communicating. These uncoordinated efforts result in multiple protocols being broken. Not having the correct information needed hinders organized rescue efforts. Easily identifiable safety vests are often given out to approved first responders so they can be easily recognized by citizens and among each other. This helps people know to get out of their way. Self-dispatchers can be difficult to identify in a chaotic situation, creating a lack of information for all the responsible parties. Risk for First Responders There is always a risk for any first responders when first entering a scene. Critical situations can involve risks including chemical, biological, radiological and explosive devices. These natural and man-made disasters are a threat to anyone on scene, but even more so when a self-dispatcher arrives blind to the already risky situation. Uncoordinated resources like self-dispatchers add additional risk for emergency personnel. If an on-duty police officer decides to self-dispatch, they are leaving the area under their supervision without notice. Lives are at risk when safety personnel leave their local communities. It reduces the level of protection and support should a second emergency occur, making the area more vulnerable. 
Self-dispatchers also take away resources from fire-fighters and other emergency personnel assigned to the scene. When 9-1-1 dispatched resources arrive on location, they are all aware of the procedures in place for the incident at hand and are accounted for. During the Ferguson riots of 2014, police officers arrived to help without riot gear. Resources are limited, and without the proper equipment like shields and gas masks, responders are on no value on the front line. When resources show up that have not been requested, the incident management system fails. The scene of an emergency is already chaotic, and self-dispatchers create additional risks for emergency professionals that are unnecessary and avoidable. Self-dispatchers don’t have context into the scene they’re walking into. They don’t know what to prepare for or have access to floor plans to know where to go. Self-dispatchers enter the scene blind with zero situational awareness. The International Association of Fire Chiefs (IAFC) and the National Volunteer Fire Council (NVFC) discourage the practice of self-dispatch among emergency response personnel to emergency incidents without notification or request. The association sites that in past incidents, fire departments have struggled to allocate additional resources to managed self-dispatchers that arrive on scene. This uncontrolled and uncoordinated arrival of self-dispatches at emergencies creates accountability issues as well as an additional safety risk to all because they are not aware of the overall strategic plan. 9-1-1 centers are responsible for everyone on-site during an emergency. This includes the citizens that called, the deployed first responders, and many other involved groups depending on the situation. 9-1-1 is accountable when self-dispatchers arrive, creating a hindrance. Not only is the call center dealing with the situation at hand, they also have to worry about this additional liability. 9-1-1 also puts all first responders in the same notification chain – sheriffs, police agencies, safety officers, authorities, and more. This allows all involved parties to stay informed and take appropriate action immediately. If a community has a critical communications system in place, (like Panic Button) they can share critical response data for enhanced coordination between authorized app users, 9-1-1 call takers and first responders that saves time when it matters most. This streamlined communication helps reduce confusion. (See briefly how a panic button works. How, in seconds, Rave Panic Button clearly communicates an emergency to 9-1-1, faculty, staff, and security resource officers.Courtesy of Rave Mobile Safety and YouTube.)
The last decade has seen a significant increase in the sophistication and frequency of cybercrime. Businesses of all sizes have become targets, with malicious actors using ever-more sophisticated techniques to exploit vulnerabilities and steal data. The cost of these attacks can be high regarding the financial loss caused by the theft of data or intellectual property and the reputation damage resulting from a breach. In response to this threat, businesses have had to invest increasingly in cybersecurity in terms of technology and training staff to spot and deal with potential threats. One way to do this is using a SIEM (Security Information and Event Management) system. SIEM can help businesses detect and respond to malicious activity quickly and effectively. The Top 6 Industries Targeted by Cyber Criminals When it comes to cybercrime, certain target industries are more attractive to criminals than others. This is because these groups tend to have more valuable data or be more vulnerable to attack. Here are some of the top targets for cybercriminals: Cybercriminals often target businesses because they have valuable data that can be sold or used to extort money. These data include customer data, financial data, and trade secrets. Businesses also tend to be more vulnerable to attack because they usually have less robust security than individuals. The healthcare industry is a prime target for cybercriminals. Healthcare organizations are especially vulnerable to attacks because they maintain large amounts of sensitive patient data. In addition, many healthcare organizations use outdated or legacy systems that can be easier for attackers to exploit. Attackers often target healthcare organizations to steal patient data, which they sell on the black market. In some cases, attackers may also demand a ransom from the organization for not publicly releasing the stolen data. The financial services industry is a major target for cybercriminals due to the large amounts of money involved in this sector. In addition, the sensitive nature of data that financial institutions store makes them attractive to hackers. On January 17, 2022, hackers exploited a bug in the blockchain service to steal around $1.4 million from Multichain, a platform that permits users to exchange tokens between blockchains. Government agencies are another common target for cybercriminals. Government is a prime target because they often have sensitive information that could be used for political gain or to embarrass the agency. Government agencies also tend to have weaker security than businesses, making them an easier target. The United States government was hit particularly hard by cyber attacks in 2015. The Office of Personnel Management (OPM) lost the personal information of over 21 million current and former government employees. The education industry is a prime target for cybercriminals. In the past year, several high-profile attacks on schools and universities have resulted in the theft of sensitive data and disrupted operations. One of the most recent and notable examples is the attack on the University of California, Berkeley, which resulted in the theft of over 800,000 student and staff records. This attack highlights the vulnerabilities of the education sector to cybercrime. Cybercriminals understand the critical importance of energy and utilities to our way of life. They also know that these industries are often behind the curve regarding cybersecurity. 
As a result, energy and utility companies are prime targets for cyber attacks. Cybercriminals target the energy sector because it is critical to national infrastructure. Top Types of Cyber Attacks There are many types of cyber attacks, but some are more common than others. Here are the top five types of cyber attacks: Phishing is a social engineering attack where an attacker tries to trick users into giving them sensitive information, such as passwords or financial information. They may do this by sending an email that looks like it’s from a legitimate company or by setting up a fake website that looks like a legitimate website. 2. DoS and DDoS Attacks A DoS attack is when an attacker attempts to make a system or network unavailable by flooding it with traffic or requests. It can cause the system to crash or become overloaded and unusable. DoS attacks can make it even harder for the system to recover from the attack. On the other hand, DDoS attacks are types of cyber attacks that overload the system with the requests, making them unavailable to legitimate users. DDoS attackers use botnets, networks of infected computers that can be controlled remotely, to send large amounts of traffic to their target. This can overwhelm the target’s servers, preventing them from being able to respond to legitimate requests. Ransomware is malware that encrypts a victim’s files and demands a ransom payment to decrypt them. It can be a very costly attack for businesses, resulting in losing important data. Ransomware is typically spread through infected websites. Once your system is infected, the ransomware will start encrypting your files and demand a ransom for the decryption key. 4. SQL Injection Attacks SQL injection is a type of attack where an attacker inserts malicious code into a database query that is then executed by the server. SQL injection can allow the attacker to access sensitive data or even take control of the server. SQL injection attacks take advantage of vulnerabilities in web applications that allow attackers to execute malicious SQL commands. Cybercriminals use these commands to access sensitive data, modify data, or even delete data. 5. MITM Attacks Man-in-the-middle (MITM) attacks are another type of common cyber-attack. As the name applies, an attacker can intercept communication between two parties in a MITM attack. There are a few different ways cybercriminals carry out a MITM attack, but one of the most common is using ARP poisoning. ARP is where the attacker sends false ARP (Address Resolution Protocol) messages to a network, which causes devices on the network to believe that the attacker’s machine is the machine they want to communicate with. SIEM: The Top Tool for Cybercrime Prevention SIEM is security software that collects, analyzes, and monitors data from your IT infrastructure to give you visibility into potential cyber threats. SIEM can help you detect and respond to attacks in real-time and track down the source of an attack so you can prevent it How SIEM Can Help Prevent Cyber Attacks There are a few key ways that SIEM can help prevent cyber attacks: 24/7 monitoring and alerts: SIEM tools can monitor your IT infrastructure for signs of an attack around the clock. If something suspicious is detected, you’ll be immediately alerted so you can investigate and take action to stop the attack. - Improved visibility: One of the biggest challenges in cybersecurity is getting visibility into all the data and activity across your IT environment. 
SIEM can help by collecting data from multiple sources and giving you a centralized view so you can identify potential threats more easily. Get a Top-Notch Security Service Today! Cyber Sainik offers a wide range of security services that can help protect your business from cyber threats. We provide monitoring and management, virtual CISO services, laptop security, and vulnerability protection. To discover how we can help you defend your organization from cyber attacks, contact us today for a consultation.
Of all the different forms of malware, none are more destructive than ransomware. Ransomware can encrypt your data while demanding that you pay a fee or "ransom" to regain access to it. Research shows that the number of ransomware attacks has increased by over 60% within the past two years. While most of these attacks target desktop computers and servers, some of them target mobile devices. The Basics of Mobile Ransomware Mobile ransomware is exactly what it sounds like: It's ransomware that specifically targets smartphones, tablets and other mobile devices. Most people today own a mobile device. Cyber criminals acknowledge the popularity and widespread usage of mobile devices, so they use mobile ransomware to target these devices. Mobile ransomware is similar to standard ransomware; the only difference is that it targets mobile devices. How Mobile Ransomware Works During a mobile ransomware attack, a cyber criminal will deploy malware on your mobile device. All ransomware attacks, standard or mobile, begin with the deployment of malware. Once deployed on your mobile device, the mobile ransomware will either encrypt or otherwise lock your data, meaning you won't be able to access it. At the same time, you may see a pop-up demanding that you pay a fee. Tips to Protect Against Mobile Ransomware While mobile ransomware has become increasingly common, there are a few things you can do to prevent it. For starters, make sure your mobile device is running up-to-date software. Mobile devices have operating systems just like desktop computers — and these operating systems need to be updated. If your mobile device has an outdated operating system, it could have a vulnerability that renders it a target for mobile ransomware. You should also avoid downloading apps from anywhere other than an official marketplace. Official marketplaces are those like Google Play and Apple's App Store. When developers add their app to an official marketplace, the app will be checked for malware. Therefore, instances of mobile ransomware are rare when downloading from an official marketplace. If you download an app from a third-party marketplace, on the other hand, your device may become infected with mobile ransomware. Using an antivirus app can protect your mobile device from mobile ransomware. Antivirus apps are designed to scan and neutralize malware — just like antivirus software for desktop computers. Some antivirus apps are free. Others are paid. Regardless, using an antivirus app can protect your mobile device from all forms of malware, including mobile ransomware.
For people who work with storage units designed to hold liquids, gasses, and various other items, a necessary part of their operations is the ability to accurately determine the volume they are currently holding. Tank monitoring is the practice of keeping an eye, so to speak, on the contents of a storage tank. This is similar to how various other SCADA systems work. Utilizing a system of devices designed to measure the levels of the contents linked to computers to collect and decipher the information, you are able to constantly monitor the changing levels, or to determine if they have moved outside of operation parameters. When a generator turns on during a power outage, you would expect it to work properly, but what happens if it sprung a slow leak that over time drained the entire tank? Most people would say they had a technician checking it, but how easy is it to overlook that step if there are no alarms to alert you to the fact that you are out of gas? You don't want to find out after-the-fact that your equipment was not up to full capabilities. For people using irrigation control systems, it is necessary to tell how much water is in your reserves so you can tell when it needs to be refilled. Through effective tank monitoring solutions, pumps can be turned on, workers automatically informed of procedures to take, or even auxiliary equipment turned off to prevent damage or waste. These tasks can be started remotely or even on-site with an alert through a signal light or alarm sound. Without an effective way to monitor the contents of your tanks, you may as well be a fish out of water when it comes to being able to operate your equipment. Learn more about fuel tank monitoring solutions, reach out to us. You need to see DPS gear in action. Get a live demo with our engineers. Download our free Monitoring Fundamentals Tutorial. An introduction to Monitoring Fundamentals strictly from the perspective of telecom network alarm management. Have a specific question? Ask our team of expert engineers and get a specific answer! Sign up for the next DPS Factory Training! Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring.Reserve Your Seat Today
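As a concrete illustration of the threshold logic described above, here is a small sketch of a check that could run each time a level reading arrives. The thresholds, tank name, and alert text are placeholders for illustration, not values or behavior from any particular DPS product.

# Illustrative thresholds for a hypothetical 1,000-gallon fuel tank.
LOW_ALARM_GALLONS = 250        # refill before a standby generator runs dry
HIGH_ALARM_GALLONS = 950       # possible overfill during a delivery
LEAK_RATE_GPH = 2.0            # level falling faster than this with no demand suggests a leak

def check_reading(tank, gallons, change_per_hour):
    """Return alert messages for one sensor reading; an empty list means all clear."""
    alerts = []
    if gallons <= LOW_ALARM_GALLONS:
        alerts.append("{}: LOW level ({:.0f} gal), schedule a refill".format(tank, gallons))
    if gallons >= HIGH_ALARM_GALLONS:
        alerts.append("{}: HIGH level ({:.0f} gal), check the fill operation".format(tank, gallons))
    if change_per_hour < -LEAK_RATE_GPH:
        alerts.append("{}: dropping {:.1f} gal/h, possible leak".format(tank, abs(change_per_hour)))
    return alerts

for message in check_reading("Generator Tank 1", 240, -3.5):
    print(message)   # a real system would notify staff, light a panel lamp, or start a pump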
Tiny Nanosatellites Could Bring Quantum Internet to People on Earth (Futurity.org) Tiny nanosatellites such as the SpooQy=1 could bring the quantum internet to people on Earth, researchers report. Headline-grabbing experiments by China’s satellite Micius have shown that quantum signals can reach Earth from satellites with their spooky and useful properties intact, pointing the way to building a global quantum internet. Now, researchers have now shown that nanosatellites might do the job for less money compared to using larger satellites. “In the future, our system could be part of a global quantum network transmitting quantum signals to receivers on Earth or on other spacecraft,” says Aitor Villar, who worked on the quantum source for nanosatellite SpooQy-1 while getting his PhD at the Centre for Quantum Technologies (CQT) at the National University of Singapore. A country the size of Singapore could have a fiber-based quantum network—and in separate projects the Infocomm Media Development Authority, Singtel, and ST Engineering have been looking at the technology—but innovation is needed to go global. “We are seeing a surge of interest in building quantum networks around the world. Satellites are a solution to making long range networks, creating connections across country borders and between continents,” says Alexander Ling, principal investigator at CQT and an associate professor in the NUS physics department. He leads CQT’s satellite program.
Guru: Using SELECT * With Cursors November 27, 2017 Ted Holt From time to time someone brings to my attention the use of SELECT * with SQL cursors in RPG programs. Specifically, is that a good idea or a bad idea? I have learned that the answer to that question is “It depends.” Using SELECT * in a cursor declaration may or may not get you into trouble. To set the stage, let’s begin with a simple example — an RPG program that reads one table (physical file) and prints each row (record). Even though most programs use data from more than one table, programs that read only one table are not uncommon, and a program that reads only one table is a perfect candidate for the use of SELECT * in a cursor. Here’s the DDL for a table of employee data. create table employees (clock dec(3) primary key name char(12), phone dec(7)); insert into employees values ( 101, 'Barney Fife' ,4445555), ( 102, 'Luther Heggs',2223333); Let us consider two versions of a program that uses SELECT * in a cursor. First, static SQL: H dftactgrp(*no) actgrp(*new) option(*srcstmt: *nodebugio) D EmployeeData e ds extname('EMPLOYEES') D cSQLEOF c const('02000') *inlr = *on; exec sql declare c1 cursor for select * from employees order by clock; exec sql open c1; assert (SQLState < cSQLEOF: 'Open failed, state=' + SQLState); exec sql fetch c1 into :EmployeeData; assert (SQLState < cSQLEOF: 'Fetch failed, state=' + SQLState); dump(a); exec sql close c1; assert (SQLState < cSQLEOF: 'Close failed, state=' + SQLState); return; * ============================================================= * Abruptly end the program if an unexpected condition arises. * ============================================================ P Assert B D Assert PI D Condition N Value D Message 80A Value D QMHSNDPM PR ExtPgm('QMHSNDPM') D MsgID 7 Const D MsgFile 20 Const D MsgDta 80 Const D MsgDtaLen 10I 0 Const D MsgType 10 Const D MsgQ 10 Const D MsgQNbr 10I 0 Const D MsgKey 4 D ErrorDS 16 D ErrorDS DS 16 D BytesProv 10I 0 inz(16) D BytesAvail 10I 0 D ExceptionID 7 D MsgDta S 80 D MsgKey S 4 IF (not Condition); QMHSNDPM ('CPF9898': 'QCPFMSG QSYS': Message: %len(Message): '*ESCAPE': '*PGMBDY': 1: MsgKey: ErrorDS); ENDIF; RETURN; P Assert E Let me point out a few things about this program. First, notice that the employees table — the table that the program reads — provides the external definition of the EmployeeData data structure. This is comforting to me. I know that the fields in the data structure will be adequate to receive the fetched data. Second, if the Employees table can contain null data, then I would have to define a null indicator array. I don’t think most IBM i shops use nulls in the database, so I will not unnecessarily complicate the example by adding code to handle null values. Third, to simplify the program I omitted the loop that would process the entire table. A program that only fetches one row doesn’t need a cursor, but a SELECT with the INTO clause. Last, the purpose of the assertions is to let me know when an SQL statement fails. I would not use assertions in this manner in a production program. 
Here’s the same program with dynamic SQL:

H dftactgrp(*no) actgrp(*new) option(*srcstmt: *nodebugio)
D EmployeeData    e ds                extname('EMPLOYEES')
D Statement       s             96a
D cSQLEOF         c                   const('02000')
 *inlr = *on;
 Statement = 'select * from employees order by clock';
 exec sql prepare s1 from :Statement;
 exec sql declare c1 cursor for s1;
 exec sql open c1;
 assert (SQLState < cSQLEOF: 'Open failed, state=' + SQLState);
 exec sql fetch c1 into :EmployeeData;
 assert (SQLState < cSQLEOF: 'Fetch failed, state=' + SQLState);
 dump(a);
 exec sql close c1;
 assert (SQLState < cSQLEOF: 'Close failed, state=' + SQLState);
 return;
P Assert          B
 . . . as before . . .
P Assert          E

What happens if I add a new column?

alter table employees
  add column email varchar(30) not null with default

Does the program continue to run properly without modification and without recompilation? This is where it depends.
- Is the SQL static or dynamic?
- Where did I add the column?
In this case, I added the new column at the end of the row (i.e., the record format). The static version runs as before. The reason for this is that the SQL precompiler expands the column list, so . . .

select * from employees

is equivalent to . . .

select clock, name, phone from employees

Since the program object selects three columns only, the addition of another column to the table does not affect the program. The program does not require recompilation.
The dynamic SQL version also continues to retrieve the data correctly, but the FETCH operation sets the SQL state to 01503 (Number of host variables less than result values.) That is, there is no room in the data structure for the email column. This is a warning, not an error. The program runs and retrieves the first three columns correctly.
Let’s add another column, but this time, let’s add it within the row.

alter table employees
  add column dept dec(3) not null with default
  before phone

Again, the static SQL continues to run properly because the precompiler expanded the column list. The static version continues to select the same three columns, even though there are now five columns in the table.
However, things are not so rosy with the dynamic SQL. I again get the 01503 value in the SQL state, but the data is not accurate. The phone number is zero for all employees, because the program retrieves the department number, which was initialized to zero and has not yet been populated with the true values. The program runs, the data is bad, and no one is the wiser.
The answer is, of course, to recompile this program and all others that use a dynamic cursor over the employees table. After all, you don’t have anything better to do on the weekends. Be sure not to overlook any of them during your analysis.
Now that you understand how SELECT * works in a cursor, let me show you what I think is a better way. If a database is even somewhat normalized, almost all programs require data from more than one table. Let’s add the name of the department to the previous query. To maintain third normal form, we must place the department name in a table that is keyed on department number.

create table departments for system name dept
  (ID for column DeptNo dec ( 3) primary key,
   Name char(16) not null with default);

insert into departments values
  ( 1, 'Shipping' ),
  ( 2, 'Receiving');

To use data from both tables requires a join. You can place such a join in a lot of programs, but I propose that a better idea is to place the join in one place — a view.
create or replace view empv1 as
  select e.clock, e.name, e.phone, e.dept,
         coalesce(d.name,'*Invalid*') as DeptName
    from employees as e
    left join departments as d
      on e.dept = d.id

Notice that this view does not return null values, even though it uses a left join. The coalesce function takes care of any possible nulls, replacing them with a dummy department name of *Invalid*.
All programs that need this data can use this view, and guess what? They can use SELECT *, just as the one-table example did. Here’s the static SQL version:

H dftactgrp(*no) actgrp(*new) option(*srcstmt: *nodebugio)
D EmployeeData    e ds                extname('EMPV1')
D cSQLEOF         c                   const('02000')
 *inlr = *on;
 exec sql declare c1 cursor for
   select * from empv1 order by clock;
 exec sql open c1;
 assert (SQLState < cSQLEOF: 'Open failed, state=' + SQLState);
 exec sql fetch c1 into :EmployeeData;
 assert (SQLState < cSQLEOF: 'Fetch failed, state=' + SQLState);
 dump(a);
 exec sql close c1;
 assert (SQLState < cSQLEOF: 'Close failed, state=' + SQLState);
 return;
P Assert          B
 . . . as before . . .
P Assert          E

As I did in the one-table query, I’ve used the view to describe the data structure. SELECT * works well because it retrieves the values in the same order that they are listed in the view.
Using views is even more robust if you don’t change them. Suppose we need another column in the query. If we change the view, we may have to recompile the programs. But if instead we create another view, the only programs that have to be recompiled are those that need the new column. You can change those programs to use the new view, and you can do so without a code freeze.
The bottom line, then, seems to be that using SELECT * in a cursor definition in conjunction with an externally-described data structure is bulletproof. Well, not quite. Implicitly hidden columns foul up the works if you query a table, because implicitly hidden columns are included in the data structure, but not in the SELECT * field list. Chances that this will happen are very, very slim, as many if not most shops don’t use implicitly hidden columns. Columns cannot be hidden in a view, so you won’t encounter this problem if you run SELECT * against a view. If it is customary in your shop to query tables, consider that there is much to be said for querying views instead.
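The same shielding effect of a view with an explicit column list can be demonstrated outside of IBM i. Below is a minimal sketch using Python’s built-in sqlite3 module; it is an illustration only (SQLite, not DB2 for i, and the DDL syntax differs), but it shows that a program doing SELECT * against the view keeps seeing the same columns even after the base table grows.

# Minimal sketch (Python 3 + built-in sqlite3, not DB2 for i): a view with an
# explicit column list shields SELECT * callers from changes to the base table.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("create table employees (clock integer primary key, name text, phone integer)")
cur.executemany("insert into employees values (?, ?, ?)",
                [(101, "Barney Fife", 4445555), (102, "Luther Heggs", 2223333)])

# The view names its columns explicitly, like EMPV1 in the article.
cur.execute("create view empv1 as select clock, name, phone from employees")

cur.execute("select * from empv1 order by clock")
print([d[0] for d in cur.description])   # ['clock', 'name', 'phone']

# Add a column to the base table...
cur.execute("alter table employees add column email text")

# ...and SELECT * against the view still returns the same three columns.
cur.execute("select * from empv1 order by clock")
print([d[0] for d in cur.description])   # ['clock', 'name', 'phone']

conn.close()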
<urn:uuid:2e772458-7664-444d-a697-57324e6dfcbe>
CC-MAIN-2022-40
https://www.itjungle.com/2017/11/27/guru-using-select-cursors/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00439.warc.gz
en
0.828678
2,311
2.640625
3
SMS is Broken and Hackers can Read Text Messages. Never use Regular Texting for ePHI.
Security firm Positive Technologies has published a report (see their overview of attacks on one-time passwords and PDF of the SS7 security problems) that explains how attackers can easily attack the protocols underlying the mobile text messaging networks (i.e., the Signaling System 7 or “SS7” protocol). In their report, they indicate how this makes it easy to attack the two-factor login methods and password recovery schemes where a one-time security code is sent via an insecure text message. Devices and applications send SMS messages via the SS7 network to verify identity, and an attacker can easily intercept these and assume the identity of the legitimate user. This result also means that attackers can read all text messages sent over these networks.
Beyond the serious implications with respect to attacking accounts and identity theft via access to second-factor authentication codes, this work means that all communications over text message must really and truly be considered insecure and open to the public.
In the past, we have all acknowledged that text messages are insecure in that they pass through the cellular carriers in plain text, can be archived and backed up, could be surveilled by the government, etc. However, most people have been very complacent about this insecurity … still trusting their cellular carriers, trusting that attackers would really have a hard time actually accessing these text messages. As a result, people have been sending sensitive information via insecure text message for some time … giving security short shrift and going for convenience. This includes ePHI — medical appointment notices sent to patients, communications about patients between medical professionals, etc.
The publications by Positive Technologies confirm what the security community has known since at least 2014 … the infrastructure underlying SMS is old and fragile and easily attacked.
What can attackers do? They can transparently forward calls, giving them the ability to record or listen in to them. They can also read SMS messages sent between phones, and track the location of a phone using the same system that the phone networks use to help keep a constant service available and deliver phone calls, texts, and data.
The danger of breach, especially a targeted breach, is real. As a phone number is identifying information, any text message that refers to that person’s past, present, or future medical condition or scheduling or billing is ePHI and must be protected by the medical community. It is easy to imagine an automated attack on the SS7 protocol that could identify very large numbers of such messages laden with PHI. That attack would be an automatic breach, and the discovered use of SMS for ePHI could be considered willful neglect under HIPAA.
The takeaway? Sensitive communications, especially where HIPAA compliance is involved, must never take place over insecure text message (SMS) channels. It is time to move on to secure communications applications or secure texting solutions.
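For the one-time-code use case the report focuses on, one widely used alternative is an authenticator-app code computed on the device itself, so nothing sensitive ever crosses the SS7 network. The sketch below is a hedged, minimal illustration of an RFC 4226/6238-style time-based one-time password; the demo secret is made up, and a production system should rely on a vetted authenticator library rather than this snippet.

# Minimal TOTP sketch: the one-time code is derived locally from a shared
# secret and the clock, so no code is ever sent over SMS/SS7.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval              # moving factor
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Demo secret only; real secrets are provisioned per user during enrollment.
    print(totp("JBSWY3DPEHPK3PXP"))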
<urn:uuid:81a5ee20-9dc4-4add-89fb-53218ac7fda0>
CC-MAIN-2022-40
https://luxsci.com/blog/sms-is-broken-and-hackers-can-read-text-messages-never-text-ephi.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00439.warc.gz
en
0.9328
595
2.53125
3
Mars Rovers and deep space telescopes are already high-tech, but NASA wants to use artificial intelligence to make some of its most exciting creations even more ingenious. The agency has plans to employ AI throughout its operations, from conducting basic financial operations to finding extra radio frequencies aboard the International Space Station. It’s taking to heart the message of a recent executive order recognizing the importance of AI to the federal government, and calling on agencies to focus on making the United States a leader in AI technology, says John Sprague, acting associate CIO for NASA’s transformation and data division. “There’s a lot of interest — not only in the government, it’s going crazy in industry. There’s really cool stuff going on,” Sprague said Tuesday at the 2019 GITEC Emerging Technology Conference in Annapolis, Md. For more articles from GITEC 2019, check out our conference landing page. Bots and AI Take Care of Rote Work for Humans On the most fundamental level, NASA employs a bot named Washington at its shared services center in Mississippi to handle routine financial transactions. Other bots named Adams and Pioneer deal with financial and procurement-related tasks, and the newest, named Beacon, is expected to go on line later this year. Aboard the ISS, the Space Communications and Navigation Testbed uses machine learning to find unused radio frequencies in a crowded spectrum. This could help troubleshoot communication problems and give astronauts a way to transmit more data. “You’ve got unlimited frequencies you can use now,” Sprague says. “It can mitigate space weather, electromagnetic radiation interference — all kinds of problems that are really hard to solve.” Artificial intelligence is giving a boost to one of the agency’s most famous pieces of gear, the Hubble Space Telescope. Conceived in the 1940s, designed in the 1970s and launched in 1990, the telescope that’s provided spectacular images of deep space “doesn’t have AI. There was no AI back then, or it would have been on it,” says Sprague. Researchers, astronomers, universities and other institutions request time and images from the telescope, and that used to be plotted out and scheduled by a team of humans. But now, its scheduling is done by AI, he says. The AI-driven scheduler can more quickly figure out the most efficient paths for the Hubble telescope to take to get all of the images that researchers need, Sprague says. Mars Mission and New Rover May Rely on AI NASA has turned to AI to help with its own research. In 2017, Google machine learning tools scoured images from space to find an eighth planet orbiting Kepler-90, creating a tie with our solar system for the sun with the most planets (Pluto is still not considered a planet). Today, NASA is using IBM’s Watson Content Analytics technology to search the existing scientific literature for ideas for solutions to a number of challenges, including how to shield astronauts from radiation on a long trip to Mars. That particular life-threatening problem is one of the major hurdles to a crewed mission. “There’s not enough protection right now to protect the astronauts,” Sprague says. “What can you put up there to put around any spacecraft to keep the astronauts safe? It’s extremely hard to do.” AI will be valuable for the nonhuman crew on a Mars mission as well, Sprague says. In 2010, the Spirit Rover got stuck in the Martian sand and couldn’t move. 
Trapped in a position where it couldn’t catch the solar energy it needed for power, it stopped communicating with NASA shortly thereafter. Spirit was hampered by the lag in communications between Mars and Earth. Radio signals can take between 5 and 20 minutes to get from one planet to the other, depending on their positions in space. Spirit had to alert NASA that it was stuck, and then NASA had to tell Spirit to stop spinning its wheels and using up its power. That shouldn’t happen with the Mars 2020 Rover, says Sprague. “Let it do it all on its own,” he says. “Let it figure this out, let it have the sensors, get the data and make the decisions on its own, and save this very expensive piece of equipment.”
<urn:uuid:44710422-4c3d-470f-9e82-17201f387e38>
CC-MAIN-2022-40
https://fedtechmagazine.com/article/2019/05/gitec-2019-nasa-artificial-intelligence-complements-rocket-science
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00639.warc.gz
en
0.941354
913
2.859375
3
These days, a password is a pretty weak barrier between a hacker and your private data. At the end of the day, it doesn’t really matter all that much whether your password is five characters or 50. As a singular form of authentication, a password is decidedly weak. Maybe it could fly 10 years ago, but that’s not the world we live in anymore. The inherent vulnerability of passwords is highlighted by two recent breaches: an attack on game video streaming service Twitch and a separate incident in which Uber users’ passwords were stolen. Twitch hack leads to password resets In the gaming world, Twitch is a huge hub of activity. The service is like ESPN for gamers. You go to the site to watch live feeds of expert gamers playing video games, and can learn strategy from and interact with them. It’s a hugely popular platform, with 45 million gamers using it every month. Unfortunately, the site’s users recently experienced an event that temporarily brought them out of the world of video games and put them face-to-face with the reality of cyberattacks. In late March, Twitch experienced malicious activity on its servers that led to the service requiring all users to create new passwords, as Ars Technical reported. While Twitch employs password protection, the service admitted in an email to select users that “we believe it’s possible that your password could have been captured in clear text by malicious code when you logged into our site on March 3.” When password attacks like this happen, it doesn’t really matter if your password is long or short, symbols-heavy or just lowercase letters. If a hacker deploys malicious code that intercepts passwords as they’re entered, every entry has the potential to be stolen. That’s one reason every user was required to reset his or her password. But if a password hack on the site happened once, what’s to stop a similar one from happening again in the future? In this way, Twitch’s response to its breach – to have users create new passwords – doesn’t really solve the problem. Instead of mandating an additional identity-verifying wall in the form of two-factor authentication, Twitch merely asked its users to recreate something that will still be inherently vulnerable to attack. Uber user password leak reveals lucrative trade of password selling Twitch is not the only business entity making headlines due to password problems. The popular transportation service Uber is currently dealing with the news that thousands of its users’ passwords were put on the dark net for sale – although Uber has stated that it was not attacked. The dark net is the hacker underbelly of the Internet. It is a place where cybercriminals communicate, plan attacks and sell (often illicit) things. Recently, a bunch of Uber usernames and passwords appeared on one such dark net site for sale to other criminals, as Mashable reported. But Uber said that the hacked passwords did not come from an attack on them. In denying that a criminal intrusion had happened on its end, Uber also added: “This is a good opportunity to remind people to use strong and unique usernames and passwords, and to avoid reusing the same credentials across multiple sites and services.” That is sound advice to be sure – but it is not enough. Which is to say, nobody’s arguing that you shouldn’t have strong and unique passwords. A weak password is easier to breach than a strong one. But when discussing password strength, what we’re actually talking about is a spectrum of relative weakness. 
So it’s time to change our approach to authentication. Passwords are weak – so what’s the alternative? The computing future we’re heading toward is one in which the traditional password becomes a secondary component of identity verification. The primary element will come in the form of something you, the user, uniquely possess – such as a smartcard or token access. This is the idea behind Entrust Datacard’s IdentityGuard authentication and identity management platform. The platform is built on the premise that point authentication solutions don’t provide nearly the identity protection needed to handle the cyberthreats and vulnerabilities of the modern computing world. IdentityGuard dramatically improves enterprise, banking and government network security by providing an identity-based authentication platform that relies on methods of identity guarding that are far stronger than the password, including tools like mobile soft tokens, mobile smart credentials and smartcards. The password is gradually being supplanted by better, more sophisticated and safer means of identity protection. In the near future, the idea that a single username and password separated businesses from their accounts will be unthinkable. This is the future we need to work toward.
<urn:uuid:6dc6adbc-366b-4ec6-a7b8-5c6a98a5e1bb>
CC-MAIN-2022-40
https://www.entrust.com/de/blog/2015/04/that-password-youre-using-its-not-enough/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00639.warc.gz
en
0.950354
978
2.78125
3
History is not just a naturally occurring intersection of fact, choice, and expectation. A look at the Roman Tacitus and the Byzantine Procopius, two very different historians, reveals the complexity and difficulty of the study of the past. At least three different approaches to history are part of this: recalling, restoring, or even inventing. Each is imperfect in some way. For example, no historian or historical source exposes the complete, unvarnished facts. Furthermore, archaeology and historical research do not disclose evidence without context, and it is often difficult to define the significance of the recovered data. Moreover, it is difficult to prove that a certain supposed “history” was invented. At the same time, these sources also tell us a great deal about the values and dreams of a society.
Let’s have a look at today’s history:
1946: Germany War Crimes
Twelve high-ranking Nazis, including Joachim von Ribbentrop, Nazi Minister of Foreign Affairs, were sentenced to death in Nuremberg by the International War Crimes Tribunal. Among them were Hermann Goering, a Gestapo member and chief of the German air force, and Interior Minister Wilhelm Frick. Seven others were jailed for ten years to life, including Rudolf Hess, the former deputy to Adolf Hitler. Three others were acquitted.
1921: Ireland Negotiations
The talks between Eamonn De Valera (Sinn Féin) and Lloyd George about Ireland’s future are to commence in London again. Britain has consistently said it would prefer to fight rather than grant Ireland complete independence, since a fully independent Ireland would leave Great Britain vulnerable. Both sides would prefer a compromise to a war, so the hope is that this meeting will produce a solution with which everybody is pleased.
1946: UK Mensa Created
Mensa, the renowned high-IQ society, which is non-political and draws no social distinctions (racial, religious, etc.), is established. The society is founded by Roland Berrill and Dr. Lancelot Ware, with an IQ above the 98th percentile as the only membership qualification; it now has a membership of more than 100,000 worldwide. Mensa has drawn unexpected members over the years, including boxers, actors and actresses, writers, politicians, and inventors.
<urn:uuid:0b4cd37a-e543-4bcf-94d2-d3c2b12090b0>
CC-MAIN-2022-40
https://areflect.com/2020/10/01/today-in-history-october-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00639.warc.gz
en
0.951478
461
2.6875
3
A group of academics at South Korea’s Gwangju Institute of Science and Technology (GIST) have utilized natural silk fibers from domesticated silkworms to build an environmentally friendly digital security system that they say is “practically unbreachable.” “The first natural physical unclonable function (PUF) […] takes advantage of the diffraction of light through natural microholes in native silk to create a secure and unique digital key for future security solutions,” the researchers said. Physical unclonable functions or PUFs refer to devices that leverage inherent randomness and microscopic differences in electronics introduced during manufacturing to generate a unique identifier (e.g., cryptographic keys) for a given set of inputs and conditions. In other words, PUFs are non-algorithmic one-way functions derived from uncopiable elements to create unbreakable identifiers for strong authentication. Over the years, PUFs have been widely used in smartcards to provide “silicon fingerprints” as a means of uniquely identifying cardholders based on a challenge-response authentication scheme. The newly proposed method from GIST employs native silk fibers produced by silkworms to create PUF-based tags that are then used to devise a PUF module. This mechanism banks on the underlying principle that a light beam experiences diffraction when it hits an obstacle, in this case, the silk fiber. images from Hacker News
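The challenge-response scheme mentioned above can be sketched in a few lines. In the toy example below, the physical PUF is simulated with a keyed hash standing in for the silk tag’s optical response; the names, data sizes, and flow are illustrative assumptions, not the GIST team’s actual protocol.

# Toy challenge-response sketch. A real PUF derives each response from physical
# randomness (here, the silk tag's diffraction pattern); we simulate that with a
# per-device secret so the flow is runnable. Names and flow are illustrative only.
import hashlib
import os
import secrets

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    # Stand-in for "measuring" the device: deterministic per device + challenge.
    return hashlib.sha256(device_secret + challenge).digest()

# Enrollment: the verifier measures the device once and stores challenge/response pairs.
device_secret = os.urandom(32)                       # the "unclonable" physical element
crp_table = {}
for _ in range(10):
    challenge = secrets.token_bytes(16)
    crp_table[challenge] = puf_response(device_secret, challenge)

# Authentication: pick a stored challenge, ask the device, compare responses.
challenge = secrets.choice(list(crp_table))
claimed = puf_response(device_secret, challenge)     # response measured from the device
print("authenticated:", secrets.compare_digest(claimed, crp_table[challenge]))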
<urn:uuid:75c5e76e-17cd-4531-ac87-224736a19494>
CC-MAIN-2022-40
https://news.cyberfreakz.com/researchers-use-natural-silk-fibers-to-generate-secure-keys-for-strong-authentication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00639.warc.gz
en
0.893719
292
2.96875
3
By David Camilo, Edited by Nima Schei
Computer vision faces a processing challenge: getting results as fast as possible. Usually, algorithmic processes such as machine learning and deep learning take place in the cloud, where all the data is analyzed and processed, and the results are then sent to the final user. This is because the processing requires a large amount of hardware resources that are not easily available in traditional infrastructures.
For example, in a traditional surveillance system with vision capabilities, the cameras record all the events that occur and the data is sent to the cloud to be analyzed. But with the delay in response, and without a person constantly monitoring the cameras, most of the information could be lost or arrive after an event has already occurred. For instance, in a residential building it is common for a guard to check the camera system all the time, but if he or she is distracted, an intruder could enter the building without being seen; and even if the system has computer vision capabilities, the analysis of the data could take time and may arrive once the intruder is already inside the building.
To solve this, systems have recently been developed using IoT solutions and edge computing. In the article “Edge Computing: Vision and Challenges”, the authors describe video analytics as one of the main challenges for edge computing. In this article, we present a few solutions and proposals that involve edge computing and IoT for computer vision.
IoT and Edge Computing
First, a little introduction to IoT (Internet of Things) and edge computing. The Internet of Things is an emerging technology in which common objects are connected to the internet in order to simplify our lives and automate various tasks. According to Statista, in 2016 there were 15.41 billion devices connected to the internet, and by 2021 there will be 42.62 billion devices.
IoT technology is used, for example, in smart homes. Think about having your coffee maker, your alarm clock, and your water heater connected in an IoT environment. You program your alarm clock to ring at 6 am; at 5:55 am the alarm clock sends a message to the coffee maker, which starts to make the coffee, so when the alarm rings at 6 you will have your coffee made and warm, ready to be drunk. You usually take 5 minutes drinking your coffee, so while you are enjoying it, the coffee maker sends a message to the water heater, which starts to warm up the water for you to take a shower when you finish your coffee.
IoT also has industrial applications, for example on an agricultural farm. Instead of having the sprinklers programmed to run at a specific time, it is possible to have sensors that measure the humidity of the soil and the hydration condition of the crops. Then, when the sensors detect that the crop needs water, they activate the sprinklers at the needed time. This could save water, and therefore money, and improve harvest quality.
The common IoT architecture includes sensors and actuators, which collect the data and send it to data centers to be processed and analyzed. With edge computing, there is no need to send all the data to the cloud to be processed and analyzed. Using an edge node, a piece of hardware with high processing capability, the information and data produced by the IoT devices can be processed at the edge, improving response time and reducing the delays in a system [As shown in figure 2].
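To make the edge-node pattern just described concrete, here is a hedged sketch: frames are captured and analyzed locally on the edge device, and only small detection summaries (not raw video) are forwarded upstream. The OpenCV people detector is used purely as a placeholder model, and the collector URL and node name are made-up assumptions, not part of any system discussed below.

# Sketch of an edge-node loop: analyze frames locally, ship only metadata upstream.
# Requires opencv-python and requests; the endpoint URL is a hypothetical placeholder.
import time

import cv2
import requests

EDGE_NODE_ID = "lobby-cam-01"
COLLECTOR_URL = "http://edge-server.local/detections"   # hypothetical collector

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)                 # local camera attached to the edge node
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))  # runs on the edge device
    if len(boxes) > 0:
        summary = {
            "node": EDGE_NODE_ID,
            "timestamp": time.time(),
            "pedestrians": len(boxes),
            "boxes": [list(map(int, b)) for b in boxes],
        }
        try:
            # Only a few hundred bytes of metadata leave the device, not the video.
            requests.post(COLLECTOR_URL, json=summary, timeout=1)
        except requests.RequestException:
            pass                           # tolerate a flaky uplink; keep processing
cap.release()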
Real-Time Video Analytics: The Killer App for Edge Computing
In this paper, the authors present what could be the future of video analytics involving edge computing. With the growth of technologies like edge computing, and the need for applications with low latency and fast responses, Ananthanarayanan et al. propose that a geographically distributed architecture of public clouds and edges is the only possible solution that can achieve the real-time requirements of video analytics applications. They also propose Rocket, a video analytics software stack that can handle different types of video applications such as surveillance and security systems, self-driving cars, or vision zero for traffic. One of the proposed applications that Rocket can handle is now in operation in Bellevue, Washington, using cameras to process all the information at traffic intersections.
REVAMP2T is a solution for pedestrian tracking using multi-camera edge video analytics. It was presented in 2019 by Christopher Neff, Matías Mendieta, Shrey Mohan, Mohammedreza Baharani, Samuel Rogers, and our CTO, Dr. Hamed Tabkhi. They developed an end-to-end IoT system and deep learning algorithms that allow it to detect, re-identify, and track pedestrians across multiple cameras without storing the streamed data. In its IoT system, the edge nodes act as the IoT devices and contain cameras equipped with the Nvidia AGX Xavier, a powerful device on which they perform their deep learning-based video analytics algorithms. The edge servers connect the edge nodes of their respective areas and perform the processing, communication, and storage needed for the application to work. It is important to note that REVAMP2T does not store any personal information about the pedestrians it detects.
ParkMaster was presented in 2015. Its purpose is to exploit the smartphone cameras inside vehicles in order to find an on-roadside parking spot, using image recognition algorithms running on the smartphones. In this case, the smartphone cameras work as IoT devices and at the same time as edge nodes, performing the computer vision processing. After the smartphone finishes the visual analytics algorithm, it uploads the data to the cloud, where it is stored and becomes available to other drivers, working as a crowdsourcing system.
In order to meet the real-time requirements of computer vision, edge computing and IoT are starting to be researched and used in both academic and industrial fields. The need for high processing power for the deep learning and machine learning algorithms used in these solutions is the perfect reason to exploit this new technology. We briefly presented three different solutions that mark the path to follow in the visual analytics field. This also brings new concerns about privacy that must be addressed by developers, regarding facial recognition and the storage of private information. Finally, as time passes, developers are moving more and more out of the cloud and toward the edge.
W. Shi, J. Cao, Q. Zhang, Y. Li and L. Xu, "Edge Computing: Vision and Challenges," in IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637-646, Oct. 2016, doi: 10.1109/JIOT.2016.2579198.
Statista (2016) Internet of Things (IoT) connected devices installed base worldwide from 2015 to 2025 [Online] Available at: www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/. Consulted: 06-21-2020.
Image taken from: Kristiani E., Yang CT., Wang Y.T., Huang CY. (2019) Implementation of an Edge Computing Architecture Using OpenStack and Kubernetes. In: Kim K., Baek N. (eds) Information Science and Applications 2018. ICISA 2018. Lecture Notes in Electrical Engineering, vol 514. Springer, Singapore.
G. Ananthanarayanan et al., "Real-Time Video Analytics: The Killer App for Edge Computing," in Computer, vol. 50, no. 10, pp. 58-67, 2017, doi: 10.1109/MC.2017.3641638.
Michael Martinez, "Real-Time Video Analytics: The Killer App for Edge Computing" [Online] Available at: https://www.computer.org/publications/tech-news/research/real-time-video-analytics-for-camera-surveillance-in-edge-computing Consulted: 06-22-2020.
Neff, Christopher & Mendieta, Matias & Mohan, Shrey & Baharani, Mohammadreza & Rogers, Samuel & Tabkhi, Hamed. (2019). REVAMP2T: Real-time Edge Video Analytics for Multi-camera Privacy-aware Pedestrian Tracking. IEEE Internet of Things Journal. PP. 1-1. 10.1109/JIOT.2019.2954804.
Sammarco, Matteo & Grassi, Giulio & Pau, Giovanni & Bahl, Victor & Jamieson, Kyle. (2015). ParkMaster: Leveraging Edge Computing in Visual Analytics.
<urn:uuid:66925d4b-38b1-4f99-9029-4f8610f53e99>
CC-MAIN-2022-40
https://hummingbirds.ai/iot-and-edge-computing-for-computer-vision/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00639.warc.gz
en
0.892261
1,839
3.046875
3
Cyber crimes are expected to cause more than 6 trillion dollars in damages in 2021. By the year 2025, it's estimated that the cost will go up to $10.5 trillion, making application security a top concern for organizations. According to the most recent Data Breach Investigations Report from Verizon, 39% of data breaches resulted from a web application vulnerability. Vulnerabilities are found throughout the software development life cycle (SDLC), from development to production. Quick detection before they can lead to a successful exploit is key to keeping companies safe from a data breach. With help from a vulnerability assessment, organizations can test their web application infrastructures checking for weaknesses that could later lead to an application attack. Following a respective procedure is an industry standard that is becoming ineffective due to today's agile development environments. Organizations should look to automated approaches that help developers keep up with production demands without sacrificing the security of their applications. What Is a Vulnerability Assessment? A vulnerability assessment is a planned process used to reveal application vulnerabilities. Security analysts use a database of known vulnerabilities and details about the application’s infrastructure to configure vulnerability assessment tools. Once configured, these tools are used to scan applications for vulnerabilities to give details about issues both in development and in production. Organizations benefit from catching vulnerabilities early in the SDLC when remediation is cost-effective and will not hold up production. Carefully planned and executed vulnerability assessments can protect against several common severe OWASP application threats including: SQL Injection Attacks SQL injection attacks pose the largest risk to organizations by targeting vulnerable queries. When an exploitable vulnerability is detected, attackers can insert malicious injections that get processed by databases to execute desired results. SQL injections can allow cyber criminals to come into contact with sensitive data and other information that could put users and systems in jeopardy. Cross-site Scripting (XSS) Attacks Cross-site scripting (XSS) attacks are another large threat to organizations, listed as one of the top four highest risks. Attackers target vulnerabilities in browser-side scripts, attempting to gain access into unauthorized areas. If attackers are successful in manipulating code, they can trick web applications into sending malicious code to a different end-user, either crashing the application or stealing sensitive data Session Hijacking Attacks Attackers launch session hijacking attacks on unsecured HTTP communications. If session tokens are intercepted due to weak security measures, attackers can cause damage to the application’s infrastructure, intercept sensitive data, or gain access to other applications going unnoticed. Session hijackings are often difficult to detect, with cybercriminals lurking undetected until they intercept credentials or user information. Process of Vulnerability Assessment Vulnerability assessments are a critical part of web application safety and protection. They require a team of expert security analysts along with penetration testing tools that scan applications for vulnerabilities. A standard industry process for assessing application vulnerabilities is an ongoing cycle that includes configuration of tools, vulnerability testing, analysis of results, and remediation. 
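To ground the SQL injection category described above, here is a small, hedged sketch contrasting a vulnerable string-built query with a parameterized one. It uses Python's built-in sqlite3 purely for illustration; it is not any particular scanner's test case, and the table and values are invented.

# Illustration of the SQL injection class: string-built queries vs. bound parameters.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table users (name text, secret text)")
conn.execute("insert into users values ('alice', 's3cr3t')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated into the SQL text, so the OR clause executes.
vulnerable = f"select * from users where name = '{attacker_input}'"
print(conn.execute(vulnerable).fetchall())               # leaks alice's row

# Safer: the input is passed as a bound parameter and treated purely as data.
safe = "select * from users where name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())   # returns nothing

conn.close()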
For the most effective approach to detecting and remediating application vulnerabilities throughout production, the process of vulnerability assessments is best if repeated at different stages throughout development and production. Many recommend daily scans of applications for vulnerabilities. Configuration of Vulnerability Assessment Tools The first step in the process of vulnerability assessment is configuring vulnerability assessment tools. Configurations are based on known vulnerabilities via databases and in-depth analysis by security experts. The effectiveness of legacy scanning tools relies heavily on knowledge of the application’s infrastructure and the expertise of the analysts configuring them. With most scanning tools, improper configurations could lead to inaccurate results and hold up production times. Execution of Vulnerability Scans After configurations, scans are executed depending on the stage of production of the application. Early in production, static application security testing (SAST) tools are used to prevent holding up developers from coding. In deployment, dynamic application security testing (DAST) tools test the application’s response when vulnerabilities are triggered. Most of the time, both tools are used for a more aggressive approach to finding and preventing vulnerabilities before a potential exploit. Analysis of Triggered Application Vulnerabilities Both SAST and DAST require expert analysis of vulnerabilities, where security experts will analyze and rank triggered vulnerabilities and propose means of remediation. Every single vulnerability is listed in PDF form after a scan with SAST tools, each one taking one hour or more to investigate. After investigations and rankings, results are passed to developers who will find a means of remediation. Remediation of Triggered Application Vulnerabilities Once security teams are done with analysis, they pass their findings to developers to remediate. Once remediations are in place, the cycle begins again, where previous vulnerabilities are checked for effectiveness and new vulnerabilities are hunted down. Legacy Vulnerability Assessment Tools Hold Up Production Application development has reached record speeds thanks to open-source libraries and scalable infrastructures. Advancements in development require advancements in security, which is why legacy tools create a bottleneck. Vulnerability scanning takes hours to execute and even longer to remediate. These consume valuable development time and slow release cycles. In response, 55% of developers admit to skipping security scans to meet deadlines due to the challenges associated with legacy application security testing (AST) tools. SAST and the False Positive There are multiple reasons organizations choose SAST. One is the need to demonstrate compliance with industry standards and internal security policies. Another is the fact that SAST is a proven technology, in place in organizations for numerous years. However, legacy SAST solutions were not designed for the modern SDLC. Scan times are too long and require specialized resources. Remediation is difficult, as line-of-code guidance is missing and developers often waste valuable time searching for and diagnosing the cause of a vulnerability. Because legacy SAST tools sit outside of the software, the piles of security alerts they generate often are chock full of false positives. Application security teams are directly impacted and spend hours triaging and diagnosing alerts that turn out to be false positives. 
With false positives taking longer than one hour to triage and diagnose and real vulnerabilities taking several hours to detect and another four hours to remediate, the amount of time incurred by legacy SAST tools is substantial. DAST and the False Negative DAST vulnerability scanning tools are incapable of detecting zero-day attacks and can only identify known threats. False negatives can pose serious risk and incur significantly more time to remediate—if and once they are found. Specifically, if found in production, finding the cause of a triggered vulnerability requires additional testing, which is time-consuming and can take an application offline. Even if the false-negative vulnerability was discovered in development, the time to remediate it can be significant. For example, 27% of developers say that they stop coding to remediate triggered vulnerabilities every single day. With more applications in development, this will only grow over time. Improving Vulnerability Assessments With Pipeline-native Static Scanning In a recent study, 61% of organizations admitted they experienced a successful application attack three or more times in the past year, and 72% of those attacks resulted in the loss of critical data. A different approach to application security is required, one that scales to the demands of the modern SDLC, one that eliminates the noise of false positives, and one that unleashes the full potential of DevOps and Agile. Legacy application security embeds security outside of the software. Instead, application security must be instrumented and reside within the software and be pipeline native. Contrast Scan uses pipeline-native static analysis to analyze code in runtime, which accelerates scan times up to 10x and remediation time 45x while improving efficiency by 30%. Further, unlike legacy SAST tools that do not integrate into continuous integration/continuous deployment (CI/CD) and into the DevSecOps life cycle, Contrast Scan integrates into the CI/CD pipeline and the Contrast Application Security Platform. Automated Vulnerability Assessments With IAST As part of the Contrast Application Security Platform, Contrast Scan integrates with Contrast Assess interactive application security testing (IAST) that empowers developers to automatically detect and fix vulnerabilities while writing code. And as is the case with Contrast Scan, Contrast Assess virtually eliminates false positives. Extending Security Protection and Observability Into Runtime The Contrast Application Security Platform extends security within the software into production with Contrast Protect that delivers continuous runtime protection and observability. Contrast Protect detects attacks on vulnerabilities in real time and blocks them before they can successfully exploit the vulnerability. The information—including known and unknown threats—is shared within the Application Security Platform, enabling Contrast Scan, Contrast Assess, and Contrast OSS to pinpoint exploitable vulnerabilities in development and test environments. When a vulnerability is triggered, developers know the exact location for quick detection and faster remediation. Like its Application Security Platform counterparts in development and test, Contrast Protect dramatically reduces the number of false positives—pinpointing only those vulnerabilities that pose a true risk. 
Vulnerability Assessments Using a DevSecOps Platform Approach Vulnerability assessments are a critical part of application security, and all organizations should adopt more aggressive approaches as the application attack surface expands. Instead of continuing the same cycle with a mix of outdated tools, it’s best to incorporate application security measures that integrate with development environments and empower developers to find and fix vulnerabilities when they are introduced. In addition, new approaches such as pipeline-native static analysis must replace legacy models that employ outside-in security. Finally, a comprehensive DevSecOps platform approach that integrates all elements of application security—development, test, and production—into one interface can generate significant improvements, helping to scale application security while unleashing the full potential of DevOps and Agile. [Report]: The State of DecSecOps Report
<urn:uuid:d56a3b33-2e0e-46f1-94c5-5c032531aff0>
CC-MAIN-2022-40
https://www.contrastsecurity.com/glossary/vulnerability-assessment?hsLang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00639.warc.gz
en
0.915842
2,009
3.0625
3
Corporations are always thinking about how to protect assets. A few of the white-collar crimes companies must consider include hacking/intrusions (cyber vulnerability), insider/outsider trading (convergence of cyber and financial crimes), the Foreign Corrupt Practices Act (FCPA), spear fishing (email compromise) and economic espionage. They must consider the possibility of internal corruption or external corruption, and environmental factors such as culture and competition contributing to these crimes. As protection, organizations can use cybersecurity, pen testing and data loss prevention tactics. WHAT IS CYBERSECURITY? Cybersecurity is the body of technologies, processes and practices designed to protect networks, computers, programs and data from attacks, damage or unauthorized access. The term "cybersecurity" refers to business function and technology tools used to protect information assets. Data is increasingly digitized and the internet is being used to save, access and retrieve vital information. Protecting this information is no longer a priority but has become a necessity for most companies and government agencies around the world. Here are some other key definitions: Data breaches are occurring more frequently. There are increasing pressures for businesses to step up efforts to protect personal information and prevent breaches. Cyber criminals attack to gain political, military or economic advantage. They usually steal money or information that can eventually be monetized (e.g., Social Security numbers, credit history, credit cards, health records, etc.). Cyber attacks may come from malicious outsiders, accidental loss, malicious insiders, hacktivists and state-sponsored actors. DEFINING INTERNAL AUDIT’S ROLE IN CYBERSECURITY When it comes to selecting a cybersecurity control framework, guidance and frameworks don’t need to be reinvented. Organizations should choose the one that works for them (e.g., ITIL or COBIT), add onto it and take responsibility for it. Here are some of the frameworks to choose from: - NIST Framework for Improving Critical Infrastructure Cybersecurity - ISACA COBIT 5 and the Emerging Cyber Nexus - SANS Institute and the Top 20 Critical Security Controls - DSS Control Catalog - ISO/IEC 27001 - Other Industry Specific Frameworks: FFIEC, HITRUST, etc. Cyber Risk: Roles and Responsibilities Effective risk management is the product of multiple layers of risk defense. Internal audit should support the board in understanding the effectiveness of cybersecurity controls. These three lines of defense for cybersecurity risks can be used as the primary means to demonstrate and structure roles, responsibilities and accountabilities for decision-making, risks and controls to achieve effective governance risk management and assurance. Business operations perform day-to-day risk management activity such as risk identification and risk assessment of IT risks. They provide risk responses by defining and implementing controls to mitigate key IT risks, and reporting on progress. An established risk and control environment helps accomplish this. Risk management is the process of drafting and implementing policies and procedures, ensuring that existing procedures are kept up to date, responding to new strategic priorities and risks, monitoring to ensure compliance with the updated policies, and providing surveillance over the effectiveness of the compliance controls embedded in the business. As the 3rd line of defense, what steps can internal audit take? 
- Work with management and the board of directors to develop a cybersecurity strategy and policy. - Identify and act on opportunities to improve the organization’s ability to identify, assess and mitigate cybersecurity risk to an acceptable level. - Recognize that cybersecurity risk is not only external; assess and mitigate potential threats that could result from the actions of an employee or business partner. - Leverage relationships with the audit committee and board to heighten awareness and knowledge on cyber threats, and ensure that the board remains highly engaged with cybersecurity matters and up to date on the changing nature of cybersecurity risk. - Ensure that cybersecurity risk is integrated formally into the audit plan. - Develop and keep current an understanding of how emerging technologies and trends are affecting the company and its cybersecurity risk profile. - Evaluate the organization’s cybersecurity program against the NIST Cybersecurity Framework, recognizing that because the framework does not reach down to the control level, the cybersecurity program may require additional evaluations of ISO 27001 and 27002. - Seek out opportunities to communicate to management that, with regard to cybersecurity, the strongest preventive capability requires a combination of human and technology security—a complementary blend of education, awareness, vigilance and technology tools. - Emphasize that cybersecurity monitoring and cyber incident response should be a top management priority; a clear escalation protocol can help make the case for—and sustain—this priority. - Address any IT/audit staffing and resource shortages as well as a lack of supporting technology/tools, either of which can impede efforts to manage cybersecurity risk. Internal Audit Focus Areas There are five key components crucial to cyber preparedness. Here’s how internal audit can contribute to each one: Protection: Internal audit provides a holistic approach to identifying where an organization may be vulnerable. Whether testing bring-your-own-device (BYOD) policies or reviewing third-party contracts for compliance with security protocols, internal audit offers valuable insight into protection efforts. Having effective IT governance is also crucial, and internal audit can provide assurance services for that area as well. Detection: Good data analytics often provide organizations the first hint that something is awry. Increasingly, internal audit is incorporating data analytics and other technology in its work. The 2015 CBOK practitioner survey found that five in 10 respondents use data mining and data analytics for risk and control monitoring, as well as fraud identification. Business Continuity: Proper planning is important for dealing with and overcoming any number of risk scenarios that could impact an organization’s ongoing operations, including a cyber attack, natural disaster or succession. Crisis Management/Communications: Preparedness in crisis management and crisis communications can significantly and positively impact an organization’s customers, shareholders and brand reputation. Internal audit can help with plan development, provide assurance checks of its effectiveness and timeliness, and ultimately offer analysis and critiques after plans are executed. Continuous Improvement: Internal audit may provide the most value by contributing insight gleaned from its extensive scope of work. 
Cyber preparedness assumes survival of a cyber attack, but it serves no purpose if the organization does not evolve and improve its strategies and protocols to be better prepared for the next attack. This information is further detailed in our Internal Audit’s Role in Cybersecurity Guide, including internal audit’s role with the board and example cybersecurity issues to look out for. Learn more about this topic by exploring these related publications on KnowledgeLeader:
<urn:uuid:83977f32-0461-4b77-b031-824f22a2905c>
CC-MAIN-2022-40
https://www.knowledgeleader.com/blog/what-internal-audits-role-cybersecurity
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00639.warc.gz
en
0.919335
1,386
2.875
3
Endpoint detection and response (EDR) is a form of endpoint protection that uses data collected from endpoint devices to understand how cyberthreats behave and the ways that organizations respond to cyberthreats. While some forms of endpoint protection are focused purely on blocking threats, endpoint detection and response attempts a more holistic approach. Through continuous endpoint monitoring and rigorous data analysis businesses can gain a better understanding of how one threat or another infects an endpoint and the mechanisms by which it spreads across a network. Instead of remediating threats offhand, organizations can use the insights gained via EDR tools to harden security against future attacks and reduce dwell time for a potential infection. Think of EDR security as a flight data recorder for your endpoints. During a flight, the so-called “black box” records dozens of data points; e.g., altitude, air speed, and fuel consumption. In the aftermath of a plane crash, investigators use the data from the black box to determine what factors may have contributed to the plane crash. In turn, these contributing factors are used to prevent similar crashes in the future. Likewise, endpoint telemetry taken during and after a cyberattack (e.g., processes running, programs installed, and network connections) can be used to prevent similar attacks. The term “endpoint threat detection and response” was coined by noted author and cybersecurity expert Anton Chavukin as a way of calling out “tools primarily focused on detecting and investigating suspicious activities (and traces of such) other problems on hosts/endpoints.” Nowadays, the term has been shortened to just “endpoint detection and response.” When people talk about EDR cyber security, they’re probably referring to a type of endpoint protection that includes EDR capabilities. Just keep in mind the two terms are not one in the same. A flight data recorder can’t take control of the airplane and avert disaster during a crash scenario. Likewise, EDR alone isn’t enough to stop a cyberattack without integrated antivirus, anti-malware, anti-exploit, and other threat mitigation capabilities. Endpoint detection and response is broadly defined by three types of behavior. This refers to EDR’s ability to be deployed on an endpoint, record endpoint data, then store that data in a separate location for analysis now or in the future. EDR can be deployed as a standalone program or included as part of a comprehensive endpoint security solution. The latter has the added benefit of combining multiple capabilities into a single endpoint agent and offering a single pane of glass through which admins can manage the endpoint. EDR technology can interpret raw telemetry from endpoints and produce endpoint metadata human users can use to determine how a previous attack went down, how future attacks might go down, and actions that can be taken to prevent those attacks. EDR scans for programs, processes, and files matching known parameters for malware. Threat hunting also includes the ability to search all open network connections for potential unauthorized access. Incident response refers to EDR’s ability to capture images of an endpoint at various times and re-image or rollback to a previous good state in the event of an attack. EDR also gives administrators the option to isolate endpoints and prevent further spread across the network. Remediation and rollback can be automated, manual, or a combination of the two. 
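As a rough, hedged sketch of the "record endpoint telemetry" behavior described above, the snippet below snapshots running processes and open network connections with the psutil library and flags anything not seen in a stored baseline. A real EDR agent does far more (kernel-level sensors, central storage, isolation, rollback), and the baseline file name is an invented placeholder; treat this only as an illustration of the kind of data being collected.

# Minimal endpoint telemetry sketch using psutil: snapshot processes and network
# connections, then flag anything that was not present in a previously saved baseline.
import json
import pathlib

import psutil

BASELINE_FILE = pathlib.Path("baseline.json")   # hypothetical local baseline store

def snapshot() -> dict:
    procs = sorted({p.info["name"] for p in psutil.process_iter(["name"]) if p.info["name"]})
    conns = sorted({
        f'{c.raddr.ip}:{c.raddr.port}'
        for c in psutil.net_connections(kind="inet")
        if c.raddr
    })
    return {"processes": procs, "remote_endpoints": conns}

current = snapshot()
if BASELINE_FILE.exists():
    baseline = json.loads(BASELINE_FILE.read_text())
    new_procs = set(current["processes"]) - set(baseline["processes"])
    new_conns = set(current["remote_endpoints"]) - set(baseline["remote_endpoints"])
    for item in sorted(new_procs):
        print("new process since baseline:", item)
    for item in sorted(new_conns):
        print("new remote connection since baseline:", item)
else:
    BASELINE_FILE.write_text(json.dumps(current))   # first run: record the baseline
    print("baseline recorded")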
“Think of EDR as a flight data recorder for your endpoints. During a flight, the so-called “black box” records dozens of data points; e.g., altitude, air speed, and fuel consumption. In the aftermath of a plane crash, investigators use the data from the black box to determine what factors may have contributed to the plane crash ... Likewise, endpoint telemetry taken during and after a cyberattack (e.g.,processes running, programs installed, and network connections) can be used to prevent similar attacks.” Before going into the difference between EDR and antivirus, let’s get our definitions straight. We know EDR is a kind of endpoint protection that leverages endpoint data and the things we learn from that data as a bulwark against future infection—so what is antivirus? Malwarebytes Labs defines antivirus as “an antiquated term used to describe security software that detects, protects against, and removes malware.” In that sense, “antivirus” is a bit of a misnomer. Antivirus stops computer viruses, but it can also stop modern threats like ransomware, adware, and Trojans as well. The more modern term “anti-malware” attempts to bring the terminology up to date with what the technology actually does; i.e., stop malware. People tend to use the two terms interchangeably. For the purposes of this article, we’ll use the more modern term and just call it “anti-malware.” Now, to understand the difference between EDR and anti-malware we have to look at the use cases. On one hand you have off the shelf anti-malware designed for the consumer looking to protect a few personal devices (like a smartphone, laptop, and tablet) on their home network. On the other hand you have EDR for the business user, protecting hundreds, potentially thousands of endpoint devices. Devices can be a mixture of work-owned and employee-owned (BYOD). And employees may be connecting to the company network from any number of potentially unsecure public WiFi hotspots. When it comes to threat analysis, the typical consumer only wants to know that their devices are protected. Reporting doesn’t extend much beyond how many threats and what kinds of threats were blocked in a given span of time. That’s not enough for a business user. Security admins need to know “What happened on my endpoints previously and what’s happening on my endpoints right now?” Anti-malware isn’t great at answering these questions, but this is where EDR excels. At any given moment EDR is a window into the day-to-day functions of an endpoint. When something happens outside the norm, admins are alerted, presented with the data and given a number of options; e.g., isolate the endpoint, quarantine the threat, or remediate. According to Malwarebytes Lab’s 2021 State of Malware Report, malware detections on Windows business computers decreased by 24% overall. Cybercriminals are moving away from piecemeal attacks on consumers, instead focusing their efforts on not just businesses, but educational institutions and government entities as well. The biggest threat at the moment is ransomware. Ransomware detections on business networks are at an all-time high, due largely to the Ryuk, Phobos, GandCrab, and Sodinokibi ransomware strains. Not to mention Trojans like Emotet, which carry secondary ransomware payloads. And it’s not just the big name, Fortune 500 companies getting hit. 
Organizations of all sizes are being targeted by cybercriminal gangs, lone-wolf threat actors, hacktivists, and state-sponsored hackers looking for big scores from companies with caches of valuable data on their networks. Again, it's the value of the data, not the size of the company, that matters. Local governments, schools, hospitals, and managed service providers (MSPs) are just as likely to be the victims of a data breach or ransomware infection. Consider the average cost of a data breach. The 2021 IBM "Cost of a Data Breach Report" puts the number at $4.24 million. In the US, the cost was $1.97 million higher in breaches where remote work played a role. With this sobering data in mind, endpoint protection like Malwarebytes Endpoint Protection and Response is crucial to protecting your endpoints, your employees, your data, the customers you serve, and your business from a dangerous array of cyberthreats and the damage they can cause.
Endpoint Detection and Response (EDR), or Endpoint Threat Detection and Response (ETDR), continuously monitors devices to readily detect, evaluate, and respond to cyberthreats. EDR supports your business's cybersecurity posture as an integrated endpoint security solution. EDR security solutions work by monitoring suspicious threat actor activity across all endpoints and workloads, providing bolstered network visibility into the attack surface to help security teams detect and respond to incidents that would otherwise go unseen. With an EDR solution, organizations can continuously monitor endpoints in real time through the combined capabilities of endpoint management, data analysis, threat hunting, and incident response.
Antivirus solutions traditionally use signature-based detection to identify threats on a device. By comparing file signatures against a list of known computer viruses, AV software can recognize the virus and block it from attacking. Unlike antivirus software, EDR solutions use behavioral analysis and threat intelligence to gain visibility into endpoint activity.
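The signature-versus-behavior distinction can be illustrated with a short sketch. Everything below is invented for the example: the hash list, the rename rule, and the threshold are placeholders, and real products use far richer signature databases and behavioral models.

import hashlib

KNOWN_BAD_HASHES = {"0123456789abcdef0123456789abcdef"}  # hypothetical list of known-bad file hashes

def signature_scan(path):
    # Classic antivirus-style check: hash the file and look it up against known signatures
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest in KNOWN_BAD_HASHES

def behavioral_check(events):
    # EDR-style rule: flag a burst of renames to a ".encrypted" extension,
    # a crude stand-in for ransomware-like behavior observed in recorded telemetry
    renames = [e for e in events
               if e["type"] == "file_rename" and e["new_name"].endswith(".encrypted")]
    return len(renames) > 50

The first function can only catch what is already known; the second works from what the endpoint is actually doing, which is why EDR pairs recorded telemetry with behavioral analysis rather than relying on signatures alone.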
<urn:uuid:521b7d66-77d6-49eb-a599-6395c3d96663>
CC-MAIN-2022-40
https://www.malwarebytes.com/cybersecurity/business/what-is-edr
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00639.warc.gz
en
0.933213
1,868
2.75
3
EVERYTHING YOU NEED TO KNOW ABOUT PRIVATE PUBLIC PARTNERSHIPS (PPP)
Private Public Partnership (PPP) is the term used to describe a private business collaborating with its local police agency by sharing video footage and data through a camera share program, with the goal of contributing to public safety efforts. Through strong community policing and engagement, cities can experience a greater sense of trust and confidence.
Private Public Partnership FAQs
Why is Private Public Partnership needed?
Private public partnership (PPP) is needed to obtain video footage where agencies typically do not have access. It is also an essential part of including the community in the effort to solve crimes. Community policing and engagement are critical aspects of ensuring a safer city and elicit trust within the community.
What are the advantages and disadvantages of Private Public Partnership?
Some advantages of private public partnership (PPP) are community partnership and engagement, as well as increased access to critical evidence. One disadvantage is that a PPP program can take time to implement, as it requires various steps and approvals. However, it is because of this extensive creation process that communities are able to develop a PPP that is a perfect fit for their agency and the people they serve.
Why is Private Public Partnership important?
Private Public Partnership (PPP) is important because it gives law enforcement more access to critical evidence and includes the public in the safeguarding of their community. When there is a strong connection between law enforcement agencies and communities, there is a new level of trust and confidence within the community.
What does a successful Private Public Partnership look like?
A successful private public partnership allows for community policing and engagement by collaborating with the public in order to enhance public safety efforts.
<urn:uuid:ac8c2971-64b4-4d1a-9ce7-116276452feb>
CC-MAIN-2022-40
https://www.motorolasolutions.com/en_us/solutions/private-public-partnership.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00039.warc.gz
en
0.955856
355
2.640625
3
When it is not possible to eliminate an SQL statement to improve performance, it might be possible to simplify the statement. Consider the following questions:
- Are all columns required?
- Can a table join be removed?
- Is a join or WHERE restriction necessary for additional SQL statements in a given function?
An important requirement of simplification is to capture all SQL statements, in order, for a given executed function. Using a sampling process will not identify all possible improvements.
Here is an example of a query simplification:
mysql> SELECT fid, val, val
    -> FROM table1
    -> WHERE fid = X;
This query returned 350,000 rows of data that were cached by the application server during system startup. For this query, two-thirds of the result set that was passed over the network was redundant. The fid column was passed as a restriction to the SQL statement, and therefore the value was identical for all rows. The val column was also unnecessarily duplicated. The optimal SQL statement is shown here:
mysql> SELECT val
    -> FROM table1
    -> WHERE fid = X;
The following is an example of a table join simplification that requires knowledge of all SQL statements for a given function. During the process, the following SQL statement is executed to retrieve a number of rows:
mysql> SELECT /* Query 1 */ id FROM table1
    -> WHERE col1 = X
    -> AND col2 = Y;
At a later time in the execution of this function, the following statement was executed using the id value from the previous SQL statement:
mysql> SELECT /* Query 2 */ table2.val1, table2.val2, table2.val3
    -> FROM table2 INNER JOIN table1 USING (id)
    -> WHERE table2.id = 9
    -> AND table1.col1 = X
    -> AND table1.col2 = Y
    -> AND table2.col1 = Z;
This second SQL statement could be simplified to this:
mysql> SELECT /* Query 2 */ val1, val2, val3
    -> FROM table2
    -> WHERE table2.id = 9
    -> AND table2.col1 = Z;
As the first query performs the necessary restriction for the id column (that is, col1 = X and col2 = Y), the join and restriction clauses for table1 are redundant. Removing this join condition simplifies the SQL statement and removes a potential problem if only one statement is altered in the future.
The MySQL database supports subqueries. The performance of subqueries may be significantly slower in certain circumstances than using a normal table join. Here is an example:
SELECT id, label FROM code_opts
WHERE code_id = (SELECT id FROM codes WHERE typ='CATEGORIES')
ORDER BY seq
This SQL statement can simply be rewritten as follows:
SELECT o.id, o.label FROM code_opts o
INNER JOIN codes c ON o.code_id = c.id
WHERE c.typ='CATEGORIES'
ORDER BY o.seq
The change might appear to be subtle; however, this approach for more complex queries can result in improved query performance.
Understanding the Impact of Views
Developers should know the true class of the table that is used for SQL statements. If the object is actually a view, the impact of SQL optimizations can be masked by the complexity of join conditions for the view definition. A common problem with data warehouse (DW) and business intelligence (BI) tools is the creation of views on top of views. For any view definition, additional information may be retrieved that is not ultimately required for an SQL statement. In MySQL, complex queries involving views can be easily improved by using the necessary underlying tables.
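When simplifying statements like the ones above, it helps to confirm that the rewrite really does less work. One quick check is to compare EXPLAIN output for both forms. The sketch below is a minimal example, assuming the mysql-connector-python package, placeholder connection credentials, and the code_opts/codes schema from the subquery example; adjust all of these for your own environment.

import mysql.connector

# Placeholder credentials; replace with your own connection details
conn = mysql.connector.connect(host="localhost", user="app", password="secret", database="test")
cur = conn.cursor()

subquery = ("SELECT id, label FROM code_opts "
            "WHERE code_id = (SELECT id FROM codes WHERE typ='CATEGORIES') ORDER BY seq")
join_form = ("SELECT o.id, o.label FROM code_opts o "
             "INNER JOIN codes c ON o.code_id = c.id "
             "WHERE c.typ='CATEGORIES' ORDER BY o.seq")

for name, sql in (("subquery", subquery), ("join", join_form)):
    cur.execute("EXPLAIN " + sql)
    print(name)
    for row in cur.fetchall():   # one row per table touched; compare the access type, estimated rows, and Extra columns
        print("  ", row)

cur.close()
conn.close()

If the join form touches fewer rows or uses a better access type, the simplification is paying off; if the plans are identical, the rewrite is still worthwhile for readability and to avoid redundant join conditions.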
<urn:uuid:2984925b-d87b-4bd1-8497-3cfb8510e873>
CC-MAIN-2022-40
https://logicalread.com/simplify-sql-statements-improve-mysql-perf-mc12/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00039.warc.gz
en
0.854398
780
2.78125
3
An intrusion prevention system (IPS) is an automated network security device used to monitor and respond to potential threats. Like an intrusion detection system (IDS), an IPS determines possible threats by examining network traffic. Because an exploit may be carried out very quickly after an attacker gains access, intrusion prevention systems administer an automated response to a threat, based on rules established by the network administrator. The main functions of an IPS are to identify suspicious activity, log relevant information, attempt to block the activity, and finally to report it. IPSs include firewalls, anti-virus software, and anti-spoofing software. In addition, organizations will use an IPS for other purposes, such as identifying problems with security policies, documenting existing threats, and deterring individuals from violating security policies. IPSs have become an important component of all major security infrastructures in modern organizations.
How An IPS Works
An intrusion prevention system works by actively scanning forwarded network traffic for malicious activities and known attack patterns. The IPS engine analyzes network traffic and continuously compares the bitstream with its internal signature database for known attack patterns. An IPS might drop a packet determined to be malicious and follow up this action by blocking all future traffic from the attacker's IP address or port. Legitimate traffic can continue without any perceived disruption in service. Intrusion prevention systems can also perform more complicated observation and analysis, such as watching and reacting to suspicious traffic patterns or packets. Detection mechanisms can include:
- Address matching
- HTTP string and substring matching
- Generic pattern matching
- TCP connection analysis
- Packet anomaly detection
- Traffic anomaly detection
- TCP/UDP port matching
An IPS will typically record information related to observed events, notify security administrators, and produce reports. To help secure a network, an IPS can automatically receive prevention and security updates in order to continuously monitor and block emerging Internet threats. Many IPSs can also respond to a detected threat by actively preventing it from succeeding. They use several response techniques, which involve:
- Changing the security environment – for example, by configuring a firewall to increase protections against previously unknown vulnerabilities.
- Changing the attack's content – for example, by replacing otherwise malicious parts of an email, like false links, with warnings about the deleted content.
- Sending automated alarms to system administrators, notifying them of possible security breaches.
- Dropping detected malicious packets.
- Resetting a connection.
- Blocking traffic from the offending IP address.
Intrusion prevention systems can be organized into four major types:
- Network-based intrusion prevention system (NIPS): Analyzes protocol activity across the entire network, looking for any untrustworthy traffic.
- Wireless intrusion prevention system (WIPS): Analyzes network protocol activity across the entire wireless network, looking for any untrustworthy traffic.
- Host-based intrusion prevention system (HIPS): A secondary software package that follows a single host for malicious activity, and analyzes events occurring within said host.
- Network behavior analysis (NBA): Examines network traffic to identify threats that generate strange traffic flows.
The most common threats detected this way are distributed denial of service attacks, various forms of malware, and policy abuses. Many systems rely on pattern matching to detect attacks, so by making slight adjustments to the attack architecture, an attacker can sometimes avoid detection.
IPS Detection Methods
The majority of intrusion prevention systems use one of three detection methods: signature-based, statistical anomaly-based, and stateful protocol analysis.
- Signature-based detection: Signature-based IDS monitors packets in the network and compares them with predetermined attack patterns, known as "signatures".
- Statistical anomaly-based detection: An anomaly-based IDS will monitor network traffic and compare it to expected traffic patterns. The baseline identifies what is "normal" for that network – what sort of packets generally travel through the network and what protocols are used. It may, however, raise a false-positive alarm for legitimate use of bandwidth if the baselines are not intelligently configured.
- Stateful protocol analysis detection: This method identifies protocol deviations by comparing observed events with pre-determined activity profiles of normal activity.
Modern networked business environments require a high level of security to ensure safe and trusted communication of information between various organizations. An intrusion prevention system acts as an adaptable safeguard technology for system security after traditional technologies. The ability to prevent intrusions through an automated action, without requiring IT intervention, means lower costs and greater performance flexibility. Cyber attacks will only become more sophisticated, so it is important that protection technologies adapt along with their threats.
- Intrusion Prevention System or IPS
- How to Configure the Intrusion Prevention System (IPS)
- Whitepaper: Barracuda Web Application Firewall vs. Intrusion Prevention Systems (IPS)
How Barracuda Can Help
Barracuda CloudGen Firewalls incorporate an advanced Intrusion Detection and Prevention System (IDS/IPS) which provides real-time network protection from a broad range of network threats, vulnerabilities, and exploits. As a result, it is able to identify and block advanced evasion attempts and obfuscation techniques that are used by attackers to circumvent and trick traditional intrusion prevention systems. In addition, all Barracuda CloudGen Firewall models can apply IPS/IDS to SSL-encrypted web traffic using the standard 'trusted man-in-the-middle' approach. Do you have more questions about Intrusion Prevention Systems? Contact us today!
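As a rough illustration of the signature-based and anomaly-based methods described above (not Barracuda's implementation), the sketch below applies both checks to simplified packet records. The payload patterns, the traffic baseline, and the packet structure are all invented for the example.

SIGNATURES = [b"/etc/passwd", b"cmd.exe /c", b"UNION SELECT"]   # toy payload patterns

def signature_match(payload: bytes) -> bool:
    # Signature-based detection: compare traffic against known attack patterns
    return any(pattern in payload for pattern in SIGNATURES)

def anomaly_check(packets_per_second: float, baseline: float = 200.0) -> bool:
    # Statistical anomaly-based detection: flag traffic far above the learned baseline
    return packets_per_second > 5 * baseline

def inspect(packet, packets_per_second, blocked_ips):
    src = packet["src_ip"]
    if src in blocked_ips:
        return "drop"                      # previously offending address stays blocked
    if signature_match(packet["payload"]) or anomaly_check(packets_per_second):
        blocked_ips.add(src)               # automated response: block future traffic from this source
        return "drop"
    return "forward"                       # legitimate traffic continues without disruption

A real IPS performs this kind of decision inline on every forwarded packet, which is why the automated drop/block response matters: there is no time to wait for an administrator.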
<urn:uuid:a26a8a48-26df-4120-abe1-4e067843005f>
CC-MAIN-2022-40
https://www.barracuda.com/glossary/intrusion-prevention-system
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00039.warc.gz
en
0.893557
1,137
3.40625
3
SQL Injection (SQLi) is a type of an injection attack that makes it possible to execute malicious SQL statements. These statements control a database server behind a web application. Attackers can use SQL Injection vulnerabilities to bypass application security measures. They can go around authentication and authorization of a web page or web application and retrieve the content of the entire SQL database. They can also use SQL Injection to add, modify, and delete records in the database. An SQL Injection vulnerability may affect any website or web application that uses an SQL database such as MySQL, Oracle, SQL Server, or others. Criminals may use it to gain unauthorized access to your sensitive data: customer information, personal data, trade secrets, intellectual property, and more. SQL Injection attacks are one of the oldest, most prevalent, and most dangerous web application vulnerabilities. The OWASP organization (Open Web Application Security Project) lists injections in their OWASP Top 10 2017 document as the number one threat to web application security. How and Why Is an SQL Injection Attack Performed To make an SQL Injection attack, an attacker must first find vulnerable user inputs within the web page or web application. A web page or web application that has an SQL Injection vulnerability uses such user input directly in an SQL query. The attacker can create input content. Such content is often called a malicious payload and is the key part of the attack. After the attacker sends this content, malicious SQL commands are executed in the database. SQL is a query language that was designed to manage data stored in relational databases. You can use it to access, modify, and delete data. Many web applications and websites store all the data in SQL databases. In some cases, you can also use SQL commands to run operating system commands. Therefore, a successful SQL Injection attack can have very serious consequences. - Attackers can use SQL Injections to find the credentials of other users in the database. They can then impersonate these users. The impersonated user may be a database administrator with all database privileges. - SQL lets you select and output data from the database. An SQL Injection vulnerability could allow the attacker to gain complete access to all data in a database server. - SQL also lets you alter data in a database and add new data. For example, in a financial application, an attacker could use SQL Injection to alter balances, void transactions, or transfer money to their account. - You can use SQL to delete records from a database, even drop tables. Even if the administrator makes database backups, deletion of data could affect application availability until the database is restored. Also, backups may not cover the most recent data. - In some database servers, you can access the operating system using the database server. This may be intentional or accidental. In such case, an attacker could use an SQL Injection as the initial vector and then attack the internal network behind a firewall. There are several types of SQL Injection attacks: in-band SQLi (using database errors or UNION commands), blind SQLi, and out-of-band SQLi. You can read more about them in the following articles: Types of SQL Injection (SQLi), Blind SQL Injection: What is it. To follow step-by-step how an SQL Injection attack is performed and what serious consequences it may have, see: Exploiting SQL Injection: a Hands-on Example. 
Simple SQL Injection Example
The first example is very simple. It shows how an attacker can use an SQL Injection vulnerability to go around application security and authenticate as the administrator.
The following script is pseudocode executed on a web server. It is a simple example of authenticating with a username and a password. The example database has a table named users with the following columns: id, username, and password.
# Define POST variables
uname = request.POST['username']
passwd = request.POST['password']
# SQL query vulnerable to SQLi
sql = "SELECT id FROM users WHERE username='" + uname + "' AND password='" + passwd + "'"
# Execute the SQL statement
database.execute(sql)
These input fields are vulnerable to SQL Injection. An attacker could use SQL commands in the input in a way that would alter the SQL statement executed by the database server. For example, they could use a trick involving a single quote and set the passwd field to:
password' OR 1=1
As a result, the database server runs the following SQL query:
SELECT id FROM users WHERE username='username' AND password='password' OR 1=1'
Because of the OR 1=1 statement, the WHERE clause returns the first id from the users table no matter what the username and password are. The first user id in a database is very often the administrator. In this way, the attacker not only bypasses authentication but also gains administrator privileges. They can also comment out the rest of the SQL statement to control the execution of the SQL query further:
-- MySQL, MSSQL, Oracle, PostgreSQL, SQLite
' OR '1'='1' --
' OR '1'='1' /*
-- MySQL
' OR '1'='1' #
-- Access (using null characters)
' OR '1'='1' %00
' OR '1'='1' %16
Example of a Union-Based SQL Injection
One of the most common types of SQL Injection uses the UNION operator. It allows the attacker to combine the results of two or more SELECT statements into a single result. The technique is called union-based SQL Injection. The following is an example of this technique. It uses the web page testphp.vulnweb.com, an intentionally vulnerable website hosted by Acunetix.
The following HTTP request is a normal request that a legitimate user would send:
GET http://testphp.vulnweb.com/artists.php?artist=1 HTTP/1.1
Host: testphp.vulnweb.com
The artist parameter is vulnerable to SQL Injection. The following payload modifies the query to look for an inexistent record. It sets the value in the URL query string to -1. Of course, it could be any other value that does not exist in the database. However, a negative value is a good guess because an identifier in a database is rarely a negative number.
In SQL Injection, the UNION operator is commonly used to attach a malicious SQL query to the original query intended to be run by the web application. The result of the injected query will be joined with the result of the original query. This allows the attacker to obtain column values from other tables.
GET http://testphp.vulnweb.com/artists.php?artist=-1 UNION SELECT 1, 2, 3 HTTP/1.1
Host: testphp.vulnweb.com
The following example shows how an SQL Injection payload could be used to obtain more meaningful data from this intentionally vulnerable site:
GET http://testphp.vulnweb.com/artists.php?artist=-1 UNION SELECT 1,pass,cc FROM users WHERE uname='test' HTTP/1.1
Host: testphp.vulnweb.com
How to Prevent an SQL Injection
The only sure way to prevent SQL Injection attacks is input validation and parameterized queries, including prepared statements. The application code should never use the input directly.
The developer must sanitize all input, not only web form inputs such as login forms. They must remove potentially malicious code elements such as single quotes. It is also a good idea to turn off the visibility of database errors on your production sites. Database errors can be used with SQL Injection to gain information about your database.
If you discover an SQL Injection vulnerability, for example using an Acunetix scan, you may be unable to fix it immediately. For example, the vulnerability may be in open source code. In such cases, you can use a web application firewall to sanitize your input temporarily.
To learn how to prevent SQL Injection attacks in the PHP language, see: Preventing SQL Injection Vulnerabilities in PHP Applications and Fixing Them. To find out how to do it in many other different programming languages, refer to the Bobby Tables guide to preventing SQL Injection.
Frequently asked questions
SQL Injection is a web vulnerability caused by mistakes made by programmers. It allows an attacker to send commands to the database that the website or web application communicates with. This, in turn, lets the attacker get data from the database or even modify it.
SQL Injection is a very old vulnerability – it was discovered in 1998. However, according to our 2020 research, 8 percent of websites and web applications have SQL Injection vulnerabilities. A successful SQL Injection attack may lead to a complete compromise of a system or theft of the entire database. For example, an SQL Injection attack in 2019 led to the theft of the complete tax data of 5 million people.
The only efficient way to detect SQL Injections is by using a vulnerability scanner, often called a DAST tool (dynamic application security testing). Acunetix is known to be top-of-the-line in detecting SQL Injections and other vulnerabilities. Acunetix is able to reach where other scanners fail.
The best way to prevent SQL Injections is to use safe programming functions that make SQL Injections impossible: parameterized queries (prepared statements) and stored procedures. Every major programming language currently has such safe functions, and every developer should only use such safe functions to work with the database.
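To show what the recommended fix looks like in practice, here is a minimal sketch using Python's built-in sqlite3 module; any database driver that supports placeholders works the same way. The table and column names mirror the login example above and are assumed for illustration, not taken from a real application.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password TEXT)")
conn.execute("INSERT INTO users (username, password) VALUES ('alice', 's3cret')")

def login_unsafe(uname, passwd):
    # Vulnerable: user input is concatenated straight into the SQL text
    sql = "SELECT id FROM users WHERE username='" + uname + "' AND password='" + passwd + "'"
    return conn.execute(sql).fetchone()

def login_safe(uname, passwd):
    # Parameterized query: the driver sends values separately, so ' OR '1'='1 stays a literal string
    sql = "SELECT id FROM users WHERE username=? AND password=?"
    return conn.execute(sql, (uname, passwd)).fetchone()

print(login_unsafe("alice", "anything' OR '1'='1"))  # returns a row: authentication bypassed
print(login_safe("alice", "anything' OR '1'='1"))    # returns None: the injection attempt fails

The unsafe version reproduces the bypass described earlier; the safe version treats the entire password string as data, which is exactly why prepared statements are the recommended defense.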
<urn:uuid:3a77eaab-78e5-47ad-a23e-05280705820e>
CC-MAIN-2022-40
https://www.acunetix.com/websitesecurity/sql-injection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00039.warc.gz
en
0.866043
2,045
3.5625
4
Application Traffic Management Definition Application traffic management (ATM) refers to techniques for intercepting, analyzing, decoding, and directing web traffic to the optimum resources based on specific policies. Also called network traffic management, it allows network administrators to significantly increase overall network application performance by routing and filtering packets based on content in their payloads or headers. By applying these standards for security, scalability, availability, and performance to any IP-based application users can save money and improve efficiency. What is Application Traffic Management? Application traffic management (ATM) refers to controlling and monitoring all application connectivity and availability issues. By enhancing availability, efficiency, and security, ATM addresses capacity and ensures the network is a well-managed, high-value resource. Application Delivery Controllers (ADC) provide ATM by quickly optimizing the delivery and routing of specific types of data to ideal resources. Unlike legacy appliance-based ADCs, modern solutions use deep packet inspection combined with rules and policies to determine what type of data it is and other application performance metrics, in the process of finding the right organizational servers to route it to. This allows the system to prioritize certain types of data, sending mission-critical data preferentially to high-performing servers. Application Delivery Controllers and Application Traffic Management Application delivery controllers, also called load balancers, handle several types of application traffic as they discern how to route data: Burst Traffic. This is inconsistent traffic (like downloads with large files such as video or images) that comes in bursts and then subsides. This kind of traffic exhausts application availability by immediately consuming a high bandwidth, so load balancers can contain it by limiting bandwidth access. Interactive Traffic. Interactive traffic consists of short pairs of requests and responses (like online shopping or browsing) that involve applications and end-users in real-time interactions. These exchanges result in poor application response time and reduced bandwidth. Manage interactive traffic by prioritizing requirements over other traffic. Latency Sensitive Traffic. This traffic is time-sensitive, such as live gaming, VoIP, video streaming, and Video Conferencing. The application depends on a steady stream of traffic and on-time service, but still may experience sudden bursts of traffic despite an ongoing demand for required data packets. A range of bandwidth based on priorities is the way to handle this issue with load balancing. Non-Real-Time Traffic. Emails and batch processing applications generate non-real-time traffic in which real-time delivery is less critical. Scheduling bandwidth outside business hours is important to effective traffic management here. Benefits of Application Traffic Management Solutions A well-run network with smarter ATM delivers several key benefits for organizations: Simplified infrastructure. A public cloud service that replaces hardware-based application servers is better equipped to scale without sacrificing quality. Reduced costs. When application performance improves and brings user experience along with it, companies’ costs for customer support drop. A cloud-native process for application delivery and traffic management also saves on maintenance and hardware acquisition costs. Enhanced productivity. 
When team members can easily access services and information on applications anywhere, from any device, efficiency is optimal. Applications can perform faster with cloud-native management. Improved end-user experience. Faster, smoother, more user-centric experiences are possible with efficient cloud-based application traffic management. Improved security performance. Smarter routing and optimal resource management protect the entire system from internal and external threats and keep applications secure. Application Traffic Management Best Practices There are several important best practices for application traffic management to keep in mind. Application traffic managers work with two main sources of data: flow data and packet data. The controller acquires flow data from routers and other Layer 3 devices. Flow data informs the system about traffic volumes and the routes network packets travel. This helps improve performance by better using available resources because it identifies unauthorized WAN traffic. The application traffic manager sources packet data from mirror ports and SPAN to better understand how the application and users are interacting and to track those interactions on WAN. The controller can use these data sets to assess security issues such as suspicious malware. Real-time and historical data Real-time data is critical to effective application traffic monitoring, but historical data is also important to optimized performance. Both types of data are crucial to analyzing past events, identifying trends, and comparing new activity to past behavior. Internal and external traffic monitoring Most networks are configured with intrusion detection systems, but a huge number of them lack sufficient internal traffic monitoring. This means the entire system is vulnerable to internal damage from a rogue IoT device or corrupt mobile that is inside. This also means that any internal errors or misconfiguration could result in the firewall allowing malicious traffic. Does Avi Offer Application Traffic Management Solutions? Yes. Avi delivers multi-cloud application services such as load balancing and ingress controller services for containerized applications with microservices architecture through dynamic service discovery, ATM, and web application security. Container Ingress provides scalable and enterprise-class Kubernetes ingress traffic management, including local and global server load balancing (GSLB), web application firewall (WAF), and performance monitoring, across multi-cluster, multi-region, and multi-cloud environments. Avi integrates seamlessly with Kubernetes for microservices and container orchestration and security. For more on the actual implementation of load balancing, security applications and web application firewalls check out our Application Delivery How-To Videos.
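As a simplified sketch of the policy-based classification and routing described above (not Avi or VMware configuration), the example below maps requests to invented traffic classes and then to hypothetical server pools; the class names, thresholds, paths, and pool addresses are all placeholders.

POOLS = {
    "low_latency": ["10.0.1.10", "10.0.1.11"],   # hypothetical high-performing servers
    "bulk":        ["10.0.2.10"],
    "deferred":    ["10.0.3.10"],
}

def classify(request):
    # Rough stand-ins for the traffic classes discussed above
    if request["content_type"] in ("video/stream", "voip"):
        return "latency_sensitive"
    if request["size_bytes"] > 50_000_000:
        return "burst"
    if request["path"].startswith("/batch") or request["path"].startswith("/mail"):
        return "non_real_time"
    return "interactive"

_policy = {"latency_sensitive": "low_latency", "interactive": "low_latency",
           "burst": "bulk", "non_real_time": "deferred"}
_rr = {}   # round-robin position per pool

def route(request):
    pool_name = _policy[classify(request)]
    pool = POOLS[pool_name]
    i = _rr.get(pool_name, 0)
    _rr[pool_name] = (i + 1) % len(pool)   # simple round-robin inside the chosen pool
    return pool[i]

A production controller would also weigh real-time health and performance metrics for each server, but the basic shape — classify by payload or header content, then pick a pool under policy — is the same.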
<urn:uuid:22c3a8ed-15d3-4cb5-957b-7819e9ed6cc4>
CC-MAIN-2022-40
https://www-stage.avinetworks.com/glossary/application-traffic-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00039.warc.gz
en
0.906132
1,123
3.09375
3
The Domain Name System (DNS) server is a server that is specifically used for matching website hostnames (like example.com) to their corresponding Internet Protocol or IP addresses. The DNS server contains a database of public IP addresses and their corresponding domain names. Every device connected to the internet has a unique IP address that helps to identify it, according to the IPv4 or IPv6 protocols. The same goes for web servers that host websites. DNS servers help us avoid memorizing the long numbers in IP addresses (and the even more complex alphanumeric ones in the IPv6 system), as they automatically translate the website names we enter into the browser address bar into these numbers so that the servers can load the right web pages.
Introduction to the Domain Name System
To understand the role of the DNS server, it is important to know about the Domain Name System. The Domain Name System is essentially a phonebook of the internet. Just as a phonebook matches individuals to a phone number, the DNS matches a website name to its corresponding IP address.
What is DNS?
The DNS is a system of records of domain names and IP addresses that allows browsers to find the right IP address that corresponds to a hostname URL entered into them. When we try to access a website, we generally type its domain name, like cdnetworks.com or wired.com or nytimes.com, into the web browser. Web browsers, however, need to know the exact IP addresses to load content for the website. The DNS is what translates the domain names to the IP addresses so that the resources can be loaded from the website's server.
Sometimes, websites can have numerous IP addresses corresponding to a single domain name. For example, large sites like Google will have users querying a server from distant parts of the world. The server that a computer from Singapore tries to query will likely be different from the one a computer from, say, Toronto will try to reach, even if the site name entered in the browser is the same. This is where DNS caching comes in. DNS caching is the process of storing DNS data closer to a requesting client so that the DNS query can be resolved earlier. This avoids the problem of additional queries further down the chain, improves web page load times, and reduces bandwidth consumption. The amount of time that the DNS records are stored in DNS cache is called time to live, or TTL. This period of time is important as it determines how "fresh" the DNS records are and whether they match recent updates to IP addresses. DNS caching can be done at the browser level or at the operating system (OS) level.
- Browser DNS caching: Since web browsers generally store DNS records for a set amount of time, the browser is usually the first place that is checked when a user makes a DNS request. Because the cache lives in the browser, there are fewer steps involved in checking the DNS cache and making the DNS request to an IP address.
- Operating system (OS) level DNS caching: Once a DNS query leaves an end user's machine, the next stop where a match is sought is at the operating system level. A process inside the operating system, called the "stub resolver," checks its own DNS cache to see if it has the record. If not, the query is sent outside the local network to the Internet Service Provider (ISP).
How Does DNS Work?
The DNS is responsible for converting the hostname, what we commonly refer to as the website or web page name, to the IP address.
The act of entering the domain name is referred to as a DNS query, and the process of finding the corresponding IP address is known as DNS resolution. DNS queries can be of three types: recursive query, iterative query, or non-recursive query.
- Recursive query – These are queries where a DNS server has to respond with the requested resource record. If a record cannot be found, the DNS client has to be shown an error message.
- Iterative query – These are queries for which the DNS client will continue to request a response from multiple DNS servers until the best response is found, or an error or timeout occurs. If the DNS server is unable to find a match for the query, it will refer to a DNS server authoritative for a lower level of the domain namespace. This referral address is then queried by the DNS client, and this process continues with additional DNS servers.
- Non-recursive query – These are queries which are resolved by a DNS resolver when the requested resource is available, either due to the server being authoritative or because the resource is already stored in cache.
The Different Types of DNS Server
Once a DNS query is entered, it passes through a few different data center servers before resolution, without any end user interaction.
- DNS recursor: This is a server designed specifically to receive queries from client machines. It tracks down the DNS record and makes additional requests to meet the DNS queries from the client. The number of requests can be decreased with DNS caching, when the requested resources are returned to the recursor early on in the lookup process.
- Root name server: This server does the job of translating the human-friendly host names into computer-friendly IP addresses. The root server accepts the recursor's query and sends it to the TLD nameservers in the next stage, depending on the domain name seen in the query.
- Top Level Domain (TLD) nameserver: The TLD nameservers are responsible for maintaining the information about the domain names. For example, they could contain information about websites ending in ".com" or ".org", or country-level domains like "www.example.com.uk", "www.example.com.us", and others. The TLD nameserver will take the query from the root server and point it to the authoritative DNS nameserver associated with the query's particular domain.
- Authoritative nameserver: In the last step, the authoritative DNS nameserver will return the IP address back to the DNS recursor, which can relay it to the client. This authoritative DNS nameserver is the one at the bottom of the lookup process that holds the DNS records. Think of these as the last stop, or the final authoritative source of truth, in the process.
DNS Lookup vs DNS Resolver
The process by which a DNS server returns a DNS record is called a DNS lookup. It involves the query of the hostname from the web browser to the DNS lookup process on the DNS server and back again. The DNS resolver is the server that deals with the first step in the DNS lookup process and which starts the sequence of steps that end in the URL being translated into the IP address for loading the web pages.
First, the user-entered hostname query travels from the web browser to the internet and is received by the DNS recursive resolver. The recursive DNS server then queries the DNS root server, which responds with the address of the TLD server responsible for storing the domains. The resolver then makes a DNS request to the corresponding domain's TLD and receives the IP address of the domain nameserver.
As a last step, the recursive DNS server queries the domain nameserver and receives the IP address to send back to the web browser. It is after this DNS lookup process is done that the browser can request individual web pages through HTTP requests.
These steps make up a standard DNS lookup process, but they can be shortened with DNS caching. DNS caching allows the storage of the DNS lookup information locally on the browser, the operating system, or a remote DNS infrastructure, which allows some of the steps to be skipped for faster loading.
DNS Server FAQs
What is a DNS server used for?
DNS servers are used to help facilitate communication between humans and computers. The key role of DNS servers is to match website hostnames to their corresponding IPv4 or IPv6 addresses.
How can I find my DNS server?
If ever you need to locate your DNS server, there are a few ways to do it. The first option is via your computer or router. If using Windows 8.1 or 10, go to the Start button in the bottom left corner, type Command Prompt, then ipconfig/all, then hit Enter. You'll then be presented with a ton of information relating to your computer, including a list of DNS servers. If using a Mac, click on System Preferences and then Network instead. Select the specific network connection you want to check, click Advanced, then finally the DNS tab. Here you will see your servers listed.
Alternatively, there are many websites online that offer a free DNS testing service. These services give you a plethora of information pertaining to your DNS server, including the IP address, hostname, and location.
What are the benefits of a DNS server?
There are many benefits to be seen when using a DNS server. The most significant include:
- DNS servers help you locate websites by typing the domain name as opposed to its IP address
- They add an extra layer of security to your network
- Without DNS servers, online transactions would be impossible
- If a website changes its IP address, the DNS server will pick this up and automatically update its database so users are unaffected
- DNS servers are fast at what they do, meaning less downtime for users
What to Do if Your DNS Server Isn't Responding
From time to time DNS servers experience problems, and you may see a message appear informing you that your DNS server is not responding. This could be down to a number of factors, including a poor internet connection, an outdated browser, a power outage on the server's side, or simply erroneous DNS settings. The good news is that there are lots of ways to try and resolve this.
- Switch web browser. If the problem occurs while on Google Chrome, try Firefox or Opera instead.
- Turn off your firewall temporarily. While firewalls are extremely important when it comes to protecting your computer against unwanted DNS attacks, they do have a habit of interfering with your network connection. Once turned off, revisit the page you had problems connecting to. If the website loads OK, then you know the firewall settings need adjusting.
- Clear your DNS settings. The final step to try when experiencing DNS server issues is to clear your DNS cache.
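As a small illustration of the lookup and caching behavior described above, the sketch below resolves a hostname with Python's standard library and keeps the answer in a local cache for a fixed TTL. The 300-second TTL is an assumption made for the example; real resolvers honor the TTL carried in each DNS record rather than a fixed value.

import socket, time

CACHE = {}           # hostname -> (ip_address, expiry_timestamp)
TTL_SECONDS = 300    # assumed fixed TTL for this sketch

def resolve(hostname):
    entry = CACHE.get(hostname)
    if entry and entry[1] > time.time():
        return entry[0]                       # cache hit: no query leaves this code
    ip = socket.gethostbyname(hostname)       # falls through to the OS stub resolver / recursive resolver
    CACHE[hostname] = (ip, time.time() + TTL_SECONDS)
    return ip

print(resolve("example.com"))   # first call performs a lookup
print(resolve("example.com"))   # second call is served from the local cache

The first call walks the chain described above (stub resolver, recursive resolver, root, TLD, authoritative nameserver, unless an intermediate cache answers sooner); the second call never leaves the process, which is the whole point of caching with a TTL.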
<urn:uuid:e0ec965a-e6b7-4411-8d5b-750995060cd3>
CC-MAIN-2022-40
https://www.cdnetworks.com/ko/web-performance-blog/what-is-a-dns-server/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00039.warc.gz
en
0.904894
2,150
3.546875
4
Ransomware attacks are becoming more and more common. Since they are on the rise, you should be doing everything possible to protect yourself. You may not be able to completely avoid them even with the best protection, but you can take other steps to ensure you are not a victim.
Ransomware Attacks on the Rise
No one wants to be the victim of a ransomware attack, but according to data recently released by Kaspersky Lab, the number of people who have been attacked has more than doubled in the last quarter. In Q3 of 2016 alone, there were 821,865 victims of ransomware. This does not include other types of cyber attacks. Not only that, but the number of people who have been attacked has been steadily rising during 2016 and does not show any sign of slowing down.
Ransomware attacks occur when a cyber attacker hacks your system and takes hold of your files. They will not allow you to access your files in any way unless you pay them a "ransom" to release the files back to you. This is not only inconvenient but can also be very dangerous. With attacks happening more frequently, it is best to take steps to better protect yourself.
What You Can Do to Protect Yourself
Even though it is scary how many attacks are happening these days, that does not mean you do not have a way to better protect yourself from them. One of the best ways to protect yourself is by having software installed that will prevent attacks from ransomware and malware while also notifying you of potential threats. You can purchase monthly and yearly packages. If you have protection on your devices, it is much more difficult for someone to target you. Other things you can do to protect yourself include:
If you want some help protecting your computer from ransomware attacks, be sure to contact Interplay at (206) 329-6600 or [email protected]. They can help you find the ideal solutions and protections so you can be safer on the internet.
<urn:uuid:143880a0-c30c-4b07-82f1-bfdaefa18153>
CC-MAIN-2022-40
https://www.interplayit.com/blog/the-rise-of-ransomware-attacks-and-how-you-can-protect-yourself/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00039.warc.gz
en
0.968525
409
2.75
3
We continue with The Apple Data Center FAQ
How Does Apple Power its North Carolina Data Center?
In May 2012 Apple announced that the electricity supporting the iDataCenter in Maiden would either be supplied by renewable energy generated on-site, or by directly purchasing clean, renewable energy generated by local and regional sources. And the company has honored its promises over the years. The company built three solar farms, each valued at around $10 million, as well as a 10-megawatt fuel cell installation, valued at a minimum of $4 million. One solar farm is across the street from the Maiden data center, while another is in Conover and the third is in Claremont. The 200-acre array of photovoltaic solar panels serves as a supplement to the utility power feed from Duke Energy, as do fuel cells from Bloom Energy that use biogas from nearby landfills to generate electricity. Apple says its generation facilities at the Maiden, N.C., site produce enough on-site renewable energy to power the equivalent of 14,000 homes. This map from Apple provides an overview of the Maiden property and the location of its on-site generation facilities.
Apple's fuel cell installation also appears to be the largest such facility dedicated to a U.S. data center. The facility uses methane from nearby landfills, which is transported via a natural gas pipeline system. The raw biogas is cleaned and separated to increase the methane content and remove unwanted components (including sulfide, chlorine, and sulfur) before being injected into the natural gas pipeline. The installation features 24 200-kilowatt Bloom Energy Servers placed on outdoor pads. Bloom Energy is converting a former Chrysler auto assembly plant in Delaware into a manufacturing facility to churn out its Bloom Energy Servers for East Coast customers, including Apple. The Bloom Energy Server is based on solid oxide fuel cell technology that converts fuel to electricity through an electro-chemical reaction, without any combustion. Because the units are housed at the customer premises, they can continue operating during grid outages.
Apple's focus on sustainability extends to the construction methods used in building the North Carolina facility. The Apple data center in Maiden has earned Platinum, the highest level attainable under the LEED (Leadership in Energy and Environmental Design) rating system for energy-efficient buildings. The company used 14 percent recycled materials in its construction process and diverted 93 percent of construction waste from landfills. Apple also sourced 41 percent of purchased materials within 500 miles of the Maiden site, which reduced the environmental impact from trucking materials over long distances. The facility also uses a "free cooling" system that employs water-side economization, in which cool outside air is incorporated into a heat exchanger to supply cold water for the data center cooling systems.
Apple recently announced the completion of a 50MW solar farm in Arizona, which will offset energy consumption by the company's new data center in Mesa, Arizona, currently under construction. The photovoltaic plant in Arizona is the fifth large-scale solar array the company has built for its data centers. Three solar arrays accompany its data center campus in North Carolina, and one accompanies its data centers in Nevada. Apple has also contracted for the output of 130MW of capacity from a solar project in Central California.
Earlier this year, Apple created an energy company called Apple Energy as a subsidiary, which gives it more flexibility to buy and sell energy on the wholesale electricity market.
Has Apple Finally Quieted Criticism from Greenpeace?
After years of criticism from Greenpeace about its dirty footprint, today Apple is actually earning accolades from the environmental group. Here's what Greenpeace says about Apple now: "Apple continues to lead the charge in powering its corner of the internet with renewable energy even as it continues to rapidly expand. All three of its data center expansions announced in the past two years will be powered with renewable energy. Apple is also having a positive impact on pushing major colocation providers to help it maintain progress toward its 100% renewable energy goal."
However, it's taken about four years to silence Greenpeace. The criticism began in April 2011, when the group identified Apple as a leading offender in using energy from "dirty" sources to power its data centers, including coal and nuclear power. The group's finding relied on an estimate by Greenpeace that Apple would consume as much as 100 megawatts of electricity at its North Carolina data center. A Greenpeace report was sharply critical of the presence of Apple, Google, and Facebook in North Carolina, which it labeled the "dirty data triangle" because of their reliance on electricity sourced from coal and nuclear power. "These mega data centers, which will draw from some of the dirtiest generation mixes in the US, highlights the sway of low-cost energy, misplaced tax incentives, and a corresponding lack of commitment to clean energy," Greenpeace wrote.
In April 2012, Greenpeace stepped up its criticism of Apple, issuing a second report, How Clean is Your Cloud?, which received widespread media attention. Greenpeace also staged a protest at Apple's headquarters to draw attention to its critiques of Apple's power sourcing for its data center. Apple was targeted after it had already announced its plans for on-site renewable energy in Maiden, which Greenpeace belittled as enough to supply just 10 percent of the data center's power needs, again citing the 100 megawatt estimate. But Apple responded by disclosing that it will use 20 megawatts of power at full capacity in its North Carolina data center, about one-fifth the amount estimated by Greenpeace. "Our data center in North Carolina will draw about 20 megawatts at full capacity, and we are on track to supply more than 60% of that power on-site from renewable sources including a solar farm and fuel cell installation which will each be the largest of their kind in the country," Apple said in a statement. "We believe this industry-leading project will make Maiden the greenest data center ever built."
After Apple doubled the size of its planned solar array, Greenpeace offered its first public affirmation of Apple's sustainability initiatives. "Apple's announcement today is a great sign that Apple is taking seriously the hundreds of thousands of its customers who have asked for an iCloud powered by clean energy, not dirty coal," said Gary Cook, senior IT analyst at Greenpeace. "Apple's doubling of its solar capacity and investment in local renewable energy are key steps to creating a cleaner iCloud." Cook said Greenpeace would continue to pressure Apple to make a deep commitment to renewable energy. "Apple must adopt a firm siting policy to prioritize renewable energy when it chooses locations for new data centers," he said.
“Only then will customers have confidence that the iCloud will continue to get cleaner as it grows.” Apple has certainly proven its commitment.
<urn:uuid:06047008-41bd-4a4e-a055-cd145857bcab>
CC-MAIN-2022-40
https://www.datacenterknowledge.com/data-center-faqs/apple-data-center-faq-part-2
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00240.warc.gz
en
0.951867
1,426
2.703125
3