Windows Tip Sheet

Do More with More

Where a batch file might seem just a bit over the top, try this nifty trick with the More command.

- By Jeffery Hicks

You're probably familiar with the More command, which you use to page output. But there's another version you may not be familiar with. When you are at a command prompt and type an opening parenthesis, ( , followed by Enter, you will get a special prompt that looks like this:

    More?

At the prompt, type a command. Each new line will allow you to enter another command. Type a closing parenthesis, ), to get back to the main prompt. Once you hit Enter, all of your commands will be executed in sequence.

Try it out. Open a command prompt and type (or simply copy and paste) something like the following at a CMD prompt:

    (
    echo I am a command
    echo So am I
    echo Me too
    echo Same here
    )

All four commands will execute. As you see, this is like creating a batch file -- without the file! I'd say it is also equivalent to creating a script block in PowerShell, except this version is interactive.

Here's another example for you to try:

    (
    echo This is %computername%
    echo It is now %date% %time%
    echo Free space on drive C:
    dir c:\ | find /i "free"
    )

You can use this technique to redirect output from multiple commands to the same file. When you get to the closing parenthesis, add > filename. For example, I can very easily redirect the output of three different DIR commands (the folder names here are just illustrations) to the same file:

    (
    dir c:\scripts
    dir c:\users
    dir c:\windows
    ) > dir-report.txt

Spelling counts, and typing mistakes can't be fixed. There's no way to go back once you've moved to the next line. But if you need to abort what you're doing, use Ctrl+C before you put in the closing parenthesis.

So the next time you have multiple commands you need to run -- perhaps a series of scripts -- you don't need to figure out how to create a batch file. Just do more!

Jeffery Hicks is an IT veteran with over 25 years of experience, much of it spent as an IT infrastructure consultant specializing in Microsoft server technologies with an emphasis on automation and efficiency. He is a multi-year recipient of the Microsoft MVP Award in Windows PowerShell. He works today as an independent author, trainer and consultant. Jeff has written for numerous online sites and print publications, is a contributing editor at Petri.com, and a frequent speaker at technology conferences and user groups.
The accelerated growth of technology raises questions about its impact on climate; this is a concern we cannot ignore. A proper response to the climate crisis requires a deep-rooted commitment, one that promotes sustainable, reusable, energy-efficient services, such as Microsoft Azure's Big Data compute. This service drives innovation to decrease negative environmental effects. As an example, Microsoft's modern global datacenters are an investment in an energy-efficient future, leading the way in energy-efficient data centers.

Leading by example, and demonstrating transparency into how it helps organizations in every sector significantly reduce carbon impacts, is one of the many benefits of migrating your big data workloads to the Microsoft Azure cloud ecosystem. No matter what size your organization is, you can measure the carbon impact of your Microsoft Azure Big Data workload to help make informed, sustainable big data compute cluster provisioning decisions. You might be wondering how exactly you measure this carbon impact. Microsoft has a built-in tool, the Azure sustainability calculator, which estimates your workload's carbon impact in comparison to an on-premises datacenter's carbon emissions.

Energy Efficient Big Data Compute Clusters

Recent third-party studies from industry experts have compared Microsoft Azure Big Data compute with traditional datacenter compute, and the results of those studies are promising. They show that, on average, Microsoft compute is 70 percent more energy efficient than monolithic on-premises data centers. Furthermore, procuring Azure services includes the renewable electricity that powers Microsoft datacenters. When this is taken into account, carbon emissions from Azure Big Data compute clusters are on average 90 percent lower than those of monolithic data centers.
At universities and companies around the globe, there are people plugging away trying to solve the myriad technological challenges of quantum computers. But that doesn't mean practical applications of quantum computing are some futuristic fantasy. Already, quantum technology is trickling into the real world.

One big leap happened earlier this year when security company Lockheed Martin purchased the 128-qubit "D Wave One," which Forbes called the first commercially available quantum computer. (This isn't certain because, as Bristol University's Jeremy O'Brien noted, "If it's that important that you're using this technology, you wouldn't want to advertise.")

D Wave refuses to let the perfect get in the way of the good, commented Geordie Rose, the chief technology officer at D Wave. Even if the prototypical quantum computer doesn't yet exist, D Wave is proof that the tools are in place to build infant quantum computers — and an infant quantum computer can outwit the most advanced conventional computers.

"There is this feeling that quantum computing is this mystical thing," Rose said. "They're not. They're hard to build, but everything is difficult to build. The latest iPhone was really hard to build. So these things are hard to build, but you can do it with a concerted effort."

Communicating in Quantum

Seeing as it's a global security company, the details of how Lockheed Martin will implement the D Wave One are kept secret, as are the exact design specs of D Wave's computer. But that is not the only marriage of quantum technology and commercial enterprise. The Bristol University research team led by O'Brien is working in collaboration with cellphone giant Nokia.

Communications companies will be huge beneficiaries of quantum technology, O'Brien said. The analogy he used (after qualifying that such analogies are inherently flawed) was measuring a piece of paper with a ruler. When we measure a piece of paper, it doesn't matter that "we have to rain photons down onto that piece of paper," he pointed out. The paper's properties are unaffected by the presence of an observer. Not so in the quantum world.

"Because it's a quantum mechanical system, any information that a third party extracts is detectable," O'Brien said. "What this means is you and I could set up a communication link where we could guarantee, by the laws of physics, that the information isn't being disturbed."

This is important because there will inevitably be more and more information transmitted electronically, he explained. "In five years' time, it will be quite conceivable that your phone will be telling your doctor you have high blood pressure or telling your bank that you're buying a new home. We're going to be transferring more and more information, and we need to make sure it's protected."

Like 2 Protons in an Atom

More and more, quantum technology is breaking free from the shackles of the laboratory and diffusing into the commercial market — be it in the name of national security or telecommunications. And this figures to be a self-perpetuating phenomenon: As companies invest in this technology, the technology will get better, more companies will invest, and so on.

This in itself is a departure from classical computing, noted IBM's Mark Ketchen. "It used to be, back in the 80s or so, if you looked at the technology we developed on computers, we basically said, 'We gotta have it all. We gotta own this technology. We're going to do all this innovation and we're not going to let people know what we're up to too much,'" he recalled.

Now IBM is collaborating with a host of partners — from Princeton to the University of Wisconsin to BBN Technologies, said Ketchen. For its part, D Wave's investors include Goldman Sachs, Draper Fisher Jurvetson, the Business Development Bank of Canada, International Investment Underwriters and more.

"That whole notion of developing technology in isolation is gone," Ketchen said. "Cost is one factor, but I think another factor is manpower. There is no way one company or one institution is going to have the brainpower for this, no matter what. So we have a strong team ourselves, and we're very eager to work directly with other institutions, and in some cases other companies."

The Future Not Yet Decoded

Exactly what this cooperation produces, and when it does so, is anyone's guess. For the various groups and nations involved in this enterprise — from private companies to universities, from Santa Barbara to Bristol — there is a sense among those in the field that the exact impact of quantum computing is unknown. To invoke another analogy: Quantum computers are today what telephones were in the 1950s. The basic technology is known, but where that technology goes, and how it gets there, is still something of a mystery.

"As more and more smart people start playing with it, the number of things a quantum computer can do will grow quite rapidly," Ketchen observed. "We know, based on history, that when such a machine actually exists, there will be a whole new range of things to do that nobody is even thinking about right now. There are things that are inaccessible now that will be accessible when it exists.

"A whole new world is going to open up."

That last part, it seems, has better than a non-zero probability of coming true.
With the help of breathtaking technological advancements, the majority of our communications are conducted online. That is why encryption techniques are more important than ever. Keep reading to learn more about encryption techniques and identity-based encryption.

We use electronic communications very often in this day and age. We send files, videos and pictures for business or communication purposes. We use online services like social networks and messenger apps to keep in touch. We send millions of e-mails and download tons of data. As a result, we put ourselves in a very vulnerable position. Hackers, cyber criminals and attackers can do us harm if they get hold of our electronic communications. That is why we use multiple cryptography solutions to ensure the safety and integrity of our communications. As one of the most popular encryption methods, identity-based encryption offers thorough protection for both individuals and organizations. In this article, we will take a closer look at what identity-based encryption is and why it is necessary for your safety.

What is identity-based encryption?

Identity-based encryption is a public-key scheme in which any string is a valid public key. This means that information such as an e-mail address, or even a date, can serve as a public key. As a result, if the sender of an e-mail has access to the public parameters of the system, they can encrypt a message using a text value such as the e-mail address of the intended recipient.

Identity-based encryption is often considered an important primitive of ID-based cryptography, and it is a sub-type of public-key encryption. ID-based encryption was first introduced by Adi Shamir in the early 80s. He offered identity-based signatures, yet he was not able to offer a solution to the identity-based encryption problem itself. It was only in 2001 that two schemes, the pairing-based Boneh-Franklin scheme and Cocks's encryption scheme, solved the problem posed by identity-based encryption.

How does identity-based encryption work?

Identity-based encryption has a rather straightforward working principle. It allows both the sender and receiver to derive a public key from a text value based on a known identity. For identity-based encryption to work, a trusted third party is needed. This third party is called the Private Key Generator and, as its name suggests, it creates private keys.

First, the Private Key Generator publishes a master public key and keeps the corresponding master private key (also known as the master key). Senders and receivers can then compute the public key corresponding to an identity by combining the identity value with the master public key. To get hold of the corresponding private key, the authorized party needs to contact the Private Key Generator. This system allows the parties to encrypt their messages, or verify each other's signatures, without needing any distribution of keys prior to sending the intended message.
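To make the Private Key Generator's role concrete, here is a minimal, runnable sketch in Python of the Extract step only. The master key value and the identity are invented for illustration, and the HMAC-based derivation is a toy: real identity-based encryption (such as the pairing-based Boneh-Franklin scheme) lets anyone encrypt using just the public parameters, which this sketch does not attempt to show.

    import hashlib
    import hmac

    # Held only by the trusted Private Key Generator (illustrative value).
    MASTER_KEY = b"pkg-master-secret"

    def extract_private_key(identity: str) -> bytes:
        """Extract step: derive the private key for an identity string.

        Any string can act as the identity, e.g. an e-mail address or a date.
        """
        return hmac.new(MASTER_KEY, identity.encode("utf-8"), hashlib.sha256).digest()

    # The e-mail address itself serves as the public identifier.
    alice_key = extract_private_key("alice@example.com")
    print(alice_key.hex())

The point of the sketch is the key-management shape of the system: per-identity private keys are derived on demand from a single master secret, so no key needs to be exchanged before a message is sent.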
By Adam Petrovsky, GovEd Practice Leader, Logicalis US

For many government and educational organizations, public meetings are a very important part of the democratic process. Citizens with an interest in observing or participating in public meetings often attend school board meetings, city or county council meetings, zoning commission meetings or zoning board meetings.

As technology has advanced, however, public meetings have adopted a handful of common tools. Solutions such as live broadcasting via cable networks or the web, audio reinforcement with better microphone and PA systems, laptop use, and projectors displaying presentations are regularly seen at public meeting events. Unfortunately, most of these technologies represent only small changes from an analog solution to a digital one – think writing on a flip chart vs. projecting a PowerPoint presentation on a screen. While these technologies do increase efficiency and productivity, their use has not resulted in true transformative change.

Recently, however, a number of digital government communications and collaboration solutions have emerged that can dramatically change how public meetings operate, and give both the governing bodies and the public a better overall experience. Using the right communications and collaboration tools, public meetings can be transformed in a number of important ways:

Remote Video Participation

Remote video participation allows citizens or board members to leverage immersive video conferencing technology to participate from a remote location. Remote locations could include additional public sites to avoid meeting congestion, or individual conferencing units assigned to board members. These solutions are integrated into the overall video broadcast and provide tools for meeting management.

Remote Expert Testimony

Remote expert testimony allows experts to comment and participate in public meetings without the travel and expense that would typically be incurred by the governing body. Few governing bodies have the capabilities or funds to bring these types of experts into a public forum, but by using communications and collaboration solutions commonly available to businesses today, governments can cost-effectively bring experts to a public meeting via virtual means.

Virtual Public Meetings

Virtual public meetings allow a fully functional meeting environment virtually in the cloud. Special configurations for public meetings provide unique functionality and full control by the governing body, but allow for much greater citizen exposure and engagement. Citizens simply register for a meeting ahead of time and are assigned identifiers, and meeting managers can control the look and feel of the virtual meetings. Live video, presentations and Q&A sessions can all be integrated, documented and recorded for later viewing.

By integrating translation services into either remote video participation or a virtual public meeting, public meetings can become more relevant to community members who do not speak English.

Clearly, each public meeting forum is different and requires a custom approach based on the requirements of the governing body. As a result, government entities interested in transforming the way they currently hold public meetings should connect with an experienced solution provider like Logicalis that has a dedicated government practice and deep expertise in the design, planning and integration of these types of technologies into public meeting environments.

Want to learn more? Explore Logicalis' GovEd website, then drill down into the kinds of audio visual services and communication and collaboration solutions Logicalis offers to inform, increase productivity and generally delight your audience. You can also read Logicalis' recent GovEd news here.
Call routing is the process of forwarding communications from an originating source to a desired endpoint. In order for a call or message to travel from the sender to the intended recipient, it must navigate through a series of interconnected networks. Optimizing the route that a call takes to reach the recipient ensures better call quality, stronger connections, and fewer dropped calls.

Call Routing History

The first commercial telephones functioned something like walkie-talkies, with each device connected to one partnered device in another location. If that sounds impractical – it was! Businesses needed a separate telephone system for each location they wanted to reach, and eventually local telephone exchange networks prevailed.

In the past, private telephones were wired to a central exchange located in that town or neighborhood, and a human switchboard operator worked to manually route calls to their destinations. Under this system, local telephone exchanges were connected to other telephone exchanges by a system of trunks. Eventually these trunks expanded to connect cities, states, countries, and continents. The public switched telephone network, or PSTN, was born.

Most public switchboard operators became obsolete in the 1970s and 80s, when direct dialing became possible for long distance calls. Modern call routing is accomplished digitally, and VoIP (voice over internet protocol) makes it possible to route communications over a data network, bypassing the PSTN entirely. As the technology evolves, the underlying principle of call routing remains the same: network owners actively monitor and troubleshoot routing to ensure that calls reach their final destination, wherever that may be.

How does Call Routing Work?

Though the technology has changed a great deal since the 1870s, the exchange principle remains the basis of call routing today. Users can now dial international as well as local recipients directly, but networks are still interconnected by a system of hierarchical trunks. Like taking a connecting flight to an international travel hub, a call may need to navigate through a series of networks to reach its destination. Long distance calls are likely to pass through a number of carrier networks, and, not unlike international travel, some routes are more efficient than others. Carriers oversee all of these transfers and find the optimal route for calls and messages so that users experience clear connections and fewer dropped calls.

Benefits of Call Routing

Calls made to one main telephone number can be queued and seamlessly transferred to the appropriate department or branch. Call routing also makes it possible to distribute calls based on time zone, geographic location, and language preference. Quickly connecting callers with an easy-to-navigate automated menu or a helpful human is a proven way to increase customer satisfaction and conversion.

Types of Call Routing

For businesses, call routing can also refer to the technology that makes it possible to queue and distribute incoming calls. Automatic call distribution (ACD) relies on preset criteria to queue calls in a way that benefits your business. These criteria can be set to allow for time-based routing, so that after-hours calls are directed to a branch in a different time zone where customer support is still available; skills-based routing, so that calls are routed to the agent best qualified to respond to a specific customer's needs; or they may be set to evenly distribute calls amongst a team of agents. Within these very broad routing schemes, there are many ways to specify how, and under what conditions, a call is transferred (see the sketch at the end of this piece). Calls can also be routed to IVR systems that can help callers who do not require a conversation with a live agent.

How is Bandwidth Involved with Call Routing?

What are the Benefits of Bandwidth's Call Routing?

Bandwidth owns and operates its own network, which allows us to actively monitor and optimize call routing. Customers get higher quality calls with fewer drops and clearer connections.
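To make the ACD criteria described under "Types of Call Routing" concrete, here is a minimal sketch in Python. The business hours, agent roster, and skill names are invented for illustration; real ACD platforms express these rules in platform configuration rather than application code.

    from datetime import datetime, timezone

    # Hypothetical agent roster for illustration only.
    AGENTS = [
        {"name": "Ana", "skills": {"billing", "spanish"}, "active_calls": 2},
        {"name": "Ben", "skills": {"billing"}, "active_calls": 0},
        {"name": "Chi", "skills": {"tech"}, "active_calls": 1},
    ]

    def route_call(required_skill):
        """Combine time-based, skills-based, and even-distribution criteria."""
        now = datetime.now(timezone.utc)
        # Time-based routing: outside business hours, hand off to another branch.
        if not 9 <= now.hour < 17:
            return "after-hours-branch"
        # Skills-based routing: consider only qualified agents.
        qualified = [a for a in AGENTS if required_skill in a["skills"]]
        if not qualified:
            return "ivr-menu"  # fall back to the IVR mentioned above
        # Even distribution: pick the least busy qualified agent.
        return min(qualified, key=lambda a: a["active_calls"])["name"]

    print(route_call("billing"))  # -> "Ben" during business hours

The design point is that the three criteria compose: time zone decides which pool of agents is eligible, skills filter that pool, and load balancing picks within it.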
Best Cyber Security Books To Read In 2022

I am happy to see that my book Cybersecurity: The Beginner's Guide is still at the top of the lists … Thank you 🙂

Best Cyber Security Books To Learn

Cyber security is not an entry-level field, which means there are a few fundamental skills and areas of knowledge you must be familiar with before diving into it. You will need some mastery of technical and non-technical skills to excel in this field. To be a cyber security expert, truth be told, you must be a jack of all trades. It helps to have a good grasp of networking, system administration, command-line coding, forensics, reverse engineering, cyber operations, audit and compliance, and digital security. This article will explain the cyber security books you will need to become a competent cyber security expert.

What do you study in cyber security?

Cyber security learning involves gaining theoretical knowledge from cyber security books in order to perform practical endeavors like penetration testing. Let us now look more deeply at the things you will need to know before taking up cyber security as a career.

For beginners, it is an added advantage to be familiar with basic computer architecture and operating systems. Beginners should be proactive in learning Linux commands, since they are among the basic building blocks of studying cyber security.

Like any other field, cyber security is a broad discipline, and beginners may need to focus on a specific subfield before becoming all-rounded experts. Remember, you cannot learn everything you need to know because the field is continually growing, and therefore learning does not have a terminal point. Some people might decide to explore programming, while others may emphasize digital forensics or security policies, and others may explore the broad aspects of cyber security. For beginners, the following are the main areas to be explored.

- Basic Data Analysis
- Basic Scripting or Introductory Programming
- Cyber Defense
- Cyber Threats
- Fundamental Security Design Principles
- Information Assurance Fundamentals
- Introduction to Cryptography
- Networking Concepts
- System Administration

Let us now delve into some of the cyber security books and e-books that will be resourceful in your study. Cybersecurity For Dummies is a free e-book that delivers fast, convenient, and easily readable content. It describes what people need to know in order to defend themselves and their organization against malicious attacks, and offers steps that users can take to protect themselves at the workplace or at home.

Top Rated Cyber Security Books of February 2022

1) Cybersecurity: The Beginner's Guide: A comprehensive guide to getting started in cybersecurity — Rating is 5 out of 5
2) Cybersecurity Essentials — Rating is 4.9 out of 5
3) Cybersecurity For Dummies — Rating is 4.8 out of 5

Cybersecurity: The Beginner's Guide

More than 400 pages of a comprehensive guide to getting started in cyber, plus 100 pages of advice from cybersecurity experts. You will find everything you need to excel in your cybersecurity career, or to help your organization close the cyber talent gap. Thank you very much to all the experts who helped me make this happen. Together we can.

- Ann Johnson (Microsoft)
- Dr Emre Eren Korkmaz (University of Oxford)
- Girard Moussa (SAP)
- Martin Hale (Charles Sturt University)
- Dr. Ivica Simonovski
- Onur Ceran (Turkish Police Force)

To find out more about my books: Cybersecurity The Beginner's Guide

It's not a secret that there is a huge talent gap in the cybersecurity industry. Everyone is talking about it, including the prestigious Forbes Magazine, Tech Republic, CSO Online, DarkReading, and SC Magazine, among many others. Additionally, Fortune CEOs like Satya Nadella, McAfee's CEO Chris Young, and Cisco's CIO Colin Seward, along with organizations like ISSA and research firms like Gartner, shine a light on it from time to time.

This book puts together all the possible information with regard to cybersecurity: why you should choose it, the need for cybersecurity, and how you can be part of it and fill the cybersecurity talent gap bit by bit. Starting with the essential understanding of security and its needs, we will move on to changes in the security domain and how artificial intelligence and machine learning are helping to secure systems. Later, this book will walk you through all the skills and tools that everyone who wants to work as security personnel needs to be aware of. Then, this book will teach readers how to think like an attacker and explore some advanced security methodologies. Lastly, this book will dive deep into how to build practice labs, explore real-world use cases, and get acquainted with various security certifications. By the end of this book, readers will be well-versed in the security domain and will be capable of making the right choices in the cybersecurity field.

Things you will learn:

- Get an overview of what cybersecurity is, learn about the different faces of cybersecurity and identify the domain that suits you best
- Plan your transition into cybersecurity in an efficient and effective way
- Learn how to build upon your existing skills and experience in order to prepare for your career in cybersecurity

To order the book:

Amazon: Order here
Google Books: Order here
Packt Publishing: Order here

- ISBN: 978 1 78588 533 2
- ASIN: 1789616190
- ISBN-13: 978-1789616194

Publisher: Packt Publishing
Date: May 24, 2019
Number of Pages: 390

To my family, my real friends, my mentors: I cannot thank you enough. Yes, I am a doctor, and yes, I lead a big team, and yes, I have a career; but none of those would be the case without YOU. I would like to thank everyone who gave me feedback for being honest, allowing me to focus on my goals, to ignore people who gave negative vibes, and to work hard with a positive attitude and always look forward.

Do you want to have a career in cybersecurity? Do you want some guidance on how to switch to cyber? Then this book is right for you, where you will learn everything you need to start your cyber career from the authors as well as industry experts like Ann Johnson, Robin Wright, Yuri Diogenes, Dr. Ivica Simonovski, Onur Ceran, Judd Wywbourn, Girard Moussa and many more.

Coming soon via Packt, Amazon.com and selected book retailers

#BeginWithCybersecurity

For more information:
Written by: Jay H.

A botnet is an extensive network of PCs infected with malware and controlled remotely by an attacker, known as the "bot-herder." Think of a robot army under the control of its master. Each machine under the control of the bot-herder is known as a bot. From the attacker's centralized position, they can easily command every computer in the botnet to attack simultaneously. Depending on the botnet's size, anywhere from thousands to millions of bots can conduct these attacks.

How Do Botnets Work?

Before hackers can execute their attacks, they first need to build their botnet army. To develop a botnet, attackers must exploit a vulnerability to gain access to a victim's device. Malicious actors usually accomplish this through vulnerabilities in software or websites, or by tricking victims into installing malware on their devices. Once your device becomes infected, your computer, phone, or tablet is under the control of the botnet's creator, who will usually use your machine to carry out malicious actions. Common tasks carried out by botnets include:

- Sending out masses of email spam, often including malware, to millions of users.
- Orchestrating distributed denial-of-service (DDoS) attacks to take websites down by flooding them with traffic.
- Reading and stealing the victims' personal data.

How To Protect Yourself From Botnets

As with most malware, the best way to protect yourself from botnets is by practicing good online hygiene. Proper measures to follow include:

- Avoid clicking on suspicious links and downloads.
- Install a reputable antivirus such as Malwarebytes.
- Always update your device's operating system and applications when the manufacturer issues an update.
- Use a firewall when browsing the Internet.
- Work alongside a trusted managed IT service provider such as Design2Web IT to protect your business from cyberthreats.

Generally, attackers look to pick the lowest-hanging fruit. If you practice proper digital hygiene, you will be that much safer from attacks. Contact us today if you would like more information on keeping your business safe from cyber threats.
Following the news that Thames Water is reviewing the water usage of data centre infrastructure within its region, analysts from data and analytics company GlobalData have highlighted the pressures faced by data centre operators, water companies, and local councils as a result of climate change. While climate change is driving data centres toward greater energy and cooling efficiency, innovations such as water cooling could potentially be viewed as incompatible with a future in which the global drought conditions we have witnessed throughout the summer of 2022 become a more regular occurrence.

David Bicknell, principal analyst in the thematic intelligence team at GlobalData, said: "We have reached an environmental crunch point in the resources needed to run data centres. Switching to liquid cooling can cut a data centre's electricity usage, but water is an increasingly scarce resource in drought-stricken parts of Europe and the US.

"With the UK experiencing its driest summer for 50 years – and water companies failing to reduce leaks – operators hoping to use 25 litres of drinking water an hour to cool data centres as a cheaper alternative to energy-guzzling refrigeration systems are finding their options running dry. Cleaning up rain or river water is more expensive for operators and will require an environmental license. Yet, using that water may itself reduce the nearby water table.

"Data centres create relatively few jobs, so it is no wonder local council members are starting to object to using local land and environmental resources for data centre development. In early 2022, South Dublin County Council passed a motion to prevent further local data centre development until 2028 as part of its County Development Plan. It is unlikely to be the last organisation to take such a decision."

Chris Drake, principal analyst of data center technologies at GlobalData, added: "In recent years, so-called hyperscale data centres have managed to achieve high levels of energy efficiency thanks to the use of energy efficient designs, modern cooling systems, and a reliance on renewable energy. However, as existing data centres are expanded and new ones built, often in key hub locations, this puts mounting pressure on finite land, energy and water resources.

"With many parts of the world experiencing prolonged periods of drought and the likelihood of future recurring drought, this will attract growing scrutiny and criticism of the way data centres consume water and encourage pressure for new restrictions to be introduced. Although switching to alternative cooling systems could, in many cases, help address existing pressures, switching to alternative technologies rarely happens overnight."
What is Cloud Computing in Healthcare?

Cloud computing has become a useful tool for many organizations, including those in healthcare.

Cloud computing has changed the way that countless organizations conduct business, and that includes those in the healthcare industry. The cloud shift is making it easier for doctors to help their patients, offering easy access to intelligent software and reducing labor and maintenance costs. But cloud computing is a complex topic to take in. If you've been left asking yourself "What is cloud computing in healthcare?", here's everything you need to know.

What Is Cloud Computing?

Cloud computing provides instant access to additional computing power and data storage space hosted in large "clouds." These clouds are distributed across multiple data centers, which consist of the actual hardware and infrastructure that cloud computing uses. End users can communicate back and forth between on-prem endpoints and data centers to store, retrieve, and process data. Since its inception, cloud computing has been used for everything from software development to data recovery, and it's becoming an increasingly common way to facilitate business.

What Is Cloud Computing in Healthcare?

In healthcare, cloud computing mostly relates to the management of patient information and access to software-as-a-service (SaaS) applications. Many healthcare organizations are switching to cloud computing solutions, or are currently using them, because they reduce the amount of time and effort it takes to catalog, organize, and retrieve patient records, while also enabling easier collaboration between patients and care providers. Cloud computing also offers robust software and additional computing power, which provides better access to patient portals and web applications.

Additionally, cloud computing has helped further clinical research. Healthcare organizations can use cloud computing for Big Data analytics operations, analyzing information from thousands of patient records to identify correlations between datasets and develop predictive models using data mining techniques. Using such a predictive model enhances decision support systems and therapeutic strategies, improving the patient experience.

Streamlining access to documents and improving computational power can greatly benefit healthcare workers and patients, but there are also financial incentives for the broader organization. For example, cloud computing lowers infrastructure costs because it doesn't rely on local servers. There are also fewer maintenance costs inherent in cloud computing because the framework can be updated automatically. The lack of local servers also means that healthcare organizations can have a lower headcount within development and IT departments. IT workers alone have a median salary of $97,430 in the US, so the cost savings can become significant quickly.

Finally, cloud computing is easily scalable and flexible. The popularity of healthcare cloud computing options has surged since the COVID-19 pandemic, largely because of how scalable the technology is, as organizations can increase or decrease their cloud processing power as needed. The benefits are immediate, as pivoting to cloud computing enables local governments to partner with healthcare providers and use mobile devices for contact-tracing operations.

Essentially, the rise of cloud computing in healthcare means that doctors have more time to spend with patients, are capable of providing more accurate care, and can lean on machine learning and AI to reinforce their decision-making processes, all while reducing administrative and IT costs. It's these improvements to the overall system that make cloud computing and healthcare such lucrative partners.

Ensuring PCI and HIPAA Compliance

Organizations will need to ensure that they are meeting compliance guidelines for PCI and HIPAA as they transition from on-prem to cloud solutions. It's understandable that marrying healthcare and cloud computing may cause concerns about data privacy and security issues during the transition period in some organizations, specifically regarding hefty fines should any vulnerabilities violate PCI and HIPAA guidelines. After all, creating an infrastructure capable of meeting legal requirements from scratch presents a logistical challenge — one that's prone to human error.

However, healthcare cloud computing infrastructure no longer has to be created from scratch. Instead, DevSecOps cloud automation services have made it far easier for organizations to transition from on-prem to cloud services while ensuring that compliance guidelines are met. Most DevSecOps automation service solutions are low-code/no-code, meaning that they don't require large teams of developers to successfully transition to the cloud infrastructure. This further reduces costs and shortens the development time required to make the leap.

Cloud automation services that come with PCI and HIPAA-compliant code baked in from the onset mean developers don't have to worry about recreating the framework required to ensure that compliance guidelines are met. This also reduces the risk of human error causing security vulnerabilities, which is currently responsible for 52% of security breaches — making it the most common cause. For more information on streamlining PCI and HIPAA compliance in cloud computing, read our whitepaper.

Finding a Cloud Automation Partner

There's a wide range of potential partners for cloud automation services. For organizations that want to ensure they're meeting PCI and HIPAA guidelines while maintaining a fast, cost-efficient transition to cloud computing, DuploCloud is here to help. Offering a no-code/low-code solution purpose-built to help organizations ensure compliance with industry best practices, DuploCloud's DevSecOps automation platform creates a frictionless process that allows you to quickly and easily provision your application in the cloud in a secure and compliant manner. With built-in security and compliance considerations, your healthcare organization can rest assured that you're meeting HIPAA and PCI guidelines. Further, DuploCloud can reduce the time and cost of cloud migration by 80% while ensuring that your platform stays up to date with as small a developer and IT headcount as possible. Interested in learning more? Contact us today.
- How many network hops to resolve this domain?
- How is this system connected to the Internet?

A traceroute gives you a detailed log of all the network hops between your users and your target system. Traceroutes show you each sequential hop and the RTT (round trip time) for each successful hop. You can use traceroutes to quickly identify the source(s) of latency.

What are traceroutes used for?

Traceroutes are used to track the transit delays of packets across a network, whereas a ping only tells you the total RTT between a user and a system. A traceroute hop will fail if two of the three packets sent to it are lost, which likely means that there is a faulty configuration somewhere.

1. Specify the Host/IP

You can run a traceroute on the same page where you would create a Sonar check. Just specify the host or IP and click "Run Traceroute".

2. Select Location

Choose which monitoring node you want to run your traceroute from.

3. View Results
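The Sonar page handles this for you, but if you want the same hop-by-hop RTT data from a script, here is a minimal sketch in Python. It assumes a Unix-like system with the standard traceroute binary on the PATH (on Windows the equivalent command is tracert, whose output format differs) and is not part of Constellix's tooling.

    import re
    import subprocess

    def traceroute_rtts(host):
        """Run the system traceroute and print each hop's round-trip times."""
        result = subprocess.run(
            ["traceroute", "-n", host],   # -n skips reverse DNS for speed
            capture_output=True, text=True, check=True,
        )
        for line in result.stdout.splitlines()[1:]:  # skip the header line
            fields = line.split()
            if not fields:
                continue
            rtts = re.findall(r"([\d.]+) ms", line)  # up to three probes per hop
            dest = fields[1] if len(fields) > 1 else "?"
            print(f"hop {fields[0]:>2} via {dest}: {rtts if rtts else 'timed out'}")

    traceroute_rtts("example.com")

A hop whose probes all time out prints as "timed out", which is exactly the kind of hop worth investigating when you are chasing latency or drops.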
Now that IBM and Intel have both created chips using 45nm process technology, it's clear chip technology will continue at least a little longer on the path predicted by Moore's Law, with ever-tinier creations coming out every two years or so.

High-k metals, in particular, appear to help solve one of the biggest problems that have dogged chip designers and manufacturers in recent years: power leakage. While silicon historically has resulted in more and more heat loss as transistors got tinier and tinier, high-k metals used in transistor gate dielectrics appear to significantly reduce the problem.

The result? A win-win for both manufacturers and consumers. Transistors will continue shrinking, at least in the near term, and many of the devices that use them will begin exploding with functionality.

The Incredible Shrinking Geometry

"Geometries are shrinking," Stephan Ohr, research director for analog semiconductors at Gartner Group, told TechNewsWorld. "We're getting close to Angstrom, or atomic, levels. Right now a lot of work is being done at 65nm, and road maps take us down as far as 22nm. You can use this to build New York City on the head of a pin — and that's barely an exaggeration."

While the first integrated circuit was a 4-transistor device, Intel's new generation of chips will have dual processing cores and 535 million transistors, Ohr pointed out.

"We're looking in a couple years to put 16 to 32 processors on a single chip," Jim McGregor, research director and principal analyst for In-Stat, told TechNewsWorld. "Imagine taking a rack of supercomputers and shrinking it down to the size of a PC. We face some roadblocks, but over the next 10 years the semiconductor industry will continue its march toward innovation. I don't see that stopping."

Of course, the goal is not to make devices themselves smaller, Ohr noted. Rather, it's to pack more transistors into one consistently sized chip.

Same Space, More Functions

Essentially, when transistors get smaller, that means they occupy a smaller proportion of the space allotted to them. "You have an increased transistor budget," Roger Kay, president of Endpoint Technologies, told TechNewsWorld. "So the question becomes, what can you do with that budget?"

What's happening, Kay said, is that manufacturers can add more modules to the device in question, allowing it to increase its functionality without getting any bigger. "What you get is increased performance at reduced cost," McGregor explained. "We keep increasing the amount of stuff we can put on a single chip, and consumer electronics is the area with the biggest benefit."

The Phone Factor

Cell phones, or smartphones, are one of the best examples. "As you get a higher transistor budget, your cell phone can also be an MP3 player, a personal information manager, a video player. We're seeing a piling-on of functionality. Today, you can pretty much have whatever you want," Kay stated.

"It's like a cake," he continued. "The cost is in the mixing and baking. If you can get more transistors in for the same mixing and baking, in some sense those are free transistors. Once you've spent money on design and intellectual property, the incremental cost is essentially zero." The limit, then, is just complexity — when there are too many functions, or too much for users to learn and get value from.

The addition of video capabilities to consumer devices is one that Kay believes will become widespread, and it's already being seen in smartphones. The same goes for the addition of music capabilities. "Traditionally, music capabilities were too hot, too big and needed too many circuits," said Kay.

Another, more challenging addition is voice recognition, he added. "The best voice recognition software is very large, with lots of analysis modules. Being able to solve the voice problem well takes lots of transistors, and integrating it into a smaller device is a sort of a Holy Grail." The result, though, would enable cell phone users to use voice rather than keypad commands to operate their device.

Home Consoles, Diet Advisors

"Where Intel wants to take this is home media consoles," Ohr declared. In that scenario, your home PC becomes your telephone with VoIP, videoconferencing and multimedia, in addition to Internet access with shopping, browsing and e-mail. More transistors, again, are what make it possible.

Wearable computers are another realm Ohr sees being enabled by the increasingly tiny transistors. Among their capabilities might be monitoring of medical devices as well as the ability to read and interpret ingredient labels in the grocery store, for example. "The more transistors you have, the easier and speedier the decision is to automate," he explained. The result could be assistance in deciding which products on the shelves best match your particular diet.

For the future, open questions include how to power all these capabilities. "The power consumption is not trivial," Ohr pointed out. "Regulators in battery chargers are among the least efficient devices out there, but they are used because they're the lowest cost."

"Power consumption is a killer," McGregor added. "A primary goal is reducing it. Battery technology is not keeping up with the rest of the technology, and that's a critical challenge."

Perhaps even more pressing, though, is the question of interoperability. "The biggest challenges rely not on silicon but on software," McGregor claimed. "We don't have the software to make things interoperable. Those are issues we need to start addressing with standards and a better software environment. We need to start thinking more holistically about interoperability and the fit to consumer needs."

In short, the market is changing rapidly. Instead of technology pushing the market, consumers are pushing technology, McGregor said. Before long, "we're going to see this true convergence device that can do anything in the palm of your hand," he added, "and whatever form factor you choose is up to you."
Hacking Screen Time – How Kids Are Getting Around Parental Controls

Knowing the hacks kids are using is half the battle.

There is no shortage of technology devices used in our homes. Everything from iPads, laptops, smart TVs, iPhones, and watches abounds. However, if there are not parental controls on your entire inventory of digital devices, then kids can bypass one option for another, thus negating limitations on an individual device.

In addition to avoiding limits by switching devices, there are a number of ways parental controls can be hacked. With the launch of iOS 16, Apple is making Screen Time and Family Sharing setup easier for parents. In the meantime, here are some of the commonly used methods to circumvent parental controls:

• Changing Time Zones. Setting devices to an alternate time zone can fool the device into granting more time. Although Apple was supposed to have fixed this in iOS 15, this trick still works on devices running an older iOS.

• Tap For More Time Feature. Once the setup is complete, if "Block at Downtime" is not toggled on, kids can tap "Ignore Limit" and keep screen time going.

• Redownloading Apps. This was corrected in the newer iOS, but it is worth knowing. Once a time limit is reached on an app, one can redownload it without parental approval. This causes the app to remain accessible until the downtime is next scheduled. Adjust app permissions to prevent this workaround by going to Settings, then Screen Time > Content & Privacy > iTunes & App Store Purchases > Deleting Apps. Set that to Don't Allow.

• Recording Your Password. When your child hands you their phone to type in your parental password, beware. By secretly turning on the "Screen Record" feature before handing the phone over, kids are able to see exactly what the password is. Similarly, we've heard of some kids who are able to guess the parent's password by looking for smudges on the keypad.

Even more ways parents can be undermined by technology – and solutions.

Aside from iPhones and iPads, there are a number of other devices and methods kids use to bypass parental controls and gain access to digital resources of all types.

• Using Tech In Offline Mode. This is a relatively easy hack. By switching the device to offline mode, kids can continue to play games like Minecraft or other offline games.

• Hacking The Family Router. Routers equipped with parental controls typically have a standard password that can be Googled. If the parents have not changed it, kids can log in, set up a second admin account, and create an alternate SSID (service set identifier), which is the name of the Wi-Fi network. They then connect their device to this new Wi-Fi, while parents are none the wiser. A more common method is to reset the router to the original settings, sans parental controls and restricted Wi-Fi. To prevent this, Amazon sells a lock box for home safety that guards against tampering.

• Factory Reset. As a last resort, your teen might opt to completely wipe their phone to factory settings. If they have backed up their data to the cloud, this means they can reload all the original content after the reset, minus the parental controls. Checking the status of parental controls and monitoring usage will alert you to any inconsistencies.

• Using a VPN. The irony here is that, by design, a virtual private network helps prevent unwanted Big Data collectors and advertisers from gaining access to your family's information. Alternatively, teens can use this same technology to breach parental controls. Many of these VPN apps come at the expense of your child's data and effectively hide all their online actions. Here's how to check for a VPN app:

• Look for a VPN app icon.
• Search for a VPN app in the phone's search bar.
• Check the App Store by searching for VPN and see if any have been previously downloaded.
• Check for VPN in the upper left corner near the cell signal.

It is always a good idea to prevent unwanted app downloads and installations by enabling the "ask permission" feature. Similarly, you can set up the app store with a parent ID and password.

As long as there are technology devices, we will have children and teens who want to use them, and they'll likely find ways to bypass rules. This is why we recommend staying up-to-date with the problems associated with overusing media and unfiltered online access. Talk to your children about how big tech uses data for profit. Addictive gaming and toxic social media undermine day-to-day lives by hijacking kids' time and altering their thought processes. By giving them the tools and understanding to make good choices, they'll grow into adults who have a healthy relationship with the digital space.

TIPS & TRICKS

Encourage a technology detox. Is trying to outwit your kids too cumbersome? Maybe it is time to take the phone away. Do not allow kids to keep technology devices in their room. Establish healthy habits with technology early. Password protect all devices.

Our services are engineered to meet the specific objectives of each client. That starts with having the right people, who are experts in their field, to develop solutions that support our clients. Thoughtful solutions, not quick fixes.
Debian and Ubuntu are distributions that lend themselves naturally to comparison. Not only is Ubuntu a continuing fork of Debian, but many of its developers also work on Debian. Even more important, you sometimes hear the suggestion that Ubuntu is a beginner’s distribution, and that users might consider migrating to Debian when they gain experience. However, like many popular conceptions, the common characterizations of Debian and Ubuntu are only partially true. Debian’s reputation as an expert’s distribution is partly based on its state a decade ago, although it does provide more scope for hands-on management if that is what you want. Similarly, while Ubuntu has always emphasized usability, like any distro, much of its usability comes from the software that it includes — software that is just as much a part of Debian as of Ubuntu. So what are the differences between these Siamese twins? Looking at installation, the desktop, package management, and community in the two distributions what emerges is not so much major differences as differences of emphasis, and ultimately, of philosophy. Ubuntu’s standard installer places few demands on even novices. It consists of seven steps: the selection of language, time zone, and keyboard, partitioning, creating a user account, and confirmation of your choices. Of these steps, only partitioning is likely to be alarming or confusing, and, even there, the choices are laid out clearly enough that any difficulty should be minimized. The limitation of the Ubuntu installer is that it offers little user control over the process. If you are having trouble installing, or want more control, Ubuntu directs you to its alternate CD. This alternate CD is simply a rebranded version of the Debian Installer. Contrary to what you may have heard, the Debian Installer is not particularly hard to use. True, its graphical version lacks polish, and, if you insist on controlling every aspect of your installation, you might to blunder into areas where you can only guess at the best choice. However, the Debian Installer caters to less experienced users as much as experts. If you choose, you can install Debian from it by accepting its suggestions with only slightly more difficulty than installing Ubuntu would take. Desktops and Features Both Debian and Ubuntu are GNOME-centered distros. Although each supports a wide variety of other desktops, including KDE, Xfce, and LXDE, they tend to be of secondary importance. For instance, it took six weeks for Debian to produce packages for KDE 4.4, while Kubuntu, Ubuntu’s KDE release, has received relatively little attention in Ubuntu’s efforts at improving usability. Debian offers a version of GNOME that, aside from branded wallpaper, is little changed from what the GNOME project itself releases. By contrast, Ubuntu’s version of GNOME is highly customized, with two panels, whose corners are reserved for particular icons: the main menu in the upper left, exit options in the upper right, show desktop in the bottom left, and trash in the bottom right. Ubuntu’s GNOME also features a notification system and a theme that places title bar buttons on the left — controversial innovations that are unique to Ubuntu (unless some of its derivatives have adopted them recently). In its drive towards usability and profitability, Ubuntu also boasts several utilities that are absent from Debian. 
These include Hardware Drivers, which helps to manage proprietary drivers, Computer Janitor, which helps users remove unnecessary files from the system, and the Startup Disk Creator wizard. In addition, Ubuntu offers direct links to Ubuntu One, Canonical’s online storage, and the Ubuntu One music store.

Theoretically, these extra features should make Ubuntu easier to use. And, perhaps for absolute newcomers, they do. However, for many users, the difference between the standard Debian and Ubuntu desktops will be minimal. These days, what determines the desktop experience is less the distribution than the desktop project itself. Ubuntu does usually make new GNOME releases available faster than Debian does. But if you are using the same version, your desktop experience will not differ significantly no matter which of the two you use.

Packages, Repositories, and Release Cycles

Debian and Ubuntu both use .DEB-formatted packages. In fact, Ubuntu’s packages come from the Debian Unstable repository for most releases, and from the Debian Testing repository for long term releases. However, that does not mean that you can always use packages from one of these two distributions in the other. If nothing else, Debian and Ubuntu do not always use the same package names, so you may have dependency problems if you try. For example, in Debian, you want kde-full or kde-minimal to install KDE, while in Ubuntu, the package you want is kubuntu-desktop. The differences in names can be especially difficult to trace in programming libraries.

Another difference is the organization of online software repositories. Famously, Debian divides its repositories into Unstable, Testing, and Stable. There is also Experimental, but, since that is only for the roughest of packages — the first drafts, you might say — most users either avoid it, or else take only the smallest, most self-contained packages from it. Packages that meet the minimal standards for quality for Debian are uploaded to Unstable, and then find their way into Testing. There, they stay until a new Stable release is planned, eventually undergoing a final series of bug-testing and being included in the new release.

In effect, the Debian system allows you to choose your own level of risk and innovation. If you want the very latest software, you can use Unstable — at the risk of running into problems. Alternatively, you can choose Stable for well-tested software supported by constant security updates — at the risk of missing out on the latest software releases. Since Debian releases can be irregular, sometimes the Stable release is extremely old indeed.

Similarly, the internal organization of each Debian repository allows you to choose the degree of software freedom that you prefer. Unstable, Testing and Stable are each further subdivided into main (free software), contrib (free software dependent on other non-free software) and non-free (software free for the download, but having a non-free license). By default, Debian installs with only main enabled, so you have to edit /etc/apt/sources.list if you want the other repositories.

All this is very different from the organization of Ubuntu’s repositories. Instead of being organized by testing status, Ubuntu’s repositories consist of Main (software supported by Canonical, Ubuntu’s commercial arm), Universe (software supported by the Ubuntu community), Restricted (proprietary drivers), and Multiverse (software restricted by copyright or legal issues).
In recent years, these have been joined by Backports (software for earlier releases) and Partners (software made by third parties). For those who want the very latest, Ubuntu also has Launchpad, a combination of a project website and Debian’s Experimental repository.

The result is a mixture of criteria. Ubuntu’s Main repository is free and tested, while Universe is free but possibly untested (nor do you have any quick way of knowing which packages are untested). Restricted and Multiverse are proprietary, but their testing status is uncertain, while the freedom and quality of Backports and Partners packages has to be individually researched. As with Debian’s repositories, Ubuntu’s show a concern with quality and software freedom, but, unlike with Debian, judging a package by either criterion is vastly more difficult.

Given that Ubuntu releases on a six-month cycle, using packages from Debian Unstable or Testing, on the whole Ubuntu’s software tends to be less well-tested than a Debian official release based on Stable. In fact, from time to time, you can see complaints on the Ubuntu community forums about problems with particular packages. Such complaints are far less common in Debian. However, to be fair, impatience with the slowness of official releases tempts countless Debian users into dabbling with Testing, Unstable, or even Experimental, and rendering their systems unusable.

Community and governance

For many users, technical issues are probably the main concern when choosing a distribution. However, for more experienced users, the communities and how they operate can be equally important — and Debian and Ubuntu could hardly be more different.

The Ubuntu community is only six years old, but it long ago established itself as a very different place from Debian. Interactions in the Ubuntu community are governed by a Code of Conduct, which is largely successful in ensuring that discussions are polite and constructive. At the very least, the code provides a measure of expected behavior that can be referred to when discussions threaten to run out of control.

By contrast, the Debian community has a reputation for being a more aggressive place — one that has sometimes been accused of being unfriendly to women in particular and newcomers in general. This atmosphere has improved in the last few years, but can still flare up. One reason for this atmosphere is that Debian is an institutionalized meritocracy. Although non-developers can write documentation, test bugs or be part of a team, becoming a full Debian Developer is a demanding process, in which candidates must be sponsored by an existing developer, and repeatedly prove their competence and commitment.

That said, among full developers, Debian is a radical democracy, with its own constitution outlining how it is run and how decisions are made. The Debian Leader is elected by the Condorcet method of ballot counting, and has more power to coordinate than to control. Instead, mailing lists are used to discuss problems to the point of exhaustion, and general resolutions about distribution policy may follow. One reason that Debian discussions have a reputation for unruliness may simply be that much of the governance is done in public by dedicated people.

Ubuntu shares the tendency to meritocracy and transparency that is part of most free software projects.
However, although that spirit prevails most of the time in daily interactions, ultimately, major policy decisions are determined by Mark Shuttleworth, Ubuntu’s founder and self-described benevolent dictator for life. Those who work closely with Shuttleworth — most frequently, Canonical employees — also tend to have a larger say than others in the Ubuntu community. However, this authority tends not to be exercised except for deciding major strategic directions, and, even then, only after monitoring of community discussion.

In the end, the difference between the Debian and Ubuntu communities lies in their core values. Although less important than a few years ago, Debian remains a community-based distribution, dedicated to its own concepts of software freedom and meritocratic democracy, even at the expense of rapid decision making. Ubuntu, however, for all its strong community, is also the key to Canonical’s success as a business. If Ubuntu is more hierarchical than Debian, it is still more open than the majority of high-tech companies.

Making a choice

Despite their common origin, Debian and Ubuntu today are significantly different. When you are choosing between them, the decision is not a case of right or wrong, or of superiority or inferiority, but of what matters to you.

On the one hand, since Ubuntu forked, Debian has continued much as it always has. As a distro, it is aimed at all levels of users. It favors free software ideals, de-emphasizing proprietary software and seeing upstream projects such as GNOME as the place where changes should occur, rather than the distribution. Yet, at the same time, its community values prompt Debian to give the maximum freedom of choice in its software. In order to maintain these imperatives, Debian is perfectly willing to have long periods between releases and outdated official releases, because it is a community effort, in which business values such as timeliness and being current are secondary concerns.

On the other hand, Ubuntu has singled out the new user as its audience. While it has by no means abandoned free software ideals, it is more likely than Debian to countenance proprietary software, either for the convenience of users or to make the distribution more competitive as a product. Meeting its release schedule is at least as important as software quality in Ubuntu, and commercial success is important enough that Ubuntu developers are willing to make changes in the distribution rather than upstream in order to have them as soon as possible. Generally, Ubuntu is a friendlier place than Debian, but also a less democratic one.

For many people, the ideal distribution would probably have aspects of both Debian and Ubuntu. But, since that ideal does not exist, in the end making a choice is a tradeoff. Users must decide which values or tendencies matter most to them, and choose from there, knowing that either choice may not be entirely satisfactory.
<urn:uuid:cac0e19d-939f-4367-a606-f6ea77041eb1>
CC-MAIN-2022-40
https://www.datamation.com/open-source/debian-vs-ubuntu-contrasting-philosophies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00025.warc.gz
en
0.953793
2,641
2.546875
3
Much more rapidly than anyone originally thought possible, facial recognition technology has become part of the cultural mainstream. Facebook, for example, now uses AI-powered facial recognition software as part of its core social networking platform to identify people, while law enforcement agencies around the world have experimented with facial recognition surveillance cameras to reduce crime and improve public safety. But now it looks like society is finally starting to wake up to the immense privacy implications of real-time facial recognition surveillance.

San Francisco moves to ban facial recognition surveillance

For example, San Francisco is now considering an outright ban on facial recognition surveillance. If pending legislation known as “Stop Secret Surveillance” passes, this would make San Francisco the first city ever to ban (and not just regulate) facial recognition technology. The bill is being brought for a vote, as concerns mount that facial recognition surveillance unfairly targets and profiles certain members of society, especially people of color. When law enforcement agencies adopt this technology, for example, they usually roll it out in high-crime, low-income neighborhoods, and those disproportionately are neighborhoods with a high percentage of people of color.

This outright ban on facial recognition surveillance is part of a broader package of rules designed to give the city’s Board of Supervisors enhanced oversight over all surveillance technology used by the city. In addition to blocking any facial recognition surveillance technology from being used by any city agencies or law enforcement authorities, the new rules would require an audit of all existing surveillance technology (such as automatic license plate identification tools); an annual report on how technology is being used and how data is being shared; and board approval before buying any new surveillance technology for the city.

A tipping point for facial recognition surveillance

One reason why the outright ban on facial recognition technology is so important is because it fundamentally flips the script on how to talk about the technology. Previously, the burden of proof was on the average citizen and advocacy groups – it was up to them to show the hazards and negative aspects of the technology. Now, the burden of proof is on any city agency (including local police) that would like to implement the technology – they not only have to demonstrate that there is a clear use case for the technology, but also demonstrate that the pros far outweigh the cons for any high-tech security system (including a facial recognition database).

While the American Civil Liberties Union (ACLU) of Northern California supports the legislation, the local law enforcement authorities do not. As they see it, there is a clear use case for such technology – it helps to improve overall security, it discourages crime in a certain area if people know they are being watched, and images captured enable them to solve crimes by understanding who was at a particular scene at a particular point in time. Moreover, they resent what they see as far too much oversight from the city over the way they implement facial recognition surveillance. Earlier, they challenged a bill that would have given municipalities more oversight over how police departments can use surveillance technology.
London’s embrace of facial recognition surveillance

Ultimately, the privacy battle over the proper use and scope of facial recognition surveillance will play out on the streets, not in the courts. In London, for example, the Metropolitan Police have experimented with facial recognition technology, with mixed results. Before rolling out the new technology, the police stated that people who covered their face in areas where there were cameras would not be stopped for suspicious behavior. Yet, that was not necessarily the case once the technology was already in place. In one high-profile case, a man who was stopped for covering his face was later fined by the police after swearing and becoming hostile.

Of course, you can view this behavior in one of two ways – as the actions of a “guilty” person who was properly stopped and detained by the police for covering his face, or as the outraged actions of an “innocent” person who was improperly stopped and detained on a ridiculous charge. Without a doubt, there is a blurry gray line here. At what point do you stop someone simply because they are trying to maintain their privacy? London police officers were told to “use judgment” when stopping people who avoid the cameras, but doesn’t that imply that certain types of people – such as young people of color – will be stopped more often than others?

Facial recognition surveillance at the White House

In another high-profile case of facial recognition surveillance being rolled out with mixed (some might say dubious) results, consider the case of the U.S. Secret Service experimenting with the use of facial recognition surveillance outside of the White House. The idea is simple: scan the faces of all people strolling around the perimeter of the White House, in order to detect potential “people of interest” (i.e. terrorists) who might cause a security risk for the U.S. president. For now, this is only a limited test, and is scheduled to be completed by the end of summer 2019.

When is the best time to regulate a new technology?

These three examples – the proposed ban of facial recognition surveillance in San Francisco, the use of facial recognition surveillance by London police, and the White House experiment with real-time face recognition – highlight a key point. The time to regulate a new technology is at the very outset, and not after it has become so entrenched and ingrained that getting rid of it would seem to be unnecessarily complex.

That’s why the current moment is so important. Facial recognition systems are now used in airports and at border crossings by immigration and customs enforcement authorities. They are used as crowd control tools in authoritarian nations. And they are used by social networks. It’s now just as common for someone to log in to their digital device with their face as it is with their fingerprint. So we really are at a tipping point when it comes to deciding what to do with high-tech surveillance systems that use our face as the primary form of identification – if we wait a few more years, it might be too late to do anything about it.
<urn:uuid:7c1da03d-34cf-4686-ad48-9e36236bd528>
CC-MAIN-2022-40
https://www.cpomagazine.com/data-privacy/facial-recognition-surveillance-now-at-a-privacy-tipping-point/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00025.warc.gz
en
0.961569
1,229
2.53125
3
Microsoft says it's recorded a massive increase in XorDDoS activity (254 percent) in the last six months.

XorDDoS, a Linux Trojan known for its modularity and stealth, was first discovered in 2014 by the white hat research group MalwareMustDie (MMD). MMD believed the Linux Trojan originated in China. Based on a case study in 2015, Akamai strengthened the theory that the malware may be of Asian origin based on its targets.

Microsoft said that XorDDoS continues to home in on Linux-based systems, demonstrating a significant pivot in malware targets. Since Linux is deployed on many IoT (Internet of Things) devices and cloud infrastructures, we are likely to see DDoS (distributed denial-of-service) attacks from botnets that have compromised such devices.

DDoS attacks—where normal Internet traffic to a targeted server, service, or network is overwhelmed with a flood of extra traffic from compromised machines—have become part of a greater attack scheme. Such powerful attacks are no longer conducted just to disrupt. DDoS attacks have become instrumental in successfully distracting organizations and security experts from figuring out threat actors' end goal: malware deployment or system infiltration.

XorDDoS, in particular, has been used to compromise devices using Secure Shell (SSH) brute force attacks. XorDDoS is as sophisticated as it gets. The only simple (yet effective) tactic it uses is to brute force its way to gain root access to various Linux architectures. As Microsoft said in the report: "Adept at stealing sensitive data, installing a rootkit device, using various evasion and persistence mechanisms, and performing DDoS attacks, XorDdos enables adversaries to create potentially significant disruptions on target systems. Moreover, XorDdos may be used to bring in other dangerous threats or to provide a vector for follow-on activities."

Securing IoT devices

If you have an IoT device at home, know there are ways to secure it. Note that you may need some assistance from the company that built your IoT device if you're unfamiliar or unsure how to do any of the below.

- Change your device's default password to a strong one
- Limit the number of IP addresses your IoT device connects to
- Enable over-the-air (OTA) software updates
- Use a network firewall
- Use DNS filtering
- Consider setting up a separate network for your IoT device(s)
- When you're not using your IoT device, turn it off.

If you plan to get an IoT device soon, buy from a well-known brand. You're much more likely to get assistance from your supplier in beefing up your IoT device's security.
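Since XorDDoS gains its foothold through SSH password brute-forcing, one concrete hardening step on any Linux or IoT device that exposes SSH is to disable password logins altogether. A minimal sketch of the relevant /etc/ssh/sshd_config directives is shown below (support for these options varies by device firmware, so treat this as illustrative):

```
# /etc/ssh/sshd_config: hardening against password brute-forcing
PasswordAuthentication no   # allow key-based authentication only
PermitRootLogin no          # never allow direct root logins
MaxAuthTries 3              # cut connections after 3 failed attempts
```

After editing, restart the SSH service (for example, `systemctl restart sshd` on systemd-based distributions) for the changes to take effect.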
<urn:uuid:eec1f4f1-e478-46e3-9547-be2ddac06d96>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2022/05/massive-increase-in-xorddos-linux-malware-in-last-six-months
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00025.warc.gz
en
0.926039
562
2.75
3
Eight Principles for the Point of Knowledge – Where Business Rules Happen

We are already living in a knowledge economy – let’s start acting like it! It’s know-how (business rules) that makes your company smart. Let’s get to the point of that knowledge. Adapted from: Business Rule Concepts: Getting to the Point of Knowledge (4th ed.), 2013. http://www.brsolutions.com/b_concepts.php

- All know-how expressed in business terms – no ITSpeak.
- All know-how presented/applied selectively in exactly-as-needed fashion.
- All know-how presented/applied in ‘just-in-time’ fashion.
- All interactions gauged to the knowledge level of the role and the person.
- All workers enabled to ‘play up’ to level of ablest workers.
- All decisions made 100% consistently – no exceptions (except intentionally).
- All applications of business rules traceable and transparent.
- All know-how managed directly – by business people and business analysts.

Tags: Business Rule Concepts, business rule principles, Business Rules, knowledge economy, Principles

Ronald G. Ross

Ron Ross, Principal and Co-Founder of Business Rules Solutions, LLC, is internationally acknowledged as the “father of business rules.” Recognizing early on the importance of independently managed business rules for business operations and architecture, he has pioneered innovative techniques and standards since the mid-1980s. He wrote the industry’s first book on business rules in 1994.
<urn:uuid:4ce4d632-3d13-4fa9-80f7-20b4e13131c8>
CC-MAIN-2022-40
https://www.brsolutions.com/eight-principles-for-the-point-of-knowledge-where-business-rules-happen/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00025.warc.gz
en
0.906803
337
2.640625
3
Crash Course in Machine Learning

A survey of machine learning algorithms

In this article we clarify some of the undefined terms in our previous article and thereby explore a selection of machine learning algorithms and their applications to information security. This is not meant to be an exhaustive list of all machine learning (ML) algorithms and techniques. We would like, however, to demystify some obscure concepts and dethrone a few buzzwords. Hence, this article is far from neutral.

First of all we can classify by the type of input available. If we are trying to develop a system that can identify if the animal in a picture is a cat or a dog, initially we need to train it with pictures of cats or dogs. Now, do we tell the system what each picture contains? If we do, it’s supervised learning and we say we are training the system with labeled data. The opposite is completely unsupervised learning, with a few levels in between, such as:

- semi-supervised learning: with partially labeled data
- active learning: the computer has to "pay" for the labels
- reinforcement learning: labels are given as output starts to be produced

However, each algorithm typically fits more naturally either in the supervised or unsupervised learning category, so we will stick to those two.

Next, what do we want to obtain from our learned system? The cat-dog situation above is a typical classification problem; given a new picture, to what category does it belong? A related but different problem is that of clustering, which tries to divide the inputs into groups according to the features identified in them. The difference is that the groups are not known beforehand, so this is an unsupervised problem.

Figure 1. Classification.

Both of these problems are discrete, in the sense that categories are separate and there are no in-between elements (a picture that is 80% cat and 20% dog). What if my data is continuous? Good old regression to the rescue! Even the humble least squares linear regression we learned in school can be thought of as a machine learning technique, since it learns a couple of parameters from the data and can make predictions based on those.

Figure 2. Linear regression.

Two other interesting problems are estimating the probability distribution of a sample of points (density estimation) and finding a lower-dimensional representation of our inputs (dimensionality reduction):

Figure 3. Dimensionality reduction.

With these classifications out of the way, let’s go deeper into each particular technique that is interesting for our purposes.

Support vector machines

Much like linear regression tries to draw a line that best joins a set of closely correlated points, support vector machines (SVM) try to draw a line that separates a set of naturally separated points. Since a line divides the plane in two, any new point must be on one of the two sides, and is thus classified as belonging to one class or the other.

Figure 4. Support vector machines in two dimensions.

More generally, if the inputs are n-dimensional vectors, an SVM tries to find a geometric object of dimension n-1 (a hyperplane) that divides the given inputs into two groups. To name an application, support vector machines are used to detect spam in images (which is supposed to evade text spam filters) and in face recognition.

We need to group unlabeled data in a meaningful way. Of course, the number of possible clusterings is very large. In the k-means technique, we need to specify the desired number of clusters k beforehand. How do we choose? We need a way to measure cluster compactness.
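Before moving on to cluster compactness, here is a minimal, illustrative sketch of the SVM classifier described above. It assumes scikit-learn is available, and the data points are made up purely for demonstration:

```python
# Minimal SVM classification sketch (illustrative; made-up toy data).
from sklearn import svm

# Toy 2-dimensional inputs with binary labels: this is supervised learning.
X = [[0.0, 0.0], [0.2, 0.4], [1.0, 1.0], [0.9, 1.2]]
y = [0, 0, 1, 1]

clf = svm.SVC(kernel="linear")  # ask for a separating hyperplane (a line in 2D)
clf.fit(X, y)                   # training phase

print(clf.predict([[0.8, 0.9]]))  # classify a new point; prints [1]
```

The same `fit`/`predict` interface recurs across scikit-learn, so this sketch carries over almost unchanged to the other supervised techniques discussed in this article.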
For every cluster we can define its centroid, something like its center of mass. Thus a measure of the compactness of a cluster could be the sum of the member-to-centroid distances, called the distortion:

Figure 5. Distortion is lower on the left than on the right, so compactness is better.

With that defined, we can state the problem clearly as an optimization problem: minimize the sum of all distortions. However, this problem is NP-complete (computationally very difficult), but good estimations can be achieved via k-means. It can be shown and, more importantly, makes intuitive sense, that:

- Each point must be clustered with the nearest centroid.
- Each centroid is at the center of its cluster.

Artificial neural networks and deep learning

Loosely inspired by the massive parallelism animal brains are capable of, these models are highly interconnected graphs in which the nodes are (mathematical) functions and the edges have weights, which are to be adjusted by the training. A set of weights is scored by the accuracy of labeled output, and optimized in the next step or epoch of training in a process called back-propagation (of error). The weights are adjusted in such a way that the measured error decreases.

The nodes are arranged in layers and their functions are typically smooth versions of step functions (i.e., yes/no functions, but with no big jumps), and there are two special layers for input and output. After training, since the whole network is fixed, it’s only a matter of giving it input and getting the output.

Figure 6. A neural network with two layers.

The networks described above are feed-forward, but there are also recurrent neural networks. Convolutional networks use mathematical cross-correlation instead of regular smooth step functions. Deep neural networks owe their name to the great number of layers they use and to the fact that they are unsupervised learning models. While these networks have been quite successful in applications, they are not perfect:

- In contrast to simpler machine learning models, they don’t produce an understandable model; it’s just a black box that computes output given input.
- Biology is not necessarily the best model for engineering. In Mark Stamp’s words [1],

Attempting to construct intelligent systems by modeling neural interactions within the brain might one day be seen as akin to trying to build an airplane that flaps its wings.

Decision trees and forests

In stark contrast to the unintelligible models extracted from neural networks, decision trees are simple enough to understand at a glance:

Figure 7. A decision tree for classifying malware. Taken from [1].

However, decision trees have a tendency to overfit the training data, i.e., are sensitive to noise and extreme values in it. Worse, a particular testing point could be predicted differently by two trees made with the same training data, but with, for example, the order of features reversed. These difficulties can be overcome by constructing many trees with different (even possibly overlapping) subsets of the training data and making the final conclusion by choosing from among all the trees' decisions. This solves overfitting, but the intuition obtained from simple trees is lost.

Anomaly detection via k-nearest neighbors

Detecting anomalies is a naturally unsupervised problem and really makes up a whole class of algorithms and techniques, more data mining than machine learning. The k-nearest neighbors algorithm (kNN) essentially classifies an element according to the k training elements closest to it.
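For the anomaly-detection use of kNN discussed below, a common scoring scheme is to rate each new point by the distance to its k-th nearest training point. A minimal sketch, using numpy and made-up data:

```python
# kNN anomaly scoring sketch: score = distance to the k-th nearest neighbor.
import numpy as np

def knn_anomaly_scores(train, queries, k=3):
    """Rate each query point by the distance to its k-th nearest training point."""
    scores = []
    for q in queries:
        dists = np.linalg.norm(train - q, axis=1)  # distances to every training point
        scores.append(np.sort(dists)[k - 1])       # k-th smallest distance
    return scores

# Toy data: a tight cluster of "normal" points, then one faraway query.
train = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
queries = np.array([[0.05, 0.05], [5.0, 5.0]])
print(knn_anomaly_scores(train, queries))  # second score is much larger: an anomaly
```

Note there is no `fit` step here; as the article points out next, the labeled input is itself the model.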
Figure 8. The new point would be classified as a triangle under 3NN, but as a square under 5NN.

The kNN algorithm can also be adapted to be used in the context of regression, classification, and anomaly detection, in particular by scoring elements in terms of the distance to their closest neighbors. Notice that in kNN there is no training phase: the labeled input is both the training data and the model itself. The most natural application for anomaly detection in computer security is in intrusion detection.

I hope this article has served to establish the following general ideas on machine learning:

- Although ML has gained a lot of momentum in the past few years, its basic ideas are quite old.
- Fancy names can sometimes be used to masquerade simple ideas.
- ML is not a field of its own, but rather an application of statistics, optimization, data analysis and data mining.

References

[1] Mark Stamp (2018). Introduction to Machine Learning with Applications in Information Security. CRC Press.

Ready to try Continuous Hacking? Discover the benefits of our comprehensive Continuous Hacking solution, which hundreds of organizations are already enjoying.
<urn:uuid:7ac08d93-d2ab-4186-8fec-aad26070bb3a>
CC-MAIN-2022-40
https://fluidattacks.com/blog/crash-course-machine-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00225.warc.gz
en
0.934655
1,794
2.671875
3
For years, automotive companies have been working towards realizing a longstanding dream, in which self-driving cars travel without interruption on public roads, carrying their passengers in comfort and safety to their destinations. There have been great strides towards this new reality. Many automotive companies are already embedding software with varying levels of autonomous capability into their vehicles, although only a handful currently has made such features accessible to the driver.

Progress towards this end is not necessarily slowed by the availability of the technology used in autonomous cars – it’s already in many cars currently on the road – but more significantly by the regulation and interoperability of the technologies. It’s well known that some companies that have made autonomous driving functionality accessible have run into serious trouble, with injuries and even fatalities among drivers and pedestrians. Terrible outcomes such as these further complicate the legal issues surrounding the development and deployment of the technology, potentially setting back the rate of development by years. So how do we safely accelerate towards the autonomous driving future?

Five steps to autonomous driving technology

There are five levels of autonomous driving technology, with level one representing some automatic features, such as collision detection or lane departure warnings, and level five representing full automation, where no driver is needed. Higher levels of automation depend on a vehicle’s sensors perceiving external obstacles and factoring in driver judgment based on distance and time, just as a human driver would. So it is essential for vehicles to be equipped with intelligence based on crowdsourced and validated driver “truths” or data, as well as further secondary high-quality sensors and secondary camera systems to enable 2D or 3D outer labelling and scenario extraction.

To make this intelligence equipping and scenario extraction happen, manual work such as labelling and enriching data is needed – until now, this has been a significant effort. But artificial intelligence makes it significantly easier by creating what are, essentially, automated, crowdsourced truths. Human engineers are not able to process such massive data pools efficiently. So, to ensure vehicles are equipped with accurate scenario intelligence free of redundancies, automation is critical.

Implementing the technology used in autonomous cars

Currently, even the market leaders are only at level three, and features no higher than level two are legal for use on public roads. For example, “autopilot” mode still requires the driver to be awake and alert, ready to manually intervene by steering and braking when required. This in itself requires hugely sophisticated technology, not just with the vehicle outwardly sensing conditions and events on the road, but also monitoring what the driver is doing, even when they have switched to autopilot and handed control to the processors.

Advanced Driver Assistance Systems (ADAS) are a foundational piece in the jigsaw, especially in moving through the levels of autonomy. Level three, of course, is where things get complicated. Not only is the technology exponentially more complex, but every technology component at every step needs to be developed so that it can behave correctly in every situation, and then it needs to be independently validated and assured for its effective and safe use.
For some newer automotive companies, technology testing and deployment has been in their DNA from the outset, with the vision of autonomous driving a clearly identified goal in their purpose. But for longer-established companies and original equipment manufacturers (OEMs), the journey towards autonomous car technology is arguably more challenging. These companies cannot simply wipe the slate clean and redesign their development processes from scratch – unless it’s through a completely new experimental spinoff unhindered by legacy processes.

Even in that situation, as ADAS technology becomes more mainstream there will be an expectation from consumers that new cars come fitted with level one and two ADAS features as standard. And this means that the manufacturer will need to retrofit these new technologies into existing vehicle ranges, and therefore new elements will need to be added in to existing design, testing and manufacturing processes. Very few companies have the resources or time to conduct their own experiments and develop new ADAS-implementing processes. Buying in and implementing a working proof-of-concept could slash the development time and budget and free them from some of the regulatory burdens they face.

The data behind the drive

ADAS and autonomous driving have another fundamental element beyond technology and legality, and that is data. Higher levels of autonomous driving functionality require more situations and decisions to compute and execute decisions on. No vehicle could travel enough kilometers on public roads to experience every possible eventuality, so virtual modeling becomes a critical part of testing – and this means big data being generated and processed.

Digitization also plays a fundamental role in the validation of data, especially when necessary features and software updates must be made. Over-the-air updates that require software and UI updates are much more convenient with software and feature release management processes in place instead of the traditional methods that relied on hardware-based updates. In addition, automatic vehicle-to-vehicle (V2V) communications and interactions with devices and assets that are, for example, part of the new smart infrastructure on roads (vehicle-to-everything or V2X communications), also demand the transfer of massive volumes of data, including across new 5G networks.

But more than this, all the data a vehicle produces and processes has to be stored, annotated, visualized, analyzed, and then made available to all the different stakeholders in the development.

Experience and insight for the future

According to the Capgemini Research Institute’s recent AI in Automotive report, large automotive OEMs can boost their operating profits by up to 16% by deploying AI at scale. The report also highlights where auto manufacturers should focus their AI investments. Capgemini works with automotive industry clients to validate, verify and standardize the intelligence used across all levels of autonomy, and bring organizations working proofs of concept and constantly updated data sets to their existing design and development processes.

5G is one of the core technologies that will enable autonomous driving on public roads. This new cellular standard accelerates connection speed and reduces latency, and enables vehicles to communicate with each other and a huge number of connected on-road assets and infrastructure almost instantly.
Capgemini has extensive experience in the rollout of 5G for autonomous driving, from building the infrastructure, to designing and manufacturing vehicles with 5G technology embedded in them. And with Capgemini Engineering, we are the only global firm with both the depth of product engineering and breadth of ability to master data, and deploy technology at scale, that underpins the progression of Intelligent Industry. Above all, we can co-develop autonomous systems and technologies, and validate and verify the relevant responsibilities for their safety so that advanced and autonomous vehicles can be on the road safely, sooner.

Discover more about Driving Automation Systems Validation!

VICE PRESIDENT, DIGITAL ENGINEERING AND MANUFACTURING SERVICES GROUP

AUTOMOTIVE SOLUTION MANAGER, CAPGEMINI ENGINEERING FRANCE
<urn:uuid:a76f328b-d151-4337-8929-7a78cd429297>
CC-MAIN-2022-40
https://www.capgemini.com/pl-pl/resources/the-road-to-autonomous-car-development/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00225.warc.gz
en
0.947441
1,442
3.140625
3
In 1965, Gordon Moore, Intel’s cofounder, predicted that the number of transistors on an integrated circuit would double every 18 months. In some eyes, that law — which has become somewhat of an accepted axiom in the computer industry — has been losing veracity.

There are some — Moore included, according to several accounts — who think there are limits to the growth potential of silicon-based semiconductors. But others, including researchers at Lucent’s semiconductor division, reckon that silicon technology still has enough capacity to handle many more generations of innovation.

Predicting the long-term viability of traditional semiconductor technology is important, for if U.S. technology companies cannot maintain the pace of innovation in computing, there could be dire consequences for the economy. “The industry plans its manufacturing [according to] Moore’s Law and invests accordingly,” Martin Reynolds, general vice president and research fellow at Gartner, told TechNewsWorld. “The transistors can still get smaller, the die larger and the connections denser.”

Researcher M. Ashraf Alam, a member of Lucent’s Agere Systems division, last year proved Reynolds’ point. Alam’s work showed that transistors could indeed continue to shrink even smaller without losing reliability. “Our discovery shows that silicon has more steam left in it and gives everyone a little breathing room while we try to discover new ways to continue to shrink transistor structures,” he told TechNewsWorld.

Researchers say there is an array of challenges involved in extending the performance of a circuit or a processor by putting more and more transistors onto individual chips. The most fundamental challenge involves a thin layer of insulation, called a dielectric film, that is an integral part of each transistor. Semiconductor researchers had believed that silicon dioxide — the dielectric material traditionally used in semiconductor manufacturing — would not be usable at a thickness of less than 20 angstroms. (One centimeter is equivalent to 100 million angstroms.) That’s because silicon transistors use thicker films as insulators, and any small breakdown across this insulator would quickly lead to other defects nearby, resulting in a dramatic increase in electric current leakage that would cause the failure of the entire chip.

Late last year, Alam and colleagues made a significant advance in understanding the behavior of these thin films. The research team found that breakdowns in insulators are independent of one another, so they do not “snowball” into the kind of sudden failure seen in devices with thicker films, as scientists previously had thought. “As transistors get smaller, this film gets thinner and the time to breakdown becomes shorter,” Alam told TechNewsWorld. “Researchers had previously found that the breakdown in thin films was not as catastrophic as breakdowns in thicker films, but this work shows how fundamentally different a breakdown in a thin film is.”

Alam said this discovery not only allows engineers to calculate slight increases in leakage current, but also demonstrates that, in general, the increase in leakage will not affect a circuit’s performance. “This is good news for the communications semiconductor industry, but also for the scientific community at large,” he noted.

Action on Thin Films

Alam’s discovery could lead to technologies that significantly extend the performance limits of silicon as a viable technology, allowing companies to continue to develop new, innovative electronic products.
“Without this breakthrough, [integrated circuit] technology could not be proven to meet standard reliability assurance,” Mark Pinto, vice president of Agere Systems, told TechNewsWorld. “Previous-generation process technologies could never offer the level of performance that’s being required by customers today.”

Other research teams — at Texas Instruments, for example — are tackling the difficulties associated with making chips smaller and faster. Gene Franz, a principal fellow at TI, disagrees with the skeptics, offering his own unique postulate that parallels Moore’s Law. He says processing power will increase and power consumption will decrease exponentially over time, indicating that the entire system built around chips — not just the chips themselves — will get smaller.

But these developments still have not quieted the skeptics, who note that the fabrication technology needed to make computer chips is being stretched to its limits. At a roundtable sponsored by technology promoter Dubai Silicon Oasis earlier this week, Dave Chavoustie, executive vice president of semiconductor equipment manufacturer ASML, noted that customers are aggressively trying to extend the life of existing fabrication tools. “We’re being challenged by our customers to find more creative ways to take advantage of existing fabs,” said Chavoustie.

Migration to a new size of silicon wafers historically has happened every two to three chip generations — roughly the equivalent of three to five years. Today’s sluggish economy, however, has cut the cash flow of companies so much that they cannot afford to invest in the tools to take silicon chip fabrication to the next level.

Costs the Key

“The low return on capital means that the industry can’t afford to develop the tools and infrastructure,” said Don Mitchell, president and CEO of semiconductor equipment manufacturer FSI International. But many, including Gartner’s Reynolds, still do not buy into this pessimism. Cost might be an issue, but most analysts agree that R&D spending exists to support development of smaller and smaller chips.

“A good example is Intel’s investment in [extreme ultraviolet] technology that will take the industry through another four iterations of technology,” Reynolds said. “Along the way, we’ll see new techniques emerge to make ever-smaller transistors work,” he told TechNewsWorld. “And, ultimately, when we run out of land, the industry will build up by stacking chips.”
<urn:uuid:72d6db93-7639-4a1b-8663-00245d2209d2>
CC-MAIN-2022-40
https://www.crmbuyer.com/story/life-after-moores-law-beyond-silicon-31137.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00225.warc.gz
en
0.943105
1,258
3.234375
3
Jul 09, 2021 \ Research, University

Université de Paris accelerates Cosmology applications with Graphcore IPUs

A Université de Paris researcher has used Graphcore IPUs to accelerate neural network training for cosmology applications. In the newly published paper, researcher Bastien Arcelin explores the suitability and performance of IPU processors for two deep learning use cases in cosmology: galaxy image generation from a trained VAE latent space, and galaxy shape estimation using a deterministic deep neural network and Bayesian neural network (BNN).

An unprecedented amount of observational data will be produced in upcoming astronomical surveys. One such project is the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST), which is expected to generate 20 Terabytes of data every night, and around 60 Petabytes in total during its 10 years of service.

Cosmology researchers are increasingly using neural networks to manage such large and complex datasets. However, the sheer amount of unknowable or uncontrollable variables involved in real-world observations can make training neural networks on this type of data extremely challenging. As in many AI applications, simulated data – where precise parameters can be known and controlled – is best suited to training. Ideally, the quantity and quality of such data should be as close as possible to the real data recorded by photometric galaxy surveys like the LSST. Because this simulation data is also large and complex, the associated neural network needs to be fast, accurate and, for some applications, able to accurately characterise epistemic uncertainty.

Such computational demands raise the question: can hardware specifically designed for AI deliver superior performance for deep learning in cosmology? Graphcore’s IPU was designed to efficiently process graphs – the fundamental structure behind every AI algorithm – which makes it an ideal example processor to test this theory.

In this research, the performance of a single first generation Graphcore chip, the GC2 IPU, was compared with one Nvidia V100 GPU. When training neural networks with small batch sizes, the IPU was found to perform at least twice as fast as the GPU for the DNN and at least 4 times faster for the BNN. The GC2 IPU attained this performance with only half the power consumption of the GPU. The framework TensorFlow 2.1 was used during all the experiments, since TensorFlow 1 & 2 are fully supported on the IPU with an integrated XLA backend.

Data based on simulations of cosmological sources, such as galaxies, has traditionally been based on simple analytic profiles like Sérsic profiles, a slow generation technique which heightens the risk of introducing model biases. Deep learning methods have the potential to simulate data much faster. Generative neural networks are increasingly being employed to model galaxies for various applications in cosmology (for example, Lanusse et al., 2020, Regier, McAuliffe, and Prabhat, 2015 or Arcelin et al., 2021). Images are generated by sampling the latent space distribution of a variational autoencoder (VAE) which has been trained on isolated galaxy images. In this case, the IPU can be seen to outperform the GPU when generating small batches of images. It is likely that running this workload on Graphcore’s second generation IPU, the GC200, would enable significantly improved performance when generating larger batches, due to the processor’s greatly expanded on-chip memory.
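The generation step itself is conceptually simple: draw samples from the latent prior and push them through the trained decoder. A minimal sketch of the idea follows; the decoder file name and latent dimension are assumptions for illustration, not taken from the paper, and the framework is TensorFlow as in the experiments:

```python
# Sketch: generating galaxy images by sampling a trained VAE's latent space.
# The decoder file name and latent dimension are assumptions for illustration.
import numpy as np
import tensorflow as tf

latent_dim = 32                                        # assumed latent-space size
decoder = tf.keras.models.load_model("vae_decoder")    # hypothetical trained decoder

# Sample the latent prior (a standard normal is the usual VAE choice)...
z = np.random.normal(size=(16, latent_dim)).astype("float32")

# ...and decode the samples into a batch of simulated galaxy images.
galaxy_images = decoder.predict(z, batch_size=16)
print(galaxy_images.shape)                             # e.g. (16, height, width, bands)
```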
Next generation astronomical surveys will look deeper into the sky than ever before, resulting in a higher probability of blended (i.e. overlapped) objects. Clearly, it is much harder to measure the shape of galaxies when they are partly covered. Existing galaxy shape measurement methods do not perform accurately on these overlapping objects, so new techniques will be needed as surveys continue to observe further in the sky. Université de Paris researcher Bastien Arcelin has been developing a new technique featuring deep neural networks and convolutional layers in order to measure shape ellipticity parameters on both isolated and blended galaxies. The first experiment was carried out with a deterministic neural network and the second test used a BNN.

Once they have been trained, trainable parameters in a deterministic network have a fixed value and do not change even if the network is fed twice with the same isolated galaxy image. This contrasts with BNNs, where the weights themselves are assigned probability distributions rather than single values. This means that feeding a BNN twice with the same image results in two different samplings of the probability distributions, leading to two slightly different outputs. By sampling these distributions multiple times, the epistemic uncertainty of the results can be estimated.

For the deterministic neural network, Graphcore IPUs enable faster time to train in comparison with GPUs, particularly when small batch sizes are used. This is not hugely surprising, since IPU technology is already known for its efficient performance on convolutional neural networks in computer vision applications.

Next, the technique was tested on a Bayesian Neural Network. It is crucial for cosmology researchers to be able to establish the level of confidence they can have in neural networks’ predictions. Researchers can use BNNs to calculate the epistemic uncertainties associated with predictions, helping to determine the confidence level in these predictions, as epistemic uncertainty strongly correlates with large measurement errors. IPUs again outperform GPUs in the case of the BNN, with at least four times faster time to train.

The IPU is clearly capable of significantly reducing artificial neural network training time for galaxy shape parameter estimation and performs best at small batch sizes when using one IPU as in this research. For higher batch sizes, the network can be split over multiple IPUs to improve performance and efficiency.

For researchers based in Europe like Bastien Arcelin who need to process their data safely within the EU or EEA, the smartest way to access IPUs is through G-Core’s IPU Cloud. With locations in Luxembourg and the Netherlands, G-Core Labs provide IPU instances fast and securely on demand through a pay-as-you-go model with no upfront commitment. This AI IPU cloud infrastructure reduces costs on hardware and maintenance while providing the flexibility of multitenancy and integration with cloud services.
<urn:uuid:dd1ddb5b-678e-4dbc-a94d-9439e7d34c5d>
CC-MAIN-2022-40
https://www.graphcore.ai/posts/universit%C3%A9-de-paris-accelerates-cosmology-applications-with-graphcore-ipus
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00225.warc.gz
en
0.923841
1,352
2.796875
3
The Sodinokibi (REvil) ransomware has developed a new trick to encrypt more of a victim’s files.

Some applications, such as database or mail servers, will lock files they have opened so that other programs can’t modify them. This prevents a file from being corrupted if multiple processes are trying to modify it at the same time. It also prevents ransomware from encrypting the file without shutting down the process first. Many ransomware variants try to shut down active processes but are not able to shut down all of them.

Researchers at Intel 471 have reported that the latest version of Sodinokibi now uses the Windows Restart Manager API to close processes or shut down Windows services that are keeping a file open, enabling the ransomware to encrypt even more files than was previously possible. The API was created by Microsoft to make software updates easier.
<urn:uuid:ab1baa52-e503-4f0d-b7b0-1017c156b99b>
CC-MAIN-2022-40
https://www.binarydefense.com/threat_watch/sodinokibi-ransomware-now-encrypts-more-files/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00225.warc.gz
en
0.939073
192
2.578125
3
Client-side virtualization is an important topic because of the many advantages associated with it. It helps the user tighten security and protect all of his data. The control over data management and data movement can also be made stricter, so the user can feel in full control of the system. Client-side virtualization has several characteristics, some of which are described below.

Virtual machines are basically software programs that behave like, and emulate, another computer system. A virtual machine operates like a complete computer system, but on another computer's hardware, giving the user the unique ability to run two operating systems (OS) on the same hardware at the same time. This feature is not harmful in any respect, and there is no data loss on the host computer when a file runs on the virtual machine. If the primary operating system is damaged by viruses, it can easily be replaced with a fresh one. The virtual machine interface is able to install an operating system through its own interface. Therefore, we can say that the main purpose of a virtual machine is to provide a software version of hardware that does not physically exist. The user also saves time by working on a single computer instead of separate computers, saves money because only a single computer draws electricity, and more. Most importantly, you can install programs and software that were never made to run on the host computer's operating system.

Popek and Goldberg defined a virtual machine as "an efficient, isolated duplicate of a real machine". However, some features may not be virtualized, such as time, clock, device controls, etc.

Virtual machines can be further classified into two types according to how they work.

System virtual machines provide a computer-supported interface to execute functions and commands like the real operating system. This type is used where no physical hardware exists for various uses, and it provides the opportunity to work on the same computer architecture but with two operating systems at the same time. The virtual machine software is installed on the hard drive of the host computer, for example in a virtual partition, and can use the printing, faxing and other applications of the host computer. Hence, there are time and energy savings available.

Process virtual machines are also known as "application virtual machines" or "managed runtime environments", because this software installs within the primary operating system of a particular computer. It allows the same programming language to be used for the host system as in the virtual machine. The most popular examples are the Java Virtual Machine, the .NET Framework and the Parrot Virtual Machine.

We know that all software requires a specific amount of available space to install and perform well. When space is low, the software and the associated operating system will take more time to process and may hang for a moment. Virtual Server 2005 provides many options to manage a virtual machine's system resources.
You are able to allocate the memory required by the virtual machine, in addition to controlling how much of the host computer's CPU resources shall be allocated to the virtual machines. We have also discussed this in the types of virtual machines above. Resource allocation can be further subdivided as follows.

Allocation of memory: This is one of the main parts of the configuration. When we create a virtual machine, it is compulsory to specify its memory requirement. This represents the total memory in GBs or MBs that the virtual server shall allocate to the virtual machine. However, the maximum memory is limited to 3.6 GB, which also depends upon the availability of space in the physical memory, and this can be modified later when the virtual machine is off. It is important to mention that the memory settings page automatically shows, after analyzing the available physical memory, how much memory you can allocate. A virtual machine requires at least 32 MB more than the specified minimum in order to run video RAM (VRAM).

Allocation of other CPU resources: This can be described as the maximum and minimum resources your host computer has available to allocate to the virtual machine, along with other levels of control. However, it only shows the resources you have specified on the resource allocation page. This is called resource allocation by capacity. You can also assign a number ranging from 1 to 10,000 to a virtual machine to indicate how important it is to allocate resources to this particular virtual machine over others. The value 1 represents the minimum and 10,000 represents the maximum weight. This is called resource allocation by weight.

There is a difference between virtualization and emulation. Emulation means that all the operations a virtual machine needs in order to behave like the physical host computer are simulated completely in software, whereas virtualization uses the actual hardware of the host computer, such as its CPU. The term was first used by IBM in 1957 and referred to hardware only, but in the modern age it is used for both hardware and software configurations. Emulators were invented to run programs originally designed for a different computer architecture. For example, video games for the PlayStation 2 could be played on a PC or Xbox 360 by providing an ideal duplicated environment. There are various emulators written in the C language, and some in Java, for systems like the Commodore 64, Atari, Nintendo, etc. However, most of these emulators were complex because they were written in C, until emulators began to be written in Java.

The Java PC project (JPC) is a pure Java emulator of the x86 platform, and it runs anywhere you have installed a Java Virtual Machine. This program is installed over the primary operating system of the host computer and creates a virtual layer, giving the opportunity to install another operating system of your choice on it in a most reliable and quick manner. JPC is easily installed on Windows, Linux and other operating systems and provides a dual security system. It does not harm data on your primary computer, because of its various security layers, and it runs at about 20% of the native speed of the host computer, making it the fastest emulator software. The emulation process only requires an advanced computer architecture, and it gives you the chance to use many restricted consoles on your PC.
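To make the distinction concrete, here is a toy, purely illustrative sketch of emulation: a made-up three-instruction "CPU" whose every operation is simulated in software rather than executed by the host processor (real emulators such as JPC are vastly more complex):

```python
# Toy emulator sketch: a made-up 3-instruction "CPU" interpreted in software.
# Real emulators (e.g. JPC) are vastly more complex; this only illustrates
# the core idea that every operation is simulated rather than run on hardware.

def run(program):
    acc = 0  # single accumulator register, simulated as a Python variable
    for op, arg in program:
        if op == "LOAD":      # put a constant in the accumulator
            acc = arg
        elif op == "ADD":     # add a constant to the accumulator
            acc += arg
        elif op == "PRINT":   # simulated I/O
            print(acc)
    return acc

# A tiny "guest program" for the made-up instruction set.
run([("LOAD", 2), ("ADD", 40), ("PRINT", None)])  # prints 42
```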
Virtual Machine Security
It was once thought that virtual machines could not be attacked and were the safest environment for computer operations, but the criminal efforts of attackers have proven otherwise: in 2012 the first Trojan aimed at virtual machines, named "Morcut," appeared. Software companies invested in virtualization with the aim of creating a safe, cost-saving working environment, but that assumption of inherent safety has not held up, and you still need registered antivirus software for protection. Most security concerns arise from the operations performed around a virtual machine, not from the virtual machine itself, so it needs to be secured through several methods and techniques.

The first technique is to use a secure network for data sharing and to always run a registered antivirus program on the host computer. When data is shared over the host computer's network through a network interface card (NIC), it is possible for a virus to attack the virtual machine's data store. We can also dedicate a trusted portion of the host's hard disk as a partition for the virtual machine's operations. In addition, you can run an antivirus program inside the virtual machine, scan data as a routine task, and keep malware out of its reach. To keep a virtual machine secure, use antivirus software, patching and cloning, and host-based detection of errors and viruses. Strictly restricting unsecured network access on the host computer also goes a long way toward protecting the data in the virtual machine.

Kaspersky has identified three approaches to protecting a virtual machine's data and operations:

The agent-based approach: a complete antivirus program is installed on each virtual machine, just as on the host computer. This protects virtualized data, but performance is compromised.

The agent-less approach: instead of a traditional antivirus program, built-in anti-malware protection is provided to the virtual machines, keeping them safe from network-borne attacks and similar threats. It can update itself and takes up much less space. However, because it lacks the scanning, processing, and other options of a full antivirus program, the agent-less approach offers limited security, and it requires additional space for a separate system manager. It is most beneficial where virtual machines are used only for data storage.

The light-agent approach: a combination of the agent-based and agent-less approaches, providing a special toolbox for controlling malware activity; websites containing malicious content can be blocked, with an audible alarm alert. This is the most widely used approach, and arguably the best balance between performance and safety.

Connecting a Virtual Machine to a Network
A few steps are needed to connect your virtual machine to a network. A separate IP block is required for your public or private network; which one you use depends on your needs and goals. Once you have your IP block, start configuring the virtual machine installed on your personal computer. If you have not yet installed a virtual machine, one can be obtained through the public network IP manager.
There are different network manager configuration steps for different operating systems, such as Windows Server 2008 Server Core; Windows Server 2003 Standard and Enterprise; Windows Server 2008 Standard, Datacenter, and Enterprise; Red Hat; Fedora; and Ubuntu. Links with this information are easy to find.

Hypervisors
Hypervisors, also known as virtual machine monitors (VMMs), run the operations of virtual machines. More than one VMM may run on a single computer, enabling it to run and operate more than one virtual machine, each known as a "guest." The hypervisor allocates the appropriate, calculated share of the host computer's CPU resources to every virtual machine installed on it, along with memory and disk storage. Hypervisors fall into two categories:

Type 1: installed directly on the host computer's hardware, controlling the virtual machines directly. These are also called bare-metal hypervisors, because they run on a machine with no operating system installed. If you are using Microsoft Hyper-V, VMware, Oracle VM Server for x86, KVM, or Citrix XenServer, this is the type of hypervisor you are working with.

Type 2: also called hosted hypervisors. Whereas native (bare-metal) hypervisors run directly on hardware, Type 2 hypervisors require an operating system to run.

Which type is better? It depends entirely on you and the resources available. However, Type 1 is the more reliable and more secure hypervisor, since it does not depend on an operating system to run its functions, and this also makes it faster. Type 2 may face some limitations because it is bound to an operating system. Hypervisor-based duplication of data also enables better backups, and this method of replication is cost-effective and less complex than other modern data-cloning methods.

As you can see, virtualization plays an important role in everyday computing, and everyone should know about it: the more of these methods and components you understand, the better you will be able to fix problems on your own computer.
<urn:uuid:d4d9711e-0a10-4bb9-b8b5-ea3915d4a069>
CC-MAIN-2022-40
https://www.examcollection.com/certification-training/a-plus-basics-of-client-side-virtualization.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00225.warc.gz
en
0.942547
2,522
2.953125
3
What's most interesting about this news, and what nobody is talking about, is that other carriers don't have subsidies, either.

What is a subsidy?
The Merriam-Webster online dictionary defines a "subsidy" as "a grant or gift of money." The word "subsidy" is usually associated with a government giving money to an industry or organization as a matter of policy. The US government, for example, collects taxes from the public and then gives some of that money to specific industries in order to prop up and influence the direction of national energy supplies, infrastructure, food and so on.

In order for something to be considered a "subsidy," money must be transferred from one entity to another, as Merriam-Webster says, as a "grant" or a "gift." If you give me money, you're giving me a subsidy. If you give me money for now, but I have to give it back plus interest, that's a loan.

(Ironically, "telecommunications" is the third largest industry to get subsidies from the US government. That money comes from US taxpayers. That means if you pay taxes in the United States, you actually subsidize AT&T, Sprint and Verizon, not the other way around.)

Use of the word "subsidy" as it relates to an assumed discount when a phone is purchased with a carrier contract is a misuse of that word, a euphemism, a misdirection, a lie.

Carriers give you loans, not subsidies
So let's be clear about your average mobile phone contract. Carriers claim their proposition is this: "If you sign a two-year contract for phone and data service with us, we'll pay for most of the cost of your smartphone." But that's just spin – conceptual packaging of costs designed to maximize what you're willing to pay.

The truth is that they are not paying for most of the cost of your phone. Be assured that you are paying the full cost of that phone, and then some. The carriers are not going off and getting money from elsewhere and redirecting it to you. That would be a subsidy. What they are in fact doing is selling you a phone and a plan at a price that fully covers the total cost of both the phone and the plan and also leaves a hefty pile of cash for them to keep as profit. The contract is the legal requirement for you to pay for all that with monthly payments, plus penalty fees if you exceed the terms of the contract — for example, if you exceed your "minutes."

Regarding the phone, they are giving you a loan. They're paying for the price of the phone up front, and then the repayment of that loan is built into your monthly payment, which you are required by contract to pay. And that's the problem: the terms of the loan they give you are a rip-off.

When you buy into a "subsidized" phone with a two-year contract, some number of dollars you're paying each month is payment on that loan. Let's call it $20 per month, which is the amount T-Mobile charges per month for the repayment of its iPhone loans after you pay $99 up front under its new plan. (It's also the amount in a similar model outlined last week in the blog Phone Scoop.) Because an average carrier contract has you paying typically about $200 up front for a high-end smartphone, we can assume that hypothetical $20 includes some interest. (An unlocked iPhone costs $649 from Apple. A "subsidized" phone from a carrier where, in our very conservative model, you're paying $20 per month in payments and interest on your loan works out to $680 — the price of the phone, plus a total of $30 interest on the loan.)
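To make that arithmetic concrete, here is a small sketch that compares the total paid under the article's hypothetical contract numbers with the unlocked price. The figures ($200 up front, $20 per month over 24 months, $649 unlocked) come from the article's own conservative model; the function name is mine.

    def total_contract_cost(upfront, monthly_loan_payment, months=24):
        # Total paid toward the phone over the life of the contract.
        return upfront + monthly_loan_payment * months

    unlocked_price = 649.0  # Apple's unlocked iPhone price cited above
    paid = total_contract_cost(upfront=200.0, monthly_loan_payment=20.0)
    interest = paid - unlocked_price
    print(f"Total paid for the 'subsidized' phone: ${paid:.0f}")    # $680
    print(f"Implied interest over two years: ${interest:.0f}")      # about $30, as cited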
If you immediately upgrade to a new phone after two years and sign a new, two-year contract, then you get another loan and the process continues. If, however, you're among the majority that waits some time before getting a new phone, you continue to pay your monthly loan payments on the phone, even though it has been paid off. It's like paying off your house and all the interest on your home loan after 15 years, but being required by the bank to continue paying your regular mortgage payments every month until you buy another house and get another loan.

That's why carriers love the whole "subsidy" shell game. It hides your loan payments in a murky payment plan that definitely covers the cost of your phone, interest on the loan they gave you, the costs associated with your actual service, plus more money to profit the carrier. There's no "subsidy" anywhere in this plan. They use the word to confuse you into taking out a very bad loan.

The reason the scheme works, and the reason consumers don't scrutinize what they're actually paying for, is a flaw in human reasoning called "present bias": humans will gladly pay more for something later if it means spending less now.

So how is T-Mobile different?
Interestingly, T-Mobile is also happy to give you a loan to buy your phone. But there are three differences. First, when you've paid off the loan with T-Mobile, you stop making payments. Second, they don't charge you interest on the loan. And, third, they give you a discounted price for the phone — for example, by the time you pay off your loan, you will have paid less than $600 for an iPhone 5.

You can also buy the unlocked phone at a discount price if you don't take out the T-Mobile loan. For example, they charge $549.99 for the iPhone 5 ($100 less than Apple charges) if you pay the whole cost up front. Or, you can just use the phone you already own. In essence, they've de-coupled your monthly payment for wireless service from the loan they give you for your phone — the cost of service is the same whether you get a loan from them to buy a discounted phone or whether you bring your existing phone onto their network.

Why T-Mobile's plan is good
Ultimately, it's all a shell game no matter what. Who knows, for example, whether T-Mobile takes some of the money it charges you for wireless service and uses it to help discount the phone or carry the phone loans? What's good about T-Mobile's plan is that the phones, loans and wireless service costs are all separated from each other. T-Mobile's new policies help shatter the "subsidy" scam that used to be nearly universal.

And the pièce de résistance, from T-Mobile's point of view, is that its loan program exploits "present bias" even more than conventional plans do. The proposition for consumers is: "Pay $99 now and the rest later."

More importantly, though, I would like to destroy the absurd notion that subsidies exist anywhere in the industry. There are no subsidies. Subsidies are a myth. The truth is that what carriers call "subsidies" are really very bad, very high-interest loans buried inside a shell game designed to confuse the public into spending far more for phones and wireless service than they really should.
<urn:uuid:47cafab0-1f4f-4853-8006-e16c90c60ae8>
CC-MAIN-2022-40
https://www.datamation.com/mobile/why-the-mobile-phone-subsidy-is-a-myth/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00225.warc.gz
en
0.956926
1,617
3.03125
3
Did you see when Donald Trump's website was defaced by hackers during the presidential election? Or when Lenovo's website was hacked and visitors were greeted with slideshows of bored teenagers? According to a study conducted by the University of Maryland, there is an attack on websites every 39 seconds. Such website defacement attacks are common and are generally targeted. These attacks are an embarrassment for you and your business and should be kept at bay with robust website defacement prevention measures.

What is website defacement?
Just like vandalism in the real world, website defacement is its online form. After defacing a website, cybercriminals leave visible evidence, in messages and images, that your website has been hacked. These messages or images may be political, religious, or written just for fun. Mostly, attackers do it to bring attention to a particular agenda they want to promote. In other incidents, attackers leave behind messages such as "Hacked by ..." to gain fame.

As a business owner, your website is the face of your business, and website defacement attacks can easily damage your company's reputation. Defacement also has a financial impact, since customers will skip your website and purchases won't be made, and on top of that there is the cost of repairing the site. This is why website defacement prevention strategies are so important for your business.

How do hackers deface a website?
Attackers deface websites by gaining unauthorized access through various means. They might exploit a vulnerability in your website, for example by accessing admin accounts through credential theft, code injection, or the abuse of elevated permissions and rights. Other attacks, such as DNS hijacking and malware infections, can also be used. Once hackers gain access to core files through these attacks, they can make any changes they want. Apart from defacement, attackers can also install backdoors and other malware on the website to exploit it later.

Examples of website defacement
Let's look at some website defacement examples and see what they look like.
1. Defacement of Trump's website during the 2020 election campaign. This is a typical example of political vandalism.
2. Lenovo's website was hacked and visitors were welcomed with a slideshow of bored teenagers. This was done in retaliation for Superfish by a hacker group named Lizard Squad.
3. A cryptocurrency forum taken over by an ad campaign. This is how ads hijack websites and display their own content.

As we see above, there are different kinds of website defacement, but any form is evidence that your website is compromised and needs to be fixed as soon as possible.

6 Steps to website defacement prevention
Now that we have seen how website defacement can impact your business, we need to understand how to prevent such incidents. By following the steps below, you can stop such attacks on your website.

1. Limit privileges
By limiting access to admin files and folders, you are protecting against attackers who have taken control of regular user accounts, since those accounts will not have access. Make sure that access to core files is only available to those who need it. Privileges and file permissions should be based on user roles and requirements. Proper off-boarding of inactive users is also important, so that the list of users with high privileges stays clean.

2. Change default credentials
This is a very common security tip that is often ignored.
When setting up a website, make sure that you update the default credentials. Default admin usernames and passwords can be easily cracked to gain access to your website. In line with website defacement prevention best practices, also try to change the default admin email and admin login location while setting up your website.

3. Limit the number of add-ons and plugins
The more plugins and extensions installed on your website, the more potential entry points there are for attackers. According to one report, WordPress websites with more than six add-ons are twice as likely to get hacked as websites with none. Attackers might also exploit zero-day vulnerabilities in plugins or themes to compromise your website. You can reduce this risk by uninstalling unused themes and plugins and by updating the rest regularly.

4. Limit file uploads
If your website allows file uploads, attackers can use them to penetrate your system by uploading malicious payloads, such as malware that compromises your website. When allowing file uploads, make sure there is a limit on upload size and file types, and scan every file for suspicious content.

5. Protect against attacks such as SQL injection and XSS
Protecting your site against XSS and SQL injection attacks can be as simple as sanitizing user inputs on your website and encrypting communication between servers. Installing a website defacement protection solution such as a web application firewall can also protect your site from such attacks. (A small sketch of input handling and upload checks appears after this article.)

6. Scan for vulnerabilities
Security is never an end state; it requires regular updates and repairs. Security scanners such as Astra can scan your entire website and find any vulnerabilities that might be present. This reduces the chances of your website getting hacked and protects against the multitude of attacks that exploit such security gaps.

How does Astra Security protect against website defacement?
Astra Security offers a complete security suite that includes a firewall, a malware scanner, one-click malware removal, IP/country blocking, GDPR support, and more. Our expertise in cybersecurity and friendly assistance help you strengthen your website's security against all sorts of cyberattacks. With our security tools, you can protect your website from website defacement, SQLi, XSS, CSRF, credit card hacks, file infection, spam, and 100+ other cyber threats. The Astra Security Suite can stop real-time attacks on your website and help you take the necessary steps to put website defacement prevention and other security measures in place. Don't just believe us; try us.
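As a rough illustration of steps 4 and 5 above, the sketch below shows two common defensive habits: validating an uploaded file's extension and size before accepting it, and using a parameterized query instead of string concatenation so user input cannot alter the SQL. It is a generic Python example of the ideas in this article, not Astra's product; the size limits, allow-list, and table name are made up.

    import os
    import sqlite3

    ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # hypothetical allow-list
    MAX_UPLOAD_BYTES = 2 * 1024 * 1024             # hypothetical 2 MB cap

    def upload_is_acceptable(filename, size_bytes):
        # Reject files whose type or size falls outside the allow-list.
        ext = os.path.splitext(filename)[1].lower()
        return ext in ALLOWED_EXTENSIONS and size_bytes <= MAX_UPLOAD_BYTES

    def find_user(conn, username):
        # Parameterized query: the driver escapes the value, blocking SQL injection.
        cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
        return cur.fetchone()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    print(upload_is_acceptable("logo.png", 100_000))   # True
    print(upload_is_acceptable("shell.php", 50_000))   # False: extension not allowed
    print(find_user(conn, "alice' OR '1'='1"))         # None, not a dump of the table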
<urn:uuid:78b38a08-eeb1-4265-a34a-48cd79872360>
CC-MAIN-2022-40
https://www.getastra.com/blog/knowledge-base/guide-on-website-defacement-prevention/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00225.warc.gz
en
0.948751
1,230
2.578125
3
How to create a database front-end in 5 minutes
Full Transcript of Video

Today, I'm going to show you how to create a web-based front-end for your database in just 5 minutes. This front-end will let us create, read, update, and delete data from our database using any web browser. We'll be using our low-code development platform, m-Power, as it simplifies the process.

Now, in this demo we're creating a front-end over an iSeries database. But, with m-Power, the process is the same for any database. We could just as easily create the same front-end over MySQL, SQL Server, Postgres, Oracle, and any other relational database.

So, here on m-Power's home screen, I'll click the "Create Apps" button, as I want to create a new web application. Now I just name my application and select a template. The template is how we choose what our application is going to do. In this case, I want to view and update data, so I need the data list with web form template, which is already selected.

m-Power looks at my database and shows me a list of available tables. For this demo, I'm interested in a front-end for my customer data. Once I select the table, I can pick and choose which fields I want, or just select all fields up here. Now, the field names in my database table aren't exactly user friendly. So, I'll click "edit field settings" and adjust the descriptions so users know which field they're editing. Also, I don't really like the way these fields are arranged. So, I'll quickly rearrange the fields how I want them to appear. Okay, once I'm done, I need to tell m-Power how to sort my data. I'll sort it by customer number.

So now, m-Power has created a data model using the specifications that I've just outlined. In this screen, I get a preview of the application we're about to complete, and have the option to add more features. For this application though, I don't need to do any of that, so I'll move on to the build step. Once I click Build, m-Power puts everything together for me.

So, let's check it out. As you can see, the data is organized in a familiar table format. Here at the top, I can sort and filter my data, or even add new customers. These options on the left let me view, copy, delete, or update the data.

Now, when I go to update this customer, I don't really love the form layout here. The form itself is too long. I could shorten some of these inputs and make everything fit a lot better. To do that, I'll open up m-Painter (which is m-Power's visual editor) and head over to the form layout editor. Here I can remove rows and organize my form however I want. Once I'm done, I'll just save my changes and reload my form. It looks a lot better now, doesn't it? Everything fits on the screen. It feels cleaner and easier to work with.

Okay, so we've put everything together now. Let's verify that it's working. Suppose I want to update a customer address in the database. I'll click the update option on that customer record to open the form again, and change the address. Once I make the change and click accept, you'll see that the change now shows up in my database table.

So, this database front-end works well as is. It does what I want it to do. But, we can make it a little better. For instance, I think it'd be useful to add an order history drill down. This would let a user click on a customer name and pull up a list of their orders. Let's quickly add that. Now, I already have an application that lists customer orders. I just need to use m-Power's smartlink feature to link that to the one I just built.
I'll go into m-Painter, and add a smartlink here to the customer name. m-Power is smart enough to know that when I click this link, it should return all of that customer's orders from the order history application. Once I save my changes, you'll see that I can now drill into individual customer orders.

Now, there's a lot more I could do here. I could add user or role-based security so only certain employees can update the data. I could add edit-checking to ensure that employees aren't entering faulty data. I could set up a workflow that automatically sends an email every time a new record is created. I could even create reports and dashboards over this data and put everything into a secure portal. For sake of time however, we won't do any of that right now. But, if you'd like to see more of what m-Power can accomplish, check out our website at mrc-productivity.com. Thanks for watching.
<urn:uuid:d1ac45cc-61d9-46df-bbe6-a39b65cf5f6d>
CC-MAIN-2022-40
https://www.mrc-productivity.com/research/videos/video-db-frontend.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00225.warc.gz
en
0.931991
1,095
2.5625
3
Kubernetes clusters can be managed through the kubectl command-line tool. For configuration, kubectl searches the $HOME/.kube directory for a file called config. Different kubeconfig files can be specified using the KUBECONFIG environment variable or the --kubeconfig flag.

This introduction covers the kubectl syntax and command actions, along with common examples. For further information on each command, including all supported flags and subcommands, consult the kubectl reference manual. Installation instructions may be found at kubectl installation.

To perform kubectl commands from your terminal window, use the format:

# kubectl [command] [TYPE] [NAME] [flags]

In the above syntax, command, TYPE, NAME, and flags are as follows:
- command specifies the operation to perform on one or more resources; create, get, describe, and delete are examples.
- TYPE specifies the resource type. You can provide the singular, plural, or abbreviated forms of resource types, which are case-insensitive.
- NAME specifies the resource's name. Case matters when it comes to names. Keep in mind that if no name is given, all resources of that type are listed; for example, kubectl get pods lists all pods.
- The command-line flags override default values as well as any related environment variables.

When applying a command to more than one resource type, you can specify each resource by its type and name, grouping resources together if they are all of the same type, using the syntax: TYPE1 name1 name2 name3 ...

Let's get started with the kubectl command, but first, make sure a Kubernetes cluster is available, as well as the kubectl command-line tool configured to connect to it. This tutorial should be done on a cluster that has at least two nodes that are not acting as control plane hosts. You can use minikube to construct a cluster if you don't currently have one. To run minikube, type the attached command in the command line.

Kubectl Get Pods
Display the pods with the kubectl get pods command and choose one to run with the exec command. The get command in kubectl displays one or more resources. Pods (po), replicationcontrollers (rc), services (svc), nodes (no), componentstatuses (cs), events (ev), limitranges (limits), persistentvolumeclaims (pvc), persistentvolumes (pv), resourcequotas (quota), endpoints (ep), namespaces (ns), horizontalpodautoscalers (hpa), serviceaccounts, and secrets are some of the possible resource types.

Kubectl Get Pods -o Wide
The get pods -o wide command displays a list of all pods in the current namespace, along with additional information printed in plain text alongside the standard columns; for pods, this includes the name of the node. Plain text is the default output format for all kubectl commands. To display results in a specific format in your terminal window, you can add the -o or --output flag to a supported kubectl command.

When a deployment is created, Kubernetes also creates a Pod to host the application instance. A Pod is basically a collection of one or more application containers, together with the resources they share. A Pod is modeled as an application-specific "logical host" and can hold several tightly connected application containers. For example, a Pod may include both the container for a Node.js application and a separate container used to feed the data that the Node.js website will broadcast. Containers in a Pod all share the same IP address and port space.
They are always located together and scheduled together, and they run in a shared context on the same Node. The Pod is the atomic unit of the Kubernetes platform. When we create a Deployment in Kubernetes, it generates Pods that contain containers (as opposed to creating containers directly). Each Pod is assigned to the Node it is scheduled on and stays there until it is terminated or deleted.

In this article, we have covered the basics of kubectl and how to list all pods in "ps" output format, along with instructions for listing pods in wide output format and other useful information. You can use the full resource name (pods), the singular form, or the short code at the start of each command; they all produce the same result. Most commands will need to be followed by the precise name of the resource you're managing. (A few command examples follow below.)
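To illustrate the commands discussed above, a few common invocations are shown below. The flags used here (-o wide, -n, --all-namespaces, -l, -o yaml) are standard kubectl options; the namespace, label, and pod names are placeholders.

    kubectl get pods                       # pods in the current namespace
    kubectl get pods -o wide               # adds columns such as pod IP and node name
    kubectl get pods --all-namespaces      # pods across every namespace
    kubectl get pods -n kube-system        # pods in one specific namespace
    kubectl get pods -l app=my-app         # filter by a (placeholder) label selector
    kubectl get pod my-pod -o yaml         # full manifest of one named pod

The -o flag changes only the output format, not which resources are selected, so these options can be combined freely with namespace and label filters.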
<urn:uuid:cce76aa4-9e72-465a-8708-ca19f64b418e>
CC-MAIN-2022-40
http://dztechno.com/kubectl-get-pods-wide-format/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00425.warc.gz
en
0.882373
1,047
2.890625
3
Every phenomenon in the physical world is accompanied by friction. Indeed, perpetual motion is unattainable due to the dissipation of energy induced by friction. Machines, and what they were able to perform, led to the First Industrial Revolution. But with the advent of thermodynamics, physics was able to explain the limits of mechanical performance. These limits did not depend on the technical components of the machines in question, but on the transformations of energy allowed – or not – by physics. Gaining order always costs energy, and nature constantly tends to dissipate order. Disorder cannot be ordered spontaneously without an investment of energy.

Business processes and the frictions within
The First Industrial Revolution gave birth to the concept of operational processes – orderly tasks that mark out a chain of production and value. The discipline of business process management appeared, which aimed at streamlining and optimizing these processes within enterprises to avoid inefficiency. However, over time, inefficiency and friction creep into processes, just as they do in nature.

In the digital age, emerging technologies can extract and process massive data sets via digital, cloud-based platforms. This data can be collected and leveraged with a premeditated use in mind, for example to deliver frictionless customer service. Traditional organizations whose data legacy has not been collected or planned with intentional objectives in mind often find this a challenge. When we apply analytics to data generated by the information systems of these traditional organizations, through what we call "process mining," we discover processes that are very different in reality from the originally defined operational processes. Bottlenecks appear that affect the efficiency of the process. This is an example of friction and the movement to disorder that inevitably takes place – and it tends to have a human origin.

In traditional businesses, humans use and supervise machines as tools rather than as operators in processes. The machines do not supervise operations and do not make decisions. When technologies and tools evolve or humans change, processes change in order to circumvent the problems created by these changes. And this is an inevitable decline toward disorder in operational processes. However, in a frictionless business, humans and machines work in symbiosis through the new and emerging technologies that have enabled the rise of the platform economy. As we shall see, artificial intelligence (AI) plays a central role by augmenting, not just by replacing, humans with machines.
AI at the heart of frictionless processes
Before a task can be automated, the process it is part of needs to be completely redesigned. This is the first form of order that must be put into place, so that when exceptions occur, they can be managed by humans without friction. AI can orchestrate machines and humans by predicting, upstream, the operations in a process that create exceptions. These are then assigned to a human operator who handles them correctly. In this way, AI becomes part of the actual architecture that keeps the "human in the loop" to perfect the process managed by machines. In essence, processes are managed by machines and augmented by humans. This intelligent architecture is another form of order established in the process.

AI acts as an invisible hand governing operations between humans and machines to improve the overall efficiency of the process. There are other ways for AI to intervene in the architecture of a process in order to limit friction. One of them is to embed analytics across a process. By leveraging the data manipulated and generated by machines and humans at each node of a process, each operator can access essential insights into operations executed by other operators, regardless of whether those operators are machines or humans. A level of coherence then emerges between the operations executed within the same process.

Collective intelligence and the Frictionless Enterprise
In nature, coherence is one of the main features of effective and frictionless collective decision-making processes. It is used, for instance, by living beings to avoid certain threats. This collective intelligence provides the capacity for adaptation and resilience to change. As an example, look at the spectacular way in which a flock of birds travels and behaves collectively in a seemingly well-choreographed ballet. When a bird first sees a threat, it changes its direction of flight, and this information spreads across the flock. As a result, all of the other birds change their direction of flight accordingly. For the flock to move coherently in this way, each bird gathers insights from its nearest neighbors and follows their average direction. Step by step, all of the birds follow a coherent trajectory. This is an example of analytics inherently embedded in each bird's brain that enables the flock to be resilient to external threat and change. (A minimal numerical sketch of this neighbor-averaging rule appears at the end of this article.)

In business, an enterprise is composed of processes, each of which is made up of a combination of humans and machines. To make a process more resilient to change and limit the evolution towards disorder, each operator must be fed with insights from the data generated and processed by all operators in the process. In our example, each bird is analogous to an operator working in a given node of the process. AI creates coherence between all the nodes of a process. This notion of collective intelligence is fundamental for understanding the concept of the Frictionless Enterprise.

AI as the guardian of order
To build a frictionless business, having access to data and digital tools is not enough. Neither is replacing humans with machines by leveraging AI. The processes that organize an enterprise must be completely reimagined to implement intelligent automation and create a digitally augmented workforce at scale. AI is no longer only involved in the execution of tasks, but also in the orchestration of operations between machines and humans. By acting like an invisible hand, it makes processes smarter and operators act more coherently. In short, AI acts as the guardian of order at different levels of the process – providing resilience to the changes that are the sources of friction impacting the effectiveness of the process.

Taoufik Amri is a former quantum physicist. He graduated from the Ecole Normale Supérieure in France with a Ph.D. in quantum and statistical physics. As the principal data scientist for Capgemini's Business Services, Taoufik advises Capgemini's clients on implementing ready-to-use deep tech and intelligent automation solutions to dramatically accelerate the optimization of our clients' business operations.
Taoufik also identifies tasks that can be performed better and/or faster with AI, measures the value added by AI with advanced quantitative business process models, and designs human-in-the-loop solutions.
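As a minimal numerical sketch of the neighbor-averaging rule described above (each agent adopting the mean heading of its nearest neighbors), consider the toy model below. It is an illustration of the coherence idea only, not anything from Capgemini's practice; the agent count and neighborhood radius are arbitrary, and positions are held fixed for simplicity.

    import math
    import random

    def average_heading(headings):
        # Circular mean of angles, so headings near -pi and pi average correctly.
        x = sum(math.cos(h) for h in headings)
        y = sum(math.sin(h) for h in headings)
        return math.atan2(y, x)

    random.seed(1)
    agents = [{"pos": (random.random(), random.random()),
               "heading": random.uniform(-math.pi, math.pi)} for _ in range(30)]
    RADIUS = 0.3  # arbitrary neighborhood size

    for _ in range(20):
        # Each agent adopts the average heading of all agents within RADIUS.
        new_headings = [average_heading([b["heading"] for b in agents
                                         if math.dist(a["pos"], b["pos"]) < RADIUS])
                        for a in agents]
        for agent, h in zip(agents, new_headings):
            agent["heading"] = h

    # Coherence: length of the mean heading vector, approaching 1.0 as agents align.
    cx = sum(math.cos(a["heading"]) for a in agents) / len(agents)
    cy = sum(math.sin(a["heading"]) for a in agents) / len(agents)
    print(f"coherence: {math.hypot(cx, cy):.2f}")

After a few iterations the headings converge and the "flock" moves as one, which is the local-insight-to-global-coherence effect the article attributes to analytics embedded at every node of a process.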
<urn:uuid:885d6c06-b933-4894-825c-cc9a695cc209>
CC-MAIN-2022-40
https://www.capgemini.com/au-en/the-invisible-hand-of-ai-collective-intelligence-underlying-the-frictionless-enterprise/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00425.warc.gz
en
0.942067
1,336
2.84375
3
Retrofitting a data center is about managing limitations and trade-offs. Decision-makers have to consider physical limits (such as the weight a floor will support and how much cooling equipment can fit into an existing space). Then there's infrastructure to think about: it would be difficult to swap out an old uninterruptible power supply (UPS) for a brand-new one. Such restrictions have an impact on energy efficiency too: existing UPS units generally operate at 85 percent efficiency, whereas the newest ones are in the range of 97.5 percent. To reach the highest efficiency numbers, you'd need to change your entire data center architecture, which is impractical for most companies.

Retrofitting a data center to make it more energy efficient has its restrictions, but doing so can be less costly than rebuilding an entire facility. To weigh the variables and achieve energy cost savings, you need to know what's broken. Here are five tips for determining the efficiency of your data center and how to make it as green as can be.

1. Get to know your data center.
An energy efficiency assessment from someone who specializes in data centers should be a priority, says Neil Rasmussen, CTO of American Power Conversion (APC), a provider of data center power and cooling equipment. IBM, EYP Mission Critical, Syska Hennessy, APC and Hewlett-Packard offer such services. HP recently added Thermal Zone Mapping to its assessment offering. This service uses heat sensors and mapping analysis software to pinpoint problem areas in the data center and helps you adjust things as needed, says Brian Brouillette, vice president of HP Mission Critical Network and Education Services. For example, the analysis looks at the organization of equipment racks, how densely the equipment is populated, and the flow of hot and cold air through different areas of the space. It's important to place air-conditioning vents properly so that cool airflow keeps equipment running properly without wasting energy, says Brouillette.

2. Manage the AC: not too cold, not too hot, but just right.
Energy efficiency often starts with the cooling system. "Air conditioners are the most power hungry things in the data center, apart from the IT equipment itself," says Rasmussen. If your data center is running at 30 percent efficiency, that means for every watt going into the servers, two are being wasted on the power and cooling systems, he says. (A short sketch of this efficiency arithmetic appears at the end of this article.)

To reduce wasted energy, one of the simplest and most important things you can do is turn on the AC economizers, which act as temperature sensors in the data center. According to Rasmussen, 80 percent of economizers are not used, just as IT administrators often turn off the power management features in PCs. It's also important to monitor the effects of multiple air-conditioning systems attached to a data center; sometimes, Rasmussen says, two AC systems can be "out of calibration," one sensing that humidity is too high and the other sensing it's too low; their competition, like a game of cooling tennis, can waste energy.

Richard Siedzick, director of computer and telecommunications services at Bryant University, uses such features in his data center. "If the temperature rises to a certain level, the AC in that rack will ramp up, and when it decreases, it will ramp down." The result is a data center climate that few are used to. Instead of being met with an arctic blast at the door, Siedzick says people have told him his data center is too warm.
That's not actually the case: AC economizers help cooling stay where it is needed, rather than where it is not. And that means increased efficiency and monetary savings. "We estimate we've seen a 30 percent reduction in energy [in part, due to more efficient cooling] and that translates into $20,000," Siedzick says, adding that other precision controls, such as humidity sensors, are used in the data center as well.

3. Place equipment in the right spot.
Most data center floors are raised and tiled. Tiles should be located near the air inlets of IT equipment, not near the exhaust. Since the exhaust areas (where the air is coming out) run hotter than the inlets, making sure the vented tiles are located in the right place makes the AC units run more efficiently. Also, make sure you have the right number of vented tiles in your data center: with too many or too few, efficiency goes down.

4. Mind the gaps. Eliminate nooks and crannies.
Many racks in data centers contain gaps, either as a result of extra space or of equipment that has been removed. Whatever the reason, gaps make airflow unnatural, and that's bad for efficiency. "The exhaust air can go back through the intakes of the equipment, which makes you have to run the AC colder," says Rasmussen. The answer: blanking panels. Installing these panels on server rack cabinets makes the airflow in a data center more efficient. Many people forget to install blanking panels, even though server manuals from OEMs mandate their use. But Rasmussen says they are inexpensive (sold 100 to the box, in some cases) and easy to install.

5. Can it get hotter in here? Check.
Once you've done everything listed above, check to see if you can run the air-conditioning at a higher temperature. Rasmussen says that most units are set at around 55 degrees and some get as low as 45 degrees. The lower the temperature, the less efficient your data center is. "You should run that AC as hot as you can without the servers overheating," Rasmussen says. He says 68 degrees is a good target, but unless you are operating a brand-new data center with a top-notch design, you are unlikely to hit such a number. If you follow the rules above, Rasmussen says, it's likely you can increase the temperature to 55 or 60 in a retrofitted facility.
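As a back-of-the-envelope illustration of the efficiency figure quoted above (roughly 30 percent efficiency, meaning about two watts wasted for every watt of IT load), here is a small sketch. The wattage value is a made-up example; the formula simply treats efficiency as IT power divided by total facility power.

    def facility_efficiency(it_watts, total_watts):
        # Fraction of facility power that actually reaches the IT equipment.
        return it_watts / total_watts

    it_load = 100_000.0     # hypothetical 100 kW of server load
    overhead = 2 * it_load  # "two watts wasted for every watt into the servers"
    eff = facility_efficiency(it_load, it_load + overhead)
    print(f"Efficiency: {eff:.0%}")                  # 33%, roughly the 30% cited
    print(f"Wasted power: {overhead / 1000:.0f} kW") # 200 kW on power and cooling

Cutting the cooling overhead in half in this model would lift efficiency to 50 percent, which is why the article's five tips focus almost entirely on the cooling system.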
<urn:uuid:706ce140-a291-4abd-87ad-cb95aae6af92>
CC-MAIN-2022-40
https://www.cio.com/article/274751/energy-efficiency-five-ways-to-find-data-center-energy-savings.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00425.warc.gz
en
0.923503
1,396
2.609375
3
In light of a never-ending tsunami of data breaches and leaks, consumers, businesses, and developers are taking a second look at how data is collected, used, stored, and secured. They're not the only ones. Governments are acting to take matters into their own hands by placing the tech community under tighter scrutiny and passing regulations meant to protect citizens' right to privacy and control over their information.

Since its passage, there's been a lot of talk about the EU's new data retention requirements, including controversy about the huge fines for non-compliance with the General Data Protection Regulation (GDPR). Although the GDPR was constructed and deployed to protect European citizens, its reach extends globally due to the borderless nature of the Internet. That means developers from America to Australia should be (and are, according to this survey) concerned about incorporating security into the design process in a way that ultimately supports privacy protection after an app or web platform is deployed.

Privacy, Security, and Web Development
It's difficult to guard privacy without also strengthening security, which is leading developers and website administrators to redefine how they approach both. One of the tenets of the GDPR is the concept of privacy by design, which is spelled out in Article 25 of the regulation: "... data protection by design; data controllers must put technical and organizational measures such as pseudonymization in place — to minimize personal data processing." Building compliant systems means that new functionality needs to be added to deliver data pseudonymization, encryption and other privacy-enhancing measures. (A short sketch of pseudonymization appears at the end of this article.)

The Internet is Poorly Designed for Security
With more websites being deployed and managed on cloud platforms, attack vectors are proliferating. Many of the data breaches that have made the news in recent years can be directly tied to flaws or weaknesses in design rather than end-user apathy or neglect. This places additional pressure on developers and security experts to get it right before a platform is launched rather than patch up the problem later.

Security is essential at every stage of web development, and standard security protocols should be baked into the core framework of app design and testing. This can be managed by following the foundational principles of security through design and implementation:
- Operating within the legal guidelines and being accountable
- Knowing and understanding the regulations regarding privacy and security
- Considering the ethical elements of design and system development
- Communicating with users throughout the design and deployment process to ensure that data privacy concerns and best practices are addressed
- Implementing measures for data security, retention, and retirement to prevent leaks and breaches
- Developing and documenting guidelines, policies, and procedures related to privacy protection and security
- Developing standards and methods within your organization for applying these concepts and procedures
- Monitoring, evaluating, and restructuring guidelines and procedures as needed

Where Does Zero Trust Fit In?
Zero trust is a security model in which access to a network from inside or outside is never automatically granted; instead, anything and everything must be verified and authenticated each time it uses the system.
It's essentially a "trust no one" approach that is the polar opposite of traditional security measures like a firewall or virtual private network (VPN), both of which categorize you as "okay" for eternity once you pass the initial verification process.

This isn't to say that those just jumping on the VPN bandwagon for the first time are advised to jump back off. For either individuals or business networks, the encryption and IP-cloaking features of most leading VPN service providers boost online privacy and security by essentially creating a private tunnel through which data flows between your device and the Internet. It's a great way to hide from hackers, especially if you have remote workers who need to securely access the company network. But businesses with an online presence should be working towards zero trust as a foundational strategy. This is a security protocol that makes no assumptions; in other words, it assumes zero trust. It operates on a threat model in which any users, services, or systems interacting within the security perimeter are inherently untrustworthy and must constantly verify their authenticity before being granted access to any part of the system.

The GDPR doesn't necessarily have the same compliance guidelines and remedies as other recent regulations, like California's Consumer Privacy Act, AB-375. But they do share several common denominators that developers can use to create a comprehensive approach to data privacy and security. This common ground includes information regarding compliance that companies must know and convey to users, employees, partners, and anyone else whose data is collected, such as:
- What data is collected and stored
- Where in the database or network it's stored
- How the information is accessed and by whom
- Whether that data is sold, shared, or processed outside of the immediate storage perimeter
- How the data is secured within the storage perimeter

Implementing a zero trust architecture makes up for any lack of visibility by allowing discovery of the data flow at every access point within and across networks and platforms, since all communications must be verified across every channel. The point of zero trust is to eliminate the concept of trust by making it irrelevant. By adopting a zero trust posture, companies are able to automatically discover and inventory all assets, including applications and databases, and incorporate asset management into their security plan. They're also able to lock down these assets through standards like least-privilege access. This has the effect of reducing the attack surface, provides accountability and transparency, and shows that developers and their clients are taking data privacy and security seriously. Providing this type of proof is one of the requirements for GDPR compliance.

In addition, zero trust:
- Helps prevent data breaches
- Frees up IT security specialists and management to focus on other areas of growth
- Enables eCommerce and digital business platforms to meet full GDPR compliance
- Establishes and nourishes consumer and employee trust in businesses and software developers
- Maintains professional integrity and reputation

Robust design is about more than functionality and UX. A main feature of both is how securely a website or app is constructed, and how far it goes toward protecting website owners from liability and users from malicious activity. You can't separate privacy and security, and these regulations aren't going away anytime soon.
In fact, they're likely to strengthen and proliferate as more governments get into the act. Implementing a zero trust security architecture at the initial design stage, hardening it through discovery during testing, and continuously monitoring it after deployment will help ensure that security that protects privacy is built into the process.
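As a small illustration of the pseudonymization measure mentioned under Article 25 above, the sketch below replaces a direct identifier with a keyed HMAC digest, so records can still be joined on the same pseudonym without exposing the raw value. This is a generic Python example, not a GDPR-certified scheme; real key management belongs in a secrets vault, and the sample email is made up.

    import hmac
    import hashlib

    SECRET_KEY = b"rotate-me-and-store-me-in-a-vault"  # placeholder; never hard-code in production

    def pseudonymize(identifier):
        # Keyed hash: deterministic for joins, infeasible to reverse without the key.
        digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
        return digest.hexdigest()

    record = {"email": "user@example.com", "plan": "premium"}
    record["email"] = pseudonymize(record["email"])
    print(record)  # the stored record no longer contains the raw email address

Because the mapping is keyed rather than a bare hash, an attacker who steals the database cannot simply hash candidate emails to reverse the pseudonyms, which is the property that makes this a privacy-enhancing measure rather than mere obfuscation.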
<urn:uuid:8fecfdbf-262b-4af0-873a-c7b426e7b150>
CC-MAIN-2022-40
https://blogs.blackberry.com/en/2019/09/zero-trust-security-makes-gdpr-compliance-easier
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00425.warc.gz
en
0.948619
1,345
2.890625
3
The subject of the Internet of Things (IoT) is intriguing to me, both from a technical and a personal point of view. IoT, to me, is where all current cloud initiatives and network virtualization innovations come together, complemented with new technologies. Today, we are realizing highly scalable cloud platforms that are secure and easy to manage as they serve our workloads and store data. Building these scalable cloud platforms and seeing traction on cloud native applications is another push to support IoT innovations. It is fun to see that we are moving towards adopting cloud technologies, but are already on the verge of entering new technology rapids.

Innovation is moving at a scarily fast pace. – Bill Gates

Looking at IoT and its applications, one will quickly notice that the possibilities are immense! The future will be data driven. All our devices, be it a mobile phone, smartwatch, car, plane or refrigerator, will gather data that will lead to more insights on how we as human beings operate and consume. That will lead to services being specifically tailored per individual. And what about enterprises and industrial components? Imagine what can be done in every industry vertical. We can only try to understand the opportunities.

Data in Edge
We may be somewhat dismissive about every device being connected to the 'internet', but IoT is already happening. Think about ordering sushi or fast food in selected restaurants where the order is entered using a tablet or similar device. Or what about a hospital where patient data is gathered by sensors and is accessible on mobile devices for the nurses. Think about a car that is collecting data about the driver and his driving characteristics, or the data that is generated by the vehicle itself. Each sensor in a device will collect data; we are talking about a lot of data mining here. Think gigabytes of data from a single car drive, or terabytes per plane per flight.

The future includes Brontobytes and Geobytes. – Michael Dell

That raises the question: is all this data consumable? How are we going to collect all that data? How do we process, analyze and store that data? At first look, you may think that all the data is ingested in a central cloud environment. But that is not efficient. Because we are talking about a prodigious amount of data, it only seems logical to process, analyze and even compress the data locally before sending the important catches of data over to a centralized platform. Local data processing will require local compute and storage resources, also referred to as edge computing. So even though we have cloud computing as a tool to execute data processing, it seems inevitable that data will use local compute and storage power on edge devices. It will be interesting to track the application and development of efficient edge compute solutions (like nano-servers, further usage of Field-Programmable Gate Array (FPGA) chipsets, etc.) in the near future.

Moving to edge computing is interesting because today we are in the process of centralizing IT in cloud solutions, while IoT innovations will lead to decentralized systems, although accompanied by central systems. They will supplement each other. A very important factor for IoT and edge devices will be the telecom providers, as they will provide 5G services. The rollout of 5G is a key driver for IoT, as it allows for very low and consistent latency while increasing the bandwidth that is required to support connectivity to the edge devices.
This will be a necessity, as we will be transferring a lot of data from a massive number of IoT devices to a centralized cloud platform. The ability to create highly scalable and performant 5G platforms will depend strongly on Network Functions Virtualization (NFV). Telco operators are working hard on moving from a monolithic telco workload landscape to fully virtualized platforms. VMware helps drive NFV adoption with vCloud for NFV, which includes vSphere, NSX and related components. It is very important to squeeze every bit out of the hardware and the ESXi / VM layer to realize consolidation ratios and lower costs while building a consistent, performant foundation for NFV. That was one of the key reasons for us (Frank and myself) to write the Host Deep Dive book.

In the process of working towards NFV and IoT, are we finally forced to adopt IPv6? I would say 'yes'! IPv4 will simply not cut it anymore, as we all said well before 2010. We have enough challenges to deal with, but we can already see new solutions being developed to deal with the security and manageability of IoT and edge components. Cloud (native) platforms are gaining traction, and so is NFV. I am still trying to wrap my head around the endless possibilities and trying to understand the impact of the IoT and edge wave, powered by powerful NFV and cloud platforms. Very exciting times ahead! We are only scratching the surface of what can be done with IT.
<urn:uuid:612e3f5b-85ea-4750-b54f-5365ac34519d>
CC-MAIN-2022-40
https://nielshagoort.com/2017/08/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00425.warc.gz
en
0.946369
996
2.515625
3
Living On The Edge
What will living on the edge be like? As in, living in the world of edge computing? The short answer is it will be much of the same. Just better.

First of all, the market comes up with these terminologies that continue to confuse the industry. Edge computing is nothing other than distributed computing whereby you're bringing computing closer to the user. You do this so the user is able to access information quicker in an edge computing environment than before. This is because we live in a world of milliseconds. The architecture that existed before, and that is still in practice today, delivers between 15 and 30 milliseconds of response time. That's granted you've got fast enough internet speed, but even those requirements are shifting rapidly.

We've got more connected devices than we did just a year ago. We've got applications that reside on these connected devices. The more pressure and congestion there is on these networks, the more edge computing will be necessary. Think about it. We are looking at autonomous vehicles. We're looking at the Internet of Things (IoT). We are looking at machine learning. We are looking at robotic process automation. We're looking at capturing information from anything in our life that has a connected data point. An Apple Watch is a perfect example, or a Fitbit is another. They are giving us data that we didn't have before, and data that we actually didn't even think about capturing before. So what does that mean? There's an exponentially growing amount of data, and moving that data through our current infrastructure at the speed that's required is not possible. Hence, the importance of edge computing.

What is edge computing? It's simply locating mission-critical data centers closer to the geographical locations that need them. Through this, you can get to data and share data quickly, closer to the user, because that's where all the action is happening. It's distributed computing. In many ways, edge computing is already occurring. It also won't change our current data center structures. Data will continue to feed back into the massive data centers and the co-location facilities that exist now. Edge computing will allow enhanced application performance in targeted areas. It will augment rather than replace.

These augmenting edge data centers could be private or public cloud, they could be mini data centers located within cities, or they could be full-scale data centers. It could be anything we wanted, and it is a derivative of how much computing capacity is required to process the information of that local geography. So, it could be as simple as targeted additions, or data performance boosts, according to zip code or a city or a county. As you can imagine, edge computing in San Francisco is going to be a lot different than edge computing in Lake Tahoe. Where these additional edge computing investments will be put in place will be driven by population bases, numbers of devices, and the sophistication of use within certain areas. Clearly, farming towns will have less need for edge computing than manufacturing locations with high-level Internet of Things applications.

Regardless of all of the enthusiasm about the potential of edge computing, and its increasing need in our rapidly expanding technological environment, it's important again to be clear about what it is. Edge computing is distributed computing. It's essentially targeted data centers.
And, although it should be considered in all of your planning, it won't be replacing your traditional data center infrastructure. Yet, one thing is certain. More and more, when it comes to edge computing, we will all be living on the edge.
What is a Trojan Horse? (Trojan Malware)

A Trojan Horse (Trojan) is a type of malware that disguises itself as legitimate code or software. Once inside the network, attackers are able to carry out any action that a legitimate user could perform, such as exporting files, modifying data, deleting files or otherwise altering the contents of the device. Trojans may be packaged in downloads for games, tools, apps or even software patches. Many Trojan attacks also leverage social engineering tactics, as well as spoofing and phishing, to prompt the desired action in the user.

Trojan: Virus or Malware?

A Trojan is sometimes called a Trojan virus or Trojan horse virus, but those terms are technically incorrect. Unlike a virus or worm, Trojan malware cannot replicate itself or self-execute; it requires specific and deliberate action from the user. Trojans are malware, and like most forms of malware, Trojans are designed to damage files, redirect internet traffic, monitor the user's activity, steal sensitive data or set up backdoor access points to the system. Trojans may delete, block, modify, leak or copy data, which can then be sold back to the user for ransom or on the dark web.

10 Types of Trojan Malware

Trojans are a very common and versatile attack vehicle for cybercriminals. Here we explore 10 examples of Trojans and how they work:

- Exploit Trojan: As the name implies, these Trojans identify and exploit vulnerabilities within software applications in order to gain access to the system.
- Downloader Trojan: This type of malware typically targets infected devices and installs a new version of a malicious program onto the device.
- Ransom Trojan: Like general ransomware, this Trojan malware extorts users in order to restore an infected device and its contents.
- Backdoor Trojan: The attacker uses the malware to set up access points to the network.
- Distributed Denial of Service (DDoS) attack Trojan: Backdoor Trojans can be deployed to multiple devices in order to create a botnet, or zombie network, that can then be used to carry out a DDoS attack. In this type of attack, infected devices can access wireless routers, which can then be used to redirect traffic or flood a network.
- Fake AV Trojan: Disguised as antivirus software, this Trojan is actually ransomware that requires users to pay fees to detect or remove threats. Like the software itself, the issues this program claims to have found are usually fake.
- Rootkit Trojan: This program attempts to hide or obscure an object on the infected computer or device in order to extend the amount of time the program can run undetected on an infected system.
- SMS Trojan: A mobile device attack, this Trojan malware can send and intercept text messages. It can also be used to generate revenue by sending SMS messages to premium-rate numbers.
- Banking Trojan or Trojan Banker: This type of Trojan specifically targets financial accounts. It is designed to steal data related to bank accounts, credit or debit cards or other electronic payment platforms.
- Trojan GameThief: This program specifically targets online gamers and attempts to access their gaming account credentials.
Examples of Trojan Malware

Malware programs like Trojans are always evolving, and one way to prevent breaches or minimize damage is to take a comprehensive look at past Trojan attacks. Here are a few examples:

- NIGHT SPIDER's Zloader: Zloader masqueraded as legitimate programs such as Zoom, Atera, NetSupport, Brave Browser, JavaPlugin and TeamViewer installers, but the programs were also packaged with malicious scripts and payloads to perform automated reconnaissance and download the Trojan. The threat actor's attempts to avoid detection caught the attention of threat hunters at CrowdStrike, who were able to quickly piece together the evidence of a campaign in progress.
- QakBot: QakBot is an eCrime banking Trojan that can spread laterally throughout a network utilizing worm-like functionality through brute-forcing network shares and Active Directory user group accounts, or via server message block (SMB) exploitation. Despite QakBot's anti-analysis and evasive capabilities, the CrowdStrike Falcon platform prevents this malware from completing its execution chain when it detects the VBScript execution.
- Andromeda: Andromeda is a modular Trojan that was used primarily as a downloader to deliver additional malware payloads, including banking Trojans. It is often bundled and sold with plugins that extend its functionality, including a rootkit, HTML formgrabber, keylogger and a SOCKS proxy. CrowdStrike used PowerShell via the Real Time Response platform to remove the malware without having to escalate and have the drive formatted, all while not impacting the user's operations at any point.

How do Trojans Infect Devices?

Trojans are one of the most common threats on the internet, affecting businesses and individuals alike. While many attacks focused on Windows or PC users in the past, a surge in Mac users has increased macOS attacks, making Apple loyalists susceptible to this security risk. In addition, mobile devices, such as phones and tablets, are also vulnerable to Trojans.

Some of the most common ways for devices to become infected with Trojans can be linked to user behavior, such as:

- Downloading pirated media, including music, video games, movies, books, software or paid content
- Downloading any unsolicited material, such as attachments, photos or documents, even from familiar sources
- Accepting or allowing a pop-up notification without reading the message or understanding the content
- Failing to read the user agreement when downloading legitimate applications or software
- Failing to stay current with updates and patches for browsers, the OS, applications and software

While most people associate Trojan attacks with desktop or laptop computers, they can be used to target mobile devices, such as smartphones, tablets or any other device that connects to the internet. Like a traditional malware attack, mobile Trojan attacks are disguised as legitimate programs, usually as an app or other commonly downloaded item. Many of these files originate from unofficial, pirated app marketplaces and are designed to steal data and files from the device.

How to Prevent Trojan Horse Attacks

For everyday users, the best way to protect against Trojan attacks is by practicing responsible online behavior, as well as implementing some basic preventive measures. Best practices for responsible online behavior include:

- Never click unsolicited links or download unexpected attachments.
- Use strong, unique passwords for all online accounts, as well as devices.
- Only access URLs that begin with HTTPS.
- Log into your account through a new browser tab or official app, not a link from an email or text.
- Use a password manager, which will automatically enter a saved password into a recognized site (but not a spoofed site).
- Use a spam filter to prevent a majority of spoofed emails from reaching your inbox.
- Enable two-factor authentication whenever possible, which makes it far more difficult for attackers to succeed.
- Ensure updates for software programs and the OS are completed immediately.
- Back up files regularly to help restore the computer in the event of an attack.

In addition, consumers should take steps to protect their devices against all types of malware attacks. This means investing in cybersecurity software, which can detect many threats or block them from infecting the device.

How to Respond to a Trojan Malware Attack

The growing sophistication of digital adversaries makes it increasingly difficult for users to properly resolve Trojan attacks on their own. Ideally, if a person suspects that their system has been infected by a Trojan or other type of malware attack, they should contact a reputable cybersecurity professional immediately to help rectify the situation and put proper measures in place to prevent similar attacks from occurring in the future. At a minimum, consumers should download an antivirus program and malware removal service from a reputable provider.

For enterprise clients, it is important to work with a trusted cybersecurity partner to assess the nature of the attack and its scope. As discussed above, many traditional antivirus and malware removal programs will not adequately remediate existing threats or prevent future events.

CrowdStrike Solution to Trojan Malware

For enterprise organizations, protection against Trojans is especially important, as a breach on one computer can lead to the entire network being compromised. Organizations must adopt an integrated combination of methods to prevent and detect all types of malware, including spyware. These methods include machine learning and exploit blocking. Here we review these capabilities within the context of CrowdStrike Falcon®, the market's leading cloud-native security platform.

- Machine Learning: Falcon uses machine learning to block malware without using signatures. Instead, it relies on mathematical algorithms to analyze files and can protect the host even when it is not connected to the internet.
- Exploit Blocking: Malware does not always come in the form of a file that can be analyzed by machine learning. Some types of malware may be deployed directly into memory through the use of exploit kits. To defend against these, Falcon provides an exploit blocking function that adds another layer of protection.

CrowdStrike Falcon combines these methods with innovative technologies that run in the cloud for faster, more up-to-the-minute defenses. To learn more, contact our organization to schedule a demo or enroll in a trial.
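Several items in the prevention checklist above, such as preferring HTTPS and only trusting recognized sites, can be partially automated. The sketch below is a toy illustration of that idea, not a CrowdStrike tool; the allowlist and example URLs are invented for demonstration:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the user already trusts. A password manager
# effectively maintains a list like this for credential autofill.
TRUSTED_HOSTS = {"www.mybank.example", "mail.example.com"}

def link_warnings(url: str) -> list[str]:
    """Return human-readable warnings for a link before it is clicked."""
    parsed = urlparse(url)
    warnings = []
    if parsed.scheme != "https":
        warnings.append("not HTTPS: traffic could be read or altered in transit")
    if parsed.hostname not in TRUSTED_HOSTS:
        warnings.append(f"unrecognized host {parsed.hostname!r}: possible spoof")
    return warnings

for url in ("https://www.mybank.example/login",
            "http://www.mybank-example.top/login"):
    print(url, "->", link_warnings(url) or "looks familiar")
```

The second URL trips both checks, which is exactly the pattern a lookalike smishing or phishing link tends to show.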
Introduction to Terraform

Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It is today one of the most popular tools for automating the deployment of your infrastructure on almost all major public cloud providers, i.e. AWS, Azure and GCP, as well as private clouds.

To begin using Terraform on your laptop, you will first need to set up Terraform on your system. Terraform runs on almost all the major operating systems currently available, i.e. macOS, Linux, Windows, etc. In this post we will learn the step-by-step procedure to get Terraform up and running on your Windows 10 system.

How to Install Terraform?

Step 1: Download the Terraform .exe file from https://www.terraform.io/downloads.html. Ensure you download the correct version based on your system configuration, i.e. 32-bit or 64-bit. The latest available version of Terraform at the time of writing is v0.15.0. If you have existing code in your environment that you want to reuse, it is recommended to check that you are using the same Terraform version, or a version that ensures backward compatibility.

Step 2: Once the download is complete, you will get a zipped file. Unzip the file to get the .exe file. Copy the .exe file to a folder in your local system directory, e.g. C:\Users\ipwithease\Terraform.

Step 3: Next, open your Start Menu and type in "environment"; the first thing that comes up should be the Edit the System Environment Variables option. Click on that and you should see a window. Click on Path under System Variables, then click Edit.

Step 4: On the next window, click New and add the path where you copied your .exe file, then click OK.

Now, to verify that the installation has been successful, go to the command line, change your working directory to the one where you copied the Terraform .exe file, and run "terraform --version".

As you can see in the above screenshot, at present we were using Terraform version 0.12.24, and we are getting a notification on the command line that a newer version is available. To move from 0.12.24, you just need to download the latest 0.15.0 version and replace the terraform.exe file at the path where you stored the older version's terraform.exe file.

To get the list of commands that Terraform supports, type terraform and hit Enter.

Terraform relies on plug-ins called providers, which contain groups of resources and arguments used to define the objects that need to be created. Terraform configurations must declare which providers they require so that Terraform can install and use them. To view all the publicly available Terraform providers, you can browse to the link below. The linked page is the main directory containing the providers that are published by HashiCorp or by different vendors whose products can integrate with Terraform, i.e. Cisco, Juniper, Azure, AWS and so on.

FAQs Related to Terraform:

Define Terraform provider.
- Terraform providers are essentially plugins that Terraform installs to interact with remote systems, i.e. Azure, AWS, Google Cloud, VMware and many other vendors' devices.

Name some major market competitors of Terraform.
- Some of the major competitors of Terraform are Kubernetes, Ansible, Packer and Cloud Foundry.

Why is Terraform used for DevOps?
- Because infrastructure as code is the foundation for DevOps, and Terraform manages infrastructure as code.

Can we use Terraform for an on-prem infrastructure?
- Yes, we can use Terraform for an on-prem infrastructure.

Name some of the built-in provisioners available in Terraform.
- Some of the built-in provisioners available in Terraform are the Chef, Salt-masterless, File, Remote-exec, Habitat, Puppet and Local-exec provisioners.

What is Terraform D?
- It is a plugin that is used on most in-service systems and Windows.
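If you want to script the verification step described above rather than running it by hand, you could wrap the version check in a small program. The sketch below is a minimal, illustrative example; the expected output format ("Terraform vX.Y.Z" on the first line) is based on recent Terraform releases and may vary:

```python
import re
import shutil
import subprocess

def check_terraform(min_version=(0, 15, 0)):
    """Verify that terraform is on PATH and meets a minimum version."""
    if shutil.which("terraform") is None:
        raise RuntimeError("terraform not found on PATH; recheck the environment variable setup")

    # `terraform version` typically prints a line like "Terraform v0.15.0".
    output = subprocess.run(
        ["terraform", "version"], capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"Terraform v(\d+)\.(\d+)\.(\d+)", output)
    if not match:
        raise RuntimeError(f"could not parse version from: {output!r}")

    version = tuple(int(part) for part in match.groups())
    if version < min_version:
        print(f"Terraform {version} is older than {min_version}; consider upgrading.")
    else:
        print(f"Terraform {version} OK.")

if __name__ == "__main__":
    check_terraform()
```

A check like this is handy in CI pipelines, where a mismatched Terraform version is a common cause of plan/apply failures.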
Interconnected networks sharing data and information across the nation or around the world are the foundation for global commerce, infrastructure management, law enforcement, and much more. These networks are also prime targets for malware, ransomware, and hackers, working alone or as part of larger efforts to sow chaos and disruption. It's the reason why network security is front and center for federal agencies and the organizations with which they partner.

Any vendor or organization that does business with federal agencies is familiar with Security Technical Implementation Guides (STIGs). STIGs are published by the Defense Information Systems Agency (DISA) to define the cybersecurity standards required for a particular device deployed on a federal agency network. Multiple STIGs exist for different network devices, and as new devices become available, so do new STIGs. Securing infrastructure in a federal agency environment is not optional; complying with STIGs is mandatory for all Department of Defense agencies and the contractors that work with them.

While the intent is to create the most secure network environment possible, change is still a constant in the world of IT. New devices are added to networks all the time, and existing network devices routinely receive software updates and new features, changing their baseline configurations over time and requiring additional compliance measures to reduce cyber risk and exposure. Unfortunately, these changes can happen so often that they result in significant backlogs for the network teams managing STIG compliance. And if those teams are primarily making the required changes manually, human error and other configuration mistakes can occur. Network automation is a logical solution, but only if the team can ensure network security.

The current state of STIG compliance

No one questions the need to keep networks secure, especially those that support federal agencies and other government departments and entities. That's why STIGs were developed: to ensure all agencies and the vendors they work with follow the same cybersecurity standards, and that the devices deployed on their networks are continually updated to remain in compliance with those standards. The challenge is that STIGs are necessarily complex, as they encompass multiple devices, applications, and configurations. They're also dynamic, changing and evolving with the devices and networks they manage.

Baseline security configurations defined by STIGs cover a wide number of devices. But network devices can also have their own set of standards above and beyond the general baseline configuration. For example, a required configuration for an operational standard could stipulate that "core and edge routers in region X must have service configurations for NTP, Syslog, and DNS set to servers in the same region." This would apply to a small number of devices. But over time, this standard could change and evolve as the network grows, perhaps eventually defining that "core routers must use a set of service hosts that are different from edge routers in the same region."

Then there's the service configuration itself, which defines the commands that provision ports, VLANs, routes, ACLs, and any other features needed to provide access to an application or service. These types of configurations can change daily in some network domains. Plus, when a device is initially deployed on a network, these baseline security configurations are not typically enabled by default.
Rather, they have to be enabled by the network teams. Additionally, new applications and services require new configurations, and when something is no longer needed, those same devices need to be updated to remove older configurations. In other words, over time, what you start with isn't necessarily what you end with.

All this leads to the question of the day: how are network teams ensuring configuration takes place? Unfortunately, many network teams are relying on manual processes to stay on top of all these moving parts. That's neither effective nor efficient. If network teams are primarily configuring devices and making ongoing changes and updates manually, it's an indication they're lacking modern tools for success.

Using automation to modernize and future-proof the network

Whether it's security baselines changing through updated STIGs, operational standards that evolve over time, or service configurations that change daily, it's important to recognize how fluid network device configurations have become. And, as the number of devices in the network has increased, it's important to reevaluate the existing set of tools that network engineers are working with and determine if they're equipped for success.

While the initial reaction may be to adopt some form of network automation, automation at the expense of network security is not the answer. Identifying and deploying an automation solution that can keep pace in a STIG-compliance environment means finding a modern way to easily manage hundreds or thousands of network configurations that will go through some amount of change over the lifetime of the device. Following are several key steps to this process.

● Commit to automation integration and make it a priority. Understand that STIG compliance is too complex and fluid to be left to time-consuming manual processes that can lead to bottlenecks and errors.
● Start at the beginning. What specific compliance tasks are creating bottlenecks and backlogs? Are those tasks repeatable across the device ecosystem, and can you replicate their solutions? What tools, whether open-source or from a vendor, are already available?
● Recognize that your network devices span physical, virtual, and cloud networks, and plan your automation and compliance processes with the entire network infrastructure in mind.
● Adopt an end-to-end perspective toward network automation that looks beyond automating tasks, overcomes existing operational silos, and integrates with your existing systems and technologies.
● Ensure your network's compliance engine works hand in hand with the automation solution to guarantee that every single device is always in compliance.
● The solution you implement should be one that the existing network team can easily adopt from day one, and flexible enough to provide the ability to create automations that extend across multiple network domains and integrate with other IT systems.
● Collaborate across teams to ensure seamless sharing of data and successful integration of the technology.
● Commit to giving your team members the time, training, and resources they need to implement and maintain the new automation solution.
● Focus on implementing an effective automation solution, but also step back and assess the integration process and change course when necessary.

That may sound daunting, and it requires a fair amount of time and legwork to find the right solution, but automation and STIG compliance can work together.
The result is a modern network future-proofed for an ever-changing compliance environment.
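To make the earlier operational-standard example concrete, here is a minimal, hypothetical sketch of an automated compliance check. The region-to-server mapping, device records, and function names are all invented for illustration; a real deployment would pull running configurations from the devices themselves and evaluate them against the applicable STIG rules.

```python
# Hypothetical rule: "core and edge routers in region X must use NTP and
# Syslog servers in the same region." All data below is made up.
EXPECTED_SERVERS = {
    "east": {"ntp": "10.1.0.10", "syslog": "10.1.0.20"},
    "west": {"ntp": "10.2.0.10", "syslog": "10.2.0.20"},
}

devices = [
    {"name": "core-rtr-1", "region": "east", "ntp": "10.1.0.10", "syslog": "10.1.0.20"},
    {"name": "edge-rtr-7", "region": "west", "ntp": "10.1.0.10", "syslog": "10.2.0.20"},
]

def audit(devices):
    """Return (device, service, found, expected) tuples for every violation."""
    violations = []
    for dev in devices:
        expected = EXPECTED_SERVERS[dev["region"]]
        for service in ("ntp", "syslog"):
            if dev[service] != expected[service]:
                violations.append((dev["name"], service, dev[service], expected[service]))
    return violations

for name, service, found, expected in audit(devices):
    print(f"{name}: {service} points at {found}, expected {expected} (out of compliance)")
```

Running a check like this on a schedule, rather than by hand, is the kind of repeatable task the steps above are meant to identify.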
Modern Day Fax Machines – The Evolution of Fax into a Mobile World

Wednesday, September 15, 2021

If you've ever had to fax your financial records to your accountant at tax time, then you've probably experienced the panicked feeling that can come from having to use an old-fashioned fax machine. Well, you're not alone. Many of us have had the experience of looking down at an important document and wondering if faxing is the best option for sending it. So, why does sending faxes inspire such abject terror in so many of us?

The fax machine doesn't exactly make faxing simple. In fact, nothing makes faxing more burdensome and inconvenient than the technology itself. Imagine this situation: You're standing there feeding your financial records into the document feeder. Since fax pages transmit separately, you're forced to load each page one by one. Your foot is tapping by the time you notice that the last page has jammed in the machine and, as a consequence, the fax transmission is automatically cancelled. It's. Time. To. Start. Over. Again. (Cue feelings of abject terror.)

In order to understand faxing, let's take a look at the origins of fax technology, the direction it's currently headed, and the brighter future ahead for cloud faxing.

The Origins of Faxing

Invented by Alexander Bain in 1842, the early fax machine worked by printing messages onto metallic surfaces. Initially a commercial failure, the fax machine proved to be a legal nightmare, as Bain was forced to repeatedly protect his patent against similar inventions. However, as governments and other organizations began to prize faxing in the early 20th century as a way to securely transmit documents, maps and other confidential data, the view of the fax machine changed considerably. Newspapers and the military could send and receive time-sensitive information faster than ever before, without the threat of interference from unwanted third parties. It was an extremely useful tool that organizations benefited from almost exclusively for the greater part of the 1900s.

The Era of the Fax Machine

It wasn't until the end of the 20th century that faxing gained widespread adoption. In fact, it wasn't until fax machines were manufactured for general use, and prices slashed, that sales among consumers and small business owners skyrocketed. By the early 1980s, there were more fax machines in homes and offices than in large organizations. By the end of the decade, several million fax machines were in use.

However, this widespread success wouldn't last. By the 2000s, the fax machine was on the decline, with fewer consumers purchasing it. The rise of multifunction devices made the old-fashioned fax machine appear archaic. Although faxing was still used in many corporate offices, it was no longer considered the 'be all and end all' technology that it once was. The internet now offered email as an alternative, and some opted for other document transfer services, like a same-day messenger.

Modern Day Faxing

By 2007, cloud computing had revolutionized information and communication technology. Within only a few years, faxing had all but shifted from the fax machine to the internet and, ultimately, to fax apps. Still, some users continue to depend on physical machines: public relations firms, advertisers, and small business owners rely on the fax machine to put paper directly in the hands of their clients, and small towns in rural areas with poor internet capability rely on their landlines to deliver important messages.
Despite many businesses leaving fax machines out of their modern office setups, the fax machine persists. With faxing still alive and well, many industries rely on it for its practicality. Online faxing has transformed the way faxes are sent and received, making it just as convenient as email. No longer are physical machines and paper required. In accordance with federal regulations, many cloud fax companies have made vast improvements to online faxing apps that can run on computers and even smartphones. Now, consumers can easily send their financial records digitally across a secure channel. Online portals help manage all cloud fax transmissions, and only those authorized to use the accounts have access. Furthermore, businesses can send invoices and other confidential company documents. It's a win-win for everyone.

Living in a Mobile World

The internet and mobile apps have changed the way consumers and professionals fax. Faxing can now be done anywhere, any time, with ease. It's no surprise, really, that faxing has ended up on your phone, right in your pocket, just like everything else.
1. A cloud is designed to optimize and manage workloads for efficiency. Therefore, repeatable and consistent workloads are most appropriate for the cloud.

2. A cloud is intended to implement automation and virtualization so that users can add and subtract services and capacity based on demand.

3. A cloud environment needs to be economically viable.

Why aren't traditional data centers private clouds? What if a data center adds some self-service and virtualization? Is that enough? Probably not. A typical data center is a complex environment. It is not uncommon for a single data center to support five or six different operating systems, five or six different languages, four or five different hardware platforms, and perhaps 20 or 30 applications of all sizes and shapes, plus an unending number of tools to support the management and maintenance of that environment. In Cloud Computing for Dummies, written by the team at Hurwitz & Associates, there is a considerable amount written about this issue.

Given an environment like this, it is almost impossible to achieve workload optimization. In addition, there are often line-of-business applications that are complicated, used by a few dozen employees, and necessary to run the business. There is simply no economic rationale for such applications to be moved to a cloud, public or private. The only alternative for such an application would be to outsource the application altogether.

So what does belong in the private cloud? Application and business services that are consistent workloads designed to be used on demand by developers, employees, or partners. Many companies are becoming IT providers to their own employees, partners, customers and suppliers. These services are predictable and designed as well-defined components that can be optimized for elasticity. They can be used in different situations, from a single business scenario supporting one customer to a scenario that requires the business to support a huge partner network. Typically, these services can be designed to run on a single operating system (typically Linux) that has been optimized to support these workloads, and many of the capabilities and tasks within this environment have been automated.

Could there be situations where an entire data center could be a private cloud? Sure, if an organization can plan well enough to limit the elements supported within the data center. I think this will happen with specialized companies that have the luxury of not supporting legacy. But for most organizations, reality is a lot messier.
Today, over 5 billion people globally send and receive text messages. That's an astonishing 65% of the world's population. While traditional SMS is for person-to-person communication, A2P (application-to-person) SMS is designed for machine-to-person communication. It allows industries such as tourism, healthcare, and banking to send automated notifications, alerts, and reminders to their customers. Automated flight updates are a typical example of A2P SMS; during the pandemic, COVID-19 contact tracing alerts became another, with people who had contact with someone infected with COVID-19 often first getting an A2P text message from their local health department.

SMS and A2P have become a critical way for businesses to engage with customers, and A2P is expected to grow at a rate of 26 percent over the next five years. While robocalls and consumer fraud have been plaguing voice services for years, SMS and A2P SMS have generally been considered well-established, safe, efficient, and cost-effective channels. In fact, around 98% of all text messages are immediately opened by the receiver. Sadly, these communication channels are starting to show some cracks. Their rise has also led to a rise in three types of SMS fraud which, if left unchecked, risk harming this critical communications channel:

- Smishing is a type of cybersecurity attack that uses SMS to steal the personal credentials of mobile users. Smishing is a term that combines "SMS" (short message service) and "phishing," which is when fraudsters send emails that contain malware. These fraudsters, or in this case 'smishers', simply use text messages instead of email. This type of fraud is particularly dangerous because people tend to trust text messages more than emails and will often click on harmful links or respond to fraudulent requests.

- Gray routing happens when A2P traffic is intentionally mixed with valid person-to-person (P2P) traffic with the intent of avoiding payment for A2P charges. This is a way for fraudsters to send A2P SMS disguised as standard SMS, which typically doesn't incur a charge, and it cuts into service providers' revenue streams.

- SMS spam is also on the rise, along with customer complaints about these unsolicited messages. While not always fraudulent, spam messages, like robocalls, can become annoying and can lead to subscribers missing or ignoring legitimate texts.

Fortunately, recent advancements in SMS firewalls can prevent fraudsters from smishing, spamming, and gray-routing SMS traffic. Here are three good reasons to act now, before it's too late:

- Protect the trust you've built: The presence of fraudsters and hackers can harm the very trust you have worked so hard to build with your enterprise and retail customers. If this trust gets eroded, consumers will simply stop answering their text messages for fear of being infected with malware, or worse.

- Protect your revenue streams: The growth of lucrative A2P SMS has just begun, and with millions of IoT and 5G connected 'things' coming online, your business customers will depend on it more than ever. According to Juniper Research, operator revenues from global A2P messaging traffic are expected to grow from $39.6 billion to $50 billion from 2020 to 2025.

- Protect your customers: Smishing can result in serious financial losses for your subscribers. It is often used to get individuals to reveal personal information, such as passwords or credit card numbers.
And these attacks are often successful because the messages appear to come from legitimate sources, such as banks and well-known retail brands.

SMS Firewalls to the Rescue

SMS firewalls address these threats and security breaches and can help to boost enterprise messaging revenue. Research from Juniper highlights machine learning's critical role in identifying and mitigating fraudulent SMS traffic in real time. With SMS firewalls in place, Juniper expects drastic reductions in revenue losses from illegitimate use of these channels, from $5.8 billion in 2020 to $1.2 billion by 2025.

Mobileum's SMS Firewall, which was recently recognized as a market leader by Juniper Research, automatically identifies known and unknown messaging security attacks. Using machine learning technologies, Mobileum's SMS Firewall reduces revenue leakage by controlling traffic from gray routes. It also detects SMS spamming and smishing by securely evaluating multiple parameters, including message content, to protect customers from malicious attacks.

Mobileum's SMS Firewall automatically detects and blocks fraudulent mobile-terminated SMS. It analyzes messages based on various criteria, including the use of malicious keywords. It also leverages the latest machine learning, analytics, and natural language processing (NLP) techniques to detect the presence of fraudulent or suspicious URLs, telephone numbers, emails, and other keywords within the messages. This automated process has the added advantage of ensuring complete data privacy with zero human intervention, while also providing an increased level of detection accuracy. Mobileum's SMS Firewall prevents security threats by blocking unsolicited messages from untrusted sources while distinguishing and allowing legitimate messages.

Mobileum also offers SMS bypass fraud detection and SMS testing, which enables CSPs to perform smart profiling and active testing to detect the presence of potential SMS bypass fraud in their network. Operators can run real-time SMS tests across different international networks, platforms, and aggregators to expose suspicious and fraudulent messages, or an SMS Center (SMSC) that is terminating illegal or unregulated traffic. Furthermore, operators can enhance their fraud coverage and accuracy with Mobileum's advanced analytics Fraud Management System and increase their SMS revenue by preventing content providers from bypassing SMS termination fees.

Want to learn more? Contact us, and we will put you in touch with one of our security or risk management experts.
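Commercial firewalls like the one described above rely on large-scale machine learning, but the underlying idea of scoring a message on suspicious features can be shown with a toy heuristic. The keywords, URL patterns, and weights below are invented for illustration and are nowhere near production grade:

```python
import re

# Toy feature lists; real systems learn these from labeled traffic.
SUSPICIOUS_KEYWORDS = {"verify", "suspended", "urgent", "prize", "refund"}
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)
SHORTENER_HINTS = ("bit.ly", "tinyurl.com", "t.co")

def smishing_score(message: str) -> float:
    """Return a crude 0-1 suspicion score for an SMS body."""
    text = message.lower()
    score = 0.2 * sum(word in text for word in SUSPICIOUS_KEYWORDS)
    for url in URL_PATTERN.findall(text):
        score += 0.3
        if any(hint in url for hint in SHORTENER_HINTS):
            score += 0.2  # shortened links hide the real destination
    return min(score, 1.0)

msg = "URGENT: your account is suspended. Verify now at http://bit.ly/x1y2z"
print(f"score={smishing_score(msg):.2f}")  # flags keywords plus a shortened URL
```

A real firewall would combine hundreds of such signals, sender reputation among them, and learn the weights from data rather than hard-coding them.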
We live in a world built on the back of enormous technological advances in processor technology, with rapid increases in computing power drastically transforming our way of life. This was all made possible thanks to three key factors: the von Neumann architecture that the vast majority of processors are based on; Moore's Law, which predicted the trend of increased transistor count, leading to more functionality on the chip at a lower cost; and Dennard scaling, laws on how to make those transistors smaller while their power density stays constant, allowing the transistors to be both faster and lower in power.

But the rapid growth made attainable by technology scaling is nearing an end. The death of Moore's Law is often pronounced, as chip companies struggle to shrink transistors beyond the fundamental limits of how small they can go. Device scaling has slowed due to power and voltage considerations, as it becomes harder and harder to guarantee perfect functionality across billions of devices.

Then there's the von Neumann bottleneck. The von Neumann architecture separates memory from the processor, so data must be sent back and forth between the two, as well as to long-term storage and peripheral devices. But as processor speeds increase, the time and energy spent transferring data have become problematic, leaving processors idle and capping their actual performance. This problem has become particularly acute in large deep learning neural networks, limiting the potential performance of artificial intelligence applications.

Yearning to move on from von Neumann's grand designs of the 1940s, an ambitious effort is underway at IBM to build a processor designed for the deep learning age. Using phase-change memory devices, the company hopes to develop analog hardware that performs a thousand times more efficiently than a conventional system, with in-memory computing on non-volatile memory finally solving the bottleneck challenge. But this new concept brings its own set of complex technological hurdles yet to be overcome. In a series of interviews over the past six months, IBM gave DCD a deep dive into its multi-year project underway at its labs in Zurich, Tokyo, Almaden, Albany and Yorktown.

Handling analog information

"There are many AI acceleration technologies that we're looking at in various states of maturity," Bill Starke, IBM Distinguished Engineer, Power microprocessor development, told DCD. "This is the most exciting one I've seen in a long time."

Phase-change memory (PCM) "was originally meant to be a memory element which just stores zeros and ones," Starke said. "What was recognized, discovered, invented here was the fact that there's an analog physical thing underneath it," which could be used for processing deep learning neural networks as well as for memory.

"For the price of a memory read, you're essentially doing a very complex matrix operation, which is the fundamental kernel in the middle of AI," Starke said. "And that's a beautiful thing, I feel like nature is giving us a gift here."

Exploiting the unique behavior of chalcogenide glass, phase-change memory can, as the name suggests, change its state. Chalcogenide glass has two distinct physical phases: a high-conductance crystalline phase and a low-conductance amorphous phase. Both phases coexist in the memory element. The conductance of the PCM element can be incrementally modulated by small electrical pulses that change the amorphous region in the element.
The overall resistance is then determined by the size of the amorphous regions, with the atomic arrangement used to encode information. "Therefore, instead of recording a 0 or 1 like in the digital world, it records the states as a continuum of values between the two - the analog world," IBM notes.

The company has been researching PCM for memory for more than a decade, but "started building experimental chips for AI applications in 2007-2008," Evangelos Eleftheriou, Zurich-based IBM Fellow, Neuromorphic & In-memory Computing, told DCD. "And we keep producing experimental chips - one of those is the Fusion chip, and we have more in the pipeline."

Training and inference

To comprehend how different chip architectures can impact deep learning workloads, we must first understand some of the basics of deep learning, training and inference.

Think of a deep learning neural network as a series of layers, starting from the data input and ending with the result. These layers are made up of groups of nodes that are connected with each other, loosely inspired by the concept of neurons in the brain. Each connection has an assigned strength, or weight, that defines how a node impacts the next layer of nodes. During the training process, the weights are determined by showing the network large amounts of data, for instance images of cats and dogs, over and over again until the network remembers what it has seen. The weights in the different layers, together with the network architecture, comprise the trained model that can then be used for classification purposes. It will be able to distinguish cats from dogs, giving a large weight to relevant features like whiskers, and will not be disturbed by irrelevant low-weight features like, for instance, clouds in the picture.

This training phase is a hugely complex and computationally intense process, in which the weights are constantly updated until the network has reached a desired classification accuracy - something that would be impractical to run every single time somebody wanted to identify a cat. That's where inference comes in: it takes a trained model and solidifies it, running it in the field and no longer changing the weights.

More work for less power

The long-term aim, according to Jeff Burns, IBM Research's director of AI Compute, is for PCM to be able to run both inference and training workloads. "We see a very large advantage in overall compute efficiency," said Burns, who is also the director of the company's upcoming AI Hardware Center in New York. "So that can be realized as: if you have a certain workload, doing that workload at much, much, much lower power consumption. Or, if you want to stay in a power envelope, doing a much larger amount of computation in the same power.

"These techniques will allow one of the most compute intensive parts of the computation to be done in linear time."

By carefully tuning the PCM devices' conductance, analog stable states can be achieved, with neural network weights memorized in the physical phase configuration of these devices. By applying a voltage on a single PCM, a current equal to the product of voltage and conductance flows. IBM researchers Stefano Ambrogio and Wanki Kim explain: "Applying voltages on all the rows of the array causes the parallel summation of all the single products. In other words, Ohm's Law and Kirchhoff's Law enable fully parallel propagation through fully connected networks, strongly accelerating existent approaches based on CPUs and GPUs."
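That current-summation trick is easy to mimic numerically. The sketch below is an illustrative NumPy model, not IBM's hardware: weights become conductances, input activations become row voltages, and each output current is the per-device Ohm's Law product summed down a column per Kirchhoff's current law. The Gaussian noise term stands in for imperfect conductance programming, which anticipates the accuracy discussion that follows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Trained weights we want to "program" into a 4x3 crossbar (arbitrary values).
weights = rng.normal(size=(4, 3))

# Programming is imperfect: each conductance lands near, not at, its target.
programming_noise = 0.05
conductances = weights + rng.normal(scale=programming_noise, size=weights.shape)

# Input activations applied as voltages on the rows.
voltages = rng.normal(size=4)

# Each column current is sum_i V_i * G_ij: Ohm's Law per device, Kirchhoff's
# current law for the summation. One "read" = one matrix-vector product.
currents = voltages @ conductances

exact = voltages @ weights
print("analog result:", np.round(currents, 3))
print("exact result: ", np.round(exact, 3))
print("max deviation:", np.max(np.abs(currents - exact)))
```

The whole matrix-vector product happens in one step regardless of matrix size, which is the "linear time" advantage Burns describes; the deviation printed at the end is the price paid for analog precision.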
But this tantalizing promise of a superior technology that breaks free from the von Neumann bottleneck, outlives Moore's Law, and ushers in new deep learning advances comes with its own set of issues. "The problems are different in both inference and training," Wilfried Haensch, Distinguished IBM Research staff member, Analog AI technologies, told DCD.

Let's start with the comparatively easier inference space, and assume you still run training workloads on a GPU. "So, the trick here is, how do you get the weights from the GPU environment onto the analog array, so that you still have sufficient accuracy in the classification?" Haensch said. "This sounds very easy if you look at it in a PowerPoint slide. But it's not. Because if you copy floating point numbers from one digital device to another, you maintain the accuracy - the only thing that you do is copy a string of zeros and ones."

Analog accuracy issues

When copying a number from a digital environment into an analog environment, things become a little more complicated, Haensch said: "Now what you have to do is take the strings of zeros and ones, and imprint it into a physical quantity, like a resistance. But because resistance is just a physical quantity, you will never be able to copy the floating point number exactly. Physics is precise but not accurate, so you get a precise resistance, but it might not be the one that you want - perhaps it's a little bit off."

This inference accuracy issue is something IBM's Almaden lab hopes to overcome, running tests on long short-term memory (LSTM) networks, a complex deep learning approach fit for tasks with sequential correlation like speech or text recognition, where it can understand a whole sentence rather than just a word.

In a paper presented at the VLSI Symposia this June, Inference of Long-Short Term Memory networks at software-equivalent accuracy using 2.5M analog Phase Change Memory devices, Almaden "deals with how to copy the weights into the neural network and maintain inference accuracy," Haensch said. The paper details an algorithm that allowed researchers "to copy the weights accurately enough, so that we can maintain the classification accuracy, as expected from the floating point training," Haensch said. "So this is a very, very important point. Our philosophy is that we will first focus on inference applications, because they're a little bit easier to handle from a material perspective. But if we want to be successful with this, we have to find a way to bring the trained model into the analog array. And this is a significant step to show how this can be done."

For inference PCM devices, IBM have "convinced themselves that this approach is feasible and that there is no fundamental roadblock in the way," Haensch said. "For commercial use, inference is probably about five or six years away."

Can analog devices do training?

After that comes training, with Haensch admitting that "the training part is a little bit out. You really have to re-engineer these non-volatile memory elements so that they have certain switching behavior." Over in the Zurich labs, researchers got to work on trying to overcome the inherent challenges with PCM devices for deep learning training.

"In deep learning training, there are basically three phases," Zurich's Eleftheriou told DCD.
"There is a forward pass, that is similar to inferencing, in which you don't stress precision," where you calculate the values of the output layers from the input data with given fixed weights, moving forward from the first layer to the last layer.

"Then there is a backward pass with errors, again you don't need high precision," where computation is made from the last layer, backward to the first layer, again with fixed weights, he said.

The third part is where you "need to update the weights, thereby changing the connection strength between the input and output of each layer," Eleftheriou said. It is this part that remains difficult, so the solution at Zurich is to run the first two phases of training - forward and backward passes - on the PCM. However, the weight updates are accumulated on a standard von Neumann processor before the updates are transferred rather sporadically to the PCM devices. "This is, in a nutshell, the whole idea."

Haensch said: "That's a very important stepping stone, because it allows us to create the ability to train without pushing the material properties too hard."

Going beyond that stepping stone, as well as pushing towards commercialization, could have a profound impact on the future of deep learning, Haensch believes. "If you look at the development of neural networks today, it is really driven by the properties of GPUs," he said. "And GPUs require that you have narrow and deep networks for better learning - this is not necessarily the best solution. The analog arrays allow you to go back to shallow networks with wider layers, and this will open up the possibility to re-architect deep learning for learning optimization, because you are not bound anymore by the memory limitations of the GPUs."
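In pseudocode terms, the mixed-precision scheme Eleftheriou describes might look like the sketch below. This is a simplified illustration of the idea, not IBM's actual algorithm: forward and backward passes run against the (noisy) analog weights, while updates accumulate in a high-precision digital buffer and are flushed to the array only once they are large enough to program reliably.

```python
import numpy as np

rng = np.random.default_rng(1)

analog_w = rng.normal(size=(3, 2))      # conductance-encoded weights (noisy, low precision)
digital_acc = np.zeros_like(analog_w)   # high-precision accumulator on the digital side
lr, flush_threshold = 0.01, 0.05

def analog_matvec(w, x):
    """Crossbar read: matrix-vector product plus device noise."""
    return x @ (w + rng.normal(scale=0.02, size=w.shape))

for step in range(100):
    x = rng.normal(size=3)
    target = np.array([1.0, -1.0])

    y = analog_matvec(analog_w, x)       # forward pass on the array
    error = y - target                   # output-layer error signal
    grad = np.outer(x, error)            # backward pass yields the weight gradient

    digital_acc -= lr * grad             # accumulate the update in high precision
    # Flush only the updates that have grown large enough to program reliably.
    big = np.abs(digital_acc) > flush_threshold
    analog_w[big] += digital_acc[big]
    digital_acc[big] = 0.0
```

Because small updates never touch the devices individually, the scheme avoids stressing the PCM elements' limited switching precision, which is exactly the "stepping stone" Haensch refers to.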
<urn:uuid:4d9aa68c-1f23-479b-9a99-e87fb1b53dbc>
CC-MAIN-2022-40
https://www.datacenterdynamics.com/en/analysis/phase-change-memory-and-quest-escape-von-neumanns-bottleneck/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00625.warc.gz
en
0.940198
2,697
3.625
4
A new study by scientists from IBM Research, Compaq Corporate Research Laboratories and AltaVista presents a comprehensive "map" of the World Wide Web, and uncovers inadequate linking between regions of the internet that can make navigation difficult, or in some cases impossible. Based on an analysis of more than 600 million Web pages, the findings contradict earlier, small-sample studies that suggested a high degree of connectivity on the Web.

According to the study, the Web is divided into four large regions, each containing approximately the same number of pages. Massive constellations of Web sites in the outer regions are inaccessible by links, the most common route of travel between sites for Web surfers.

Giant Bow Tie

The image of the Web that emerged through the research is that of a bow tie. The knot of the bow tie is the Web's "strongly connected core" and contains almost one-third of all Web sites. In the core region, Web surfers can use links to travel between the sites with relative ease. One bow of the tie contains "origination" pages, and the other contains "termination" pages.

Origination pages, constituting almost one-quarter of the Web, allow users to reach the connected core eventually. However, those pages cannot be reached from the core. Termination pages, which also constitute approximately one-quarter of the Web, can be reached from the connected core, but do not link back to it. The fourth and final region, constituting approximately 22 percent of the Web, contains "disconnected" pages. Disconnected pages are sometimes connected to origination and/or termination pages, but Web surfers in the connected core cannot access those pages.

What Does It Mean?

The researchers believe that scientific and business communities will be able to improve connectivity on the Web if they understand the bow tie theory and identify which category their Web pages fall into. For example, understanding connectivity problems can help search engine developers such as AltaVista design more effective Web "crawling" strategies. Crawling the Web, then indexing the pages found, is one of the fundamental methods used by search engines to organize the internet. Currently, it is estimated that the most popular search engines index only 16 to 25 percent of the Web's URLs.

With the new data in hand, search engines will be able to improve methods for ranking Web sites. Currently, ranking in search engine results relies heavily on the number of links to and from a site, which promotes "link spamming" to artificially enhance a site's ranking.

Despite the increased awareness and attempts at increasing connectivity, as more pages become part of the connected core, new pages will continue to be created in all three outer regions of the Web, the report predicts.
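The bow-tie decomposition described here maps directly onto standard graph algorithms. The sketch below is an illustrative reconstruction on a tiny invented link graph, not the study's actual method or data: the core is the largest strongly connected component, the origination set is everything that can reach the core, the termination set is everything reachable from it, and everything else is lumped together as disconnected for simplicity (the study's "tendrils" would need finer treatment).

```python
import networkx as nx

# Tiny made-up link graph: an edge points from a page to a page it links to.
web = nx.DiGraph([
    ("a", "b"), ("b", "c"), ("c", "a"),   # a three-page strongly connected core
    ("in1", "a"), ("in2", "in1"),         # origination pages: they reach the core
    ("c", "out1"), ("out1", "out2"),      # termination pages: reached from the core
    ("x", "y"),                           # disconnected pages
])

core = max(nx.strongly_connected_components(web), key=len)
seed = next(iter(core))

out_set = nx.descendants(web, seed) - core   # reachable from the core
in_set = nx.ancestors(web, seed) - core      # can reach the core
disconnected = set(web) - core - out_set - in_set

print("core:        ", sorted(core))
print("origination: ", sorted(in_set))
print("termination: ", sorted(out_set))
print("disconnected:", sorted(disconnected))
```

Run at Web scale, this kind of classification is what lets a crawler know that starting from termination pages will never surface the core, which is why seed selection matters so much for coverage.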
Astronomers have long suspected that Jupiter undergoes more frequent collisions with space objects than we know, but this past Monday one apparently occurred that was so dramatic as to be visible even with amateur telescopes here on Earth.

Wisconsin-based amateur astronomer Dan Peterson first reported the event, having viewed it as it happened. Soon afterward, he described what he had seen online. Dallas-based astrophotography enthusiast George Hall took the discovery further. "When I saw [Peterson's] post, I went back and examined the videos that I had collected this morning," he wrote in a blog post of his own later that day. Sure enough, there in Hall's footage was a clear view of a brief but fiery explosion on the planet: "an apparent object impact captured about 6:35 am on Sept. 10, 2012, from Dallas, Texas USA."

What, exactly, caused the impact is not yet clear, but the most likely explanation appears to be either a comet or an asteroid. "Impacts of comets and asteroids onto Jupiter's atmosphere may be more frequent than previously thought," Mario Livio, a senior astrophysicist with the Space Telescope Science Institute, told TechNewsWorld.

Similar impacts in recent years were caused by objects including Comet Shoemaker-Levy 9, which broke up and collided with Jupiter in 1994, and an asteroid that collided with the planet in 2009, Livio pointed out. "Following up on these impacts provides much information about the conditions in Jupiter's atmosphere," he explained. In fact, "to this very day, some of the waves seen propagating following the 1994 impact remain somewhat of a mystery." Given that the evolution of life on Earth was also punctuated by asteroid impacts, "clearly understanding these events is important," Livio concluded.

'It's the Floor Sweeper'

Many scientists believe that Jupiter, with its strong gravitational pull, acts as a sort of "magnet" that prevents many comets and asteroids from hitting Earth, Paul Czysz, a professor emeritus of aerospace engineering with St. Louis University, told TechNewsWorld. "Jupiter probably gets impacted more than we see," Czysz explained. "It basically deflects them enough that they don't come close to Earth."

In other words, from our perspective, "it's good that Jupiter is there," Czysz added. "It has basically protected the inner planets from a lot more impacts. It's the 'floor sweeper' that keeps the junk from coming into the inner planets."

'The More Scared You Get'

Saturn and the moon, in fact, have done much the same thing, but on a lesser scale, he pointed out. On the moon, for instance, "every one of those craters you see represents an impact," he said. There are occasionally Earth-crossing asteroids, to be sure, but "the Earth is so small that it's really an unfortunate happenstance if one actually hits Earth," Czysz explained. "It's like having a 10-mile-wide target and a ping pong ball in the middle, and throwing rocks to try to hit it."

Until humans developed modern telescopic equipment, moreover, we weren't even aware of most asteroids, Czysz added. Now, however, "the more you find out, the more scared you get," he concluded.

'The Luck of the Draw'

It's not necessarily true that Jupiter provides a shield for Earth, said William Newman, a professor in the departments of earth and space sciences, physics and astronomy, and mathematics at UCLA. Rather, it's "more the luck of the draw," he told TechNewsWorld. "Jupiter will certainly prevent some objects and take the hit itself, but others will enter.
The fact that others have occurred is proof.” In any case, given that NASA’s Jet Propulsion Lab, for one, has an ongoing project to watch for asteroids and comets, why did this latest hit on Jupiter come as such a surprise? ‘Black Against Black’ Well, for one thing, “you can’t see them, they’re so black,” Czysz said. “They reflect less than 1 percent of their light. They’re black against a black background.” Indeed, “the larger ones we can detect, but the smaller ones are difficult,” Newman explained. “There are tiny objects that get away, and we don’t see them until we have a near miss.” However, that’s far from a frequent occurrence, particularly among the larger objects, Newman pointed out. “The little guys include shooting stars and cometary debris — there’s a lot of that stuff,” he said. “Every time you go up by a factor of 10 in size, the frequency goes down by more than a factor of 10.” ‘We Don’t See Much of the Big Stuff’ The last really big such event occurred 65 million years ago, Newman noted. “There have been three or four or maybe five others in the last hundreds of millions of years, and they’ve been widely spaced in time,” he added. “But they played a big role in our early solar system.” [*Correction – Sept. 19, 2012] Today, however, “we’re not likely to see one that size,” Newman concluded. “Happily, we don’t see much of the big stuff.” *ECT News Network editor’s note – Sept. 19, 2012: The original version of this article incorrectly quoted William Newman as stating that several events occurred in the last 100 million years.
<urn:uuid:f89f247c-4b73-4d5a-b078-aff312eed725>
CC-MAIN-2022-40
https://www.crmbuyer.com/story/skywatchers-treated-to-spectacular-fiery-show-on-jupiter-76167.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00625.warc.gz
en
0.956993
1,305
3.140625
3
The Dragon was caught by its tail on Friday. The unmanned SpaceX spacecraft, which launched into orbit earlier this week, has successfully docked with the International Space Station, marking a first for a cargo-carrying private spacecraft. The docking was assisted by the station’s 58-foot robotic arm controlled by astronaut Don Pettit. The historic linkup occurred 250 miles above northwest Australia. “This is a very big deal for space exploration,” said futurist Glen Hiemstra of the arrival of the Dragon capsule at the ISS. This is the first private company to embark on such a mission. “They’ve succeeded with several launches,” Hiemstra told TechNewsWorld, “The next ‘big hold your breath moment’ isn’t just one moment either. It is whether they can do this over and over, and they are proving they can.” Launch and Docking For space exploration, each mission can be a nail-biter, beginning with the takeoff, when even small problems can have catastrophic results. “The way the world works in space research is much like commercial flight, where 80 percent of accidents occur during takeoff and landing, with the rest occurring somewhere else,” Thomas Zurbuchen, professor of space science at the University of Michigan, told TechNewsWorld. “You have to understand that launches start with a bucket load of explosives, essentially,” said Zurbuchen. “That is always the biggest hurdle. But they passed that mark, and they’ve docked with the ISS, so that was the next big step.” The docking is a historic point because it simply hasn’t been done before outside of government space programs. The accomplishment is an “amazingly huge success,” commented Zurbuchen. “The launch was a big deal, and the docking was the biggest moment so far,” he said. “Docking with the robotic arm was the money shot!” Many Safe Returns While the Dragon capsule has made it to the station, it still has to return to Earth. SpaceX has successfully returned a Dragon from orbit, but all eyes are now on this mission to see if it is an end-to-end success. “The main technological challenge that lies ahead following the docking, unloading of cargo, and undocking from the ISS will be de-orbit and re-entry of the Dragon spacecraft into the Earth’s atmosphere,” said John W. Delano, Ph.D., associate director of the New York Center for Astrobiology. “The Dragon spacecraft will be returning some scientific equipment from the ISS, so a successful re-entry and accurate landing will be important.” The technology onboard Dragon, including guidance and control, has a long and successful history. “The technology and engineering associated with the Dragon mission to the ISS is largely routine nowadays,” said Delano. “The newsworthiness of this event is that the mission has been flown by a private company — following substantial NASA funding and major engineering consultation in the design and construction of the Falcon and Dragon.” Private Sector in Space While this mission is historic in terms of being a unique first from the commercial sector, there are still many issues that need to be resolved to determine if it will change the course of space exploration. “If private companies become convinced that a profit is to be made through space travel — e.g., space tourism in low-Earth orbit — then the momentum for privately funded space missions could grow,” said Delano. 
“If that major growth occurs, then this current mission will be recorded historically as an important first step along that path of privately funded space missions.” Only time will tell if this mission becomes historically important or just an historic footnote, he stressed. “When NASA funding is eventually withdrawn from SpaceX and other private companies, then we’ll see if there really is a profit to be made in privately funded space missions,” explained Delano. “At the moment, NASA is saving a bit of money by having SpaceX bring this small mass of cargo to the ISS instead of using a space shuttle to do it or a Russian Progress resupply spacecraft,” he noted. “By comparison, the space shuttles would normally bring several tons of cargo to the ISS on each mission, versus the less than half ton currently being delivered by the Dragon spacecraft.” Commercial Space Flight the Future? Beyond the actual mission, there is the fact that it could inspire a new level of innovative thinkers to consider the future of space exploration. “This can now become commercially interesting, as the key risks have been addressed, and they’ve done everything except launch people into space,” Zurbuchen noted. “So this isn’t just a story of technology, it is about the amazing talent coming through in an amazing company. SpaceX is really an entrepreneurship story. That’s what I’m excited about.” This mission could also spur other companies to get involved. “I believe it is a game changer, in that a small ambitious startup has proven that they can do this,” said Hiemstra. “They have had huge support from NASA, but this puts pressure on the larger aerospace companies, such as Boeing and Lockheed, to get involved. This could make space exploration more competitive and cheaper.”
<urn:uuid:3e896c4a-8ec8-4823-9d84-7b57788a239c>
CC-MAIN-2022-40
https://www.crmbuyer.com/story/spacex-chalks-up-giant-leap-for-commercial-space-travel-75222.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00625.warc.gz
en
0.953403
1,153
2.546875
3
Cybersecurity is sometimes viewed as being inherently reactive. But given the security issues we face today, security professionals must push beyond merely blocking an attack before a network breach. Cybersecurity teams must also have the ability to disrupt an attack from achieving its goal. This might sound similar to blocking an attack, but there's more to it. This foresight can be acquired through knowledge of the kill chain, which refers to models that map the stages of attacks from initial system probing and network penetration to the final exfiltration of valuable data. Some people in our industry describe this process as "cyber threat intelligence." The Strategy Behind Cyber Threat Intelligence Such a strategy goes beyond signatures or details tied to a specific threat. It could also include context and information about attack methodologies, tools utilized to obscure an infiltration, methods that hide an attack within network traffic, and tactics that evade detection. It is also important to understand the different kinds of data under threat, the malware in circulation, and, more importantly, how an attack communicates with its controller. These elements of foresight enable the disruption of an attack at any of the points mentioned above. But threat intelligence is also about being qualitative, at least to the degree that it can be leveraged to respond to an attack, whether that means a forensic analysis for full recovery or the attribution and prosecution of the people responsible for the attack. Sources of Cyber Threat Intelligence Information sharing is a critical aspect of any security strategy. It's critical to compare the network or device you are trying to protect against a set of currently active threats; this allows you to assign the right resources and countermeasures against different attacks. To leverage intelligence, start by accessing a variety of threat intelligence sources, some of which might include: - Actionable insights from manufacturers: These arrive as a part of regular security updates or, more accurately, as a signature with the ability to detect a known threat. - Intelligence from local systems and devices: When you establish a baseline for normal network behavior, it becomes easier to assess when something is out of whack. Spikes in data, an unauthorized device attempting contact with other devices, unknown applications rummaging the network, or data being stored or collected in an unlikely location are all forms of local intelligence. This can be used to identify an attack and even triangulate on compromised devices. - Intelligence from distributed systems and devices: As is the case with local intelligence, similar intelligence can be collected from other areas of the network. As they expand, networks provide and create new infiltration opportunities for attacks or threats. Also, different network environments — virtual or public cloud, for example — often run on separate, isolated networking and security tools. In those cases, centralized processes for both the collection and correlation of these different intelligence threads become necessary. - Intelligence from threat feeds: Subscriptions to public or commercial threat feeds help organizations enhance their data collection, both from their own environment and from a regional or global footprint in real time. These feeds generally boil down to two formats: - Raw feeds: Security devices simply cannot consume raw data, usually because it lacks context. 
This intelligence is better utilized after post-processing by customized tools or local security teams. Such an effort converts the raw data into a more practical format. An added advantage with raw feeds is that they're much closer to real time and are often cheaper to subscribe to. - Custom feeds: Information processed with context is easily consumed by security tools; an example could be specific information delivered using tailored indicators of compromise. Vendors may customize the data for consumption by an identified set of security devices. At the same time, organizations also need to ensure that their existing tools support common protocols for reading and utilizing the data. - Intelligence between industry peers: Information sharing has become an advantageous norm for many. Several groups, such as ISACs (information sharing and analysis centers) or ISAOs (information sharing and analysis organizations), share threat intelligence within the same market sphere, geographic region, or vertical industry. They are especially useful for identifying threats or trends affecting your peers with the potential to impact your own organization. Holding intelligence close may confer some advantage, but the opportunity to reduce the number of threats, and thereby expose everyone to less risk, is more valuable than anything gained by hoarding this information. Sharing is an important aspect of any security strategy. Then again, so is access to actionable intelligence in real time. Whatever the case, just remember that sharing your own threat intelligence serves to make everyone safer.
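To make the raw-versus-custom feed distinction above concrete, here is a minimal, hypothetical Python sketch of the kind of post-processing a raw feed needs before security tooling can act on it. The feed format, indicator values, and log lines are all invented for illustration; real feeds vary widely, and production pipelines normalize into standard formats such as STIX rather than ad hoc CSV.
```python
import csv
import io

# Invented raw feed: one indicator per row. Real feeds differ; adapt
# the parsing to whatever format your subscription actually delivers.
RAW_FEED = """indicator,type
203.0.113.42,ip
malicious-example.test,domain
198.51.100.7,ip
"""

def load_indicators(feed_text):
    """Parse a raw feed into sets of indicators, keyed by type."""
    indicators = {"ip": set(), "domain": set()}
    for row in csv.DictReader(io.StringIO(feed_text)):
        kind = row["type"].strip().lower()
        if kind in indicators:
            indicators[kind].add(row["indicator"].strip())
    return indicators

def match_log_line(line, indicators):
    """Return every known-bad indicator mentioned in a log line."""
    return [ioc for kind in indicators for ioc in indicators[kind] if ioc in line]

if __name__ == "__main__":
    iocs = load_indicators(RAW_FEED)
    for line in ("outbound connection to 203.0.113.42:443",
                 "dns query for internal.corp.example"):
        hits = match_log_line(line, iocs)
        if hits:
            print(f"ALERT: {line!r} matched {hits}")
```
A custom feed, by contrast, would arrive already keyed to the devices consuming it, making most of this normalization step unnecessary.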
<urn:uuid:a2f3500e-cf44-4120-808a-21cee3b5e68a>
CC-MAIN-2022-40
https://www.darkreading.com/threat-intelligence/why-sharing-intelligence-makes-everyone-safer
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00025.warc.gz
en
0.936827
1,006
2.59375
3
Multi-Protocol Label Switching – MPLS – is a method of ensuring packets of data get where they’re supposed to, via a sensible route, and that packets are prioritised appropriately. Packets are labelled with one or more labels. As each packet passes through the MPLS network, labels may be added, replaced or stripped off. The network distributes information so that each switch knows what it is supposed to do if it encounters a particular label. As a label protocol, it has evolved to address QoS (traffic engineering) and to support optical connections via MPLS-TP. MPLS has the following benefits, which explain why it has come of age and has a lot of traction with IT/telecommunications providers and the ubiquitous data centres/PODs. Benefits of MPLS:- -Improve Uptime – by sending data over an alternative path in less than 50 milliseconds (if one exists). MPLS also reduces the amount of manual intervention your network provider has to do to create a WAN, reducing the likelihood of human error bringing down your circuit. -Create Scalable IP VPNs – with MPLS it’s easy to add an additional site to the VPN. There is no need to configure a complex mesh of tunnels, as is common with some traditional approaches. -Improve User Experience – by prioritising time-sensitive traffic such as VoIP. Multi-Protocol Label Switching offers multiple Classes of Service, enabling you to apply separate settings to different types of traffic. -Improve Bandwidth Utilisation – by putting multiple types of traffic on the same link, you can let high priority traffic borrow capacity from lower priority traffic streams whenever required. Conversely, when the lower priority traffic needs to burst beyond its usual amount of bandwidth, it can use any capacity that’s not being used by higher priority services. -Hide Network Complexity – an MPLS connection between two sites can be configured to act like a long ethernet cable, with the hops involved hidden from view. This is sometimes known as VPLS (Virtual Private LAN Service). -Reduce Network Congestion – Sometimes the shortest path between two locations isn’t the best one to take, as congestion has made it less attractive (at least for the time being). MPLS offers sophisticated traffic engineering options that enable traffic to be sent over non-standard paths. This can reduce latency (the delay in sending/receiving data). It also reduces congestion on the paths that have just been avoided as a result of traffic engineering. Also, please look at LISP, TRILL, VXLAN, etc.
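To ground the label-swapping idea, the toy Python simulation below walks a packet along a label-switched path, with each router consulting a small forwarding table to push, swap, or pop the top label. The router names and label values are invented; real MPLS forwarding happens in router hardware, not application code, so treat this purely as a conceptual sketch.
```python
# Each router maps an incoming top label to (action, outgoing label, next hop).
# "push" adds the initial label at the ingress edge; "pop" removes it at egress.
LFIB = {
    "PE1": {None: ("push", 100, "P1")},         # ingress edge router
    "P1":  {100:  ("swap", 200, "P2")},         # core label-switch router
    "P2":  {200:  ("swap", 300, "PE2")},        # core label-switch router
    "PE2": {300:  ("pop",  None, "customer")},  # egress edge router
}

def forward(router, label):
    """Apply one router's label operation and return the new state."""
    action, out_label, next_hop = LFIB[router][label]
    print(f"{router}: {action} label {label} -> {out_label}, forward to {next_hop}")
    return out_label, next_hop

label, hop = None, "PE1"
while hop in LFIB:
    label, hop = forward(hop, label)
```
Note that the core routers never inspect the IP header at all; that is what lets MPLS hide network complexity and engineer traffic onto non-shortest paths.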
<urn:uuid:96f2196e-2a46-49ed-b2de-103ebdfde114>
CC-MAIN-2022-40
https://www.erlang.com/reply/42278/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00025.warc.gz
en
0.928677
549
2.953125
3
Phishing is a method of trying to gather personal information using deceptive e-mails and websites. Typically, a phisher sends an e-mail disguised as a legitimate business request. For example, the phisher may pass himself off as a real bank asking its customers to verify financial data. (So phishing is a form of "social engineering".) The e-mail is often forged so that it appears to come from a real e-mail address used for legitimate company business, and it usually includes a link to a website that looks exactly like the bank's website. However, the site is bogus, and when the victim types in passwords or other sensitive information, that data is captured by the phisher. The information may be used to commit various forms of fraud and identity theft, ranging from compromising a single existing bank account to setting up multiple new ones. Early phishing attempts were crude, with telltale misspellings and poor grammar. Since then, however, phishing e-mails have become remarkably sophisticated. Phishers may pull language straight from official company correspondence and take pains to avoid typos. The fake sites may be near-replicas of the sites phishers are spoofing, containing the company's logo and other images and fake status bars that give the site the appearance of security. Phishers may register plausible-looking domains like aolaccountupdate.com, mycitibank.net or paypa1.com (using the number 1 instead of the letter L). They may even direct their victims to a well-known company's actual website and then collect their personal data through a faux pop-up window. Can phishing attacks be prevented? Companies can reduce the odds of being targeted, and they can reduce the damage that phishers can do (more details on how below). But they can't really prevent it. One reason phishing e-mails are so convincing is that most of them have forged "from" lines, so that the message looks like it's from the spoofed company. There's no way for an organization to keep someone from spoofing a "from" line and making it seem as if an e-mail came from the organization. A technology known as sender authentication does hold some promise for limiting phishing attacks, though. The idea is that if e-mail gateways could verify that messages purporting to be from, say, Citibank did in fact originate from a legitimate Citibank server, messages from spoofed addresses could be automatically tagged as fraudulent and thus weeded out. (Before delivering a message, an ISP would compare the IP address of the server sending the message to a list of valid addresses for the sending domain, much the same way an ISP looks up the IP address of a domain to send a message. It would be sort of an Internet version of caller ID and call blocking.) Although the concept is straightforward, implementation has been slow because the major Internet players have different ideas about how to tackle the problem. It may be years before different groups iron out the details and implement a standard. Even then, there's no way of guaranteeing that phishers won't find ways around the system (just as some fraudsters can fake the numbers that appear in caller IDs). That's why, in the meantime, so many organizations-and a growing marketplace of service providers-have taken matters into their own hands. How can companies reduce the chance of being targeted by a phishing attack? In part, the answer has to do with NOT doing silly or thoughtless things that can increase your vulnerability. 
Now that phishing has become a fact of life, companies need to be careful about how they use e-mail to communicate with customers. For example, in May 2004, Wachovia's phones started ringing off the hook after the bank sent customers an e-mail instructing them to update their online banking user names and passwords by clicking on a link. Although the e-mail was legitimate (the bank had to migrate customers to a new system following a merger), a quarter of the recipients questioned it. As Wachovia learned, companies need to clearly think through their customer communication protocols. Best practices include giving all e-mails and webpages a consistent look and feel, greeting customers by first and last name in e-mails, and never asking for personal or account data through e-mail. If any time-sensitive personal information is sent through e-mail, it has to be encrypted. Marketers may wring their hands at the prospect of not sending customers links that would take them directly to targeted offers, but instructing customers to bookmark key pages or linking to special offers from the homepage is a lot more secure. That way, companies are training their customers not to be duped. It also makes sense to revisit what customers are allowed to do on your website. They should not be able to open a new account, sign up for a credit card or change their address online with just a password. At a minimum, companies should acknowledge every online transaction through e-mail and one other method of the customer's choosing (such as calling the phone number on record) so that customers are aware of all online activity on their accounts. And to make it more difficult for phishers to copy online data-capture forms, organizations should avoid putting them on the website for all to see. Instead, organizations should require a secured log-in to access e-commerce forms. At the end of the day, though, better authentication is the best way to decrease the likelihood that phishers will target your organization. What plans should my company have in place before a phishing incident occurs? Before your organization becomes a target, establish a cross-functional anti-phishing team and develop a response plan so that you're ready to deal with any attack. Ideally, the team should include representatives from IT, internal audit, communications, PR, marketing, the web group, customer service and legal services. This team will have to answer some hard questions, such as: * Where should the public send suspicious e-mails involving your brand? Set up a dedicated e-mail account, such as [email protected], and monitor it closely. * What should call center staff do if they hear a report of a phishing attack? Make sure that employees are trained to recognize the signs of a phishing attack and know what to tell and ask a customer who may have fallen for a scam. * How and when will your organization notify customers that an attack has occurred? You might opt to post news of new phishing e-mails targeting your company on your website, reiterating that they are not from you and that you didn't and won't ask for such information. * Who will take down a phishing site? Larger companies often keep this activity in-house; smaller companies may want to outsource. - If you keep the shut-down service in-house, a good response plan should outline whom to contact at the various ISPs to get a phisher site shut down as quickly as possible. 
Also, identifying law enforcement contacts at the FBI and the Secret Service ahead of time will improve your chances of bringing the perpetrator to justice. - If a vendor is used, decide what the vendor can do on your behalf. You may want to authorize representatives to send e-mails and make phone calls, but have your legal department handle any correspondence involving legal action. * When will the company take action against a phishing site, such as feeding it inaccurate information or exploiting vulnerabilities in its coding? Talk out the many pros and cons beforehand. * How far will you go to protect customers? Decide how much information about identity theft you'll give to customers who fall for a scam, and how this information will be delivered. You should also talk through scenarios in which you will monitor or close and re-open affected accounts. * Are you inadvertently training your customers to fall for phishing scams? Educate the sales and marketing teams about characteristics of phishing e-mails. Then, make sure legitimate e-mails don't set off any alarms. How can we quickly find out if a phishing attack has been launched using our company's name? Sometimes a new phish announces itself violently, as an organization's e-mail servers get pummeled with phishing e-mails that are bouncing back to their apparent originator. There are other ways to learn about an attack, though-either before or after it occurs. a) Monitor for fraudulent domain name registrations. Phishers often set up the fake sites several days before sending out phishing e-mails. One way to stop them from swindling your customers is to find and shut down these phishing sites before phishers launch their e-mail campaigns. You can outsource the search to a fraud alert service. These services use technologies that scour the Web looking for unauthorized uses of your logo or newly registered domains that contain your company's name, either of which might be an indication of an impending phishing attack. This will give your company time to counteract the strike. b) Set up a central inbox. To do this, organizations typically set up one e-mail address where all suspected phishing e-mails are directed, with an address such as [email protected] or [email protected]. Ideally, this central inbox should be monitored 24/7. The easiest and most effective way to find out if your organization is being targeted by phishers is simply by giving the general public a way to report phishing attacks. "It's your customers and noncustomers who are going to be the ones that tell you that the phish is out there," said one security manager interviewed for a case study published in CSO. c) Watch your Web traffic. Internet Storm Center recommends that by examining web traffic logs and looking for spikes in referrals from specific, heretofore unknown IP addresses, CSOs may be able to zero in on sites used for large-scale phishing attacks. After gathering victims' information, many phishing sites then redirect the victim to a log-in page on the real website the phisher is spoofing. How can we help our customers avoid falling for phishing? People who know about phishing stand a better chance of resisting the bait. "The best defense is that a consumer has heard of phishing and is unlikely to respond," says Patricia Poss, an attorney with the Bureau of Consumer Protection at the Federal Trade Commission. People must be trained to think twice about replying to any e-mail or pop-up that requests personal information. 
Teach employees how to recognize spoofed e-mail. Similarly, warn your customers about the dangers of phishing, and let them know you'll never ask for their account number, password, Social Security number or any other personal information via e-mail. Train them to avoid clicking on e-mail links to reach you and instead to type your company's URL directly into a new browser window. However, there's only so much that customer education can do. The onus is also on the organization to limit the damage by shutting down the phishing site. If an attack does happen, how should we respond? Once a phishing attack occurs, the goal for the organization is to get the phishing site shut down as quickly as possible. This limits the window of opportunity in which the phisher can collect personal information. With any phishing attack, organizations should take three steps (or hire a firm to take these steps for them). Step 1) Gather basic information about the attack. This should include screen shots of the website plus the URL. Step 2) Contact the ISP (or whoever is hosting the website). Explain the situation and ask that the site be shut down. Many phishing sites are launched on hacked computers, so in a best-case scenario, taking down the site is simply a matter of contacting a website's owners, pointing them to the URL of the webpage, and asking them to remove the offending content (and patch their web servers). Step 3) Contact law enforcement. Although this is an important step, be warned that it isn't necessarily the most effective way to get the site shut down quickly. The FBI and Secret Service are more concerned with patterns and big busts than individual ones, and until a customer has fallen for a scam and suffered damages, there may have been no law broken. Nevertheless, agents may be able to intervene on your behalf-and who knows, your case may be part of the bigger picture investigation needed to shut down a given fraudster. (This has happened. In May 2005, a 20-year-old Texas man was sentenced to almost four years in prison for phishing.) By establishing a relationship with law enforcement, you'll come to understand when agents want information about what kinds of attacks. For instance, the bank in the aforementioned CSO case study gets a compact disc from its vendor with information about each phish, and a copy of that CD is then passed on to the FBI, which looks for patterns or anomalies in the attacks. Does all this sound like too much for your company? Then pay someone else to do it for you. The marketplace is brimming right now with companies that will do the dirty work.
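Returning to the sender-authentication idea raised earlier: one of its building blocks, SPF, publishes a domain's authorized sending hosts as a DNS TXT record. The hedged sketch below, using the third-party dnspython library (2.x) and a reserved placeholder domain, merely fetches and prints such a record; a real mail gateway would go further and compare the connecting server's IP address against the mechanisms listed in it.
```python
import dns.exception
import dns.resolver  # third-party package: dnspython

def get_spf_record(domain):
    """Return the domain's published SPF TXT record, or None if absent."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except dns.exception.DNSException:
        return None
    for rdata in answers:
        # TXT records arrive as one or more byte strings; join and decode them
        txt = b"".join(rdata.strings).decode("utf-8", "replace")
        if txt.lower().startswith("v=spf1"):
            return txt
    return None

if __name__ == "__main__":
    # example.com is a reserved documentation domain, used here for illustration
    record = get_spf_record("example.com")
    print(record or "no SPF record published")
```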
<urn:uuid:ea49e9ec-1769-409b-aaa2-87299149d58b>
CC-MAIN-2022-40
https://www.diligent.com/news/phishing-test-results-barely-passing-grade-users/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00025.warc.gz
en
0.952235
2,713
3.515625
4
What Is HIPAA Compliance and Why Is It Important? What Does HIPAA Mean? What is HIPAA exactly, and what do you as a company need to do to stay on the right side of its associated regulations? HIPAA stands for the Health Insurance Portability and Accountability Act, which was passed by Congress back in 1996. HIPAA has since been updated and built on, most notably with the 2009 HITECH Act (Health Information Technology for Economic and Clinical Health) and the 2013 Omnibus Rule. Together, these extended liability to business associates and their subcontractors and introduced stricter protections on how PHI can be used as regards marketing and sales. While HIPAA concerns a number of areas, including healthcare coverage for people losing or changing their job and tax-related provisions, our main focus will be on Title II of the law, which is about the exchange, security, and privacy of health data and what concerns the vast majority of businesses when it comes to compliance. Let's jump right in and go through all you need to know about HIPAA and what the keys to success are for HIPAA compliance. What Is the Purpose of HIPAA? As we just noted, HIPAA has several purposes outside of data protection—specifically related to health insurance law reform. For most organizations researching HIPAA, however, their primary goal is to know what they need to do in order to stay compliant with its regulations and avoid the fines that come from non-compliance. This area of HIPAA has to do with data protection and privacy in relation to the disclosure and use of protected health information, or PHI. HIPAA compliance and the security of PHI are critical to health organizations today. Who Has to Abide By HIPAA? Entities that have to abide by HIPAA are known as covered entities. Covered entities are people or companies that store, handle, and process PHI. Covered entities, in addition to keeping in compliance with HIPAA, are also responsible for reporting violations relating to it. The following individuals and organizations constitute covered entities: health care providers (for example, nursing homes); health plans (health insurance companies, company health plans, and government-provided health care plans); and health care clearinghouses, which are entities that facilitate the processing of nonstandard health information into standard data elements, acting effectively as middlemen between healthcare providers and insurance payers. HIPAA obligations also extend downstream: - A "business associate" creates, receives, maintains, or transmits protected health information (PHI) on behalf of a covered entity or another business associate acting as a subcontractor. - A subcontractor that creates, maintains, or transmits protected health information (PHI) on behalf of a business associate has the same legal responsibilities as a business associate under HIPAA. In other words, privacy- and security-related legal responsibilities flow "downstream" to subcontractors performing work for a business associate. - A hybrid entity performs both HIPAA-covered and non-covered functions as part of its business. A large corporation that has a self-insured health plan for its employees may elect to be treated as a hybrid entity. Other examples are a university with a medical center or a grocery store that has a pharmacy. What Does PHI Encompass? Protected health information (PHI) refers to any demographic information which can be used to identify a patient, client, or other entity. There are 18 identifiers that make information relating to health considered PHI. 
These are: - Names - Dates, except year - Geographic data - FAX numbers - Social Security Numbers - Email addresses - Medical record numbers - Account numbers - Health plan beneficiary numbers - Certificate/license numbers - Vehicle identifiers and serial numbers, including license plate numbers - Phone numbers - Web URLs - Device identifiers and serial numbers - Internet protocol (IP) addresses - Full-face photos and comparable images - Biometric identifiers (fingerprints, for example) - Any numbers or codes that uniquely identify someone These are the types of data and information that must be protected in order to remain HIPAA compliant. What Is Considered a HIPAA Violation? A HIPAA violation occurs when compliance is not adhered to by an entity, and there are literally hundreds of ways individuals and organizations can fall foul of HIPAA compliance. Common violations of HIPAA will typically involve one of the following: - Unauthorized, impermissible, or unnecessary disclosure of PHI - Unauthorized accessing of PHI - Incorrect disposal of PHI - Failure by the entity to conduct a risk assessment - Lack of risk management as regards PHI - Failure to establish a HIPAA compliance agreement with third parties when providing access to PHI - Failure to provide security awareness and HIPAA training to employees - PHI theft - Sharing of PHI without prior permission - Mishandling/unwarranted mailing of PHI - Failure to notify individuals of a security incident involving PHI within 60 days of breach discovery - No documentation of compliance protocols, procedures, and management What Happens If HIPAA Is Violated? A HIPAA violation occurs when any aspect of the HIPAA standards and provisions is contravened. You can find a full rundown of all HIPAA regulations, published by the Department of Health and Human Services Office for Civil Rights, here. If a violation is reported, the covered entity is subject to penalties, whether they be civil or criminal—penalties can vary significantly, depending on the violation. Typically, the US Department of Health and Human Services Office for Civil Rights (OCR) will investigate violations—and they will investigate all covered entities who report breaches of more than 500 records. If the OCR determines that a particular case is criminal rather than civil, they will refer it to the Department of Justice. In the majority of cases, individuals can expect to pay $100 per violation; repeat violations can cause fines of up to $25,000. In cases where individuals have shown a willful neglect of HIPAA regulations and have made no attempt to correct their policies and procedures, a minimum penalty of $50,000 can be incurred, up to a maximum of $1.5 million. In criminal cases, lesser sentences of a $50,000 fine and up to one year in prison are possible—with a $250,000 fine and up to 10 years in prison being the maximum. For civil proceedings, violations are categorized into tiers, with 4 being the most severe. They are as follows: - Tier 1: A violation that the covered entity was unaware of and could not have avoided. - Tier 2: A violation that the covered entity should have been aware of but could not avoid. - Tier 3: A violation that occurred as a direct result of willful neglect, but where an attempt was made to rectify the violation. - Tier 4: A violation constituting willful neglect where no attempt was made to correct the violation. 
The penalties for HIPAA non-compliance for each tier are as follows: - Tier 1: Minimum fine of $100 per violation up to $50,000 - Tier 2: Minimum fine of $1,000 per violation up to $50,000 - Tier 3: Minimum fine of $10,000 per violation up to $50,000 - Tier 4: Minimum fine of $50,000 Criminal proceedings are a little different, with three tiers and far more severe punishments than civil proceedings. The offense tiers are as follows: - Tier 1: Reasonable cause or no knowledge of violation - Tier 2: Obtaining PHI under false pretenses - Tier 3: Obtaining PHI for personal gain or with malicious intent The corresponding penalties are: - Tier 1: Up to one (1) year in jail - Tier 2: Up to five (5) years in jail - Tier 3: Up to 10 years in jail Can I Be HIPAA Certified? At the time of writing this, there is no such thing as HIPAA compliance certification or verification. Third parties may offer a form of "HIPAA certification", but there is not an officially endorsed or mandated certification offered by HHS. There is no standard or implementation specification that requires a covered entity to "certify" compliance. The evaluation standard § 164.308(a)(8) requires covered entities to perform a periodic technical and non-technical evaluation that establishes the extent to which an entity's security policies and procedures meet the security requirements. – Office for Civil Rights (OCR) So, while there is no HIPAA certification, many third-party MSSPs can perform periodic assessments when necessary and ensure that you are in compliance with HIPAA. What Is a HIPAA Officer? A HIPAA officer is a compliance officer. Whether they are in-house or hired as a third party, their primary job will be to ensure your HIPAA compliance by making sure your security and privacy protocols for PHI data are correctly enforced. In instances where there is no such policy in place, the HIPAA officer will be responsible for developing and implementing a compliance plan for the individual or organization. They will then be in charge of maintaining and monitoring the program, investigating and reporting where legally necessary and ensuring that patient or client data is being safeguarded as required by state and federal law. What Is the Key to Success for HIPAA Compliance? If you've been reading this piece (or skimming) and felt your pulse rising a little looking at the penalties for non-compliance, then don't worry. It doesn't take a lot to ensure that you are compliant with HIPAA, but there are certainly some keys to success for HIPAA compliance that organizations would do well to follow. First, you should seek out a managed security service provider who performs HIPAA assessments to come and audit your systems for HIPAA compliance. Once they've performed the risk assessment, they will be able to recommend and carry out the implementations you need to make sure you are doing everything possible to maintain compliance. What Is a HIPAA Risk Assessment? A HIPAA compliance audit is the assessment performed by a compliance officer which will take a deep dive into your systems and security protocols. First, they will need to collaborate with you in determining the scope of the audit—chiefly related to your obligations (in this case, HIPAA is the main priority, though you may need to be compliant with other regulations, too). They will then draw up a schedule for the audit and proceed to the next stage: execution. This part involves vulnerability scanning, penetration testing, and a gap analysis. 
In the case of a risk assessment for HIPAA compliance, a gap analysis will be essential, as this is where the HIPAA compliance officer will detail what needs to be done to bring you or your company into compliance. Once the HIPAA compliance audit is concluded, the compliance officer will make their recommendations and you can get a clear understanding of what needs to be done. You may also take this opportunity to delegate the implementation of these recommendations to the MSSP, in which case you can sign a long-term contract with them—allowing you to get on and run your business while the managed security service provider takes care of compliance. If you’d like to learn more about HIPAA compliance and what a managed security service provider can do for you, take a look at our Compliance Services page.
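As a small illustration of why the 18 identifiers listed earlier matter day to day, the sketch below scans free text for three of the easier ones to pattern-match: Social Security numbers, email addresses, and US-style phone numbers. The regular expressions are deliberately simplistic and will miss many real-world variants, so treat this as a teaching aid, not a production de-identification tool.
```python
import re

# Simplified patterns for three of HIPAA's 18 identifiers.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_phi(text):
    """Return a dict of identifier type -> matches found in the text."""
    hits = {name: pat.findall(text) for name, pat in PHI_PATTERNS.items()}
    return {name: matches for name, matches in hits.items() if matches}

# Invented sample record, purely for demonstration.
sample = "Patient J.D., SSN 123-45-6789, reachable at jd@example.com or 555-867-5309."
print(find_phi(sample))
```
A real compliance audit would pair pattern scanning like this with access reviews and policy checks, since most of the 18 identifiers (names, dates, geographic data) cannot be caught by simple regular expressions.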
<urn:uuid:03bf150b-59c2-459d-a995-31c54efcda56>
CC-MAIN-2022-40
https://www.impactmybiz.com/blog/what-is-hipaa-compliance-what-hipaa-means/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00025.warc.gz
en
0.940433
2,434
2.640625
3
The metaverse is the latest technology that most companies are at least considering exploiting as a major business driver. Theoretically, it will enable commerce and communications, and will be a major way to engage with consumers, professionals, and the world as a whole. Of course, this brings a new risk landscape that is equivalent in scope to that of the early Internet. The reality is that the metaverse is not new. There have been metaverses in different forms for decades. Second Life was the first major platform to create an artificial universe, and it revealed a variety of lessons. Massively multiplayer role-playing games have also revealed critical lessons. Even Zoom is arguably a form of metaverse. These experiences exposed many vulnerabilities, both technical and operational. Metaverses have been host to sexual abuse, terrorist communications, scams, money laundering, financial thefts, and more. Additionally, there are technical vulnerabilities that are much more difficult to deal with than the traditional vulnerabilities an organization faces, given that a great deal of functionality is enabled on client systems that the organization has no control over. This presentation walks through the history of the metaverses, past incidents, the obvious and non-obvious vulnerabilities, and guidance for being proactive in dealing with the vulnerabilities.
<urn:uuid:57ed6e01-2c7e-440d-8d8c-ee4c3fb2bd02>
CC-MAIN-2022-40
https://www.infosecworldusa.com/isw2022/session/919292/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00025.warc.gz
en
0.959142
254
2.609375
3
It’s a long-standing question. Can Apple Macs get viruses? While Apple does go to great lengths to keep all its devices safe, this doesn’t mean your Mac is immune to all computer viruses. So what does Apple provide in terms of antivirus protection? Let’s take a look, along with some signs that your Mac may be hacked and how you can protect yourself from further threats beyond viruses, like identity theft. Signs that your Mac may be hacked Whether hackers physically sneak it onto your device or trick you into installing it via a phony app, a sketchy website, or a phishing attack, viruses and malware can create problems for you in a few ways: - Keylogging: In the hands of a hacker, keylogging works like a stalker by snooping information as you type. - Trojans: Trojans are a type of malware that can be disguised in your computer to extract important data, such as credit card account details or personal information. - Cryptominers: Similar to trojans, this software hides on a device. From there, it harnesses the device’s computing power to “mine” cryptocurrencies. While cryptomining is not illegal, “cryptojacking” a device without the owner’s consent is most certainly illegal. Some possible signs of hacking software on your Mac include: Is your device operating more slowly, are web pages and apps harder to load, or does your battery never seem to keep a charge? These are all signs that you could have malware running in the background, zapping your device’s resources. Your computer feels like it’s running hot Like the performance issues above, malware or mining apps running in the background can burn extra computing power (and data). Aside from sapping performance, malware and mining apps can cause your computer to run hot or even overheat. Mystery apps or data If you find apps you haven’t downloaded, along with messages and emails that you didn’t send, that’s a red flag. A hacker may have hijacked your computer to send messages or to spread malware to your contacts. Similarly, if you see spikes in your data usage, that could be a sign of a hack as well. Pop-ups or changes to your screen Malware can also be behind spammy pop-ups, changes to your home screen, or bookmarks to suspicious websites. In fact, if you see any configuration changes you didn’t personally make, this is another big clue that your computer may have been hacked. What kind of antivirus do Macs have? Macs contain several built-in features that help protect them from viruses: - XProtect and Automatic Quarantine: XProtect is Apple’s proprietary antivirus software that’s been included on all Macs since 2009. Functionally, it works the same as any other antivirus, where it scans files and apps for malware by referencing a database of known threats that Apple maintains and updates regularly. From there, suspicious files are quarantined by limiting their access to the Mac’s operating system and other key functions. However, XProtect covers only the threats Apple has already catalogued, and its definitions are not updated as frequently as those behind dedicated third-party antivirus tools. - Malware Removal Tool: To further keep Apple users protected, the Malware Removal Tool (MRT) scans Macs to spot and catch any malware that may have slipped past XProtect. Similar to XProtect, it relies on a set of constantly updated definitions that help identify potential malware. According to Apple, MRT removes malware upon receiving updated information, and it continues to check for infections on restart and login. 
- Notarization, Gatekeeper, and the App Review Process: Another way Apple keeps its users safe across MacOS and iOS devices is its Notarization process. Apps built to run on Apple devices go through an initial review before they can be distributed and sold outside of Apple’s App Store. When this review turns up no instances of malware, Apple issues a Notarization ticket. That ticket is recognized in another part of the MacOS, Gatekeeper, which verifies the ticket and allows the app to launch. Additionally, if a previously approved app is later found to be malicious, Apple can revoke its Notarization and prevent it from running. Similarly, all apps that wish to be sold on the Apple App Store must go through Apple’s App Review. While not strictly a review for malware, security matters are considered in the process. Per Apple, “We review all apps and app updates submitted to the App Store in an effort to determine whether they are reliable, perform as expected, respect user privacy, and are free of objectionable content.” - Further features: In addition to the above, Apple includes technologies that prevent malware from doing more harm, such as preventing damage to critical system files. Do I need to purchase antivirus for my Mac? There are a couple of reasons why Mac users may want to consider additional protection in addition to the antivirus protection that Mac provides out of the box: - Apple’s antivirus may not recognize the latest threats. A component of strong antivirus protection is a current and comprehensive database of virus definitions. As noted above, Apple’s definitions are not updated as frequently as those of dedicated antivirus vendors, leaving Mac owners who solely rely on XProtect and other built-in features susceptible to attack. - Apple’s built-in security measures for Macs largely focus on viruses and malware alone. While protecting yourself from viruses and malware is of utmost importance (and always will be), the reality is that antivirus is not enough. Enjoying life online today means knowing your privacy and identity are protected as well. In all, Macs are like any other connected device. They’re susceptible to threats and vulnerabilities as well. Looking more broadly, there’s the wider world of threats on the internet, such as phishing attacks, malicious links and downloads, prying eyes on public Wi-Fi, data breaches, identity theft, and so on. It’s for this reason Mac users may think about bolstering their defenses further with online protection software. Further protecting your Mac from viruses and attacks Staying safer online follows a simple recipe: - Being aware of the threats that are out there. - Understanding where your gaps in protection are. - Taking steps to protect yourself from those threats and closing any gaps as they arise. Reading between the lines, that recipe can take a bit of work. However, comprehensive online protection can take care of it for you. In particular, McAfee Total Protection includes an exclusive Protection Score, which checks to see how safe you are online, identifies gaps, and then offers personalized guidance, helping you know exactly how safe you are. An important part of this score is privacy and security, which is backed by a VPN that turns on automatically when you’re on an unsecure network and personal information monitoring to help protect you from identity theft—good examples that illustrate how staying safe online requires more than just antivirus. Consider your security options for your Mac So, Macs can get viruses and are subject to threats just like any other computer. 
While Macs have strong protections built into them, they may not offer the full breadth of protection you want, particularly in terms of online identity theft and the ability to protect you from the latest malware threats. Consider the threats you want to keep clear of and then take a look at your options that’ll help keep you safe. Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
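To tie the Gatekeeper and quarantine discussion to something you can inspect yourself, the sketch below shells out to two built-in macOS command-line tools: spctl, which reports Gatekeeper's assessment of an app, and xattr, which reveals the quarantine attribute that downloaded files receive. The application path is a placeholder, and this is a diagnostic illustration rather than a security control.
```python
import subprocess

def gatekeeper_assess(path):
    """Ask Gatekeeper (via spctl) whether it would allow this app to run."""
    result = subprocess.run(
        ["spctl", "--assess", "--verbose", path],
        capture_output=True, text=True,
    )
    # spctl typically prints its verdict ("accepted"/"rejected") to stderr
    return result.stderr.strip() or result.stdout.strip()

def quarantine_flag(path):
    """Read the quarantine attribute set on downloaded files, if any."""
    result = subprocess.run(
        ["xattr", "-p", "com.apple.quarantine", path],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else None

app = "/Applications/SomeDownloadedApp.app"  # placeholder path
print("Gatekeeper:", gatekeeper_assess(app))
print("Quarantine:", quarantine_flag(app) or "no quarantine attribute")
```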
<urn:uuid:b5a9e0d9-467a-4c52-93c8-d52c895c7fbf>
CC-MAIN-2022-40
https://www.mcafee.com/blogs/internet-security/can-apple-macs-get-viruses/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00025.warc.gz
en
0.925461
1,570
2.71875
3
Multimedia Quiz - Multimedia Questions and Answers 1. With reference to multimedia elements, pick the odd-one out of the following: - Voice Script 2. JPEG is an acronym for __________ and is a file format for _________ 3. What is another name for 2D animation? 4. What is the name of popular software used for creating 2D animation for use in web pages? 5. Which of the following is NOT a video file extension: MP4, AVI, QT, JPG, and MOV? 6. In animation, a keyframe is a frame in which the artwork differs from that of the previous frame: TRUE or FALSE? 7. MP3 is an extension of a _________ file 8. Which of the following two file formats has a smaller file size: WAV or MP3? 9. In a multimedia project, a storyboard details the text, graphics, audio, video, animation, interactivity, and other elements that should be used in each screen of the project: TRUE or FALSE? 10. DAT is an acronym for _________ 11. If you do not want external noise to enter into the room where a voice-over recording for a multimedia project takes place, then that room should be _________ 12. What is the name of the programming / scripting language of Flash? 13. With appropriate software and more than one GIF image, you can create a GIF animation: TRUE or FALSE? 14. Which HTML tag do you have to use to insert a Flash movie in a web page? 15. If you want to enlarge or reduce an image extensively from its original size without loss in quality, which format should the image be in: vector or raster? 16. A graphic image file name is tree.eps. This file is a bitmap image: TRUE or FALSE? 17. EPS is an acronym for __________ 18. A multimedia developer wants to use a video clip in a project and he wants the clip's file size to be as small as possible. Which video file format should he use: MP4 or AVI? 19. FPS stands for __________ 20. A broadcast / NTSC video requires how many FPS for it to play smoothly? 21. Codec is an acronym for __________. It can be hardware-based, software-based, or both: TRUE or FALSE? 22. Using Illustrator or CorelDraw you can create mainly what type of graphics: vector or raster? 23. What is the name of the function for integrating multimedia elements, programmatically and/or without programming, using software, to create a multimedia project? 24. What method of animation creates the in-between frames when you create the start and end points of the animation? 25. A most basic skill a person requires to pursue an animation career is ________ skills. Multimedia Quiz Answers 1. Voice script 2. Joint Photographic Experts Group; Graphic images 3. Cell animation 4. Flash 5. JPG 6. TRUE 7. Sound/Music/Audio file 8. MP3 9. TRUE 10. Digital Audio Tape 11. Soundproof 12. ActionScript 13. TRUE 14. <object> (or <embed>) 15. Vector 16. FALSE 17. Encapsulated Post Script 18. MP4 19. Frames Per Second 20. 30 21. Coder-Decoder; TRUE 22. Vector 23. Authoring 24. Tweening 25. Drawing
<urn:uuid:1ee5d646-a0d7-4332-9b63-d9351dbd3b66>
CC-MAIN-2022-40
https://www.knowledgepublisher.com/article/871/multimedia-quiz-multimedia-questions-and-answers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00025.warc.gz
en
0.822073
772
2.765625
3
We’ve all seen the Control Panel applet for Power Options: It lets you control a wide variety of power-related settings, such as when the system goes into standby mode, and how the system interacts with the UPS. Because you can control Windows 2000’s power management through a simple interface, it’s easy to miss what’s really going on. In this article, I’ll take you on a behind-the-scenes tour of Windows 2000’s power management capabilities. Functions of Power Management Services When you first glance at the Power Options applet, shown in Figure 1, you might assume that the Windows 2000 power management system is limited to turning the monitor and hard disks on and off, and putting the system to sleep. However, this isn’t the case. Besides interacting with the UPS (which is beyond the scope of this article), power management performs four critical functions (beyond the obvious): - The power management services must be able to wake a sleeping system instantly. After all, putting the computer to sleep would be pointless if the user had to wait for a full boot sequence to complete during the wake up. It would be just as easy to have the user turn the system off. Instead, the user can simply press the power button, and the computer will instantly return the user to the point at which the computer went to sleep. - The power management services must be able to respond to wake-up events. A wake-up event is some event the computer must be awake to handle. For example, suppose a computer has a fax modem, and someone tries to send a fax to that computer. The modem receiving the call could be a wake-up event. (This is a further illustration of why it’s important for a computer to wake instantly from a sleep state–if the computer takes a long time to wake up, the caller will hang up before the computer wakes up to receive the fax.) Other examples of wake-up events include running scheduled tasks such as virus scans or automatically downloading e-mail messages. Many computers also contain wake-on-LAN capabilities, which will wake the computer if it receives a packet from across the network. - The power management services must be able to adjust software to changing power states. If a computer is going to sleep, it must be able to communicate that information to applications so that certain types of applications can stay semi-active, while others hibernate. For example, you don’t want your computer to wake up just because your word processor is set to do an auto-save every ten minutes. On the other hand, you don’t want the Task Scheduler to completely go to sleep if it has important tasks to perform. The power management services must know the differences between such programs and be able to interact with them accordingly (a short code sketch after this list shows how an application signals such needs). - The power management services must be able to fully interact with the hardware. Various devices have different power requirements while in sleep mode, depending on what they do. For example, if your system uses a wake-on-LAN feature, the network card must retain some amount of power to provide that functionality. The Windows 2000 operating system must be able to interact with hardware devices to ensure that they have the proper amount of power for their current function. Not only must the operating system interact with the power needs of existing devices, but if you add devices to your system in the future, the power management services must also be able to interact with those new devices. 
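Here is that sketch. On Windows, an application declares its power needs through the Win32 SetThreadExecutionState call, which is how a presentation or media program tells the power policy not to sleep the system or blank the display while it is busy. The example uses Python's ctypes purely for illustration (Python is not period-accurate for a Windows 2000 article, though the API itself dates from Windows 2000); the flag values are the documented Win32 constants.
```python
import ctypes

# Documented Win32 execution-state flags (winbase.h)
ES_CONTINUOUS       = 0x80000000
ES_SYSTEM_REQUIRED  = 0x00000001
ES_DISPLAY_REQUIRED = 0x00000002

def keep_system_awake():
    """Tell Windows this application needs the system and display to stay on."""
    ctypes.windll.kernel32.SetThreadExecutionState(
        ES_CONTINUOUS | ES_SYSTEM_REQUIRED | ES_DISPLAY_REQUIRED
    )

def restore_power_policy():
    """Clear our requirements so the normal power policy resumes."""
    ctypes.windll.kernel32.SetThreadExecutionState(ES_CONTINUOUS)
```
This is exactly the distinction drawn above between a word processor's auto-save (which sets no such flags and lets the machine sleep) and software that genuinely needs the system awake.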
The power management services have a lot of responsibility, and you may be wondering how they get the job done. Windows 2000 regulates power by complying fully with the Advanced Configuration and Power Interface (ACPI) standards used by most modern computer hardware. Windows 2000 implements this support through something called the On Now Power Management Design Initiative. The basic premise of On Now is that when the operating system loads, all of the power management responsibilities are handed over to Windows 2000 from the computer’s BIOS (assuming the BIOS supports ACPI). All of the power management functions are still performed at the BIOS level; the only difference is that they are controlled by Windows rather than through the internal BIOS settings. Once Windows has gained control of the computer’s power management functions, it decides which devices get what level of power through something called a power policy. A power policy is an internal setting that Windows builds based on the user preferences shown in Figure 1 and information collected about each hardware device in the system. The hardware devices inform Windows of their power requirements through their associated device drivers. For example, a modem might tell Windows its full power setting and how much power is needed for standby mode. Because the applications and the hardware both interact so closely with Windows, Windows can make intelligent decisions about the power needs in particular situations. For example, Windows can make sure that the screen doesn’t turn off in the middle of a Power Point presentation. As you can see, there is much more to managing a system’s power than meets the eye. The Windows 2000 power management services allow the hardware and software to work together to best meet the user’s power management needs. // Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it’s impossible for him to respond to every message, although he does read them all.
<urn:uuid:c81a39ed-a33b-4530-9799-20ea522531e9>
CC-MAIN-2022-40
https://www.enterprisenetworkingplanet.com/os/windows-2000-power-management-behind-the-scenes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00025.warc.gz
en
0.944367
1,144
2.640625
3
- Dispersion phenomena
- Basic principles of PMD and CD
- Causes of PMD and CD in the fiber
- Effects of PMD and CD on the fiber
- Different types of dispersion
- EXFO’s PMD and CD measurement method
- Result analysis using pass/fail thresholds

An overview of general dispersion phenomena occurring in fiber-optic communications, as well as the problems caused specifically by polarization mode dispersion (PMD) and chromatic dispersion (CD). The course also describes the internal workings of EXFO’s PMD and CD analyzers and explains how to interpret the results obtained with the instruments. Hands-on exercises on PMD and CD analysis and result interpretation are also included in the curriculum.

The first part of this course consists of lectures using PowerPoint presentations and demonstrations, while the second part consists of specific hands-on PMD and CD testing exercises, as well as experiments on total PMD measurement, coefficient calculation, pass/fail result analysis using thresholds, and result interpretation. Attendees will receive a binder containing copies of the presentations and other handouts.
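For readers new to these quantities, the coefficient calculation and pass/fail logic named in the course outline reduce to two standard relations: chromatic dispersion spreads a pulse by roughly delta-tau = D x L x delta-lambda, and total PMD grows with the square root of fiber length (total PMD = PMD coefficient x sqrt(L)). The sketch below applies those relations with illustrative numbers; the threshold values are hypothetical placeholders for this example, not EXFO or standards-body recommendations.

```python
import math

def cd_pulse_spread_ps(d_ps_nm_km: float, length_km: float, linewidth_nm: float) -> float:
    """Chromatic dispersion delay spread: delta_tau = D * L * delta_lambda."""
    return d_ps_nm_km * length_km * linewidth_nm

def total_pmd_ps(pmd_coeff_ps_sqrt_km: float, length_km: float) -> float:
    """Total PMD scales with the square root of fiber length."""
    return pmd_coeff_ps_sqrt_km * math.sqrt(length_km)

if __name__ == "__main__":
    length = 80.0                                   # km of fiber under test
    cd = cd_pulse_spread_ps(17.0, length, 0.1)      # 17 ps/(nm*km) is typical at 1550 nm
    pmd = total_pmd_ps(0.1, length)                 # 0.1 ps/sqrt(km) coefficient

    # Hypothetical pass/fail thresholds chosen only to show the comparison.
    print(f"CD spread: {cd:.1f} ps -> {'PASS' if cd < 200 else 'FAIL'}")
    print(f"Total PMD: {pmd:.2f} ps -> {'PASS' if pmd < 10 else 'FAIL'}")
```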
<urn:uuid:a753f870-d7e6-450a-91c8-6454569d5001>
CC-MAIN-2022-40
https://www.exfo.com/en/services/training/courses/introduction-dispersion-testing-cd-pmd/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00025.warc.gz
en
0.914386
238
2.796875
3
The threat of ransomware has loomed over critical infrastructure, such as utilities and transportation networks, for years. A 2018 report by the American Petroleum Institute warned of dire risks to the national gas and oil industry, and last month, the U.S. Department of Energy announced a focused 100-day initiative to modernize the nation’s electric grid to improve cyberattack visibility, detection, and response.

On May 7, security experts’ fears were realized when Colonial Pipeline Company, which supplies nearly half of the U.S. East Coast’s petroleum, was hit by a ransomware attack that forced it to shut down some systems and temporarily suspend all pipeline operations. The attack, for which ransomware gang DarkSide has claimed responsibility, prompted the U.S. Department of Transportation to issue an emergency order lifting some regulations on drivers carrying fuel in 17 states and the District of Columbia.

IT-OT Convergence Fuels Cyberattacks on Utilities

This isn’t the first ransomware attack on a U.S. energy company. Last year, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (DHS CISA) issued an advisory after a ransomware attack on an unidentified natural gas compression facility impacted “control and communication assets on the [facility’s] operational technology (OT) network,” forcing it to shut down for two days. The advisory goes on to explain that the attack began with a spearphishing scheme, by which cybercriminals breached the facility’s IT network, then used this access to pivot to OT systems and plant the ransomware.

Historically, IT systems were siloed from OT systems, the highly specialized industrial hardware and software used by utilities, transportation networks, and manufacturers. OT systems were typically air gapped, meaning they weren’t connected to IT systems or the internet. However, as utilities and other organizations handling critical infrastructure digitally transformed, OT systems were connected with IT systems and hooked up to the internet. This IT-OT convergence enabled utilities to deliver energy more efficiently, benefiting both consumers and the environment, but it also enabled cybercriminals to use IT systems as a backdoor into OT systems.

While cyberattacks on IT systems are costly and destructive, most of them don’t put people’s lives in danger. The same cannot be said for attacks on OT systems, which have real-world ramifications. Cyberattacks on utilities can damage grid assets, causing power outages, tainting water supplies, damaging the environment, and putting human health and life at risk.

Securing Passwords Goes a Long Way Toward Securing Critical Infrastructure Against Ransomware

According to a recent study by Coveware, about 75% of ransomware attacks begin in one of two ways, both of which leverage compromised login credentials:

- By compromising remote desktop protocol (RDP) services, either by exploiting an unpatched vulnerability or using a stolen or guessed password.
- Through email phishing.

This means that simply by securing their passwords, energy providers and other critical infrastructure organizations can substantially reduce their risk of a ransomware attack. Additionally, because compromised login credentials are also responsible for over 80% of successful data breaches, they’ll simultaneously be defending their systems against data breaches.
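One concrete, low-effort control follows directly from those statistics: screen passwords against known breach corpora before they are allowed into service. The sketch below queries the public Have I Been Pwned range API, which accepts only the first five characters of a password's SHA-1 hash, so the password itself never leaves the machine. The endpoint shown is the real published one, but treat the script as an illustrative sketch rather than a production control.

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Return how often a password appears in the HIBP breach corpus (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # k-anonymity: only the 5-character hash prefix is sent over the wire.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    for pw in ("P@ssw0rd", "correct horse battery staple"):
        print(f"{pw!r}: seen {times_pwned(pw)} times in known breaches")
```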
Keeper helps energy providers and other critical infrastructure organizations secure their OT systems by securing the most vulnerable part of their IT networks: their employees’ passwords. Keeper’s zero-knowledge, enterprise-grade password security and encryption platform gives IT administrators complete visibility into employee password practices, enabling them to monitor password use and enforce password security policies organization-wide, including strong, unique passwords and multi-factor authentication (MFA).

Fine-grained access controls allow administrators to set employee permissions based on their roles and responsibilities, as well as set up shared folders for individual groups, such as job classifications or project teams.

For enhanced protection, organizations can deploy valuable add-ons such as Keeper Secure File Storage, which enables employees to securely store and share documents, images, videos, and even digital certificates and SSH keys, and BreachWatch™, which scans Dark Web forums and notifies IT administrators if any employee passwords have been compromised in a public data breach.

Keeper takes only minutes to deploy, requires minimal ongoing management, and scales to meet the needs of any size organization.

Not a Keeper customer yet? Sign up for a 14-day free trial now! Want to find out more about how Keeper can help your organization prevent security breaches? Reach out to our team today.
<urn:uuid:1ce887f9-5d65-4b85-9e86-74d6d5f3b06c>
CC-MAIN-2022-40
https://www.keepersecurity.com/blog/2021/05/12/password-security-protects-energy-grids-other-critical-infrastructure-from-ransomware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00025.warc.gz
en
0.941188
927
2.59375
3
Think of the damage that a hacker can do: breaking into people’s accounts, spreading fake websites, sending out dangerous spam, tricking people into handing over personal information, infecting millions with malware, and even denying access to the internet. Now imagine what a hacker can do with an army of computers at their disposal, multiplying those resources by a factor of thousands or millions. Such armies of computers actually exist, and they are called “botnets”.

What is a Botnet?

Basically, a botnet is a network of infected computers which, under the command of a single master computer, work together to accomplish a goal. It may seem simple, but it is the powerhouse behind some of the worst attacks hackers can attempt.

A botnet includes groups of computers that have been infected with malware. A hacker remotely controls all of the computers in the group to do things like sending spam messages, generating fake web traffic, conducting DDoS attacks, serving ads to everyone in the botnet, or even extorting payment from users to be removed from the botnet.

A botnet relies on two things. First, it needs a large network of infected devices, called “zombies”, to do the grunt work for whatever scheme the hacker has planned. Second, it needs someone to actually command them to do something; this is the Command and Control center, or “bot herder”. Once these things are in place, a botnet is ready to bring chaos and do harm to people and systems.

How do Botnets work?

There are two primary ways that botnets are set up: the client-server model and the peer-to-peer model. In both cases, the Command and Control owner can command and control the network. This is why botnet operators use digital signatures: to ensure that only commands issued by the hacker, or whoever the hacker sold the botnet to, are spread through the entire network.

How to stop Botnets from stealing Data

Botnet attacks are generally combined with other cyber threats, which makes detection challenging. However, eliminating botnet footholds early can help businesses stay protected from such attacks. Botnets are difficult to stop once they have taken control of a user’s devices, so to reduce phishing attacks and other issues, make sure each of your devices is well guarded against this malicious hijacking.

Neumetric, a cybersecurity services, consulting & products Organisation, can help you reduce your security cost without compromising your security posture. Our years of in-depth experience in handling security for Organisations of all sizes & in multiple industries make it easier for us to quickly execute cost-cutting activities that do not bring value to you, while you continue focusing on the Business objectives of the Organisation.
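Returning to the command-authentication mechanism described above: signing and verifying commands is ordinary public-key cryptography, and it is the same primitive defenders use to authenticate software updates. Here is a hedged illustration of the concept using the pyca/cryptography library; all names and the command string are invented for the example.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key generation happens once; only the public key is distributed to nodes.
controller_key = Ed25519PrivateKey.generate()
trusted_public_key = controller_key.public_key()

def sign_command(command: bytes) -> bytes:
    return controller_key.sign(command)

def accept_command(command: bytes, signature: bytes) -> bool:
    """A node acts on a command only if the signature verifies."""
    try:
        trusted_public_key.verify(signature, command)
        return True
    except InvalidSignature:
        return False

cmd = b"update-config interval=300"
sig = sign_command(cmd)
print(accept_command(cmd, sig))                 # True: authentic command
print(accept_command(b"tampered " + cmd, sig))  # False: rejected
```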
<urn:uuid:bc6b8fe0-af62-4817-b104-cddc82421fde>
CC-MAIN-2022-40
https://www.neumetric.com/what-is-botnet-how-to-prevent-botnet-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00025.warc.gz
en
0.951793
574
3.328125
3
It is safe to say that SSL and TLS certificates have become the foundation of web security in today’s digital world. It doesn’t matter whether you own a blog with over a thousand followers or manage a million-dollar business website: you will need to use HTTPS, and therefore an SSL certificate, to protect the website. However, several business owners often ask how SSL helps to secure a website.

Before we delve into that, we would like to shed light on what will happen if you do not use SSL certificates on your website. For instance, you will see a drop in the SEO ranking of your website if you do not have an SSL certificate. It is also important to note that Google will issue a ‘not secure’ warning for websites without SSL certificates. This will affect the reputation and growth of your business, which is something most people would want to avoid. Fortunately, you can avoid all these troubles by investing in an SSL certificate.

How to Secure a Website With an SSL Certificate

SSL and TLS certificates can be described as digital certificates which use encryption to keep website data safe and secure. If your website is currently running on HTTP, then you will need to add an SSL or TLS certificate to provide an additional layer of security.

Most of you will be aware that HyperText Transfer Protocol, or HTTP, is an application protocol used to share data on the World Wide Web (WWW). HTTP defines how data and other information can be shared and used on the web. It also dictates how internet browsers and web servers respond to specific actions, such as responding to requests or commands. HTTP makes it easy for internet users to interact with HTML files and other resources on a site. This is accomplished by transmitting hypertext messages between servers and browsers through TCP (Transmission Control Protocol).

HTTP uses a set of request methods for completing requests, such as GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, PATCH, and CONNECT. All HTTP servers support the HEAD and GET requests, but not all of them support the other request methods.

Does HTTP Use SSL or TLS Certificates?

As mentioned earlier, HTTP is not secure and it doesn’t use SSL or TLS certificates. If you visit a website over the HTTP protocol, your web browser may display a warning message to indicate that the site you are visiting is not safe and secure. The reason HTTP is not secure is that all responses and requests are delivered as plain text, so anyone in a position to intercept the traffic can see them. This means that hackers may maliciously modify, steal, or delete the data.

How SSL and TLS Certificates Make HTTP Secure

Website owners and admins can make sure that all responses and requests shared with their website are secure; to do so, you need to purchase an SSL certificate. SSL certificates help businesses by encrypting all HTTP responses and requests. TLS/SSL certificate technology transmits all responses and requests in a format that hackers and cybercriminals are not able to interpret or access. With plain HTTP, all requests travel as simple plain text, which makes them an easy target for hackers, but that’s not the case when using SSL certificates.
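You can observe this encryption layer from the client side with nothing more than Python's standard library. The snippet below opens a TLS connection and prints the negotiated protocol, cipher, and certificate details; the ssl and socket calls are real standard-library APIs, and example.com is just a placeholder host.

```python
import socket
import ssl

host = "example.com"  # placeholder host
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("TLS version:", tls.version())   # e.g. TLSv1.3
        print("Cipher:", tls.cipher())         # negotiated cipher suite
        cert = tls.getpeercert()
        print("Issued to:", dict(item[0] for item in cert["subject"]))
        print("Expires:", cert["notAfter"])
```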
Websites that use TLS or SSL certificates for encrypting responses and requests transform the data into a random mix of letters and numbers. This means that hackers will not be able to read or interpret the information. So, if you want to ensure that your website is secure, it is best to get in touch with reliable SSL certificate providers to buy a digital certificate. Here are a few effective ways to bolster your SSL security.

Install The Site Seal

Most website admins and business owners probably know that SSL certificates come with a site seal. If you are not aware of this, you should note that the site seal will contain the name of the certificate-issuing authority. The seal indicates that a reliable and professional third-party authority has issued the certificate and verified the identity of your business. Businesses that place the site seal in the right spot on the website can easily remind visitors that you are a trusted entity with whom they can do business.

Enforce HTTPS With HSTS

There is no point in purchasing and installing an SSL certificate if your site is still available over the HTTP protocol. This is why businesses must direct users to HTTPS instead of HTTP. If you are wondering how to do it, HSTS (HTTP Strict Transport Security) is the solution you are looking for. HSTS serves the crucial purpose of protecting your website from protocol downgrade attacks and cookie hijacking by forcing internet browsers to make connections only over HTTPS.

Generate A CAA Record

If you already have a preferred Certificate Authority and you only want them to issue TLS/SSL certificates for your domain, then you should be looking at Certification Authority Authorization (CAA). You will need to generate a CAA record for your website. Once you have done that, no certificate authority other than the ones you allow will be able to issue SSL certificates for your site. This helps you avoid the chance of mis-issuance on both your side and the CA’s side.

SSL certificates are great when it comes to offering adequate protection for your website. That said, SSL certificates should not be considered a one-stop solution. This is mainly because SSL certificates provide encryption and authentication for data in transit: they encrypt data as it is transmitted back and forth between the server and the browser. This kind of protection is critical for securing sensitive user data such as passwords, credentials, and credit card details. Unfortunately, it is not enough on its own to get ahead of hackers and cybercriminals, which is why you should implement a comprehensive security strategy for your website. The above-mentioned tips will help you bolster your SSL security.
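As a concrete sketch of the HSTS and CAA controls just described, the Flask snippet below attaches the Strict-Transport-Security header to every response (the header name and directives are the real standardized ones), and a comment shows what a CAA record authorizing a single certificate authority looks like in a DNS zone file. The domain, max-age, and CA choices are illustrative assumptions, not recommendations.

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    # Instruct browsers to use HTTPS only, for one year, across subdomains.
    # Browsers honor this header only when it is served over HTTPS.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    return response

@app.route("/")
def index():
    return "Hello over HTTPS"

# Corresponding DNS zone entry (not Python) restricting issuance to one CA:
#   example.com.  IN  CAA  0 issue "letsencrypt.org"

if __name__ == "__main__":
    app.run()  # in practice, TLS termination usually happens at a proxy in front
```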
<urn:uuid:36a2be79-18f7-40d5-b11e-81fda9c48edf>
CC-MAIN-2022-40
https://cyberexperts.com/how-to-bolster-your-website-security-with-ssl-tls/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00226.warc.gz
en
0.934167
1,272
2.78125
3
On February 16, 1968, the first 9-1-1 call was placed by Alabama House Speaker Rankin Fite in Haleyville, Alabama. Before this time, if someone had an emergency, they would dial “0” for the operator or call the local station. It wasn’t until the Wireless Communications and Public Safety Act of 1999 that 9-1-1 was officially established as the nation’s emergency calling number.

Back in 1968, AT&T was the telephone service provider for most of the United States, and rotary phones were predominantly used. For those of you who have never used a rotary phone, YouTube demos highlight what it was like to place a call. The 9-1-1 system is now so familiar that most people don’t even think about it, until an emergency happens. 9-1-1 remains a vital part of everyday crime-fighting, fire and emergency medical response, as well as the management of major events and the response to natural disasters.

Legacy 9-1-1 systems installed decades ago are based on analog circuit-switched technology used in the Public Switched Telephone Network (PSTN), and they remain the backbone of how calls are delivered. While not much has changed with the technology in use, what has changed is how calls to 9-1-1 are placed. Approximately 240 million 9-1-1 calls are placed a year, with 80% of calls coming from cellular phones. With the proliferation of smart devices now in use, new technology colliding with old infrastructure can have major implications for call processing speed, flexibility to route calls, and location accuracy at PSAPs (Public Safety Answering Points) when help is needed most.

9-1-1 services need to grow beyond voice to save seconds and lives. Public safety agencies recognize the need to improve how they support requests for assistance, and they face many challenges in transforming how they can respond faster and smarter. The efforts of the NG9-1-1 Institute, APCO International, NENA, and iCERT place the critical needs of public safety at the forefront to achieve the true promise of Next Generation 9-1-1: helping first responders do a better job and protect the well-being of the communities they serve.

The next 50 years: accelerating transformation. NG9-1-1 will eventually replace the current 9-1-1 systems, allowing citizens to send text messages, photos, videos, and other digital information to public safety agencies so they can respond more safely and effectively. Motorola Solutions is proud to have worked alongside public safety agencies for 90 years, innovating mission-critical communications and providing service and support for call-taking and dispatch solutions for over 30 years, including PremierOne and Spillman Flex. Our expansion investment in CallWorks and pending acquisition of Airbus DS Communications, along with our partnership with RapidSOS, are designed to help agencies accelerate beyond NG9-1-1 and expand their capabilities with enhanced intelligence for improved response and safety.

Over these past 50 years, 9-1-1 has saved thousands of lives thanks to the many heroes who helped answer the calls. As technology rapidly evolves, Next Generation 9-1-1 delivers the flexibility and tools needed to effectively and efficiently support operations and achieve the best possible outcome for years to come.
<urn:uuid:f72b0dc1-9caa-4f08-850a-750ed789f95a>
CC-MAIN-2022-40
https://blog.motorolasolutions.com/en_us/9-1-1-turns-5-0/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00226.warc.gz
en
0.945044
672
2.65625
3
For the deaf and hard of hearing, your facial expressions convey so much meaning. “Language is grammar — it’s sentence structure,” said Sharron Hill, the director of the American Sign Language Interpreting (ASLI) Program at the University of Houston. “And so, the way that individuals who communicate with a visual mode of communication convey grammar is on the face.”

ASL uses hand movements, body language, facial expressions, and lip movements to convey the nuances of a transmitted message. For example, it matters whether your eyebrows are up or down: this small detail signals what kind of question is being asked. If the eyebrows are up, it is a yes-or-no question; if they are down, it is an open-ended question. A mouth held wide open or closed tightly conveys how large or small an object is, and whether it is thin, smooth, or thick. The movement of the tongue can tell you how far away something is, whether right next to you or thousands of miles away. A nod of the head shows whether you understand something or not. The way you turn your shoulders shows who is talking in a story or taking the lead in a race.

It has been said that 50% of ASL consists of facial expressions and body movements. The intricacies of the language are endless, and the best way to learn them is through those who are native to the language. If you’re learning ASL, find people in the deaf community, and don’t be afraid to ask questions! You will find that most community members are eager to help those who sincerely want to learn.

Learning ASL has personally brought me many joys. I have developed lifelong friendships with those who first taught me the language. The language and the culture have made me more in touch with the unspoken communication we all use, and they have given me many opportunities to help others. I would, without doubt, suggest it to anyone who has the time to learn. My suggestion to you, if you are currently learning, is to seek out native users. Spend time with them; get to know their history and the challenges they face. You will surely come out with a new perspective and increased incentive to learn their language.
<urn:uuid:57967986-9f32-4f3c-afb8-bec8ce7033f7>
CC-MAIN-2022-40
https://www.drware.com/the-importance-of-facial-expressions-within-american-sign-language/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00226.warc.gz
en
0.952945
477
3.140625
3
Artificial Intelligence has brought astonishing developments to the education industry, benefiting both students and institutions. AI is now part of our everyday lives, as we find ourselves surrounded by this technology at every turn. From automatic parking systems and smart sensors that capture striking photos to personal assistants, AI has become more of a necessity than we may realize.

Furthermore, Artificial Intelligence is unquestionably evident in the education sector, pushing its way past traditional methods. The educational system is becoming more convenient and personalized thanks to the diverse applications of AI. As a result, this has transformed the way people learn, as educational materials become more accessible through smart devices and computers. Today, and more so in a world gripped by a pandemic, students are not obligated to attend physical classrooms to learn, as long as they are well equipped with computers and an internet connection. Moreover, AI is enabling the automation of administrative tasks. This allows institutions to reduce the time once allotted to complex tasks, freeing educators to spend more time with students.

So, how has AI transformed the education system?

AI can automate administrative duties for teachers and academic institutions. Typically, instructors spend a considerable amount of time grading exams, evaluating homework, and presenting valuable feedback to their students. However, today’s technology can automate teachers’ grading duties, especially when multiple-choice tests are involved. This means professors gain extra time to spend guiding their students in the right direction, instead of taking up prolonged hours grading tests. And this isn’t limited to simple multiple-choice tests: software providers are developing more reliable methods to grade written answers and standard essays. Another area that benefits from AI is the institution’s admissions committee. Artificial Intelligence provides accessible ways to automate the analysis, distribution, and processing of paperwork.

AI is introducing new techniques in education, paving the way to ensure that all learners achieve their ultimate scholastic accomplishment. Software can already compose digital content of comparable quality to that of several AI essay-writing services, and this technology has now entered the classroom. Smart content is not just about writing; it also incorporates virtual content such as video meetings and video lectures, initiating a new turn for textbooks. AI systems are already working with traditional syllabuses to produce customized books for specific subjects. Consequently, books are moving toward digitalization, and new education platforms are being designed to accommodate students of all academic levels and ages. An illustration of the above is Cram101, which applies AI to make textbook content easier to understand and navigate through chapter summaries, flashcards, and practice tests. Another beneficial AI interface is Netex Learning, which allows professors to design electronic curriculums and educational content across several devices. Netex also incorporates online support applications, audio, and descriptive videos.

Just as Netflix offers personalized recommendations, the same can be done in education.
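To make the personalization idea concrete, here is a deliberately tiny sketch of the kind of logic behind such recommendations: it scores unseen lessons for a student by weighting other students' ratings by how similar their histories are (user-based collaborative filtering). Real systems are far more sophisticated; every name and number here is invented for illustration.

```python
# Toy user-based collaborative filtering over lesson ratings (1-5).
ratings = {
    "ana":  {"fractions": 5, "geometry": 3, "algebra": 4},
    "ben":  {"fractions": 4, "geometry": 2, "algebra": 5, "statistics": 4},
    "cleo": {"fractions": 2, "geometry": 5, "statistics": 2},
}

def similarity(a: dict, b: dict) -> float:
    """Crude similarity: 1 / (1 + mean absolute rating gap on shared lessons)."""
    shared = a.keys() & b.keys()
    if not shared:
        return 0.0
    gap = sum(abs(a[l] - b[l]) for l in shared) / len(shared)
    return 1.0 / (1.0 + gap)

def recommend(student: str):
    mine = ratings[student]
    scores = {}
    for other, theirs in ratings.items():
        if other == student:
            continue
        weight = similarity(mine, theirs)
        for lesson, r in theirs.items():
            if lesson not in mine:
                scores.setdefault(lesson, []).append(weight * r)
    if not scores:
        return None
    return max(scores, key=lambda l: sum(scores[l]) / len(scores[l]))

print(recommend("ana"))  # suggests the unseen lesson rated well by similar students
```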
When AI is introduced into classrooms, teachers are not replaced; rather, they can operate better by providing personalized support and advice to each pupil. AI generates personalized in-class tasks and final exams, helping secure the best possible outcome and assistance for each student.

Immediate feedback is one of the fundamental keys to successful tutoring and student improvement. With AI-powered applications, students receive focused and customized feedback from teachers, and teachers have the option to condense lessons into smart study guides and flashcards. This is also beneficial at the college level, where university students gain more time to interact with professors. Smart tutoring systems such as Carnegie Learning offer prompt feedback and work directly with students, thanks to AI.

Education should not have any limits, and AI can help remove boundaries. Technology enables revolutionary developments by making any course teachable wherever you are in the world, at any time. Education platforms powered by AI also equip students with critical IT skills. As technology advances and AI develops, a wider variety of courses will become available online, and students will have the luxury of learning from anywhere in the world.

AI augments IT methods to reveal new efficiencies. For example, schools can plan the means to keep students from getting lost in crowds as they move through corridors, much as town planners use AI technologies to reduce traffic jams and enhance pedestrian safety. AI can also help analyze complex data to facilitate data-driven forecasts, allowing precise preparation for the future. For instance, this can be useful when allocating seats at school gatherings or ordering food from local cafeterias. Data-driven projections can also help schools avoid the waste caused by excess purchasing, thereby reducing costs.

It is rare these days to find teachers, let alone students, searching the library for materials or information. Thanks to Google, we can find anything we are looking for with a tap of our fingers. Nonetheless, sifting through the results is still a substantial task in itself. Today, programs such as Quizlet help students by presenting exactly what they are searching for.

AI also plays a significant role in enhancing the lives of disabled individuals, offering more reliable support to students with special needs in yet another example of AI’s ability to personalize education. For example, speech-recognition software such as Nuance can transcribe words within a lesson for students with writing challenges or low mobility.

Traditional educational systems have focused primarily on students’ memorization rather than their comprehensive understanding. With Virtual and Augmented Reality, AI can create a vibrant learning atmosphere in which students can traverse the galaxies, see world landmarks, and much more.

Like many new technological advancements, AI is no stranger to the steep expense that comes with it. The high cost applies not only to the product itself but also to its maintenance and repair. Moreover, as AI requires additional digital tools, the amount of power needed to operate schools will rise drastically. Schools will need to extend their budgets to meet these expenses and find alternative options to balance power consumption.
If schools fully revert to AI in the classroom, these applications could replace teachers in various respects. A significant role of schools today is the connection between teachers and students, and the way these personal relationships shape behavior. If we use AI only to make teaching more efficient, we may ultimately become reliant on technology. Just as many students would lose their human connection with teachers, some AI technologies may advance enough to replace many school staff. From administration to teaching, AI holds an answer to everything. Online learning has removed the limits on class sizes. With such adjustments, AI might be a decisive factor in a remarkable rise in unemployment within the education sector.

The influence of AI technology will be observed from the most basic levels of education through higher learning institutions. Artificial Intelligence may also advise students about career paths based on their goals and progress, supporting them even beyond academics. With its fast-track advancement, it is only a matter of time until we discover AI’s ultimate impression on the education industry.
<urn:uuid:29fa5cff-74e8-493e-afa6-4c7eab1ac090>
CC-MAIN-2022-40
https://plat.ai/blog/ways-ai-is-changing-the-education-industry/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00226.warc.gz
en
0.952212
1,453
3.234375
3
Most large companies are able to financially survive a cyberattack. But for a small business with fewer employees and less revenue, a data breach can bring business to a halt, and the costs associated with recovery can run a bank account dry.

Ransomware, a type of malware designed to render data or an entire network useless, is one of the most common ways hackers try to extort money from small businesses. Typically, the victim has to pay the attacker in exchange for a decryption key, which can cost anywhere from a few hundred to a few thousand dollars, depending on the industry and whether a cyberforensics team is needed.

Eighty-nine percent of breaches overall this year had a financial or espionage motive, according to the Verizon 2016 Data Breach Investigations Report. It is estimated cybercrimes will cost businesses more than $2 trillion each year by 2019, according to data from Checkmarx, a company specializing in application security.

Continue reading the original article on Fox Business.
<urn:uuid:c26ba14c-a106-4945-9f59-2733d37b13d7>
CC-MAIN-2022-40
https://checkmarx.com/in-the-news/can-small-business-afford-hacked/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00426.warc.gz
en
0.945115
210
2.53125
3
Independent evaluations that can establish scientific validity will curtail the impact of biases and human error during forensic investigations, the watchdog agency says.

Forensic algorithms play a crucial role in modern criminal investigations, helping law enforcement determine whether an evidentiary sample can be matched to a specific person. While the technology can curtail subjective decisions and reduce the time it takes analysts to reach conclusions, it comes with its own set of challenges. In a follow-up to a May 2020 report on how forensic algorithms work, the Government Accountability Office outlined the key challenges affecting the use of these algorithms and the associated social and ethical implications.

Law enforcement agencies primarily use three kinds of forensic algorithms in criminal investigations: latent print, facial recognition, and probabilistic genotyping, GAO said. All three compare evidence from crime scenes to an online database. However, a number of factors, ranging from the quality of the evidence to the size of the database and the age, sex, and racial demographics represented in it, have the potential to reduce the accuracy of the findings. Analysts themselves are subject to human error, and biases differ from person to person.

The accuracy of latent print analysis frequently depends on the percentage of the fingerprint covered by the sample and whether it is smeared or distorted, making it difficult to draw precise inferences when the quality of the evidence is compromised. Similarly, the accuracy of facial recognition algorithms can vary when individuals wear glasses or makeup, or if the image was taken from an extreme angle.

According to GAO, law enforcement also runs into problems assessing the validity of probabilistic genotyping, the technology used in DNA profiling. Most studies evaluating probabilistic genotyping software have been conducted by law enforcement or the software developers themselves, the report stated. A report from the President’s Council of Advisors on Science and Technology noted that independent evaluation is often required to establish scientific validity, but there have been few such studies.

GAO offers policymakers three ways to improve the reliability of forensic algorithms. The first involves increased training for law enforcement analysts and investigators to boost their understanding of the algorithms and the results they produce. To reduce the risk of misuse and improve consistency, GAO says policymakers could also support the development and implementation of standards and policies for law enforcement’s testing, procurement, and use of such algorithms. Finally, GAO suggests that increased transparency about testing and performance could improve the public’s knowledge of the technologies and help address the corresponding challenges.

“By automating assessment of evidence collected in criminal investigations, forensic algorithms can expand the capabilities of law enforcement and improve objectivity in investigations,” GAO said. “However, use of these algorithms also poses challenges if the status quo continues.”

Read the full report here.
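The accuracy trade-offs GAO describes ultimately come down to where a match threshold is set on a similarity score. The toy sketch below compares made-up feature vectors with cosine similarity and shows how raising the threshold trades false positives for false negatives; the vectors and thresholds are fabricated for illustration and bear no relation to any real forensic system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Fabricated "feature vectors" for a crime-scene sample and database entries.
sample = [0.9, 0.1, 0.4, 0.7]
database = {
    "person_a": [0.88, 0.12, 0.38, 0.72],  # genuinely similar
    "person_b": [0.70, 0.30, 0.50, 0.60],  # superficially similar
    "person_c": [0.10, 0.90, 0.80, 0.05],  # dissimilar
}

for threshold in (0.90, 0.99):
    hits = [name for name, vec in database.items() if cosine(sample, vec) >= threshold]
    print(f"threshold {threshold}: candidate matches -> {hits}")
    # 0.90 flags both a and b (a false lead); 0.99 keeps only person_a.
```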
<urn:uuid:9e6e7a39-1519-4ebf-9eef-b05882679074>
CC-MAIN-2022-40
https://gcn.com/public-safety/2021/07/outside-reviews-can-limit-bias-in-forensic-algorithms-gao-says/315604/?oref=gcn-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00426.warc.gz
en
0.927996
553
2.890625
3
Antivirus software, or anti-malware software, prevents, scans for, detects, and removes viruses on a computer. It’s designed to protect your computer from outside threats, including computer worms, spyware, botnets, rootkits, keyloggers, and, of course, viruses. Practically everyone who uses computers has relied, and continues to rely, on antivirus software to protect their devices and their data from cybercriminals. Below, we discuss the evolution of antivirus software and where this technology could go in the future.

There is a stark difference between a firewall and an antivirus. As mentioned earlier, antivirus software is what protects you and your devices from outside threats. With an expected 6 billion users connected to the internet worldwide, antivirus software is more important than ever. Unfortunately, the growth in internet-connected users also means growth in potential cyber attackers, so there is a perpetual need to anticipate the trajectory of cybersecurity protection. Cybercriminals continue to adapt, which is why antivirus software has also had to adapt and grow over the years.

The first identified computer virus was created at BBN Technologies in 1971. Nicknamed “Creeper,” it was a self-replicating experimental program that hopped between networked machines; it was a proof of concept rather than a destructive attack. Modern viruses, by contrast, have evolved into far more sophisticated threats that can inflict complete damage at the click of a button.

Antivirus software emerged in the mid-80s, associated with now well-known names such as John McAfee and Eugene Kaspersky. The first commercial antivirus products, available by the early 1990s, were overly simple and needed a specific signature to identify each virus. The problem with these early products was exactly that reliance on known signatures: they couldn’t detect any new or unfamiliar virus, and as cybercriminals and cyber threats became increasingly advanced, antivirus software needed to evolve.

Early virus detection was simple compared to how antivirus works today, and merely keeping up with cybercriminals hasn’t been enough; cybersecurity experts need to stay ahead of them. This has led to what is called next-generation antivirus. Instead of only protecting against things the software can recognize, next-generation antivirus protects against behaviors. This new approach may employ cloud-based analytics, artificially intelligent algorithms, and machine learning technology.

Preventing malware infections isn’t a simple task, so some antivirus and security vendors are developing technology called Endpoint Detection and Response. This type of security can detect viruses and malware, as well as search for and examine potentially compromised endpoints. Endpoint detection and response (EDR), also referred to as endpoint threat detection and response (EDTR), is an endpoint security solution that combines continuous real-time monitoring and endpoint data collection with rules-based automated response and analysis capabilities. It works alongside an endpoint protection platform (EPP), which fights malware at the device level. The endpoint protection platform is defensive in nature and may also use components of traditional signature-based antivirus methods.

Antivirus software has come a long way since its conception. Next-generation antivirus, or NGAV, solutions prevent attacks by observing and countering attacker strategies, methods, and actions.
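To see why the signature approach eventually fell behind, consider how little it takes to implement its core idea: the sketch below flags a file only if its hash appears in a known-bad list, so any brand-new or slightly modified sample sails straight through. The "signature database" here is a placeholder invented for the example, not a real malware hash.

```python
import hashlib
from pathlib import Path

# A stand-in signature database: SHA-256 hashes of known malware samples.
# (Placeholder value; a real database would hold millions of entries.)
KNOWN_BAD = {
    "0" * 64,
}

def scan(path: Path) -> str:
    """Signature-based check: infected only if the exact hash is already known."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return "INFECTED" if digest in KNOWN_BAD else "clean"

if __name__ == "__main__":
    for f in Path(".").iterdir():
        if f.is_file():
            print(f"{f.name}: {scan(f)}")
```

Flip a single byte in a malicious file and its hash changes, so this scanner reports it clean; that brittleness is precisely what pushed the industry toward the behavior-based techniques described next.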
Next-generation antivirus takes traditional software to a new level with endpoint security detection and protection. It employs more than conventional file-based malware signatures, using a cloud-based approach, and combining predictive analytics with artificial intelligence and machine learning technologies truly makes it the next generation of antivirus.

Next-generation antivirus can prevent both malware and non-malware attacks, identify malicious threats and behavior, and recognize patterns of malicious activity from unknown sources. It can also collect and analyze widespread endpoint data to determine an attack’s origins. Lastly, it responds to new and emerging threats that may formerly have gone undetected.

Advancements in technological security, and staying ahead of cybercriminals, will always be an essential focus for our security. Online and digital security is important because most people have personal data stored online, and many individuals rely on connected devices and digital accounts for various purposes, including banking. It’s reported that 73% of individuals globally use online banking at least once a month, whether to check a current balance or transfer money, and that 59% of people find online banking more secure for certain complex processes.

Innovation in technological security is also important because cybercriminals are innovating too. The ideas of quantum supremacy and the “quantum apocalypse” are another reason why advancements in technology, from a general perspective, are important. Quantum supremacy is the experimental demonstration that a quantum computer can outperform traditional computers, including the world’s current fastest supercomputers, on some task. The concept of the quantum apocalypse goes hand in hand with this, because once quantum supremacy extends to cryptography, whoever has such a quantum computer could potentially break into encrypted or secret files. A quantum computer could, in theory, crack the security measures taken by any company or organization.

Security advancements and innovations in technology are vital because the world relies on data more than ever before. As a result, cybersecurity experts need to stay ahead of the pack when it comes to security and technology to safeguard users from cyber threats.

As the world continues to rely on technology, connected devices, and data, the need for the best antivirus and security measures becomes even more vital. Antivirus software has come a long way since its conception. Advancements in artificial intelligence, machine learning, and cloud-based analytics have made current antivirus software a formidable opponent for cybercriminals. Now that we’re on the verge of quantum computing, innovation in this aspect of computer technology will also be essential for computer security measures. The evolution of antivirus software shows that technology continues to advance on many fronts.
<urn:uuid:1324db4b-d5ef-484b-99b7-e9a8b3e5b575>
CC-MAIN-2022-40
https://www.colocationamerica.com/blog/the-evolution-of-antivirus-software
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00426.warc.gz
en
0.942615
1,235
3.53125
4
Information is the lifeblood of today’s business world. With timely and accurate information, business decisions can be made quickly and confidently. Thanks to modern technology, today’s business environment is no longer constrained by physical premises or office walls. We can work on laptops, smartphones, or tablets and, with nearly ubiquitous internet connectivity, we can work from any location.

With this growing dependence on technology, we need to accept that there will be times when that technology fails us, whether by accident or malicious intent. We do not expect 100% security in our everyday lives, and we should not expect it in our “technical” lives. What we need to do is design our systems and security programs to be resilient in the event of a failure. This means shifting our thinking away from solely preventing attacks toward developing strategies to ensure the business can continue to function should an attack happen and succeed. In essence, a change in mind-set is required, and not just in those developing the security programs, but also in senior business management.

To develop this resilience to cyber-attacks, the focus should be on ensuring the business understands the impact of a potential attack and the steps required to prevent, survive, and recover from it. This requires that security not be viewed as a purely technical discipline, but also from a business and risk management point of view. It requires technical people, who would traditionally focus on point solutions to specific technical threats, to translate the potential impact of security incidents into terms and language that business and non-technical people will understand.

Business operates on the principle of risk, and every business decision involves an element of risk. Sometimes the result of that risk is positive, for example, increased sales; sometimes it’s negative, such as loss of market share. Traditionally, security people with technical backgrounds look at issues in a very black-or-white way: it either works or it does not work, it is secure or not secure. Being resilient involves a change in mind-set whereby you look to identify how secure the business needs to be in order to survive.

This is a challenge for both technical and non-technical people. For business people, it requires getting involved in the decision-making process regarding information security by identifying which assets are critical to the business and how valuable those assets are. The risks to those assets then need to be identified and quantified so that measures can be put in place to reduce the levels of risk against those assets to a level that is acceptable to the business. So instead of a checklist approach to security, or an all-or-nothing approach, decisions are more focused on what the business needs, and investment can be directed to the most appropriate areas.

I often compare developing a resilient approach to security to how kings protected their crown jewels in their castles during the Middle Ages. The core of the castle is the Keep, its most secure part, where the most valuable assets were kept. The Keep itself was placed in a very defendable position within the castle walls, and those castle walls were defended in turn by moats, turrets, and drawbridges. Outside the castle walls were where the villagers and farmers lived.
In the event of an attack, the king would raise the drawbridge, leaving those outside open to attack, but these were acceptable losses to protect the crown jewels. Even if the castle walls were breached, the crown jewels would remain protected within the Keep.

In today’s security landscape, businesses need to identify what their crown jewels are and protect them accordingly by moving them to the digital equivalent of a Keep. Similarly, they also need to identify what should remain within the village, or even within the castle walls, and be prepared to lose that in the event of a major attack.

Effective security requires rigorous and regular risk assessment exercises, particularly as today’s business environments, technology, and security threats change so quickly. These risk assessments should be supported by good security policies outlining the controls required to manage the identified risks. Key to a resilient approach to security is having an effective incident response plan in place, so that when an attack happens the business can still function and survive.

It is time we moved on from designing our security infrastructure solely to avoid failure, and acknowledged and accepted that failure will happen. How we deal with that failure will determine how well our organizations can recover from security incidents. Instead of looking at how to avoid failure, we need to learn that failure is an option. What is not an option is not being resilient enough to recover from and survive such a failure.

Brian Honan is an independent security consultant based in Dublin, Ireland, and is the founder and head of IRISSCERT, Ireland’s first CERT. He is a Special Advisor to the Europol Cybercrime Centre, an adjunct lecturer on Information Security at University College Dublin, and he sits on the Technical Advisory Board of several information security companies. He has addressed a number of major conferences, wrote ISO 27001 in a Windows Environment, and co-authored The Cloud Security Rules.
<urn:uuid:f2ee6179-6a9b-4dcd-9695-14b0dba0b362>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2014/07/31/failure-is-an-option/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00426.warc.gz
en
0.965911
1,036
2.5625
3
Flexible Learning Spaces

The terms flexible seating, flexible furniture, and flexible classroom are really the end product of the pedagogical shift from teacher-centered learning to student-centered learning. A learner-centered environment embodies practices that optimize students’ movement from whole-group instruction, to smaller groupings, to personalized spaces. A flexible learning space is designed to morph classroom activities into different learning configurations by using furniture, materials, tools, and technology. It becomes an essential part of learning and teaching by enhancing both the students’ and the teacher’s sensory input, physical movement, and psychological well-being.

I like to use the term “mobile and modular” (or mobimod for short) to describe the live action of a classroom moving from one activity to the next. Imagine a fourth-grade classroom working on a paper airplane building project in small groups; then the students easily move their mobimod furniture to the walls to make room for a square dancing activity happening five minutes later, in the same space.

To optimize such a flexible design, these face-to-face learning spaces utilize five categories of furniture and technology to enhance student movement, which in turn helps spark their motivation, engagement, and creativity.

- Seating and Movement: Sitting with subtle movement while working independently or in groups
- Modular Sitting Tables: Sitting at shaped tables that optimize space while working in groups or independently
- Sit-to-Stand Tables and Movement: Having a standing option to weight-transfer while working independently or in groups
- AV & Visual Communications: Walls that talk, using audio, video, and visuals with a variety of fixed and mobile displays and boards
- Mobile Storage: Bin and cabinet places organized and optimized for stacking and mobility; personalized storage for each student, and for the variety of room materials, books, tools, and technology

From Classroom to Learning Studio

As a child in the 20th century, you probably attended K-12 classrooms where all the desks and chairs were the same, probably in rows, facing the front chalkboard. As 21st-century learning and teaching practices support literacy, STEAM, project-based learning, critical thinking, and collaborative learning, the curriculum has been transformed. What hasn’t evolved as much is the physical classroom. Stationary single or double desks and four-legged chairs still make up most classrooms today, designed for single-activity whole-group instruction. Change is often a slow process, especially considering the budget realities of changing out whole classrooms of furniture. However, in the last ten years many more teachers and administrators have moved together to sync their 21st-century curriculum with a sprinkling of flexible furniture and technology in their classrooms. I call this process “transitions to transformation,” as the physical classroom evolves into what many are now calling “Learning Studios.”

As we move forward in the 2020s, I see classrooms becoming learning studios as pedagogy and physical space converge to enhance student creativity, expression, and understanding.

Flexible Learning Space Assets

- Emulates the world of work: In the real world, adults work in teams. Project-based tasks are what most women and men do every day at their jobs.
In the 21st century, office space has transformed how people work in face-to-face environments that also facilitate online activities. Educators are empowered as designers to create learning spaces that now include a broader mix of hard and soft furniture, not only furniture made for schools. This new mix includes furniture from office and work, hotel and restaurant, and home and living-space environments.
- Designers of their space: A teacher working with her/his students should be the designer of their learning space. With a variety of district-standard flexible furniture, the classroom design is often the first class project of the school year.
- A social and emotional safe nest: When students walk into a classroom, they need a safe place to land. Cozy is cool, both a physical and an emotional feeling. Many students need the comfort of a welcoming classroom to serve as a springboard for the deep dive of learning with a class of peers.
- Living with Covid: As collaborative teams became standard practice in pre-Covid classrooms, the pandemic has created learning challenges that include the furniture: flexible-shape desks and tables that can be configured for group work, or easily reconfigured for social-distancing space and individualized seating.

To flex minds, we need to flex classroom space. Learning spaces in the 2020s need non-traditional, eclectic designs to create a positive learning environment. The key to a flexible learning space is combining a student’s need to fidget, rock, swivel, stretch, stand, and even get horizontal with mobile and modular furniture and technology. Subtle self-movement and weight transfer keep our brains stimulated and help prevent mental fatigue within a contained space. It’s really simple: physical movement sparks the mind, enhancing the motivation, engagement, and creativity that open paths for learning.

- Flexible Classroom by VS – https://www.youtube.com/watch?v=IU2C0hy7FV8
- Fostering Student Well-Being in Elementary Learning Spaces – https://www.youtube.com/watch?v=8PulZZvr7tI
- Well-Being by VS – https://vsamerica.com/media/partnerNet/El/Elem/Elementary-Well-Being_Brochures2022.pdf
- Design Patterns for Creative Learning Environments – https://fieldingintl.com/design-patterns/
- D&D Project Showcase – https://ddsecurity.com/projects-portfolio
- D&D Furniture Lines – https://ddsecurity.com/landing-page/furniture
- This article has also been published in the Fall 2022 Issue of Slate Magazine, the quarterly magazine of the Idaho School Board Association: https://www.idsba.org/blog/publications/slate-magazine/
<urn:uuid:04ee98b1-8bf6-4e53-9a7d-b871914bbe71>
CC-MAIN-2022-40
https://ddsecurity.com/2022/09/19/the-benefits-of-flexible-furniture-in-the-classroom/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00426.warc.gz
en
0.935674
1,271
3.15625
3
February 11th was Safer Internet Day, a day for showing our youth how to be safe and respectful on the Internet. What kind of an Internet role model are you? Leaders need to demonstrate this every day, not just one day of the year.

What Did You Say?

If you are on your phone or tablet and claim you can multitask, then you should never have to ask the question “What did you say?” The simple answer is to just put the phone down. Focus on the moment and the person(s) around you. Most of us are guilty! I am working on this bad habit, and some days are better than others. It’s ironic at the family dinner table when my daughters call me out as I glance at my phone (it is true, your own medicine tastes bad). Thank you to my family!

Always Take The High Road

You do not have to rant or express your negative opinion when you post or reply online. Take the high road. Acknowledge criticism and negativity; address it in a positive way or avoid it. If it does not affect you, your family, your team, or your company, then just ignore it. Why do so many feel they have to voice their negative opinion and always take a stand? Can we just leave things alone and let them be? I find there are so many naysayers fighting all these battles. It does not have to be about being right and proving others wrong. I challenge them to channel this energy into something positive and make a world of difference. Game on!

Engage Our Youth

Start the conversation with our youth. Reach out to them and have them share their aspirations and their dreams. They are our future, and we should encourage and support them. Make it about them instead of us. Ask them to participate and interact in a positive way online. What are their interests? What would they like to do to change the world? Safe and positive dialogue with our youth is important.

My daughters are 19 (almost 20) and 22. I learned a lot watching them grow up interacting in the digital world. Fortunately, being in the technology sector helped me stay ahead of the curve and understand what they were doing and the pressure of belonging online. We also witnessed cyberbullying. Leaders and parents must educate our youth and provide them with the skills to deal with this in the proper manner.

Be positive. Be positive all the time. Be supportive. Treat everyone with respect and support conversations at every opportunity. Radio silence and negativity do not help anyone. Get out of your own comfort zone and be a role model. Remember to avoid controversy. Step up and make a difference. It is in all of us, and we bring so much to the table to help our youth. Together, we can create a safe Internet environment, break down barriers, learn, and help each other along our journeys.

How are you being an Internet role model?
<urn:uuid:fb74d0da-ec1c-4d32-bd44-9f5ce7a5d983>
CC-MAIN-2022-40
https://staging.alphakor.com/blogs/leadership/be-an-internet-role-model/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00426.warc.gz
en
0.963242
600
2.546875
3
For many years, people have been talking about the accelerating volume and velocity of data. Open any technology publication and you'll see stories offering advice on how companies can tame their big data and clean up their data lake. In fact, some of those stories are authored by my colleagues! But another dimension to this ever-growing volume of data is the Internet of Things (IoT). According to Wikipedia:
The internet of things (IoT) is the network of physical devices, vehicles, buildings and other items—embedded with electronics, software, sensors, actuators, and network connectivity that enable these objects to collect and exchange data
Think about it like this: do your customers use your mobile app? That is a data-generating sensor. They drive your car? Full of sensors producing data. Jet engine: massive data production in operation. They walk through your store? Where were they, what did they look at and touch? Every thing nowadays can host sensors that generate data and teach you things about your products, customers, and services. This is a true data fire hose, so your organization really needs to embrace being data-driven; otherwise you will be hard-pressed to find signal in this massive data noise.
Although IoT may be a new(ish) concept to many of you, it's actually been around for a while. For over 20 years, researchers in universities worldwide have been investigating how they can put sensors on objects and IP addresses on things. A classic example that you may recognize is the concept of a smart fridge. It's a refrigerator that connects to the Internet, alerts you when you've run out of milk, and orders the milk for you. Pretty cool, right?
Until fairly recently, IoT stayed mainly in the halls of academia. But all that has changed. Today, the Internet of Things is very much a part of our everyday lives. Think about it. We have thermostats that can connect to the internet. They collect information about you and apply it to do things like automatically enable and disable the thermostat based on your behavior or on the basis of the weather.
Let me share another example. A few months back, I visited a technology firm in Silicon Valley. In their cafeteria, they had a soda machine with a menu, much like you see in many restaurants today. I picked cola with lemon. The physical machine has a cartridge inside that mixed up my cola with lemon. But that soda machine is also an IoT device. It has a device connected to a server that collects data. It recorded that I picked cola mixed with lemon, on a specific date and time. And it merged that data with data it collects from all its machines around the world. I'm certain that the company that owns the machine uses the data to drive innovation (e.g., which mixes are popular where and when – and which mix cartridges should we push).
Seems innocent enough, right? But things actually get a bit scarier. Many companies are now adding sensors to their machines that go even farther. They're adding cameras to the device that see you arrive. They perform face recognition so that they can present you with the most popular options based on your latest choices. These machines combine data with sensors to make the experience personal. You're no longer a demographic, but an individual standing in front of the IoT device. And it recognizes you. You think you're ordering a drink, but the company behind the machine is collecting data about you – your preferences, the time of day you visit the machine, your photo. And they collect this data without you realizing it.
So the real question becomes: how are they using, transferring, and securing this data? How is your data being used – for what purpose and in what way?
And think about the volume of data that IoT adds to the world. It makes the "big data" we've been talking about for all these years really just "relatively big data." The Internet of Things introduces an enormously higher volume and frequency of big data. It's not just generated by transactions – it's generated by machines themselves. These machines are always on, and they are constantly generating a flood of data. And how we deal with that flood requires an entirely new set of requirements. Now, we need to consider:
- How do we interpret it?
- Is it quality information?
- What is signal vs noise?
- Is it secure?
- How do we manage the volume itself?
- Where do you store it?
- What about privacy and data protection?
All of these considerations are ones we consider today, but in the world of IoT, we need to think about them in an entirely different way. True data-driven companies are embracing IoT and using it to drive innovation. They are being extreme in how they use data to be truly data-driven. Traditional companies are jumping on the IoT bandwagon as well – but many times, they struggle to deal with the data deluge it creates. They don't have the right systems, the right people, or the right interpretation to use it for a true competitive edge.
So I ask you – if your organization is talking about IoT, are you also talking about the extreme ways in which data can drive your business? Are you considering the sheer volume of data that the Internet of Things generates, and how you'll use it to your advantage? If you're not, you should be.
<urn:uuid:9b19aa9c-a871-4054-963e-03e767ee455a>
CC-MAIN-2022-40
https://www.collibra.com/us/en/blog/internet-of-things-the-era-of-even-bigger-data
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00426.warc.gz
en
0.950561
1,135
3.078125
3
"People who repeatedly attack your confidence and self-esteem are quite aware of your potential, even if you are not." ~ Wayne Gerard Trotman, author What is Anti-Bullying Week?Anti-Bullying Alliance (ABA), an association of organizations and individuals dedicated to putting a stop to bullying in an effort to create safer environments mainly for kids and young people to live, learn, play, and grow, have been holding an annual campaign that takes place every second week of November to highlight and educate children, teachers, parents, and carers about bullying so they can take action against it. This year, the week is commemorated from the 14th to the 18th of November. The overall theme of this year’s campaign is “Power for Good”. Social media channels of public and private personalities and groups who support the cause actively use the hashtags #PowerForGood and #AntiBullyingWeek on Twitter, Facebook, and LinkedIn. Why "Power For Good"?The ABA reminds every child and young person that they are powerful, positive change agents and encouraging people to take individual and collective action against bullying can make a difference in the world around them. It’s a challenge that those who may be affected by such an act do something about it, from reporting instances of bullying online, in the school grounds, and in the streets. Using their “Power for Good” is not only a challenge to children but also a plea to the adults (teachers, support staff, parents, and carers) in their lives must heed, as they have the responsibility to (1) support those who are being bullied or who knows someone who is, (2) guide them on what proper steps they can take to help address the behavior of bullying, and (3) encourage them to grow in confidence with a healthy self-esteem. What is considered bullying?Many people from different age groups perceive bullying differently. Case in point, if you may recall, the London School of Economics in Political Science, in partnership with the EU Kids Online Network, has released a white paper in 2014 about how kids in the EU see “problematic situations” online, and one of the takeaways from that these kids consider certain actions to be “bullying” if the doer is generally someone whom they don’t know. If a friend or someone they know does the “bullying”, they consider it as just another form of teasing. The ABA defines bullying as “the repetitive, intentional hurting of one person or group by another person or group, where the relationship involves an imbalance of power.” It can be physical, verbal, or psychological, and it can happen either face to face or on cyberspace. Bullying is recognized as a behavior, and many organizations that work with those affected by this negative behavior also work with those doing the act in order to determine its root cause and come up with solutions to change it. In the next few days, we’ll identify acts of bullying (and which ones aren’t) that are happening within certain age groups and provide advice on how those affected can best handle the situation they may find themselves in. Can bullying happen to anyone?It’s a common notion that only school-aged children, which includes high school teens and college young adults, are being bullied and that it only happens within the confines of the school or university grounds. Nothing can be further from the truth: Adults are bullied, too, and seeing it in the workplace is not an uncommon occurrence. We’ll be looking into adult bullying more closely in the coming days as well. 
The ABA, Ditch the Label, and other non-profit organizations in and outside the UK recognize that bullying is a very real societal problem that needs to be addressed. However, they also urge others to recognize that labeling the person or group doing the bullying as "the bully" and the individual or group they target as "the victim" does not help address the problem or nip bullying in the bud. Instead, they encourage everyone to drop the labels entirely and understand that both the doer and the receiver of the bullying act are in need of help.
The Malwarebytes Labs Team
<urn:uuid:03ae5d16-01fb-47d5-8581-114a15f34003>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2016/11/pledge-to-use-your-power-for-good-this-anti-bullying-week
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00426.warc.gz
en
0.96831
885
3.40625
3
Progressive web applications (PWAs) are web applications that load like regular web pages or websites but include app-like characteristics such as working offline, receiving push notifications, and accessing device hardware. They are built with modern web technologies such as Service Workers, Web Components, and React. Progressive enhancement is used to give users a good experience even when their browser doesn't support all the features of a PWA. PWAs are hosted on servers and accessed via URLs. They don't need to be installed or updated, and they work on any device with a web browser.
How are Progressive Web Applications changing the Web Development Landscape?
Progressive web applications (PWAs) are revolutionizing the web development landscape. They are built using cutting-edge web technologies and combine the best of the web and mobile worlds. They are well suited to developing countries where data is expensive and connectivity is poor.
- PWAs are faster and more reliable than traditional web apps
- PWAs are more secure than traditional web apps and native apps
Some of the benefits of using PWAs include:
- Increased engagement – PWAs can boost user engagement by 400%
- Increased conversions – A PWA can increase conversions by 104%
- Reduced development time – PWAs can be built in a fraction of the time it takes to develop a mobile app
- Reduced maintenance costs – PWAs spare developers from maintaining multiple codebases, resulting in lower expenses
How can you get started with Progressive Web Applications?
Here are the three things you need to do:
1. Use HTTPS
HTTPS is the standard security protocol for communications on the web. It ensures that data is encrypted in transit, preventing third-party interception. In progressive web applications, HTTPS is an essential measure for protecting user privacy and ensuring data integrity.
2. Serve a Manifest File
PWAs are built using modern web technologies and standards, such as Service Workers, HTML5, and CSS3. Web app manifests are JSON files that describe a web application, allowing developers to specify metadata such as the app's name, icons, and default screen orientation. One common way to reference a manifest file is to include it in the <head> of your document, as shown below:
<link rel="manifest" href="manifest.json">
3. Add a Service Worker
A service worker is a script that runs in the background, separate from your web page. It has control over caching and handling of network requests for that page and its assets. Service workers are perfect for creating offline experiences, intercepting and altering network requests, and speeding up loading times on subsequent visits. (A minimal registration sketch appears at the end of this article.)
The future prospects for PWAs are very bright, and they are likely to become more and more popular in the years to come. They offer a number of advantages over traditional mobile applications: they're faster to load, they use less data, and they can be installed on the home screen of a device. They can also be used to improve the user experience on low-end devices.
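As a rough illustration of step 3, here is a minimal service worker sketch in plain JavaScript. The file names, cache name, and precached paths are illustrative assumptions, not part of the original article.

// In your page script: register the service worker if the browser supports it.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(reg => console.log('Service worker registered, scope:', reg.scope))
    .catch(err => console.error('Service worker registration failed:', err));
}

// In sw.js: precache a few assets, then answer requests cache-first.
const CACHE = 'pwa-cache-v1'; // hypothetical cache name
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE).then(cache => cache.addAll(['/', '/index.html']))
  );
});
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});

Cache-first means repeat visits load instantly and keep working offline; a network-first strategy is the usual alternative for resources that must stay fresh.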
<urn:uuid:7b3a7314-6b6b-4bc3-94f6-5ab6e08c7b33>
CC-MAIN-2022-40
https://blog.miraclesoft.com/progressive-web-application-the-future-of-web-development/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00426.warc.gz
en
0.922591
644
2.6875
3
The National Institute of Standards and Technology (NIST) has published its latest report on the use of biometric authentication on mobile devices, which permits first responders to gain quick access to sensitive information while ensuring that only authorized persons can access the data.
Many public safety organizations (PSOs) currently use mobile devices to access sensitive information from any location; however, ensuring that access is safe and that only authorized persons can use the devices to view that data has previously depended on the use of passwords. Passwords can be secure, but only if they are long and complex enough to withstand brute-force guessing attempts. Needing to type in a lengthy and complicated password can delay access to essential information.
In many cases, access to sensitive information must be granted quickly. It isn't practical for first responders to have to enter a password; any holdup, even one of only a couple of seconds, can worsen an emergency. Biometrics offers a more secure authentication solution than passwords and can allow data access much faster. Biometric authentication options such as fingerprint, face, and iris scanning are built into many mobile phones and Apple devices.
Although using biometric identifiers can enhance identity, credential, and access management (ICAM) functionality and accelerate access to critical information, there can be many difficulties in implementing mobile device biometric authentication, and certain problems are specific to first responders.
The report, created by the National Cybersecurity Center of Excellence (NCCoE) in collaboration with Public Safety Communications Research (PSCR), examines the authentication difficulties encountered by first responders and gives recommendations on how to implement authentication solutions.
Usually, biometric authentication is accomplished through wearable sensors and scanners built into devices, but verification errors are possible. Scanners can fail to read fingerprints or can grant access because of false matches. The NIST report explains that to use biometrics for authentication, reasonable confidence is necessary that the biometric system will properly confirm authorized individuals and not validate unauthorized individuals. The combination of these error rates describes the overall accuracy of the biometric system. (A toy calculation of these error rates appears at the end of this article.)
The guidance document offers insight into the effectiveness of biometric authentication options, and clarifies how verification issues can arise during capture, extraction, and registration, as well as the potential for false matches. The report additionally provides information to help administrators implement biometric authentication on shared mobile devices, and discusses the possible privacy problems and how to offset them.
The purpose of the report is to give first responders more information on using biometric device authentication and the difficulties they may encounter in moving away from passwords, so they can make better-educated choices about the method of authentication that best meets their requirements.
NIST is seeking comments on the report; comments should be received by July 19, 2021.
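To make the accuracy discussion concrete, here is a toy sketch in Python computing the two classic biometric error rates at a given decision threshold. The scores and threshold are hypothetical; real systems derive them from matcher output.

def error_rates(genuine_scores, impostor_scores, threshold):
    """False Reject Rate (FRR): genuine users wrongly rejected.
    False Accept Rate (FAR): impostors wrongly accepted."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

# Hypothetical matcher scores: higher means a closer biometric match.
genuine = [0.91, 0.85, 0.78, 0.95, 0.60]
impostor = [0.20, 0.35, 0.55, 0.10, 0.42]
print(error_rates(genuine, impostor, threshold=0.7))  # (0.0, 0.2)

Raising the threshold lowers the false accept rate but raises the false reject rate; choosing the operating point is exactly the trade-off the report describes.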
<urn:uuid:c05194c1-c66a-4d64-bba1-e42a5de2989b>
CC-MAIN-2022-40
https://www.calhipaa.com/nist-issues-guidance-for-first-responders-on-using-biometric-authentication-for-mobile-gadgets/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00426.warc.gz
en
0.923681
567
2.75
3
Application availability is a measure used to evaluate whether an application is functioning properly and usable to meet the requirements of an individual or business. Application availability is determined based on application-specific key performance indicators (KPIs) such as overall or timed application uptime and downtime, number of completed transactions, responsiveness, reliability, and other relevant metrics. Real or perceived application failures are also taken into account, such as consistent errors, timeouts, missing resources, and DNS lookup errors.
To ensure acceptable service for users and reliable support for the business, organizations typically seek to maintain high availability. A high availability system is able to maintain continuous operations with an extremely low error rate for an extended period of time. To achieve demanding standards of high availability, such as the elusive "five 9s" (99.999 percent), a system must be designed from the ground up to ensure effective backup and failover for processing, data storage, and networking.
Server load balancing plays a critical role in ensuring high availability by enabling a rapid response to a server failure. A load balancer can be deployed as the front end to a cluster of servers, routing each incoming client request to a member of the cluster, and relaying the response back to the client. To ensure high availability and optimal service, the load balancer performs continual health checks of each server in the cluster, using probes to determine its eligibility for requests. If a server becomes unavailable or falls below acceptable performance metrics, the load balancer detects the outage, stops sending requests to it, and redistributes traffic across the remaining members of the cluster. High availability load balancing further aids application availability by ensuring that the load balancer itself remains available.
Global server load balancing (GSLB) performs a similar function to server load balancing on a broader scale. GSLB provides load balancing, site failover, and web traffic management across multiple data centers and/or clouds (private cloud or public cloud). This makes it possible to ensure application availability based on factors such as the health, availability, and loading for each data center; client geographical locations based on their ISP DNS server address; and bandwidth utilization.
The A10 Networks Thunder® Application Delivery Controller (ADC) ensures high availability and rapid failover through load balancing, global server load balancing, and continuous server health monitoring.
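As a rough sketch of the health-check-and-failover loop described above, here is a minimal Python example using only the standard library. The backend URLs, probe path, and interval are hypothetical.

import time
import urllib.request

BACKENDS = ["http://10.0.0.1/health", "http://10.0.0.2/health"]  # hypothetical pool
healthy = set(BACKENDS)

def probe(url, timeout=2):
    """Return True if the backend answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False  # connection refused, timed out, or DNS failure

while True:
    for url in BACKENDS:
        if probe(url):
            healthy.add(url)      # server rejoins the rotation
        else:
            healthy.discard(url)  # stop routing requests to the failed server
    time.sleep(5)  # re-check the pool every few seconds

A real load balancer would feed the healthy set into its scheduling algorithm (round robin, least connections, and so on) so each incoming request only ever reaches a server that passed its last probe.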
<urn:uuid:8ef3831a-e9e0-4c94-a593-c2d634e398b2>
CC-MAIN-2022-40
https://www.a10networks.com/glossary/what-is-application-availability/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00426.warc.gz
en
0.901532
488
2.765625
3
Scientists at Scripps Research have discovered the role of an immune system double agent. This molecule, called USP18, can help curtail immune responses, but it can also open the door to bacterial infections, such as harmful listeria and staph infections.
"I call the molecule a 'wolf in sheep's clothing,'" says Namir Shaabani, Ph.D., a postdoctoral researcher at Scripps Research and co-first author of the recent Science Immunology study.
The researchers found that deleting the specific gene for this protein in certain immune system cells helps the body fight bacterial infections. This work, conducted in mouse models, offers a potential antimicrobial approach that could target both bacteria and viruses.
It all comes down to type 1 interferons, a type of immune molecule produced at the start of a viral infection. Interferons fight off the virus, and then their levels should drop when the threat is gone.
Study senior author John Teijaro, Ph.D., assistant professor at Scripps Research, says scientists have long wanted to understand a paradox in immunology—the question of how interferon-stimulated genes (ISGs) that usually help against viruses also dampen the host's ability to resist many bacterial infections.
For the new study, the team found that deleting a single ISG known as Usp18 in mouse dendritic cells, a type of immune cell, enhanced the body's ability to control infections with two strains of Gram-positive bacteria. They also found that normal induction of USP18 after infection impaired antibacterial responses mediated by a protein called tumor necrosis factor and the accompanying generation of reactive oxygen species, which help destroy bacteria in cells.
"Our results were unexpected because the absence of USP18 augments type 1 interferon signaling, which, if the current thinking is correct, should promote rather than prevent bacterial infection," says Teijaro.
Teijaro emphasizes that the study is basic biology—it illuminates the fundamental workings of the immune system—but it's worth investigating whether USP18 can be targeted with drug therapies to treat bacterial infections. Knowing how to inhibit USP18 function could also give doctors the tools to boost interferon activity to better fight viral infections as well.
"One of our goals going forward is to test this therapeutically," says Teijaro. "We also want to expand our investigation to understand the role of USP18 in secondary bacterial pneumonia and tuberculosis infections."
There's one more potential advantage to developing therapies targeting USP18, Shaabani says. "Therapies targeting USP18 would also have the advantage of targeting the host, and not the bacteria directly, and therefore should be less susceptible to antibiotic resistance."
More information: Namir Shaabani et al, The probacterial effect of type I interferon signaling requires its own negative regulator USP18, Science Immunology (2018). DOI: 10.1126/sciimmunol.aau2125
Journal reference: Science Immunology
Provided by: The Scripps Research Institute
<urn:uuid:73c005c1-a5b2-48a2-9ac3-fb91339bb43b>
CC-MAIN-2022-40
https://debuglies.com/2018/10/05/research-have-discovered-the-role-of-an-immune-system-double-agent-this-molecule-can-help-curtail-immune-responses-but-it-can-also-open-the-door-to-bacterial-infections/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00626.warc.gz
en
0.920209
674
3.421875
3
Leadership in Energy and Environmental Design (LEED), developed by the US Green Building Council, is a set of rating systems for the design, construction, operation and maintenance of green buildings. The certifications come as Certified, Silver, Gold and Platinum, and the highest Platinum certification indicates the highest level of environmentally responsible construction with efficient use of resources. Over the years, LEED has become more popular and is now an internationally accepted standard for "green buildings."
While LEED certified homes, commercial buildings and even neighborhoods are present across the world, LEED data centers are surprisingly rare. Less than 5% of all US data centers have LEED certification. This, however, is changing, and more and more data centers are now becoming LEED certified, thanks to the growing awareness of environmental issues.
A single word to describe LEED certified data centers is "sustainable." Here are some characteristics of a typical LEED certified data center.
- Advanced cooling system to reduce energy consumption. This could be implemented in different ways, such as using outside air and cooling it by evaporation to cool the facility, deploying custom servers that operate at higher temperatures and using cold air containment pods with variable speed fans to match airflow with server requirements.
- Improved cooling efficiency. Using a chilled water storage system, for instance, has the potential to shift up to 10,400 kWh of electricity consumption from peak to off-peak hours daily and, therefore, improves cooling efficiency.
- Reduced energy consumption. Monitoring power usage in real time and leveraging analytics during operations helps to allocate power judiciously. Distributing power at higher voltages reduces power loss, and eliminating energy-draining transformers helps to convert power to the appropriate voltage and reduce the generation of heat. The overall aim is to maintain a low power usage effectiveness (PUE), the ratio of total facility energy to the energy consumed by the IT load alone (see the short example near the end of this article).
- Using a clean backup power system. One innovative approach is replacing the football-field-sized room full of batteries that powers the uninterrupted power supply with mechanical flywheels and a diesel engine. This reduces emissions, noise pollution and fuel consumption.
- Using renewable energy. Extensive use of renewable energy, such as solar power, to reduce dependence on the grid and fossil fuels is a characteristic of all green data centers, more so when aspiring for LEED certification.
- Green construction. Construction of the facility also influences LEED certification. Using recycled materials for construction, purchasing materials near the site to reduce consumption of fossil fuels and diverting construction waste from landfills reflect positively on LEED ratings.
- Intelligent design. Adopting an in-row design confines the heat to a smaller area, reducing the space to cool and, therefore, reducing electricity consumption considerably. Similarly, a modular design helps to contain cooling only to the required area instead of cooling the entire facility.
While LEED does not force data centers to follow specific methods of cooling, reducing energy consumption and the like, each rating system is made up of a combination of credit categories, each with specific prerequisites that the data center has to satisfy, and the number of points the project earns determines the level of LEED certification.
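As a quick illustration of the PUE metric mentioned above, here is a one-function Python sketch; the energy figures are made up for the example.

def pue(total_facility_kwh, it_load_kwh):
    """Power Usage Effectiveness: total energy over IT energy; 1.0 is the ideal."""
    return total_facility_kwh / it_load_kwh

# Hypothetical monthly figures: 15,000 kWh total, 10,000 kWh consumed by IT gear.
print(pue(15_000, 10_000))  # 1.5; the lower the value, the greener the facility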
To learn more about data center certifications, check out our Certifications and Qualifications page.
<urn:uuid:6a7224a5-736e-4bc2-b6cc-c964c8863f2e>
CC-MAIN-2022-40
https://lifelinedatacenters.com/colocation/leed-certification-data-centers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00626.warc.gz
en
0.92886
681
3.109375
3
Challenging Some of the Myths About Static Code Analysis Static code analysis is the automated inspection of whole-program source code without executing that program. Over time, a number of interpretations and even misconceptions about this technology and how it impacts software developers have emerged, including: - Static analysis tools are glorified compilers - Static analysis is for junior developers - Static analysis is noisy and generates too many false positives This paper addresses common myths surrounding static code analysis and explains what the technology can do for developers and the software development lifecycle.
<urn:uuid:098db2e5-26c8-41dd-b030-522834878799>
CC-MAIN-2022-40
https://www.bitpipe.com/detail/RES/1387557008_928.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00626.warc.gz
en
0.888068
111
2.546875
3
The global fleet of electric buses is already helping cities avoid the purchase of 279,000 barrels of oil per day, according to Bloomberg New Energy Finance (BNEF). Diesel buses consume an extraordinary amount of fuel compared to a passenger vehicle. Bloomberg noted that "For every 1,000 battery-powered buses on the road, about 500 barrels a day of diesel fuel will be displaced from the market."
Still, global oil consumption is going up, though maybe not as fast as it might have without electric buses. According to the US Energy Information Administration (EIA), total global oil consumption increased from 96.87 million barrels per day in 2016 to 98.52 million barrels per day in 2017. The EIA projects that the world will consume 100.31 million barrels per day in 2018.
There's still reason for optimism though, especially given the aggressive push for electric vehicles in some Chinese cities like Shenzhen. The country accounts for 99 percent of the electric buses in the world (though only 17 percent of China's own bus fleet is electric). In addition, Bloomberg reports that "every five weeks, Chinese cities add 9,500 of the zero-emissions transporters—the equivalent of London's entire working fleet." Those statistics seem to confirm an earlier report from the International Energy Agency (IEA), which noted that in 2016, China had an extraordinary 200 million electric two-wheelers, 3 million to 4 million low-speed electric vehicles, and more than 300,000 electric buses.
In the US as of 2015 (PDF), 76.5 percent of US buses were diesel or diesel-hybrids. Just 3.7 percent were gasoline or gasoline-hybrids, and 18.1 percent ran on compressed natural gas. A mere 0.2 percent of the US bus fleet were electric battery buses, and just 20 buses were hydrogen fuel cell-powered. Those numbers are already changing as many metropolitan areas switch to zero-emissions buses. The US Department of Transportation notes that one diesel-burning bus can emit as much as 27 times the amount of carbon dioxide as a passenger vehicle, so every diesel bus replaced with a zero-emissions vehicle is good progress. Last October, several major city mayors agreed to purchase only zero-emissions vehicles for their bus lines until 2025. Those cities included Los Angeles, Mexico City, Paris, Vancouver, Seattle, and London.
Electric buses are prime candidates for electrification in ways that passenger vehicles are not. Buses don't go terribly fast, so they can handle the extra weight of battery packs without a noticeable degradation in performance. They also tend to operate in urban areas, where the effect of particulate emissions on residents' health is a much more urgent problem, according to a September 2017 paper from the Oak Ridge National Laboratory (PDF).
The electric bus market also has a few firms that have been at this for years. Shenzhen-based BYD has sold some 35,000 buses to cities in China since 2011, and California-based Proterra recently announced that it sold 14 all-electric buses to the city of Washington, DC. Proterra Chief Commercial Officer Matt Horton told Ars in an email that "Electric buses enable $40,000 in annual energy and maintenance savings," meaning they're less expensive than diesel competitors from an overall perspective.
"Over 60 percent of states have battery-electric bus programs in operation or plans to begin service," Horton continued. "Just this week, New York MTA, the largest US transit fleet, said it is committing to a 100 percent zero-emission bus fleet."
Editor Update: Although the MTA doesn't have an official plan for upgrading its buses to a zero-emissions fleet, NYC Transit President Andy Byford said on Thursday, "It does depend on the maturity of the technology—both the bus technology and the charging technology—but we are deadly serious about moving to an all-electric fleet." The NYC MTA has agreed to purchase 70 electric buses by 2019. It has roughly 5,700 buses in its fleet.
The new enthusiasm for electric buses may also be due to a re-evaluation of range anxiety. Last September, Proterra broke a vehicle record with one of its all-electric buses by driving 1,000 miles on a single charge.
<urn:uuid:d6b4a2fa-ed60-42d7-8aef-08931706e511>
CC-MAIN-2022-40
https://arstechnica.com/cars/2018/04/electric-buses-are-avoiding-hundreds-of-thousands-of-barrels-of-oil-per-day/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00626.warc.gz
en
0.966238
872
3.015625
3
During investigations into the possession of indecent images of children, police will seize any digital devices owned by the suspect and pass them, under controlled conditions, to a Digital Forensic Analyst for investigation. It is the analyst's role to extract evidence of any videos, images, and other documents, even where they have been deleted.
For computers running Microsoft Windows, a common method for recovering evidence of deleted images is to analyse the thumbnails that are created for each image when the folder they are stored in is viewed. The thumbnails are created to reduce the time it takes to preview a folder, but because a thumbnail often remains present even after the image itself has been deleted, these entries can be used to confirm possession of indecent images, even if no other evidence of the image exists.
In Windows XP, the thumbs.db file is automatically generated whenever a user views a folder in Explorer using 'thumbs' or 'filmstrip' mode. File types indexed in Thumbs.db files include image files (JPEGs, BMPs, GIFs and PNGs), document files (TIFFs and PDFs), video files (AVIs and MOVs), presentation files (PPTs) and some web pages (HTM and HTML). As well as image thumbnails, the thumbs.db file will also include information such as the original file name and the date each thumbnail was last written.
While it is possible for a computer user to delete the thumbs.db file to remove this record, this is often overlooked because it appears as a 'hidden file', meaning that Explorer's settings need to be manually altered in order for it to become visible for deletion. However, even when visible, it is not possible to view the contents of a thumbs.db file without specialist software.
With Windows Vista came a new approach to the creation of thumbnails, which has now been carried through to Windows 7. Instead of creating a thumbs.db file in every folder, Vista creates a single set of 'thumbcache' files, stored in a central directory. For Digital Forensic Analysts, this system has pros and cons. The central location means that even if a user running Vista deletes an entire folder containing indecent images, evidence may still exist in the central cache. In addition, thumbnails may even be recovered centrally for images stored on removable media (such as a CD or USB drive). However, the central location also means that users need only a single set of files to remove all thumbnails from the computer.
Most significantly, while thumbnails offer a useful evidence recovery method, all three of the most recent Windows operating systems come with the option to disable the creation of thumbnails should the user wish, so it is never the sole avenue of enquiry for a Digital Forensic Analyst. Thorough investigations employ an extensive forensic toolkit to recover registry records, piece together fragments of deleted files, and track user movements online, meaning that in reality, if there were ever images on a suspect's drive, computer forensics will usually be able to prove it.
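Because Thumbs.db is an OLE2 compound file, its contents can be enumerated with ordinary tooling. Here is a rough Python sketch using the third-party olefile library; the file path is hypothetical, and stream naming varies between Windows versions, so treat this as a starting point rather than a full parser.

import olefile

# Each cached thumbnail lives in its own stream inside the compound file.
ole = olefile.OleFileIO("Thumbs.db")  # hypothetical path to an evidence copy
for entry in ole.listdir():
    name = "/".join(entry)
    print(f"{name}: {ole.get_size(name)} bytes")

# Where present, the 'Catalog' stream is the index that maps thumbnail entries
# back to the original file names and last-written timestamps mentioned above.
if ole.exists("Catalog"):
    catalog = ole.openstream("Catalog").read()
    print(f"Catalog stream holds {len(catalog)} bytes of index data")

ole.close()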
<urn:uuid:8dbb5ef0-93a4-4d28-bcb6-5758c325347a>
CC-MAIN-2022-40
https://www.intaforensics.com/2009/01/09/windows-7-image-thumbnails-a-double-edged-sword-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00626.warc.gz
en
0.943074
629
2.859375
3
There are many techniques through which one can keep a system safe. It is not necessary to wait until an attack happens; one can take measures in advance and check whether such attacks could happen in the future. This can be done with tools that assess and estimate the extent to which a computer is open to attack. Here are some ways to help yourself in this regard.
The first important thing about these tools is interpretation. One must be able to interpret the results they generate so that they can be used for future improvement as well. If these tools indicate that there is going to be some problem in the future, then one should certainly consider changing the structure and making the defences more effective, since keeping data safe should be the first priority of any person. Here are the tools which can be utilized to manage and analyse the security posture of a system.
Protocol analyser: This tool can be either hardware or software. It is used for capturing traffic, and it can also analyse the signals, so the whole traffic over a communication channel can be inspected. These channels vary in nature, from a computer's internal bus to a satellite link. For each of the standard communication protocols, there are various tools which can be used for the collection of the signals and the data.
Vulnerability scanner: It is always important to check the defensive techniques of a computer and find out whether they are good enough or not. A vulnerability scanner is software that can be utilized to check whether a system is exposed to programs designed to attack it. It can assess computer systems, applications and networks, and hence can tell whether a computer is weak enough to be attacked and how great the chances are that it would be infected. It can be run as part of a vulnerability management programme by those who are tasked with the protection of the system; however, it can also be used by black hats trying to gain access to unauthorized data.
Honeypots: Computing has its own usage for the term honeypot. It is basically a trap which is set to detect or deflect attempts to gain unauthorized access to a computer or information system. Normally, a honeypot consists of computer data, or a network site which appears to be a legitimate part of the network but is actually isolated and monitored. It seems to contain information and resources of value to hackers. This is much like bait set by the police for a criminal, with the bait then kept under covert surveillance.
Honey nets: A honeynet is essentially a network of honeypots, typically built on open software. It is developed by people who want to help others check their security systems and see how easily their computers could be attacked by intruders and hackers. There are also high-interaction honeypots. These are solutions which do not merely emulate.
In fact, they run full-fledged operating systems hosting the same kinds of systems and applications found in many homes today, so one can deploy them to observe the malicious attacks that could compromise data security.
Port scanner: A port scanner is basically a software application designed to probe a host or server for open ports. It is mostly used by administrators to help them verify the security policies of their networks, and by some hackers as well, so that they can identify the services being run on a host. A port scan is a process that sends client requests to a range of server port addresses on a host. This is done with the goal of finding an open port and then checking for known vulnerabilities in the services behind it. Many of the people who run port scans don't do it with the intention of attacking; they just do it so that they can determine the services available on a remotely reachable machine. Hosts are often probed for specific ports when searching for specific services; for example, someone looking for SQL-based computers might scan for hosts listening on TCP port 1433.
Passive vs. active tools: There are two types of tools, active and passive. Active tools detect an attack when it happens and take action immediately, so the computer stays protected. Passive tools are the opposite: upon detection, they don't take any action themselves; they simply send a warning to the user so that he can take some action.
Banner grabbing: In computer networking, this is a technique used to gather information about a computer system on a network and the services running on its open ports. Administrators can use it to take an inventory of the systems and services available on their networks. However, an intruder can make use of banner grabbing as well, to find network hosts that are running versions of operating systems and applications with known exploits.
Risk is something which indicates that there are chances that a computer can be affected by some attack. Risk isn't as bad as a threat, but it is still bad, because there is some chance that it can destroy the computer. Basically, risk is a bigger term: it involves both the threat and the likelihood of the threat.
Threat vs. likelihood: A threat is something solid. It means that something bad is going to happen for sure. Likelihood, by contrast, involves only the probability that something might happen: it might occur, or it might not, so there is never any certainty involved. Assessments can be done in various ways. For example, one can determine which risks and threats are associated with downloading and using some software, and to what extent they are likely to do damage. Here are the assessment types which are commonly used:
Risk: Whenever there is something risky, it means that there is a probability involved that it might or might not affect the system. Risk is typically conceived as less severe than a threat.
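As a rough illustration of the port scanning and banner grabbing described above, here is a minimal Python sketch using only the standard library. The target address and port list are hypothetical, and probes like this should only ever be run against systems you are authorized to test.

import socket

TARGET = "192.0.2.10"            # hypothetical host (TEST-NET documentation range)
PORTS = [22, 80, 443, 1433]      # includes TCP 1433, commonly probed for SQL servers

for port in PORTS:
    try:
        # A successful TCP connection means the port is open.
        with socket.create_connection((TARGET, port), timeout=2) as sock:
            print(f"Port {port} is open")
            sock.settimeout(2)
            try:
                banner = sock.recv(1024)  # many services announce themselves on connect
                if banner:
                    print(f"  Banner: {banner.decode(errors='replace').strip()}")
            except socket.timeout:
                pass  # open but silent, e.g. HTTP waits for a request first
    except OSError:
        print(f"Port {port} is closed or filtered")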
In the case of risk, it might happen that some miracle saves the system before the data is destroyed.
Threat: A threat is something bad. It is something concrete: it means something is surely going to happen, and it doesn't involve any probability about its chances of occurrence. A threat is more dangerous, so one should stay away from the things which can expose one to threats.
Vulnerability: This term describes how prone a system is to attack. It actually reflects the defensive posture of a system. If a system has strong defences, then it would be easy for it to take care of itself; if the defences are bad, then many threats can be posed to the system, and it is likely to get infected.
There are also some techniques known as assessment techniques. These play an important role when it comes to assessing the measures which have been implemented for the security of a system. Here are some of those assessment techniques which can be used:
Baseline reporting: This is basically a measurement of the very basic level, and it is part of the process of managing change as well. The idea is that when some problem happens, it doesn't just happen all at once; it first hits some baseline. When it does, that activity should be reported, so that one is alerted early that some problem is going on and the issues need to be addressed properly.
Code review: This is a systematic examination, often known simply as reviewing, of the source code in a system. It is specifically designed to find mistakes and get them fixed by catching them during the initial development time. Hence it can help developers improve their skills while also improving the quality of the software. These reviews are carried out in different forms, such as informal walkthroughs, inspections, pair programming, etc. In these reviews, various vulnerabilities can often be found and removed, such as race conditions, buffer overflows, memory leaks, etc. Hence the security of the software is improved overall, and one can be assured that the system in use is secure.
Determine attack surface: There is a tool (Microsoft's Attack Surface Analyzer) created for analysing the changes which have been made to the attack surface of an OS. It is designed for operating systems from Windows Vista onwards. This tool is a very important one and is recommended by Microsoft itself; the recommendation applies at the stage where verification takes place. It is one of the tools which can analyse the changes made to the Windows 6 series OS. With such tools, one can easily analyse what has changed and where it has happened. That change can happen in the server, assemblies, registry, file permissions, etc. Microsoft also claims that this is the same tool used by its own engineers to test the effectiveness and the effects of the software installed on the OS.
Review architecture: Another important thing that many people overlook is the review of the architecture. The way the software is designed can tell a lot about the software and its performance. Software which is built upon strong foundations will last long, since it has more power to stay sharp and defend the system well.
So the architecture should be reviewed as well, so that one can ensure the safety of the system.
Review designs: Another important thing which should pop into one's mind is that the design can explain much about the software; the design can indicate whether the software is flawless or not, and whether it should be trusted or not.
Hence, there are many ways which can be used by people to assure themselves that nothing bad is going to happen in the future, and to take measures in advance so they know they can stay safe. Also, when buying a system or establishing a connection, one can check the defensive techniques of the computer through these tools and so make good purchase decisions.
<urn:uuid:35d13bbc-bf86-4c3e-add5-7351b0d2de10>
CC-MAIN-2022-40
https://www.examcollection.com/certification-training/security-plus-tools-and-techniques-to-discover-security-threats-and-vulnerabilities.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00626.warc.gz
en
0.978329
2,324
2.9375
3
What is web filtering?
A web filter, or internet filter, analyzes the web applications accessed by users and restricts access to blocklisted websites, content, and domains deemed malicious or inappropriate by admins. Organizations employ web filtering software to block potential cyber risks, limit user interactions on websites, and prevent unsafe or explicit content from being accessed by users.
How do web filters work?
Internet content filters are usually a part of cloud access security brokers (CASBs) or other cloud protection software deployed to prevent access to harmful applications and block security threats. Content control software can:
- Block access to lists of websites and cloud applications banned by administrators.
- Generate a risk profile or score for web applications based on threat analytics or the category of webpage content.
The types of internet filters employed are based on an organization's needs. These include client-based filters installed on endpoints, network filters installed in the transport layer of the network, and browser-based filters delivered as a plug-in or extension that limits access only for a particular browser.
Why is web filtering important?
Internet filtering is one of the layers of network security that thwart cyber risks and maintain productivity. Admins might use web filters to:
- Block known dangerous websites or harmful URLs that may contain, for example, malware or spyware.
- Restrict access to phishing links delivered by email.
- Prevent users or students from accessing explicit content, gaming websites, or video streaming sites.
- Block access to personal data storage applications like Dropbox or OneDrive.
- Allow only websites or cloud applications authorized by the organization.
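As a rough sketch of the blocklist and category checks described above, here is a minimal Python example. The domains, categories, and lookup table are hypothetical stand-ins for the threat feeds and category databases a real filter would consult.

from urllib.parse import urlparse

BLOCKLIST = {"malware.example", "phishing.example"}   # hypothetical known-bad domains
BLOCKED_CATEGORIES = {"gaming", "streaming"}          # categories banned by admins
CATEGORY_DB = {"games.example": "gaming"}             # hypothetical category lookup

def is_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host in BLOCKLIST:
        return False  # known dangerous site: block outright
    if CATEGORY_DB.get(host) in BLOCKED_CATEGORIES:
        return False  # site belongs to a banned content category
    return True       # default-allow; an allowlist-only filter would invert this

print(is_allowed("https://games.example/play"))   # False
print(is_allowed("https://intranet.example/"))    # True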
<urn:uuid:a712ec72-b1f7-4b93-ba0c-f5c25568cb6d>
CC-MAIN-2022-40
https://www.manageengine.com/data-security/what-is/web-filtering.html?source=what-is
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00626.warc.gz
en
0.84327
331
3.1875
3
I've been working in the hosting business for quite some years now. During my time, it often happens that I work with companies that have multiple contacts. One can be the technical guy, responsible for the IT infrastructure, and another can be the actual decision maker, who might not be very technical but understands the value of good service and competitive pricing.
In these situations, I like to make sure everyone understands why we are recommending a specific solution. When it comes specifically to a CPU, there's a basic, simplified way to explain what components a processor has and how to choose the ones that are important to run your website or applications.
When choosing a processor you should look at the following properties: number of cores, gigahertz (GHz) and cache. In today's ever-increasing data-crunching world, all three of these aspects are important to determine the performance of your machine.
An easy way to understand the power of a CPU is to compare it to a working person. Almost all of today's modern processors come with multiple cores, and every core in a processor can equate to a full-time employee working all the time to run your processes. So the more cores, the more work that can be done simultaneously. If we take one of my favorites as an example, a dual E5645 processor, you would have 12 cores (employees) working nonstop to run the processes you defined. How's that for increasing your production?
This brings us to understanding GHz. To continue our analogy, let's compare processor speed to how fast each of your employees can do his job. I think it's safe to say that a team of world-class Olympic sprinters can run 4 x 100m faster than a team of seven year-olds. Similarly, a 2.67GHz multi-core processor like the E5645 will be significantly faster than a 1.5GHz multi-core processor (which would be similar to a dual-quad core server).
Finally, there is cache. With the CPU we correlated the amount of cores as the number of employees working for us, then we looked at GHz as how fast they can do their job. To bring cache into the story is to visualize how much load each one of your employees can take. Cache determines the amount of weight each of those cores can carry; the more cache, the more processes (total work) each core can complete. (A toy score combining all three appears at the end of this article.)
So, what should you look for when choosing the right CPU? Well, that really depends on what resources you need to run your online product! But let's stick with my current winner, the Hexa-Core Xeon E5645 processor that has 6 cores running at 2.40GHz each, with a Max Turbo Frequency of 2.67GHz.
We offer Dual E5645 processors in the Hewlett Packard DL 180G6 chassis. Who normally goes for this solution? Whoever runs virtualization platforms (much more efficient), streaming media (FAST!), back-end gaming sites (hardware causing latency? Game killer), and actually every database server that needs strong and fast performance. Be sure to also take a look at our blog called Six Advantages of Hexa Core CPUs.
Need some advice on which processor you need for your business? Shoot an email to me or my colleagues ([email protected]) and we will be happy to help you decide. In addition, we have a special October Sale on those Hexa-Core Xeon E5645 processors served from our US datacenter. Check it out!
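To make the worker analogy concrete, here is a deliberately naive Python sketch that combines the three properties into a single relative score. This is a toy model for illustration, not a real benchmark; the cache figures are per-processor values assumed for the example.

def relative_throughput(cores, ghz, cache_mb, cache_weight=0.05):
    """Toy score: number of workers (cores) times their speed (GHz),
    nudged upward by how much load each can carry (cache)."""
    return cores * ghz * (1 + cache_weight * cache_mb)

# Dual E5645: 12 cores at 2.40 GHz, assuming 12 MB of cache per processor.
dual_e5645 = relative_throughput(cores=12, ghz=2.40, cache_mb=12)
# An older 1.50 GHz quad-core with a smaller cache, for comparison.
older_quad = relative_throughput(cores=4, ghz=1.50, cache_mb=4)
print(f"{dual_e5645 / older_quad:.1f}x")  # roughly 6x on this toy scale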
<urn:uuid:99967a44-3e29-48f0-a8d9-82cf1029db7d>
CC-MAIN-2022-40
https://blog.leaseweb.com/2012/10/05/what-should-i-look-for-when-choosing-a-cpu/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00626.warc.gz
en
0.939109
739
2.546875
3
Complete Analysis on DWDM Technology
It has been over 20 years since DWDM technology first came on the scene, and in the last two decades it has revolutionized the transmission of information over long distances. At present, DWDM technology is so widely applied that we almost forget there was a time when accessing information from the other side of the globe was expensive and slow.
What Is DWDM Technology - Data in a Rainbow
DWDM stands for Dense Wavelength Division Multiplexing, an optical multiplexing technology used to increase bandwidth over existing fiber optic backbones. The "dense" here refers to the fact that DWDM technology supports more than 80 separate wavelengths, each about 0.8 of a nanometer (nm) wide, on a single optical fiber.
DWDM technology increases network capacity and makes efficient use of bandwidth. Data from various sources is combined onto one optical fiber, on which each signal travels at the same speed on its own light wavelength. At the receiver end, every channel is demultiplexed back into its original source signal, so different data formats with different data rates, such as Internet data, Synchronous Optical Network (SONET) data, and asynchronous transfer mode (ATM) data, can be transmitted together at the same time through one optical fiber. The transmission capability of DWDM is 4 to 8 times that of TDM (Time Division Multiplexing), and erbium-doped fiber amplifiers (EDFAs) are deployed to boost the strength of the signal. The signal can be transmitted over more than 300 km before regeneration.
Figure 1: The principle of DWDM technology
DWDM can expand capacity and serve as backup bandwidth without installing new fibers, so it is ideal for long-distance telecommunication services. DWDM technology can also be used in various networks such as sensor networks, remote radar networks, tele-spectroscopic process control networks and many more. Using only two fibers, a 100% protected, self-healing ring with 16 separate communication signals can be constructed by deploying DWDM terminals. To meet demand from a fast-growing industrial base, a DWDM network can also be overlaid on existing thin fiber plants that cannot otherwise support high bit rates.
Transparency—Because the DWDM architecture operates at the physical layer, it can transparently support both TDM and data formats such as ATM, Gigabit Ethernet, ESCON, and Fibre Channel with open interfaces over a common physical layer.
Scalability—A DWDM network can leverage the abundance of dark fiber in many metropolitan area and enterprise networks to quickly meet demand for capacity on point-to-point links and on spans of existing SONET/SDH rings.
Dynamic provisioning—Fast, simple, and dynamic provisioning of network connections gives providers the ability to provide high-bandwidth services in days rather than months.
Backbone DWDM Network Structures
DWDM-based network structures can be divided into three classes: the simple point-to-point DWDM link, DWDM wavelength routing with an electronic TDM and switching/routing backbone network, and the all-optical DWDM network.
1. Simple Point to Point DWDM Link
In this DWDM architecture, the electronic nodes can be SONET/SDH switches, Internet routers, ATM switches, or any other type of network node. The DWDM node typically consists of a pair of wavelength multiplexers/demultiplexers (lightwave grating devices) and a pair of optical-electrical/electrical-optical converters. Each wavelength channel is used to transmit one stream of data individually.
The DWDM wavelength multiplexer combines all of the lightwave channels into one light beam and launches it into a single fiber. The combined light of multiple wavelengths is separated by the demultiplexer at the receiving end. The signals carried by each wavelength channel are then converted back to the electrical domain through O/E converters (photodetectors). In this way, one wavelength channel is equivalent to a traditional fiber in which a single light beam carries the information. It is worth noting that the wavelength channels in one fiber can be used for both directions, or two fibers can be used, each for one direction.

Figure 2: Point to Point DWDM Link

2. Wavelength Routing With Electronic TDM

In this structure, wavelength routers are used to configure or reconfigure the network topology within the optical domain, and the TDM network nodes perform multiplexing and switching in the electrical domain. This combined optical and electrical network architecture can be applied in SONET/SDH, where the electrical TDM network nodes would be SONET switches; in the Internet, where they would be Internet routers; or in an ATM network, where they would be ATM switches.

Figure 3: Wavelength Routing with Electronic TDM

3. All Optical DWDM Network

As seen above, the electrical TDM/switching nodes can be of any kind, such as SONET/SDH switches, Internet routers, and ATM switches. This indicates that the all-optical TDM nodes in the all-optical architecture can be optical SONET/SDH switches, all-optical ATM switches, or all-optical Internet routers. Different types of all-optical TDM/switch nodes can also coexist in one network, provided the protocol conversions are implemented. In fact, the optical TDM/switch node and the wavelength router in one routing site can be combined into one all-optical switching node that not only forwards packets through time-domain multiplexing but also selects the light path intelligently according to the availability and traffic loads of the links.

Figure 4: All Optical DWDM Network

Deploy DWDM Over CWDM Network

In the previous text, we fully discussed DWDM technology and DWDM networks. CWDM has been a popular low-cost entry point for many customers; however, as the need for capacity grows and service rates increase, there is demand to increase the capacity of existing CWDM networks. Deploying a DWDM solution over a CWDM network is possible because DWDM wavelengths actually fall within the CWDM wavelength range, as shown in Figure 5. Thus, a DWDM network can be connected to a CWDM network via the CWDM channels of 1470 nm, 1490 nm, 1510 nm, 1530 nm, 1550 nm, 1570 nm, 1590 nm, and 1610 nm. In most cases, the 1530 nm and 1550 nm channels are suggested for combining CWDM and DWDM systems to increase the capacity of the existing CWDM fiber optic network.

Figure 5: DWDM and CWDM Wavelengths

To combine the DWDM wavelengths with CWDM wavelengths, both a CWDM MUX/DEMUX and a DWDM MUX/DEMUX are used. The following picture shows the connection method for hybrid CWDM and DWDM using the 1550 nm channel. On both ends of the fiber link, a CWDM MUX/DEMUX and a DWDM MUX/DEMUX with corresponding wavelengths are deployed. By connecting the line port of the DWDM MUX/DEMUX to the 1530 nm/1550 nm channel port of the CWDM MUX/DEMUX, the DWDM wavelengths can be added to the existing CWDM network.
Figure 6: Build DWDM over CWDM Network

The wavelengths should be carefully considered when selecting the CWDM MUX/DEMUX and DWDM MUX/DEMUX. As mentioned above, wavelengths of 1530 nm and 1550 nm are suggested for the CWDM and DWDM hybrid. The following picture shows the suggested wavelengths. If the 1530 nm port is used, the DWDM MUX/DEMUX channel ports are suggested to range from 1529.55 nm to 1536.61 nm. For the 1550 nm port, the channel ports of the DWDM MUX/DEMUX are suggested to range from 1545.32 nm to 1557.36 nm.

Figure 7: Suggested Wavelengths for CWDM and DWDM Hybrid

Practical Considerations in Deploying DWDM Network

When deploying a DWDM network, customers may encounter questions that will affect their choice of vendor, equipment type, design, and so on. Some of these FAQs are as follows:

Is the DWDM system compatible with existing fiber plant? Although the majority of installed fiber, such as standard single-mode fiber and NZ-DSF, can support a DWDM network, some types of older fiber are not suitable for DWDM use. If new fiber must be deployed, choose one that supports future growth, particularly as DWDM systems expand into new wavelength regions with higher bit rates.

What is my migration and provisioning strategy? Because DWDM technology is capable of supporting massive growth in bandwidth demand over time without forklift upgrades, it represents a long-term investment. Both point-to-point and ring topologies can serve as foundations for future growth. Planning should allow for flexible additions of nodes to meet the changing demands of customers.

What network management tools can I use? A comprehensive network management tool will be needed for provisioning, performance monitoring, fault identification and isolation, and remedial action. Such a tool should be standards-based (SNMP, for example) and able to interoperate with the existing operating system.

What is my strategy for protection and restoration? Designing a protection strategy is a complex process in which many considerations must be taken into account. There are both hard failures and soft failures. The former must be addressed through redundancy at the device, component, or fiber level. The latter must be addressed by the system through intelligent wavelength monitoring and management. Protection and survivability strategies depend upon service type, system, and network architectures; in many networks, they also depend on the transport protocol.

The Historical Evolution and Future Trend of DWDM Technology

As Figure 8 shows, by the mid-1990s, dense WDM (DWDM) systems were emerging with 16 to 40 channels and spacing from 100 to 200 GHz. By the late 1990s, DWDM systems had evolved to the point where they were capable of supporting 64 to 160 parallel channels, densely packed at 50 or even 25 GHz intervals. As the technology advances, the number of wavelengths increases while the spacing between them decreases. Along with increased wavelength density, systems also advanced in their flexibility of configuration, through add-drop functions and management capabilities.

Figure 8: The Evolution of DWDM Technology

Recent innovations in DWDM transport systems include pluggable and software-tunable transceiver modules capable of operating on 40 or 80 channels. This dramatically reduces the need for discrete spare transceivers, since a handful of tunable transceiver modules can handle the full range of wavelengths.
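To make the channel-grid arithmetic concrete, the short Python sketch below computes center wavelengths on the standard ITU-T G.694.1 DWDM grid, which anchors channels at 193.1 THz and spaces them by a fixed frequency step (100 GHz here). The channel offsets printed are illustrative, not vendor channel labels.

# Minimal sketch: center wavelengths on the 100 GHz ITU-T G.694.1 DWDM grid.
C = 299_792_458  # speed of light in vacuum, m/s

def channel_wavelength_nm(n, spacing_ghz=100.0):
    """Wavelength of the channel offset n steps from the 193.1 THz anchor."""
    freq_hz = 193.1e12 + n * spacing_ghz * 1e9
    return C / freq_hz * 1e9  # meters -> nanometers

for n in range(-3, 4):
    print(f"offset {n:+d}: {channel_wavelength_nm(n):.2f} nm")

# Adjacent 100 GHz channels land roughly 0.8 nm apart near 1550 nm,
# which matches the ~0.8 nm channel width quoted earlier in this article.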
In the future, DWDM technology will continue to provide the bandwidth for large amounts of data. As the technology develops, closer spacing of wavelengths will become possible, further increasing system capacity. DWDM is also moving beyond transport to become the basis of all-optical networking, with wavelength provisioning and mesh-based protection, an evolution that will be driven by switching at the photonic layer. In the early 1990s, a single fiber could carry only 2.5 Gbps of information; now, with DWDM technology, it can carry almost 10 terabits per second. With these prominent advantages, DWDM has become an ideal and cost-efficient solution for expanding network capacity. There is no doubt that DWDM technology will reshape the future communication network.
<urn:uuid:3ec9a429-b695-4f01-ac76-61784ded4fe2>
CC-MAIN-2022-40
https://community.fs.com/blog/complete-analysis-on-dwdm-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00026.warc.gz
en
0.921767
2,541
3.546875
4
Do you end up reusing passwords just to keep it simple? You are not alone; this is a common practice. In fact, 59% of people reuse the same password everywhere! The problem is that this habit presents a big security risk for everyone, including your employer. If a hacker gets access to the password for your personal bank account, it's very likely that the hacker could use that password to access multiple logins, including your workplace accounts.

Passwords are a way to "authenticate," or verify, that a person is who they say they are. For that reason, they need to be known only to you. Protect the things you value by understanding how passwords work and how to protect them. How do hackers get passwords, and what are their methods? We'll discuss a few, along with ways you can protect yourself.

Providers store passwords insecurely in what's called 'clear text'

When you create a password for access to a website, you probably don't know how the company is going to store it. Sometimes sites or providers store passwords in just plain text. That means that should a hacker gain access to the file or database, they can read the password in plain English. How do you know if your password is stored in clear text? You can be sure your provider stored your password in clear text if you've ever done a password reset and they simply emailed you your password rather than a reset link. Even the most well-known companies sometimes store their passwords in plain text accidentally. In fact, Facebook was recently discovered to be storing passwords in clear text. Good systems use salted, one-way hashing rather than reversible storage, so that even if a password database is exposed, the passwords are not human-readable.

Malware on your computer capturing text

The whole discussion of malware and how it infects your computer could be a series of blog posts all on its own. Malware often installs software on your device that constantly monitors every key you type. It waits for you to go to a banking site, an ERP system, or an online service and type your password. Then, without your knowledge, your password is transferred back to the hacking group to be used later. Here is the creep factor: the best malware resides on a system for weeks or months without detection, and the whole time it is sending your secrets back to criminals. How to prevent malware: apply patches regularly and run security monitoring software that catches and reports suspicious activity on your network.

Easy-to-guess passwords used in Brute Force, Dictionary or Password Spraying attacks

The most common passwords, like 123456 or "password," are well-known attack targets. Many people use an easy-to-remember password, which can easily be exploited. Hacking organizations use a combination of guessing techniques to break into accounts; this involves using software to guess passwords at a rate of thousands per minute, much faster than anyone could type. There are several different methods for guessing passwords. Dictionary attacks happen when the attacker uses common words found in the dictionary to try to break into an account. Brute force attacks try all possible combinations of characters to access the system. Finally, password spray attacks happen when the perpetrator tries a relatively small number of passwords against a very large number of accounts. How to avoid password attacks: enforce policies that prevent easy-to-guess passwords and limit the number of login attempts to prevent these types of break-ins.
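To see why password length and character variety matter against brute force, here is a minimal back-of-the-envelope sketch in Python. The guess rate is an assumed figure for offline attacks against a fast hash, not a measured benchmark; real rates vary widely with hardware and hashing algorithm.

GUESSES_PER_SEC = 10e9  # assumption: 10 billion guesses/sec against a fast hash

def seconds_to_exhaust(alphabet_size, length):
    """Worst-case seconds to try every password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SEC

for name, alphabet in [("lowercase only", 26), ("letters and digits", 62), ("printable ASCII", 95)]:
    for length in (8, 12):
        print(f"{name}, length {length}: {seconds_to_exhaust(alphabet, length):,.0f} seconds")

# An 8-character lowercase password falls in about 20 seconds at this rate,
# while 12 characters drawn from the full printable set would take over a million years.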
Unfortunately, we humans are usually the weakest link in any system. Social engineering is a huge attack vector for hackers. This can include phishing attempts, where an email that looks legitimate asks users to "log in" to a site made to look authentic when it's really a way to capture usernames and passwords. Other, even more targeted attacks can involve calling users and impersonating someone from IT or another company to trick users into giving away their secret. There are thousands of websites that take advantage of typos in the URL so that you think you're where you intended; instead, you are at a hacking site, handing your credentials to criminals. How to keep your organization safe: educate employees about these types of attacks and conduct regular tests and reeducation, as phishing methods get more and more sophisticated.

Here are some additional ways to help reduce your risk of password hacking.

Use multi-factor authentication. Anytime it's available, accounts should be secured by more than a password.

Perform system maintenance regularly. Make sure you are applying patches at the operating system, application, and systems level on a consistent schedule. Missing one patch means an attacker can exploit that vulnerability the next day. It's something that must be done regularly. Period.

Use sophisticated detection, prevention and monitoring software. Patching your systems isn't enough. You must have software on your network and devices that looks for suspicious behavior, and that software has to come from trusted companies that constantly monitor global threats and update their attack signatures.

Use a password manager. Never reuse passwords. A good password manager can generate passwords for you and make them available wherever you need them. When you do this, your secrets are safe.

Educate your team. While systems, software, and policies are good, they are only as good as the people who know your passwords. Make sure you have a constant source of education and reeducation for the people who use your systems. Then test regularly. Employees need to know and see examples of threats so they know what to look for.

If all this seems a bit overwhelming and a lot for your organization to manage, fear not. Arctic IT is here to help your organization. ArcticCare 365, our award-winning managed service offering, helps you navigate the threat landscape and put into place the policies that limit your exposure. Connect with us today to learn more.
<urn:uuid:caa39fc7-58f0-4dbe-b50f-f8343cc728a2>
CC-MAIN-2022-40
https://arcticit.com/how-hackers-get-passwords-and-what-you-can-do-to-protect-yourself/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00026.warc.gz
en
0.94987
1,229
2.8125
3
Cybersecurity. The science of securing IT infrastructures. Firewalls, access lists, passwords, encryption, penetration testing, intrusion detection, etc. A crazy sysadmin once told me that the only way to completely secure a computer network is to unplug it. At first that statement seems over the top, yet the more you think about it, the truer it gets. There is always a flaw. It's impossible to secure everything. All you can hope for is to add extra time for an attacker to find or accomplish what he set out to do. Regardless of the data you are trying to shield, if somebody is willing to pay for it, hackers will steal it. No one is immune. I've established that much in my previous blog post "Stealing data is easy."

Control is an illusion. Accept that you have no control. All you can do is be prepared for when attacks hit and limit the damage. Chances are you've already been breached, and you will be breached again. The concept of a "breach" is often misunderstood. The school janitor who accidentally sees a student report card on an unlocked computer while cleaning a classroom is technically a breach. Albeit accidental and meaningless, it still constitutes a breach of confidentiality. Leaving a post-it note with a password on your monitor is the equivalent of leaving the keys to a warehouse full of inventory by the doorstep. An unlocked computer is akin to leaving that warehouse door wide open.

Biological viruses attack the weakest links in an organism; hackers attack the weakest links in an organisation. If a kid wants candy, they'll know exactly which parent will bend first. One of them is always weaker. Exploiting the weak link is something that has been going on in society forever, and the cyber world is no different. Employees are the weakest link in a company's cybersecurity plan. In fact, a whopping 95% of cyber-attacks and incidents exploit unsuspecting and uninformed employees, according to IBM's X-Force Threat Intelligence Index. With the aid of social media, a black hat can easily find out who's working at a target company. They can then find their hobbies, birthdays, vacations, etc. You can obtain an abundance of details about an individual within minutes. Attacking that company then becomes child's play. Imagine now if they could plug into your local area network, or worse, connect to your wireless infrastructure from the parking lot. Armed with all those details about key individuals, there is no stopping them. If your users are unaware, all the firewalls in the world won't matter. If there is no one to firmly enforce IT policies, they mean nothing.

So, what to do while you wait? Fortunately, it's not all doom and gloom. Raising awareness of these concepts could be included in your company's orientation package, for instance. Get your users to complete security awareness webinars or subscribe them to a cyber risk evaluation service like Security Aware, which is powered by Beauceron Security. Don't give them anything easy. If you get a flu shot you might still get the flu, but you are protected against certain variations. Use analogies like that to help raise awareness in your organisation. Does this article induce paranoia? If so, mission accomplished! I've made you aware. Now go out there and figuratively unplug that network before they get you!
<urn:uuid:0dad88ad-a3dd-40e7-bd13-10752e9e94a7>
CC-MAIN-2022-40
https://bulletproofsi.com/blog/you-are-the-weakest-link/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00026.warc.gz
en
0.943742
723
2.796875
3
Bite Marks | Objective Opinion and Junk Science

In 1993, the U.S. Supreme Court wrote in its decision in Daubert v. Merrell Dow Pharmaceuticals, Inc., that among the many factors for courts to consider when admitting evidence is the "potential error rate" of a scientific method. The Daubert Standard and the Frye Standard (Frye v. United States, 1923) have come to be the written standards for what forensic evidence can be admitted into trial courts as a basis for a ruling. As science changes over time and our ability to understand our world evolves, one would expect our court processes to evolve with it, keeping up with the science and values of the time. The government has always needed a little prodding to keep up.

If a conviction is to be decided on forensic evidence, then in the interest of truth in decision making, the error rates of the forensic testing involved should also be stated. If you held someone's life in your hands as a juror without a scientific background, wouldn't you want the man in the lab coat who testifies that he believes "X is a match to Z" also to tell you that those lab findings are only 50% correct? These are not slight details. The only sure forensic evidence is DNA evidence. All others are fallible; they are simply junk science. Many individuals have been falsely convicted based on junk science. In the year 2020, thanks to groups like the Innocence Project, a non-profit legal group working to overturn wrongful convictions, the U.S. legal system has been turned on its head and forced to take a hard look at its mishandling of hard science and facts.

What is Forensic Odontology?

Forensic odontology, otherwise known as forensic dentistry, is the application of dental knowledge and skills to the matters of courts, police work, crime-solving, and body identification. Compelling court evidence can be obtained from dental remains. It takes a specialized certification for a dentist to practice forensic dentistry for legal purposes. Some dental findings can be essential in solving crimes or discovering the identity of remains. Forensic dentistry involves the proper handling, examination, and evaluation of dental evidence. From this type of evidence, an examiner may determine age, identity, DNA, race, and even occupation. Teeth can even give clues to the socioeconomic status of the victim. If previous dental records are available, examiners can compare teeth to them to help establish an identification. This identification process is done through a comparison of films taken before and after death.

There are many valid reasons the courts and legal systems use forensic dentistry, such as those listed above. However, some services of forensic odontology have been used to falsely identify and convict individuals. The use of forensic bite marks as a form of suspect identification has come under scrutiny; like many of the forensic sciences, its validity in the courtroom is being questioned.

Bite Marks Used for Identifications

In 2009, the National Academy of Sciences produced a study entitled "Strengthening Forensic Science in the United States: A Path Forward." The report was damning for the American judicial system. It presented problems with every forensic science available except DNA. The scope of the findings can be summed up in this comment.
Excluding DNA analysis, the report found, "no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source." This finding matters tremendously to those whose lives have been impacted by junk science.

Keith Allan Harward was convicted of the 1982 rape of a Newport News woman and the murder of her husband. Harward insisted he was innocent during and after his trial, a trial in which he was convicted largely on bite mark evidence. The jury listened carefully as six forensic scientists testified that Harward's teeth matched bite marks found along the rape victim's legs. For years, courts across the United States accepted forensic odontology as a verified science. Part of the problem, one could suspect, is that legislators are not scientists; holding the public trust and writing legal statutes is not a study in the scientific method. With so many specialists available to advise our judicial system, understanding the probabilities behind the evidence should not have been so difficult.

Most jury members are your neighbors: the local small business owner, the mother of the young boy who mows lawns, a single young man just getting on his way. Juries can be anyone, but a jury with a significant scientific background is unlikely. If the court and its scientists tell you to deduce your decision from the facts as presented, then the fault lies with the court for not understanding the science. After hearing six expert forensic scientists testify that Harward could be identified as the biter, the jury convicted him.

It took many years to get the courts to look at a newer science, DNA analysis, and discover that he was innocent. During the trial, Dr. Lowell Levine, an ABFO board-certified expert, had testified that there was "a very, very, very high degree of probability—so high that it would be a practical impossibility" that anyone else could have bitten the victim. The jury then listened to another ABFO board-certified expert, Dr. Alvin Kagey, who testified that Harward had made the marks "with all medical certainty that there is just not anyone else that would have this unique dentition." On April 27, 2016, the Virginia Supreme Court declared Harward innocent, wrongfully convicted on junk science. Harward had spent 33 years of his life in prison.

Harward has vowed to use his remaining time to prevent others from being wrongfully convicted. He has stated that any time bite mark evidence is used in the future, he will show up. Harward wants everyone to know that it isn't real science. He said, "I will contact the media. I will stand on the street corner in a Statue of Liberty outfit with a big sign saying, 'This Is Crap.'"

Objective Opinions & Junk Science

How could this have happened? There is a great deal of difference between a DNA match and a bite mark match. One test is performed scientifically and measured, and its results are factual. The other rests on human comparison: visual inspection, impressions, and personal opinion. Knowing how the evidence is collected, tested, and compared makes a great deal of difference in how far the "truth" behind the results can be measured.
Of all the different types of evidence the NAS report found faulty, it was the forensic dentists who seemed to get their feathers ruffled the most. Indignation does not become a profession; what should matter most to forensic scientists, regardless of specialty, is finding the truth. The problems with bite mark evidence arise along two avenues of proof: the first is identifying a pattern mark on a victim as an injury left by human teeth; the second is attempting to match the marks to impressions of a suspect's teeth. Regardless of how either of these steps is investigated, measured, or studied, the end conclusions are not scientific at all, but mere personal opinion.

After the release of the NAS report, the President's Council of Advisors on Science and Technology (PCAST) rebuked the entire field of dental forensics in 2016 with this statement: "Available scientific evidence strongly suggests that examiners cannot consistently agree on whether an injury is a human bite mark and cannot identify the source of a bite mark with reasonable accuracy." PCAST went even further, stating that the prospects of developing bite mark analysis into a scientifically valid method were extremely low.

Part of the problem lies in the very nature of bite mark evidence. Marks left on the skin can vary, move, stretch, or change over time. The science begins flawed. When a case is set for analysis, obtaining samples of marks left on the skin is relatively straightforward. Photography, dental casts, transparent overlays, computer enhancement, and electron microscopy are all typical ways to enhance the views and study the affected areas, and swabbing can be done for serology or DNA. What the skin looks like at the time of the study, or what the DNA results show, is direct and easy to understand. What severely limits the use of forensic dentistry to match bite marks is that the marks are distorted to varying degrees by the elasticity of the skin: swelling and healing can move, change, and stretch the original marks. Another issue is that people's teeth change and shift over time. These are only the first of the problems involved in such comparisons, and the likelihood of accurate conclusions from these tests is slim.

While these forensic dentists may believe in their lifelong work, bite mark analysis does not meet the required guidelines for scientific evidence in trial courts. The American Board of Forensic Odontology (ABFO) does list numerous ways in which testing can be done and compared, but it fails to provide probability figures for any of the tests it approves. The trouble is, that is impossible. No studies have been conducted to establish how bite marks differ between individuals. Unlike DNA, there is no central repository of bite marks and dental casts, so there is no way to compare enough individuals to determine the accuracy of bite mark analysis, because it is not science at all, but subjective opinion.

Science admitted into trial courts must satisfy the Frye Standard, the Daubert Standard, or both. Under these standards, the admitted science must be of sound relevance and proven reliability. Bite mark evidence has neither. Since this discipline could never state its probability of error to the courts, it should never have been admitted as evidence in the trial court process to begin with.
What can be done now?

Prosecutors, lawyers, judges, and forensic odontologists must admit these shortcomings and prepare for a better path forward. The entire process must serve the search for truth and the delivery of justice, or the attempt to have a civil society is a wasted effort. Those who have studied forensic dentistry still have much to give the courts, and a long way to go to validate their science. These dentists can still help solve crimes, identify victims, and assist in many other areas of investigative analysis. Bite marks, however, have a way to go.

Bite mark evidence has been behind at least 31 wrongful convictions that have recently been overturned. Robert Lee Stinson served 23 years of a life sentence for the brutal rape and murder of a 63-year-old woman. The only evidence tying him to the crime was bite mark evidence. In 2009, at the age of 44, he was released from prison when the Wisconsin Innocence Project showed that the DNA was not a match.

As recently as 1995, Gerard Richardson was sentenced to 30 years in prison without the possibility of parole. The evidence the jury convicted him on was a bite mark that forensic scientist Dr. Ira Titunik testified "was made by Gerard Richardson… there was no question in my mind." The prosecutor elaborated: "Mr. Richardson, in effect, left a calling card… It's as if he left a note that said, 'I was here,' and signed it because the mark on her back was made by no one else's teeth." With that information, the jury convicted him. On December 17, 2013, the decision was overturned, after Richardson had spent nearly 20 years in prison.

What is the Value Now?

With many more such cases having been, and currently being, overturned, where does that leave the value of collecting this type of evidence for forensic analysts, crime scene investigators, first responders, and law enforcement? Every piece of evidence tells a part of the story. It is wrong to assume that this piece or that bit of evidence will be the "match" that puts a criminal behind bars. All evidence is there to describe the scene, to help the judge, the jury, and others understand what happened, how it happened, and, if possible, why. With crime, though, the "why" does not always make sense to the everyday person, and that is precisely why the collection process includes every detail. Making sense of chaos isn't always easy, not even for experienced judges, and yet we put the verdict in the hands of the inexperienced, who often lack the background to comprehend the reality thrust before them. The value lies in the ability of evidence to weave together the events surrounding the crime so that these onlookers can attempt to understand each player's part in the story. In this light, every piece of evidence has value, and collecting, storing, and processing each piece according to protocol gives any legal recourse its best chance.
<urn:uuid:366b5290-75e6-41b8-85ac-4163c1e45f2b>
CC-MAIN-2022-40
https://caseguard.com/articles/bite-marks-objective-opinion-and-junk-science/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00026.warc.gz
en
0.966654
2,789
2.515625
3
Trial and error attacks on passwords take place in two ways:
- On-line trial-and-error attempts against the site itself
- Off-line attempts against a hashed password file
We construct strong passwords when we try to resist both types of attack. One aspect of this is to avoid well-known passwords, especially those stolen by hackers. There are several sources for these lists. OWASP seems to encourage a collection of lists stored on GitHub.
Have I Been Pwned
The site Have I Been Pwned (haveibeenpwned.com) keeps a list of passwords disclosed from hacked databases and published data dumps. The author, Troy Hunt, has been working on, and writing about, cybersecurity for at least a decade. Hunt's site contains over 600 million passwords, and the site is set up so that password checking actually takes place in your browser; the server never sees the password you're asking about. The technique is rather clever, and described on Hunt's web site. Let's use Have I Been Pwned to check the strength of single-word passwords.
- Make a list of 5 words, some short, some longer, but all less than 8 characters. Type them in and indicate which have or have not been pwned.
- Pick the longest of your pwned words. Add a digit to it and see if that has been pwned.
- Using the longest of your pwned words, add a special character or punctuation mark to the end, and see if that has been pwned.
- Find a word that is longer than 10 letters that has already been pwned.
- Make a list of 5 words containing 10, 11, or 12 letters. Type them in and indicate which have or have not been pwned.
Numbers as Passwords
Now we'll use Have I Been Pwned to look at numerical passwords.
- Type a few randomly chosen digits into Have I Been Pwned and see if it's a pwned password. Don't type repeated digits or sequences. Without erasing the previous digits, type in another digit and see if this password was pwned. If so, continue typing in additional digits and checking until the number is not pwned. How many digits did you need to type?
Pairs of Short Words
This exercise is more interesting if you note which of the following yield pwned words.
- Find a pair of short words, each 3-5 letters long, that has been pwned.
- Add a digit after the two words and see if it has been pwned.
- Put a digit between the two words and see if it has been pwned.
Hashing Passwords
Many desktop systems have a command-line function (or perhaps even an app) that will calculate hash values for files and text strings. The "MD5" hash is sufficient for our purposes. It isn't a particularly strong hash (i.e., not resistant to attacks), but it's widely available, fast, and yields a small value to cut and paste. On macOS and BSD systems there is an "md5" shell command; most Linux distributions provide "md5sum" instead. To hash the password "dogscats" on a Mac, you type this command:
md5 -s "dogscats"
(On Linux, the equivalent is: echo -n "dogscats" | md5sum)
The quotes prevent the shell from trying to treat the password as a file name. If you need to insert special characters in the password, use "\" as the escape character. You may also use your web search engine to find an online site that calculates hashes. The site tools4noobs has one.
More Password Testing
The web site CrackStation.net has built a database of almost 1.5 billion words that could be passwords. The site stores all those words in hashed formats and will look up hashed passwords against them. This gives you a different set of potential passwords to check against.
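For the curious, the in-browser check described above can also be reproduced from a script via the service's public range API, which implements the k-anonymity scheme: only the first five hex characters of the password's SHA-1 hash leave your machine, and the matching happens locally. Here is a minimal Python sketch; the endpoint shown is the one the service documents, but verify the current API docs before relying on the exact response handling.

import hashlib
import urllib.request

def pwned_count(password):
    """Return how many times a password appears in the Pwned Passwords corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent; the full password never is.
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0  # not found in the corpus

print(pwned_count("dogscats"))  # a nonzero count means the password was pwned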
<urn:uuid:2726e7f9-4bc3-4803-aee9-81e2f3f6a346>
CC-MAIN-2022-40
https://cryptosmith.com/train/a/pw-hash/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00026.warc.gz
en
0.935634
838
3.46875
3
Big data can indeed unveil paths to unprecedented growth, for it provides a clear view of the current scenario; it sets a base upon which organizations can build better plans and execute them accordingly. One of the many benefits of a data-driven organization is keeping a digital record of customer behavior and then using that information to develop better strategies. Although the process is challenging, those who learn to tackle the hurdle move their organizations toward a market-ready and competitively secure setup.

While it is extremely beneficial to make decisions based on data-driven insights, many organizations still struggle to understand the optimum use of their big data; as a result, they overlook the potential big data has to transform their organization. Investment in data analytics has indeed increased over the past years, which indicates growing awareness of big data (or DataOps) benefits; however, extracting all the benefits that DataOps can provide remains a feat mostly unachieved, and many organizations end up underestimating big data's potential. Organizations require orientation and planning for the execution of big data to achieve the best outcomes possible.

DataOps is a new way of managing data that promotes high-efficiency communication between data, teams, and systems, paralleling the benefits that DevOps provides. DataOps combines organizational process change, realignment, and available technology to foster an effective working relationship between everyone who handles data (data scientists, engineers, developers, business users, etc.), allowing all users swift access to the target data. Three essential properties of big data underpin the data-driven enterprise:

Volume: Big data keeps a systematic record of massive-scale business transactions, social media exchanges, and information flowing machine-to-machine or from sensors.

Velocity: DataOps, or big data analytics, delivers timely data streams at high speed.

Variety: The data collected spans a full spectrum of formats, from structured, numeric data in traditional databases to unstructured text documents, video, audio, email, or stock ticker data.

With these varied capacities of big data, organizations should implement DataOps at scale. It is not just monetarily beneficial; it also sets a smooth foundation for a variety of allied processes. Utilizing big data well matters even more than merely collecting it: an organization that makes proper use of comparatively few data points will outpace an organization with poor utilization in the race for business solidity and growth. A data-driven enterprise thus enjoys various advantages that other firms don't, such as:

Cost Reduction: Big data tools such as Hadoop and cloud-based analytics help reduce costs drastically, especially when the data is extensive. These tools help organizations use big data more effectively by locating and retrieving it efficiently.

Time-Saving: The high velocity at which data travels in a DataOps model cuts the usual long hours into small segments and gives the organization spare time to use for further growth.
Tools like Hadoop and in-memory analytics identify the target sources immediately and enable quick decisions based on the learnings.

Product Development: With customer data in hand, the enterprise can efficiently analyze market forces and act accordingly. Creating products that satisfy customers' needs is one of the most common strategies that firms embrace nowadays.

Foreseeing Market Conditions: Big data analytics renders the most accurate analysis of market conditions. By keeping a record of customer purchasing behavior and similar data, the enterprise readies itself to cope with future market forces and plan accordingly.

Controlling Reputation: Big data tools can also help enterprises perform sentiment analysis, such as review and rating analysis. Organizations can get clear insight into their current standing and aim to amplify the positives while playing down the negatives.

Creating and operating a data-driven enterprise seems a fundamental choice for organizations today. DataOps approaches allow businesses to manage big data in the cloud through automation; this inculcates a self-service culture that unfolds a variety of benefits for both the organization and the customer.
<urn:uuid:45473027-c304-48e7-a261-c9c59b53ed18>
CC-MAIN-2022-40
https://www.idexcel.com/blog/tag/enterprise-big-data-solutions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00026.warc.gz
en
0.916399
891
2.765625
3
USB (Universal Serial Bus) is the standard for connecting all sorts of devices today. But the "universal" in its name can be a bit misleading, as there are many different types of USB connectors and a few different standards. Let's discuss the various iterations of USB so you know what type of connection to expect with which devices. We'll include images along the way for easy identification.

Types of USB Connectors

You can tell USB cables apart by the connector on either end. Here are the most common types.

USB-A

This is the standard connector, found on one end of almost every USB cable. It's a rectangular connector that only fits in one way. You'll find several USB-A ports on virtually every desktop computer and laptop. Many TVs, game systems, cars, media players, and other devices have one or more, too. You won't find cables with USB-A on both ends, as there's really no situation in which this could be useful. In fact, connecting two computers with such a cable could damage them both.

USB-B

This is an older connector that's not used nearly as often nowadays. It's almost square at one end, and usually plugs into a printer or similar device. Other than these uses, it's been largely overtaken by the newer standards below.

Mini-USB

As the name suggests, this is a smaller connection type that's good for mobile devices. It's been largely superseded by micro-USB, but you'll still find it on some cameras, MP3 players, and other such devices.

Micro-USB

This is a tiny connector that's popular on all kinds of portable devices. Everything from Android phones to external battery packs to Bluetooth headphones uses a micro-USB port. However, some smartphones have moved on to the newer USB-C port.

USB-C

This is the newest USB standard. Unlike older cables, which usually have USB-A on one end and another type on the other, USB-C can connect two devices that both have USB-C ports. Also unlike the above types, it's reversible. USB-C is slowly being adopted by device manufacturers. Many newer Android phones, like the Samsung Galaxy S9 and Google Pixel devices, use USB-C. Apple's newest MacBook and MacBook Pro models only feature USB-C ports, as well.

If you know about USB-C, you may have also heard about the Thunderbolt hardware interface. This is a standard that allows a USB-C port and cable to transfer data at speedy rates, connect to high-resolution displays, and perform other tasks. Not every USB-C port supports Thunderbolt 3, though. For example, Apple's newest MacBook Pro models feature several Thunderbolt 3 USB-C ports, but the standard MacBook's single USB-C port lacks Thunderbolt 3 support. Because of all this, USB-C is a bit confusing: the port can either be a basic USB port similar to the ones above, or it can be a multi-purpose jack, depending on the device. For more details on USB-C, check out the reasons Cable Matters gives for why USB-C docking stations are so useful.

Lightning

This isn't really a USB standard, but we include it for the sake of completeness. Apple has used the proprietary Lightning cable in its mobile devices since late 2012. Like USB-C, it's reversible. iPhone and iPad users plug a Lightning to USB-A cable into their devices to charge, connect to a PC, and more.

USB Speed Standards

Throughout its life, USB has updated its standards a few times. In addition to the types of connectors on each end, each USB cable and port has a speed standard. USB 1.0 was released in 1996, but it wasn't until late 1998 that USB 1.1 arrived and kicked off the era of USB properly.
This could only utilize USB-A and USB-B connectors, and its top speed of 12 Mbps is ancient by modern standards. You're very unlikely to find any USB 1.x devices or cables around today.

USB 2.0

In 2000, USB got a makeover with its 2.0 update. This supports much faster speeds than version 1 could (a nominal 480 Mbps), and introduced support for several of the new ports mentioned above. It's also notable for adding USB OTG (On-The-Go) support, which allows two USB devices to communicate directly. For example, with an adapter, you can connect a standard USB keyboard to an Android phone. USB 2.0 is still used in cheaper flash drives, along with many mice, keyboards, and similar devices. If a cable or port doesn't have any USB 3 markings, as discussed below, it's likely USB 2.0.

USB 3.x

USB 3.0 launched in 2008, with 3.1 and 3.2 iterations coming later. Its biggest upgrade is much faster transfer speeds than USB 2.0 can provide, starting at a nominal 5 Gbps. You'll recognize USB 3.x cables and ports by their blue coloring and/or the "SS" (SuperSpeed) marking. These devices are backwards-compatible, so you can plug a USB 3.x cable into a USB 2.0 port or vice versa. However, doing so limits you to USB 2.0 speeds. Many external hard drives and higher-end flash drives use USB 3. USB-C cables are always USB 3. Older cable types, like micro-USB, require a special connector type for USB 3.0 compatibility. You'll often see this kind of connector on external hard drives so they can take advantage of USB 3 speeds.

Now you know about the various types of USB cables and their uses. Generally, you can plug standard devices like mice and keyboards into a USB 2.0 port, as speed isn't a priority for them. But any device that will transfer data, like an external hard drive, should be plugged into a USB 3 port for best results. Time will tell whether USB-C becomes the standard and largely replaces these. Either way, we'll have USB-A ports around for a long time to support older devices.
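To put those generational speed differences in perspective, here's a small Python sketch estimating best-case transfer times for a 10 GB file at each version's nominal signaling rate. Real-world throughput is noticeably lower once encoding and protocol overhead are accounted for, so treat these as lower bounds.

# Best-case transfer time for a 10 GB file at nominal USB signaling rates.
RATES_MBPS = {
    "USB 1.1 (Full Speed)": 12,
    "USB 2.0 (High Speed)": 480,
    "USB 3.0 (SuperSpeed)": 5000,
}

FILE_BITS = 10 * 8 * 1000**3  # 10 decimal gigabytes expressed in bits

for name, mbps in RATES_MBPS.items():
    minutes = FILE_BITS / (mbps * 1e6) / 60
    print(f"{name}: ~{minutes:.1f} minutes")

# Roughly 111 minutes on USB 1.1, under 3 minutes on USB 2.0,
# and about a quarter of a minute (~16 s) on USB 3.0.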
<urn:uuid:9ce4604c-b3ce-434b-af75-0df0c1adbca6>
CC-MAIN-2022-40
https://www.next7it.com/insights/understanding-usb-cables-ports/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00226.warc.gz
en
0.927064
1,325
2.890625
3
The planet Venus will create a rare spectacle on Tuesday when it passes directly in front of our sun, creating an image for viewers on Earth that won't be repeated until the year 2117. Known as "the 2012 Transit of Venus," the nearly seven-hour journey will begin at 3:09 p.m. Pacific Daylight Time (22:09 UT) Tuesday and will be widely visible around the globe. Observers on seven continents will be in a position to see it, in fact, including even a sliver of Antarctica.

Ideal viewing conditions will be in the mid-Pacific, where the sun is high overhead during the crossing. In the United States, the transit will be best viewed at sunset, treating those on Earth to a rare view of what NASA describes as "the swollen red sun 'punctured' by the circular disk of Venus." To avoid burning their eyes, viewers must not watch the transit directly. Instead, they can watch using a projection technique or a solar filter. A No. 14 welder's glass can also work well, according to NASA. NASA Television will air a live program starting at 5:30 p.m. EDT on Tuesday showcasing the celestial phenomenon.

'The Relative Tilt of the Orbit'

"These transits are rare," Scott Austin, associate professor of astronomy and director of the astronomical facilities at the University of Central Arkansas, told TechNewsWorld. Venus actually ends up between Earth and the sun every 1.6 years, Austin pointed out. The majority of those conjunctions don't result in transits, however, "due to the relative tilt of the orbit of Venus relative to the orbit of the Earth," he explained.

'This Doesn't Happen Often'

Indeed, "the way the orbits of the planets wobble and the fact that Earth has to be in exactly the right position with respect to the sun and Venus mean this doesn't happen that often," noted Paul Czysz, professor emeritus of aerospace engineering at St. Louis University. Venus and Earth are roughly the same size, Czysz told TechNewsWorld, but Venus goes around the sun faster than Earth does because it's closer to it. "You have to be exactly in the right place and at the right time" to see the transit, he added. The sun must cooperate too: it has to be above the horizon at your location so that the transit is visible in daylight.

A Long History

Humans have long been entranced by transits of Venus, not just for their beauty and rarity but also for scientific reasons. The Babylonians and Mayans were able to predict exactly when such events would happen on the basis of observation alone, Czysz pointed out. Later, in the 18th century, "observations of the transit were used to get estimates of distances within the solar system," Mario Livio, senior astrophysicist with the Space Telescope Science Institute, told TechNewsWorld. At that time, the size of the solar system was one of science's biggest mysteries. "Venus transits have been historically scientifically significant for determining the apparent diameter of Venus and its distance," noted Austin.

'Layers of the Venusian Atmosphere'

This year's transit may not be quite as central to our understanding of distances in the solar system, but it's by no means devoid of scientific import. "This transit will allow planetary scientists to study certain layers of the Venusian atmosphere by observing the light from the Sun passing through those layers," Austin explained. "This will help refine this technique when applied to extrasolar planets that are observed to transit their parent stars," he added.

'No One Alive Had Seen a Transit'

Transits of Venus come in pairs separated by more than a hundred years.
Tuesday's transit is the "bookend" of an eight-year pair, with the last occurring in June 2004. At that time, no one alive had seen a transit of Venus with their own eyes, NASA noted, and the hand-drawn sketches and grainy photos of previous centuries scarcely prepared them for what was about to happen. Modern solar telescopes captured an unprecedented view of the event then, and this time around the view should be even better.

Looking at 'Moonshine'

This year's transit will be observed by NASA's Solar Dynamics Observatory, and it will also be observed indirectly by the Hubble Space Telescope, Livio pointed out. "Hubble will be observing the moon to look at 'moonshine' — the reflection of the sun's light by the lunar surface," he explained. "By comparing data in and out of transit, information will be gathered on Venus' atmosphere," Livio noted. Astronaut Don Pettit will also photograph Tuesday's Transit of Venus from the International Space Station, with photos posted to Flickr along the way. Pettit and the Expedition 31 crew will be the first people in history to see a Venus transit from space.

'You'd Burn a Hole'

Perhaps most important of all is that viewers not try to watch the transit directly, because tiny Venus covers too little of the sun to block out its blinding glare. "If you use binoculars, you could burn your eyes," Czysz warned. "With telescopes as large as they are today, you'd burn a hole right through a piece of wood." Instead, viewing must be done through special solar filters or at a planetarium or observatory that's equipped for solar viewing and open for the event, noted Austin, whose observatory at UCA is among those hosting a special viewing. At such facilities, the image of the transit is "run through a series of lenses" and projected, Czysz pointed out, so "you're not looking at any image directly."

Our 'Last Chance'

In any case, there's no doubt taking the necessary safety precautions will be well worth the trouble. "Given that the next transit is not until December of 2117," Livio said, "this is most probably the last chance of anybody alive today to actually see a transit."
<urn:uuid:41fa076b-4843-46af-8160-0e82e7f48359>
CC-MAIN-2022-40
https://www.crmbuyer.com/story/from-venus-with-love-a-once-in-a-lifetime-celestial-show-75294.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00226.warc.gz
en
0.92604
1,387
3.765625
4
Alan Turing was a renowned British multidisciplinary scientist considered the founding father of artificial intelligence (AI) and computer science. Some of Turing's greatest achievements were in the field of cryptography. During World War II, he worked on that conflict's most significant signals intelligence (SIGINT) effort, leading a project to crack Enigma, an encryption device that gave the German military and the Axis powers a material advantage over the Allied forces. In doing so, he and others working at the Government Code and Cypher School (GC&CS) at Bletchley Park built the Bombe, a British decryption device that succeeded at Enigma decryption and helped turn the tide of the war. Turing's contributions to computer science include the design of the Automatic Computing Engine (ACE), one of the first designs for a stored-program computer, and work on the Manchester computers, a series of machines built between 1947 and 1977 that helped usher in today's modern computing. Turing's accomplishments in cryptography in the service of the UK government were popularized in the 2014 film The Imitation Game, starring Benedict Cumberbatch. "Alan Turing created a test called The Imitation Game as a means to determine whether a machine was thinking, and from his white paper came an entirely new way of understanding whether machine intelligence is possible."
<urn:uuid:c0c5da53-eff2-42ad-b414-ea39b0cacfaa>
CC-MAIN-2022-40
https://www.hypr.com/security-encyclopedia/alan-turing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00226.warc.gz
en
0.952752
280
3.4375
3
The shipping industry is a growing source of greenhouse gas emissions within the transportation sector. According to the Organization for Economic Co-operation and Development (OECD), complete decarbonization of the sector would be possible by 2035 if technological measures, operational measures, and renewable energy can be combined effectively. Digitalization has opened access to an integrated network of untapped data that can potentially benefit society and the environment. Intelligent systems connected to the Internet of Things can generate unique opportunities to address the challenges of climate change strategically.

Download this white paper to get insights on:
- Current trends in the shipping industry concerning global emissions.
- The impact of the global transportation industry (with specific attention to shipping logistics) on the environment.
- Emerging concepts of sustainability in the shipping industry.
- How shipping organizations can effectively monitor their carbon footprint and take necessary actions based on insights.
<urn:uuid:97dd945c-5a20-4e63-b3b2-1d31c2a5165d>
CC-MAIN-2022-40
https://www.nagarro.com/en/whitepapers/sustainable-shipping-solutions-for-greener-future
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00226.warc.gz
en
0.902023
185
3.09375
3
Today, data is power. Qualitative, meaningful insights derived from collected data give businesses a competitive edge. However, consumer privacy and the protection of personal information have now been brought within the ambit of the law, greatly limiting what consumer information businesses can use and in what ways. Among the world's most talked-about data privacy laws is the California Consumer Privacy Act (CCPA), which came into effect on January 1, 2020. Its enactment has thrown a spanner into the data discovery efforts of organizations worldwide. The CCPA stipulates certain requirements that must be observed in order to protect consumer information from being misused or used without consent. Considering that big data analytics is expected to reach $68 billion in revenue by 2025, there is an increasing need for a mechanism that empowers the process of data discovery and aligns it with CCPA compliance. Let's see how data discovery can develop a sound foundation that synchronizes with the CCPA.

CCPA and Data Discovery

Owing to accelerated digitalization, cybercrime, AI technologies, and high volumes of online traffic, the collection and processing of consumer data has become more sensitive today than ever. Data discovery walks a tightrope, especially where the CCPA holds sway: it has emerged as one of the most stringent consumer data privacy laws out there. Businesses are required to disclose their data collection practices in accordance with the CCPA, and consumers can choose to have their personal information (PI) deleted or even refuse to have it collected altogether. Data discovery becomes even more complicated when ambiguities arise over what can be classified as personal or sensitive information under the CCPA, with many definitions resulting in clashes or redundancies. The use of robust, cascade-filter tools thus becomes necessary. These tools are empowered by artificial intelligence and machine learning to systematize the process of data discovery in line with CCPA requirements. Aligning the use of big data with privacy laws has tangible benefits, as corroborated by a Cisco report detailing how organizations closer to GDPR compliance experienced fewer breaches and a lower cost per breach than those that weren't.

Establishing Synchrony Between Data Discovery and CCPA Compliance

The world is predicted to generate 181 zettabytes of data by 2025. That is a lot of data. To observe and respect consumer privacy and data protection, businesses will need intelligent tools to tailor a data discovery process that can classify usable information and segregate it from protected data. Privacy compliance can thus be simplified to the point where humans can realistically keep pace with the volumes of data involved. Intelligent software equipped with artificial intelligence and machine learning assists businesses in aligning their data discovery process with CCPA compliance mandates. An example of such software is Secuvy.ai's Data Discovery tool, which employs AI and ML to assist organizations with data privacy compliance by readjusting the way data is processed.

Identifying Compliance Mandates

The first step in overhauling data discovery is to thoroughly understand the data privacy requirements applicable to your business region. In addition, consumer geography must be studied to determine whether any data privacy laws apply to consumers' data as stipulated by the regions they reside in.
Automating data discovery with artificial intelligence promotes more visibility into privacy laws in the context of consumer data and helps operationalize data discovery according to the prescribed regulations. AI data discovery software has the capability to sift and sort through large volumes of data in order to:
- Identify and collect metadata pertaining to sensitive information and tag it
- Create a repository of the data tagged as sensitive
- Boost supervision and contextual "understanding" of collected data to derive accurate insights for safe data use within compliance parameters
- Eliminate the risk of false positives or negatives by intelligently establishing lexical, contextual and interpretative correlations between privacy regulation and data discovery

AI software for data discovery can help organizations offset data breach costs (averaging $3.86 million in total, according to IBM) by strengthening the foundations of data governance.

Data Relationship and Mapping

AI-based data discovery tools assist organizations in improving bad data management practices by automating a large part of their data discovery. With data privacy protocols hardwired into the AI engine, it becomes possible to:
- Improve data visibility of sensitive assets in light of the applicable privacy regulations
- Identify potential compliance/security/privacy breaches and act on them promptly with a view to correcting data mismanagement
- Establish data linkages, relationships, histories, and other correlations between various datasets categorized by subject or other identifiers
- Compile data attributes in a single interface to get the complete picture of your data scene

Data Risk Management and Compliance Monitoring

The CCPA empowers consumers to sue organizations if they find evidence of data misuse. Collecting big data and processing it improperly is thus immensely risky. AI-based data discovery software automates data storage and access, making it easier to locate specific datasets when needed. This also reduces risk severity by enabling prompt action on non-compliance and revisiting appropriate security controls. Additionally, data discovery tools supervise the flow of data in and out of the organization, maintaining security protocols at each stage until the data moves out of their ambit.

While the CCPA restricted the free use of consumer data by organizations, there is still a way to handle all this information ethically and responsibly and run business intelligence effectively. Needless to say, the end-to-end compliance assurance that comes with the automation offered by AI-powered data discovery ecosystems like Secuvy has many benefits to bestow on organizations grappling with high data volumes and a lack of best practices.
<urn:uuid:27009699-a822-41f9-8a4d-e5e2569cdf1e>
CC-MAIN-2022-40
https://secuvy.ai/blog/data-discovery-as-the-foundation-for-ccpa-compliance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00226.warc.gz
en
0.912453
1,199
2.5625
3
The contentious U.S. election campaign offered up many highlights, but the aftermath of election night, with its explosive cyberattack allegations, provided even more intrigue. These weren't run-of-the-mill allegations, either. In fact, U.S. intelligence officials at the CIA and FBI were adamant that Russia was behind cyberattacks during the U.S. election that targeted the Democratic party, part of a bid to hurt Hillary Clinton's presidential hopes and to help get Donald Trump into the White House.

Whether Russian involvement helped Trump to become leader of the free world is up for debate (Russian President Vladimir Putin has scoffed at the allegations). But what is certain is that cybersecurity risks are serious business, and companies need to be aware both of the risks and of how to prevent them. Successful attacks, after all, can cripple corporate networks, decimate bottom lines, and damage reputations among customers and suppliers.

It's easy to assume that all the threats come from outside of organizations, but it's important to understand that serious threats also come from within; cyber criminals are increasingly attacking corporations from the inside rather than the outside, partly to evade detection. Yes, workers can present serious threats to security. What follows, therefore, are some tips on safeguarding businesses from potential insider threats.

- Education is Key

Education is critical if businesses want to reduce the risks of cyberattacks that lead to damaging data breaches. Verizon's 2016 Data Breach Investigations Report notes that a whopping 63% of confirmed data breach incidents were the result of weak, default or stolen passwords. The report adds that cyber criminals, employing social engineering techniques, still have little trouble convincing people to click on links that lead to pages requesting personal information. For instance, the 2016 report shows that 30% of phishing messages were opened, compared to 23% in 2014, and 12% of targets ended up opening the malicious attachments or clicking on the links, versus 11% in 2014. What this means is that businesses have to educate their workers so that these workers don't become the weak links that end up compromising their networks.

- Manage Access

Businesses that put in place solid identity and access management policies can lessen the odds of being victimized by cyberattacks, since they will be able to govern which employees have access to what information. Robust policies will help businesses to validate workers' identities, which will then provide employees with access to only the amount of information, sensitive or otherwise, that they need to do their jobs. It's also important that businesses monitor the online behavior of their employees. This is particularly important when it comes to accessing information that could potentially be used for financial gain, and there needs to be a clear process for revoking access right away if necessary.

- Mobile Considerations

In an age when many employees use mobile devices, both company-issued and personal, it's critical that businesses not only recognize the potential threats, but also address these threats with appropriate actions. According to one source, 61% of workers use their mobile devices for both work-related and personal purposes, yet many of these same workers don't get training on how to properly use their mobile devices.
The 2016 Data Breach Investigations Report, meanwhile, notes that security incidents are often caused by workers who, for instance, lose their laptops or mobile devices. It adds that 39% of theft occurs in victims' work spaces and 34% occurs in workers' personal vehicles. So companies need clear policies to ensure that workers understand how to safely use their mobile devices.

The threats facing corporations in this digital age are very real, as cyber criminals grow ever more resourceful at finding ways to access corporate networks. While it's important for businesses to be wary of external threats that could lead to data breaches, it's also important for them to be mindful of internal threats when working on cybersecurity policies. This means engaging their employees so that they don't become the weak links.

By Ian Palmer

Having earned a Bachelor of Journalism from Carleton University in Ottawa, Ontario, Canada in 1999, Ian has covered a wide range of technology issues over the years and has written for IT related sites such as InfoSec Institute and Linux.com
<urn:uuid:3fd00576-d0bf-402c-adb8-251abb85bf8b>
CC-MAIN-2022-40
https://cloudtweaks.com/2017/04/cybersecurity-policies-must-address-internal-threats/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00226.warc.gz
en
0.962432
893
2.65625
3
Processor speeds have increased dramatically in recent years. As a result, the heat given off by processors has also increased, as has the noise associated with equipment, such as fans, used to keep them running at a safe temperature. Because water can conduct heat about 30 times faster than air can, a water cooling system allows the processor to run at higher speeds while drastically reducing system noise. Some industry experts predict that water cooling systems will become standard for personal computers in the near future. Water cooling is increasingly used to deal with the special requirements of the data center. Because data centers are often assigned the most convenient available space, rather than a space that is specially designed, servers may be contained in too small an area or one that cannot be adequately ventilated. Consequently, plumbing is often required for water cooling, which can limit the flexibility of data center design because systems connected to plumbing cannot be easily rearranged
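As a rough sanity check on the figure above, compare textbook thermal conductivities; the values below are common room-temperature approximations, and moving water improves on this further through convection:

k_water = 0.6   # W/(m*K), approximate thermal conductivity of water
k_air = 0.026   # W/(m*K), approximate thermal conductivity of still air

print(f"Water conducts heat roughly {k_water / k_air:.0f}x better than still air")  # ~23x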
<urn:uuid:344fa07b-782a-400a-b243-7a92f0d20fe9>
CC-MAIN-2022-40
https://cyrusone.com/resources/tools/water-cooling/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00426.warc.gz
en
0.962624
184
3.59375
4
As you read a sentence, its meaning may be clear even before you reach its end. This illustrates our topic. Our minds process text sequentially. As we read, the context presented to us by an author develops in our minds. What precedes clarifies what follows, and vice-versa. This phenomenon is a result of efficiency. It's how language works. Reducing the number of symbols we use simplifies communication in one sense, but it also forces us to adopt complications like words and grammar. Few of us write with hieroglyphs anymore. Consequently, we render our thoughts in the form of longer streams of consciousness, like this paragraph. The more reading we do, the better we are at predicting what's looming ahead. Yet we still must let our author's picture become complete in our mind before we are sure we "get" the meaning.

In data mining, the problem of contextual meaning is lessened when data is structured as it is in a database, where the meaning of a value is implied by its location. Grammar isn't required with a structure like this. The meaning of a series of digits in a phone number column, for example, can be taken for granted. Knowing the meaning of a value allows us to apply rules to the value. We can readily see when data is malformed. In other words, data cleansing, which is crucial to data mining, becomes possible only once we know what variations are allowable based on context.

Text Data Mining

Unfortunately, data mining isn't always about structured data. Text mining, or text data mining, is about comprehending natural language and extracting high quality information from it. Natural languages have structure, too. These structures are generally more complex than a schema, especially one designed for data mining. Because of these inherent complexities, entire technologies have arisen to extract, parse and analyze text. At the same time, an increasing amount of stored data is becoming subject to privacy measures, especially encryption. Clearly, data mining operations must access the plain expression of meaning, or plaintext, in order to mine it for useful information. In the case of structured data, the unit subject to encryption may be a relatively small set of symbols, perhaps only a field or row. When data is structured, encryption can be efficiently applied in various dimensions.

It should be noted that when text is encrypted, the strength of the encryption might depend on the amount of data being encrypted at one time. For example, AES (Advanced Encryption Standard) and Triple-DES (Data Encryption Standard) symmetric ciphers often use cipher-block chaining techniques to strengthen overall security by feeding the output of encrypting one block of data into the next encryption operation, making cryptanalysis more difficult. The desired result of encryption is a large mass of bits that provide no contextual reference for the underlying plaintext, which poses a challenge to text data mining. Existing rudimentary approaches to accessing encrypted data include separating the decipherment and mining operations into sequential stages. A negative implication of this approach is that obtaining a sufficiently large text sample for analysis may require exposing too much plaintext for too long a period. Conversely, decrypting too little data may fail to reveal the proper context of the information and lead to flawed analysis.

Approaches to Secure Mining

It may seem strange to contemplate allowing encrypted text to be mined at all.
However, text that is valuable for mining isn’t necessarily public information. Furthermore, mining text may not necessarily compromise data security, considering that the result of data mining may simply be an aggregation — or the rules that govern an inference engine or a neural network — rather than the details of the text itself. Consequently, some form of control is necessary because data is not always mined by its owner. What emerges from this is the need for an engagement between the interests of the data miner and those of the data owner. In matters of law, fault may be avoided by adhering to the terms of a contract. In IT, it is avoided by adhering to a protocol. Therefore, what remains is to develop both text mining strategies and protocols that efficiently engage streams of encrypted text for data mining without violating security policy. We can apply a service-provision metaphor to text data mining by defining the service as either (a) the simple access to the data (fig. 1), or (b) the mining operation itself, which is conducted by the owner on behalf of the mining interest or “consumer” (fig. 2). In the first case, the consumer retains the mining function, perhaps because the consumer’s techniques are valued intellectual property. During the mining, the consumer has access to the text in its original form. In the latter case, mining is provided as a service. This simplifies the interface to the data and allows the owner to restrict any view on the data. This approach requires the consumer to trust the mining methods of the owner. The quality of the mining and/or analysis is only as good as the technology to which the owner has subscribed. A third approach (fig. 3) allows the consumer to first provide the data owner with the “method” of mining in the form of a mining object, to which the owner will subject the data. This “middle” approach both protects access to the text and enables the use of the consumer’s competitive technology. To determine the nature of an interface between the consumer and the data owner, we first enumerate the rights of the data owner with respect to data access. This is crucial because we must ensure that the consumer’s “methods” do not conflict with the owner’s data security policies. For example, the data owner may have the “rights” to: - Restrict mining to aggregations (sums, averages) as opposed to allowing specifics (names and numbers) - Restrict mining to generalizations (“most,” “some,” “many”) as opposed to direct measures (“maximum,” “minimum,” “average”) - Restrict any access to certain data elements, such as identification numbers (SSN, credit-card numbers, etc.) and/or data related to certain groups of individuals such as minors - Restrict mining to (or from) certain date ranges Pursuing such “qualitative” attributes implies not only an ability to symbolize and encode representations of such dimensions, but it also implies a uniformly acceptable process to identify new criteria and extend the protocol dynamically, following the model of the ITU-T X.690 extensible standard for object encoding rules. So far, we’ve approached text data mining assuming that data being mined always resides in a static “place” at the time it is being mined. An alternative scenario (fig. 4) envisions secure data in transit being subject to mining en route. Secure data transport technologies such as HTTP Applicability Statement 2 support the inclusion of various metadata specifying how the data was “packaged” (i.e. compressed, encrypted, digitally signed, etc). 
Enabling data in transit to be securely mined can be accomplished by extending this metadata to include the owner’s mining “policy” and sufficient technology to enforce the owner’s restrictions. The approaches presented here highlight some emerging challenges facing data and text mining in a technological environment growing increasingly sensitive to security and privacy concerns. David Walling is CTO of nuBridges, an e-business security provider.
<urn:uuid:31fcf89b-c74a-49da-a522-afdb29746a50>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/the-theory-and-practice-of-secure-data-mining-60819.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00426.warc.gz
en
0.927901
1,568
3.09375
3
Artificial intelligence is becoming the fastest disruptor and generator of wealth in history. It will have a major impact on everything. Over the next decade, more than half of today's jobs will disappear and be replaced by AI and the next generation of robotics. AI has the potential to cure diseases, enable smarter cities, tackle many of our environmental challenges, and potentially redefine poverty.

There are still many questions to ask about AI and what can go wrong. Elon Musk recently suggested that under some scenarios AI could jeopardise human survival. AI's ability to analyse data, and its accuracy, is enormous. This will enable the development of smarter machines for business. But at what cost, and how will we control it? Society needs to seriously rethink AI's potential and its impact on both our society and the way we live.

Artificial intelligence and robotics were initially thought to be a danger to blue-collar jobs, but that is changing: white-collar workers, such as lawyers and doctors, who carry out purely quantitative analytical processes are also becoming an endangered species. Some of their methods and procedures are increasingly being replicated and replaced by software.

For instance, researchers at MIT's Computer Science and Artificial Intelligence Laboratory, Massachusetts General Hospital and Harvard Medical School developed a machine learning model to better detect cancer. They trained the model on 600 existing high-risk lesions, incorporating parameters like family history, demographics, and past biopsies. It was then tested on 335 lesions, and they found it could predict the status of a lesion with 97 per cent accuracy, ultimately enabling the researchers to upgrade those lesions to cancer.

Traditional mammograms uncover suspicious lesions, whose findings are then tested with a needle biopsy. Abnormalities would undergo surgery, with around 90 per cent usually turning out to be benign, rendering the procedures unnecessary. As the amount of data and the number of potential variables grow, human clinicians cannot compete at the same level as AI. So will AI take the clinician's job, or will it just provide a better diagnostic tool, freeing up clinicians to build a better connection with their patients?

Confusion around the various terminologies relating to AI can warp the conversation. Artificial general intelligence (AGI) is where machines can successfully perform any intellectual task that a human can do, sometimes referred to as "strong AI" or "full AI". That is where a machine can perform "general intelligent actions". […]
<urn:uuid:89a307ef-92ce-438d-8cab-b0655bfb2423>
CC-MAIN-2022-40
https://swisscognitive.ch/2018/06/16/artificial-intelligence-bring-new-renaissance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00426.warc.gz
en
0.949217
497
3.3125
3
It is statistically proven that teenage drivers cause road accidents at a far higher rate than adult drivers. Despite knowing the risks of using a mobile phone while driving, 70% of young drivers admit to texting or making calls when they drive. Frequent use of mobile phones while driving leads to poorer lane-keeping, slower response times, insufficient distance from vehicles in front, and reduced awareness of surroundings. The popularity of social networking games such as Pokemon GO is also contributing to a rise in road accidents, as many users tend to play the game while driving. Hence, the requirement for technology to control a driver's phone while driving is paramount. Only by implementing a smart portable system can you control the abrupt use of mobile phones while driving. The system should be able to control and monitor a driver's behavior whenever the driver violates a threshold and risks a road crash.

Challenges Caused by Distracted Young Drivers

Studies conducted by various governmental and non-governmental agencies prove that the consequences of using mobile phones while driving are more fatal than talking to a fellow passenger. The tendency to use mobile phones is greatest among youngsters, especially teenagers. Messaging while driving is considered the most harmful type of distraction. A mobile phone impairs a driver's ability to control the vehicle and anticipate hazards. Existing collision-averting systems can help to an extent, but cannot save lives on their own. Some countries do not permit the use of radio frequency blockers to jam mobiles. Even the use of devices like Google Glass for texting can impair driving performance.

Ensuring a Safe Driving Ecosystem with OBD and Mobile Technologies

Techniques like smartphone accelerometers and mobile sensors are beneficial in detecting driving behavior while using mobile applications. However, these kinds of applications require high-performance computational capabilities. This is where Fingent can help you. Fingent proposes a hybrid (hardware and software combined) solution to monitor teen driver behavior by tracking mobile phone usage and setting up a mechanism to block incoming calls while driving, specifically when the car's speed reaches a certain threshold.

OBD-II and Controller

This includes a wired on-board diagnostics (OBD)-II device and a controller module which comprises GPS (Global Positioning System), an SD card, an RPi2, and a Bluetooth dongle. Bluetooth communication happens between the controller module and the mobile device.

Driver and Parent Mobile Apps

The driver has to download and install the safety app on their mobile device. When the driver launches the safety app and pairs the mobile with the controller through Bluetooth, the data is transferred to the web back-end.

A Web Back-End

The trip data is sent to the web back-end using the mobile internet. This data also includes speed graphs generated at regular intervals and the route plotted on a map.

- Parents can stay in control of the driving habits of their teenage children by restricting mobile usage while driving.
- The driver's mobile is locked automatically when the speed exceeds the threshold (that is, above 10 mph); a minimal sketch of this logic appears at the end of this section.
- The driver's mobile is unlocked when the driver slows down below 10 mph.
- Trip information becomes available to the parents of the teen driver when the car engine is switched off or when mobile Bluetooth is disconnected.
- All incoming calls except whitelisted numbers are blocked when the car's speed reaches the threshold.
- A custom lock screen appears on pressing the mobile power button.
- The driver can either call the whitelisted numbers or open a custom map, providing the source and destination to get the route.

How Fingent can help?

Fingent helps you leverage emerging technologies in innovative ways to redefine your business processes. With a dedicated team of engineers who stay abreast of the latest technologies and an agile methodology of development, we convert our clients' out-of-the-box concepts into solutions with ease.

CONSISTENT HIGH-QUALITY RESULTS
- Top-notch developers
- Dedicated quality assurance team
- Adherence to QA best practices
- Agile development process

COST-EFFECTIVE, BUDGET-FRIENDLY SOLUTIONS
- No last-minute surprises
- Efficient operations
- Good coding practices
- Expertise in latest technologies

PREDICTABLE RESULTS ADHERING TO DEADLINES
- Transparent project management
- Warranty-assured deliverables
- Carefully defined project plans
- Handle unexpected challenges
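Returning to the speed-threshold behavior described in the feature list above, here is a minimal, illustrative sketch in Python. The phone interface, function names and whitelist value are invented for clarity; this is not Fingent's production code, which runs across the controller and the mobile app.

SPEED_THRESHOLD_MPH = 10          # lock the phone above this speed
WHITELIST = {"+15551230000"}      # e.g., a parent's number (illustrative)

def on_speed_update(speed_mph, phone):
    # Called whenever the controller reports a new OBD-II speed reading over Bluetooth.
    if speed_mph > SPEED_THRESHOLD_MPH:
        phone.lock_screen()                   # show the custom lock screen
        phone.block_calls(allow=WHITELIST)    # only whitelisted callers get through
    else:
        phone.unlock_screen()                 # driver slowed down; restore access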
<urn:uuid:29bd8fd9-677c-49a3-83ac-e367ad0e8843>
CC-MAIN-2022-40
https://www.fingent.com/uk/usecases/restricting-mobile-phone-usage-while-driving-using-obd/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00426.warc.gz
en
0.891099
936
3.0625
3
Update – October 5, 2015: The creators of Linux.Wifatch respond to our blog posting, explaining the reasons for their actions.

The following story could well work as the script of a Hollywood movie or superhero comic. Let me introduce you to Linux.Wifatch, one of the latest pieces of code infecting Internet of Things (IoT) devices. We first heard of Wifatch back in 2014, when an independent security researcher noticed something unusual happening on his home router. The researcher identified running processes that didn't seem to be part of the legitimate router software and decided to investigate further. During his analysis he discovered a sophisticated piece of code that had turned his home router into a zombie connected to a peer-to-peer network of infected devices.

Lately we've seen that home routers, and IoT devices in general, are becoming more interesting to cyber crooks. These devices may not hold a lot of interesting data, but under the control of criminals they have proven to be quite useful, for instance, for launching distributed denial-of-service (DDoS) attacks. As well as this, it's difficult for the average user to detect if one of these devices has become infected, and so most infections go unnoticed.

In April of this year, we were provided with some additional information on Wifatch. At first sight there was nothing unusual about it: as part of Symantec's efforts to identify malware targeting embedded devices, we run a large network of honeypots that collect many samples, and Wifatch seemed to be just another of these threats. However, after a closer look, this particular piece of code looked somewhat more sophisticated than the average embedded threat we usually spot in the wild.

A force for good or a force for evil?

During our analysis we began to unveil some of Wifatch's secrets. Most of Wifatch's code is written in the Perl programming language, it targets several architectures, and it ships its own static Perl interpreter for each of them. Once a device is infected with Wifatch, it connects to a peer-to-peer network that is used to distribute threat updates.

The further we dug into Wifatch's code, the more we had the feeling that there was something unusual about this threat. For all intents and purposes, it appeared that the author was trying to secure infected devices instead of using them for malicious activities. Wifatch's code does not ship any payloads used for malicious activities, such as carrying out DDoS attacks; in fact, all the hardcoded routines seem to have been implemented in order to harden compromised devices. We've been monitoring Wifatch's peer-to-peer network for a number of months and have yet to observe any malicious actions being carried out through it.

In addition, there are some other things that seem to hint that the threat's intentions may differ from traditional malware. Wifatch not only tries to prevent further access by killing the legitimate Telnet daemon, it also leaves a message in its place telling device owners to change passwords and update the firmware.

Figure 1. Message left by Wifatch

Wifatch has a module that attempts to remediate other malware infections present on the compromised device. Some of the threats it tries to remove are well known families of malware targeting embedded devices. The threat author left a comment in the source code that references an email signature used by software freedom activist Richard Stallman (Figure 2).

Figure 2. Comment in Wifatch source code

Wifatch's code is not obfuscated; it just uses compression and contains minified versions of the source code. It would have been easy for the author to obfuscate the Perl code, but they chose not to. The threat also contains a number of debug messages that enable easier analysis. It looks like the author wasn't particularly worried about others being able to inspect the code.

The threat has a module (dahua.pm) that seems to be an exploit for Dahua DVR CCTV systems. The module allows Wifatch to set the configuration of the device to automatically reboot every week. One could speculate that because Wifatch may not be able to properly defend this type of device, its strategy is instead to reboot it periodically, which would kill running malware and set the device back to a clean state.

Hint of a dark side?

Despite the previously listed actions, it should be made clear that Linux.Wifatch is a piece of code that infects a device without user consent, and in that regard it is the same as any other piece of malware. It should also be pointed out that Wifatch contains a number of general-purpose back doors that can be used by the author to carry out potentially malicious actions. However, cryptographic signatures are verified upon the use of the back doors to verify that commands are indeed coming from the malware creator. This reduces the risk of the peer-to-peer network being taken over by others.

We believe that most of Wifatch's infections are happening over Telnet connections to devices using weak credentials. After monitoring Wifatch's network for a number of months, we estimate it to include somewhere in the order of tens of thousands of devices. The following charts show the breakdown of affected countries and the architectures of infected devices.

Figure 3. Breakdown of infected countries

Figure 4. Breakdown of infected architectures

A selection of ARM architectures makes up the bulk of infected devices, with MIPS and SH4 making up the majority of the remainder. PowerPC and X86 both account for an insignificant percentage (0.132 percent collectively).

There is no doubt that Linux.Wifatch is an interesting piece of code. Whether the author's intentions were to use their creation for the good of other IoT users, vigilante style, or whether their intentions were more malicious remains to be seen. What we do know is that it pays to be suspicious and, with this in mind, Symantec will be keeping a close eye on Linux.Wifatch and the activities of its mysterious creator.

Resetting an infected device will remove the Wifatch malware; however, devices may become infected again over time. If possible, users are advised to keep their device's software and firmware up to date and to change any default passwords that may be in use.

Update – October 5, 2015: The author of Linux.Wifatch has responded to our blog and posted a Q&A to explain their actions. The following is an extract from the response.

Why did you write this and let it go?
First, for learning. Second, for understanding. Third, for fun, and fourth, for your (and our) security. Apart from the learning experience, this is a truly altruistic project, and no malicious actions are planned (and it nice touch that Symantec watch over this).

Why release now?
It was never intended to be secret. And to be truly ethical (Stallman said) it needs to have a free license (agree) and ask before acting (also agree, so only half way there).

Why not release earlier?
To avoid unwanted attention, especially by other malware authors who want to avoid detection. Plan failed, unwanted attention has been attracted, so release is fine.

Who are you?
We are nobody important. Really.

Do you feel bad about abusing resources by others?
Yes, although the amount of saved bandwidth by taking down other scanning malware, the amount energy saved by killing illegal bitcoin miners, the number of reboots and service interruptions prevented by not overheating these devices, the number of credentials and money not stolen should all outweigh this. We co-opted your devices to help the general public (in a small way).

Can I trust you to not do evil things with my devices?
Yes, but that is of no help - somebody could steal the key, no matter how well I protect it. More likely, there is a bug in the code that allows access to anybody.

Should I trust you?
Of course not, you should secure your device.

Why is this not a problem?
Linux.Wifatch doesn't use elaborate backdoors or 0day exploits to hack devices. It basically just uses telnet and a few other protocols and tries a few really dumb or default passwords (our favourite is "password"). These passwords are well-known - anybody can do that, without having to steal any secret key. Basically it only infects devices that are not protected at all in the first place!
<urn:uuid:08b8cc26-82eb-4651-aa72-611acc8d1618>
CC-MAIN-2022-40
https://community.broadcom.com/symantecenterprise/communities/community-home/librarydocuments/viewdocument?DocumentKey=ef23b297-5cc6-4c4a-b2e7-ff41635965fe&CommunityKey=1ecf5f55-9545-44d6-b0f4-4e4a7f5f5e68&tab=librarydocuments
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00626.warc.gz
en
0.948037
1,793
2.6875
3
BFD is a protocol used in networks to quickly detect link failures and hence speed up the convergence of routing protocols. Different routing protocols have different mechanisms to detect link failures; for example, OSPF uses hello packets and a dead interval, while EIGRP uses hello packets and hold-down timers. Although convergence for OSPF, EIGRP and other protocols is reasonably fast, some real-time applications such as VoIP may still face issues and require much faster network convergence. We can fine-tune the hello and dead/hold timers to increase convergence speed, but that isn't recommended in production networks.

BFD can run independently of any other protocol; however, other protocols (OSPF, EIGRP, BGP, HSRP, etc.) can use BFD for link failure detection instead of their own mechanisms. Whenever a link failure occurs, BFD notifies the routing protocol that link loss has occurred, and the protocol using it will tear down the neighbor relationship immediately and start the process to re-converge.

BFD operates in two modes: asynchronous mode and demand mode. In asynchronous mode, BFD keeps sending hello packets, and if some hello packets are not received, the session is torn down. Demand mode is different: once BFD has found a neighbor, it won't continuously send control packets but only uses a polling mechanism. Another method has to be used to check reachability; for example, it could check the receive and transmit statistics of the interface. Right now Cisco (or any other vendor I know of) doesn't support BFD demand mode.

Let us have a look at an example where we use OSPF between two routers with and without BFD. OSPF will run between the two routers. We will then shut the port Fa0/0 on R1 and see that R1 immediately breaks the OSPF neighborship as it detects the directly connected link down. R2, however, takes time to break the OSPF neighborship, waiting until the dead timer expires.

We see below that R1 and R2 are OSPF neighbors. Now let's shut down the port Fa0/0 on R1 and notice how much time R2 takes to break the neighborship. As seen below, R1 immediately breaks the OSPF adjacency, while R2 took approximately 15 seconds to break the OSPF neighborship.

Now we enable BFD. We use BFD with OSPF, then again shut Fa0/0 on R1 to break the OSPF neighborship and check how much time R2 takes to break it.

Enabling BFD on R1 and R2:
- The BFD interval is the time in milliseconds after which a BFD packet is sent.
- The second value to configure is the minimum receive interval. This is how often we expect to receive a BFD packet from our neighbor.
- The last value to configure is the multiplier (hold-down): the number of consecutive missed BFD packets after which the session is declared down.

Now that BFD is configured, let's configure OSPF to use it. Once that is done, when we shut the Fa0/0 port on R1, we see both R1 and R2 break the OSPF adjacency immediately.
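For reference, the configuration steps described above map to IOS commands along these lines. The interface name and timer values are illustrative; consult your platform's documentation for supported ranges.

R1(config)# interface FastEthernet0/0
R1(config-if)# bfd interval 50 min_rx 50 multiplier 3
R1(config-if)# exit
R1(config)# router ospf 1
R1(config-router)# bfd all-interfaces

With interval and min_rx at 50 ms and a multiplier of 3, a failed link would be declared down in roughly 150 ms, far faster than OSPF's default dead interval.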
<urn:uuid:e76eaa9a-1d60-412d-bab5-f6be0c393a77>
CC-MAIN-2022-40
https://ipwithease.com/bfd-bidirectional-forwarding-detection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00626.warc.gz
en
0.925292
686
2.640625
3
The positive implications of the arrival of the quantum era are obvious. However, not all potential uses of quantum computing power are quite so benign. Quantum computers are ideally suited to solving complex mathematical problems, such as the factoring of large numbers, which is at the core of asymmetric cryptosystems. This has serious implications for cybersecurity.

Indeed, cybersecurity relies on a rather restricted number of cryptographic primitives. Foremost among them are the well-known RSA and ECC algorithms. RSA is based on the hardness of factoring, and ECC on the closely related hardness of computing discrete logarithms; neither assumption holds once quantum computers are available.

As the critical lifespan of data gets longer, the danger becomes more tangible. Data stolen today does not have to be decrypted today to hold value. Financial, healthcare and intellectual property data stolen today could still be relevant in 10 years' time. The ability to download now and decrypt later means that, even if they only become available in several years, quantum computers pose a genuine threat to data security today.

In response to the threat of the quantum computer, there is a need to replace the current cybersecurity infrastructure with a new quantum-safe one. For this purpose, cybersecurity innovators are turning to a variety of technologies.

First, one can replace current cryptographic algorithms, which will not withstand the arrival of the quantum computer, with a new set of quantum-resistant algorithms, also known as post-quantum algorithms. The search for suitable algorithms has been formalised by a process led by NIST in the USA. Candidates for various cryptographic functions are currently under scrutiny. Standardisation is expected within 4 to 5 years. However, there is a distinct possibility that new quantum algorithms, i.e. algorithms operating on quantum computers, may threaten these. The risk may be unreasonable for data with high and long-term value.

Alternatively, in an interesting twist, one can use quantum technologies themselves, and in particular quantum cryptography, to counter the emerging threat. Advances in the development of quantum key generation and quantum key distribution (QKD), for example, are well underway. QKD is a breakthrough technology that exploits one of the fundamental principles of quantum physics (observation causes perturbation) to ensure forward secrecy of encryption keys across an optical fibre network, or across free space. Any attempt to eavesdrop on the network would be detected, and passive interception is rendered impossible. Using QKD now will provide immediate protection for your data in the face of today's brute force attacks, ensure that data with a long shelf life is protected against future attacks, and safeguard high-value data in a post-quantum computing world.

Unlike the quantum computer, QKD is already a reality. There are a number of real-world installations of QKD already in place. This includes a 2,000 km-long infrastructure backbone in China, used to secure data exchanged between Beijing and Shanghai (and all points in between). This is currently being extended to an 11,000 km backbone, which will cover most of Eastern China. In Europe, the QComm hub has recently launched the UK Quantum Network (UKQN), while other QKD networks are planned in several countries. In the USA, real-world QKD implementation is led by a private company, Quantum Xchange, which is building a key-as-a-service infrastructure to link financial companies in the New York area, with more to come.
ID Quantique is a pioneer in the field of quantum security solutions, delivering a certified source of entropy for key generation with our Quantis range and provably secure key exchange via our Clavis and Cerberis range of products. To get the latest updates on what’s happening in the quantum world, subscribe to our newsletter.
<urn:uuid:7730ff21-72bb-40be-8cc6-a7f6251968aa>
CC-MAIN-2022-40
https://www.idquantique.com/quantum-safe-security/quantum-computing/cybersecurity-implications/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00626.warc.gz
en
0.926465
768
3.140625
3
In a recent blog post, I wrote about the Evolution of Network Protocol Architectures. As I previously stated, networking technology is not static. It is constantly evolving, and yet many times when organizations refresh or 'modernize' their network, they purchase shiny new network switches but insist on deploying the same networking protocols that operated on the previous network. In that post, I provided a historical look at how networking protocols matured as computer network architectures transformed. The differences between a traditional networking protocol stack and the 802.1aq method are at the heart of the conversation. 802.1aq transparently extends Layer 2 connectivity, regardless of physical topology or location.

To further visualize the continuing evolution of network protocol architectures, we have created a short video.

The overall evolution of network protocol architectures will, of course, continue. 802.1aq is by no means the end of the evolution. As we move further towards smart infrastructure and communities, this evolution will continue to provide the automation and scale required for networks.
<urn:uuid:e9ed0b72-f6e1-4922-a885-d3f2c7befba2>
CC-MAIN-2022-40
https://www.extremenetworks.com/extreme-networks-blog/evolution-of-network-protocol-architecture/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00626.warc.gz
en
0.916496
205
2.53125
3
Do you know how much your account information is worth to criminals? You might be shocked to find out that credentials you believe are incredibly important fetch only $10-20 on the dark web. What's worse is how much financial damage stolen data could cost you later on. These thieves are taking part in credential theft, a market expected to exceed $18 billion by 2024. Let's find out more about credential theft and how to protect yourself against it.

What Is Credential Theft?

Credentials are specific data or authentication tools that are required to verify a person's identity. When the credentials match, that person is authenticated and granted access to a particular system or network. Theft occurs when an adversary has the intent, capability, and opportunity to intercept that information. In other words, credential theft happens when criminals are able to intercept your personal information.

One way hackers get credentials is through phishing. Verizon finds that most attacks target not the wealthy, but the unprepared. Further, Verizon found that 76 percent of attacks were financially motivated. Security expert Brian Krebs found that, indeed, credentials sold for an average of $15. He also came across a screenshot from a dark-web service that showed one crook who earned more than $288,000 in just a few months by selling stolen data.

How Can You Prevent Credential Theft?

It is nearly impossible to prevent your information from appearing on the dark web. With tens of thousands of data breaches every year, you should practically expect to be on it. However, you can take steps to prevent the information that's on the dark web from allowing hackers to access your accounts.

First, practice good email security awareness. Don't send your credentials to someone else in an email, and don't click on a link from an unknown sender.

Second, use strong, unique passwords that change on a regular basis:
- Combine upper and lowercase letters, symbols, and numbers
- Change your password periodically
- Use a different password for every site

Since you likely access dozens (if not hundreds) of accounts, consider using a password manager. Password managers can help generate, store, retrieve, and change your passwords for you.

Finally, use multifactor authentication (aka MFA or two-factor authentication). This requires you to verify your login credentials with a combination of two of the following:
- Something you know, such as a password or PIN
- Something you have, like a mobile phone or debit card
- Something you are, like a fingerprint or facial recognition

An example of how MFA works: if you get a notification on your phone that somebody in India is trying to access your bank account and you're in the U.S., you know somebody you haven't authorized is trying to get into your account. You can deny the login attempt. Then, change your password using the guidelines listed above.

The best strategy for protecting against credential theft includes having strong, regularly updated passwords to nullify the effects of having your data on the dark web. That will help protect you from would-be attackers.
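As a concrete illustration of the password advice above, here is a minimal sketch using Python's standard secrets module (illustrative only; for day-to-day use, a reputable password manager is the more practical choice):

import secrets
import string

def generate_password(length=16):
    # Mix upper and lowercase letters, digits, and symbols, chosen with a CSPRNG.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run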
<urn:uuid:f77bdbb0-e709-4b8a-84e9-874178f8443b>
CC-MAIN-2022-40
https://blog.integrityts.com/the-best-strategy-for-protecting-against-credential-theft
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00626.warc.gz
en
0.921719
651
2.765625
3
An Introduction to Linux Security Modules (LSMs)

The Linux security module (LSM) framework, which allows security extensions to be plugged into the kernel, has been used to implement MAC on Linux. LSM hooks in the Linux kernel mediate access to internal kernel objects such as inodes, tasks, files, devices, and IPC. LSMs, in general, refer to these generic hooks added in the core kernel code. Further, security modules can make use of these generic hooks to implement enhanced access control as independent kernel modules. AppArmor, SELinux, Smack and TOMOYO are examples of such independent kernel security modules. LSM seeks to allow security modules to answer the question "May a subject S perform a kernel operation OP on an internal kernel object OBJ?" LSMs can drastically reduce the attack surface of a system if appropriate policies using security modules are implemented.

DACs vs. MACs

DAC (Discretionary Access Control) is a means of restricting access to objects based on the identity of subjects or groups. For decades, Linux only had DAC-based access controls in the form of user and group permissions. One of the problems with DACs is that their primitives are transitive in nature: a privileged user can create other privileged users, and those users in turn gain access to restricted objects.

With MAC (Mandatory Access Control), the subjects (e.g., users, processes, threads) and objects (e.g., files, sockets, memory segments) each have a set of security attributes. These security attributes are centrally managed through MAC policies. In the case of MAC, the user/group does not make the access decision; the access decision is governed by the security attributes. LSMs are a form of MAC-based controls. LSM hooks are applied after the DAC and other sanity checks are performed. The LSM hooks are applied to core kernel objects, and these hooks are dereferenced through a global hooks table; each security module (AppArmor, for example) registers its hooks in this table when it is initialized.

TOCTOU problem handling

LSMs are typically used for a system's policy enforcement. One school of thought is that enforcement can be handled in an asynchronous fashion, i.e., kernel audit events could pass an alert to userspace, and userspace could then enforce the decision asynchronously. Such an approach has several issues: the asynchronous nature might allow a malicious actor to cause actual damage before the actor can be identified. For example, if the unlink() of a file object is to be blocked, the asynchronous approach might allow the unlink to succeed before the attack can be blocked. LSM hooks, by contrast, are applied inline in the kernel code path, and the kernel has the security context and other details of the object while making the decision. Thus the enforcement is in line with the access attempt, and any blocking/denial action can be performed without TOCTOU (time-of-check to time-of-use) problems.

Security Modules currently defined in the Linux kernel

LSMs are generic hooks, but if a new security module has to be introduced, the new module can be operated as a kernel module. However, the introduction of a new security module also requires changing the core kernel code to introduce this module to the kernel. Several modules are already defined in the default Linux kernel, including SELinux, AppArmor, Smack, TOMOYO, and the capabilities module. Among these, AppArmor and SELinux are undoubtedly the most widely used.
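To see which security modules are active on a running system, you can read the securityfs interface; a one-line sketch in Python, assuming securityfs is mounted at its usual location:

# Prints the active LSMs in initialization order, e.g. ['capability', 'yama', 'apparmor']
with open("/sys/kernel/security/lsm") as f:
    print(f.read().strip().split(","))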
AppArmor is relatively easier to use, but SELinux provides more extensive and fine-grained policy specification. Linux POSIX.1e capabilities logic is also implemented as a security module.

There can be multiple security modules in use at the same time. This is true in most cases: the capabilities module is always loaded alongside SELinux or any other LSM. The capabilities security module is always ordered first in execution (controlled using the .order = LSM_ORDER_FIRST flag). However, note that the AppArmor, SELinux, and Smack security modules initialize themselves as exclusive (LSM_FLAG_EXCLUSIVE) security modules. There cannot be two security modules in the system with the LSM_FLAG_EXCLUSIVE flag set; thus, no two of SELinux, AppArmor, and Smack can be registered simultaneously.

Permissive hooks in LSMs

Certain POSIX-compliant filesystems depend on the ability to grant accesses that would ordinarily be denied at a coarse (DAC) level of granularity (check the capabilities man page for CAP_DAC_OVERRIDE). LSM supports DAC override (a.k.a. permissive hooks) for particular objects such as POSIX-compliant filesystems, where the security module can grant access the kernel was about to deny.

Security Modules: A general critique

LSMs, as generic MAC-based security primitives, are very powerful. The security modules allow the administrator to impose additional restrictions on the system to reduce the attack surface. However, if the security module's policy specification language is hard to understand or debug, the administrator usually disables it altogether, creating friction in adoption.

- LSMs are not designed to prevent a process from being attacked.
- Good coding practices, configuration management, and memory-safe languages are the tools for that.
- The protections provided by LSMs do, however, help protect your system from being hacked when an attacker exploits flaws in one of the running programs.

Reference: Linux Security Modules: General Security Support for the Linux Kernel, Wright & Cowan et al., 2002
<urn:uuid:7769107e-9155-49bd-9e57-e7b4f5203031>
CC-MAIN-2022-40
https://www.accuknox.com/blog/an-introduction-to-linux-security-modules
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00026.warc.gz
en
0.894436
1,238
2.78125
3
We are further down the road of A.I. As we grow in understanding, so, too, do we grow to understand its differences. In 2020, we can classify artificial intelligence into 4 distinct types. The types are loosely similar to Maslow's hierarchy of needs, where the simplest level only requires basic functioning and the most advanced level is the Mohammad, Buddha, Christian Saint: an all-knowing, all-seeing, self-aware consciousness.

The four A.I. types are:
- Reactive Machines
- Limited Memory
- Theory of Mind
- Self Aware

We are currently well past the first type and actively perfecting the second. At the moment, the third and fourth types exist only in theory. They are to be the next stage of A.I. Let's take a look.

Reactive Machines

Reactive Machines perform basic operations. This level of A.I. is the simplest. These types react to some input with some output. No learning occurs. This is the first stage of any A.I. system. A machine learning model that takes a human face as input and outputs a box around the face to identify it as a face is a simple, reactive machine. The model stores no inputs and performs no learning.

Static machine learning models are reactive machines. Their architecture is the simplest, and they can be found on GitHub repos across the web. These models can be downloaded, traded, passed around and loaded into a developer's toolkit with ease.

Limited Memory

Limited Memory types refer to an A.I.'s ability to store previous data and/or predictions, using that data to make better predictions. With Limited Memory, machine learning architecture becomes a little more complex. Every machine learning model requires limited memory to be created, but the model can get deployed as a reactive machine type.

There are three major kinds of machine learning models that achieve this Limited Memory type:

Reinforcement Learning
These models learn to make better predictions through many cycles of trial and error. This kind of model is used to teach computers how to play games like Chess, Go, and DOTA2.

Long Short Term Memory (LSTMs)
Researchers intuited that past data would help predict the next items in sequences, particularly in language, so they developed a model that uses what is called Long Short Term Memory. For predicting the next elements in a sequence, the LSTM tags more recent information as more important and items further in the past as less important.

Evolutionary Generative Adversarial Networks (E-GAN)
The E-GAN has memory such that it evolves at every evolution. The model produces a kind of growing thing. Growing things don't take the same path every time; the paths get slightly modified because statistics is a math of chance, not a math of exactness. In the modifications, the model may find a better path, a path of least resistance. The next generation of the model mutates and evolves towards the path its ancestor found in error. In a way, the E-GAN creates a simulation similar to how humans have evolved on this planet. Each child, in perfect, successful reproduction, is better equipped to live an extraordinary life than its parent.

Limited Memory Types in practice

While every machine learning model is created using limited memory, it doesn't always remain that way when deployed. Limited Memory A.I. works in two ways:
- A team continuously trains a model on new data.
- The A.I. environment is built in a way where models are automatically trained and renewed based on model usage and behavior.
For a machine learning infrastructure to sustain a limited memory type, the infrastructure requires machine learning to be built into its structure. Increasingly common in the ML lifecycle is Active Learning. The ML Active Learning Cycle has six steps:
1. Training Data. An ML model must have data to train on.
2. Build ML Model. The model is created.
3. Model Predictions. The model makes predictions.
4. Feedback. The model gets feedback on its predictions from human or environmental stimuli.
5. Feedback becomes data. Feedback is submitted back to a data repository.
6. Repeat Step 1. Continue to iterate on this cycle.

Theory of Mind

We have yet to reach Theory of Mind artificial intelligence types. These are only in their beginning phases and can be seen in things like self-driving cars. In this type of A.I., A.I. begins to interact with the thoughts and emotions of humans.

Presently, machine learning models do a lot for a person directed at achieving a task, but current models give us only a one-way relationship with A.I. Alexa and Siri bow to every command. If you angrily yell at Google Maps to take you another direction, it does not offer emotional support and say, "This is the fastest direction. Who may I call and inform you will be late?" Google Maps, instead, continues to return the same traffic reports and ETAs that it had already shown and has no concern for your distress.

A Theory of Mind A.I. will be a better companion. Fields of study tackling this issue include Artificial Emotional Intelligence and developments in the theory of Decision-Making. Michael Jordan presented some of his Decision-Making research at the May 13th event, The Future of ML and AI with Michael Jordan and Ion Stoica, and more coverage was presented at the ICLR 2020 conference.

Self Aware

Finally, in some distant future, perhaps A.I. achieves nirvana: it becomes self-aware. This kind of A.I. exists only in story, and, as stories often do, instills both immense amounts of hope and fear into audiences. A self-aware intelligence beyond the human has an independent intelligence, and, likely, people will have to negotiate terms with the entity they created. What happens, good or bad, is anyone's guess.

Are there other AI types?

There are other types of A.I. that the more tech-oriented crowd observes. They follow a similar outline but get written about with a stronger foundation in what the A.I. is used for, what it is capable of, and how it helps advance humanity. These three types are:
- Artificial Narrow Intelligence
- Artificial General Intelligence
- Artificial Super Intelligence

Whichever way you break down A.I., know that A.I. is a strong software tool for the future that's here to stay. A.I. is eliminating repetitive tasks in the workforce and elevating humans to reach higher selves, embracing constant states of change and creativity.
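Returning to the six-step Active Learning cycle above, here is a minimal runnable sketch in Python using scikit-learn. The data and the "oracle" labels are simulated stand-ins for real-world feedback.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))               # feature pool
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # hidden "true" labels (the oracle)

labeled = list(range(20))                   # Step 1: small initial training set
unlabeled = list(range(20, 500))

for _ in range(5):                          # Step 6: repeat the cycle
    model = LogisticRegression().fit(X[labeled], y[labeled])  # Step 2: build model
    probs = model.predict_proba(X[unlabeled])[:, 1]           # Step 3: predictions
    uncertain = np.argsort(np.abs(probs - 0.5))[:10]          # most uncertain samples
    queried = [unlabeled[i] for i in uncertain]
    labeled += queried                      # Steps 4-5: feedback becomes training data
    unlabeled = [i for i in unlabeled if i not in set(queried)]

print(f"Model now trained on {len(labeled)} labeled examples")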
<urn:uuid:47fa8a14-1103-40d9-9dfd-92408e62316e>
CC-MAIN-2022-40
https://www.bmc.com/blogs/artificial-intelligence-types/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00026.warc.gz
en
0.940791
1,390
3.125
3
3 RF Behaviors to Know as a Wireless Engineer

Radio frequency (RF) can be a challenging subject, especially for those who aren't accustomed to it or have recently changed jobs or roles. More networking roles than ever require a basic or intermediate understanding of RF. One of the first steps in understanding RF as a whole is learning its basic behaviors, so you know what to expect, and the units and descriptions that go with them, so you can speak about RF in the right context.

What are RF Behaviors?

RF is confusing initially for many people. We're used to the behaviors of light, air or water. RF doesn't exactly behave like any of them, but it does have qualities of each. RF is nominally a wave, but it's a bit more complicated than that. If you could discretely isolate a single RF wave, it would behave more like a ray of light than anything else. Most of the motile properties of RF come in here, so let's conceptualize it as a ray for now.

What is Absorption?

RF energy, like all other types of energy, has the property of absorption, or rather the objects it strikes do. Different frequencies have different rates of absorption in different objects as well. Absorption means the energy from the wave is taken in and dispersed within the object. In particular, the energy isn't just taken in; it's changed into a different sort of energy within the object itself. The important thing to remember is that absorption doesn't have to be complete, but absorbed signals do not continue.

For IoT, a good frequency range to talk about is 2.4 GHz. You might be shocked to learn that you likely have one of the highest-energy emitters of 2.4 GHz radiation in your home right now. Don't panic, it's your microwave. Your microwave emits 2.4 GHz radiation at a much, much higher power than your WiFi AP does. The 2.4 GHz frequency range is absorbed excellently by water, meaning it changes electromagnetic energy into heat energy very efficiently. This is how it heats your food. The same happens outside when the sun makes your skin warm: it's the solar electromagnetic rays being absorbed and changed to heat.

What are Reflection and Refraction?

Continuing with the light analogy, RF has a pair of properties called reflection and refraction, with the same meaning as in casual English. The energy bounces off of the object instead of being absorbed as above. This property has an inverse relation to the previous one: the portion of a signal that is reflected cannot be absorbed, and the converse is true as well. When you look in a mirror, light that bounces off of your body is reflected into your eyes and absorbed there.

Notable here is the difference between reflections and refractions, which comes down to the angle at which the light bounces off of a given object. Think of bouncing a laser off of a mirror: the laser comes back off of the mirror at exactly the opposite angle it came in at. In numerical terms, if you send a laser straight at a mirror, the reflection comes straight back at you; at any other angle, the beam leaves at an angle equal and opposite to the angle of incidence. Refraction, on the other hand, is when the signal or light leaves the medium at an angle different from the one reflection would predict. Objects viewed in water show this phenomenon well. You're seeing refraction when an object in water appears to be broken, smaller or larger, or at an odd angle when you know it to be straight.
In reality, light is moving at a different speed and angle through two different mediums to produce this effect, or through inconsistencies in a single medium. This is why things seem to waver when viewed in water. In RF, metal in your environment will likely present your greatest risk of reflection. Beware of large metal devices and windows when considering reflection. Windows vary, depending on the glass and coating, in how much they reflect and how much passes through.

What are Scattering and Diffraction?

Scattering is when the signal's wavelength is larger than the discrete pieces that make up the medium it's bouncing off of. It results in the signal being bounced at odd angles, in directions that are somewhat unpredictable. These bounces can be both reflections and diffractions. Think of a disco ball here: a straight beam of light hits the ball and scatters off in several directions at once.

A reflection does not change the individually transmitted waves themselves; it only alters their direction. A diffraction, however, may change the signal itself. Diffraction bends the path that the light or signal takes. This is roughly demonstrated by fun-house mirrors that make you seem taller or shorter, skinnier or wider. What's happening here is that the path of the light bouncing off of your body is being altered, so that a small portion of your body is represented as a much larger one; the light that would take up a square inch is instead spread to cover a square foot by the time it arrives at your eyes.

How Does This Apply to RF?

Each of these properties is fairly common in the wild for a design engineer, whether for WiFi, IoT or even cellular. When you're designing, if you treat a mirror or office windows as you would a wall, you'll be pretty shocked when you do your validation survey. RF is a wave, which means it will spread out to cover more ground, but the same principle still applies. This is one reason that best practice is not to place an omnidirectional antenna next to metal. Metal is highly reflective and will send the energy right back into your antenna, causing a whole host of issues.

When designing, take heed of the materials surrounding your APs and your clients. For example, a common use of IoT devices is to report statistics of devices. Consider a freezer. Without knowledge of the materials, you might try to place a sensor and the accompanying AP as close as possible to the source of the information inside the freezer. Freezers are usually made of metal, though, and as discussed are likely RF impermeable, so your signal will bounce off of the walls through reflection instead of continuing to the next AP or gateway. The same applies to pipes in a refinery, but with refraction. This can lead to hotspots and dead spots in your coverage.

Likewise, a common source of absorption is water. Many industrial spaces will have tanks, storage vessels, even drinking water that will nearly entirely absorb and neutralize your signal. That doesn't mean that all absorption is bad. A quick truism for design is "Signal where you want it, not where you don't." It sounds silly, but RF in places where it isn't explicitly desired is a security risk and makes your network more difficult to manage. Knowing the expected behavior of your network and RF in general is essential to both design and troubleshooting.
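As a rough illustration of that expected behavior, here is a minimal back-of-the-envelope link budget sketch in Python; it is an illustration, not something from the article. The free-space path loss formula is standard, but the per-material attenuation values are assumed placeholder figures: real losses vary widely with thickness, coatings and construction, so treat them as stand-ins rather than reference data.

```python
import math

# Assumed, illustrative attenuation per obstruction at 2.4 GHz, in dB.
# Real-world losses vary widely with thickness, coatings and construction.
MATERIAL_LOSS_DB = {
    "drywall": 3,
    "glass_window": 4,   # coated glass can reflect far more
    "brick_wall": 8,
    "metal_wall": 26,    # e.g., a freezer or machine cabinet
    "water_tank": 30,    # water absorbs 2.4 GHz very efficiently
}

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Standard free-space path loss: 20log10(d) + 20log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def received_power_dbm(tx_dbm, distance_m, freq_hz, obstructions):
    """Estimate received signal strength after path loss and obstructions."""
    loss = free_space_path_loss_db(distance_m, freq_hz)
    loss += sum(MATERIAL_LOSS_DB[m] for m in obstructions)
    return tx_dbm - loss

# A sensor 10 m from the AP at 2.4 GHz, with and without a freezer wall:
clear = received_power_dbm(20, 10, 2.4e9, [])
behind_metal = received_power_dbm(20, 10, 2.4e9, ["metal_wall"])
print(f"Clear path:         {clear:.1f} dBm")
print(f"Through metal wall: {behind_metal:.1f} dBm")  # 26 dB of margin gone
```

A tool like this only flags the obvious problems; it is no substitute for a proper site survey.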
It behooves any designer or admin to understand these behaviors in the context of their particular network before beginning to troubleshoot. This is especially true in complex environments and when entering a new environment. If you're using software to aid your design or troubleshooting, be sure to double-check what the software does and does not simulate. Not all software models all of these properties well, as doing so is very computationally expensive, and this can lead to some surprises when you go back to validate the network after implementation. Avoid these surprises at all costs by knowing what to expect from your RF.
<urn:uuid:f8eb7904-03d5-4013-99ee-c0d7d4514a6d>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/technology/networking/3-rf-behaviors-to-know-as-a-wireless-engineer
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00026.warc.gz
en
0.961904
1,562
3.03125
3