October marks the 18th annual National Cybersecurity Awareness Month, and industry leaders are urged to invest time re-imagining their ability to defend against a debilitating ransomware attack. Professionals outside the technology sector hear a great deal about ransomware when major incidents occur, but with wide-reaching business concerns on their plates, few manage to set aside enough time to understand the intricacies of a ransomware attack. The following guidance will help you protect your livelihood and avoid being hacked.

1. What Exactly is Ransomware?

Everyday people typically do not follow computer security trends closely enough to understand how a ransomware hack occurs. But most people understand that a ransomware attack can disrupt even Fortune 500 corporations, largely due to the splashy national headlines that follow a high-profile cybersecurity hack. Ransomware ranks as the most popular and preferred method of infiltrating and seizing control of an organization's digital assets. Although most companies deploy standard antivirus software and malware detection tools, many are late to the party as cybercriminals exploit vulnerabilities.

Ransomware is a type of malicious software that is flexible in terms of delivery. Hackers can deploy it through email to an unsuspecting employee who mistakenly downloads a file or clicks a malicious link. Sophisticated hackers can penetrate a network and deposit ransomware in ways only the most determined virus detection tools can identify. These schemes often give digital thieves time to let the infestation spread to networks in a company's orbit. Once the malware has been activated, cybercriminals take operational control of the entire company and demand a cryptocurrency payment – in other words, a ransom. Those who pay the thieves are promised a decryption code to unfreeze their network. Cybercriminals do not always follow through on their promises.
It's also essential to know that 80 percent of ransomware victims who pay digital thieves reportedly suffer a second attack. Another 46 percent indicated their data was corrupted even after decryption.

2. What Makes a Ransomware Attack Different from Another Cybersecurity Hack?

With National Cybersecurity Awareness Month here, let's dig into the different methods and nefarious goals of hackers. Nearly 550,000 computer security attacks involving ransomware were carried out daily during 2020, to the tune of $20 billion in business losses. Hackers reportedly develop upwards of 300,000 strains of malware each day to overcome inadequate or outdated antivirus software and virus detection tools. Here are some of the markers that distinguish a ransomware hack from others.

- A ransomware attack is motivated by financial gain or espionage 95 percent of the time
- A ransomware hack follows the street-crime modus operandi of a kidnapping
- Cybercriminals who leverage a ransomware scheme typically negotiate a payoff
- Ransomware hackers openly take control of digital assets
- A ransomware hack most likely deploys malware through email, but it can also arrive via brute-force computer security penetration, social engineering, or covertly inserted malicious files

Among the many ways a ransomware attack differs from other security hacks are its brazen openness and defined cycle. Other hacking schemes typically involve siphoning off or copying valuable personal information and digital assets for sale on the dark web. These are more akin to late-night burglaries and petty theft than big-league ransomware.

3. Aren't Just Big Companies Targeted with Ransomware?

A common misconception holds that only large corporations with the resources to pay millions to cybercriminals are targeted. That notion is largely driven by the media attention paid to large-sum payouts and disruption.
Many recall the recent Colonial Pipeline ransomware attack that caused widespread gasoline and diesel fuel shortages along the U.S. Eastern Seaboard. Coverage was splattered across television and print media for weeks, and the organization reportedly doled out upwards of $4.4 million to regain control of its network. Although they garnered less media attention, computer manufacturer Acer reportedly faced a $50 million ransom demand in 2021, and Kia Motors reportedly faced a $20 million demand.

While seldom making front-page news, approximately 46 percent of small and mid-sized businesses have been the subject of a ransomware attack at least once. Victims of a ransomware hack reportedly paid the digital thieves 73 percent of the time. Payments, usually in the form of cryptocurrency, ranged between $10,000 and $50,000 just over 40 percent of the time, and at least 13 percent paid over $100,000. The lower payouts might make it seem that small and mid-sized organizations are not on a ransomware attack hit list. Nothing could be further from the truth.

4. Who Can be Impacted by a Ransomware Hack?

Following the Colonial Pipeline ransomware attack, Senate Judiciary Committee member Chuck Grassley released a statement that highlights the imminent threat ransomware attacks pose to companies of every size and sector. "Ransomware does not just affect the deeper pockets of large companies like Colonial Pipeline and JBS. An estimated three out of every four victims of ransomware is a small business," Sen. Grassley reportedly stated. "Ransomware often originates from countries with a permissive law enforcement environment that allows these cybercriminals to flourish." The key takeaway is that every organization, from Fortune 500 companies to small and medium-sized businesses, as well as federal, state, and municipal government entities, requires determined IT security defenses.
It's important to take a proactive approach to your organization's cybersecurity preparedness. You should always be on the defensive, with constant monitoring of your IT infrastructure. Whether your infrastructure supports a dispersed, global business or six desktops in a small law firm, it's imperative to be using the latest antivirus scanning, endpoint protection, cybersecurity awareness training, and malware removal tools, among others. Hackers target outfits with seemingly weak computer security and fleece them for every penny possible.

5. Examples of Ransomware Attacks

Ransomware ranks among the most destructive types of malware, and it has a long and infamous history. This brand of digital theft has been around for decades, and each generation of online criminals adds to business losses. Whether spread by email, worms, or other attack vectors exploiting computer security vulnerabilities, ransomware victims often pay without reporting the incident; the FBI confirms that few ransomware victims ever report it. These are high-profile examples of ransomware that have plagued honest business people.

- AIDS Trojan – Considered one of the earliest ransomware programs, it was transmitted via floppy disk. Victims were scammed into making a $189 payment to a Panama address. The perpetrator was eventually caught and donated the money to AIDS research.
- WannaCry – Although the payment demand ranged from only $300 to $600, WannaCry became something of a household name because it targeted outdated versions of Microsoft Windows. It's one of the reasons cybersecurity professionals are hyper-vigilant about updating software today.
- CryptoLocker – This malware used a type of automated shakedown scheme in which victims could click through, make a crypto payment, and receive a decryption key. The scheme reportedly scammed victims out of $3 million.
- Bad Rabbit – This remains a good example of how sophisticated hackers operate.
So-called Russian hackers used a phony Adobe Flash update to trick people into downloading their ransomware. It also stands as a good example of why the most current and determined AV scanning and malware detection tools are necessary.
- SamSam – This ransomware exploits computer security vulnerabilities such as weak passwords. Hackers also use social engineering and brute-force attacks to gain leverage over systems.

Cybercriminals have been developing next-generation ransomware almost since the advent of the personal computer. Without the most proactive malware detection capabilities, organizations remain at risk.

6. How Will I Know if My Business is Hit by a Ransomware Attack?

As you've probably heard, "An ounce of prevention is worth a pound of cure." This Benjamin Franklin quote is very applicable here. Would you know if a ransomware attack was underway? Sometimes yes. And sometimes, by the time you recognize such an attack, it may be too late. This is why monitoring for malicious activity is imperative.

Although the cybercriminals who deploy ransomware seem to act swiftly, that perception does not necessarily mirror reality. It's not uncommon for hackers halfway around the world to cast a wide net that includes hundreds, if not thousands, of potential targets. Bulk email strategies, called "phishing," send messages to random people. This low-level tactic plays the odds that at least one unsuspecting employee will click on a malicious link or download a file. That misstep doesn't always trigger a real-time response from the hacker to pounce on your network. In some cases, the malicious file may need time to spread. In others, cybercriminals may wait patiently as it migrates across your business network and those of others. This is why monitoring tools are important. The key point is that there will likely be a window of opportunity to bring malware removal tools to bear.
The earlier an intrusion is identified, the easier it is to prevent it from getting too far. That's why knowing the following telltale signs of a possible ransomware attack, and how to respond to a cybersecurity hack, is crucial.

Staff members receive electronic messages asking them to download files or click on links. These phishing schemes are often accompanied by an incentive or a reason for urgency. By bringing such emails to the attention of supervisors and computer security specialists, AV software can be activated and the malicious file purged.

When hackers breach the computer security of one workstation, they typically begin searching for information they can use to overrun the organization. This may entail discovering a domain or company name and understanding the admin access of a given computer or profile. To gather this information, hackers leverage some type of network scanning tool. Discovering a foreign network scanner often means your system has been penetrated and proactive defensive measures are necessary.

Antivirus Software Disabled

Once a ransomware attack is underway, hackers typically try to disable antivirus scanning packages. Those that are not enterprise-level can be sidelined with relative ease by clever hackers. If your antivirus scan or software isn't functioning properly, the system may have been breached.

When a business system appears to repeat a behavior each day, that may indicate malicious files are hiding under the radar. It's essential to understand that even if malware was recently removed, savvy cybercriminals continue to create new ransomware every day. Cybersecurity is a chess match between honest developers protecting business networks and hackers trying to pry open vulnerabilities. Having the latest AV scanning and malware detection capabilities remains mission-critical for business survival.
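As a rough illustration of the network-scanner warning sign above, a monitoring script can flag hosts that touch an unusually large number of distinct ports in a short window. This is a minimal sketch, not a production intrusion detection system; the sample flow records and the 100-port threshold are assumptions made for the example.

```python
from collections import defaultdict

# Each flow record: (source_ip, destination_ip, destination_port).
# In practice these would come from NetFlow, Zeek, or firewall logs.
SCAN_THRESHOLD = 100  # distinct ports; tune for your environment

def find_scanners(flows, threshold=SCAN_THRESHOLD):
    """Return source IPs that probed more distinct ports than `threshold`."""
    ports_by_source = defaultdict(set)
    for src, dst, dport in flows:
        ports_by_source[src].add((dst, dport))
    return sorted(
        src for src, targets in ports_by_source.items()
        if len({port for _, port in targets}) > threshold
    )

if __name__ == "__main__":
    # A host sweeping ports 1-200 on one machine looks like recon;
    # normal clients touch only a handful of well-known ports.
    flows = [("10.0.0.99", "10.0.0.5", p) for p in range(1, 201)]
    flows += [("10.0.0.7", "10.0.0.5", p) for p in (80, 443)]
    print(find_scanners(flows))  # only the sweeping host is flagged
```

Real scanners randomize timing and sources to evade exactly this kind of count, which is why the article's advice to use current, determined monitoring tools (rather than a one-off script) stands.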
Recon Ransomware Attack

Sophisticated hackers may probe organizations with limited cybersecurity attacks that target the computer security of only a few workstations. The point of these small-scale incursions is to gather information about an organization's ability to respond in kind and potentially repel a ransomware attack. Hackers usually expect in-house IT support teams to take a victory lap after deterring the threat. Truth be told, a tsunami of malware could be coming down the pipe. It's critical to promptly follow up small-scale attacks and other telltale signs with big-league AV scanning, antivirus software deployment, and malware detection, and to quickly close any cybersecurity gaps before you're talking to a hacker about crypto payments.

7. What Do I Do If This Happens?

A ransomware attack causes disruption in terms of productivity and lost digital assets, and it can tarnish your business or brand. Industry leaders can do two specific things to protect their operations: harden their defenses and have a fallback position. The fallback position involves diligently backing up digital files to multiple servers and hard drives that cannot be reached through the primary network. By conducting thorough daily backups and keeping them in a different location, a ransomware hacker only holds sway over yet-to-be-secured data. That may put you in a position not to pay the cryptocurrency demand. But the best defense may be investing in robust software and detection capabilities that deter threat actors. Keep in mind that hackers are not looking to work hard; they're looking for easy money, and you can make that a non-starter.

8. How Can I Protect my Business and Infrastructure from a Ransomware Attack?

The first step to achieving determined cybersecurity involves gathering information about evolving security industry strategies and the latest technologies used to prevent breaches like ransomware attacks.
There's no better way to celebrate National Cybersecurity Awareness Month over the coming weeks than to become more attuned to the risks, like ransomware, that could impact your business when you least expect it. You may want to consider providing a refresher security awareness seminar for your team, or perhaps even conducting a network health check to identify any potential threats your business faces. We'd be happy to help you determine what would work best for your unique needs; just let us know when you'd like to talk.
We've crystallized carbon and hidden it away in concrete. Entire hyperscale data centers are powered by the wind and the sun. Air, free and abundant, can be used in lieu of water to cool data centers — even in a place like Houston (which is saying a lot, based on the summer we're having in Texas). Efficiency gains have helped to limit electricity demand from data centers to the point where they now account for only 1% of global electricity demand, even after data consumption shot up by almost 40% in the spring of 2020.

Still, the industry continues to challenge itself to do better by the environment, striving to reach the zero-carbon holy grail. The final frontier in the fight to get that last bit of operating emissions out of the atmosphere is the uninterruptible power supply system. Diesel engine generators have been the go-to since the beginning of data center time. They pack a lot of power into a relatively small footprint, and if fuel is available, a data center can stay up and running through a really long power outage. Of course, what diesel generators offer in terms of practicality is offset by carbon emissions. And so, data center developers and operators are bringing a lot of creative options to the table, approaching the problem from a number of angles.

Green backup power

Hyperscalers like Facebook and Google have rolled out interesting alternatives in recent years, and Compass announced our newest effort to go green earlier in the summer. Here's a quick rundown of some promising front-runner alternatives to diesel-powered generators.

- Alternative fuels — Earlier this summer, Compass announced a partnership with Foster Fuels to become one of the first data center providers to use hydrogenated vegetable oil (HVO)-based biodiesel to fuel on-site generators. Using HVO-blended fuels has the potential to reduce a facility's greenhouse gas (GHG) emissions by approximately 85% versus traditional diesel.
It also significantly reduces the particulates and sulfides emitted.
- Lithium-ion batteries — In 2014, Facebook began testing lithium-ion (Li-ion) batteries for backup power. As Li-ion batteries have become increasingly available and affordable, several data center operators have begun placing them on individual racks, bypassing the traditional valve-regulated lead-acid battery UPS for backup power. Given their longer life, smaller footprint, and environmental advantages, Li-ion batteries now account for 15% of the data center battery market, and adoption is expected to reach 38.5% by 2025. However, fire protection concerns loom large with Li-ion solutions.
- Google's battery park — Google just completed a two-year test of a "battery park" at a data center in Belgium. The battery park replaced diesel-powered generators at a site whose 10,665 solar panels generate a total of 2.9 GWh of electricity, 5.5 MWh of which can be stored. When not in use, the battery park feeds power back into the Belgian grid.

The reliability of a data center is everything, and it hinges on backup power. It stands to reason that this is a complex issue with a lot of different solutions bubbling up to the surface. Operators are hard at work striking the balance between reliability and sustainability. The entire industry is taking carbon emissions very seriously, working hard and working together to design solutions that are scalable and effective. Time will tell where we land on the voyage, but the great work underway tells me progress is being made, and, ultimately, we'll get there together.
Google has opposed the US Justice Department in its attempt to give the FBI the power to search and seize digital data, claiming the proposed amendment "raises a number of monumental and highly complex constitutional, legal and geopolitical concerns." On 13 February, Google sent a strongly worded letter to The Judicial Conference Advisory Committee on Criminal Rules, urging the Committee to reject the proposed amendments and leave the expansion of the government's investigative and technological tools to Congress.

What Google finds most disturbing is that the FBI would have permission to access data stored anywhere in the world, giving the US government unrestricted access to unthinkable amounts of information. "Remote searches of media or information that have been 'concealed through technological means' may take place anywhere in the world," Google writes, adding that "this concern is not theoretical". "Concealed through technological means" means that the FBI could remotely search computers that have hidden their location using anonymity services like Tor.

The second crucial problem lies in where the computer is located. Currently, under the so-called Rule 41, the FBI needs a warrant to search a computer, issued by a judge located in the same district. The Justice Department claims such an approach is outdated and that the FBI should be able to search digital property outside of a judge's district; it says it should only need a warrant issued by a judge in the district where the crime took place. Google says this approach violates the sovereignty of other nations: "To this end, it is well established that '[a]bsent a treaty or other agreement between nations, the jurisdiction of law enforcement agents does not extend beyond a nation's borders'."
Every time employees send or receive data online, they need bandwidth. Like time and money, bandwidth is a scarce resource in many offices. After all, computers and digital devices rely on bandwidth to complete tasks online.

Bandwidth is the amount of information that can be sent or received per second. This might be measured in Kbps (thousands of bits per second) or Mbps (millions of bits per second). Many people think having higher bandwidth will mean a faster user experience. In fact, it's only one factor that affects response time. Bandwidth is actually about capacity more than speed. Eight bits of information is one byte; a byte is the amount of memory it takes to store one character, such as the letter "Q."

You can't drive fast on a one-lane road when there's a lot of traffic, and you can't navigate the information highway as quickly in online congestion. If you're the only one in the office late at night, you'll have no trouble streaming an online webinar, but you might struggle to stream the same webinar when sales are on a video conference call and advertising is sending a graphic-heavy email.

What Is Using Bandwidth?

There is greater demand on bandwidth every day. Your business migrated to cloud services for greater mobility and online consistency, but sharing information in real time requires bandwidth to synchronize data. Backing up to the cloud provides businesses with greater peace of mind, yet it can be a headache if that backup is happening right when you want to get on a video chat with a client — your connection can suffer. You'll be that person who keeps dropping in and out of that important meeting! When you're using an online meeting tool (audio or video), you can also slow things down for others. Even email needs bandwidth to send and receive data. The bigger the files (e.g. images or spreadsheets), the more bandwidth activity.
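Because bandwidth is measured in bits per second while file sizes are measured in bytes, a quick conversion shows why large files feel slow. The figures below (a 25 MB attachment on a 10 Mbps uplink) are illustrative assumptions, not measurements:

```python
def transfer_seconds(file_megabytes: float, link_mbps: float) -> float:
    """Ideal transfer time: convert megabytes to megabits (x8), then
    divide by the link's capacity in megabits per second. Real transfers
    are slower due to protocol overhead and competing traffic."""
    return file_megabytes * 8 / link_mbps

# A 25 MB email attachment over a 10 Mbps uplink:
print(transfer_seconds(25, 10))      # 20.0 seconds at best
# The same file when three users share that uplink equally:
print(transfer_seconds(25, 10 / 3))  # about a minute
```

The second call is the congestion scenario from above: the road didn't get shorter, it just gained traffic.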
Uploading a few PDFs can take up 20–40 MB of the total, which can choke a network with limited upload capacity. All those personal devices your people bring to work can make a difference, too. Smartphones will often start backing up to the cloud when they are on a Wi-Fi network.

Bandwidth Usage Solutions

Often, there is no option for greater bandwidth because the infrastructure where you're located won't support it. You're already getting the most capacity your provider can offer. Still, there are ways to better manage bandwidth:

- Switch to a business-grade router or a Unified Threat Management (UTM) appliance. These allow you to identify and manage bandwidth usage better. They also add security (firewalls, filtering) to your network connection.
- Set up Quality of Service (QoS) to prioritize the activities your business values most (e.g. configuring video conferencing to take data preference ahead of file downloads).
- Block some devices entirely, such as employee phones backing up to the cloud.
- Schedule some activities for a more convenient time (e.g. set your system backups to happen in the middle of the night, when fewer people are likely to be online).

Want to regain control of your internet capacity? A managed services provider can monitor traffic and usage, and help you set up a solution for smarter bandwidth usage. Improve productivity and give employees something to smile about (other than a cat riding a vacuum cleaner on Facebook) with better bandwidth management. Give us a call today at 317-497-5500.
It's no secret that cybersecurity is one of the main challenges currently faced by our society. Hackers who break into government servers and private communication services have become a global threat. Blockchain could be a revolutionary technology in the fight against cyber threats, offering a way to protect databases and ensure their integrity.

Data protection is today's top cybersecurity priority for any company. Because of the increased sophistication of cyberattacks, there is increasing demand for cybersecurity, and building the necessary cybersecurity protocols no longer means relying solely on conventional information technology security controls. Fintech, supply chain, food, insurance, and many more industries will benefit from the tremendous breakthrough known as blockchain. Blockchain technology is being widely adopted by business owners, decision-makers, and investors to change how organizations interact with one another, with regulators, and with clients. This insight piece analyzes blockchain as an effective solution for managing serious cyber threats, with speedier capabilities for detection, mitigation, and response. In this article, we'll explore the benefits of blockchain in cybersecurity, how blockchain SIEMs work, and how your organization can benefit from them.

What is blockchain?

Blockchain is a decentralized, immutable database that makes it easier to track assets and record transactions in a corporate network. On a blockchain network, practically anything of value may be recorded and traded, lowering risk and increasing efficiency for all parties. Information is essential to business, and it is best when it is received quickly and is accurate. Blockchain is well suited to delivering that information because it offers real-time, shareable, and entirely transparent data that is kept on an immutable ledger and accessible only to members of a permissioned network.
Orders, payments, accounts, production, and many other things may all be tracked via a blockchain network. Who, what, when, where, and how much can all be recorded in a data block. Every block is interconnected with those that came before and after it. These blocks build a chain of data as an asset moves from place to place or ownership changes hands. The blocks link securely together to prevent any block from being altered, or a block from being inserted between two existing blocks, and they certify the precise timing and order of transactions.

Benefits of Blockchain in Cybersecurity

Blockchain enables us to guarantee that security policies are followed.

Confidentiality: This means ensuring that only those with a legitimate need have access to the relevant information. Blockchain data is fully encrypted to prevent unauthorized parties from accessing it as it travels over untrusted networks. To stop attacks from within the network, security measures such as access controls should be put into place right at the application level. By employing a public key infrastructure to authenticate participants and encrypt their communication, blockchain can offer sophisticated security measures. However, there is a substantial danger of theft of private keys when backup private keys are kept in secondary storage. To avoid this, strong cryptographic techniques should be used, along with standardized key management protocols.

Integrity: Organizations can ensure data integrity by utilizing the immutability and traceability features that are built into blockchain technology. In the event of a 51% attack, consensus protocols can help businesses establish procedures to prevent and control ledger splitting. In a blockchain, the past state of the system is recorded with each successive iteration, creating a fully traceable history log.
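The hash-linking described above can be illustrated with a toy ledger: each block stores the hash of the previous block, so editing any record breaks every hash after it. This is a minimal sketch for intuition only (no consensus, no signatures, no network); the block fields and sample records are made up for the example:

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash; any edited or reordered block breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
for record in ("order #1", "payment #1", "shipment #1"):
    append_block(chain, record)
assert verify(chain)               # untouched chain validates
chain[1]["data"] = "payment #999"  # tamper with one record...
assert not verify(chain)           # ...and verification fails
```

Real blockchains add consensus and cryptographic signatures on top of this structure, which is what prevents an attacker from simply re-hashing the chain after tampering.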
Smart contracts can be employed to validate and uphold agreements between parties, preventing miners from extracting data blocks.

What Is Blockchain SIEM?

The term "blockchain SIEM" is relatively new, but it's already gaining traction among cybersecurity professionals. A blockchain SIEM is a security information and event management (SIEM) system that uses blockchain technology to monitor and analyze security events. In essence, it gathers all the data being collected by your organization's existing security tools and then gives you an overview of all the activity on your network in one place. A blockchain SIEM makes it easy to track everything from malware infections to insider threats because every action taken by an employee or user gets recorded on a shared ledger, which anyone with access can consult when needed. This also means that any attempt at tampering with that information will be immediately noticeable: any change would require approval from multiple parties to go through.

Blockchain SIEM Authenticates the Identity of a Normal User

A blockchain SIEM can authenticate an employee's identity and check whether he or she is authorized to access certain data, or even to be part of a specific organization. In most cases, this authentication process will be built into the system from the ground up so that it's seamless for both administrators and users. This includes being able to verify whether an employee has left the company or had access revoked (de-authorization).

Blockchain SIEM Monitors an Abnormal Situation in Real Time

A blockchain SIEM can monitor an abnormal situation in real time, for example when a node is compromised and is sending malicious data to other nodes, or when a user attempts to tamper with the blockchain itself.
There are several ways in which a blockchain SIEM could monitor such an abnormal situation:

- It could detect anomalies in the flow of data across nodes and networks.
- It could detect inconsistencies between multiple nodes' security policies, templates, and baselines.
- It could be programmed to automatically stop rogue transactions before they happen by detecting suspicious network activity beforehand.

Blockchain SIEM Assists in Investigation of Cyber-Attackers and Unidirectional Audit Trails

Blockchain SIEM systems can also be used to assist in the investigation of cyber-attackers. A blockchain SIEM makes it possible to identify the source of an attack and track down its origin with greater accuracy. This is especially useful because many attacks are launched from servers in countries whose laws do not allow access to those servers. Blockchain SIEM offers an opportunity for companies to gain insight into the activities of their employees and authorized users without compromising their privacy or exposing confidential information. With this technology, businesses can keep track of who accessed which piece of data at what time, allowing them to determine whether an employee is accessing a system for malicious purposes or simply as part of his or her job duties. Companies have adopted various technologies such as artificial intelligence (AI), machine learning, and predictive analytics over the past few years, but these tools still lack one feature: context awareness (CA). CA refers to being able to understand why something happened rather than just knowing what happened.

Blockchain SIEM Notifies Users of Attack Suspicion and Reduces Response Time

With a blockchain SIEM, you can alert users of suspected attacks and reduce response time. As a result, you'll be able to identify and prevent cyberattacks with the right tools, in time.
Blockchain SIEM provides real-time security intelligence that keeps your organization safe from malicious activity and data breaches. It also records all events on the network in an immutable ledger that cannot be changed or deleted by unauthorized users or administrators. This means that all events are recorded permanently on the blockchain, which helps improve visibility into systems and networks without relying on third-party vendors for monitoring software licenses or support contracts.

What is encrypted, and how?

- Search in encrypted records: usually, even log collectors that encrypt log data keep a recent index with decrypted data for searching purposes. LogSentinel has developed search over encrypted records to prevent data leaks from such indexes.
- All data on the SIEM: all logs are encrypted to prevent data breaches.
- Every record is individually encrypted: databases typically encrypt an entire column, row, or table. In addition to that encryption, each record is encrypted separately, so anyone attempting to decrypt the data has to locate the key for each record individually.

Blockchain is not only a database: it can also be used as an event stream processor and even as a SIEM. The use cases are limitless, and depending on your needs, a blockchain solution might be the best fit for you. There are several ways you can leverage blockchain to improve your cybersecurity, including verifying identity, detecting attacks, and protecting data integrity. Many organizations have already begun using these features of the blockchain in their day-to-day operations.

Blockchain is a technology that has the potential to radically change the way businesses operate. It has been used in several industries, including healthcare, financial services, and cybersecurity. Businesses should consider their options for implementing this technology so that they can take full advantage of its benefits.
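The per-record encryption idea described above can be sketched in a few lines. This is a toy illustration with a hypothetical HMAC-derived keystream, not LogSentinel's actual scheme; a production system would use a vetted AEAD cipher such as AES-GCM:

```python
import hashlib
import hmac
import secrets

def _keystream(key, nonce, length):
    # Toy CTR-style keystream built from HMAC-SHA256 blocks.
    # Illustration only; not a vetted cipher construction.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_record(master_key, record_id, plaintext):
    # Derive a distinct key per record, so finding the key for one
    # record reveals nothing about any other record.
    record_key = hmac.new(master_key, record_id.encode(),
                          hashlib.sha256).digest()
    nonce = secrets.token_bytes(16)
    ks = _keystream(record_key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt_record(master_key, record_id, blob):
    record_key = hmac.new(master_key, record_id.encode(),
                          hashlib.sha256).digest()
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(record_key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

master_key = secrets.token_bytes(32)
blob = encrypt_record(master_key, "log-0001", b"admin login from 10.0.0.5")
assert decrypt_record(master_key, "log-0001", blob) == b"admin login from 10.0.0.5"
```

The point of the per-record derivation is exactly what the text describes: an attacker who obtains one record's key still has to attack every other record separately.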
1. Run-Time Application Security Protection (RASP)

Today, applications mostly rely on external protection such as IPS (Intrusion Prevention Systems), WAF (Web Application Firewall), etc., and there is great scope for building many of these security features into the application itself so that it can protect itself at run time. RASP is an integral part of an application's run-time environment and can be implemented, for example, as an extension of the Java debugger interface. RASP can detect an attempt to write high volumes of data into the application's run-time memory, or detect unauthorized database access. It has the real-time capability to take actions such as terminating sessions, raising alerts, etc. WAF and RASP can work together in a complementary way: the WAF can detect potential attacks, and RASP can verify them by studying the actual responses inside the application. Once RASP is built into the application itself, it is more powerful than external devices, which have only limited information about how the application's internal processes work. (Read more: Top 5 Big Data Vulnerability Classes)

2. Collaborative Security Intelligence

By collaborative security, I mean collaboration or integration between different application security technologies.

DAST+SAST: DAST (Dynamic Application Security Testing) does not need access to the code and is easy to adopt. SAST (Static Application Security Testing), on the other hand, needs access to the code but has the advantage of more insight into your application's internal logic. Both technologies have their own pros and cons; however, there is great merit in the ability to connect and correlate the results of both SAST and DAST. This can not only reduce false positives but also increase the efficiency of finding more vulnerabilities.

SAST+DAST+WAF: The vulnerabilities detected by the SAST or DAST technologies can be provided as input to the WAF.
The vulnerability information is used to create specific rule sets so that the WAF can stop those attacks even before the fixes are implemented.

SAST+DAST+SIM/SIEM: The SAST/DAST vulnerability information can be very valuable for SIM (Security Incident Management) or SIEM (Security Information and Event Management) correlation engines. The vulnerability information can help provide more accurate correlation and attack detection.

WAF+RASP: WAF and RASP are complementary. The WAF can provide information that RASP can validate, helping achieve more accurate detection and prevention of attacks.

Grand Unification: Finally, one day we will have all of the above (and more) combined in such a way that organizations can have true security intelligence. (Read more: 5 easy ways to build your personal brand!)

3. Hybrid Application Security Testing

By "hybrid" I mean combining automation and manual testing in a manner "beyond what consultants do" so that we can achieve higher scalability, predictability, and cost effectiveness. DAST and SAST both have their limitations; two of the major problem areas are false positives and business-logic testing. Unlike network testing, where you need to find known vulnerabilities in a known piece of code, application testing deals with unknown code. This makes the model of vulnerability detection quite different and harder to automate, so you get the best-quality results from consultants or your in-house security experts. However, this model is not scalable: there are more than a billion applications that need testing, and we do not have enough humans on earth to test them. It is not a question of "man vs. machine"; it is a matter of "man and machine". The future is in combining automation and manual validation in "smart ways".
iViZ is an interesting example that uses automated technology along with "workflow automation" (for manual checks) to assure zero false positives and business-logic testing with 100% WASC class coverage. In fact, they offer unlimited application security testing at a fixed flat fee while operating at a gross margin better than average SaaS players. (Read more: Phishers Target Social Media, Are you the Victim?)

4. Application Security as a Service

I believe in the "as a Service" model for a very simple reason: we do not need technology for the sake of technology but to solve a problem; it's the solution, or service, that we need. With the growing focus on core competency, it makes more sense to procure services than to acquire products. "Get it done" makes more sense than "do it yourself" (of course, there are exceptions). Today we have SAST as a Service, DAST as a Service, and WAF as a Service; virtually everything is available as a service. Gartner, in fact, has created a separate hype cycle for "Application Security as a Service". Application Security as a Service has several benefits: reduction of fixed operational costs, help in focusing on core competency, resolving the problems of talent acquisition and retention, reduction of operational management overheads, and more. (Watch more: 3 causes of stress which we are unaware of!)

5. Beyond Secure SDLC: Integrating Development and Operations in a Secure Thread

Now is the time to look beyond the Secure SDLC (Software Development Life Cycle). There was a time when we saw a huge drive to integrate security with the SDLC, and I believe the industry has made decent progress. The future is to do the same in terms of "Security + Development + Operations". The entire thread of design, development, and testing through to production, management, maintenance, and operations should be tied together seamlessly, with security as the major focus. Today there is a "security divide" between Development and Operations.
This divide will blur one day, with a more integrated view of the security life cycle.
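The SAST+DAST correlation described in trend 2 can be sketched with a toy merge of findings. The record shape used here (a CWE identifier plus an affected endpoint) is an assumption for illustration, not any vendor's actual report format:

```python
def correlate(sast_findings, dast_findings):
    """Cross-check static and dynamic findings: a SAST result that a
    DAST scan confirms at runtime is far less likely to be a false
    positive, so confirmed items are promoted for immediate fixing."""
    confirmed, unconfirmed = [], []
    # Index dynamic findings by (weakness class, endpoint) for lookup.
    dast_index = {(f["cwe"], f["endpoint"]) for f in dast_findings}
    for f in sast_findings:
        if (f["cwe"], f["endpoint"]) in dast_index:
            confirmed.append({**f, "status": "confirmed"})
        else:
            unconfirmed.append({**f, "status": "needs-review"})
    return confirmed, unconfirmed

sast = [{"cwe": "CWE-89", "endpoint": "/login", "file": "auth.py"},
        {"cwe": "CWE-79", "endpoint": "/search", "file": "views.py"}]
dast = [{"cwe": "CWE-89", "endpoint": "/login"}]
confirmed, unconfirmed = correlate(sast, dast)
print(len(confirmed), len(unconfirmed))  # 1 1
```

The same confirmed list could then be exported as virtual-patching rules for a WAF, which is the SAST+DAST+WAF combination the article describes.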
Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (email) transmission across Internet Protocol (IP) networks. SMTP was first defined in RFC 821 (STD 15) in 1982 and last updated by RFC 5321 (2008), which includes the extended SMTP (ESMTP) additions and is the protocol in widespread use today. SMTP is specified for outgoing mail transport and uses TCP port 25. While electronic mail servers and other mail transfer agents use SMTP to send and receive mail messages, user-level client mail applications typically use SMTP only for sending messages to a mail server for relaying. For receiving messages, client applications usually use either the Post Office Protocol (POP) or the Internet Message Access Protocol (IMAP) to access their mailbox accounts on a mail server.
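The split described above, SMTP for submission and relay versus POP/IMAP for retrieval, looks like this from a client's perspective. A minimal sketch using Python's standard library; the host name and addresses are placeholders:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    # Assemble an RFC 5322 message; SMTP merely transports it.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def submit(msg, host="mail.example.com", port=25):
    # Clients use SMTP only for submission/relay; the recipient
    # later retrieves the message via POP or IMAP, not SMTP.
    with smtplib.SMTP(host, port) as server:
        server.send_message(msg)

msg = build_message("alice@example.com", "bob@example.com",
                    "Meeting", "See you at 10.")
# submit(msg)  # requires a reachable mail server on port 25
```

Note that many providers now expect submission on port 587 with STARTTLS rather than plain port 25, which is increasingly reserved for server-to-server relay.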
The National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, maintains a frequently updated Cybersecurity Framework that any organization can use as a set of guidelines and recommended practices for improving overall cybersecurity. As NIST describes it: "The Framework is voluntary guidance, based on existing standards, guidelines, and practices for organizations to better manage and reduce cybersecurity risk. In addition to helping organizations manage and reduce risks, it was designed to foster risk and cybersecurity management communications amongst both internal and external organizational stakeholders."

Following the framework's guidance will definitely improve your cybersecurity profile. But that second sentence about communication is also very important: it's critical for all stakeholders on various teams to share a common security vocabulary if they're going to coordinate their efforts effectively.

It's a big document, and for small IT teams with limited resources it can seem daunting to approach its wealth of information and prioritize the practical steps you can take now to have the greatest positive effect on cybersecurity. But it's not as complex as it appears.

The five core functions

The NIST framework separates out five core functions that need to be addressed for optimal cybersecurity:
- Identify – Develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities.
- Protect – Develop and implement appropriate safeguards to ensure delivery of critical services.
- Detect – Develop and implement appropriate activities to identify the occurrence of a cybersecurity incident.
- Respond – Develop and implement appropriate activities to take action regarding a detected cybersecurity incident.
- Recover – Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident.

Drilling down into the many specific recommendations for fulfilling these core functions, and identifying which ones are most important for you, is a big job. But here at least we can take a quick pass through each of them and identify what might count as low-hanging fruit in each category.

Identify

It's important to conduct a thorough security audit of your entire infrastructure. Once you understand the specific risks to different groups of individuals, devices, data structures, apps, and other critical services, you can use that understanding to guide your efforts to improve cybersecurity. Free online tools such as Barracuda Email Threat Scan, Barracuda Vulnerability Manager, and Barracuda Cloud Assessment Scan provide a wealth of baseline information about your exposure to cyber risk in the key areas of email security, web application security, and cloud services configuration. Security awareness simulation and training programs such as Barracuda Security Awareness Training can deliver even more granular visibility into risk profiles for individual users. Barracuda Data Inspector scans your Microsoft 365 data to identify many types of sensitive and malicious data that may be stored in unsecure locations or represent a potential threat of compromise. The information you gather using these tools has direct practical application in how you prioritize your cybersecurity investments going forward.

Protect

This core function covers all the things you do to reduce the chance of an incident and to limit or contain the impacts of a cybersecurity incident. This includes:
- Identity management and access controls. Enforce up-to-date password policies, and protect your critical assets and applications against unauthorized access using a zero-trust network access solution such as Barracuda CloudGen Access.
- Enabling your users to more consistently identify and report malicious emails through the use of an advanced security awareness training solution like Barracuda Security Awareness Training.
- Securing and protecting data against accidental or malicious loss by implementing an advanced, cloud-first backup solution such as Barracuda Backup.
- Implementing effective network segmentation to prevent incidents from spreading beyond the initial area of compromise.

Detect

In order to detect cybersecurity incidents in progress, you need to be able to monitor inbound, outbound, and internal traffic of all kinds, and to identify malicious email, malware, app-compromise attempts, and unauthorized movement of data. Strong email security such as Barracuda Email Protection helps you detect both known and unknown threats; it monitors all email traffic and uses AI to detect malicious anomalies. Detecting malicious network traffic requires a full-featured network firewall such as Barracuda CloudGen Firewall. And detecting malicious application activity, such as that of evasive bots or the latest generation of ransomware attacks, demands an advanced, easy-to-use web application firewall solution like Barracuda WAF-as-a-Service.

Respond

It's critical for your team to be ready to respond rapidly and effectively to a cybersecurity incident. This requires significant advance planning and communication among different stakeholders. To increase the speed and accuracy of your team's response, use an incident-response automation capability such as Barracuda Incident Response, which dramatically simplifies the job of identifying the scope of an email-based attack, deleting the attack from affected systems, and updating security settings based on the specific threat data collected.

Recover

Minimizing the overall impacts of a cybersecurity incident, and restoring any lost capacity to deliver services or conduct operations, is key to keeping the ultimate financial cost to a minimum.
If you have an advanced backup solution in place, such as Barracuda Backup, you should be able to restore any compromised or damaged data, from individual files to entire servers, quickly and completely, and to restore any lost operational ability. In the event of a ransomware incident, it can also make it easy for you to avoid paying any ransom.

How you control your users' password selections can have a large impact on whether stolen or otherwise compromised credentials can be used to penetrate your network. NIST password guidelines offer an example of why it's important to revisit the Cybersecurity Framework periodically. The guidelines used to recommend fairly frequent required password changes for all users, but it was found that this had the perverse effect of reducing security. Users would very often establish a pattern for successive passwords: raising or lowering a number, for instance, or transposing two characters, or replacing letters with special characters. It turned out, as a 2016 US Federal Trade Commission article reports, that with just three or four previous passwords it was often simple to guess a given user's current password. So now the practice of requiring password updates (unless there's evidence of a compromise) is explicitly not recommended.

Other NIST recommendations for password policies include:
- Check passwords against breached-password lists
- Block passwords contained in password dictionaries
- Prevent the use of repetitive or incremental passwords
- Disallow context-specific words as passwords
- Increase the length of passwords

Some of these recommendations can be implemented via Active Directory settings, but others require third-party password-management solutions.

Embrace the journey

Perhaps the most important thing to remember while studying the NIST framework and seeking to implement its recommendations is that cybersecurity is not a destination but a journey.
That is, you’re not going to reach an endpoint where your security is perfect. Instead, staying secure is an ongoing, iterative process that you’re constantly improving and adapting to evolving conditions. New threat data drives new strategies and new technical capabilities, resulting in yet more data to drive the next generation of improvements. In other words, don’t try to swallow the NIST Cybersecurity Framework whole. Instead, use it as a guide to help you define, plan, and execute discrete cybersecurity strategies that are most urgent for your organization. Tony Burgess is a twenty-year veteran of the IT security industry and is Barracuda’s Senior Copywriter for Content and Customer Marketing. In this role, he researches complex technical subjects and translates findings into clear, useful, human-readable prose. You can connect with Tony on LinkedIn here.
One of the largest problems with indexes is that they are often suppressed by developers and ad hoc users: applying a function to an indexed column in a WHERE clause prevents the optimizer from using the index. There is a way to combat this problem. Function-based indexes allow you to create an index based on a function or expression. The value of the function or expression is specified by the person creating the index and is stored in the index. Function-based indexes can involve multiple columns, arithmetic expressions, or even a PL/SQL function or C callout.

The following example shows how to create a function-based index. An index that uses the UPPER function has been created on the ENAME column. The next example queries the EMP table using the function-based index; the function-based index (EMP_IDX) can be used for this query. For large tables where the condition retrieves a small number of records, the query yields substantial performance gains over a full table scan.

Certain initialization parameters must be set to use function-based indexes (they are subject to change with each version), and the optimization mode must be cost-based as well. When a function-based index is not working, this is often the problem. Function-based indexes can lead to dramatic performance gains when used to create indexes on functions often applied to selective columns in the WHERE clause. To check the details of function-based indexes on a table, you may use a query similar to this:
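The SQL listings this article refers to did not survive extraction. Against the classic EMP demo schema, and using the names the text itself mentions (EMP, ENAME, EMP_IDX), they would look roughly like this (a reconstruction, not the author's original listings; the initialization parameters shown are the ones commonly cited for older Oracle releases):

```sql
-- Create a function-based index on UPPER(ename)
CREATE INDEX emp_idx ON emp (UPPER(ename));

-- A query whose predicate matches the indexed expression,
-- so the function-based index can be used
SELECT ename, deptno
  FROM emp
 WHERE UPPER(ename) = 'KING';

-- Initialization parameters historically required for
-- function-based indexes (subject to change with each version):
--   query_rewrite_enabled   = true
--   query_rewrite_integrity = trusted

-- List function-based index details for a table
SELECT index_name, column_expression
  FROM user_ind_expressions
 WHERE table_name = 'EMP';
```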
The Preamble of our country speaks of Liberty: the liberty of all the people of India to freely express their thoughts. Article 19(1)(a) of the Constitution of India guards every citizen's right to freedom of speech and expression. Every person has the right to make his point and express his opinion, and this right plays a very important role in shaping public opinion. It includes the freedom of communication and the right to propagate or publish one's views, as the Apex Court held in the S. Rangarajan case, 2 SCC 574. In Ramlila Maidan Incident, re, 5 SCC 1, the Supreme Court held that the freedom of speech and expression is regarded as the first condition of liberty. However, this is not an absolute right. Article 19(2) provides reasonable restrictions over Article 19(1)(a), allowing the state to impose restrictions in matters of the sovereignty and integrity of India, the security of the state, friendly relations with foreign states, public order, decency or morality, contempt of court, defamation, or incitement to an offence. Obviously, the court has to decide whether the restrictions to be imposed are reasonable or not.

The Apex Court in Shreya Singhal v. Union of India struck down section 66A of the Information Technology Act 2000 on 24 March 2015. The argument raised was that the section restrains free speech and expression.

What is section 66A of the Information Technology Act 2000?

66A.
Punishment for sending offensive messages through communication service, etc. Any person who sends, by means of a computer resource or a communication device,

(a) any information that is grossly offensive or has menacing character;
(b) any information which he knows to be false, but for the purpose of causing annoyance, inconvenience, danger, obstruction, insult, injury, criminal intimidation, enmity, hatred, or ill will, persistently by making use of such computer resource or a communication device; or
(c) any electronic mail or electronic mail message for the purpose of causing annoyance or inconvenience or to deceive or to mislead the addressee or recipient about the origin of such messages,

shall be punishable with imprisonment for a term which may extend to three years and with fine.

Explanation: For the purposes of this section, the terms "electronic mail" and "electronic mail message" mean a message or information created or transmitted or received on a computer, computer system, computer resource, or communication device, including attachments in text, images, audio, video, and any other electronic record, which may be transmitted with the message.

What was the reason behind the unconstitutionality of section 66A?

What makes a democracy more beautiful is the presence of an impartial judiciary working for the betterment of the nation. That is what was proved right for our nation this time, as the apex court of India, by its historic decision, strengthened the freedom of speech and expression by striking down section 66A of the Information Technology Act, 2000, which allowed police to arrest people for posting "offensive content" on the internet. The term "offensive" is so vague that anyone can interpret it the way they want; it has no exact meaning. The main argument for the unconstitutionality of this section is that it majorly curbs a fundamental right. The section is also very wide and vague.
This section covers any information sent that is considered offensive. It is not only hard to determine what is offensive; it is harder still to pin down the scope of the section, which directly violates the fundamental rights enshrined under the Constitution of India. The Supreme Court of India found section 66A of the Information Technology Act, 2000 vague and arbitrary as well. Another problem was that section 66A created a cognizable offence: whoever posted content that could be considered "offensive" or "menacing" could be arrested by the police without a warrant, making this the most crucial drawback of the vaguely drafted provision and what essentially made it prone to inappropriate use. Under it, all discretion is left to the police authorities to decide what is offensive and what is not. The court noted that governments come and go but section 66A would remain forever, and refused to consider the Centre's assurance that the law in question would not be abused. The court, however, said there was no need to strike down two other provisions of the IT Act that provide for the blocking of sites.

What are the after-effects?

It has been two years since this section was held unconstitutional. The reason this remains a topic of discussion is the very nature of the section, which is prone to abuse. The judgment also considered the validity of other provisions of the IT Act, namely sections 69A and 79, along with the Rules made under them. Section 69A and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 authorise the Central Government to block, or to order an intermediary (such as Facebook, YouTube, or any internet/telecom service provider) to block, public access to any content generated, transmitted, stored, etc. in any computer resource, if it is satisfied that such content is likely to create communal disturbance or social disorder, or to affect India's defence and sovereignty, etc.
Section 69A states:

69A. Power to issue directions for blocking for public access of any information through any computer resource. –
Phishing attacks are a widespread method of network infiltration, and today there are many different types. These attacks have become more sophisticated and harder to counter over time. Fret not: our guide details everything you need to know to keep your company safe.

What Is a Phishing Attack, Anyway?

A phishing attack is a form of cyber attack where scammers typically, although not always, send out many fraudulent emails. The emails intentionally mislead recipients into clicking a malicious link or downloading an infected attachment by attempting to look legitimate. Their purpose is to steal the recipient's personal information, such as login credentials and credit card details. Phishing attacks can also install malware on a victim's computer with the same overall goal; these attacks are potentially more problematic if the malware has enough time to do severe damage. There are several different methods behind phishing attacks, so it's essential to be mindful of the key ones.

How a Phishing Attack Works

We know what phishing attacks are, but how exactly do they work? While the methods used in the attacks differ, the most common types of phishing attacks generally proceed in an order similar to this:
- An email arrives in your inbox.
- The email looks and sounds legitimate. It often sets up a common scenario: for example, that your password will expire soon and you need to take action, or that there has been a suspicious login on one of your accounts.
- The instructions ask you to click a link to renew or change your password within 24 hours.
- You click the link and are redirected to a false page that looks like the actual renewal page. You enter your old password and "create" a new one.
- The attacker monitors and then hijacks your original password to gain access to multiple secure areas normally protected by that password.
They may also set up a malicious script to run in the background of your computer or network to feed them more secure information. Below is an example of what these emails commonly look like:

The user only needs to click the link once to begin the process, which is why these types of attacks are so difficult to deal with in the first place. These attacks can happen to anyone, even people who are somewhat aware of them and how they work. Let's look at some more examples.

Example #1: The Attacker Employs Spear Phishing

An emerging type of phishing attack is known as "spear phishing." Spear phishing targets a specific person or enterprise, in contrast to the random users seen in the email example above. These attacks are more in-depth and require special knowledge about a company and its operation. The attack starts with the perpetrator researching the names of employees, say within the marketing department. From there, they gain access to confidential invoices and then pose as the marketing director, sending an email with a link inside to someone like the project manager. The link in the email redirects to a password-protected document, but in reality it's a false version of the stolen invoice. Once the project manager has logged in, the attacker can steal their information and achieve complete access to the company's network.

Example #2: The Attacker Employs Whaling or CEO Fraud

Another type of attack, certainly one on the rise, is what's known as "whale phishing" or, alternatively, "CEO fraud." In this type of attack, the attacker uses social media or the company website to look up the name of the CEO or someone else in a senior leadership position, possibly the finance manager. They then imitate the person in question using a similar email address, and send an email where they may ask for something as direct as a money transfer, or for the email recipient to review a specific document.
These attacks are repeatedly successful, as staff members are usually hesitant to refuse a request from someone high up in the business. This type of attack differs from spear phishing because the fraudulent communications appear to come from someone senior, so they can do even more damage and are often severe if not countered.

How To Get Started and Counter Phishing Attacks

We know what phishing attacks are and have seen some examples of them in action, but that's just the beginning. We've assembled some additional steps you can take below to minimize your chances of becoming a victim of one of these unwanted attacks.

Step 1: Stay Informed About the Latest Phishing Attacks

In the hyper-connected online world, phishing attacks are more common than ever, so staying informed of the latest attack methods is critical. The quicker you find out about new attacks and share them with your employees, the sooner you can avoid them. Keeping your staff trained and aware is a big part of this, which we'll look at a bit later in the guide.

So how do you stay informed? For one, be sure to sign up for email newsletters and prioritize reading blog posts every month. Useful publications include Infosecurity Magazine, which covers the latest attacks and lists new and emerging strategies, and IT Security Guru, which features a dedicated newsletter focusing on cybercrime, ransomware, and other daily insights.

It's worth staying up to date with free anti-phishing add-ons, too. Most browsers enable you to download add-ons that can detect the signs of a malicious site and warn you before the worst of the attack happens. These are typically free and can be installed across a variety of devices; a simple search under the "add-ons" section will do the trick in most browsers. Of course, be sure you don't readily give out your information to an unsecured site either.
If the URL of a site doesn't start with "HTTPS" or there's no padlock icon, you should avoid entering sensitive information or downloading files whenever possible. A site without a proper security certificate might not always be a phishing scam, but it's never worth taking unnecessary chances.

Step 2: Don't Ignore Security Updates, and Set Up Firewalls

While the sheer number of updates we receive today can be somewhat overwhelming, it's important to stay updated as much as possible and not put them off. Security patches are released regularly for good reason; their frequency is often a response to the latest attacks. These updates patch holes in your company's security, and if you don't update, you can be at a much greater risk of phishing attacks through obvious and known vulnerabilities.

That said, there's more you can do. Firewalls are a highly effective way to prevent the most harmful attacks: they act as a shield between your system and the attacker and can limit the chances of the perpetrator successfully gaining entry into your company network. Although not commonly known, it's best to use desktop and network firewalls in tandem; when used together, they significantly enhance your overall security. Most firewalls can be set up relatively quickly in a few straightforward steps, and the payoff is (almost) always worth it. You can configure firewalls to block data from specific locations, such as apps or ports, while allowing the required information through; most come preconfigured and ready to use. Be sure to identify your network's assets and plan a structure for how those assets get grouped, too. Just keep in mind that firewalls alone are typically not enough, so don't become too reliant on them.

Step 3: Ensure Employee Training Gets Delivered Properly

Keeping a team aware of the latest threats and teaching them what to sidestep is crucial. The problem?
Most training gets delivered at a yearly event or employee orientation, where much of it is forgotten within a week. For online security training, most employees tend to click through the content as fast as possible, with little retention. In-person training typically revolves around PowerPoint slides narrated by a speaker who would rather be elsewhere. This type of training rarely works. Companies, especially enterprises, need a robust Security Education, Training, and Awareness (SETA) program to ensure proper security. These programs enable businesses to educate their employees about network security issues and help to set realistic expectations. The best programs, the ones that have seen the most success, are streamlined and highly focused in their production. The security topics are prioritized and the audience is determined well in advance, with topics presented in simple and engaging ways that are easy to digest. The training is all about raising awareness and seeks to make the information as memorable as possible. Education planning workshops often serve as the core of these programs and help keep employees engaged.

Step 4: Know Your Phishing Attack Prevention Best Practices

By now, you'll have a firm idea of what phishing attacks are and should have some ideas on how to minimize your chances of being a victim. That said, the world of phishing attacks is vast, and there's always more to learn. As a result, we've put together some best practices for you and your company to ensure you remain as protected as you can be. They are:

- Be aware that attackers can change the name that appears on an email, and they may be using a previously compromised account to take their attack further. The good news is that the actual email address or link URL can only be disguised so far.
Look out for strange punctuation or unusual letters, like a lowercase "l" where there should be a capital "I."

- Hover over the sending address or any link and check that it looks legitimate. If an address looks strange, nine times out of ten it's redirecting the user to a dishonest landing page, where the password is quickly harvested and the rest of the attack can take place. If in doubt, don't click the link. Instead, contact a team member, especially if the email is supposedly from them, and verify it first.

- Passwords typically act as the sole protection for accounts and systems, and as such, hackers focus on acquiring them above all else. We've all reused the same password across the web, but doing so regularly is a severe error: one data breach could lead to all of your other accounts becoming compromised. Instead, practice password discipline and change your passwords regularly. Avoid using the same password for different accounts; the short-term convenience doesn't pay off.

- Emails that ask for login credentials or payment information should be treated with caution every time. Good phishing prevention requires a constant state of vigilance from you and your entire team. While firewalls and antivirus software can help, there's nothing quite as important as educating your team and keeping them aware. Make it a practice at team meetings and other events to update everyone on the latest attacks and the strategies to counter them wherever possible.

- Testing your employees with a mock phishing campaign is an excellent way of seeing how your company is doing. While these can be uncomfortable (nobody wants to be told they fell for something that could compromise the entire company), when implemented correctly they can be an extremely effective defense against phishing attacks.
Ensure that the tests are positive and feature constructive feedback so that employees stay motivated. Offering rewards for identifying scams often works well.

- Multi-factor authentication is a small but powerful move you can make. Commonly known as MFA, this form of security prevents stolen credentials from being enough on their own: a one-time secondary code is delivered via a separate channel, such as an SMS message, a physical token, or even a biometric check. Contrast this with a simple username and password, and the difference is noteworthy. MFA adds security layers that are hard for phishers to get around.
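To make the MFA idea concrete, here is a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238, using only the Python standard library. The raw-bytes secret and helper name are illustrative; real deployments typically exchange base32-encoded secrets via an authenticator app.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at t = 59 s the 6-digit SHA-1 code is 287082
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

Because the code changes every 30 seconds and is derived from a secret the phisher never sees, a harvested password alone is no longer enough to log in.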
IBM on Thursday introduced a prototype chip dubbed the "Holey Optochip," a wedding of traditional computer chip technology with optical pathways. Able to transmit 1 terabit of data per second (Tbps), the prototype chipset could eventually result in faster downloads of information like apps and streaming video. The prototype, to be discussed at the Optical Fiber Communications Conference in Los Angeles, Calif., delivers staggering throughput yet uses off-the-shelf silicon. The new IBM design provides enough horsepower to download 500 high-definition movies in one second or transmit the entire contents of the U.S. Library of Congress in an hour. A crucial link in processing the massive amount of data required to feed consumers' computing needs spans only a few feet. Data farms used by Apple and others to stream data are composed of hundreds of high-speed computers just feet from each other. Being able to cross that short distance faster can mean the difference between smooth or jerky video on your iPad. For chip manufacturers such as Intel, slowed by a depressed market, the news could not come at a better time. Fuad Doany, an IBM researcher familiar with the project, tells TechNewsWorld the chip doubles the data transmission rate of traditional 10 gigabit per second high-speed connections. The Holey Optochip simply modifies designs already available to chip makers.

Doubling Traditional Chip Capacity

The Optochip provides a "complete optical transceiver in a compact transistor design," Doany said. A traditional 90-nanometer IBM transceiver with 24 receiver and transmitter circuits has 48 holes in the silicon, called "optical vias." Like opening up a new road during a traffic jam, by punching holes in the silicon chip, IBM freed up more lanes for data to travel. Each of the 24 transceivers can send or receive 20 gigabits of data per second instead of the traditional 10 gigabits.
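The headline numbers can be sanity-checked with simple arithmetic; note that the per-movie size used below is our own illustrative assumption, not a figure from IBM:

```python
# 24 transmitter channels and 24 receiver channels, each at 20 Gbps
channels = 24
gbps_per_channel = 20
aggregate_gbps = channels * gbps_per_channel * 2   # tx + rx combined
print(aggregate_gbps)                              # 960, i.e. roughly 1 Tbps

# At ~1 Tbps, the chip moves this many bytes per second:
bytes_per_second = 1e12 / 8                        # 125 GB/s

# Assuming ~0.25 GB per HD movie (an illustrative size), that works out to:
movies_per_second = bytes_per_second / 0.25e9
print(movies_per_second)                           # 500.0
```

In other words, doubling each lane to 20 Gbps and running 24 lanes in each direction is what pushes the aggregate throughput to the advertised terabit mark.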
All of this is good news for both chipmakers and consumers, said In-Stat silicon analyst Eric Higham. "Chipmakers have heavily targeted data storage because of the higher price," Higham told TechNewsWorld. What makes the news from IBM so intriguing is that it targets both the move from silicon to optics for pushing mountains of data and the economics of the chip-making industry. "'Off-the-shelf' is a codename for lower-cost," Higham said.

The Name of the Game Is Bandwidth

"The whole name of the game these days is bandwidth — at the root of that is video, high-definition, especially," Higham said. If 3D video becomes common, that will triple the bandwidth needed by consumers, he stated. While the first use of the chip design will likely be supercomputers that crunch massive amounts of data, the optical chipset could be part of business computing five to seven years later, said Doany. He hesitated to say whether the average PC would ever be powered by such a chip. While IBM won't manufacture the Holey Optochip, the design will be heavily used by the company, he said. "Our goal is to prototype the best-of-breed optical transceiver," Doany said. For IBM, success of the Holey Optochip could rest not on impressive speed, but on keeping costs down. If the end design is very expensive, the chipset will be ignored by Intel and others, Higham said. However, chipmakers may decide higher data transfers are worth a higher price. Unlike PC sales, there is no sign of a slowdown in demand for the infrastructure used to satisfy consumers' unyielding desire for more data.
A typical organization has far more privileged accounts than most people realize. The no-brainers are what I call built-in superuser accounts like “Administrator” and “root”. These accounts exist on every device and system at the OS or firmware level, at the domain level in Active Directory (AD), and in some applications, like the sa account in SQL Server. But then there are the custom superusers that have the same level of authority as root or Administrator, but are customer-created accounts. These accounts exist at the same levels as built-in superusers. We are talking about members of the Administrators group on Windows computers, members of Domain Admins at the domain level, users with ALL authority in the sudoers file on Linux, users in the SQL Server Sysadmin role, the SharePoint farm administrators group, and similar administrative roles in Exchange. But don’t overlook the cloud security implications. Cloud applications and accounts have superusers too. For instance, Azure has administrator roles at the tenant, account and subscription levels, and even within resource groups. Finally, there are the subsidiary admins. These are privileged users with less than superuser ability, but who still have a powerful subset of admin permissions like:

- Password reset
- User account maintenance
- System maintenance operations
- Group membership maintenance
- Control over permissions to certain resources
- Audit log access

All of these are privileged accounts – within the IT department. The granularity of delegation and how it works varies greatly between different systems, applications, and clouds. But privilege isn’t unique or limited to the IT department, even though you might think so based upon most of what’s written about privilege management and the scenarios usually discussed. Those admin accounts reflect an arbitrary limit on what we consider privileged accounts that corresponds to the boundaries of the IT department.
There’s no good reason for this. Risk and privilege remain risk and privilege regardless of the department involved. Here are some examples of things outside the IT department we should regard as privileged, and for which we should extend the same level of protection as we do for accounts with a similar risk level inside IT:

- Banking transactions
- Software build servers
- Automated process and manufacturing control systems
- Commodities and securities trading

The pyramid below portrays not just the relative ranking of privileged account types, but also conveys the proportion in terms of account quantity at each level. Here’s something that makes some of these scenarios even more critical to protect: the applications are frequently old and were never designed to withstand today’s threat actors and other cyber risks. Some of these process control and manufacturing systems have no security at all. So how do you protect them, other than putting them on an isolated network and controlling access with physical security? Privileged access management (PAM), including privileged session management (PSM) technology, typically used to protect admin accounts inside the IT department, is actually the perfect solution to extend to privileged users outside of IT and for protecting vulnerable applications as well. If you’re interested in learning more, BeyondTrust’s Jason Jones and I explored this issue in depth in our recent webinar, which you can now watch on-demand here: Securing Privilege Outside the IT Department: High Value Transactions, Vulnerable Applications and Access to Critical Information.

Randy Franklin Smith, CEO, Monterey Technology Group, Inc. CISA, SSCP, Security MVP

Randy Franklin Smith is an internationally recognized expert who specializes in the security and control of Windows and Active Directory.
He performs security reviews for clients ranging from small, privately held firms to Fortune 500 companies and national and international organizations.
Containerization as a concept of isolating application processes while sharing the same operating system (OS) kernel has been around since the beginning of this century. Its journey started as early as Jails in the FreeBSD era. Jails heavily leveraged the chroot environment but expanded capabilities to include a virtualized path to other system attributes such as storage, interconnects and users. Solaris Zones and AIX Workload Partitions also fall into a similar category. Since then, the advent and advancement of technologies such as cgroups, systemd and user namespaces greatly improved the security and isolation of containers compared to their initial implementations. The next step was to abstract the complexity involved in containerization by adding user-friendly features such as templates, libraries and language bindings. LXC, an OS container meant to run multiple services, accomplished this task. In 2013, Docker, an application container meant to run a single service, took it one step further by creating an entire ecosystem of developer tools and registries for container images, improving the user experience of containers and making containerization technology accessible to novice Linux users. This led to widespread adoption. The evolving nature of this domain led to the creation of a standardization process, the Open Container Initiative, in 2015. Containerization drastically improves the “time to results” metric of many applications by eliminating several roadblocks on the path to production. These roadblocks arise from issues pertaining to portability, reproducibility, dependency hell, isolation, configurability and security.
For example, applications in fast-paced growth areas, such as high performance computing (HPC), which includes verticals and workloads like molecular dynamics, computational fluid dynamics, MIMD Lattice Computation codes, life sciences and deep learning (DL), are very complicated to build and run optimally, due to very frequent updates to the application codes, their dependencies and supporting libraries. Container images include the applications themselves and their development environments, which aids developers in porting their applications from their desktops to datacenters. This also helps with version control, making it easier to understand which versions of the software packages and dependencies lead to a particular result. This, in turn, helps to manage dependency hell, which refers to the entire application failing if one of its dependencies fails. Another real-world example is collaboration between researchers, where containers help with reproducibility of results. In short, instead of spending time debugging the software environment, researchers can focus on the science. Containers are orders of magnitude faster to spin up than virtual machines (VMs), since they do not have a hypervisor providing a virtual hardware layer and a guest OS on top of the host OS as an independent entity. For a majority of HPC and DL applications, an un-tuned full VM may experience performance degradation compared to bare metal; by contrast, the overhead of a container compared to an equivalent fully subscribed bare metal workload is nonexistent. All of these features have promoted the deep proliferation of container technologies. Despite providing a considerable abstraction over the complexity of the container ecosystems that came before it, Docker has some limitations that are specific to the HPC and DL space, where performance at scale matters.
At the outset, the hardware (compute, storage and networking) and software stack architectures that support traditional HPC workloads are very similar to those for deep learning workloads. The challenges faced by both domains are also quite similar. For deep learning, even though the state of the practice is at a single-node level, given the scale at which available dataset sizes are growing and the technological updates to models and algorithms (model parallelism, evolutionary algorithms), it’s imperative that most – if not all – parts of the data science workflow (finalizing the model, training, hyperparameter tuning, deploying the model/inferencing) have the ability to scale. Containerization provides the solution here: build once, with a known optimal configuration, and run at scale. The first limitation of Docker is the daemon, which needs to run in the background to maintain the Docker engine on every node in the datacenter. This daemon needs root privileges, which can be a security concern. These concerns can be remedied through several extra checks that need to be put in place by the sysadmins; these are well detailed in the Docker security page. Some of the checks are obvious, such as enabling SELinux along with disallowing users to mount directories outside the Docker context, limiting the binding options to trusted user-defined paths, and checking to make sure that all files written to the file system retain the appropriate user’s ownership rights. However, all this comes at the cost of limiting the usable features in Docker. Typical HPC and DL codes can scale to tens of thousands of cores, and the compute nodes are fully subscribed. The second gap in Docker today is lack of support for the software stack that aids in this scaling, such as MPI and the schedulers and resource managers (Slurm, Torque, PBS Pro). Native support for graphics processing units (GPUs) is another major concern.
Docker containers can be orchestrated through frameworks such as Kubernetes or Docker Swarm. However, the scheduling capabilities of Kubernetes compared to well-established resource managers and schedulers, such as Slurm or Torque, are in their infancy. Kubernetes was traditionally aimed at microservices, which could be apt when we consider deep learning inferencing. But for deep learning training, where large-scale batch scheduling jobs are involved, it falls short. Most of the issues mentioned above have been addressed by a new container platform called Singularity, which was developed at Lawrence Berkeley National Lab specifically for HPC and DL workloads. The core concept of Singularity is that the user context is always maintained at the time of container launch. There is no daemon process; instead, singularity is an executable. Singularity containers run in user space, which makes the permissions of the user identical inside and outside the container. In default mode, root credentials would be needed to modify the container. This doesn’t mean that Singularity is immune from security concerns. The singularity binary is a setuid executable and possesses root privileges at various points in its execution while the container is being instantiated. Once instantiated, the permissions are switched to user space. Setuid-root programs make it difficult to provide a secure environment. To address this, there is an option to set the container to a non-suid mode. This forces the directories to exist in the image for every bind mountpoint, disables mounting of image files and enforces use of only unpackaged image directories, allowing users to build a Singularity container in unprivileged mode.
High performance interconnects, such as InfiniBand and Intel Omni-Path Architecture (Intel OPA), are very prevalent in the HPC space and are very applicable to DL workloads, where applications benefit from the high bandwidth and low latency characteristics of these technologies. Singularity has native support for OpenMPI by utilizing a hybrid MPI container approach where OpenMPI exists both inside and outside the container. This makes executing the container at scale as simple as running “mpirun … singularity <executable command>” instead of “mpirun … <executable>”. Once mpirun is executed, an orted process is forked. The orted process launches Singularity, which in turn launches and instantiates the container environment. The MPI application within the container is linked to the OpenMPI runtime libraries within the container. These runtime libraries communicate back to the orted process via a universal process management interface. This workflow is natively supported by any of the traditional workload managers, such as Slurm. Similar to the support for InfiniBand and Intel OPA devices, Singularity can support any PCIe-attached device within the compute node, such as accelerators (GPUs). Singularity can find the relevant NVIDIA libraries and drivers on the host via the ld.so.cache file, and will automatically bind those libraries into a location within the container when the --nv flag is used. The nvliblist.conf file needs to be used to specify which NVIDIA libraries to search for on the host system when the --nv option is invoked. There is a container registry for Singularity similar to Docker Hub. Additionally, a well-defined process exists for converting existing Docker containers seamlessly into Singularity containers. Singularity, like any containerization technology, is not bereft of certain universal challenges.
Driver conflicts and path mismatches are some of the preliminary hindrances – especially the driver versions for external-facing PCIe devices, such as OFED for Mellanox, IFS for Intel OPA and CUDA for NVIDIA GPUs, since these are kernel dependent. Production and final deployment environment constraints need to be considered when the containers are being built to ease a few of these issues. For applications which are MPI parallelized, if mpirun is called from within the container, an sshd wrapper needs to be used to route the traffic to the outside world. Some of the much-needed features anticipated from Singularity include monitoring the performance of containers, checkpoint-restart capability and improved security.

Singularity Performance at Scale

Our research team embarked on a mission to containerize HPC and DL applications for our internal use cases. A few representative studies from the HPC Innovation Lab are shown here. Figure 1 illustrates the performance of the LS-DYNA application, a finite element analysis code in the manufacturing domain. The application here is using the Car2car dataset, which simulates a two-vehicle collision. Each compute node has 36 cores, and the application is scaled out to 576 compute cores, or 16 nodes of the Dell EMC PowerEdge C6420 server. The percentage points at the top of each data point in the figure represent the difference in performance between a complete bare metal run and a containerized run through Singularity. The relative performance difference is under 2%, which is within the run-to-run variation.

Figure 1: Bare metal vs. container at scale for LS-DYNA

This behavior is repeated in the deep learning domain, as shown in Figure 2. Here, we use 8 x Dell EMC PowerEdge C4140 servers populated with four NVIDIA Tesla V100s each, interconnected with Mellanox EDR, to scale a ResNet-50 model through Horovod’s MPI overlay for TensorFlow. The dataset used here is from ImageNet 2012.
The performance comparison between a bare metal and a containerized version of the framework at 32 Tesla V100s is still under 2%, showing a negligible performance delta between the two.

Figure 2: Bare metal vs. Singularity at scale for TensorFlow + Horovod

Containerization has been a solution for applications that are challenging to compile and run optimally at scale. It has been a vehicle for collaboration and sharing best practices. Both high performance computing and deep learning workloads can benefit greatly from containerization. Links to some related articles are listed below. Nishanth Dandapanthula is a systems engineering manager in the HPC and AI Solutions Engineering group at Dell EMC. The Next Platform commissioned this piece based on the uniqueness of the benchmarks and results and its relevance to our reader base.
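The sub-2% deltas quoted for both figures come from a simple relative-difference calculation. A sketch with made-up timings (the numbers are illustrative, not taken from the benchmarks):

```python
def relative_delta_pct(baremetal, container):
    """Percentage difference of the containerized result relative to bare metal."""
    return abs(container - baremetal) / baremetal * 100.0

# e.g. a hypothetical bare-metal run of 100.0 s vs. a containerized run of 101.5 s:
print(round(relative_delta_pct(100.0, 101.5), 2))  # 1.5 (%), within run-to-run variation
```

When the delta stays inside the normal run-to-run noise of the benchmark, the container overhead is effectively indistinguishable from zero.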
“What is phishing?” is one of the most searched terms on the Internet. It has become one of the most effective techniques used by cybercriminals to gain access to the corporate network. The risks associated with a phishing security threat are growing daily as criminals plot targeted and highly sophisticated campaigns to secretly breach the network of an organisation, whether for financial gain, espionage or other motivation. Phishing has become one of the biggest business risks of the century, and according to CSO Online, 92% of malware is delivered via email. In addition, it is also the number one delivery vehicle for ransomware. Criminals are targeting the outer ‘layer’ of a company’s defence – its employees – by crafting cleverly disguised branded emails and very realistic communication to entice users to click on the email link or attachment. Once the employee clicks on the link, malware is automatically downloaded to their computer or device. Alternatively, a spoofed website collects login credentials, in essence compromising the data on that device, and in most instances providing entry into the company network. Cybercriminals are looking for a way to penetrate the corporate network by gaining access through usernames and passwords which provide access to company platforms. Targeted phishing is known as ‘spear phishing’ and is where the attackers home in on specific individuals with privileged admin rights, such as executives or individuals at the C-suite level. Usually targeted individuals are ‘stalked’ on social media or may previously have been identified via a phishing email which they clicked. The outcome can be disastrous to a company compared to ordinary phishing, because it often leads to larger financial losses or wider access to mission-critical systems. While companies and individuals are aware of the risks of phishing email, they continue to get caught out time and again because of the sophisticated nature of these cyber social engineering tactics.
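Spoofed domains in those branded emails often rely on lookalike characters. As a rough illustration (the heuristics and helper name are our own; a real mail filter would use a maintained confusables list), a domain can be flagged when it contains non-ASCII characters or an encoded internationalized "punycode" label:

```python
def looks_suspicious(domain):
    """Flag domains that use punycode labels or non-ASCII lookalike characters."""
    if any(label.startswith("xn--") for label in domain.lower().split(".")):
        return True              # internationalized label, often used to hide homoglyphs
    return not domain.isascii()  # e.g. a Cyrillic 'а' masquerading as a Latin 'a'

print(looks_suspicious("paypal.com"))         # False
print(looks_suspicious("xn--pypal-4ve.com"))  # True (punycode label)
print(looks_suspicious("pаypal.com"))         # True (Cyrillic 'а')
```

This catches only one family of tricks; lookalikes built purely from ASCII (a lowercase "l" for a capital "I", or "rn" for "m") still require a human eye or a fuzzy-matching allowlist.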
During the coronavirus pandemic, there was a marked increase in phishing attacks related to Covid-19 communication – centred around tax relief for individuals and companies. This is a perfect example of social engineering at play: because of the uncertainty around Covid-19 and financial fears, people were ready to click on an email to apply for Covid-19 relief. The malware Azorult used a fake email and coronavirus map to infiltrate organisations around the globe and generate ransomware attacks within a matter of hours of being distributed. In the political realm, recent research from CSC in the United States about the presidential race shows that over 90% of websites linked to the Donald Trump and Joe Biden campaigns were found to be at risk of potential disinformation or data theft. This is because the web domains used in their campaigns are not protected from domain and DNS hijacking – a technique often used by hackers to launch their phishing scams.

The Damage From a Phishing Scam

Phishing attacks are known to result not only in the loss of company data, finances and reputation, but are also said to severely affect employee productivity. One phishing attack can cost a medium-sized company an average of $1.6 million when sub-standard cybersecurity practices are in place.

Phishing statistics 2020:
- 22% of breaches in 2019 involved phishing
- 96% of phishing attacks are by email
- One in every 99 emails is a phishing attack
- One in 25 branded emails is phishing
- Close to a third (30%) of phishing emails make it past default security
- Two in three phishing attempts use a malicious link and over half contain malware
- 65% of organizations in the United States experienced a successful phishing attack

Source: Verizon, SonicWall, Varonis

Although the majority of people know what phishing is, it is not always clear what the right phishing prevention strategy and defences are.
Many companies have poor cybersecurity practices in place which fail to protect the network, endpoints and corporate data, placing them at great risk from the rising tide of phishing attacks. At ENHALO we believe that layering the defences will limit the effects of phishing and remains a consistent way to mitigate the IT security risks.

Eight Layers of Defence

1. Education

In a phishing defence strategy, education is one of the most effective ways of dealing with an attack. Because phishing uses social engineering and attackers continue to improve their craft, these emails become more difficult to identify as fraudulent. According to 2020 statistics, 97% of users cannot identify a sophisticated phishing email and therefore fall for them time and time again. Since they are at the forefront of the attack, education of employees and users of your platforms remains a fundamental part of the defence and should be prioritised and maintained.

2. Rule of Least Privilege

Limit users’ access as much as possible. It’s vital to ensure that users only have access to what they need to fulfil their function. If they do not need access to a resource or system, don’t give it to them! It’s often the case that most users don’t need the access that they think they need. Once access is granted, it’s difficult to take it back. Furthermore, roles change within companies, and when this occurs it’s important that access rights are checked so that access continues to align only with what is required. Review access rights periodically; don’t skip this review, and be strict and firm about it. The access that systems have should be carefully thought out too. Systems should be treated in the same way as people with regard to access control: a system should also only have the access that it requires to fulfil its purpose. For instance, if a computer or device does not need access to a server to function, then don’t give it access.
Having devices such as mobile phones or IoT devices (such as a kettle, for example) on the same network as your company file server does not make sense. Rather, put them on a separate network that is isolated from the company’s ‘crown jewels’ – for not only phishing prevention, but any hacker attack. If these devices are isolated and are compromised, they can’t be used as a springboard to get to the organisation’s files. It may sound unlikely, but it does happen. Rather use the rule of least privilege and be safer in the long run.

3. Email Scanning

Scan email on the way in and on the way out of the organisation with a tool that is not part of the ecosystem. This means that if you use a particular cloud provider, forward the mail to a third party that is not connected to that cloud provider for additional scanning. By doing this, the integrity of the email can be ensured. Often attackers break into a cloud platform and send the phishing email from within the system. Alternatively, attackers create an email in the inbox of the user, which means that it’s not even sent, so it can’t be scanned. In these instances they are difficult to stop with scanning, so other layers of defence (controls), including education, are critical.

4. Multi-factor Authentication (MFA)

MFA helps protect against phishing because if a user is tricked by a phishing email and their credentials are compromised and stolen, the attacker will still require the one-time-use component of the credential, which the attacker will not have access to.

5. Tighten Your Geo-location

Only accept connections and emails from geographic locations that you deal with, and ensure that users are only able to visit and interact with the countries that they need to – especially when using corporate devices. It’s surprising how at least half of the planet can be eliminated from the equation by doing this. This will reduce the phishing attack surface and result in a more secure posture.
Moreover, if computers and users are blocked from reaching other geo-locations, an extra layer of defence is in place that attackers will need to get over in order to exploit the user through phishing.

6. Proxy and Filtering

Filter all access to and from the computers and all user interactions with links. By limiting where users can go and by implementing user- and application-aware proxies, connections can be filtered. This is quite simple to achieve if the devices the users work on are configured correctly; remember to include the mobile devices that nearly everyone uses. One easy way to do this is to use a secure and trusted DNS service that filters DNS requests. Secure DNS will soon be the norm and will further enhance overall security; browser vendors are already implementing versions of secure DNS to avoid this kind of trickery.

7. Disable External Links

If a link is not to or from a system that is trusted by the organisation, disable it. It's surprising how habitual we are: we often use the same sites daily to do our work. It is possible to create a list of trusted sites and only allow traffic to and from those sites for added security; after a couple of days that profile will encompass a trusted list. From time to time a user might require an additional site if a new application or process is introduced. This approach is not as difficult as one may think and introduces a very strong level of security, far superior to allowing access to everything.

8. Manage the Credential Most Likely to Get Phished

This is an 'out of the box' idea that is being adopted nowadays. In simple terms, groupings of privileged accounts exist that are used for transactions, including special usernames, passwords and credit card numbers.
These credentials are only active for a small window, for instance while a transaction needs to go through; afterwards the credentials are deactivated and the credit card invalidated until needed again. This control is extremely effective, as the window of opportunity is limited to minutes and the attacker would need to know exactly when that small window is open to exploit the credentials. If alerts are raised on these credentials whenever they are used, it will be apparent when they are compromised: if the accounts are locked and the cards deactivated and you still get an alert, you know the credentials have been compromised and appropriate measures can be taken. Working on the assumption that credentials get compromised despite all other layers of defence (controls) in place, this management of credentials is a good safety net.
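The windowed-credential control in layer 8 reduces to a simple validity check, sketched below. The field names and the window length are illustrative assumptions, not a reference to any particular product:

```c
#include <time.h>

/* Sketch of the "small window" control: a credential is unusable
 * unless it sits inside an explicitly opened activation window. */
typedef struct {
    time_t window_start;   /* when the window was opened */
    int    window_secs;    /* how long it stays open, e.g. a few minutes */
} credential_t;

void activate(credential_t *c, time_t now, int secs) {
    c->window_start = now;
    c->window_secs  = secs;
}

/* 1 while inside the window, 0 before activation or after expiry */
int is_usable(const credential_t *c, time_t now) {
    return now >= c->window_start &&
           now <  c->window_start + c->window_secs;
}
```

Paired with an alert on every use, any call to the credential outside an open window is an immediate compromise signal.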
Aid Fullerton posted 26 Jan 2001

Dear cabling experts,

Please could you explain the logic/advantages of patch panels? Why not cable straight from the hub/switch to the wallplate or equipment? What do they achieve? Thanks in advance.

Roman Kitaev posted 30 Jan 2001

Patch panels provide a method of arranging circuits. The patch panel is the central place where circuits are connected, re-connected and administered. Using patch panels, you will not have to move and disturb the horizontal cable (which is not designed to be moved a lot) or the ports on the active equipment. The only case I can think of where a patch panel may not be needed is the direct connection of two PCs with a crossover cord.

Roman Kitaev, RCDD
The federal government is typically a step behind when it comes to regulating the Internet, but in a timely and significant move the Federal Trade Commission recently issued a report saying that the mobile industry should include do-not-track features in its apps and software. The report is an indication that the government is stiffening its attitude toward mobile privacy, and it should also be a reminder to consumers to pay close attention to how much access and what types of permissions they grant to mobile apps. Do-not-track has entered mainstream consciousness recently, and most web browsers, such as Google Chrome and Mozilla Firefox, now have DNT features or plugins that can be activated to prevent a user's web activity from being tracked. But the FTC report is the first significant step the government has taken to apply the same standards to mobile devices, and it is geared more specifically toward the apps that run on smartphones than toward their mobile web browsers. Many apps require geolocation and other personal information from users, and many downplay or mislead users about what kind of information they can and do gather. For instance, and almost surely not coincidentally, on the same day it issued this report the FTC also fined Path, a social networking app, $800,000 for violating federal privacy laws regarding children by collecting too much information from users, including information about the people in their address books. According to the New York Times, the FTC has increased its emphasis on mobile data privacy because of the great number of entities that smartphones can provide information to: service carriers, mobile operating system developers, device manufacturers, app companies and, of course, advertisers.
Microsoft has confirmed a report of a new, unpatched memory corruption error in Word. The bug can be exploited by creating a specially crafted Word document and allows the attacker to execute unauthorized code on the system. In order for this attack to be carried out, a user must open the malicious Word document attached to an email. In a web-based attack scenario, an attacker would have to host a web site containing a Word file used to attempt to exploit this vulnerability. When a user opens a specially crafted Word file containing a malformed string, it may corrupt system memory in such a way that an attacker could execute arbitrary code. In the past year, hackers have increasingly researched Microsoft Office products, which some security researchers consider to be a better source of bugs than the core operating system. The vulnerability affects Microsoft Word 2000, Microsoft Word 2002, Microsoft Office Word 2003 and Microsoft Word for Mac OS X. The problem is not expected to be fixed in today's software patches.
On 2 November 1988, the Morris worm became the first blended threat affecting multiple systems on the Internet. One of the things the worm did was exploit a buffer overflow in the fingerd daemon, made possible by its use of the gets() library function. In this particular case the fingerd program had a 512-byte buffer for gets(). However, this function would not verify whether the input received was bigger than the allocated buffer, i.e. it would not perform boundary checking. Because of this, Morris was able to craft an exploit of 536 bytes that would fill the gets() buffer and overwrite parts of the stack. More precisely, it overwrote the return address in the stack frame with a new address pointing into the stack, where the crafted input had been stored. The shellcode consisted of a series of opcodes that would perform the execve("/bin/sh", 0, 0) system call, giving the attacker a shell prompt. A detailed analysis was written by Eugene Spafford, an American professor of computer science at Purdue University. This was a big event and made buffer overflows gain notoriety.

Time passed, and the security community had to wait for information about the closely guarded technique to become publicly available. One of the first articles on how to exploit buffer overflows was written in the fall of 1995 by Peiter Zatko, a.k.a. Mudge, at the time a member of the prominent hacker group L0pht. One year later, in the summer of 1996, the 49th issue of the Phrack e-zine was published. With it came the notorious step-by-step article "Smashing the Stack for Fun and Profit" by Elias Levy, a.k.a. Aleph1. This article is still a reference today, in academia and in industry, for understanding buffer overflows.
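The root cause can be sketched in a few lines of C. The vulnerable pattern appears only in a comment (gets() should never be called, and it was removed from the language in C11); the bounded copy below performs exactly the check gets() lacked. The truncation semantics are a design choice made here for illustration:

```c
#include <string.h>

/* The vulnerable pattern (never use this):
 *
 *     char line[512];
 *     gets(line);   // no bounds check: 536 bytes of input run past
 *                   // the buffer and overwrite the saved return address
 *
 * A bounded copy refuses to write past the destination: */
size_t bounded_copy(char *dst, size_t cap, const char *src) {
    size_t n = strlen(src);
    if (n >= cap)
        n = cap - 1;          /* keep room for the terminating NUL */
    memcpy(dst, src, n);
    dst[n] = '\0';
    return n;                 /* number of bytes actually copied */
}
```

With a 512-byte buffer, a 536-byte input would simply be truncated at 511 bytes instead of trampling the stack.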
In addition to these two articles, another was written in 1997 by Nathan Smith, named "Stack Smashing vulnerabilities in the UNIX Operating System". These three articles, especially the one by Aleph1, allowed the security community to learn and understand the techniques needed to perform such attacks.

Meanwhile, in April 1997, Alexander Peslyak, a.k.a. Solar Designer, posted on the Bugtraq mailing list a Linux patch to defeat this kind of attack. His work consisted of changing the memory permissions of the stack to read and write, instead of read, write and execute. This defeats buffer overflows in which the malicious code resides in the stack and needs to be executed from there. Nonetheless, Alexander went further, and in August 1997 he was the first to demonstrate how to get around a non-executable stack using a technique known as return-to-libc. Essentially, when a buffer overflow is executed, the limits of the original buffer are exceeded by the malicious input and adjacent memory is overwritten, in particular the return address in the stack frame. The return address is overwritten with a new address that, instead of pointing into the stack, points to a memory address occupied by the libc library, e.g. system(). Libc is the C library that contains all the system functions on Linux, such as printf(), system() and exit(). This is an ingenious technique: it bypasses a non-executable stack and doesn't need shellcode. It can be achieved in three steps. As Linus Torvalds wrote in 1998, you do something like this:

- Overflow the buffer on the stack, so that the return value is overwritten by a pointer to the "system()" library function.
- The next four bytes are crap (a "return pointer" for the system call, which you don't care about)
- The next four bytes are a pointer to some random place in the shared library again that contains the string "/bin/sh" (and yes, just do a strings on the thing and you'll find it).

Apart from pioneering the demonstration of this technique, Alexander also improved his previous non-executable stack patch with a technique called ASCII armoring. ASCII armoring makes buffer overflows more difficult because it maps the shared libraries to memory addresses that contain a zero byte, such as 0xb7e39d00. This is another clever defence, because one of the causes of buffer overflows is the way the C language handles string routines like strcpy(), gets() and many others. These routines are designed to handle strings that terminate with a null byte, i.e. a NUL character. So, as an attacker crafting a malicious payload, you could provide input that contains no null byte; the string handling routine would process it with catastrophic consequences, because it would not know where to stop. By introducing a null byte into the library addresses, payloads that must pass through the string handling routines break at that address.

Based on the work of Alexander Peslyak, Rafal Wojtczuk, a.k.a. Nergal, wrote to the Bugtraq mailing list in January 1998 describing another way to perform return-to-libc attacks and defeat the non-executable stack. This new technique was not confined to returning to system() in libc and could use other functions, such as strcpy(), and chain them together. Meanwhile, in October 1999, Taeho Oh wrote "Advanced Buffer Overflow Exploits", describing novel techniques for creating shellcode to use in buffer overflow attacks. Following all this activity, Crispin Cowan presented at the 7th USENIX Security Symposium in January 1998 a technology known as StackGuard.
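Torvalds's three steps translate directly into a payload layout. The sketch below builds that classic 32-bit layout into a byte buffer; the padding length and both addresses are hypothetical placeholders that a real exploit would first have to discover for the target binary:

```c
#include <stdint.h>
#include <string.h>

/* Builds the classic 32-bit return-to-libc payload layout:
 * [padding][&system][fake return address][&"/bin/sh"]. */
static void put32le(unsigned char *p, uint32_t v) {
    p[0] = (unsigned char)(v & 0xff);        /* x86 is little endian */
    p[1] = (unsigned char)((v >> 8) & 0xff);
    p[2] = (unsigned char)((v >> 16) & 0xff);
    p[3] = (unsigned char)((v >> 24) & 0xff);
}

size_t build_ret2libc(unsigned char *out, size_t pad,
                      uint32_t system_addr, uint32_t binsh_addr) {
    memset(out, 'A', pad);              /* fill buffer up to saved EIP */
    put32le(out + pad, system_addr);    /* overwrites the return address */
    put32le(out + pad + 4, 0xdeadbeef); /* fake return for system() */
    put32le(out + pad + 8, binsh_addr); /* system()'s argument */
    return pad + 12;                    /* total payload length */
}
```

Note that ASCII armoring breaks exactly this construction: if system_addr contains a zero byte, a string routine copying the payload stops there.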
StackGuard was a compiler extension that introduced the concept of "canaries". To prevent buffer overflows, binaries compiled with this technology carry a special value that is created during the function prologue and pushed onto the stack next to the saved return address; this special value is referred to as the canary. During the function epilogue, StackGuard checks whether the canary, and therefore the saved return address, has been preserved. If the value has been altered, the execution of the program is terminated.

As always in the never-ending cat and mouse game of the security industry, after this new protection was introduced, others had to innovate and take it to the next level in order to circumvent the implemented measures. The first information about bypassing StackGuard was published in November 1999 by the Polish hacker Mariusz Wołoszyn on the Bugtraq mailing list. Then, in January 2000, Mariusz, a.k.a. Kil3r, and Bulba published in Phrack 56 the article "Bypassing StackGuard and StackShield". A further step was made in 2002 by Gerardo Richarte from CORE Security, who wrote the paper "Four different tricks to bypass StackShield and StackGuard protection".

The non-executable stack patch developed by Alexander was not adopted by all Linux distributions, and the industry had to wait until the year 2000 for something to be adopted more widely. In August 2000, the PaX team (now part of grsecurity) released a protection mechanism known as Page-eXec (PaX) that makes some areas of the process address space, namely the stack and the heap, non-executable by changing the way memory paging is done. This mitigation is nowadays standard in toolchains based on the GNU Compiler Collection (GCC) and can be turned off with the linker flag "-z execstack". Then, in 2001, the PaX team implemented and released another mechanism known as Address Space Layout Randomization (ASLR).
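The canary mechanism described above can be modelled in miniature: a guard value placed after a buffer, set before an unchecked copy and compared afterwards. This is a toy illustration only; real SSP canaries are random per process, and the check is emitted by the compiler rather than written by hand:

```c
#include <stdint.h>
#include <string.h>

/* Toy model of a StackGuard canary: a guard value sits between the
 * local buffer and (in a real frame) the saved return address. The
 * fixed value here is purely illustrative. */
#define CANARY 0xdeadc0deu

typedef struct {
    char     buf[16];   /* local buffer */
    uint32_t canary;    /* stand-in for the guard slot */
} frame_t;

/* Copies n raw bytes into the frame without a bounds check (like the
 * vulnerable code would), then reports whether the canary survived:
 * 1 = intact, 0 = clobbered, i.e. the overflow would be detected. */
int copy_and_check(frame_t *f, const char *src, size_t n) {
    f->canary = CANARY;
    memcpy(f, src, n);          /* writes past buf once n > 16 */
    return f->canary == CANARY;
}
```

Any copy longer than the buffer trashes the guard value, which is exactly the condition the epilogue check detects before the corrupted return address can be used.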
This method defeats the predictability of addresses in virtual memory. ASLR randomly arranges the virtual memory layout of a process: the addresses of shared libraries and the locations of the stack and heap are randomized. This makes return-to-libc attacks more difficult because the addresses of C library functions such as system() cannot be determined in advance.

By 2001, the Linux kernel had two measures to protect against unwarranted code execution: the non-executable stack and ASLR. Nonetheless, Mariusz Wołoszyn wrote a breakthrough paper in issue 58 of Phrack in December 2001. The article was called "The advanced return-into-lib(c) exploits" and introduced a new technique known as return-to-plt, which was able to defeat the first ASLR implementation. The PaX team then strengthened the ASLR implementation and introduced a new feature to defend against return-to-plt. As expected, this didn't last long without a comprehensive study of how to bypass it: in August 2002, Tyler Durden published an article in Phrack issue 59 titled "Bypassing PaX ASLR protection".

Today, ASLR is adopted by many Linux distributions. It is built into the Linux kernel, and on Debian- and Ubuntu-based systems it is controlled by the parameter /proc/sys/kernel/randomize_va_space. The setting can be changed with the command "echo <value> > /proc/sys/kernel/randomize_va_space", where value can be:

- 0 – Disable ASLR. This setting is applied if the kernel is booted with the norandmaps boot parameter.
- 1 – Randomize the positions of the stack, virtual dynamic shared object (VDSO) page, and shared memory regions. The base address of the data segment is located immediately after the end of the executable code segment.
- 2 – Randomize the positions of the stack, VDSO page, shared memory regions, and the data segment. This is the default setting.
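For tooling that reads this setting back, the three values can be captured in a small lookup. The labels are informal summaries of the list above, not kernel terminology:

```c
/* Maps the /proc/sys/kernel/randomize_va_space values to a short
 * label; handy when auditing a fleet of hosts. */
const char *aslr_mode(int value) {
    switch (value) {
    case 0:  return "disabled";
    case 1:  return "stack, VDSO and shared memory randomized";
    case 2:  return "stack, VDSO, shared memory and data segment randomized";
    default: return "unknown";
    }
}
```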
Interestingly, on 32-bit Linux machines an attacker with local access could disable ASLR just by running the command "ulimit -s unlimited". A patch has just been released to fix this weakness.

Following the work on StackGuard, the IBM researcher Hiroaki Etoh developed ProPolice in 2000. ProPolice is known today as Stack Smashing Protection (SSP) and was built on the StackGuard foundations. However, it brought new techniques, such as protecting not only the return address, as StackGuard did, but also the frame pointer, and a new way of generating the canary values. Nowadays this feature is standard in the GNU Compiler Collection (GCC) and can be turned on with the flag "-fstack-protector". In 2006, Ben Hawkes presented at Ruxcon a technique to bypass the ProPolice/SSP stack canaries using brute-force methods to find the canary value.

Time passed, and in 2004 Jakub Jelinek from Red Hat introduced a new technique known as RELRO. This mitigation was implemented to harden the data sections of ELF binaries: the ELF internal data sections are reordered, and in the case of a buffer overflow in the .data or .bss section, the attacker cannot use a GOT-overwrite attack because the entire Global Offset Table is (re)mapped read-only, which defeats format string and 4-byte write attacks. Today this feature is standard in GCC and comes in two flavours: Partial RELRO (-z relro) and Full RELRO (-z relro -z now). More recently, Chris Rohlf wrote an article about it, and Tobias Klein covered it in a blog post.

Also in 2004, a new mitigation technique known as Position Independent Executable (PIE) was introduced by Red Hat engineers. PIE is, in effect, ASLR for ELF binaries. ASLR works at the kernel level and makes sure shared libraries and memory segments are arranged at randomized addresses; binaries themselves, however, did not have this property.
This means the addresses of the compiled binary, when loaded into memory, were not randomized, which left a weak spot in the protection against buffer overflows. To mitigate this weakness, Red Hat introduced the PIE flag in GCC (-pie). Binaries compiled with this flag are loaded at random addresses.

The combination of RELRO, ASLR, PIE and the non-executable stack significantly raised the bar against buffer overflow exploitation using the return-to-libc technique and its variants. However, this didn't last long either. First, Sebastian Krahmer from SUSE developed a new variant of the return-to-libc attack for x64 systems, described in his paper "x86-64 buffer overflow exploits and the borrowed code chunks exploitation technique". Then, in an innovative paper published with the ACM in 2007, Hovav Shacham wrote "The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86)". Hovav introduced the concept of return-oriented programming and what he called gadgets, extending the return-to-libc technique to bypass the different mitigations enforced by the Linux operating system. Building on the work of Solar Designer and Nergal, the technique injects no code at all: it reuses instruction sequences already present in the binary and chains them together via the RET instruction to manipulate the program's control flow and execute code of the attacker's choice. This is a difficult technique to perform, but it is powerful, and it is known as ROP. A summary was presented by Hovav at Black Hat 2008.

Also in 2008, Tilo Müller wrote "ASLR Smack & Laugh Reference", a comprehensive study outlining the various attacks against ASLR. In 2009, the paper "Surgically returning to randomized lib(c)" by Giampaolo Fresi Roglia also explained how to bypass the non-executable stack and ASLR. In 2010, Black Hat had three talks about return-oriented exploitation.
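The first step of any ROP toolchain is simply locating gadget candidates. The toy scan below shows the core idea for x86: every occurrence of the one-byte RET opcode (0xc3) in executable code marks the end of a potential gadget. A real tool then disassembles backwards from these points to list usable instruction sequences, which this sketch omits:

```c
#include <stddef.h>

/* Scans a buffer of x86 machine code for the RET opcode (0xc3) and
 * records each offset, up to max_hits results. Each hit marks the
 * end of a candidate ROP gadget. */
size_t find_rets(const unsigned char *code, size_t len,
                 size_t *out, size_t max_hits) {
    size_t n = 0;
    for (size_t i = 0; i < len && n < max_hits; i++)
        if (code[i] == 0xc3)      /* RET */
            out[n++] = i;
    return n;
}
```

Because x86 instructions are variable-length and need not be aligned, even bytes in the middle of intended instructions can form gadgets, which is why real binaries yield so many candidates.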
More recently, and to facilitate ROP exploitation, the French security researcher Jonathan Salwan wrote a tool in Python called ROPgadget. This tool supports many CPU architectures and lets an attacker find the different gadgets needed to build a ROP chain. Jonathan also gives lectures and makes his material accessible, including a 2014 course lecture on return-oriented programming and ROP chain generation.

ROP is the current attack method of choice for exploitation, and research on mitigations and on its further evolution is ongoing. Hopefully this gives you good reference material and a solid overview of the evolution of the different attacks on, and defences of, stack-based buffer overflows. There are other types of buffer overflow, such as format strings, integer overflows and heap-based overflows, but those are more complex; stack-based buffer overflows are a good starting point before tackling them. Apart from the material linked in this article, good resources for learning about this topic are the books Hacking: The Art of Exploitation by Jon Erickson, The Shellcoder's Handbook: Discovering and Exploiting Security Holes by Chris Anley et al., and A Bug Hunter's Diary: A Guided Tour Through the Wilds of Software Security by Tobias Klein.
How many web pages make up the Internet? It's not an easy number to calculate. A team of Dutch scientists published a paper describing an estimation method that uses word-count frequency in search results. Applying their method to the Google search index, they estimated that there are around 47 billion web pages. Assuming the average length of a web page is 6.5 printed pages, it would take 305.5 billion pages to print the Internet. This is roughly the same number of pages as 75 million copies of the seven-volume Harry Potter series. Looking at it another way, if each of these 47 billion web pages were a book, the Internet would be more than 1,200 times larger than the 38 million books and other print materials in the U.S. Library of Congress, the largest library in the world.
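The arithmetic is easy to verify. The sketch below recomputes both headline figures (a C rendering for illustration; every input number comes straight from the text above):

```c
/* Back-of-the-envelope check of the article's figures. */
double printed_pages(double web_pages, double pages_per_web_page) {
    return web_pages * pages_per_web_page;   /* total printed pages */
}

double times_larger_than_loc(double web_pages, double loc_items) {
    return web_pages / loc_items;            /* vs. Library of Congress */
}
```

47 billion pages at 6.5 printed pages each gives 305.5 billion pages, and 47 billion divided by 38 million is roughly 1,237, matching the "more than 1,200x" claim.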
This course provides an introduction to the tools and methodologies used to perform malware analysis on scripts and executables in Windows systems. Students will be shown how to analyze the functionality of a malicious script, debug executables and observe malware behaviour. The two-day class splits the analysis as follows:
- Day 1: analysing malicious scripts (e.g. PowerShell, VBScript), including deobfuscation techniques.
- Day 2: analysing malicious executables, including rapid reverse engineering (covering both static and dynamic analysis).
The course covers the latest threat landscape of malware infection vectors, from the malicious script to reverse engineering the payload.
Prerequisites:
- Knowledge of malware analysis and handling
- Basic reverse engineering knowledge
- Understanding of programming and scripting languages
Requirements:
- VMware Player, Fusion or Workstation
- 30GB of disk space
- 8GB RAM (minimum of 4GB of RAM for the VM)
The global microcontroller market is expected to grow at a CAGR of around 8.3% from 2020 to 2027 and to reach a market value of around US$ 20.2 Bn by 2027.

A microcontroller is a single-chip microcomputer made using VLSI fabrication, and is also referred to as an embedded controller. Currently, several types of microcontrollers with varying word lengths are available on the market, including 4-bit, 8-bit, 64-bit, and 128-bit microcontrollers. A microcontroller is a compact microcomputer designed to control the functions of embedded systems in office machines, robots, home appliances, automobiles, and a variety of other devices. It includes components such as memory, peripherals, and, most importantly, a processor, and microcontrollers are used in devices where a degree of control needs to be applied by the device's user.

Rising adoption of electric vehicles propels the growth of the global microcontroller market

Since advancements in battery technology have been slow compared with advancements in power electronics, the disadvantage of the EV's short range remains. Even with such technological limitations, the electric vehicle (EV) appears to be a viable alternative to the internal combustion engine (ICE) automobile today. Microcontrollers are gaining popularity in this area due to their ability to produce designs that control signals flexibly. A microcontroller performs the mathematical and logical functions that enable it to emulate logic and electronic circuits, and it offers a flexibility that hardwired control components do not. Beyond that, the microcontroller is essential in the development of the ECU of a typical subsystem or module in an EV, and the type of microcontroller used is typically determined by the type of subsystem or module.
For example, one design uses a PIC18F2680 microcontroller in each subsystem or module, allowing easy replication and adaptation across all subsystems and resulting in a homogeneous network with simplified deployment and management.

Microcontrollers in the medical industry will have 100% gains in the global market

According to ACM research, an ageing population combined with dietary and lifestyle choices results in demand for portable, wearable, and implantable medical devices that enable chronic disease management and wellness assessment. Microcontrollers offer the ideal balance of programmability, cost, performance, and power consumption for such devices. They enable today's medical applications while also enabling future applications with sophisticated signal processing requirements; the design of an embedded microcontroller system chip, for example, has resulted in the first sub-microwatt-per-channel electroencephalography (EEG) seizure detection. Furthermore, according to IEEE data, many medical devices are built around microprocessor systems, specifically microcontrollers. The issues of increasing and assessing reliability are discussed on the basis of electroencephalography (EEG), and it is proposed that the device's main component, a microcontroller, be duplicated to increase reliability.

Microcontrollers' multifaceted applications bolster the growth of the global microcontroller market

At the moment, one or more microcomputers control cell phones, alarm clocks, microwaves, ovens, washers, and thermostats; even small devices such as TV remote controls contain a microcontroller. Because all of these electronic devices are controlled by programs that run on increasingly powerful microprocessors, it is now possible to design products that follow different instructions and behave differently depending on the user.
Because of the incorporation of microcontrollers, the ability to create a much broader range of products for the home, workplace, or public that can adapt to user needs is consistently increasing while the cost is decreasing. There is also a trend toward industrial automation: the process of handling various industrial processes automatically with machines, computers and robots using microcontrollers and sensors. The incorporation of microcontrollers increases efficiency and productivity while decreasing human effort and labor costs.

The global microcontroller market is segmented by product and application. By product, the market is segmented into 8-bit, 16-bit, and 32-bit. By application, it is segmented into automotive, consumer electronics & telecommunication, industrial, medical devices, and military & defense.

Based on product, the 16-bit segment is expected to dominate the market during the forecast period. The growing use of MCUs in automotive applications, as well as rapidly decreasing average selling prices, has contributed to the segment's dominance; the automotive industry uses 16-bit MCUs on a large scale because of the specific requirements of the technologies involved. The automotive segment itself is expected to dominate the market in the coming years, due to increased demand for MCUs driven by enhanced car safety features, convenience functions, entertainment systems, and government emission control mandates. The automotive industry is also expected to grow significantly as a result of the Advanced Driver Assistance System (ADAS): image sensors are used in ADAS technology to improve driver safety, enabling features such as lane departure warning, parking assistance, and collision avoidance systems. Stringent regulatory standards aimed at improving road safety for drivers and pedestrians are expected to fuel growth even further.
North America will continue to dominate the global microcontroller market in the coming years. This is due to the region's large consumer base for electronic devices such as tablets, smartphones, and other consumer electronics. In addition, IoT systems are widely used in homes and businesses in this region, which has increased consumer demand for smart electronic gadgets such as smart wearables, sensors, medical devices, and other IoT-enabled devices and is expected to drive regional market growth in the coming years. Europe, on the other hand, will hold the second largest share of the global microcontroller market; the presence of major automotive manufacturers and OEMs in this region is one of the factors behind this. Furthermore, Asia Pacific is expected to be the fastest growing regional market, with a reasonable CAGR. The high proliferation of start-up ventures in developing economies, particularly India, that offer essential microcontroller services, combined with innovative add-on services and products involving GPS navigation, insurance, and entertainment systems, are the key factors driving the global microcontroller market's growth.

The major players in the global microcontroller market are Infineon Technologies AG, Microchip Technology Inc., NXP Semiconductors N.V., Renesas Electronics Corporation, Texas Instruments Incorporated, Toshiba Corporation, ON Semiconductor, and Analog Devices, Inc., among others.

Market By Product
• 8-bit
• 16-bit
• 32-bit

Market By Application
• Automotive
• Consumer Electronics & Telecommunication
• Industrial
• Medical Devices
• Military & Defense

Market By Geography
• North America
• Europe (including Rest of Europe)
• Asia-Pacific (including South Korea and Rest of Asia-Pacific)
• Latin America (including Rest of Latin America)
• Middle East & Africa (including South Africa and Rest of Middle East & Africa)

The microcontroller market is expected to reach a market value of around US$ 20.2 Bn by 2027 and to grow at a CAGR of around 8.3% from 2020 to 2027.
Key takeaways:

- Based on product, the 16-bit segment is the leading segment in the overall market.
- Technological advancement is one of the prominent factors driving demand in the microcontroller market.
- Major players include Infineon Technologies AG, Microchip Technology Inc., NXP Semiconductors N.V., Renesas Electronics Corporation, Texas Instruments Incorporated, Toshiba Corporation, ON Semiconductor, and Analog Devices, Inc., among others.
- North America is anticipated to hold the largest regional market share.
- Asia-Pacific is expected to be the fastest-growing market in the forthcoming years.
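As a rough sanity check on the headline figures, compounding backwards from the projected 2027 value at the stated CAGR yields the implied base-year market size. The report does not state the 2020 value, so the derived number below is purely illustrative:

```python
def implied_base_value(future_value: float, cagr: float, years: int) -> float:
    """Discount a future value back by `years` periods of compound growth.

    The US$ 20.2 Bn (2027) and 8.3% CAGR figures come from the text above;
    the 2020 base value is derived from them, not quoted from the report.
    """
    return future_value / (1 + cagr) ** years

base_2020 = implied_base_value(20.2, 0.083, 7)  # 2020 -> 2027 is 7 periods
print(f"Implied 2020 market size: US$ {base_2020:.1f} Bn")  # about 11.6
```

The same function works for any projection expressed as a future value plus a CAGR.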
“HIPAA” is a commonly recognized term, but the complexities surrounding what HIPAA compliance entails and how it can impact you are not commonly understood. Some people are unsure of what it stands for; others are unclear on what it requires, who it applies to, what benefits it offers, and more. When most people think of HIPAA, they think of the privacy agreement between a doctor and their patients. But it's important to understand every aspect of HIPAA to avoid unknowingly violating it. This guide will take a deep dive into HIPAA and provide clarity around one of the most common uncertainties surrounding it: “What is the civil penalty for unknowingly violating HIPAA?” We'll also touch on solutions for preventing accidental violations, discuss why HIPAA is so important, and more.

Free Download: HIPAA Risk Assessment Template

What is HIPAA Compliance?

HIPAA is an acronym for “Health Insurance Portability and Accountability Act.” The act became federal law in 1996 to protect sensitive patient health information from being disclosed without the patient's knowledge or consent. According to the Department of Health Care Services, HIPAA does the following:

- Reduces health care abuse and fraud
- Mandates industry-wide standards for health care information in electronic billing and other processes
- Provides the ability to transfer and continue health insurance coverage for American workers and their families when they change or lose their jobs
- Requires the protection and confidential handling of protected health information

Why is HIPAA Compliance important?

Let's start by discussing the importance of HIPAA for patients. They benefit the most, because HIPAA ensures multiple safeguards for Protected Health Information (PHI).
According to the HIPAA Journal, “without HIPAA there would be no requirement for healthcare organizations to safeguard data – and no repercussions if they failed to do so.” HIPAA also benefits patients by giving them control over who their information is released to and shared with. Prior to the enactment of HIPAA, there was no requirement for healthcare organizations to release copies of patients' health information. HIPAA allows patients to play an active role in their healthcare. Even the greatest healthcare providers make mistakes; when patients can obtain copies of their medical information, they are able to check for errors and correct mistakes when needed. Another benefit is that information can be passed between providers, eliminating the need for repeat tests and putting each patient's entire health history at their providers' disposal.

HIPAA is also important to healthcare organizations. If you're a healthcare professional, HIPAA provides benefits like streamlining administrative functions, improving efficiency, and eliminating silos between specialists. It has been instrumental in facilitating the industry's transition from paper to electronic records.

How does someone violate HIPAA?

HIPAA is federally regulated by the U.S. Department of Health & Human Services (HHS). HIPAA violations are penalized whenever HHS finds that the acquisition, access, use, or disclosure of PHI was done in a way that poses a risk to the patient involved. It's important to be clear on whether or not you are subject to the HIPAA Privacy Rule, one of the most important requirements of HIPAA. According to the CDC, the following entities are subject to the Privacy Rule and should have a comprehensive understanding of the regulatory concerns:

- Healthcare providers
- Health plans
- Healthcare clearinghouses
- Business associates

If you are subject to the Privacy Rule, you may be at risk of violating it unintentionally.
One example of an unintentional HIPAA violation is losing a USB flash drive that contains private health information. Nobody intended for that information to be released or stolen, but it is a breach of HIPAA nonetheless. Of course, there are also times when HIPAA is violated intentionally. A common example is a doctor discussing PHI with a colleague in a public area of a hospital. This is a known violation and is therefore considered an intentional breach of HIPAA.

Bonus Material: Download our Free HIPAA Compliance Checklist

What is the civil penalty for unknowingly violating HIPAA?

The type of penalty issued when someone unintentionally violates HIPAA (that is, when it can be proven that there was no malicious intent) is called a civil penalty. Civil penalties are typically issued when the violating party was negligent or unaware that their actions were wrong. Here are the details of the civil penalty for unknowingly violating HIPAA:

- $100 per violation when the individual was unaware they were violating HIPAA
- If there was reasonable cause and the individual did not act with willful neglect, a minimum fine of $1,000 per violation
- If the individual acted with willful neglect but then fixed the issue, a minimum fine of $10,000 per violation
- If the individual acted with willful neglect and did not fix the issue, a minimum fine of $50,000 per violation

How to prevent accidental HIPAA violations

Risk management in healthcare is an integral part of ensuring HIPAA compliance. To serve their patients and staff best, hospitals and other medical organizations must assess and control risks in line with regulations like HIPAA. Staying on top of these risks demands a powerful and flexible program. LogicManager's integrated healthcare risk management, compliance, and governance solutions are designed to meet the needs of healthcare professionals. Every organization has unique processes, circumstances, and potential problems.
Our customizable software allows you to accomplish the following: - Prioritize resource allocation by identifying the most critical areas and functions of your business - Record key performance metrics such as rates of readmission and HACs - Relate metrics directly to the business processes that drive them to reveal dependencies and fill gaps in your program - Track all regulatory compliance requirements including the Joint Commission, HIPAA, Medicare and Medicaid, the Affordable Care Act and more in our pre-built risk library What Is HIPAA Compliance: Conclusion LogicManager has everything you need to make your risk management process a painless one. Our software is designed to alleviate the pain points in your company’s risk management processes, so you can focus on aligning and achieving operational and strategic goals across your organization. Much like HIPAA, LogicManager eliminates silos, prevents duplicative work, improves efficiencies and streamlines tasks. Implementing a risk-based framework and methodology is critical in preventing accidental HIPAA violations and allows healthcare organizations to dedicate more time to what matters most: providing quality care. See how your organization can benefit today by requesting a free demo of our HIPAA Compliance software.
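The civil penalty tiers listed earlier form a simple lookup table. As an illustration only, the tier names and minimums below paraphrase that list; the actual amounts are set by HHS and periodically adjusted, so treat this as a sketch rather than legal guidance:

```python
# Minimum civil penalty per violation by culpability tier, paraphrasing
# the tiers listed earlier in this guide. Illustrative only: HHS sets
# (and adjusts for inflation) the real amounts.
MIN_PENALTY = {
    "unknowing": 100,                     # violator unaware of the violation
    "reasonable_cause": 1_000,            # no willful neglect
    "willful_neglect_corrected": 10_000,  # issue fixed after the fact
    "willful_neglect_uncorrected": 50_000,
}

def minimum_fine(tier: str, violations: int = 1) -> int:
    """Minimum total fine for a number of violations in a given tier."""
    return MIN_PENALTY[tier] * violations

print(minimum_fine("willful_neglect_corrected", 3))  # 30000
```

A compliance workflow could use a table like this to flag which incidents need the fastest remediation.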
The worldwide use of both magnetic resonance imaging (MRI) and pacing devices has increased vastly in recent years, and a significant number of implanted patients will likely need an MRI over the lifetime of their device. Although some studies have demonstrated that, with appropriate precautions, MRI can be performed safely in patients with selected implantable pacemakers, MRI in pacemaker patients is still generally contraindicated. Recently, new pacing systems have been designed specifically for safe use in the MRI environment, and the first reported experience suggests that the technology is safe and may allow patients to undergo MRI. MRI-safe or MRI-compatible products help maximize safety and comfort for patients and medical staff alike. According to the AHA's general recommendations, MRI examination of non-pacemaker-dependent patients is discouraged and should only be considered in cases in which there is a strong clinical indication and in which the benefits clearly outweigh the risks. The increased need for MRI in pacemaker (PPM) patients has prompted the development of a specifically designed pacemaker and lead system tested for safe use in the MRI environment, and in early 2011 the US Food and Drug Administration (FDA) approved the first cardiac pacemaker designed for conditionally safe use during MRI examinations.
Cyber Attacks Affect All

Hacking and cyber threats affect many sectors and walks of life. The education sector is no exception, with school hacks becoming a growing problem in the United States, as well as elsewhere in the world. This poses a myriad of challenges to students, parents, tutors, and college administrators. To prevent data loss, they take steps to harden computer systems to eliminate or minimize their vulnerability to external intrusion. Students are at times the most vulnerable to hacking because they cannot afford expensive services or equipment to protect their computers and data. Hackers take advantage of this by installing malware, sending phishing messages, and using other malicious methods. See our list of recommendations on the effective protective and preventive measures you can take to manage or mitigate the risk of hacking.

● Data management

Develop effective and need-driven data management practices. It is critically important to ensure that schools collect only essential data about students. For their part, students should exercise caution when asked to share personally identifiable information (PII). It is also advisable to refrain from sharing your Social Security number (SSN) unless absolutely necessary. This concerns parents' SSNs too. In fact, the U.S. Department of Education does not require parents to share their SSNs due to the high risk of identity theft.

● Network & account security

Ensure your security by using a virtual private network (VPN) every time you work online. VPNs help conceal your identity. Firewalls also help prevent unauthorized access to your network and account. Multifactor authentication is another great safety measure. Without a well-protected network and account, students looking for a professional dissertation writing service will face unnecessary delays in identifying and using a top-rated, legitimate, reliable, and trustworthy company to complete their assignments.
It is equally important to use only secure Wi-Fi networks. Unless it's truly urgent, it is always a good idea to refrain from using public Wi-Fi to access data containing your PII. Protect your software by keeping your antivirus and operating systems up to date. In general, being disciplined about cyber hygiene has huge payoffs, so it is only prudent and pragmatic to constantly seek preventive and protective solutions.

● Strong password

Using strong passwords is a must in today's data-driven world. Make sure you use a combination of letters, symbols, and numbers so that your password is really tough to crack. It is also a good idea to use a mix of lower- and uppercase letters. It goes without saying that you should never use numbers and names directly related to you, including dates of birth, SSNs, names of family members, and other similar PII.

● Data encryption

Data encryption is a recognized tool for protecting data and sensitive information. By encrypting data, you transform it into a code that can only be accessed with a decryption key or a password. You are always on the safe side when using encryption to store information on your computer or share it over the internet or your local network. Whatever the encryption method, talk to the ICT folks at your college to show you how to do it and which method to use. You can also use free online resources if you have to do it on your own.

● Education and training

Education and training should go hand in hand with cyber hardening and other protective measures. Hackers often lie in wait, seizing the right moment for an attack. They are extremely adept at spotting and exploiting human and technological vulnerabilities. Customized training sessions advance the skills of students learning cybersecurity and data protection methods.
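Returning briefly to the strong-password advice above: a password can be generated rather than invented. A minimal sketch using Python's standard secrets module follows; the 16-character default and the exact character classes are illustrative choices, not requirements from this article:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing upper/lowercase letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that contain all four character classes.
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)
                and any(c in string.punctuation for c in pwd)):
            return pwd

print(generate_password())  # a different strong password on every run
```

Pair generated passwords with a password manager, since a password you cannot memorize still has to be stored somewhere safe.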
In addition, training in digital citizenship covers skills, behaviors, rights, and practices in the digital environment, benefiting students, teachers, and college administrators in equal measure. Luckily, many colleges offer regular data security awareness training sessions. Never consider these redundant, and always attend them even if you think you are well versed in cybersecurity.

● Fishy email links and attachments

We all receive extraneous information and emails every day. Some include prize or lottery winning notifications, bonus offers, or unbelievably profitable business proposals. Don't be naive enough to open or click any of these and play into the hands of hackers. Hackers won't waste much time taking advantage of any opportunity presented to them, and they never fail to act quickly to install malware on your computer, giving them unauthorized access to your data.

Protecting your data as a student is an absolute must to avoid the stress and damage inflicted by cyber attacks. Hackers keep developing new and innovative ways of stealing PII and other sensitive data. It is essential that students take measures to protect their privacy, data, and computer systems. Key steps include data encryption, strong passwords, network and account security, and regular training sessions. Not only will these efforts minimize the risk of data loss, but they will also save you the extra costs of data recovery.

Carl Hill has extensive experience advising students, tutors, and college administrators on the best data protection methods. His inputs and recommendations have already resulted in practical hardening efforts at many colleges. As a professional writer and researcher, Carl has also carried out extensive research on the impact of hacking on student performance and academic achievement.
Originally published in u-blox Magazine, May 2020. Re-published with permission. It’s one thing to implement IoT security in friendly environments – with unlimited power, plenty of computing capacity, ample bandwidth, and a team of security experts present and prepared to catch and patch up any security flaws. But these conditions are rare in real-world IoT deployments, where droves of battery-powered devices scattered far and wide run on stripped down operating systems, with minimal computing power, and tightly rationed data plans, often beyond the physical reach of tech support. The task is made more challenging still by the fragmented mix of technologies, protocols, and vendors that are thrust together into dedicated solutions. A domestic air quality monitor, for instance, might combine multiple particulate matter sensors, a host processor, and a wireless modem, each from a different supplier. After sensing and processing, the data may be sent to a smart home hub, then to a router, and finally up to the cloud, passing at each hop through products made by different suppliers from all over the world. In this specific case, confidentiality might not be a high-stakes concern. But given the psychological cost of raising false alarms, data integrity might be. And because nobody wants their smart home devices to be recruited into a botnet and pulled into a nefarious online conspiracy, so might access control. Pet trackers, domestic surveillance cameras, smart TV boxes, connected thermostats, and coffee machines – the list of exposed devices is long and getting longer by the day. The stakes are far greater in the industrial IoT, where compromised smart sensors, smart meters, or smart devices might expose confidential data of thousands to millions of devices. Public and private utilities rely on hundreds of thousands of such devices that are expected to work in the field for a decade or more. And it goes beyond protecting confidentiality. 
Compromised devices can be a conduit for hackers to bring operations to a halt, leading to downtime that can cost companies millions of dollars. Respect for data privacy is absolutely critical to build public trust in connected health applications. Because of the sensitive nature of the data involved, patients count on eHealth service providers to treat their data with as much or more care than their doctors. And because smart health devices such as cardiac pacemakers and connected insulin pumps interface directly with the body, the consequences of being hijacked by hackers can be dramatic, both for the patient and the producers of the device. Municipal authorities are also developing and deploying applications that aim to positively impact their residents’ lives. To be successful, they need to be able to count on the authenticity, integrity, and confidentiality of the data they sense. In each of these settings, the ability to patch and update firmware to address inevitable vulnerabilities is vital for a connected business to stand the test of time. Ultimately, the end-goal is always the same. Devices must be secured so that users can trust and control their devices. The privacy of the data should be protected both on devices themselves and as it transits from the devices to the cloud to ensure authenticity, integrity, and confidentiality. Access to the devices, their data, and their features needs to be restricted to authorized users. And measures should be put in place to detect and respond to intrusions, mitigate their effects, and correct the vulnerabilities at their source. Because their businesses depend on it, major cloud service providers such as AWS, Microsoft Azure, and Google Cloud Platform are constantly building out their services to offer a solid security baseline. But for entire solutions to be secure, the end devices at the periphery of the network also need to pull their weight. 
To hackers, vulnerabilities at the network’s edge can be an open door into the protected portions of the network. They can exploit them unnoticed to sniff or manipulate data as it transmits to the cloud. Or they can modify a device’s firmware to make it serve their own needs. Building a foundation of core protection – services capable of withstanding attacks from all but the most sophisticated actors such as nation states – requires establishing a chain of trust, from the device end all the way up to the cloud, even as data travels through poorly secured, or even hostile, environments. This requires taking security seriously from the design phase through manufacturing, all the way to operations. While there is no one-size-fits-all security solution, there is a general script that helps ensure that the important boxes are checked and that best practices are followed. In their draft recommendations for IoT device manufacturers, the US National Institute for Standards and Technology (NIST) lays out six voluntary but recommended activities to tighten cybersecurity of commercial IoT devices. The GSMA, which represents the interests of mobile network operators, has laid out its security guidelines in a series of reference documents. And we have outlined a four-step path to developing and deploying IoT ecosystems that are resilient to evolving cyberthreats in a white paper. Such a secure solution needs a rock-solid foundation to build on. In this case, this foundation is provided by an immutable chip ID and a robust root of trust, which is best explained as a source that enables a trusted set of advanced security functionality. These can include the ability to securely execute user applications, protect against and detect tampering, and securely store and handle encryption keys and other security assets. A secure boot sequence and secure updates ensure that only authenticated firmware runs on the device. 
A secure client library generates keys and crypto functions needed to securely connect devices to the cloud, and encryption keys derived from the root of trust protect the confidentiality and integrity of all data, whether at rest or in motion. If all that sounds extremely resource intensive, well, it can be. But, with the right expertise, u-blox, a global provider of leading positioning and wireless communication technologies, and Kudelski, the global leader in digital security, are proving that it is possible to fit best-in-class security onto a 16 by 26mm module designed to transmit data for years on end under a tight power budget. It combines Kudelski’s unique security architecture and sophisticated lightweight algorithms to offer a highly scalable key management system aligned with the needs of the IoT. Root-of-trust-based encryption means that customers no longer always need to incorporate a dedicated, separate Secure Element – also referred to as a crypto-chip. End-to-end encryption from the device to the backend or cloud applications means that all the gateways, routers, and other intermediaries on the journey remain blind to the data being sent. And a unique LPWA-optimized key management solution can reduce data overhead eight-fold over standard public key infrastructure (PKI) certificate-based solutions. What makes the solution unique is the way in which the root key of each device is known to Kudelski’s hardened secure servers in the cloud. Both the device and the cloud also contain the proprietary, battle-tested algorithms that generate ephemeral, one-time use keys to provide critical IoT functions like the encryption of data, the authentication of commands, and the validation of new firmware updates. This provides the highest possible level of data confidentiality, device security, and finite access control while limiting bandwidth and power usage. 
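The key-management pattern described above, ephemeral one-time keys derived on both sides from a pre-shared root key, can be sketched generically. This is not Kudelski's proprietary algorithm (the article notes that is proprietary and battle-tested); it is a minimal HMAC-based illustration using only Python's standard library, with the device ID and nonce sizes chosen arbitrarily:

```python
import hashlib
import hmac
import os

def derive_session_key(root_key: bytes, device_id: str, nonce: bytes) -> bytes:
    """Derive a one-time key from a pre-shared root key.

    Both the device and the cloud hold `root_key`; mixing in the device ID
    and a fresh nonce yields a unique key per session, so the root key
    itself never travels over the network.
    """
    info = device_id.encode() + nonce
    return hmac.new(root_key, info, hashlib.sha256).digest()

root = os.urandom(32)    # provisioned at manufacture in practice
nonce = os.urandom(12)   # fresh per session, sent in the clear
k_device = derive_session_key(root, "sensor-0042", nonce)
k_cloud = derive_session_key(root, "sensor-0042", nonce)
assert k_device == k_cloud  # both sides derive the same key independently
```

Because only the short nonce is exchanged, this kind of scheme avoids the bandwidth cost of certificate-based key negotiation, which is the trade-off the article highlights for LPWA devices.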
Most devices designed for secure communications are assigned multiple, usually two or three, root keys during production. If one key is compromised, they can cycle through the remaining ones until they are all used up. Thanks to the pre-shared keys connecting our secure modules to the cloud, users can effortlessly create any number of encryption keys. As a result, every single communication to and from each individual device can be uniquely secured. This can be invaluable in common IoT use cases. Take, for instance, a device that uses machine-learning-generated algorithms to identify suspicious data traffic suggesting the device is being exploited by unauthorized users. Because the devices themselves have limited computational power, the algorithms are typically trained on the cloud using oodles of data before being transferred to the devices. The resulting algorithms, often the more valuable intellectual property, can then be sent over the air to the deployed devices using an encryption key that is unique to the device and to the session. This is but one example of how tying each device’s physical root of trust to the cloud raises the level of security users can leverage to protect their business and their data. And in the event that these keys are disclosed, the same scheme can be used to transparently renew all of the security of the system without impacting the backend or cloud applications, ensuring that active security is available throughout the lifetime of the IoT devices. Another one dovetails with the growing popularity of OSCORE, a lightweight security protocol designed specifically for highly resource constrained IoT end devices. OSCORE decreases security overhead and bandwidth usage by encrypting only the sensitive portion of messages being transferred, so that the gateways, routers, and servers the data travels through do not need to decrypt the data to reroute as it travels towards its destination. 
Furthermore, OSCORE uses pre-shared keys rather than a resource-intensive key negotiation process. The protocol, however, leaves open how the keys are shared between the IoT end device and the destination server. Again, the Kudelski key management scheme, from the end device's root of trust to backend applications, offers an ideal solution. By aligning the solution with the IoT SAFE standards and working group, overseen by the GSMA, we are ensuring that our solution runs seamlessly on all GSMA networks. In addition to leveraging existing security hardware resources to securely store and manage keys and enabling remote management of deployed security applets, abiding by IoT SAFE makes it easy for device manufacturers to develop solutions using an immutable identity pre-provisioned in the SIM. As the IoT continues to expand deeper into our lives and our businesses, securing devices and encrypting communications is becoming ever more critical. With the right technology, security architecture, and expertise, enabling a foundation of core protection for the IoT with minimal resources in terms of battery power, CPU power, and bandwidth is becoming possible.

Learn the background, vocabulary, and key concepts necessary to develop and deploy IoT ecosystems that are resilient to evolving cyber-threats.
Magic links provide a way for users to authenticate without a password, a form of "passwordless authentication," which we discussed a few weeks ago. The entire authentication process with a magic link involves the user providing their email and then clicking said "magic link" to log in. The truth is that more than 60% of users admit to having reused passwords at some point; humans simply can't remember hundreds of strong passwords. That's why we'll delve into how magic links work on a technical level, review the security implications of using them, and discuss how they improve the customer experience.

What are Magic Links and How do they Work?

Magic links are unique and time-limited: for security reasons, they are no longer valid after a certain time. They are a method of authenticating users online and can be used in a passwordless system or as part of a multi-factor authentication system. Instead of the user entering login credentials, a URL with an embedded token is sent via email (and sometimes SMS). Once the user clicks that link to authenticate, they are redirected to the application or system, successfully logged in, as if by a "magic" password, but without an actual password. As many organizations move beyond password-based authentication, magic logins are emerging as a popular method of consumer authentication, depending on enterprise risk appetite. Whether users need a magic link from Slack, a magic link from Tumblr, or simply an easy way to access their apps and services, magic login frees them from remembering a long list of passwords. Magic links are similar to a one-time password (OTP) and follow the same flow as a "Forgotten Password" workflow. At a high level, it goes like this: a user gives an app an email address, clicks the magic link that is sent to that address, and voilà, they're logged in.
From the end user's point of view, a magic link appears magical. In reality, it only uses tokens and hash functions. Let's take a look from a technical point of view:

- A user visits an application or website.
- The website asks for the user's email address.
- The user enters their email address.
- The app generates a token for the magic link and forms the magic link URL.
- The application sends the magic link URL to the user's email.
- The application receives the request at the magic link's endpoint and looks up the user.

If no user is found, authentication simply fails and nothing else happens. This step helps stop hackers dead in their tracks. How? There are arguments that detailed error messages give hackers clues as to who does and does not have an account on the system. Developers can configure whether the link remains valid for a set time interval or for the lifecycle of the user's session.

Magic Links Pros vs. Cons

Are magic links safe? Organizations that implement magic links have been shown to benefit in various ways. For example:

- Simple implementation and use. Since magic links follow a nearly identical flow to password resets, implementing them means making minor tweaks to your code at no additional cost.
- Users just need to enter their email addresses and click the magic link to sign up for an app, providing a simple and engaging onboarding process.
- By exchanging passwords for magic links, organizations reduce administrative overhead, spend less time dealing with failed-login security alerts, and no longer need to act on new password requests.
- A positive experience with the "magic" login process can encourage users to keep using your app. Magic links allow you to build loyalty and a returning fan base.
- Simplifying the login process at checkout means fewer customers abandoning their purchases, opening the door to more conversions on both web and mobile.
- Each set of weak or recycled credentials is an attack window into your organization. By eliminating passwords, you reduce the risk of account takeovers and data breaches through compromised credentials.

From a developer's perspective, magic links are a very attractive form of user authentication: there is no additional hardware to buy and hardly any new code to write if you already have a "Forgot Password" workflow. Unfortunately, they are not as secure as some other forms of authentication, and much of the responsibility for security rests with the user and the user's email provider. That said, the security around magic links isn't all that bad. Weak and reused passwords cause huge security problems that lead to account breaches, and magic links take those off the table: brute-forcing credentials or impersonating a customer with a stolen password gives hackers nothing to work with.
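The token generation and verification steps described earlier can be sketched with a generic, time-limited signed token. This is an illustrative pattern only, not any particular vendor's implementation; the secret-key handling, the token format, and the 15-minute lifetime are all assumptions:

```python
import hashlib
import hmac
import secrets
import time

SECRET_KEY = secrets.token_bytes(32)  # server-side signing secret (assumption)
TOKEN_TTL = 15 * 60                   # link valid for 15 minutes (assumption)

def make_magic_token(email: str, now=None) -> str:
    """Create a time-limited token binding the user's email to a timestamp."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET_KEY, f"{email}|{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}.{sig}"

def verify_magic_token(email: str, token: str, now=None) -> bool:
    """Check signature and expiry; constant-time compare resists timing attacks."""
    try:
        ts, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, f"{email}|{ts}".encode(), hashlib.sha256).hexdigest()
    fresh = (now if now is not None else time.time()) - int(ts) <= TOKEN_TTL
    return fresh and hmac.compare_digest(sig, expected)

token = make_magic_token("user@example.com")
link = f"https://example.com/auth/magic?token={token}"  # emailed to the user
assert verify_magic_token("user@example.com", token)
```

A production system would additionally store the token (or a nonce) server-side so each link can only be used once, which is what makes the link a true one-time credential.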
There are different types of fiber optic cable: some are single-mode and some are multimode. Multimode fibers are described by their core and cladding diameters; usually the diameter of a multimode fiber is either 50/125 µm or 62.5/125 µm. At present there are four kinds of multimode fiber: OM1, OM2, OM3 and OM4. The letters "OM" stand for optical multimode, and each type has different characteristics. Each "OM" has a minimum modal bandwidth (MBW) requirement. OM1, OM2, and OM3 fiber are defined by the ISO 11801 standard, which classifies them based on the modal bandwidth of the multimode fiber. In August 2009, TIA/EIA approved and released 492AAAD, which defines the performance criteria for OM4. While IEC developed the original "OM" designations, it has not yet released an approved equivalent standard; OM4 will eventually be documented as fiber type A1a.3 in IEC 60793-2-10.
- OM1 cable typically comes with an orange jacket and has a core size of 62.5 micrometers (µm). It can support 10 Gigabit Ethernet at lengths up to 33 meters. It is most commonly used for 100 Megabit Ethernet applications.
- OM2 also has a suggested jacket color of orange. Its core size is 50 µm instead of 62.5 µm. It supports 10 Gigabit Ethernet at lengths up to 82 meters but is more commonly used for 1 Gigabit Ethernet applications.
- OM3 fiber has a suggested jacket color of aqua. Like OM2, its core size is 50 µm. It supports 10 Gigabit Ethernet at lengths up to 300 meters, and it can also support 40 Gigabit and 100 Gigabit Ethernet up to 100 meters. 10 Gigabit Ethernet is its most common use.
- OM4 also has a suggested jacket color of aqua. It is a further improvement on OM3. It also uses a 50 µm core, but it supports 10 Gigabit Ethernet at lengths up to 550 meters and 100 Gigabit Ethernet at lengths up to 150 meters.
- Diameter: The core diameter of OM1 is 62.5 µm, while the core diameter of OM2, OM3 and OM4 is 50 µm.
- Jacket Color: OM1 and OM2 MMF are generally identified by an orange jacket; OM3 and OM4 usually have an aqua jacket.
- Optical Source: OM1 and OM2 commonly use LED light sources, while OM3 and OM4 usually use 850 nm VCSELs.
- Bandwidth: At 850 nm, the minimum modal bandwidth of OM1 is 200 MHz*km, of OM2 is 500 MHz*km, of OM3 is 2000 MHz*km, and of OM4 is 4700 MHz*km.
OM3 & OM4 are Superior to OM1 & OM2
Both OM1 and OM2 work with LED-based equipment that can send hundreds of modes of light down the cable, while OM3 and OM4 are optimized for laser-based (e.g. VCSEL) equipment that uses fewer modes of light. LEDs cannot be turned on and off fast enough to support higher-bandwidth applications, while VCSELs are capable of modulation over 10 Gbit/s and are used in many high-speed networks. For this reason, OM3 and OM4 are the only multimode fibers included in the 40G and 100G Ethernet standards. OM1 and OM2 are now usually used for 1G, which makes them unsuitable for today's higher-speed networks; OM3 and OM4 are mostly used for 10G at present. Because OM3 and OM4 can support 40G and 100G, they are likely to become the prevailing choice in the future. Related article: Single-mode vs. Multimode Fiber Cable
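The nominal values above can be collected into a small lookup table. The numbers are the ones quoted in this article; treat them as nominal figures, since actual reach also depends on the transceiver and cabling standard.

```python
# Nominal multimode fiber specs as quoted above:
# core diameter (µm), minimum modal bandwidth at 850 nm (MHz*km),
# and maximum 10 Gigabit Ethernet reach (meters).
MMF_SPECS = {
    "OM1": {"core_um": 62.5, "mbw_mhz_km": 200,  "reach_10g_m": 33},
    "OM2": {"core_um": 50.0, "mbw_mhz_km": 500,  "reach_10g_m": 82},
    "OM3": {"core_um": 50.0, "mbw_mhz_km": 2000, "reach_10g_m": 300},
    "OM4": {"core_um": 50.0, "mbw_mhz_km": 4700, "reach_10g_m": 550},
}

def supports_10g(fiber_type, link_length_m):
    """Check whether a link of the given length stays within the nominal 10G reach."""
    return link_length_m <= MMF_SPECS[fiber_type]["reach_10g_m"]
```

For example, a 100-meter 10G run is fine over OM2 or better, but out of spec over OM1.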
The promise of the metaverse is nothing less than a total transformation of the way we work, play, and live. In order to come anywhere close to meeting this promise, the metaverse must be so big that it's pervasive. How can something of this scope be possible? The answer lies inside your friendly neighbourhood edge data center. The future is virtual Major tech players like Microsoft, Zoom, Epic Games and Facebook have made it clear they see huge potential in the metaverse. Even apparel corporations like Nike and Ralph Lauren are willing to invest big money into the idea. With all the attention and excitement, it may seem like the metaverse is just around the corner. However, there are still so many more questions than answers. Although Facebook’s Mark Zuckerberg claims the metaverse will be mainstream in the next 5-10 years, others aren’t so sure. Time magazine’s Andrew R. Chow speculates that Zuckerberg’s “singular version of the metaverse is decades away from fruition, if even possible at all … The amount of social coordination, infrastructure building and technological advancement it would take to build a universal metaverse like those portrayed in sci-fi is immense”. So, what kind of technological advancements and infrastructure building are we talking about? Firstly, there needs to be hardware – both for users and content creators that doesn’t exist yet. Secondly, a significant auxiliary source of processing power must be activated to support new and massive computing functions. Thirdly, there is the software behind the virtual platforms that must be developed. This last factor seems to be drawing the most attention, but there is another, perhaps more important factor that will determine the fate of the metaverse. That factor is the network supporting all three of these developments. The metaverse will require constant, instantaneous, high bandwidth data transfers that simply aren’t currently possible. 
Networks aren't set for the Metaverse
The Internet's design is based on sharing files from one computer to another, with a server communicating with one endpoint or one other server at a time. What the metaverse will require is constant interaction between hundreds, thousands, maybe even millions of other servers or user devices, with responsiveness in real time. It can be hard to appreciate how far away this capability is. Text chats and social media, for example, seem interactive between hundreds of participants and are more common than ever; although it may not seem like it, these interactions only require a single connection to an individual server. Multiplayer video game experiences are closer comparisons, but they, too, are deceptive. Gaming has become adept at hosting hundreds of thousands of users in the same cyberspace by dividing them up onto server nodes where only a handful interact with each other at any given time. In 2019, Fortnite famously hosted a virtual concert by DJ Marshmello where over 10 million users logged on to participate. However, the idea that this many people were concurrently interacting in a shared experience is illusory: Fortnite divided the audience into groups of 100, within which you could only interact with three other participants. Technically speaking, the event was 100,000 versions of an experience happening at approximately the same time. A similar event featuring rapper Travis Scott last year boasted a record-setting 12 million users through the same method, prompting questions about server disruptions in some areas. Even providing the illusion of a live, synchronous experience is problematic on today's networks, not to mention providing the real thing. Chow cites the failure of virtual and augmented reality technology to reach mainstream adoption as a discussion point for the metaverse, and it's an apt one.
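The shard-based approach described for those concerts can be illustrated with a toy assignment function. The 100-user shard size comes from the article; the function name and sequential assignment strategy are illustrative assumptions.

```python
SHARD_SIZE = 100  # users per server node, as in the Fortnite concert example

def assign_shards(user_ids, shard_size=SHARD_SIZE):
    """Split a user population into fixed-size shards; users only ever
    interact with others inside their own shard."""
    shards = {}
    for i, user in enumerate(user_ids):
        shards.setdefault(i // shard_size, []).append(user)
    return shards

# 10 million concurrent users become 100,000 parallel "copies" of the event:
event_copies = 10_000_000 // SHARD_SIZE
```

Each shard is an independent simulation, which is exactly why the experience only *appears* to be one shared space.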
Although the VR market continues to grow, the experience remains far from optimal, and that is likely one of the primary factors in its underperformance as a commodity. Could the lesson from VR be that if latency or lost packets hurt the experience, the product in general loses desirability? If so, the metaverse will put this tendency on full blast. So what gives? In many ways, the emergence of the metaverse relies on the emergence of a new network to support it. Since we know the biggest corporations in the world are already on board, the onus is now on those running the networks to keep pace. As a result, the industry needs to shift its thinking in regard to the metaverse. Since it's not an application that runs alongside any existing service or network, in many ways the metaverse represents the next generation of Internet infrastructure. Innovations such as Nvidia's 800 Gbps servers will help, but they won't replace the need for next-generation infrastructure, infrastructure that needs to be as pervasive as the virtual worlds it helps build. To reduce latency, the proximity of data centers to the end user will be as crucial as ever. Questions about the metaverse seem to center around what, who, and when, but the answer to one of the most important questions – where – is likely edge data centers.
Both the proxy and the firewall limit or block connections to and from a network, but in different ways. While a firewall filters and blocks communication (ports or unauthorized programs that seek unauthorized access to our network), a proxy redirects it. A firewall is a network security device that monitors incoming and outgoing network traffic and decides whether to allow or block specific traffic based on a defined set of security rules. A firewall is the first line of defence for a network. It establishes a barrier between secured, controlled internal networks that can be trusted and untrusted outside networks, such as the Internet. A firewall can be hardware, software, or both.
Types of Firewalls:
1. Packet filtering firewall
Fundamentally, messages are divided into packets that include the destination address and data. Packets are transmitted individually, often by different routes. Once the packets reach their destination, they are reassembled into the original messages. Packet filtering is a firewall in its most basic form. Its primary purpose is to control access to specific network segments as directed by a preconfigured set of rules, or rule base, which defines the traffic permitted access. Packet filters usually function at layers 3 (network) and 4 (transport) of the OSI model. In general, a typical rule base will include the following elements:
- Source address
- Destination address
- Source port
- Destination port
Packet filtering firewalls are the least secure type of firewall, because they cannot understand the context of a given communication, making them easier for intruders to attack.
2. Stateful inspection firewall
Stateful Inspection, a technology developed and patented by Check Point, incorporates layer 4 awareness into the standard packet-filter firewall architecture.
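The state-table idea can be sketched with a toy class, assuming a policy of "allow outbound connections, and admit inbound packets only if they belong to an established flow". This is illustrative logic only, not how any particular vendor implements stateful inspection.

```python
class StatefulFirewall:
    """Minimal stateful filter: remembers outbound flows, admits only replies."""

    def __init__(self):
        # Established flows tracked as (src_ip, src_port, dst_ip, dst_port).
        self.state_table = set()

    def outbound(self, src, sport, dst, dport):
        # Policy assumption: all outbound traffic is allowed and creates state.
        self.state_table.add((src, sport, dst, dport))
        return True

    def inbound(self, src, sport, dst, dport):
        # An inbound packet is allowed only if it is the reply to a tracked
        # outbound flow, i.e. the 4-tuple reversed. Everything else is dropped,
        # which is also what defeats naive port scans: no state, no answer.
        return (dst, dport, src, sport) in self.state_table
```

The context a plain packet filter lacks lives in `state_table`: the same inbound packet is accepted or dropped depending on what came before it.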
Stateful Inspection differs from static packet filtering in that it examines a packet not only by its header, but also by the packet's contents up through the application layer, to determine more about the packet than just information about its source and destination. The state of the connection is monitored, and a state table is created to compile the information. As a result, filtering includes context that has been established by previous packets passed through the firewall. For example, stateful-inspection firewalls provide a security measure against port scanning by closing all ports until a specific port is requested.
3. Unified threat management (UTM) firewall
A unified threat management (UTM) system is a type of network hardware appliance, virtual appliance or cloud service that protects businesses from security threats in a simplified way by combining and integrating multiple security services and features. UTM devices are often packaged as network security appliances that can help protect networks against combined security threats, including malware and attacks that simultaneously target separate parts of the network. UTM cloud services and virtual network appliances are becoming increasingly popular for network security, especially for small and medium-sized businesses. They do away with the need for on-premises network security appliances while still providing centralized control and ease of use for building network security defence in depth. Unified threat management devices offer multiple layers of network security, including next-generation firewalls, intrusion detection/prevention systems, antivirus, virtual private networks (VPN), spam filtering and URL filtering for web content. Originally developed to fill the network security gaps left by traditional firewalls, next-generation firewalls (NGFWs) usually include application intelligence and intrusion prevention systems, as well as denial-of-service protection.
4. Next-generation firewall (NGFW)
Firewalls have evolved beyond simple packet filtering and stateful inspection. Most companies are deploying next-generation firewalls to block modern threats such as advanced malware and application-layer attacks. According to Gartner, Inc.'s definition, a next-generation firewall must include:
- Standard firewall capabilities like stateful inspection
- Integrated intrusion prevention
- Application awareness and control to see and block risky apps
- Upgrade paths to include future information feeds
- Techniques to address evolving security threats
The proxy server is also known as an application gateway, as it controls application-level traffic. Instead of examining only raw packets, it filters data on the basis of header fields, message size and content. As mentioned above, the proxy server is a component of the firewall; a packet filter alone would not be feasible because it cannot differentiate between port numbers. The proxy server acts as a proxy and takes the decisions for managing the flow of application-specific traffic (using URLs). Now, how does the proxy server work? The proxy server is placed between the client and the origin server. It runs a server process to receive requests from clients that want to access the server. When the proxy server opens a request, it checks the entire content. If the request and its content seem legitimate, the proxy server forwards the request to the real server as if it were a client. If the request is not legitimate, the proxy server immediately drops it and sends an error message to the external user. Another advantage of the proxy server is "caching": when the server receives a request for a page, it first checks whether a response for that page is already stored in the cache. If no such response is stored, the proxy server sends the corresponding request to the server.
In this way, the proxy server reduces traffic and load on the real server and improves latency.
Differences between Firewall and Proxy
- The firewall is used to block traffic that could damage the system; it acts as a barrier for incoming and outgoing traffic on the public network. The proxy server, on the other hand, is a component of a firewall that enables communication between the client and the server when the client is a legitimate user, acting as client and server at the same time.
- A firewall filters IP packets. In contrast, the proxy server filters the requests it receives on the basis of their application-level content.
- The overhead generated by a firewall is higher than that of a proxy server, because the proxy server uses caching and handles fewer aspects.
- The firewall uses network- and transport-layer data, while the proxy server also processes application-layer data.
Also refer to Firewall vs IPS vs IDS
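The caching behaviour described above can be sketched as follows. Here `fetch_from_origin` stands in for the real upstream request, and the cache is a plain dictionary; a real proxy would honour TTLs and cache-control headers.

```python
class CachingProxy:
    """Toy forward proxy: serve from cache when possible, else ask the origin."""

    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin  # callable: url -> response body
        self.cache = {}
        self.origin_hits = 0

    def get(self, url):
        if url in self.cache:
            return self.cache[url]        # cache hit: origin is never contacted
        self.origin_hits += 1             # cache miss: forward to the real server
        response = self.fetch_from_origin(url)
        self.cache[url] = response
        return response
```

Repeated requests for the same page hit the cache, which is exactly how the proxy "lessens the traffic and load on the real server".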
What is Ransomware? Ransomware has been around for quite some time, with early examples found dating back to the late 1980s and early 1990s. One of the earliest known ransomware families, PC Cyborg, was discovered in 1989. This ransomware used symmetric key cryptography to encrypt the user’s files and then demanded a payment of $189 to $378 in order to decrypt the files. In order to make payment, the user was instructed to use a specific credit card or purchase specific money orders. If the user did not comply with these instructions, the software would delete the key needed to decrypt the files, making them permanently inaccessible. The methods used to distribute ransomware have also evolved over time. Early examples were typically spread via floppy disk or email attachments, but as internet usage has become more widespread, so has the distribution of ransomware. These days, it is not uncommon for ransomware to be spread via malicious email attachments or links, drive-by downloads, or even exploit kits. No matter the method of distribution, once a system is infected, the ransomware will typically take one of two forms: either it will encrypt the user’s files and demand a payment to decrypt them, or it will lock the user out of their system and demand a payment to regain access. While ransomware can be a serious threat, there are steps that users can take to protect themselves. In the next section, we will go over some of the best practices for avoiding ransomware infections in the first place. We will also cover what to do if you find yourself the victim of a ransomware attack, as well as some steps you can take to limit the damage that these attacks can cause.
What is a server when using UEB resource-based licensing? A server is a machine that you want to protect. In an environment, every server is typically installed to perform a certain role. For example, a server may be installed with Microsoft Exchange to serve as an email server, or may be installed with the SQL application for their accounting package. Machines which are installed to host virtual machines are called hypervisors (these are the ones that are licensed on a per-socket license).
As enterprises continue on their digital journeys, security teams are preparing for the good, the bad, and the ugly of APIs. We’ll explain in plain language what APIs do, how they are attacked, and how API security works either as a stand-alone solution or with Web Application Firewalls and DDoS protection as part of an overall defense-in-depth application security strategy. Application Programming Interfaces (APIs) are software intermediaries that enable applications to communicate with one another. Web APIs connect between applications and other services or platforms, such as social networks, games, databases and devices. Additionally, Internet of Things (IoT) applications and devices use APIs to gather data, or even control other devices. For example, a utility company may use an API to adjust the temperature on a thermostat to save power. APIs also make rapid development and innovation possible in cloud-native environments. APIs simplify low-level software layers and enable developers to focus on the core functionality of their applications. They both lower the barrier to entry for inexperienced developers and increase efficiency for more experienced people. They deliver unprecedented flexibility and speed at lower costs than other development approaches. For more on the benefits of APIs in web application development, read my post, How Web Applications Are Attacked Through APIs. How cybercriminals attack APIs APIs often self-document information, such as their implementation and internal structure, which can be used as intelligence for an attack. This makes them tempting targets for cyber criminals. Additional vulnerabilities, such as weak authentication, lack of encryption, business logic flaws and insecure endpoints make APIs vulnerable to the attack types outlined below. 
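The thermostat example can be made concrete. The endpoint, parameter names and token below are invented for illustration; real utility APIs will differ.

```python
import json
import urllib.request

# Hypothetical API call a utility might make to adjust a thermostat setpoint.
# Endpoint, field names and the bearer token are illustrative assumptions.
req = urllib.request.Request(
    url="https://api.example-utility.com/v1/thermostats/42/setpoint",
    data=json.dumps({"target_celsius": 20.5}).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",
    },
    method="PUT",
)
# urllib.request.urlopen(req) would send it; here we only build the request.
```

The application making this call never touches the thermostat's internals; the API is the intermediary that hides them.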
Man In The Middle (MITM)
A man-in-the-middle (MITM) attack involves an attacker secretly relaying, intercepting or altering communications, including API messages, between two parties to obtain sensitive information. For example, a malicious actor can act as a man in the middle between an API issuing a session token in an HTTP header and a user's browser. Intercepting that session token would grant access to the user's account, which might include sensitive personal data such as credit card information and login credentials.
API injections (XSS and SQLi)
A perpetrator can inject a malicious script into a vulnerable API (i.e., one that fails to properly filter input or escape output (FIEO)) to launch an XSS attack targeting end users' browsers. Malicious commands can also be inserted into an API message, such as an SQL command that deletes tables from a database. Any web API requiring parsers or processors is vulnerable to attack. For example, a code generator that includes parsing for JSON code and doesn't sanitize input properly is susceptible to the injection of executable code that runs in the development environment.
DDoS attacks
A DDoS attack on a web API attempts to overwhelm its memory and capacity by flooding it with concurrent connections, or by sending/requesting large amounts of information in each request. If you have visibility into the API being targeted, you know how it will react to a flood of requests, and good DDoS protection will help mitigate the attack. DDoS protection is compromised, however, when you do not know the full schema of an API facing a deluge of requests, or the changes that have been made to that schema, so you don't know how it will respond to an attack.
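The SQL injection risk described above is usually closed by treating user input strictly as data. A sketch using Python's built-in sqlite3 with a parameterized query instead of string concatenation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # The ? placeholder binds `name` as a parameter, so input such as
    # "alice' OR '1'='1" is compared literally and cannot alter the SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# The vulnerable alternative would build SQL by string concatenation,
# e.g. f"SELECT name FROM users WHERE name = '{name}'" - never do this.
```

The same principle applies to any API parser: validate and escape input before it reaches an interpreter.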
How API Security works
Imperva API Security enables comprehensive API visibility for security teams – without requiring development to publish APIs via OpenAPI or to add resource-intensive workflow to their CI/CD processes – by providing full contextual data and tags and automatically determining risks around sensitive data. Security teams can leverage continuous discovery of APIs – whether known edge APIs, unknown shadow APIs or internal APIs driving transactions on the backend – to incorporate a positive security model and ensure ongoing protection from API-based threats. What's more, when an API is updated, Imperva API Security enables security teams to understand any new risks and incorporate changes. This all leads to faster, more secure software release cycles. Imperva API Security is a tool that enables security to keep pace with innovation without impacting development time. Join us to learn more about API trends, terms, key use cases, and the key capabilities your Security and DevSecOps teams need to protect your enterprise data. Chris Rodriguez, Research Director from IDC's Security & Trust practice, will kick off the session with his industry insights. Then Imperva's Head of API Security, Lebin Cheng, will share what customers are saying about API security. Join us on March 30 and learn about:
- The trends driving rapid adoption of APIs and the emerging risk surface that results from an outdated API inventory
- Where application security fits in protecting APIs and reducing risks
- Which tools are best to cover each part of the OWASP API Top 10
- A strategy to discover and classify every API in and out of production
- How APIs have become the lingua franca of the Internet today, and why you need to act quickly to prevent data breaches, from two industry experts on API security
Reserve your spot today. Try Imperva for Free: protect your business for 30 days on Imperva.
Goal: The number of web-based devices is expanding at an exponential clip, virtualization is making a very static environment dynamic, and now, with the exhaustion of IPv4 and the oncoming complexities of IPv6, network operators must reevaluate what IP Address Management (IPAM) really is. The goal of this article is to define the various functions that make up IP Address Management.
"So, IPv6. You all know that we are almost out of IPv4 address space. I am a little embarrassed about that because I was the guy who decided that 32-bit was enough for the Internet experiment. My only defense is that that choice was made in 1977, and I thought it was an experiment. The problem is the experiment didn't end, so here we are." Vint Cerf, LCA 2011 Keynote Speech
TCP/IP (Transmission Control Protocol/Internet Protocol) is the technology that devices use to interact. IP addresses are the unique identifiers that devices use to communicate with each other over the global Internet. At the inception of the Internet, IP version 4 (IPv4) was, and currently remains, the most widespread protocol used to communicate. By their binary nature, IP addresses are a finite resource, and IPv4 specifically is approaching full deployment globally. The keeper of the free address pool, the Internet Assigned Numbers Authority (IANA), is fully depleted of IPv4 resources. The Asia Pacific Regional Internet Registry, one of the five regional registries that report to IANA, is also fully depleted of IPv4 resources. Another, the American Registry for Internet Numbers (ARIN), is not far behind. To continue the operation of the Internet, IP version 6 (IPv6) was created. This address space is vast – roughly 340 undecillion (2^128) addresses – and unlikely to be depleted in the next 50 years. Networks wishing to grow, or new networks wishing to enter the market, must transition to include both IPv6 and IPv4, eventually moving entirely to the new protocol.
This evolution will require an entirely new paradigm of IP resource management. Every host on every network must be unique in order for the global Internet to function. The concept of uniqueness requires specific, 100% accurate accounting of where and to whom address space is deployed, to preclude duplicate assignments. Also, with IPv4 diminishing, it is imperative to know what space is free for assignment. IP addresses are distributed in blocks called "subnets". A subnet is assigned and routed to an organization, and that organization can then use that subset of IP addresses to access the Internet through the physical circuit connecting it to its Internet Service Provider (ISP). In the IPv4 world this can mean millions of IP addresses, and in the IPv6 world octillions. It is up to the network engineer to "architect" a subnet, making sure he/she is subdividing those IP addresses across the network based on growth needs, growth projections, capacity planning, etc. This is a very crucial first step in the IP management process. Think of subnetting in terms of a bag of M&M's: if you start out with all yellow candies, divide them into two piles, then pull out half of them, you would no longer have two piles of yellow candies, but two piles of blue candies. If you split one of the two piles of blue into two, you would have two piles of green candies and one larger pile of blue candies. Split one of the greens, and you have two reds, one green and the blue. This can go on to very small chunks, always divisible by two. When you start out with 16,777,216 candies (known in IPv4 as a /8, or historically a Class A) or 79,228,162,514,264,337,593,543,950,336 candies (known in IPv6 as a /32), keeping track of the piles and how they are split up is mission critical for enterprises and ISPs alike. Historically, IPAM was an add-on to DNS/DHCP tools and was little more than an expensive alternative to the widespread practice of using manual spreadsheets.
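The M&M analogy maps directly onto prefix splitting, which Python's standard ipaddress module can illustrate (a sketch of the arithmetic, not a full IPAM):

```python
import ipaddress

# A /8 "bag" of 16,777,216 IPv4 addresses...
block = ipaddress.ip_network("10.0.0.0/8")
assert block.num_addresses == 16_777_216

# ...split into two /9 "piles", each of which can be split again and again.
halves = list(block.subnets(prefixlen_diff=1))
# halves == [10.0.0.0/9, 10.128.0.0/9]

# The same operation works on an IPv6 /32 allocation:
v6 = ipaddress.ip_network("2001:db8::/32")
assert v6.num_addresses == 2 ** 96  # 79,228,162,514,264,337,593,543,950,336
```

Every split halves the pile and lengthens the prefix by one bit, which is why subnet sizes are "always divisible by two".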
Most IPAM tools sold since the early 2000s were really IP tracking modules tasked with identifying what a particular IP was being used for and by what. This conflates IPAM with asset tracking; both are valuable, but they are distinct functions. Little thought was put into IP network planning and architecture as part of these modules, leaving the planning of the architecture to the network architects to draw out on whiteboards outside of the tool. This is not what should be considered IPAM. IPAM is the means of planning, architecting, tracking, and managing the address space used in a network. As you can see, tracking is part of the definition, but it is only one component of the total definition. IPAM is a five-step process that should be followed to truly be called an IPAM solution, as illustrated by our diagram above (six steps if you include the normalization and importing of data, which can take months if you have years of IPv4 data to import).
Step 0 – Data normalization and importing. This is fairly self-explanatory; however, it can be one of the more intensive parts of the process of moving from manual processes to an automated platform. We have seen this take some companies months, during which time they discover countless errors in data, duplications, erroneous entries and entries with no detailed information (one of the more compelling reasons to move to a more automated system).
Step 1 – Requesting space from the RIR. Most tools today have little integration with the five RIRs. Requesting space used to be an ongoing task as IP resources were consumed, forcing one to go back to the RIR with justification for more space. Using a templatized email or leveraging the open APIs, a good IPAM should allow you to perform this task from within the tool itself, rather than copying, pasting, formatting and sending a request to the RIR, awaiting receipt, and then inputting the result into an IP tracker.
Step 2 – Plan and architect the IP allocation and assignment policy, then execute. This is the heart and soul of a good IPAM solution, and where most products on the market fall short. Instead, the network architect is left to draw up a plan, create subnets accordingly and then input those subnets into the IP tracker. A good IPAM solution will incorporate this planning structure in the engine and allow the architect to simply write the policy for the highest-level subnets. Note: In larger organizations there may be a distinction between network architects and IP analysts. A good IPAM should have various levels of permissions granting access to the system that allow for this type of delineation – sometimes referred to as multi-level user access.
Step 3 – Assigning IP resources per policy, or "Get Next Available IP Subnet". This process is usually performed by the IP analyst who is working to assign unique IPs to downstream customers or services.
Step 4 – Propagating IP data to and from other systems such as DNS, DHCP, assets, etc. A good IPAM will integrate with ALL systems that relate to the IP address itself. This can include front-end sales tools holding information about clients (e.g. leveraging Salesforce's API), or internal DNS and DHCP systems directly.
Step 5 – Reporting is an oft-overlooked component of IPAM. Most tools simply use log files to capture certain transactions. This creates a problem for many companies that must comply with transparency/compliance regimes such as SAS 70, HIPAA, or Sarbanes-Oxley.
Following these five steps (six if you include data import – hard not to) in your evaluation will lead you to a great IPAM tool. Our next article will talk about fundamental differences between IPv4 IPAM and IPv6 IPAM. A fundamental difference between IPv4 (scarce) and IPv6 (unlimited) planning is changing mindsets from a plan geared towards very limited resources (IPv4) to one with near-unlimited resources (IPv6).
An entirely different mindset arises when planning for purpose instead of planning to run out.
The use of mobile phones is expanding rapidly, and these devices have become a prime target for phishers. While mobile devices have access to various communication channels (email, social media, etc.), text messages offer substantial benefits to phishers. Billions of texts are sent every day, and threat actors actively use SMS phishing, also known as smishing, to target users' privacy and their money. Mobile and email message volumes continue to climb internationally, with mobile messaging traffic showing no signs of decreasing in 2021. Approximately 44% of Americans reported a sharp increase in scam text messages during the pandemic.

Smishing is an attack that uses false SMS messages to trick consumers into disclosing personal information. Like other phishing attempts, smishing deceives someone into opening a link that downloads malware onto their mobile device or hands the scammer vital personal information. Smishing is especially harmful to individuals who don't understand fundamental cybersecurity, because the SMS messages are formulated to look genuine.

Various Types Of Smishing

Smishing started as a simple SMS phishing assault, and many variations have been tried since. Here are some of the most prevalent attacks to be aware of.

Smishers can do a variety of things with a text message, such as impersonating a bank agent to steal your personal information. They may send you a text message with a link urging you to go to your bank's website and verify a recent questionable charge. Sometimes they ask you to phone a customer care number contained in the text message to address a questionable charge or a compromised account. During such a call, you may be pressured into releasing vital information or tempted by an "upgrade" or prize giveaway.
Smishing in Instant Messaging

Strictly speaking, smishing does not include phishing via instant messaging programs like Facebook Messenger or WhatsApp, although it is closely connected. The smisher capitalizes on people's comfort with receiving and responding to messages from strangers on social media platforms by pretending to be someone they are not. As with a genuine phishing attempt, the purpose of the attack is to get you to provide the threat actor with personal information such as passwords or credit card details. Attackers are willing to pay a high price for your sensitive information.

These lures typically feature a clickable link. A smishing message may, for example, confirm a bogus order and include a link to change or cancel the transaction. When the recipient clicks the link, they are taken to a fake website that harvests their login information.

What To Do To Safeguard Your Organization Against Smishing Attacks

Make use of access control. Not everyone in the company requires access to all files. Only those who need to use databases, websites, and networks should have access to them. This decreases the damage a successful smishing attack can do. Instruct staff to compress files and deliver them over email rather than other methods, as this is a safer alternative.

Utilize security awareness training. To improve awareness of the dangers of clicking links and downloading files in text messages, use security awareness programs and models. To keep training dynamic and engaging, add gamification and micro- and nano-learning modules.

Determine how well-versed your personnel are in cybersecurity. Before you begin, it can be highly beneficial to evaluate your employees' cybersecurity knowledge by running a short survey with questions that assess their alertness to various fraud attempts. You can do this by creating a free survey with a tool like JotForm.
Knowing your employees' expertise on the subject will aid in developing your cyber awareness training program.

Remind your employees regularly. Remind employees not to respond to links in a text message from unknown senders or phone numbers. Employees should block and delete such SMS messages on their smartphones. Keep everyone up to date on potential smishing attacks. If you become aware that your organization is being used in a smishing or phishing scam, notify your clients and customers as soon as possible to avoid unintended data breaches or other corporate damage. Reiterate your company's policies on requesting account information and acceptable communication techniques.

Have clear BYOD policies and restrictions. If employees are permitted to use their smartphones for business purposes, implement a "Bring Your Own Device" (BYOD) policy that establishes clear expectations and standards for everything from app usage to cyber threat detection.

Get Help with Mobile Security

While cybercriminals are out there, your organization can take precautions to stay safe. Merely opening a text message is unlikely to expose your company to a virus or endanger your data, but avoid clicking on any links, and be cautious of any unusual or unexpected messages to prevent any form of email phishing or smishing. Contact us at C Solutions to discover more about protecting your company's mobile devices from smishing.
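To make the "red flags" from the advice above concrete, here is a toy Python heuristic that scores a text message on a few crude smishing signals. The keyword and shortener lists are invented for illustration and are nowhere near what a production filter would use.

```python
import re

# Hypothetical lists for illustration only -- not a real blocklist.
URL_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}
URGENCY_WORDS = {"urgent", "verify", "suspended", "prize", "winner"}

def smishing_risk(message):
    """Score a text message 0-3 on crude smishing signals."""
    msg = message.lower()
    score = 0
    if re.search(r"https?://", msg):
        score += 1                      # contains a link at all
    if any(s in msg for s in URL_SHORTENERS):
        score += 1                      # link hidden behind a shortener
    if any(w in msg for w in URGENCY_WORDS):
        score += 1                      # pressure language
    return score

print(smishing_risk("URGENT: verify your account at http://bit.ly/x1"))  # → 3
```

The same three signals (an unexpected link, an obscured destination, and urgency) are exactly what awareness training teaches employees to spot by eye.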
In today's high-tech, data-driven environment, network operators face increasing demand to support ever-rising data traffic while keeping capital and operating expenditures in check. Incremental advances in bandwidth component technology, coherent detection, and optical networking have seen the rise of coherent interfaces that allow for efficient control and lower cost, power, and footprint. Below, we discuss 400G, coherent optics, and how the two are transforming data communication and network infrastructure in ways that benefit both clients and network service providers.

What is 400G?

400G is the latest generation of cloud infrastructure, representing a fourfold increase in maximum data-transfer speed over the current 100G standard. Besides being faster, 400G has more fiber lanes, which allows for better throughput (the quantity of data handled at once). Data centers are therefore shifting to 400G infrastructure to enable new user experiences and innovative services such as augmented reality, virtual gaming, and VR. Simply put, a data center is like an expressway interchange that receives and directs information to various destinations, and 400G is an upgrade to the interchange that adds more lanes and a higher speed limit. This makes 400G not only the go-to cloud infrastructure but also the next big thing in optical networks.

What is Coherent Optics?

Coherent optical transmission, or coherent optics, is a technique that modulates the amplitude and phase of light, transmitting across two polarizations, to carry significantly more information through a fiber optic cable. Coherent optics also provides faster bit rates, greater flexibility, simpler photonic line systems, and better optical performance. This technology forms the basis of the industry's drive toward network transfer speeds of 100G and beyond while delivering terabits of data across a single fiber pair.
When appropriately implemented, coherent optics solves the capacity issues network providers are experiencing. It also allows for scaling from 100G to 400G and beyond on every signal carrier, delivering more data throughput at a relatively lower cost per bit.

Fundamentals of Coherent Optics Communication

Before we look at the main properties of coherent optics communication, let's briefly trace the development of this transmission technique. Fiber-optic systems came to market in the mid-1970s, and enormous progress has been made since then. Subsequent technologies sought to solve the major communication problems of the time, such as dispersion issues and high optical fiber losses. And although coherent optical communication using heterodyne detection was proposed in 1970, it did not become popular because the IMDD (intensity modulation/direct detection) scheme dominated optical fiber communication systems. Fast-forward to the early 2000s, when fifth-generation optical systems entered the market with one major focus: making WDM systems spectrally efficient. Further advances through 2005 brought digital-coherent technology and space-division multiplexing.

Now that you know a bit about the development of coherent optical technology, here are some of its critical attributes.

- High-gain soft-decision FEC (forward error correction): This enables signals to traverse longer distances without the need for several subsequent regenerator points. The results are more margin, less equipment, simpler photonic lines, and reduced costs.
- Strong mitigation of dispersion: Coherent processors account for dispersion effects once the signals have been transmitted across the fiber. Advanced digital signal processors also avoid the headaches of planning dispersion maps and budgeting for polarization mode dispersion (PMD).
- Programmability: The technology can be adjusted to suit a wide range of networks and applications. One card can support different baud rates or multiple modulation formats, allowing operators to choose from various line rates.

The Rise of High-Performance 400G Coherent Pluggables

With 400G applications, two streams of pluggable coherent optics are emerging. The first is a CFP2-based solution with 1000+ km reach capability; the second is a QSFP-DD ZR solution for Ethernet and DCI applications. Both streams bring measurement and test challenges in meeting rigorous technical specifications and guaranteeing painless integration and deployment in an open network ecosystem.

Testing these 400G coherent optical transceivers and their sub-components requires test equipment capable of producing clean signals and analyzing them, with measurement bandwidth greater than 40 GHz. For dual-polarization in-phase and quadrature (IQ) signals, the stimulus and analysis sides need varying pulse shapes and modulation schemes on the four synchronized channels. This is achieved with instruments based on high-speed DACs (digital-to-analog converters) and ADCs (analog-to-digital converters). Increasing test efficiency requires modern tools that provide an inclusive set of procedures, including interfaces that work with automated algorithms.

Coherent Optics Interfaces and 400G Architectures

Supporting transport optics in form factors similar to client optics is crucial for network operators because it allows for simpler, more cost-effective architectures. The recent industry trend toward open line systems also means these transport optics can be plugged directly into the router without requiring an external transmission system.
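As a back-of-the-envelope illustration of the line-rate arithmetic behind these pluggables, the sketch below assumes a dual-polarization 16QAM signal at roughly 60 Gbaud, which is in the neighborhood of 400ZR-class modules; the exact baud rate and overhead vary by implementation, so treat the figures as illustrative.

```python
def line_rate_gbps(baud_gbd, bits_per_symbol, polarizations=2):
    """Raw line rate = symbol rate x bits per symbol x polarizations."""
    return baud_gbd * bits_per_symbol * polarizations

# DP-16QAM: 4 bits per symbol per polarization, two polarizations.
raw = line_rate_gbps(60, 4)          # 480 Gb/s raw on the line
payload = 400                        # net client rate after FEC/framing
overhead = (raw - payload) / raw     # fraction consumed by coding/overhead
print(raw, round(overhead, 2))       # → 480 0.17
```

The gap between the raw line rate and the 400G client payload is what pays for the high-gain FEC described above: roughly a sixth of the transmitted bits are redundancy that lets the signal survive longer, noisier spans.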
Some network operators are also adopting 400G architectures, and with standardized, interoperable coherent interfaces, more deployments and use cases are coming to light. Beyond DCI, several application standards, such as Open ROADM and OpenZR+, now offer network operators increased performance and functionality without sacrificing interoperability between modules.

Article Source: Coherent Optics and 400G Applications
Rainbow table attacks, birthday password attacks, and hybrid attacks are discussed in this post. Other password attacks are covered here. These are all important topics related to the SY0-401 Security+ exam and are covered in the CompTIA Security+: Get Certified Get Ahead: SY0-401 Study Guide.

Rainbow Table Attacks

A rainbow table attack attempts to discover a password from its hash using rainbow tables, which are huge databases of precomputed hashes. It helps to first look at how some password crackers discover passwords without a rainbow table. Assume an attacker has the hash of a password. An application can use the following steps to crack it:

- The application guesses a password (or uses a password from a dictionary).
- The application hashes the guessed password.
- The application compares the original password hash with the guessed password hash. If they are the same, the application knows the password.
- If they aren't the same, the application repeats steps 1 through 3 until it finds a match.

From a computing perspective, the most time-consuming part of these steps is hashing the guessed password in step 2. By using rainbow tables, applications eliminate this step. Rainbow tables are huge databases storing passwords and their calculated hashes. Some rainbow tables are as large as 160 GB and include hashes for every possible combination of characters up to eight characters in length; larger rainbow tables are also available.

In a rainbow table attack, the application simply compares the hash of the original password against the hashes stored in the rainbow table. When it finds a match, it identifies the password used to create the hash (or at least text that reproduces the hash of the original password).
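The guess-hash-compare loop above can be sketched in a few lines of Python. SHA-256 and the tiny wordlist are used purely for illustration; a real cracker would iterate over millions of candidates.

```python
import hashlib

def crack(target_hash, wordlist):
    """Guess, hash, compare, repeat -- steps 1 through 3 above."""
    for guess in wordlist:
        # Hashing each guess is the expensive step a rainbow table
        # precomputes and stores, trading disk space for CPU time.
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

words = ["letmein", "hunter2", "correcthorse"]
target = hashlib.sha256(b"hunter2").hexdigest()
print(crack(target, words))  # → hunter2
```

A rainbow table replaces the hashlib call with a lookup into the precomputed table, which is why eliminating step 2 makes the attack so much faster.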
Admittedly, this is a simplistic explanation of a rainbow table attack, but it is adequate unless you plan on writing your own rainbow table software.

Salting passwords is a common method of preventing rainbow table attacks, along with other password attacks such as dictionary attacks. A salt is a set of random data, such as two additional characters. Password salting adds these additional characters to a password before hashing it. The extra characters add complexity to the password and result in a different hash than the system would create from the original password alone. This causes password attacks that compare hashes to fail. Check out this blog post, which covers bcrypt and Password-Based Key Derivation Function 2 (PBKDF2); both use salting techniques to increase the complexity of passwords and thwart brute force and rainbow table attacks.

Remember this: passwords are typically stored as hashes. Salting adds random text to passwords before hashing them and thwarts many password attacks.

Birthday Password Attacks

A birthday attack is named after the birthday paradox in mathematical probability theory. The birthday paradox states that for any random group of 23 people, there is a 50 percent chance that two of them have the same birthday. This is not the same year, but the same one of the 365 days in any year.

In a birthday attack, an attacker is able to create a password that produces the same hash as the user's actual password. This is also known as a hash collision. A hash collision occurs when the hashing algorithm creates the same hash from different passwords, which is not desirable. As an example, imagine a simple hashing algorithm that creates three-digit hashes. The password "success" might create a hash of 123, and the password "passed" might create the same hash of 123. In this scenario, an attacker could use either "success" or "passed" as the password, and both would work.
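The 50 percent figure from the birthday paradox can be checked with a short calculation: the probability that all n samples land in distinct buckets is the product (365/365)(364/365)...((365-n+1)/365), and a collision is the complement of that.

```python
def collision_probability(n, buckets=365):
    """Probability that at least two of n samples share a bucket."""
    p_unique = 1.0
    for i in range(n):
        p_unique *= (buckets - i) / buckets
    return 1 - p_unique

print(round(collision_probability(23), 3))  # → 0.507
```

The same formula explains why hash width matters: replace 365 with 2^128 (MD5) or 2^512 (SHA-2's largest output) and the number of samples needed for a likely collision grows astronomically.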
Birthday attacks on hashes are thwarted by increasing the number of bits in the hash, which increases the number of possible hashes. For example, the MD5 algorithm uses 128 bits and is susceptible to birthday attacks. Secure Hash Algorithm version 2 (SHA-2) can use as many as 512 bits and is not susceptible to birthday attacks.

Hybrid Attacks

A hybrid attack uses a combination of two or more attacks to crack a password. For example, a dictionary attack can use a dictionary of words but combine it with a brute force attack by modifying the words: after trying all the words in the dictionary, a password cracker can append a number such as 1 to each word and try them all again.

Other Security+ Study Resources

- Security+ blogs organized by categories
- Security+ blogs with free practice test questions
- Security+ blogs on new performance-based questions
- Mobile Apps: Apps for mobile devices running iOS or Android
- Audio Files: Learn by listening with over 6 hours of audio on Security+ topics
- Flashcards: 494 Security+ glossary flashcards, 222 Security+ acronyms flashcards and 223 Remember This slides
- Quality Practice Test Questions: Over 300 quality Security+ practice test questions with full explanations
- Full Security+ Study Packages: Quality practice test questions, audio, and Flashcards
What is an Online or Digital Footprint: Definition and Examples

By Bree Ann Russ - Aug 19, 2022

An online digital footprint is the digital history and reputation that follows you whenever you interact with anything online. It includes all your social media posts, comments, likes, shares, images, videos, blog posts, and more. Some also call it an online reputation or electronic reputation. Your digital footprint is essentially what others see when they look you up online or view your social media accounts. Since you may not want certain people seeing every detail of your private life, it's important to monitor your online presence.

What is a Digital Footprint?

As the name suggests, a digital footprint is the long-term collection of digital data that someone leaves behind when using the Internet. You may also see it phrased as an online digital record. Examples include everything from a person's email address to their social media posts, comments, and website visits. Your digital footprint matters for finding employment, maintaining professional relationships, and even finding a place to live. While having a positive digital footprint is important, there is only so much you can do to change online information from your past.

How is a Digital Footprint Different Than an Online Reputation?

The basic definition of a digital footprint is a record of who you are and what you do online: your social media posts, email addresses, websites, and other electronic traces of your digital activity. An online reputation is the public perception of you or your business. It's the collection of all the information people have about you, from your website to job references to professional certifications. Your online reputation can be positive or negative, and it can significantly impact how people view you. A digital footprint is a lot more accessible than an online reputation.
Anyone can access your digital footprint if they know where to look. It's also much easier for someone to damage your online reputation than your digital footprint, much of which sits behind passwords and security measures.

Why is Monitoring Your Digital Footprint Important?

You may be surprised at what people find when they look your name up online. Your digital footprint is a treasure trove of information that potential employers, landlords, and others can use to find out more about you. A quick online search of your name might even surface information dating back to childhood or your high school days. Unfortunately, it may also include information that is outdated or completely incorrect. It's important to monitor your online presence so that you can remove or correct inaccurate information whenever possible.

How to Monitor Your Online Footprint

There are several ways to monitor your digital footprint. First, you can use Google Alerts to keep an eye on how you are discussed online; the alerts send you emails when new content about you is published. You can also use social media to keep track of your reputation. If you have a public social media profile, set up notifications so you receive an email whenever someone comments on your posts; this lets you respond quickly to negative comments or reviews. Finally, use social media to your advantage by posting positive content about your life. For example, you can post stories about volunteering in your community. These stories boost your positive digital footprint or help bury a negative one under posts that show improvements in your life. They may also bring new opportunities to light and help expand your reach.
How Search Engines & Social Media Monitor Your Online Activity

When you search for someone on the Internet, you may be directed to their social media pages or other sites hosting their content, because search engines like Google use algorithms to determine how relevant a piece of information is. Social media sites like Facebook and Instagram monitor your online behavior, and each has its own way of determining what information is relevant. These sites often remove posts they deem inappropriate or that other users flag as inappropriate.

When you post information to sites like Facebook and Twitter, search engines index your post so it is easily accessible: the search engine stores the post and makes it searchable through its website. When you tag someone in a post or upload an image with someone else, the search engines store this information too. You extend your online footprint whenever you tag people in posts, use keywords in your bios, post images with geo-locations, and use hashtags.

What Outsiders Can Do with Your Online Data

When sharing information online, consider who else can access it. Once you post something, it can be easily accessed and even republished elsewhere. Whatever you post online, outsiders have access to. These breadcrumbs of information add up to an increasingly complete profile of who you are. People can look through the pictures you post, the comments you make, and the locations you talk about to gather a great deal of information about you.
Consider that you may have information like this online:

- Your name
- The town you live in
- Whether you are married, and to whom
- How many kids you have
- Your political stance
- Your favorite restaurants
- Names of your pets
- Favorite vacation destinations
- Your birthday
- And much more

If you are like most people, some combination of this information will lead a hacker to the passwords for many of your important accounts. These tidbits also make it easier for someone to steal your identity, since it becomes easier to pretend to be you.

Your Digital Footprint Says a Lot About You

When you are online, you leave digital footprints that others can see. Your online presence can include everything from your social media posts to your online reviews and more, and your digital history can date back years. Be careful what you post, comment on, and search for online. The last thing you want is a comment made in jest costing you a job in the future. For help monitoring your digital footprint, consider investing in identity monitoring services. That way, you're aware of any inaccurate information about you online, and you can get alerts should anyone try to use information from your digital footprint.
Crysis ransomware first appeared in February 2016, and new strains continue to pose serious security threats to both personal computer users and businesses. Crysis is a type of crypto-ransomware: it encrypts the files on an infected computer so that they are unreadable, then demands a ransom in exchange for decryption keys. In most cases, Crysis infects a computer through email, and many victims aren't aware they've been infected until the encryption process is complete.

If you think Crysis has infected your computer, turn it off, unplug all media from it, and call Datarecovery.com at 1-800-237-4200. Our malware recovery experts can create a plan to start restoring your data as quickly as possible.

What is Crysis Ransomware (And How Does It Work)?

Crysis is malware that infects your system and encrypts your files, encoding them in such a way that only someone with a key can open them again. At this time, there is no known crack for its encryption scheme. After the malware encrypts the infected computer's files, it changes the desktop wallpaper to an image with an email address. When the victim writes to that address, the attackers instruct them to send a ransom in bitcoin, a digital currency.

This type of infection is devastating to both individuals and businesses. The ransom can range from 2 to 4 bitcoins ($1,200 to $2,400 as of this writing), and, as with any criminal dealing, victims can never be sure they will receive the key once they pay. Even if the key works and a victim can decrypt their files, malware may linger on the computer. Worse, new versions of Crysis can harvest credentials from a computer, giving the attackers a dangerous amount of access to the victim's information.
Here are some of Crysis's distinguishing features:

- Crysis fools victims into clicking on malware with a double-file-extension trick. The file carrying the payload is an executable, but with two file extensions it can resemble a PDF, document, or other innocuous file.
- The malware does not target specific file extensions. It can encrypt any file type, including system files.
- It primarily uses RSA and AES-128 ciphers for encryption.
- Because Crysis can encrypt all file types, including executables, it can make an infected computer unstable.
- New variants of Crysis are reported to harvest credentials from infected computers, including user login information.
- Crysis can infect both Windows and Mac devices.
- Crysis attempts to delete volume shadow copies of files so that restoring your files from these backups is not possible.
- At least one strain uses an "autorun" to spread across attached devices, making a Crysis infection a serious network security risk.

Crysis is versatile, and because it's capable of stealing user credentials, it may infect a computer multiple times. The best way to avoid an infection is to avoid opening email attachments from unrecognized sources.

How Does Crysis Ransomware Infect My System?

Crysis infects computers through several avenues, but it's typically disguised as a non-executable file or as a legitimate program, like WinRAR or Microsoft Excel. While it primarily spreads through email, it can also appear on a compromised website. Crysis often uses a double file extension, which makes it appear to be a harmless file type, like a .jpg or .pdf, while acting as an executable. If a computer user clicks on the ostensibly harmless file, the malware begins infecting the computer and encrypting its files. The program can spread through a network fairly quickly, but up-to-date antivirus software may prevent it from gaining a foothold.
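The double-file-extension trick described above can be screened for with a simple filename check. This is an illustrative sketch (the extension lists are examples, not a complete policy), not a substitute for antivirus software.

```python
import os

# Illustrative lists only -- a real mail filter would use a fuller policy.
SUSPICIOUS = {".exe", ".scr", ".js", ".vbs"}
DECOY = {".pdf", ".doc", ".docx", ".jpg", ".xlsx"}

def looks_like_double_extension(filename):
    """Flag names like 'invoice.pdf.exe' that pose as documents."""
    root, last = os.path.splitext(filename.lower())
    _, inner = os.path.splitext(root)
    return last in SUSPICIOUS and inner in DECOY

print(looks_like_double_extension("invoice.pdf.exe"))   # → True
print(looks_like_double_extension("report.pdf"))        # → False
```

The check works because the operating system treats only the last extension as meaningful, while a user glancing at the name sees the decoy extension in the middle.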
Because some variants of Crysis steal credentials, corporate information theft is a serious concern. Crysis infections on larger networks should be treated immediately by qualified ransomware experts.

Can I Disable or Remove Crysis Ransomware Encryption?

Security experts were able to decrypt earlier variants of Crysis. However, newer variants use more sophisticated algorithms and cannot be cracked through brute force. For this reason, our specialists attempt to locate backups and alternate versions of files when possible. We can also explore decryption options for older variants, and if necessary, we can organize a safe, secure ransom payment as a last resort.

If you believe Crysis has infected your computer, we recommend turning the machine off, unplugging all media, and calling our ransomware specialists. We can begin formulating a plan to restore your files and remove the ransomware permanently while preserving your user credentials. Though Crysis attempts to delete shadow copies of files, we may be able to locate backup copies you weren't aware of. The security specialists at Datarecovery.com can advise you on your options and get your computer back in working order. Call 1-800-237-4200 to get started.
With all of the talk, perhaps hype, around "server-less" computing these days, it is easy to get lost in jargon. After all, how can you run computer code without having a computer? In short, you can't. The attraction is the idea that you pay for only the computing capacity you require, not for idle servers running twenty-four hours a day, seven days a week. Let's look at a little history and explore how this whole "server-less" thing came to be, and, more exciting, where it is going!

The suffix -as-a-Service can be tacked on to just about anything: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), Functions-as-a-Service (FaaS), Database-as-a-Service (DBaaS), and the list goes on. But what does it all mean? The term Software-as-a-Service has been in use for a while now, replacing Application Service Provider (ASP) in popular usage. It refers to an application that can serve multiple customers from a hosted environment. It is a welcome product of the internet and web servers, but not really a new technology by itself.

The path toward the current explosion of "-as-a-Service" products really began when the first virtual machines were introduced. Companies like VMware and open source projects like Xen (see https://www.xenproject.org/about/history.html for a history of the Xen project) began by emulating a whole computer using a program called a hypervisor, making it possible to define the entire environment of the computer with declarations in configuration files. A computer could now run other computers as if they were programs. A Windows™ or Linux computer could run one or more Windows™ or Linux computers, store their state on disk, pause them, and even copy the disk state to other computers and run them there. At first, virtual machines were slow.
As virtual machines matured, processor manufacturers introduced features in their chip sets to make the emulation more efficient (in fact, less emulation and more direct execution). New features included "live migration", which allows a running computer to be paused, copied, and restarted on another physical machine. Each vendor developed a hypervisor programming interface (API) to allow automation of the whole process. Computers were now "software-defined". And soon, so were networks, routers, and load balancers. Infrastructure management was now possible as a software-managed service!

So what does "server-less computing" mean? When we refer to "server-less" computing, what we really mean is that the service that runs an application or function manages the provisioning of the system for us. This falls somewhat under the category of Platform-as-a-Service (PaaS), because platforms tend to create a place to run applications and functions without having to provision the infrastructure (networks, load balancers, machine instances, and such). So there is no magic; and that is not a bad thing. Managing infrastructure is expensive and complicated. Setting up a physical environment in-house can mean configuring networks; buying redundant servers, routers, and load balancers; and assuring that power, HVAC, and Internet connectivity are all available. This requires both capital expenditure and labor, sometimes around-the-clock labor!

Enter "The Cloud"

Cloud services such as Amazon AWS, Microsoft Azure, and Google Cloud have helped remove the physical aspect, as well as much of the labor, from the equation by providing automated infrastructure, platform, and database services with consoles to manage them. These services make it possible to provision almost immediately and to scale as needed without having to invest in capital equipment or an army of hardware and network experts.
The first implementations were based on the virtual machines discussed above.

The Alpha Anywhere product line helps you create compelling applications for mobile and desktop devices without having to become an expert in a variety of technical areas. With the introduction of Alpha Anywhere Application Server for IIS and Alpha Cloud, you get the same benefit with respect to deployment. You don't have to become an expert, because Alpha Cloud handles the provisioning, scaling, and disaster recovery for you. Alpha Cloud deployments are usage-based: you only pay for the computing capacity you need, and usage plans help add some predictability for cost management. Much like a cell phone plan, cloud computing is a utility, in this case for application deployment. Many developers new to Alpha Cloud ask, "How do I connect to my server?" The simplest answer is, "You don't!" A better answer is, "You don't have to."

Alpha Cloud currently runs on Amazon AWS, spinning up server instances using Amazon AWS Auto Scaling. Auto Scaling is a service that interacts with a load balancer and a collection of virtual machines to make sure that machines are healthy and that there is just the right amount of computing power to run the applications installed. The rules required to manage scaling are all defined as part of the configuration. Alpha Cloud builds on the automation APIs for Amazon AWS by assigning web sites to groups of servers with spare capacity and, if no group has spare capacity, starting a new group of servers. To make sure that applications are always available, Alpha Cloud creates at least two servers for each group, in separate data centers (called availability zones).

What if something goes wrong? Computers crash, applications fail, and networks have interruptions of service. Sometimes this is the result of a software defect; sometimes it is the result of a hardware component failure. Stuff just happens.
Alpha Cloud takes advantage of best-practice architecture developed by Amazon that assumes things will go wrong and works to mitigate problems before they even happen. Alpha Anywhere Application Server for IIS (used on Alpha Cloud) takes advantage of Microsoft IIS application pools to scale processes and to recover from application failures.

- If a machine fails, Amazon Auto Scaling starts a new one.
- If the load on the machines in a group exceeds a predefined threshold, Amazon Auto Scaling starts a new one.
- If the load on the machines in a group drops below a certain threshold, Amazon Auto Scaling terminates one or more of the machines to save on cost.
- If an application stops responding, the application pool terminates the process and starts another one.
- If a process crashes, a new one is automatically started by the application pool.
- Since processes tend to perform better if they are restarted periodically, Alpha Cloud configures the application pool to restart all of its processes once a day. This is done in an "overlapping" fashion, meaning that new processes are started before IIS redirects traffic away from the old ones and then terminates them.

So is Alpha Cloud server-less? From the subscriber's perspective, there are no servers on Alpha Cloud; only deployed web sites and applications. You do not need to manage servers on Alpha Cloud. Currently, some of the dialogs show you the servers (yes, plural) your web site is assigned to. In the future, this may be hidden from you entirely, for a number of reasons.

The Containers are Coming!

So Alpha Cloud is built on clusters of virtual machines. Virtual machines are great! They are self-contained descriptions of an entire computer that can be backed up, moved around, started up, and shut down, so they are available as needed. But virtual machines also take time to start: the virtual computer still has to run all of the code to boot up, just as a physical machine does.
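The scaling and recovery rules above amount to a simple policy. The following Python sketch is illustrative only; the function name and the threshold values are assumptions for demonstration, not part of Alpha Cloud or the AWS API:

```python
def scaling_decision(avg_cpu, healthy_count, min_servers=2,
                     scale_up_at=0.70, scale_down_at=0.30):
    """Return 'add', 'remove', or 'hold' for a server group.

    Mirrors the rules described above: keep at least two servers
    (one per availability zone), add capacity under heavy load,
    and shed capacity when load is low to save on cost.
    """
    if healthy_count < min_servers:
        return "add"      # replace a failed machine
    if avg_cpu > scale_up_at:
        return "add"      # load exceeds the upper threshold
    if avg_cpu < scale_down_at and healthy_count > min_servers:
        return "remove"   # load is low; terminate a spare machine
    return "hold"

print(scaling_decision(0.85, 2))  # -> add
```

In a real deployment these rules live in the Auto Scaling configuration rather than in application code; the point is only that the policy itself is small and declarative.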
And virtual machines also use all of the memory required for the virtual machine on the host computer (the one running the hypervisor that controls the virtual machine).

A set of operating system features introduced on Linux in recent years has made it possible to run a process that "thinks" it is a separate computer but shares the installed operating system with the host computer. These became known as "containers" because, like virtual machines, they are defined as if they were separate computers: they "contain" all of the assets required to run one or more applications. Docker is one of the best-known technologies implementing containers (although there are several). Microsoft introduced Windows Containers with Windows Server 2016, quickly embracing Docker. Although the Windows™ implementation has lagged the Linux implementation of containers, it is evolving quite quickly.

These lightweight containers can often be started in less than a second. Because they share operating system code with the host, their memory footprint is smaller as well. In fact, containers are so lightweight and quick to start that they can be used for one-off, batch, and special processing requests and then shut down and forgotten. Or you could run a lot of them at once behind a load balancer. So containers are disposable. In fact, they are defined to be immutable. In other words, they are not expected to be saved and restarted, so they don't need to be patched or upgraded; you just create a new container when something changes.

Containers are great. If there are one or two, or ten, maybe even twenty, it's easy to find and manage them. Get enough of them, though, and you feel like you are herding cattle (for a fun read on characterizing servers and containers as pets or cattle, see http://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/). The next step in the evolution of containers is a way to manage hundreds or even thousands of them, called "orchestration".
Orchestration takes all of the separate pieces that make containers run and groups them into services, similar to the Amazon AWS Auto Scaling groups we discussed above. The current front-runner for managing containers is Kubernetes, a project created and open-sourced by Google. The big three cloud vendors have all adopted Kubernetes, and each has a service that creates a cluster of nodes to run containers.

One of the latest features made available by cloud providers is often, I think incorrectly, referred to as "server-less computing". Functions-as-a-Service allows you to deploy a single function as a scheduled task or a web service. The granularity is now one specific web request rather than a full-blown web site (you might call it a microservice). What makes it feel "server-less" is that you don't provision a server to handle the request. The provider has a service listening for requests that then fires up a container to handle one or more of them. A container is started, it handles pending requests, and then it can be destroyed. No more hourly charges: you pay for usage (CPU, data transfer, and memory capacity).

The Future is Bright and Alpha will be right there for you!

Containers, Kubernetes, Functions-as-a-Service: all of these are becoming ubiquitous. Alpha Software intends to make use of the best technologies for deploying applications. This includes full-blown web/mobile applications, Functions-as-a-Service/microservices, and scheduled tasks. What will you have to do to take advantage of these new deployment options? If we do our job right, and we intend to, you will have to do very little. You might, for example, see a new option like (Container) in the web site definition dialog of Alpha Cloud instead of the region to deploy your web site in that you see now.
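The Functions-as-a-Service model described above boils down to writing a single handler and letting the provider route requests to it. The sketch below mimics the AWS Lambda handler convention in Python (an `event` dictionary and a `context` object); the event field and the greeting logic are illustrative assumptions, not part of any Alpha product:

```python
import json

def handler(event, context=None):
    """A minimal function-as-a-service style request handler.

    The platform, not the developer, decides when a container is
    started to run this function and when it is torn down again.
    """
    name = event.get("name", "world")  # hypothetical request field
    body = {"message": f"Hello, {name}!"}
    return {"statusCode": 200, "body": json.dumps(body)}

# Locally we can invoke the handler directly, just as the
# provider's runtime would for each incoming request.
response = handler({"name": "Alpha"})
print(response["body"])
```

Because the unit of deployment is one function, the provider can bill per invocation rather than per server-hour, which is exactly the "pay for usage" property described above.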
If you no longer have to consume precious time dealing with servers to provision the deployment environment, load balancing, redundancy, scalability, server utilization, and the like, you can focus on building a great application! Whether Alpha Cloud is running on virtual machines or containers or the next great thing to come along, your investment in software is protected. You won't have to re-architect your solution; just pick another deployment option in the Alpha Cloud dialogs.

So you tell me: isn't your application on Alpha Cloud already "server-less"?
In an extensive report about a phishing campaign, the Microsoft 365 Defender Threat Intelligence Team describes a number of encoding techniques deployed by the phishers. One of them was Morse code. While Morse code may seem like ancient communication technology to some, it does have a few practical uses in the modern world. We just didn't realize that phishing campaigns were one of them! Let's look at the campaign, and then we'll get into this novel use of an old technology.

Microsoft reports that this phishing campaign has been ongoing for at least a year. It is being referred to as the XLS.HTML phishing campaign because it uses an HTML file email attachment of that name, although the name and file extension are modified in several variations. The phishers use variations of XLS in the filename in the hope that the receiver will expect an Excel file if they open the attachment.

When they open the file, a fake Microsoft Office password dialog box prompts the recipient to re-enter their password, because their access to the Excel document has supposedly timed out. This dialog box is placed on a blurred background that displays parts of the "expected" content. The script in the attachment fetches the logo of the target user's organization and displays their user name, so all the victim has to do is enter the password, which is then sent to the attacker's phishing kit running in the background. After trying to log in, the victim will see a fake page with an error message and be prompted to try again.

It is easy to tell from the information the phishers use about the target, like the email address and company logo, that these phishing mails are part of a targeted campaign that required some preparation to reach this step. And this phishing campaign is another step in gathering more data about a victim.
In the latest campaigns the phishers fetch the user's IP address and country data, and send that data to a command and control (C2) server along with the usernames and passwords. Encodings seen in the campaign included:

- ASCII, a basic character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices.
- Base64, a group of binary-to-text encoding schemes that represent binary data in an ASCII string format. By using only ASCII characters, base64 strings are generally URL-safe and allow binary data to be included in URLs.
- Escape or URL-encoding, originally designed to translate special characters into a different but equivalent form that is no longer dangerous in the target interpreter.
- Morse code, more about that below.

Note that encoding is different from encryption. Encoding turns data from one format into another, with no expectation of security or secrecy. Encryption transforms data in a way that can only be reversed by somebody with specific knowledge, such as a password or key. So encoding methods won't hide anything from a security researcher. Why bother, then? Changing the encoding methods around is designed to make it harder for spam filters trained on earlier versions of the campaign to spot the later versions.

Morse code is a communication system developed in the late 1830s by Samuel Morse, an American inventor. The code uses a combination of short and long pulses, which can be represented by dots and dashes that correspond to letters of the alphabet. Famously, the Morse code for "SOS" is ... --- ..., for example. The International Morse Code encodes the 26 letters of the English alphabet, so the phishers had to come up with their own encoding for numbers. Morse code also doesn't include special characters and cannot be used to distinguish between upper and lower case, which makes it harder to use than other types of encoding.
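The difference between these encodings, and why none of them hides anything from an analyst, is easy to see by applying them to the same string. This Python sketch uses only standard-library functions; the sample string is made up and not taken from the campaign:

```python
import base64
import urllib.parse

secret = "password=Hunter2!"  # made-up sample data

# Base64: binary-to-text encoding, trivially reversible.
b64 = base64.b64encode(secret.encode("ascii")).decode("ascii")

# URL (percent) encoding: escapes characters unsafe in URLs.
url = urllib.parse.quote(secret)

print(b64)
print(url)
print(base64.b64decode(b64).decode("ascii"))  # round-trips exactly
print(urllib.parse.unquote(url))              # so does this
```

Both transformations are fully reversible without any key, which is exactly why the attackers rotate encodings to dodge pattern-matching filters rather than to achieve secrecy.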
So, technically, they didn't use Morse code but an encoding system that borrowed its base elements, using dashes and dots to represent characters.

During our own research for this article we also came across files that used the pdf.html filename and similar variations on the theme we saw with the xls.html extension. These HTML files produced the same prompt to log into Outlook because the sign-in had supposedly timed out. These samples were named using a similar format.

For more information about phishing and how to protect yourself and your company, please have a look at our page about phishing. For a full description of the phishing campaign, take a look at the Microsoft blog.

... - .- -.-- / ... .- ..-. . --..-- / . ...- . .-. -.-- --- -. . -.-.--
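The Morse sign-off above can be decoded with nothing more than a lookup table. This Python sketch includes just the International Morse symbols used in that message; extending the table to the full alphabet is straightforward:

```python
# Partial International Morse table: just enough for the sign-off.
MORSE = {
    "...": "S", "-": "T", ".-": "A", "-.--": "Y",
    "..-.": "F", ".": "E", "...-": "V", ".-.": "R",
    "---": "O", "-.": "N",
    "--..--": ",", "-.-.--": "!",
}

def decode(message: str) -> str:
    """Decode space-separated Morse symbols; '/' separates words."""
    words = message.split(" / ")
    return " ".join(
        "".join(MORSE[symbol] for symbol in word.split())
        for word in words
    )

signoff = "... - .- -.-- / ... .- ..-. . --..-- / . ...- . .-. -.-- --- -. . -.-.--"
print(decode(signoff))  # -> STAY SAFE, EVERYONE!
```

The phishers' encoder worked the same way in reverse, mapping JavaScript characters to dot-and-dash tokens, which is why it offered obfuscation but no real secrecy.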
Global emissions from new build projects are at record levels. Consequently, construction is moving further away from, not closer to, net-zero buildings. With the current focus very much on the carbon footprint of facility operations, a new white paper presents the case for taking a Whole Life Carbon approach when assessing data centre carbon impact.

According to the United Nations Environment Programme (UNEP), the carbon cost of building is rising. The UNEP Global Alliance for Buildings and Construction (GlobalABC) global status report highlighted two concerning trends: firstly, 'CO2 emissions from the building sector are the highest ever recorded…' and secondly, 'new GlobalABC tracker finds sector is losing momentum toward decarbonisation.'

Embodied carbon costs are mainly incurred at the construction stage of any building project. However, these costs go further than simply the carbon price of materials such as concrete and steel, and their use. And while it is true that not all buildings are the same in embodied carbon terms, in almost all cases the emissions created at the beginning of the building lifecycle simply cannot be reduced over time. Since this is often true, and especially so in data centres, it is incumbent on the sector to consider the best ways to identify, consider and evaluate the real embodied carbon cost of infrastructure-dense and energy-intensive buildings.

Technical environments and energy-intensive buildings such as data centres differ greatly from other forms of commercial real estate, such as offices, warehouses and retail developments. Take, for example, a new-build 50MW data centre: to meet its design objective, it will require a great deal more power and cooling infrastructure plant and equipment than other forms of buildings.
Embodied carbon in data centres

Embodied carbon in a data centre comprises all those emissions not attributed to operations, that is, to the use of energy and water in its day-to-day running. It's a long list that includes emissions associated with resource extraction, manufacturing and transportation, as well as those created during the installation of materials and components used to construct the built environment. Embodied carbon also includes the lifecycle emissions from ongoing use of all of the above, from maintenance, repair and replacements, to end-of-life activities such as deconstruction and demolition, transportation, waste processing and disposal. These lifecycle emissions must be considered when accounting for the total carbon cost.

The complexity of mission-critical facilities makes it more important than ever to have a comprehensive process to consider and address all sources of embodied carbon emissions early in design and equipment procurement. Only through early and detailed assessment can operators identify the actions that will contribute to immediate embodied carbon reductions.

Calculating whole life carbon

The boundaries used to measure the embodied carbon and emissions of a building at different points in the construction and operating lifecycle are the Cradle to Gate, Cradle to Site, Cradle to Use and Cradle to Grave calculations, where 'Cradle' refers to the earth or ground from which raw materials are extracted. For data centres, the higher levels of infrastructure are equipment-related, additional and important considerations because, in embodied carbon terms, they are categorised under Scope 3 of the GHG Protocol Standards, also referred to as value-chain emissions. Much of the Scope 3 emissions will be produced by upstream activities that include and cover materials for construction.
However, and especially important for data centres, they also include the carbon cost of ongoing maintenance and replacement of the facility plant and equipment.

That brings us to the whole-of-life calculation, which combines embodied and operational carbon. Combining embodied and operational emissions to analyse the entire lifecycle of a building throughout its useful life and beyond is the Whole Life Carbon approach. It ensures that the embodied carbon (CO2e emissions) of materials, components and construction activities is calculated and available, allowing comparisons between different design and construction approaches.

Data centre sustainability is more than simply operational efficiency

The great efforts to improve efficiency and reduce energy use, as measured through improvements in PUE, have slowed operational carbon emissions even as demand and the scale of facilities have surged. But reductions in the operational energy of a facility accrue over time and are not fully realised until 5, 10 or 30 years into the future, whereas embodied carbon is mostly spent up-front as the building is constructed. There is, therefore, a compelling reason to include embodied carbon within all analyses and data centre design decisions. A Whole Life Carbon approach that considers both embodied and operational emissions provides the opportunity to contribute positively to global goals to reduce greenhouse gas emissions, and will save financial costs.
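At its simplest, a Whole Life Carbon figure is the sum of embodied and operational emissions over the assessment period. The Python sketch below illustrates only the arithmetic; all figures and stage names are invented for demonstration (loosely modelled on the EN 15978 lifecycle modules) and do not come from the white paper:

```python
# Illustrative embodied-carbon stages with made-up tCO2e values.
embodied = {
    "product (materials)":       12_000,
    "construction & transport":   1_500,
    "maintenance & replacement":  6_000,  # plant refreshes over the life
    "end of life":                1_000,
}

annual_operational = 2_500  # assumed tCO2e/year from energy and water use
assessment_years = 20

embodied_total = sum(embodied.values())
operational_total = annual_operational * assessment_years
whole_life = embodied_total + operational_total

print(f"Embodied:    {embodied_total:,} tCO2e (spent mostly up-front)")
print(f"Operational: {operational_total:,} tCO2e over {assessment_years} years")
print(f"Whole life:  {whole_life:,} tCO2e")
```

Even with these invented numbers, the point the article makes is visible: a large share of the total is committed before the facility serves its first workload, so design-stage decisions matter.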
Some say that in the real (physical) world, proving one's identity is an unambiguous process. For example, when you show up at an airport's TSA security check, you're required to show a government-issued credential such as a driver's license or a passport, and the TSA agent can see that you are who you say you are. If you're going to check into a hotel room, you show your ID and a credit card, and the hotel's employee can physically see that you are who you claim to be. Oh, is that so…?

To prove one's physical identity is highly ambiguous

Have you ever heard of perfect counterfeit driver's license holograms that easily pass TSA? And what about fake (also called synthetic) identities created with kids' social security numbers, which are then used to open real bank accounts and issue real credit cards? Those fake identities and credit cards are then used online to extend the scope of the fraud. Actually, the verification process gets even more opaque in the digital world, because businesses must find a way to verify that you are who you say you are, even though you are not physically present to show your ID or documentation. So, organizations must find a way to make sure that your digital identity matches your real-world identity.

What does it mean to digitize an identity?

By definition, a digital identity is "the compilation of information about an individual that exists in digital form," grouped into two categories: digital attributes and digital activities. Digital attributes include a date of birth, medical history, identity numbers (SSN, driver's license), government-issued identities (passport, driver's license), bank details, login credentials (username and passwords), email address, biometrics (fingerprint, eye scan, 3D face map), as well as badges and tokens.
Digital activity includes likes, comments, shares and photos on social media, purchase history, forum posts, search queries, signed petitions, geotagging, downloaded apps, and cell phone usage. These attributes, either stand-alone or in combination, can be used to identify an individual digitally.

What attributes should be used to build a digital ID?

As soon as I read the list of attributes that compose each of these two categories, the first word that came to mind was privacy. The second one was ethics. That certainly doesn't reflect much confidence in what's being proposed… Let's face it, there is no way I want anything that pertains to my private digital life to be leveraged to establish whether I am worthy of opening a bank account or booking a trip online. Should the fact that I signed a digital petition against fracking in Western Texas ten years ago prevent me from accessing certain services on the Internet? This may sound a bit extreme, but who knows what policies will be in place in the near future, especially since social media companies and search engines already sell my information to the highest bidder without my consent.

How should an acceptable digital ID be constructed?

An acceptable digital identity is a compilation of attributes that makes the individual's ID proofing process indisputable. What does this entail? First, the process must involve only digital attributes, to mitigate risks inherent to privacy and/or ethical breaches. Second, the process should be based on the triangulation of a claim (the individual's ID photo, address, last name, etc.) with a multitude of company- or government-issued documents (driver's license, passport, etc.) as well as sources of truth (government databases, the passport's issuing country, the passport chip, credit cards, bank accounts, etc.), including advanced biometrics like a liveness test. This ID proofing process eliminates the use of login credentials: usernames and passwords are no longer needed.
The motivation for this is simple: 81% of data breaches are caused by poor password management. Therefore, any system that leverages passwords and weak 2FA such as SMS and email codes cannot assure the integrity of an individual's identity.

How to store digital ID data?

Most organizations store user identity information in centralized databases, oftentimes supported by legacy software, that operate with numerous single points of failure. Large, centralized systems containing the personally identifiable information (PII) of millions of user accounts, stored unencrypted, are prime targets for hackers. In fact, data breaches mainly target PII: 97 percent of all breaches in 2018. Regulatory legislation and enterprise efforts to increase cybersecurity don't seem to cut it, since 2.8 billion consumer data records were exposed at an estimated cost of more than $654 billion in 2018, making 2018 the second-most active year for data breaches on record.

The only alternative to centralized systems is decentralized systems. The user data is stored encrypted on the blockchain, which virtually eradicates the risk of cyberattacks. A blockchain network is an infrastructure that puts control in the hands of the end user. The user remains in control of a private key that protects his or her personal and financial information at all times and, when that data is about to be shared with a third party, consents to send only the information that is pertinent to share. With a blockchain network, most domestic and international guidelines on transparency, privacy rights, and data security are respected and followed. Cryptocurrencies use blockchain technology to keep transactions safe and private. These are the exact attributes that we need to apply to identity: safety and privacy.
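One common pattern behind "the user consents to share only the pertinent information" is to store salted hashes of attributes rather than the attributes themselves, and reveal an attribute only when a verifier actually needs it. The Python sketch below is a simplified illustration of that idea using only the standard library; it is not how any particular blockchain identity product works, and the sample attributes are invented:

```python
import hashlib
import os

def commit(attribute: str, salt: bytes) -> str:
    """Hash an identity attribute with a salt; only the hash is stored."""
    return hashlib.sha256(salt + attribute.encode("utf-8")).hexdigest()

def verify(claimed: str, salt: bytes, stored_hash: str) -> bool:
    """The user reveals one attribute plus salt; the verifier checks it."""
    return commit(claimed, salt) == stored_hash

# The user commits several attributes; imagine the hashes living on-chain.
salt = os.urandom(16)
stored = {name: commit(value, salt) for name, value in {
    "date_of_birth": "1990-01-31",  # made-up example data
    "passport_no": "X1234567",
}.items()}

# Later, the user chooses to disclose only the date of birth.
print(verify("1990-01-31", salt, stored["date_of_birth"]))  # True
print(verify("1990-02-01", salt, stored["date_of_birth"]))  # False
```

The stored hashes reveal nothing by themselves, which is the property the article is after: the user, holding the key material, decides which attribute to disclose and to whom.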
To conclude: it's all about asking the right question

If an organization cannot be sure that an employee, a customer, or a citizen who accesses its systems, applications, and/or website is who he or she claims to be, then the identification, and therefore the authentication, system is broken. The organization cannot trust that this individual truly is an employee, a customer, or an existing citizen, nor can it trust his or her intentions. There is a solution that triggers the unequivocal answer, "I am sure!", and it involves indisputable ID proofing along with the use of advanced biometrics for authentication.

On the employee, customer, or citizen side, the question that needs to be answered without hesitation is, "Can I trust this organization?" Storing user data encrypted on the blockchain is the assurance that those employees, customers, and citizens will not find their personal and financial information for sale on the Dark Web.
NASA's space budget might have been cut in the past few years, but on Earth the agency is still working on some wacky projects, including a fully electric aircraft. Named the LEAPTech wing, it is part of a joint operation between NASA, Joby Aviation and ESAero to build a replacement for gas-chugging airplanes. Joby Aviation is the main organisation working on the design of the LEAPTech wing, having previously worked on small aircraft using a new style of propeller for maximum efficiency in the air.

The LEAPTech wing features 18 electric motors, all independent of one another, allowing the pilot to command each individual motor for even more efficiency. The motors are powered by lithium iron phosphate batteries, with a range of 200 miles. NASA has run the wing through several tests and will use a Tecnam P2006T aircraft fitted with a customised LEAPTech wing for the first flights.

There is still the issue of range, currently 200 miles. NASA could use a hybrid system as an interim step between gas and fully electric, offering a range of 400 miles. Even with a hybrid engine, 400 miles is not enough to entice any commercial airline, considering most need enough fuel to cross more than one continent. That said, electric car range has doubled in the last five years, meaning we should expect to see similar gains in the electric aircraft market once more companies get involved. Tesla's CEO Elon Musk did say the only mode of transport that could not use electric motors would be rockets, and said he would consider working on an electric aircraft sometime in the near future.
Liquid cooling vs immersion cooling vs submersion cooling

As electronic devices and machines work, they produce heat. How that heat is dissipated is determined by the cooling system, which also ensures that the optimal temperature of the device is maintained. This article takes a detailed look at different types of cooling systems, liquid cooling, immersion cooling and submersion cooling, and the features and functionalities that make them distinct. Read on to find out more…

What is liquid cooling?

As the name implies, a liquid cooling system uses liquid components to maintain the temperature of devices. In a liquid cooling system, coolant is circulated through tubes to maintain the desired temperature. The system also has water blocks and radiators for moving the coolant around. Liquid cooling does not require many fans, and herein lies its biggest appeal: only a few fans are needed to move coolant through the tubes and to all required parts of the device. The reduced fan requirement means the device produces less noise; liquid-cooled PCs, for instance, are much quieter than air-cooled ones. Liquid cooling systems are also great for areas where space is a constraint, and can replace air cooling systems when there is a significant space limitation.

What is an immersion cooling system?

The way immersion cooling systems work is in the name: they are designed so that the electrical components of the device are immersed directly in the coolant. These systems were designed to achieve a more efficient means of heat transfer. They can be open-air or closed, and either single-phase or two-phase systems.
Immersion cooling systems can both reduce the power consumption of devices and reduce environmental footprints. The performance of an immersion cooling system depends on the type of liquid used.

What is submersion cooling?

Submersion cooling systems, just like immersion cooling systems, involve placing the equipment in coolant: the equipment sits in tanks filled with a non-conductive coolant. These systems have been deemed especially suitable for use in data centres where high-volume computing occurs. The application of submersion cooling in IT can be described as a crossover from the power sector, as such cooling is routinely used to cool large power distribution components. Submersion cooling systems do not need fans; a radiator mediates the exchange between the cool water circuit and the warm coolant. Non-conductive coolants are required so that they do not interfere with the operation of the device, particularly IT devices.

What are the major differences between immersion, submersion, and liquid cooling systems?

One significant difference between the three cooling systems in this article is that immersion and submersion cooling use non-conductive (and often dielectric) coolants that touch the electronics directly, whereas conventional liquid cooling circulates coolant through a sealed loop. Liquid cooling was designed as an improvement over air cooling and its numerous limitations, and was further specialised into immersion and submersion cooling for enhanced efficiency. The cooling systems in this article share various features and functionalities but differ in many ways. In the future, more cooling systems are expected to be designed to be more efficient and to meet increasing demand in the IT sector.
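The capacity of any of these systems comes down to the same sensible-heat relation: the heat a coolant carries away is Q = ṁ · c_p · ΔT (mass flow rate times specific heat times temperature rise). The Python sketch below uses illustrative numbers; the flow rate, temperature rise, and the dielectric fluid's specific heat are assumptions, not figures for any specific product:

```python
def heat_removed_watts(mass_flow_kg_s: float,
                       specific_heat_j_kg_k: float,
                       delta_t_k: float) -> float:
    """Sensible heat carried away by a coolant: Q = m_dot * c_p * dT."""
    return mass_flow_kg_s * specific_heat_j_kg_k * delta_t_k

# Water loop: 0.1 kg/s, c_p of water ~4186 J/(kg*K), 10 K rise.
water = heat_removed_watts(0.1, 4186, 10)

# Dielectric fluids typically have a lower c_p (~2000 J/(kg*K) assumed),
# so they need a higher flow rate to carry the same heat load.
dielectric = heat_removed_watts(0.1, 2000, 10)

print(f"Water loop removes      {water:.0f} W")
print(f"Dielectric loop removes {dielectric:.0f} W")
```

This is one reason immersion designs compensate with direct fluid contact and large tank volumes: the working fluid is thermally weaker per kilogram than water, but it can touch every hot component.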
The French government's computer emergency response team, part of the National Cybersecurity Agency of France (ANSSI), has discovered a Ryuk variant with worm-like capabilities during an incident response.

For those unacquainted with Ryuk, it is a type of ransomware used in targeted attacks against enterprises and organizations. It was first discovered in the wild in August 2018 and has been used in numerous cyberattacks since, including high-profile incidents like the attack on the Tampa Bay Times and other newspapers in January 2020. According to the FBI, it is the number one ransomware in terms of completed ransom payments.

How has Ryuk changed?

The French team found a variant of Ryuk that could spread itself from system to system within a Windows domain. Once launched, it spreads to every reachable machine on which Windows Remote Procedure Call (RPC) access is possible. (Remote procedure calls are a mechanism for Windows processes to communicate with one another.)

Why is this remarkable?

This is notable for two separate reasons:

- Ryuk used to be dropped into networks and spread manually, by human operators, or deployed into networks by other malware.
- Historically, one of the major players when it came to dropping Ryuk has been Emotet. And as it happens, the Emotet botnet suffered a serious blow when, in a coordinated action, multiple law enforcement agencies seized control of it. If the plan behind this takedown works, the botnet will be rolled up from the inside.

Targeted ransomware attacks command high ransoms because they infect entire networks, grinding whole organizations to a halt. Until this discovery, Ryuk had always relied on something else to spread it through the networks it attacked. Given the timing of the Emotet takedown (January 27, 2021) and the discovery of the worm-like capabilities ("early 2021"), it's tempting to connect the two. However, it would have required a very quick turnaround for these new capabilities to have been developed in response to the loss of Emotet. On the other hand, I'm not a firm believer in coincidence, especially when there are compelling reasons to suspect otherwise.

Not an Emotet alternative

The new-found worm capabilities of Ryuk are not an alternative to the initial infection of a network that Emotet provided: they can be deployed once the attackers are inside, not to get inside. And even though Emotet was renowned for appearing in combination with Ryuk, it certainly wasn't Ryuk's exclusive dealer. It is still hard to tell what the impact of the Emotet takedown will be on the malware families that were often seen as its companions.

Ryuk's technical capabilities

The team behind Ryuk has proven with earlier tricks that it is very adept at using networking protocols. In 2019, researchers found that Ryuk had been updated with the ability to scan address resolution protocol (ARP) tables on infected systems to obtain a list of known systems and their IP and MAC addresses. For systems within the private IP address range, the malware was programmed to use the Windows Wake-on-LAN command, sending a packet to the device's MAC address instructing it to wake up, so the malware could remotely encrypt its drive. (Wake-on-LAN is a technology that allows a network professional to remotely power on a computer or wake it from sleep mode.)

The combination of ARP and RPC

Summing up, this new variant can find systems in the "neighborhood" by reading the ARP tables, wake those systems up by sending a Wake-on-LAN command, and then use RPC to copy itself to identified network shares. This step is followed by the creation of a scheduled task on the remote machine.
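The Wake-on-LAN step relies on a very simple protocol. As an illustration (this is not Ryuk's actual code, which ships as a Windows binary), a "magic packet" like the one the malware broadcasts can be built in a few lines: six bytes of 0xFF followed by the target's 6-byte MAC address repeated 16 times, sent via UDP broadcast.

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN 'magic packet': 6 bytes of 0xFF followed by
    the 6-byte MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network (UDP port 9)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))
```

Any network card with Wake-on-LAN enabled that sees this packet powers the machine on, which is exactly why a sleeping system offers no protection once an attacker can broadcast on the LAN.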
In 2019, the NCSC reported that "Ryuk ransomware itself does not contain the ability to move laterally within a network," meaning that attackers would first conduct network reconnaissance, identify systems for exploitation, and then run tools and scripts to spread the crypto-locking malware. With the development of this new capability, that statement is no longer true.

Mitigating network traversal

One of the proposed mitigations that doesn't involve any cybersecurity software is to disable the user account(s) used to send the RPC calls, and to change the KRBTGT domain password. KRBTGT is a local default account that acts as a service account for the Kerberos Key Distribution Center (KDC) service; every domain controller in an Active Directory domain runs a KDC service. Disabling the user account(s), and especially changing the KRBTGT domain password, will have a serious effect on network operations and require many systems to reboot. But these troubles don't outweigh the ramifications of a full network falling victim to ransomware.

Keep your networks safe, everyone!
What's the Difference? IaaS vs. PaaS vs. SaaS?

November 25, 2020

Cloud computing has become essential to the optimization of enterprise technology. When exploring your options, you will see acronyms like IaaS, PaaS, SaaS, and so on. These might just look like a bunch of random letters, but they are all "as a service" models. If you are considering moving to the cloud, there are three main types of cloud computing to consider: IaaS, PaaS, and SaaS.

Traditionally, companies used an on-premises model, a private cloud in which your hardware is built into your own office. This model costs the most and requires the most internal management, but the three models mentioned above have slowly taken over. Here's what each of them means.

Infrastructure as a Service (IaaS) is a basic service in which your cloud service provider owns and manages the hardware that your software runs on, meaning you are able to rent IT infrastructure like servers or VMs on a pay-as-you-go basis. With this option, your internal IT team must continue to manage your applications and systems, but will not have to deal with the upkeep of the physical infrastructure. So while your provider supplies the basic compute, storage, and networking infrastructure with a hypervisor, you as the user will have to install, manage, and configure your own operating systems. IaaS requires little commitment to your cloud service provider and gives you the flexibility to set up your systems in a way that suits your business, provided you have the internal resources.

Platform as a Service (PaaS) is similar to IaaS in that your cloud service provider supplies your infrastructure, but they also provide your databases and operating system. It is an on-demand environment for developing, testing, delivering, and managing software applications, designed to avoid the expense of buying and managing software licenses and other resources. PaaS also includes servers, storage, networking, middleware, development tools, business intelligence (BI) services, database management systems, and more, all accessible over a secure internet connection on a pay-as-you-go basis. PaaS is mainly used by developers to design their own applications without committing to buying their own infrastructure or having to start from scratch, saving time in the long run. It is often used to quickly create web or mobile applications while avoiding the need to manage software updates or security patches. PaaS also allows for better connectivity between teams, letting users in multiple locations access and collaborate on application development.

Software as a Service (SaaS) is when a cloud service provider supplies an entire application stack: users log in and use the application running on their provider's infrastructure. SaaS usually runs on a monthly subscription model with access through the internet, including services such as maintenance, compliance, and security. This option gives you the security of knowing that your applications and systems are in good hands, with little input from you. Because SaaS is an out-of-the-box solution, it is best for companies that lack the expertise to develop or run applications, and it puts full responsibility on the provider. SaaS helps you reduce costs by eliminating the need for an internal IT team to handle the hosting, management, and maintenance of software applications and the underlying infrastructure.

There are benefits to each of these options, but ultimately the service you choose depends on your budget, security needs, available infrastructure, and IT staff availability.

At Approyo, we help optimize your SAP solutions and get the most out of your investment. We take responsibility for all cloud service operations, including hosting your systems with 24/7 monitoring, BASIS support, a local Helpdesk, backups, and OS management, along with the ability to scale your systems with minimal downtime. Our global team of experts is available at all times to ensure your systems are operating smoothly and to handle any requests or questions you have. With our extensive background in SAP, and our range of tailored SAP services and solutions, we can be the MSP that guides you to success. Not only are we a reliable MSP, but we are your personal SAP consultant as well. Looking for more information on how we can benefit your business? Set up a free consultation with our executive team today.
Approximately 90 percent of data breaches are caused by human error, according to a report by Kaspersky Lab. When businesses dedicate time and resources to making employees aware of cybersecurity threats, they're taking a necessary step to reduce risk and prevent cybercrime from causing financial and reputational damage to the enterprise.

Why Is Cybersecurity Awareness Important for Employees?

Cybersecurity awareness training is designed to educate employees on the complex cybersecurity landscape. Using various learning methods, it can help employees at all levels understand the threats that exist and how to identify an attack. Below are some of the top reasons why cybersecurity awareness training is essential for employees.

Cyberattacks Are Constantly Evolving and Becoming More Successful

Technology is continually evolving and cybercriminals get savvier every day, making it increasingly difficult to distinguish a scam from legitimate communication. Most modern businesses rely heavily on technology for all aspects of their operations, from customer communications to routine tasks. Through comprehensive cybersecurity awareness training and a continued emphasis on vigilance, employees can be prepared to spot risks and avoid behaviors that could lead to a breach.

Many Employees Are Not Properly Trained on Cybersecurity

Although many businesses provide training, employees often lack sufficient information about cybersecurity. Employees need a solid understanding of all aspects of it, such as the differences between various types of attacks, including spoofing, phishing, social engineering, and malware. Training should also teach employees to properly use spam filters, verify senders' addresses and identities, and identify suspicious email addresses, URLs, and attachments.

Minor Errors Made by Employees Can Be Costly and Damaging

The harm that a data breach can cause is often underestimated. According to a study by Accenture, the average cost of cybercrime is $13 million. An employee who is not paying attention or is distracted could make one minor mistake that leads to a massive data breach.

A Culture of Cybersecurity Awareness Boosts Employee Confidence and Wellbeing

When employees are uncertain about how best to protect themselves and the business from cyber risks, it can create ongoing stress that directly impacts their productivity and performance. When employees know what threats to look out for and how to safeguard the business against them, they gain confidence in their ability to use technology safely to do their jobs.

How to Assist Your Employees with Improving Cybersecurity Awareness

There are many ways that businesses can boost their employees' cybersecurity awareness; which methods an organization chooses will depend on factors such as its size and budget. Best practices for training employees to identify and manage cyber threats that could make the company vulnerable to criminals include the following:

Hold Monthly Cybersecurity Awareness Training Sessions

Cybersecurity awareness training should not end at onboarding. Consider holding training sessions for all employees on a monthly basis. During these meetings, review cybersecurity guidelines so that they remain fresh in employees' minds. This is also a great time to address any questions or concerns that workers may have regarding cyber risks.

Administer Phishing Tests to Understand Their Levels of Awareness

Phishing simulations have proven highly effective in determining how employees engage with malicious URLs, links, and attachments. A phishing test typically consists of mock phishing emails or webpages sent to employees to see what action they take when they encounter malicious content.

Encourage Them to Monitor for Suspicious Activity or Emails

Every organization should have controls in place to monitor and report suspicious activity or emails. Educate employees on what to look for when going through email, performing web research, and navigating unfamiliar websites. Review red flags that could indicate that content is unsafe, and how to react when security gaps are discovered.

Work with a Third-Party Cybersecurity Consultant

Every business has its own unique IT infrastructure and faces its own cyber-related risks. Due to the complexity of the cybersecurity landscape, it is important to consult a professional experienced in the field of cybersecurity awareness. A third-party cybersecurity consultant can provide organizations with a wide range of services to reduce their risk of a cyber event, including cyber risk assessments, incident response team formation and planning, IT strategy consulting, and IT coaching and mentoring.

Speak with Hartman Executive Advisors for More Information

Strengthening employee cybersecurity awareness is one of the best ways for organizations to better protect their business and foster a workplace environment where employees have the skills and resources they need to keep cyber threats at bay. To learn more about how and why organizations should focus their employees on cybersecurity awareness, or to speak with a cybersecurity expert, contact the team at Hartman Executive Advisors today.
More and more government workers are teleworking, using Government Furnished Equipment (GFE) for official work and connecting them through personal networks. Cybersecurity is a crucial priority for these users to ensure their data and networks remain secure and uncompromised. This includes being able to identify indicators of a network compromise and pursue potential mitigations. This knowledge aids users in safeguarding their personal networks and data. Personal networks are those used in homes for personal use or telework, such as a home network provided by a residential Internet Service Provider (ISP). These networks usually consist of a router or wireless access points connecting devices to the Internet. They may have computers, mobile devices, gaming systems, or a variety of Internet of Things (IoT) devices connected to them. When setting up these personal networks, implementing proper security from the beginning is crucial. While there is no way to ensure that personal networks will be completely secured from attacks— attackers are persistent and continue to find ways to circumvent security controls—users can still take steps to help prevent future attacks.
Businesses lose millions of dollars to fraud every year. According to PwC, nearly half of all organisations have experienced corruption, fraud, or other economic crimes since 2020. The rate is highest in the technology, media, and telecoms sectors, where two-thirds of companies experienced some kind of fraud.

Cybercrime is a major issue in Southeast Asia, where 25% of respondents experienced an increased risk of cybercrime as a result of COVID-19. The UN Office on Drugs and Crime has noted a 600% rise in cybercrimes in the region. A report by AppsFlyer found that Southeast Asia's losses accounted for 40% of the total estimated fraud losses in Asia-Pacific, totalling US$650 million. Meanwhile, bot attacks are one of the leading methods of fraud, with Singapore suffering the second-highest prevalence after Vietnam.

Digital platforms have created new opportunities for fraud, enabling complex webs of transactions that use sophisticated layering techniques to mask the parties involved. Numerous small transactions are moved through a maze of agents, companies, and financial institutions, making it immensely challenging for investigators to follow the money trail. With the pandemic-driven rush to digital, many existing checks and balances haven't been updated in time: systems and processes that worked offline no longer work in the online world. The internet's borderless nature gives entities easy access to other countries, but puts them out of reach of those countries' jurisdiction and law enforcement capabilities.

The need for speed

Speed is critical for detecting fraud and preventing its spread, but current fraud detection tools can't cope with the millions of transactions and parties now involved. Some organisations have turned to automation, artificial intelligence, machine learning, and natural language processing, but these are only as effective as the data they are fed. The problem is that with existing forensic methods, data is stored in traditional relational database systems that use a basic spreadsheet format of cells, columns, and rows, where only two pieces of data can be correlated at a time. This means that critical patterns and connections that could indicate irregular behaviour go undetected. With business processes becoming faster and more automated, the time margins for detecting fraud are narrowing, making a real-time solution critical.

Identifying suspicious links

One such solution, which investigators are increasingly adopting, is graph data science. This is an entirely different way of storing and managing data, using a graph of nodes and the links between them to represent relationships, adding critical context such as "transacted with" or "registered at". Cybersecurity and antivirus provider Kaspersky Lab has identified advanced scams and social engineering as a key threat for Southeast Asia in 2022, making the ability to trace connected people critical. With a knowledge graph, chains and rings of people can be identified visually. "Guilty by association" scores are generated, based on the quality, quantity, and distance of someone's relationships with suspicious entities. The graphs also become more useful over time: once the pattern of a fraud ring has been captured, a similarity algorithm can use it to detect other potential rings and their participants.

Prediction and prevention

The insurance industry is one of the most susceptible to fraud, the cost of which is estimated at US$100 billion per annum in the United States alone. In Singapore, insurance fraud tripled between 2018 and 2020. Deloitte has cautioned that the Southeast Asian insurance sector has seen increased cyber risk since the pandemic due to the digitalisation of the insurance business model.
Traditionally, insurers have used rule-based software to analyse hundreds of thousands of claims, of which up to 10% may be fabricated or inflated, depending on the category. Zurich Switzerland, part of Zurich Insurance Group, originally had a team of 25 field investigators examining potential fraud cases, but the volume of automated reports was too large to deal with. To triage more efficiently, Zurich moved to a graph data platform. Investigators have now shifted from the rule-based risk tool to the graph-based application, which stores around 20 million nodes and 35 million relationships, and can sift through the flood of information received to rapidly identify issues. The graph can instantly reveal whether the different parties in a dispute are connected, potentially indicating a staged "crash for cash" traffic accident.

Securing supply chains

Supply chains, being vast and wide-ranging, with deep supplier networks and a sheer volume of transactions, are vulnerable to fraud. One example is food scandals, where cheaper, inferior, or counterfeit products are substituted. As KPMG observes, "In today's global market, collusive kickback arrangements, bribery and corruption, and bid rigging are increasingly common and becoming harder to detect."

Sourcing platform Transparency-One tried developing a visibility platform based on a traditional database of columns and rows, queried with SQL (Structured Query Language). The aim was to ensure traceability and enable users to search for any product affected by specific raw materials or issues with facilities. For example, if a product contains cocoa powder, a brand needs to know its origin; if a crisis occurs, such as the 2011 Ivory Coast civil war, the brand can quickly evaluate the impact on production and supply capacity, as well as the risk of price increases. But the volume and structure of the information was too much for the platform to process at speed. Instead, Transparency-One moved to a graph data platform, where it was able to analyse several thousand different products and generate results within seconds. Manufacturers and brand owners can learn about, monitor, analyse, and search their supply chains, and share significant data about production sites and products.

Traditional technologies are not designed to detect elaborate fraud rings or high volumes of suspicious activity. Graph data science, by contrast, can store much richer and deeper data, enabling real-time analysis and fraud prevention that could save organisations millions.
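The "guilty by association" scoring mentioned above can be sketched with nothing more than a breadth-first search over a graph of relationships. The weighting below, where each flagged entity contributes 1/2^distance to every node within a few hops, is a hypothetical simplification for illustration, not any vendor's actual algorithm, but it captures the idea that close, multiply-connected entities score highest.

```python
from collections import deque

def association_scores(edges, flagged, max_hops=3):
    """Score each node by its proximity to known-suspicious ('flagged') nodes.

    Each flagged node contributes 1 / 2**distance to every node within
    max_hops of it, so entities that are near many suspicious parties
    accumulate the highest scores.
    """
    # Build an undirected adjacency map from the edge list.
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    scores = {node: 0.0 for node in graph}
    for source in flagged:
        # Breadth-first search out to max_hops from each flagged node.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            if dist[node] >= max_hops:
                continue
            for nbr in graph.get(node, ()):
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        # Closer nodes receive exponentially larger contributions.
        for node, d in dist.items():
            if node not in flagged:
                scores[node] += 1 / 2 ** d
    return scores
```

On a chain A-B-C-D with A flagged, B scores 0.5, C scores 0.25, and D scores 0.125: the score decays with distance, exactly the "quality, quantity, and distance" intuition described in the article. A production graph platform would run a query like this over millions of nodes in real time.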
In historic milestone, silicon quantum computing just exceeded 99% accuracy (ScienceAlert)

Three separate teams around the world have passed the 99 percent accuracy threshold for silicon-based quantum computing, placing error-free quantum operations within tantalizing grasp.

In Australia, a team led by physicist Andrea Morello of the University of New South Wales achieved 99.95 percent accuracy for one-qubit operations, and 99.37 percent for two-qubit operations, in a three-qubit system. In the Netherlands, a team led by physicist Seigo Tarucha of Delft University of Technology achieved 99.87 percent accuracy for one-qubit operations, and 99.65 percent for two-qubit operations, in quantum dots. Finally, in Japan, a team led by physicist Akito Noiri of RIKEN achieved 99.84 percent accuracy for one-qubit operations and 99.51 percent for two-qubit operations, also in quantum dots.

Morello explained, "When the errors are so rare, it becomes possible to detect them and correct them when they occur. This shows that it is possible to build quantum computers that have enough scale, and enough power, to handle meaningful computation."

In 2014, Morello and his colleagues demonstrated a whopping 35-second lifespan for quantum information in a silicon substrate. Their qubits were based on the spin states of nuclei which, isolated from their environment, enabled the setting of a new time benchmark. But that very isolation proved a problem, too: it made it harder for the qubits to communicate with each other, which is necessary for performing quantum computation. To resolve this issue, Morello and team introduced an electron to their system of two phosphorus nuclei via ion implantation into the silicon, one of the fundamental processes for making microchips. This is how they created their three-qubit system, and it worked.

The other two teams took a different approach. They created quantum dots of silicon and a silicon-germanium alloy, and installed a two-electron qubit gate, that is, a circuit of multiple qubits. They then tweaked the voltage applied to their respective systems, using a protocol called gate set tomography to characterize them. Both teams found that they, too, had achieved higher than 99 percent fidelity in their systems.

Any one of these papers alone would be a significant achievement. The fact that all three teams have reached the same milestone independently suggests that quantum computing will now be surging ahead.
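To get a feel for why those fidelity figures matter, consider how per-gate fidelity compounds over a whole circuit. Under the simplifying assumption of independent gate errors (a back-of-the-envelope model, not a claim from the papers), the chance a circuit runs entirely error-free is just the per-gate fidelity raised to the number of gates:

```python
def circuit_success_probability(gate_fidelity: float, gate_count: int) -> float:
    """Probability that every gate in a circuit executes without error,
    assuming independent, identically distributed gate errors."""
    return gate_fidelity ** gate_count

# Compare a 100-gate circuit at fidelities around the milestone.
for fidelity in (0.99, 0.9951, 0.9995):
    p = circuit_success_probability(fidelity, 100)
    print(f"fidelity {fidelity:.4f} -> success over 100 gates: {p:.3f}")
```

Even small gains above 99 percent dramatically raise the odds that a modest circuit completes without a single error, which is what makes detecting and correcting the remaining rare errors, as Morello describes, a tractable proposition.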
U of Chicago: Using two different elements creates new possibilities in hybrid atomic quantum computers (UChicago.edu)

The Chicago Quantum Exchange recently announced research creating a hybrid array of neutral atoms from two different elements, broadening applications in quantum technology. IQT News shares here the University of Chicago's announcement of that research.

Qubits, the building blocks of quantum computers, can be made with many different technologies. One way to make a qubit is to trap a single neutral atom in place using a focused laser, a technique that won the Nobel Prize in 2018. For the first time, University of Chicago researchers have created a hybrid array of neutral atoms from two different elements, significantly broadening the system's potential applications in quantum technology. The research was funded in part by the NSF Quantum Leap Challenge Institute Hybrid Quantum Architectures and Networks (HQAN).

"There have been many examples of quantum technology that have taken a hybrid approach," said Hannes Bernien, lead researcher of the project and assistant professor in the University of Chicago's Pritzker School of Molecular Engineering. "But they have not been developed yet for these neutral atom platforms. We are very excited to see that our results have triggered a very positive response from the community, and that new protocols using our hybrid techniques are being developed."

"There have been quite a few milestone experiments over the last few years showing that atomic array platforms are extremely well suited for quantum simulation and also quantum computation," Bernien said. "But measurements on these systems tend to be destructive, since all the atoms have the same resonances. This new hybrid approach can be really useful in this case."

In a hybrid array made of atoms of two different elements, any atom's nearest neighbors can be atoms of the other element, with completely different frequencies. This makes it much easier for researchers to measure and manipulate a single atom without any interference from the atoms around it. It also allows researchers to sidestep a standard complication of atomic arrays: it is very difficult to hold an atom in one place for very long.

The hybrid array created by Bernien's group contains 512 laser traps: 256 loaded with cesium atoms and 256 with rubidium atoms. As quantum computers go, this is a lot of qubits: Google and IBM, whose quantum computers are made of superconducting circuits rather than trapped atoms, have only gotten up to about 130 qubits. Though Bernien's device is not yet a quantum computer, quantum computers made from atomic arrays are much easier to scale up, which could lead to some important new insights.

The hybrid nature of this array also opens the door to many applications that wouldn't be possible with a single species of atom. Since the two species are independently controllable, the atoms of one element can be used as quantum memory while the other makes quantum computations, taking on the respective roles of RAM and a CPU in a typical computer.

Sandra K. Helsel, Ph.D., has been researching and reporting on frontier technologies since 1990. She has her Ph.D. from the University of Arizona.
Marc Andreessen wrote his famous Wall Street Journal essay "Why Software Is Eating the World" in 2011. Today, his prediction—that software companies would take over large parts of the economy as various industries are disrupted by software—has largely come to pass. Software has indeed changed the game, with software and online-based services making it possible to build companies with very little infrastructure. Nowhere is this more apparent than with the tech stack—a new term that refers to the suite of software tools that a company uses. How important is the tech stack? Important enough that it’s already gone mainstream. The New York Times Magazine featured the rise of the tech stack in a recent issue. Writer John Herrman says: “Stack,” in technological terms, can mean a few different things, but the most relevant usage grew from the start-up world: A stack is a collection of different pieces of software that are being used together to accomplish a task. A smartphone’s software stack, for instance, could be described as a layered structure: There’s the low-level code that controls the device’s hardware, and then, higher up, its basic operating system, and then, even higher, the software you use to message a friend or play a game. An individual application’s stack might include the programming languages used to build it, the services used to connect it to other apps or the service that hosts it online; a “full stack” developer would be someone proficient at working with each layer of that system, from bottom to top. Understanding your company’s tech stack Every company has a tech stack. Even a one-person freelancer writing operation has a stack. Her stack might look like this: Google Docs for drafting and saving articles; Slack for chatting with editors; Wordpress for her portfolio and blog; MailChimp for her weekly newsletter; Skype to conduct interviews with clients. 
The combination of these tools is her tech stack, and by sharing it, she can not only get feedback from other freelance writers and gain insights into new ways to use the tools she’s already chosen, but she can also learn about new tools that she hasn’t tried yet. “The larger the business, the larger the tech stack,” writes Herrman. “The stack isn’t just a handy concept for visualising how technology works. For many companies, the organising logic of the software stack becomes inseparable from the logic of the business itself.” So what’s in your stack? A social network for tech stacks Your tech stack is ever evolving, and it’s helpful to see what tools other companies are using. Many people do this in forums, blogs, or even in private emails between colleagues. But in Silicon Valley, where stacks are at a premium, it is especially important to compare and share tech stacks at scale. StackShare is a startup where 100,000+ developers, engineers, VPEs, and CTOs from Silicon Valley's top companies, including Airbnb, Instacart, and Spotify, share and discuss their tech stacks. Any engineer, entrepreneur, or CTO can go to StackShare to see exactly what software combinations are powering growth at the fastest-growing companies in the U.S. Sharing tech stacks means companies can learn what tools their peers and competitors are using, while job-seekers can discover which tools are most in demand and used at their dream companies. It’s a social network for tech stacks, and has even been dubbed the LinkedIn for developers—a place where companies share, show off, and discuss the tools that make up their tech stacks. What you can learn from one company’s stellar stack Companies can save countless hours by learning what tech stacks their peers and competitors are using—instead of wasting resources on solutions that don't work, and reinventing the wheel.
For example, HotelTonight, the startup that connects users to same-day hotel deals, has shared its entire tech stack on the platform, and even went further when Jatinder Singh, its director of platform engineering, went on the StackShare podcast. We discussed a few of the tools that helped HotelTonight go from minimum viable product to full app launch in two months—and to plan an IPO in only 7 years. HotelTonight shows users a personalised selection of 15 hotels each time they open the app. By switching from Ruby and MySQL to Elasticsearch, HotelTonight cut down query time from 1 second to 15 milliseconds. When it was building its tech stack, HotelTonight discovered that the cache tool of the hour was Varnish. But many of its peers worked at Fastly, so HotelTonight adopted it—and cut down its servers by 4x. Most people use Datadog as a simple dashboard, but HotelTonight uses it for unified alerts—from load balancers to databases and caches. If anything is out of whack, Datadog alerts HotelTonight's designated person on call. Stacks for recruiting (and job searching) StackShare's data also lets you know what tools are the most popular. Founders and CTOs can use this data to choose the most popular tools that will make hiring a breeze, while job seekers can use the data to explore which tools are most in-demand so they can study up and learn the software that is most prevalent in the market. Job seekers can use Stack Match, StackShare’s job search tool, to choose what tools they would like to work with, what tools they don’t want to work with, specific roles they’re looking for, and their preferred location. They can also simply browse the stacks of companies they are interested in to get an idea of what kind of tools those companies are using. The future of the tech stack Software is the present and future, and tech stacks are constantly evolving.
New tools launch all the time, and as companies adopt new technologies, their stacks will reflect the changing industry. As software gets smarter, more tools will be developed to replace tasks that you do manually, making your tech stack more important with each day. The rise of the tech stack raises several interesting questions. In a software-driven world, what is the role of the technology office in leading business change? What does a company's—or a person's—tech stack say about them? How will the stack redefine the way people seek jobs and the way companies recruit? The most important thing that founders and business owners can do now is get a sense of their current stack, study up on their competitors’ tech stacks, and keep abreast of developments in software and tools. Because software is eating the world, and it’s best to make sure you don't get eaten, too. Yonas Beshawred, founder and CEO, StackShare
Access control is a fundamental component of data security that dictates who’s allowed to access and use company information and resources. Through authentication and authorization, access control policies make sure users are who they say they are and that they have appropriate access to company data. Access control can also be applied to limit physical access to campuses, buildings, rooms, and datacenters. Access control identifies users by verifying various login credentials, which can include usernames and passwords, PINs, biometric scans, and security tokens. Many access control systems also include multifactor authentication (MFA), a method that requires multiple authentication methods to verify a user’s identity. Once a user is authenticated, access control then authorizes the appropriate level of access and the allowed actions associated with that user’s credentials and IP address. There are four main types of access control. Organizations typically choose the method that makes the most sense based on their unique security and compliance requirements. The four access control models are discretionary access control (DAC), mandatory access control (MAC), role-based access control (RBAC), and attribute-based access control (ABAC). Access control keeps confidential information such as customer data, personally identifiable information, and intellectual property from falling into the wrong hands. It’s a key component of the modern zero trust security framework, which uses various mechanisms to continuously verify access to the company network. Without robust access control policies, organizations risk data leakage from both internal and external sources. Access control is particularly important for organizations with hybrid cloud and multi-cloud environments, where resources, apps, and data reside both on premises and in the cloud. Access control can provide these environments with more robust access security beyond single sign-on (SSO), and prevent unauthorized access from unmanaged and BYO devices.
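The authenticate-then-authorize sequence described above can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: the usernames, roles, and permissions are hypothetical, and a real system would store hashed passwords rather than plaintext.

```python
# Hypothetical role-based access control (RBAC) sketch -- illustrative only.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

USERS = {  # username -> (password, role); real systems hash passwords
    "alice": ("s3cret", "admin"),
    "bob":   ("hunter2", "viewer"),
}

def authenticate(username, password):
    """Step 1: verify the user is who they say they are."""
    record = USERS.get(username)
    return record is not None and record[0] == password

def authorize(username, action):
    """Step 2: check the requested action against the user's role."""
    role = USERS[username][1]
    return action in ROLE_PERMISSIONS.get(role, set())

print(authenticate("alice", "s3cret"))   # True
print(authorize("bob", "write"))         # False -- viewers can only read
```

In a real deployment the second step is where MFA, device posture, and IP-based checks described above would also be evaluated before access is granted.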
As organizations adopt hybrid work models, and as business apps move to the cloud, it’s important to protect against modern-day threats coming from the internet, usage of BYOD and unmanaged devices, and attacks looking to exploit apps and APIs. Citrix secure access solutions ensure applications are continually protected, no matter where people work or what devices they use.
Epithelial tissues act as coverings, controlling the movement of materials across their surface. Connective tissue binds the various parts of the body together, providing support and protection. Which type of tissue protects and supports? The correct answer is connective. The connective tissue protects and supports the body organs, binds organs together, stores energy reserves as fat,… What tissues support tissue? Connective tissue supports other tissues and binds them together (bone, blood, and lymph tissues). Epithelial tissue provides a covering (skin, the linings of the various passages inside the body). What type of tissue supports epithelium? The type of tissue that supports epithelium is known as basement membrane. Which is an example of a supporting connective tissue? Supportive connective tissue—bone and cartilage—provide structure and strength to the body and protect soft tissues. A few distinct cell types and densely packed fibers in a matrix characterize these tissues. What does connective tissue protect? Protection is another major function of connective tissue, in the form of fibrous capsules and bones that protect delicate organs and, of course, the skeletal system. Specialized cells in connective tissue defend the body from microorganisms that enter the body. Where is supporting connective tissue found? Supporting connective tissue comprises bone and cartilage. We will examine those tissues in greater detail in Lab 6 Bones & The Axial Skeleton. In both bone and cartilage, as in the different types of connective tissue proper, there are extracellular protein fibers embedded in a viscous ground substance. What type of tissue supports epithelium quizlet? Connective tissue supports the epithelium as well as many internal organs. The space between its components allows for organs to expand. Which tissue type provides support and mechanical protection? 
Supportive connective tissue—bone and cartilage—provide structure and strength to the body and protect soft tissues.
Vulnerabilities and hackers Many of today’s threats exploit software vulnerabilities in order to spread. Learn more about what vulnerabilities are, what the most common vulnerabilities are, and how to fix them. How to detect a hacker attack Hackers may try to break into your computer to gain access to your data or to use your computing resources for illegal activity. This section provides information on the signs and symptoms of a hacker attack. A brief history of hacking Computer systems have always been targeted by people seeking either to improve security or exploit loopholes. This timeline gives an overview of major events in the evolution of computing along with the evolution of hacking. Hackers and the law Government and law enforcement bodies around the world address the problem of cybercrime with legislation. Find out more about how these laws are used to convict hackers who gain illegal access to systems and data.
Artificial intelligence (AI), Machine Learning & Deep Learning are now major buzzwords when it comes to the career fields of the future. Here’s how you can study these, and the future scope and job opportunities in these avenues. Copyright by www.indiatoday.in What are Machine Learning & Deep Learning? The concept of artificial intelligence, in other words, the idea that a machine can learn and think for itself, has been around since the middle of the last century. The term was coined by a group of American scientists, John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, in a paper they presented at a conference held at Dartmouth College, USA in 1956. Fast-forward to 2021, and artificial intelligence and its subfield of machine learning are part of our everyday lives. Indeed, AI is used by a growing number of organisations and research centres worldwide, in a huge range of applications and products, from chatbots to healthcare, cybersecurity to automobiles. Machine Learning uses numerical and statistical approaches to encode learning into mathematical models, which are then used to make predictions on new data, situations and scenarios. The most cutting-edge branch of ML is Deep Learning, which is based on very deep and complex artificial neural networks, usually referred to as deep neural networks, which essentially try to emulate the human brain’s learning abilities. In deep learning, the first few layers of the network perform feature extraction in a series of stages, just as the brain seems to do. The level of complexity and abstraction of such features increases through the network, with the actual decisions taking place in the last few layers of the network structure.
Deep Learning is an extremely exciting development that has sparked an AI revolution in many aspects of our life, and is the key technology behind the recent spectacular developments in fields such as biomedical signal analysis, image recognition, driverless cars, speech processing and natural language processing. […] Read more: www.indiatoday.in
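The layered structure described above, with early layers extracting features and the last layers making the decision, can be illustrated with a toy feed-forward network. This is a minimal sketch in plain Python with untrained random weights, purely to show the forward pass through stacked layers; it is not a real deep learning framework.

```python
import math, random

random.seed(0)

def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, W, b):
    # One fully connected layer: output_j = sum_i v[i] * W[i][j] + b[j]
    return [sum(vi * wij for vi, wij in zip(v, col)) + bj
            for col, bj in zip(zip(*W), b)]

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

# Three layers: the first two extract features, the last one decides.
W1, b1 = rand_matrix(4, 8), [0.0] * 8
W2, b2 = rand_matrix(8, 8), [0.0] * 8
W3, b3 = rand_matrix(8, 3), [0.0] * 3

def forward(x):
    h1 = relu(dense(x, W1, b1))        # low-level features
    h2 = relu(dense(h1, W2, b2))       # higher-level features
    return softmax(dense(h2, W3, b3))  # decision layer: class probabilities

probs = forward([0.5, -1.2, 0.3, 0.9])
print(len(probs), round(sum(probs), 6))  # 3 1.0
```

Training, which adjusts the weights from data, is what turns such a structure into a useful model; the sketch only shows how the layered computation flows from raw input to a final decision.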
Have you ever examined the path on which you find yourself standing today along with all the antecedent steps that brought you to the current moment? The modern path of artificial intelligence is a mix of cognitive science, psychology and dreams. Is there a secret to how organisms collaborate? Methods to coordinate aren’t only found in science labs and the basements of research buildings. Universal coordination mechanisms can be found in many places, if we look. Environmental traces, mass collaboration and group interactions have much in common. The secret to complete organization Stigmergy derives from the Greek words στίγμα stigma meaning “mark or sign” and ἔργον ergon meaning “work or action.” Stigmergy is the universal coordination mechanism: a consensus mechanism of indirect coordination within an environment among agents or actions. While we don’t fully understand all the interactions of self-organizing organisms, the concept of self-organization is found in both robotics and social insects. Remy Chauvin (1956) conducted the earliest work on stigmergy-based coordination in the biological sciences. However, the foundation of stigmergy was envisioned by Pierre-Paul Grasse in 1967, making use of his 1950s research on termites. The idea is that an agent’s actions leave signs in the environment, signs that it and other agents sense and which determine and incite subsequent actions. Stigmergy is also used within artificial intelligence for the study of swarming patterns of independent actors that use signals to communicate. A better understanding of stigmergy and sociometry (a quantitative method for measuring social relationships) and group dynamics offers new insights into the world of multi-agent coordination, which is the essence of swarm intelligence. The essence of stigmergy is that traces left within an environment — the result of an action — stimulate the performance of a future action. 
This combined positive and negative feedback loop enhances a mutual awareness, fostering the coordination of activity without the need for planning, control and communication. Ants use pheromones. People use wikis. Wasps use secretions. These multi-agent coordination mechanisms function because agents exchange information within a dynamic environment. The agents modify their environment, which triggers a future response. When open-source systems blossom from five users to 50,000 users, we might find our answers buried in the evolution of group work. Stigmergic collaboration has four distinct principles: - Collaboration depends upon communication, and communication is a network phenomenon. - Collaboration is inherently composed of two primary components — social negotiation and creative output — without either of which collaboration cannot take place. - Collaboration in small groups (roughly 2 to 25) relies upon social negotiation to evolve and guide its process and creative output. - Collaboration in large groups (roughly 26 to n) is enabled by stigmergy. Collaboration with consensus Stigmergic collaboration is when agents or individuals work without explicit knowledge of others. Adding a block to a blockchain isn’t controlled by a central function; it’s organic. Editing or changing a wiki page relies on a shared pool of content for mass collaboration and consumption. Stigmergic interactions are coordinations of activities that, over time, use decentralized control. Primitive rules guide the orchestration of activity. There are no instructions, and there’s a self-awareness for actions and the sharing of information. How can stigmergic principles be used in your mobile designs? How does the communication and messaging of self-organizing systems improve your IT landscape? How do unstable systems evolve into stable states in which order and organization are the norms? You can apply these theories to your technological environment. 
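The trace-deposit-and-evaporation loop described above can be illustrated with a toy ant-trail simulation. All numbers here (deposit amounts, evaporation rate, path costs) are made up for illustration; the sketch just shows how positive feedback (deposits) plus negative feedback (evaporation) lets agents converge on the shorter of two paths with no central planner.

```python
import random

random.seed(1)

# Two candidate paths; path 0 is shorter, so each trip on it
# deposits proportionally more trace per unit of effort.
pheromone = [1.0, 1.0]
path_cost = [1, 2]
EVAPORATION = 0.9   # negative feedback: traces fade each step
DEPOSIT = 1.0       # positive feedback: traces reinforce choices

def choose_path():
    # Agents pick a path with probability proportional to its trace.
    r = random.uniform(0, sum(pheromone))
    return 0 if r < pheromone[0] else 1

for step in range(200):
    for _ in range(10):                         # ten agents act per step
        p = choose_path()
        pheromone[p] += DEPOSIT / path_cost[p]  # trace left in environment
    pheromone = [t * EVAPORATION for t in pheromone]

# The colony settles on the shorter path without planning or messaging.
print(pheromone[0] > pheromone[1])
```

No agent communicates with another directly; each only reads and modifies the shared environment, which is exactly the stigmergic mechanism the article describes.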
Innovators are using these theories to design interactions that don’t presently exist within conventional artificial intelligence environments. Future artificial intelligence systems will be designed with an awareness of stigmergic collaborations.
Contact closure is the term used to describe discrete alarms, digital inputs, or simply alarm inputs. Contact closures are the alarm points that can only be "ON or OFF", "opened or closed", or "Yes or No". Contact closures may be an "old thing", but you can't ignore them. These types of alarms are very common in many different remote sites where networks are located, where you'll find discrete alarms for a wide range of equipment and environmental conditions. Although contact closures are a very common type of alarm in remote monitoring systems, they will only be useful if they can reach you in a timely manner, before a condition escalates into a network outage. So, your system needs an effective method of transporting alarm data to your master station. Ethernet is now one of the most common data transmission methods in network monitoring, because connecting your devices via Ethernet over LAN brings you many benefits. Despite being a popular data transmission method, many network technicians still have many questions on how they can properly send contact closures over Ethernet. That's why, in this article, we'll dive into how the right technology allows the transmission of digital contact closures to a remote point using Ethernet. One of the most popular and efficient ways to send contact closures over Ethernet is using RTUs. Remote terminal units (RTUs) are electronic devices that can collect alarm information from remote sites and send it all to you or to a master station. Through their discrete inputs, RTUs can collect contact closures and convert them to information you can use. Discrete inputs allow RTUs to receive binary information from other devices or sensors. These inputs have two different pins that make the exchange of information possible. The first pin is wired into the device or sensor the RTU is monitoring, then the return is wired to the second pin.
This allows the RTU to know when there's the presence or absence of voltage across the wire. In other words, in the dry contact configuration, the input on the RTU is wired into a dry contact closure - meaning the circuit is closed but no electricity is provided. In this configuration, the first (output) pin of the discrete input has electric potential that can be found by the second (ground) pin when the contact relay output is closed. In "wet" configurations, the output device needs to emit a certain amount of electrical current/voltage. When the RTU collects contact closures through its discrete inputs, it can send this information to your master station - which will display the data in an easy-to-understand format. You can find RTUs designed to send information in whatever method you prefer, such as Ethernet. If you have LAN installed at your remote sites, all you have to do is plug a standard Ethernet cable into your RTU's standard port. This way, your linked devices are able to access and share data between themselves to communicate alarm data with each other. So, your master station will receive contact closure information from your RTU over Ethernet via LAN and will send notifications to you when alarms happen - the best masters will allow you to choose the notification method that works best for you, such as text message and email. Usually, RTUs will support both protocol communication and contact closures. This is a common hybrid strategy where SNMP, Modbus, or another protocol is used to output hundreds or thousands of detailed alarm points. This same RTU will then have a handful of contact closures to summarize alarm status as "Minor", "Major", "Critical", and "Status" severities. To have complete visibility over all your devices that support contact closures, you need to have an RTU that has the right amount of discrete inputs. But the number of discrete inputs your RTU should have will solely depend on your unique remote sites and network.
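As an illustration of how summary contact closures like the severity pins described above might be interpreted in software, here is a hypothetical sketch. The pin-to-severity mapping and the packed input register are invented for the example and not taken from any particular RTU.

```python
# Hypothetical RTU-style discrete inputs: each contact closure reads
# as either open (0) or closed (1). Mapping is illustrative only.
SEVERITY = {
    1: "Status",
    2: "Minor",
    3: "Major",
    4: "Critical",
}

def read_discrete_inputs(raw_register):
    """Unpack a register of contact states into per-pin booleans (pin 1 = LSB)."""
    return {pin: bool(raw_register >> (pin - 1) & 1) for pin in SEVERITY}

def active_alarms(raw_register):
    """List the severities whose contact closures are currently closed."""
    states = read_discrete_inputs(raw_register)
    return [SEVERITY[pin] for pin, closed in sorted(states.items()) if closed]

# Example: pins 2 and 4 closed -> binary 0b1010
print(active_alarms(0b1010))  # ['Minor', 'Critical']
```

A real master station would do this decoding for you and attach timestamps, site names, and notification rules; the sketch only shows how binary contact states become human-readable alarm information.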
Independent of that, there are three rules you need to keep in mind. The bottom line is that you should buy an RTU with enough capacity to handle all your equipment right now and in the future. But don't waste money on a large device that is overkill for what you need. Instead, get an RTU with 10% more capacity than you have now - this will be your backup when future growth happens. If you've decided that purchasing an RTU that can send your contact closures over Ethernet to you is something you need, then your next step is to find a competent device. So, let's take a look at an example of one. The NetGuardian 832A G5 has support for 32 discrete inputs, so you can monitor all your equipment that sends information over contact closures. Also, the 832A G5 supports not only Ethernet but two other methods of data transfer: dial-up and serial. This is important because it allows you to have alternate transmission methods, so if your Ethernet transport fails for some reason you won't lose visibility. If you are slowly transferring your network to Ethernet, having a device like the NetGuardian 832A is important to allow for smooth integration. In order to have an RTU that can efficiently transmit contact closures to you via Ethernet, you need to make an informed decision. Not all RTUs will support the capabilities that you need, such as support for Ethernet transport, a web browser interface that allows you to configure your settings, or even the best notification method for you. Because of that, it's important that you work with manufacturers that can provide you with a custom solution. When you buy an off-the-shelf product, you won't be able to get the specific features and capabilities that you need to efficiently keep an eye on your network. With off-the-shelf devices, you might end up deploying multiple RTUs of different models in an attempt to achieve the perfect-fit monitoring solution.
Keep in mind all the support, training, sparing, and purchasing hassles that grow as you add more and more different types of RTUs. The cost-effective way to handle this is to invest in a custom RTU that supports all the features you need in a compact device. At DPS, 80% of what we build is personalized products that meet the unique requirements of our clients - either by modifying our existing products or by manufacturing a completely new one. So, if you'd like a monitoring device that was designed and built with your network in mind, simply let us know. We can build to your specs. You need to see DPS gear in action. Get a live demo with our engineers. Have a specific question? Ask our team of expert engineers and get a specific answer! Sign up for the next DPS Factory Training! Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring. Reserve Your Seat Today
Unarchiving is the process of restoring files from an archive by decompressing. Archive files are used to collect multiple data files together into a single file for easier portability and storage, or simply to compress files to use less storage space. VirusBarrier uses the Unarchiving process to scan archived files for malware and viruses. Scanning archives can take a very long time, depending on the size of the file(s) being scanned and the number of archived files stored on the device. When you see Unarchiving during a scan, a file or multiple files are going through the following process: - Files are decompressed (Unarchiving) - Saved in a temporary location - Scanned for infection - Compressed again (Archived) - Deleted from the temporary location. Note: The Archive timeout setting lets you tell VirusBarrier to stop scanning archives that take more than a certain amount of time to decompress and scan. By default, this is set to 60 seconds. However, any files that have been uncompressed before the end of this timeout will be scanned.
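The decompress-scan-cleanup cycle above can be sketched roughly as follows. This is not Intego's code: `looks_infected` is a stand-in for a real signature scan, and the timeout simply mirrors the default Archive timeout described above.

```python
import tempfile, time, zipfile
from pathlib import Path

ARCHIVE_TIMEOUT = 60  # seconds, mirroring the default described above

def looks_infected(path: Path) -> bool:
    # Stand-in for a real signature scan (here, a trivial byte-string match).
    return b"EICAR" in path.read_bytes()

def scan_archive(archive_path, timeout=ARCHIVE_TIMEOUT):
    """Decompress to a temporary location, scan each file, then clean up."""
    deadline = time.monotonic() + timeout
    infected = []
    with tempfile.TemporaryDirectory() as tmp:   # temporary location
        with zipfile.ZipFile(archive_path) as zf:
            for name in zf.namelist():
                if time.monotonic() > deadline:
                    break                        # archive timeout reached
                extracted = Path(zf.extract(name, tmp))
                if extracted.is_file() and looks_infected(extracted):
                    infected.append(name)
    return infected          # temp copies are deleted when the block exits

# Build a tiny archive to demonstrate the process:
tmp = Path(tempfile.mkdtemp())
archive = tmp / "sample.zip"
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("clean.txt", "hello")
    zf.writestr("bad.txt", "EICAR test content")
print(scan_archive(archive))  # ['bad.txt']
```

Note how any files extracted before the deadline still get scanned, which matches the behavior of the Archive timeout setting described in the note above.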
Home Ethernet Wiring Guide: How to Get a Wired Home Network? When moving to a new house, most people are likely to choose Wi-Fi for their network, as laying cable in the house seems too complicated and makes the rooms messy. But a wired network is faster and more secure for internet access, file sharing, media streaming, online gaming and other things. So comparatively, the wired network is better than the wireless. Then how do you wire Ethernet cable? Check the following paragraphs for your home Ethernet wiring. Things to Know Before Home Ethernet Wiring The basis of your wired home network will be Ethernet. This word has a very specific technical meaning, but in common use, it’s simply the technology behind 99% of computer networks. Most computers now come already equipped with an Ethernet adapter – it’s the squarish hole that accepts Ethernet cables. For wiring Ethernet cable, the broadband connection, usually cable, DSL, or something else, will first go through a device typically called a modem. The modem’s job is to convert the broadband signal to Ethernet. You’ll connect that Ethernet from your broadband modem to a broadband router. The router, as its name implies, is used to “route” information between computers on your home network and between those computers and the broadband connection to the Internet. Each of your computers already has an Ethernet adapter. An Ethernet cable will run from each computer to the router and another cable will connect the router to the modem. Home Ethernet Wiring Diagram Which Network Ethernet Cable Should You Choose for Your Home Ethernet Wiring? From the passage above we know that the wired home network connection is based on Ethernet cable, so next you'll have to decide what type of cable you want to use. Cat5e, Cat6 or Cat7 Network Ethernet Cable There are Cat5e, Cat6 and Cat7 Ethernet cables, among which Cat6 cable is highly recommended: it is faster than Cat5e and cheaper than Cat7.
Wiring your house will take a long time and it's always better to do it right the first time. It is suggested to calculate the cable length before purchasing, to avoid wasting material, and always keep in mind to cut each cable a bit longer than you actually need. UTP or STP Ethernet Cable If you have made your decision on the cable category, then you will have to consider which type of cable you need - UTP or STP? UTP stands for unshielded twisted pair while STP stands for shielded twisted pair. Shielded is much more expensive because it adds a layer of protection on the outside of the cables. For home use, the unshielded is completely fine. Stranded or Solid Ethernet Cable Next, there is the option of stranded or solid core wire. This basically means that the inside of your wire is made up of either braided strands or one solid piece. What this comes down to is how much maneuvering you will need to do with the wire. A solid piece of wire is much easier to fish through tight spaces because it is rigid. The drawback to the solid core is that it is harder to connect to the wall outlet or plastic jack. Stranded wire is easier to connect to a wall outlet, but it's pretty flimsy if you're trying to push it through crevices. Now that you've made a decision about the Ethernet cable type you will use, you need to know how to wire it. Usually, this job includes installing the wall plates, running the cable, and connecting the cables to jacks. Before the installation, remember to check that you have all the necessary equipment to do the job; that way you won't have to stop in the middle of the process because something is missing. Basic tools are listed in the table below for your reference.
| Cable Assemblies | Network Tools & Testers |
| Inline Couplers | Keystone Wall Plates |
| Keystone Jacks | Cable Crimping Tools |
| Boot Covers | Punch Down Tool |

Wall Plates Installation Look at your sketch and find where to install the wall plates. First, line up and measure the size of the wall plate. Then draw the outlines on the wall to prepare for cutting the hole, which is the most difficult part of this process. Use a stud finder beforehand to make sure that you don't hit a stud. The next step is to cut the hole. In this step, just leave the wall plates off. Network Cable Installation Before running your cable, measure the cable length needed for each run. You can measure from floor plans, run one cable first, etc. If you run one cable to each room from the distribution room, gently pull it out and make the other cable runs like it. Then clear the path in the walls and drill holes. Once you have drilled the holes you can string out the cable and ensure no extra cable is tucked in the wall. After that, you can label the cables on both ends and measure the exact cable length. Remember to leave spare cable for stripping and crimping. Connect the Cable to Jacks Now you need to wire the cables. Strip about one inch of the outer jacket off the cable and push the wires into the keystone jack to match the color code marked on it (T568A or T568B standards). Punch down the cables to the keystone jacks (or patch panel) with a punch down tool. After you have all the cables connected, you can click the jacks into the wall plates. At last, fix the wall plate on the wall with the supplied screws. Test Your Wired Home Network Once all the cables are wired, test the network with a network cable tester. If the LEDs on the tester light up, the Ethernet plug is connected correctly. If not, the plug is not wired right and you should recheck it. Once everything tests out, you can connect to the network.
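The T568A and T568B color codes mentioned in the punch-down step can be written out as a small lookup. These are the standard RJ45 pin assignments, and the check at the end confirms that the two standards differ only by swapping the orange and green pairs; use the same standard on both ends for a straight-through cable.

```python
# Standard RJ45 pin-to-wire color assignments (pin 1 through pin 8).
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

def wiring(standard):
    """Map pin number (1-8) to wire color for the given standard."""
    return {pin: color for pin, color in enumerate(standard, start=1)}

# The two standards differ only in that the green and orange pairs
# trade places (pins 1/2 swap with pins 3/6).
swap = {"green": "orange", "orange": "green",
        "white/green": "white/orange", "white/orange": "white/green"}
print(all(T568B[i] == swap.get(c, c) for i, c in enumerate(T568A)))  # True
```

Laminating a printout of this table near your punch-down work area saves repeated trips to the spec while terminating jacks.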
Building a home network is not difficult, and this article has walked through the detailed steps of home Ethernet wiring. FS provides Cat5e, Cat6, and Cat6a Ethernet cables in many color and length options. The snagless boot design prevents unwanted cable snags during installation and provides extra strain relief. Custom service is also available. For more details, visit www.fs.com or contact us at [email protected].
What Is Hacking?
GRIDINSOFT TEAM

Hacking is the process of breaking into a particular system by methods that were not foreseen by the person who designed that system. By "system", I mean any website, database, or computer network - anything with a strict structure and specific protection mechanisms. Hackers try to get access to such a network or database to obtain valuable information, for sabotage, or just for fun. Sometimes, large corporations hire specially trained people who are paid to find possible breaches in a system's security. Then, by trying to hack into the computer network, they figure out which elements are vulnerable to potential attacks. Sometimes, hackers do this without any such offer, reporting to the company the breaches they found. In that case, with roughly a 50/50 chance, they will either be granted a sum of money or reported to the FBI for committing a cyberattack.

How is hacking conducted?

Hacking can rarely be performed without special tools. It is possible to break into something poorly protected, especially if you have a powerful PC configuration. Even the simplest hacking method - brute force - requires a powerful GPU if you want the operation to finish quickly: for example, a given brute-force job might take about 4 hours on a GTX 1080 and about half an hour on an RTX 2080. Hardware is not the only thing hackers need. Most hacks are conducted with software designed to break into someone's system, or with legitimate tools whose functionality lets the hacker perform specific tasks. Networking applications that enumerate all open ports on a particular network, or tools for FTP/SFTP connections, are perfect examples of the latter category. These tools were designed for legitimate use, but hackers discovered they are helpful for malicious tasks as well.

What are hackers searching for?
Experienced network engineers know which ports in any network are vulnerable, so they usually close these breaches at the network-establishment stage. However, things differ when the network is designed by a non-professional. When such a sensitive thing is set up by someone who checks the manual for each step, it is easy to end up with many vulnerabilities. What can hackers do through open, vulnerable ports? Accessing a whole network or server through an exposed port is often effortless. Hackers scan the web for open ports left exposed, then start brute-forcing credentials, attempting to log into the network as administrators. In case of success, it is easy to imagine what they can do - from infecting the network with ransomware to destroying it and deleting all files on the server. Some hackers are not aiming at malware injection or vandalism. Instead, these crooks target the sensitive data stored on the server, for personal use or for sale. This category of hackers usually chooses servers as their primary targets, since servers hold a lot of different information. After a successful hack, the cyber burglar attempts to sell the data on the Darknet. Sometimes attacks targeting valuable data are complemented with malware injection - for an additional ransom payment.

What is their motivation?

The primary motivation of any hacker is money and, sometimes, fame. Of course, it is a fairly stupid idea to hack the server of a specific corporation just to become famous (and to be captured by cyber police). That's why most hackers aim at the most valuable thing - information. After a successful attack, they may receive a lot of money as a ransom to avoid publishing the stolen data, or by selling that information on the Darknet. As mentioned at the beginning of this post, corporations sometimes hire hackers to test their security systems. In these cases, they are paid by the companies as if they were regular employees.
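The open-port exposure described above is easy to check from the defender's side. Below is a minimal sketch of a TCP port probe using only Python's standard library - the host and port list are illustrative, and you should only ever run this against machines you are authorized to test:

```python
# Minimal TCP port probe of the kind administrators use to audit
# their own hosts. connect_ex returns 0 when the connection succeeds,
# i.e. when something is listening on that port.
import socket

def probe(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Common service ports on the local machine
print(probe("127.0.0.1", [22, 80, 443, 3389]))
```

Any port this reports as open on an internet-facing address is exactly the kind of foothold the attackers described above scan for.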
In real life, such hired testers may use any tool they want - no one can restrict hackers from using something practical and brutal. Such testing is quite effective, especially when the hired hackers have extensive practical experience. Finally, some hackers act in someone else's interest. For example, they may hunt for private data or interfere with governmental systems. Hackers rarely choose such targets on their own - the consequences are too dangerous, and selling the leaked information may attract the attention of the authorities. Meddling with election results, for example, generates no money for a stand-alone hacker: if no one pays for it, there is no profit, only massive attention from the likes of the FBI.

What is a Black Hat Hacker?

Black hats are hackers who maliciously break into computer networks. They may also use malware that steals passwords, destroys files, steals credit card details, and so on. Like all malicious actors, they are driven by revenge, financial gain, or the simple desire to cause chaos.

How Does It Work?

Black hat hackers often operate as organizations, with suppliers and partners who produce malware and deliver it to other malicious groups for wider distribution. Some operate remotely, calling users and posing as well-known companies. They then advertise some product - a program they want you to install - thereby getting remote access to your computer. From there, they take everything they need from the PC: collecting data, stealing passwords and bank credentials, launching attacks from it, and more. Sometimes they also withdraw money in the process. There are also automated hacks.
For example, bots are created to find unprotected computers via phishing attacks, links to malicious sites, and all kinds of malware. Tracking black hat hackers is not easy, because they leave very little evidence. Sometimes law enforcement agencies manage to shut down their website in one country, but they simply continue their activities in another.

White Hat Hacker Definition

White hats are called "good hackers" or "ethical hackers". This type of hacker is the exact opposite of a black hat: when they break into a computer network or system, they advise the owner on how to fix the security bugs and deficiencies they found.

How Do White Hat Hackers Work?

The hacking methods of white and black hats are the same, but white hats obtain the device owner's permission first, which makes their actions legitimate. They do not exploit the vulnerabilities they find; they work with users and network operators to help fix the problem before anyone else discovers it. White hat methods and tactics include the following:
- Social engineering. Deception and manipulation focused on the human element, used to test whether users can be tricked into sharing account passwords, bank details, or other confidential data.
- Penetration testing. Structured testing that uncovers vulnerabilities and weaknesses in a security system so they can be fixed.
- Reconnaissance and research. Searching for weaknesses in physical or IT infrastructure. This method does not involve breaking into the system; instead, it looks for ways the security controls could be bypassed.
- Programming. White hats program baits (honeypots) for cybercriminals, to gather information about them or to distract them.
- Using a variety of digital and physical tools. Hackers come equipped with hardware that lets them plant test malware and bots to check whether they can gain access to servers or networks.
Hackers train diligently in all of the above methods and are rewarded for their skills and knowledge; all their hacking work is paid. It is a game with two sides, one of which loses and the other wins, and that is what motivates the players.

What is a Gray Hat Hacker?

The gray hat sits somewhere between the white hat and the black hat, and its cracking methods are a mixture of the two. How does it work? Gray hats look for vulnerabilities, then notify the owner of what they find, often charging a fee to fix the problem. Their initial intrusions into servers and networks occur without the owner's knowledge, so their actions are not strictly legal - but gray hat hackers believe they are helping these users. Why do they do it? Sometimes to become known and contribute to cybersecurity, sometimes for their own benefit.

How Does a Gray Hat Hacker Work?

Imagine that a hacker has broken into a user's computer. The next step will be to ask the system administrator to hire one of the hacker's friends to fix the problem - naturally, for a fee. However, prosecutions have somewhat discouraged this way of working.

How to Protect Yourself From Hackers Nowadays?

Here are ways to help protect yourself from the various types of hackers:
- Use unique, complex passwords. Strong passwords are one of the most important defenses against cybercriminals. Choose a password that includes capital letters, digits, and symbols; the complexity of the combination makes the account much harder to break into. Don't share your passwords with anyone, don't write them down in plain sight, and don't make them predictable.
- Never click on links sent in unsolicited emails. Don't click everything you see. Such links can carry malware that infiltrates and damages your PC. That is why you should pay attention to all the signs that can warn you that your computer is infected.
- Enable two-factor authentication.
Two-factor authentication adds extra security to your PC and your network: when you log into your account, it asks you to confirm the sign-in with a code sent to your mobile number or to a backup email address. All this reduces the risk of a break-in.
- Be careful when using public Wi-Fi networks. These networks may be unprotected. If you get an alert that a public Wi-Fi network is unsecured, consider not connecting; if you must, use a VPN. Check out our guide on the topic "How to secure your Wi-Fi network?".
- Deactivate the autofill option. Convenient, yes - but also dangerous, because a hacker who gets into your PC can use this feature too.
- Choose apps wisely. Install apps only from trusted sources, update them on time, delete old ones, and don't forget antivirus software.
- Trace or erase. Put a tracking program on your phone so that, if it goes missing, you know where it is. You can also set a limit on the number of login attempts.
- Install trusted cybersecurity software across all your devices. Good antivirus software blocks viruses and malware, keeping your data and your family's data safe.
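The first tip in the list above - unique, complex passwords - is easy to automate. A small sketch using Python's `secrets` module (the length and character rules here are illustrative choices, not a universal policy):

```python
# Generate a strong random password that satisfies the advice above:
# capital letters, digits, and symbols, produced by a cryptographically
# secure random source rather than by hand.
import secrets
import string

def make_password(length=16):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates containing all three character classes
        if (any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(make_password())
```

Because each password is generated fresh and never reused, a breach of one account does not expose any other.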
What is 5G: All You Need to Know
Technology - December 1, 2020

5G has been in the news for quite some time now. People envisage 5G as the next breakthrough in cellular communication technology, one that can deliver ultrafast mobile carrier speeds. As the name suggests, 5G is the fifth-generation mobile network system, the successor to today's 4G LTE networks. 5G has been created to meet the requirements of large data volumes coupled with strong connectivity, and the scope of connectivity has now expanded from cellphones to the devices connected by the Internet of Things (IoT). 5G will initially operate alongside 4G before evolving to operate alone as its speed and coverage expand. Latency is the time taken for devices to respond to one another over the wireless network: 3G networks had a response time of about 100 milliseconds, 4G is about 30 milliseconds, and once operational, the response time of 5G will be as low as 1 millisecond. This is going to be a game changer in the world of connected applications. In many countries, 5G services have already taken root, although widespread availability of 5G is only expected by 2025. The first applications using 5G services - mobile phones, tablets, wireless modems, and so on - have already been launched. The major benefits of 5G devices include faster access, download, and streaming speeds, enhanced computing power, and low latency that lets devices connect to networks instantly.

What will 5G Enable?

5G can enable immediate connectivity to billions of devices connected via the Internet of Things (IoT). The fifth-generation network has the potential to provide the speed, low latency, and connectivity to enable a new generation of applications, services, and business opportunities.

a.
For societies, 5G can link billions of devices in smart cities, schools, and homes, providing a safer and more efficient place to live.

b. For businesses, IoT powered by 5G can provide an abundance of data, allowing them to gain better insights into their operations. Key business decisions are driven by data, and IoT will enable cost savings, improved customer experience, and long-term growth.

c. Advanced emerging technologies such as augmented reality and virtual reality will expand their reach by providing intuitive connected experiences. 5G and VR can let you watch a live sports match, inspect real estate, or tour any city in the world, all with the feeling of actually being there.

Machine-to-Machine (M2M) communications - also known as the Internet of Things (IoT): connecting billions of devices without human intervention, at an enormous scale. This has the potential to transform modern industrial processes and applications.

Low-latency communications - real-time control of robotics, industrial devices, home appliances, safety systems, and so on. Low-latency communications also make remote medical care and treatment possible.

Enhanced mobile broadband - faster data speeds and greater capacity. New applications will include indoor fixed wireless internet and outdoor broadcast systems, eliminating the need for broadcast vans.

Overall, 5G promises to bring greater connectivity for people on the move.

The 5G system: How does it work?

Initially, operators will integrate 5G networks with existing 4G networks so that a continuous connection can be ensured. Let us see how the 5G system works. A mobile network consists of two key components: the Radio Access Network and the Core Network.

Radio Access Network: This includes facilities such as small cells, towers, and masts, connecting users and wireless devices to the main core network. A major feature of 5G networks will be small cells, particularly those running at the new millimeter wave (mmWave) frequencies.
At these frequencies the connection range is very short, so to keep the connection continuous, small cells will be deployed in clusters, with a density that depends on where users need coverage. 5G macro cells will use MIMO (multiple input, multiple output) antennas, which consist of many elements or connections to send and receive data, so that connections stay responsive and high throughput is maintained. MIMO antennas generally consist of a large number of antenna elements - they are often referred to as "massive MIMO" - yet their size is similar to existing 3G and 4G base station antennas.

Core Network: This comprises the mobile exchange and data network that handles all mobile voice, data, and Internet connections. For 5G, the core network will be redesigned to integrate better with the internet and the cloud, including servers distributed across the network, which improves response time and thus latency. Many of the cutting-edge features of 5G, such as network virtualization and slicing, will be managed in the core network.

Network slicing is a function that enables segmentation of the network for a specific industry, business, or application. For example, emergency services could operate on a network slice independently of other users. As for virtualization, Network Function Virtualization (NFV) allows network functions to be started at any desired location within the vendor's cloud platform. Network functions that used to run on specialized hardware can now operate on virtual machines. NFV is vital in delivering the speed, efficiency, and agility needed to support new business applications and technologies on a 5G core.

When a 5G link is established, the device will connect to both the 4G and 5G networks.
The 4G network provides control signaling, while the 5G network enables a fast data connection by complementing the existing 4G capacity. Where 5G coverage is inadequate, the data is carried on the 4G network, ensuring a continuous connection. In short, the 5G network complements the existing 4G network.

The 5G Advantage: Key Attributes

5G networks are planned to work in conjunction with 4G networks using a range of macro and small cells and dedicated in-building systems. Small cells are mini base stations designed for localized coverage (10-100 m), providing in-fill for a larger macro network. Small cells are indispensable for 5G networks, as the mmWave frequencies have a limited connection range.

Increased Spectrum: In several countries the initial 5G frequency bands are below 6 GHz (mainly in the 3.3-3.8 GHz range). Additional mobile spectrum above 6 GHz, including the 26-28 GHz bands, will deliver enhanced capacity compared to current network technologies. The extra spectrum and enhanced capacity will support more users, more data, and quicker connections. Existing low-band spectrum is also expected to be reused for 5G in the future, as usage of legacy networks declines. The new spectrum in the mmWave band will facilitate localized coverage, since it only operates over short distances. Future 5G deployments may leverage mmWave frequencies in bands up to 86 GHz, mobile spectrum across the 3-100 GHz radio frequency range, and new 5G spectrum above 6 GHz.

The physical size of the massive 5G MIMO antennas will be similar to 4G; however, at higher frequencies the individual antenna elements are smaller, allowing a greater number of elements in the same physical enclosure. 5G user equipment, including cell phones and other devices, will also have MIMO antenna technology built in for the mmWave frequencies, and both 4G sectors and 5G base stations will have multi-element massive MIMO antenna arrays.
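The short range of mmWave cells noted above follows directly from free-space path loss, which grows with frequency. A quick check in Python, comparing a mid-band carrier against a mmWave carrier at the same distance (the 3.5 GHz / 28 GHz / 100 m figures are illustrative):

```python
# Free-space path loss in dB for distance in metres and frequency in Hz:
# FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55
import math

def fspl_db(distance_m, freq_hz):
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

for f_ghz in (3.5, 28):
    print(f"{f_ghz} GHz at 100 m: {fspl_db(100, f_ghz * 1e9):.1f} dB")
```

Going from 3.5 GHz to 28 GHz costs about 18 dB of extra path loss at the same distance, which is why mmWave coverage relies on dense clusters of small cells rather than a few large macro sites.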
The 5G base station antenna is also expected to be similar to a 4G base station antenna in physical size.

MIMO - Beam Steering: Beam steering is a technology that lets the MIMO base station antennas direct the radio signal toward users and devices. It uses advanced signal-processing algorithms to determine the best path for the radio signal to reach the user, which improves efficiency by reducing interference (unwanted radio signals).

Lower Latency: Lower latency with 5G is realized through important advances in mobile technology and network architecture. Approximate response times:
- 4G LTE systems: around 30 milliseconds
- 5G enhanced mobile broadband: a few milliseconds
- 5G URLLC (Ultra Reliable Low Latency Communications) systems: as low as 1 millisecond

To deliver low latency, significant changes are required in both the Core Network and the Radio Access Network (RAN) of the 5G mobile network architecture. In the core, the key design change is to move data closer to the end user and shorten the path between devices for critical applications. A good example is video streaming services such as Netflix, which store a copy, or "cache", of popular content on local servers so users can access it quickly. The Radio Access Network, in turn, must be made highly flexible and configurable so it can support the various service types that the 5G system promotes. Minimizing time delays requires low latency and high reliability over the air interface; achieving that reliability calls for shorter TTIs (transmission time intervals) together with robustness and coding improvements. Implementing a virtual, dynamic, and configurable RAN enables the network to operate at very low latency and high throughput.
It also allows the cellular network to adjust to changes in carrier traffic, network faults, and new topology requirements. 5G is the next breakthrough in mobile cellular technology. In addition to faster connections and greater capacity, a prominent advantage of 5G is its fast response time. As for architecture and design, covered prominently in this blog, the new 5G architecture will initially exist as a 4G/5G split RAN, where 5G carries the user plane and 4G carries the control plane. This implies a separation between general-purpose and advanced network hardware: the functionality of the general-purpose hardware or nodes is appropriate for network functions virtualization (NFV), while the advanced, specialized hardware in the RAN will be automatically configurable.
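The latency figures quoted earlier in the article (roughly 100 ms for 3G, 30 ms for 4G, and as low as 1 ms for 5G) translate directly into how many synchronous request-response cycles per second an application could sustain. A back-of-the-envelope sketch:

```python
# Best-case round trips per second if network latency were the only
# delay (it never is - this is an illustration, not a benchmark).
LATENCY_MS = {"3G": 100, "4G": 30, "5G (URLLC)": 1}

for generation, ms in LATENCY_MS.items():
    cycles_per_second = 1000 / ms
    print(f"{generation}: {ms} ms latency -> up to "
          f"{cycles_per_second:.0f} cycles/s")
```

This is why real-time control of robotics and remote medical procedures, mentioned above as 5G use cases, are impractical at 3G or 4G response times but become feasible at 1 ms.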
Eighteen years into the 21st century, and one of the most important systems contributing to our economy and quality of life is stuck in the past. Transportation infrastructure, which delivers us from point A to point B and back, has yet to catch up with the digital revolution. It does not have to be that way. The advent of the Internet of Things (IoT) and cloud computing makes the time right to digitize crucial maintenance functions. A technology-based solution is safer and will save time and resources. A Time article on how technology can better help maintain America's infrastructure calls out the Washington, D.C. subway maintenance catastrophe as an illustration of the need for change. The technology exists to revolutionize transportation maintenance, and that means creating a high level of integration among data management, analysis, and deployment environments. It is not a leap to say this technology can help build a crash-free system that is always in service. Previously, transit agencies relied solely on paper-based tracking procedures, using forms and spreadsheets to monitor and manage critical assets. That is not sustainable, especially for federally funded agencies that typically store seven years' worth of data. Paper-based systems are also inefficient and costly: trying to find a specific document quickly for a safety inspection or funding review can be nearly impossible. The manual process is also susceptible to inaccuracy and risk. Without a system in place to unify how data is recorded and analyzed, each inspector can have his or her own way of describing things. The resulting information is open to interpretation, enabling systematic irregularities and human error.

Saying Goodbye to Silos

There has been exciting innovation in using wireless sensors attached to crucial parts of assets, including engines, brakes, and batteries. Those sensors then feed performance information directly into a centralized system.
Thresholds and parameters are preset, while APIs collect and process the data into workable, actionable information. Aside from eliminating error-prone and time-consuming manual processes, deploying this system in the cloud streamlines and integrates information across the organization. This real-time data helps not only to predict maintenance needs, but to pinpoint them, which means perfectly working assets are no longer removed from the system unnecessarily. With intelligent, real-time data, maintenance issues are addressed only when there is an actual problem or a threshold has been crossed, reducing mechanic time and parts purchases. Lastly, this optimizes traveler communication: by monitoring the state of vehicles and scheduling maintenance accordingly, agencies can keep riders informed about transport schedules and locations. That is what it comes down to: living in a connected society, where immediacy and information access are constant. Technology is here to bring our aging infrastructure systems into the digital world, and our antiquated maintenance systems are a smart place to start.

Kevin Price, Technical Product Evangelist and Product Strategist, Infor EAM
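The preset-threshold flow described above can be sketched in a few lines: sensor readings arrive, and a work order is raised only when a limit is crossed. The metric names and threshold values here are illustrative assumptions, not Infor EAM configuration:

```python
# Threshold-based maintenance alerting: a reading triggers a work order
# only when it crosses its preset limit; healthy readings trigger nothing,
# so working parts are not pulled from service unnecessarily.
THRESHOLDS = {
    "brake_pad_mm": ("min", 4.0),    # replace pads below 4 mm
    "engine_temp_c": ("max", 110.0), # inspect engine above 110 C
}

def check_reading(asset_id, metric, value):
    kind, limit = THRESHOLDS[metric]
    breached = value < limit if kind == "min" else value > limit
    if breached:
        return {"asset": asset_id, "metric": metric,
                "value": value, "action": "schedule_maintenance"}
    return None  # healthy reading: no work order

print(check_reading("bus-42", "brake_pad_mm", 3.2))
print(check_reading("bus-42", "engine_temp_c", 95.0))
```

In a real deployment the readings would stream in from the wireless sensors mentioned above, and the returned work orders would feed the agency's centralized maintenance system.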
British Prime Minister Theresa May is to announce plans to commit millions of pounds to a new artificial intelligence strategy for early-stage cancer diagnosis. The aim is to reduce deaths from prostate, ovarian, lung, and bowel cancer by 10 percent within 15 years, saving an estimated 22,000 lives a year. In the first of a new series of speeches on the UK's industrial strategy, the prime minister will say, "Late diagnosis of otherwise treatable illnesses is one of the biggest causes of avoidable deaths. The development of smart technologies to analyse great quantities of data quickly, and with a higher degree of accuracy than is possible by human beings, opens up a whole new field of medical research and gives us a new weapon in our armoury in the fight against disease." May is expected to ask the technology industry and cancer charities to work with the NHS to develop new algorithms that can use a mix of patient data, genetic markers, and lifestyle information to warn GPs of the risk of common cancers developing.

NHS Trust uses AI to predict heart attacks

News of the government's strategy comes on the same day that the Royal Liverpool and Broadgreen University Hospitals NHS Trust announced a new AI programme to improve the treatment of patients who have had a heart attack, in a project that could see wider use of AI to inform treatment decisions across the organisation. The Trust, which was named the NHS' Global Digital Exemplar of the year, will use technology from healthcare specialist Deontics. The new system will enable doctors on the Trust's acute cardiac unit (ACU) to access AI-driven, evidence-based clinical treatment recommendations tailored to each patient's needs. According to a joint announcement this morning, Deontics uses cognitive computing and healthcare-specific logic to act like a "clinical sat-nav" for doctors.
This enables them to make treatment decisions that are dynamically informed by relevant standards and guidelines, such as those issued by NICE, and the latest good practice from published papers and other reputable sources. These are then applied directly to the needs of individual patients. “Some of our most frail and elderly patients with acute coronary syndrome are getting some of our most powerful drugs,” explained the Trust’s chief clinical information officer, Mike Fisher. “Using AI-technology means we should reduce the potential for overprescribing drugs for patients at lower levels of risk. “Instead of giving some patients the maximum treatment, we can make sure patients are given the most appropriate treatment.” These latest AI announcements arrive in the same week that GDPR comes into force across the EU – in Britain under the auspices of the Data Protection Act. In April, a House of Lords report warned of the need to give individuals greater control over their own data, and urged regulators to prevent the monopolisation of data by companies such as Google and Facebook. Also in April, the government announced a new Sector Deal for Artificial Intelligence, along with a comprehensive review of the health service, with the long-term strategy of training NHS workers in technologies such as AI and robotics. Dr Eric Topol, executive VP of private US healthcare research group, Scripps Research Institute, is leading the NHS review, looking at opportunities to train existing staff, while also considering the impact that AI, robotics, genomics, and big data analysis may have on skills. Topol said last month that these technologies “will have an enormous impact on improving the efficiency and precision in healthcare. “Our review will focus on the extraordinary opportunities to leverage these technologies for the healthcare workforce and power a sustainable and vibrant NHS,” he said. 
Internet of Business says

The NHS policy announcement this morning is a timely and sensible move by the government, given rising evidence that AI's ability to spot patterns in big data can help to identify early signs of disease, or warn that certain patients may be susceptible to it. Prevention and early treatment could save thousands of lives while cutting the enormous costs of treating people at later stages of serious diseases. However, the news is also guaranteed to cause controversy at a time when the NHS is perceived as being starved of funds and undergoing stealth privatisation, and in the wake of rising concerns about technology companies profiting from national datasets. For example, last year an independent panel found that a deal between Google's DeepMind and the NHS' Royal Free Hospital Trust to use 1.6 million patient records to identify patients at risk of kidney disease was illegal.
General Data Protection Regulation (GDPR)

The European Union has adopted the new General Data Protection Regulation (GDPR). This regulation, which replaces Directive 95/46/EC (the EU's previous data protection directive), is designed to increase data protection. The GDPR, adopted in April 2016, is set to take effect in May 2018. It will bring greater uniformity to the handling of sensitive data throughout the EU, ensure that countries across the EU have more standardized laws, and better protect personal data processed for non-personal purposes. The foundational theme of the GDPR, as it was throughout Directive 95, is the precept that a living person has a fundamental right to his or her own data. As under the previous rules, personal data is any data that - directly or indirectly - identifies, or can be used to identify, a living individual by any reasonably likely means. As part of its effort to create uniformity across the EU, the GDPR is automatically effective in EU member states without requiring adoption by member states' own legislatures. It does carve out certain exceptions for member states to determine data handling in specific circumstances, such as law enforcement and established public interest.

What does GDPR mean to you?

Organizations that have locations in Europe must comply with GDPR. However, this regulation does not affect only European companies: any organization that does business or operates in Europe, even if it is not physically located there, is subject to GDPR. This includes companies located in the United States that conduct business in Europe or with European companies. For US companies, the GDPR replaces or augments the EU-US Privacy Shield. The industries most affected are financial services, drug manufacturing, and healthcare. GDPR requires that your business data remains secure.
Your company must ensure that personal data is protected and that you have proper policies and procedures in place to ensure compliance. GDPR also gives individuals greater control over their personal data. For example, you must be able to produce an individual's personal data upon request, and the data must be provided in a consumable format.

The GDPR makes it clear that your organization must protect personal data. Failure to comply can have consequences, including fines and restitution. Fines can reach up to €20 million or 4% of the company's global revenue, and organizations can also be subject to restitution for any harm caused by violating GDPR.

- One year to prepare: The GDPR takes effect May 2018.
- Not just for Europe: GDPR applies to companies located in Europe and also to those that do business with European businesses.
- Data protection: Companies must have policies, processes, and technology in place to ensure that data stays secure and protected.
- Data consent and rights: Individuals must give consent, and the consent must be explicit and limited. Users also have the right to request their data, rescind requests, and revoke consent.
- Notification of data breach: The supervisory authority must be notified of a data breach within 72 hours of discovery, unless the breach is not likely to harm a person's rights and freedoms. The individual who owns the data must also be promptly notified, except where there is little risk of harm, where the data has been rendered unintelligible, or where notification would involve a disproportionate effort.
- Penalties: Fines can be levied up to €20 million or 4% of global revenue (whichever is greater). Individuals who are impacted by improper data handling may seek legal restitution.
- Data transfers: It is permissible to transfer data outside of Europe as long as companies establish safeguards and permissions in line with GDPR.
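To make the penalty cap and the 72-hour notification window above concrete, here is a small illustrative Python sketch. The function names are invented for this example, and the figures come directly from the bullets above; this is an illustration, not legal advice or an official calculator.

```python
from datetime import datetime, timedelta

def max_gdpr_fine(global_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine: EUR 20 million or 4% of
    global annual revenue, whichever is greater."""
    return max(20_000_000.0, 0.04 * global_revenue_eur)

def breach_notification_deadline(discovered_at: datetime) -> datetime:
    """The supervisory authority must be notified within 72 hours
    of discovering a breach."""
    return discovered_at + timedelta(hours=72)

# A company with EUR 1 billion in global revenue: 4% exceeds the floor.
print(max_gdpr_fine(1_000_000_000))   # 40000000.0

# A breach discovered at 09:00 on 25 May 2018 must be reported
# by 09:00 on 28 May 2018.
print(breach_notification_deadline(datetime(2018, 5, 25, 9, 0)))
```

For smaller companies the €20 million floor dominates, which is why the regulation is phrased as "whichever is greater."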
(More specifics about GDPR, its history, and why companies in the United States should care about it are here.)

Are there specific requirements for email archiving?

There are no specific requirements for archiving email within the GDPR; however, other regulations mandate that email, along with other forms of electronic communication such as text messages and social media data, be archived and accessible. GDPR does specify that data must be kept safe, and that if data is shared it must be secure. An archiving solution like Retain can help track and audit who has accessed data. This information is needed to comply with specific requests within the regulation, for example the user's right to withdraw consent (also known as the right to be forgotten).

How does Retain by Micro Focus help?

Micro Focus® Retain Unified Archiving™ helps ensure GDPR compliance through secure capture of all electronic communication data, including archived email, mobile, and social media. Data is stored in a secure, encrypted archive: messages are archived using AES encryption, with support for EMC Centera or NetApp SnapLock storage. Optional Windows server or Linux server encrypted partitions can be used, and Retain features native support for iCAS technology. When deployed in the cloud, Retain features redundant and secure data centers, keeping your data safe and secure.

Access to data is tightly controlled through customizable role-based permissions. Only users with granted rights can access the archive or use the features and functionality of the Retain system. All access to the archive is monitored with fingerprinting via the audit trail. Retain creates a searchable audit trail of all administrators and users who have permission to access the archive, giving you a record of all activity.

Retain features Write Once Read Many (WORM) storage. This ensures that data is written to the archive only once, while remaining readable many times. Archived data cannot be changed, supporting compliance with GDPR and other regulations. Retain fulfills the "right to be forgotten" by allowing administrators to delete entire mailboxes from the archive, so that all of a user's information can be deleted from Retain when requested.
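The WORM (Write Once Read Many) storage concept described above can be illustrated with a minimal sketch. This is a hypothetical toy model, not Retain's actual implementation: each key can be written exactly once, later writes are rejected, and reads are unlimited.

```python
class WormStore:
    """Toy Write Once Read Many store: a key may be written once,
    read any number of times, and never overwritten.
    (Illustrative only; real WORM archives enforce this at the
    storage layer, not in application code.)"""

    def __init__(self):
        self._data: dict[str, bytes] = {}

    def write(self, key: str, value: bytes) -> None:
        # The defining WORM property: reject any second write.
        if key in self._data:
            raise PermissionError(f"{key!r} already archived; WORM data is immutable")
        self._data[key] = value

    def read(self, key: str) -> bytes:
        return self._data[key]

store = WormStore()
store.write("msg-001", b"archived email")
print(store.read("msg-001"))          # b'archived email'
try:
    store.write("msg-001", b"tampered copy")
except PermissionError as e:
    print("rejected:", e)
```

The same property is what lets an archive serve as tamper-evident compliance evidence: a record can be proven unchanged since the moment it was written.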
(Source: https://blog.microfocus.com/what-is-the-gdpr-how-does-it-impact-your-business/)
The Stone Circle, Silbury Hill, West Kennet Long Barrow, The Avenue, and Tumuli

Stonehenge is not to be missed, but the megalithic circle at Avebury and the many surrounding ancient structures are far more interesting. It's easy to spend an entire day there exploring the many sites. Avebury is a complex of several megalithic structures of widely varying design, built over a period of many centuries. Avebury is one of the finest and largest Neolithic monument sites in Europe and dates back to around 3,000 BC. Avebury is older than the megalithic stages of Stonehenge, although some of the purely turf components of that complex go back to a similar age.

The village of Avebury is approximately midway between the larger towns of Marlborough and Calne, just off the main A4 road on the northbound A4361 towards Wroughton. It is about 32 kilometers north of Stonehenge. You will probably want to buy the 1:25,000 Marlborough Ordnance Survey map. The first map below (1:250,000 scale) shows Avebury's location with respect to other small villages in the area. The second map below (1:50,000 scale) shows Avebury village and some of the ancient monuments. The two maps below (1:25,000 scale) show the village and the prehistoric monuments in more detail. The public rights of way shown as green here are a fantastic feature of the U.K. — the public is guaranteed access to these footpaths!

As for logistics, Avebury is a small village. The only accommodations available are B&B at the Red Lion pub at the crossroads almost at the center of the henge. It's relatively expensive. I have visited Avebury as a day trip while staying at the Bath Backpackers hostel in Bath. Start by taking a bus to Devizes; buses then run at least hourly between Devizes and Avebury. But be very careful with the bus schedules: the last bus back to Devizes may leave Avebury around 1900! Click here for Wiltshire bus routes and timetables. The Red Lion is the place for food and drink in Avebury!
Avebury Village and the Stone Circle

The small village of Avebury was built within the protective earth banks around the stone circle. Now some small roads go through the circle. The central monument is a large circular earthwork — a circular inner ditch and external henge, or circular bank, just over 420 meters in diameter. Within the henge is the Outer Circle, the world's largest prehistoric stone circle, with a diameter of 335 meters. Here you see a view from the top of the henge along the south-east side, looking down across the ditch and toward Avebury Manor just outside the west side of the henge. The A4361 road cuts south to north through the henge.

The Outer Circle was originally made up of 98 sarsen standing stones, varying in height from 3.6 to 4.2 meters and some weighing over 40 tons. Carbon dates from the fill below them date to 2,800-2,400 BC. Here you are looking across the northern edge of the Outer Circle toward the Church of England village church in the distance. It has an 11th century Saxon nave, and two original Saxon windows survive. Its dedication was described as "All Saints" in the 13th century, but it is now dedicated to Saint James.

Within the Outer Circle were two separate stone circles of about 100 meter diameter. Only two standing stones remain of the Northern Inner Ring, and the Southern Inner Ring is entirely destroyed. The Northern Inner Ring was 98 meters in diameter. It had a cove of three standing stones at its center, with its entrance facing to the northeast. The Southern Inner Ring was 108 meters in diameter. It had one large central monolith, 5.5 meters high, with an alignment of smaller stones. Some archaeologists believe that parts of the monument were designed for acoustic effect, with sounds produced in the Inner Rings creating special echoes. Here you are standing on top of the henge on its south-east side looking to the north-east.
Traveling from outside to in (right to left in this picture), you would first go up the steep outer slope of the henge, up to 10 meters above average ground level. Then you would descend the henge and continue further down into a ditch, down to maybe 5 meters below average ground level. With the ditch on the inside rather than the outside, this clearly is not a defensive structure, or at least not a very well-designed one. Inside the ditch is the Outer Circle of monoliths. The ditch itself is a large project, 21 meters wide and 11 meters deep, carved into the chalk. Intermittent excavations from 1908 through 1922 by Harold St George Gray showed that red deer antlers had been the primary digging tools.

The two large standing stones flanking the Southern Entrance had unusually smooth surfaces. This is probably because stone axes were ritually polished on their faces. Here I am in the main stone circle as the full moon rises!

The local people destroyed many of the standing stones in the Late Medieval and Early Modern periods, for a mixture of religious and practical reasons. Some of this was done to obtain stones for construction, but much was simply destruction for religious purposes. England had been converted to Christianity, and Avebury's obvious non-Christian origin caused it to be associated with the Devil. The largest stone at the southern entrance to the henge was called the Devil's Chair, a set of three close stones was called the Devil's Quoits, and the stones in the smaller Northern Inner Ring within the main circle were called the Devil's Brand-Irons.

The local people began digging pits beside the large standing stones, then pulling the stones down and burying them in the pits. This apparently began in the 14th century at the urging of the local priest, probably either Thomas Mayn (served 1298-1319) or John de Hoby (1319-1324). It appears that a travelling barber-surgeon was passing through some time soon after 1325.
He was carrying a leather pouch with three silver coins dated to around 1320-1325, a pair of iron scissors, and a lance. The locals were preparing to pull down one of the stones, one standing 3 meters tall and weighing 13 tons. The travelling barber-surgeon either got too involved or was just standing too close — the stone fell on him as he was standing in the burial pit, fracturing his pelvis and breaking his neck while trapping his body under the 13-ton stone. The locals did not have the technology to lift the stone and give him a proper Christian burial. Or at least not a standardized one. But he certainly got buried in the middle of what they saw as a Christian activity.

That seems to have been the end of the stone-toppling, maybe because the locals saw this as retribution from a vengeful spirit or maybe even the Devil himself. It sure seems that it would have been one of the locals crushed, rather than a travelling barber-surgeon who just happened to be passing through that day, but they stopped anyway. Then the Black Death arrived in 1349, decimating the population and giving the survivors more pressing requirements for their limited time and manpower. The barber-crushing episode made enough of an impression that not only did they stop, but records from the 18th and 19th centuries show that there were still local legends about a man being crushed by a falling stone. Archaeologists found him in 1938. They named the stone the "Barber Stone" and stood it back up.

The destruction returned and reached its peak in the later 17th and 18th centuries, influenced by the rise of Puritanism. The majority of the standing stones in the monument were smashed to be used as building material.

The Stone Age, named for the best technology of its time, before bronze and even longer before iron was developed, is divided into periods: the Paleolithic or Old Stone Age; the Mesolithic or Middle Stone Age; and the Neolithic or New Stone Age.
The definitions of those periods depend on the region. You ask the question: when were humans occupying that territory and using that level of technology? For Britain, the Mesolithic period is applied to the time from about 9,600 to 5,800 BC. Britain was not an island then, as a landmass called Doggerland connected Britain to continental Europe. This was a large area stretching from today's British east coast to today's western coasts of Denmark, Germany, and the Netherlands. Enough of Doggerland was submerged by about 6,500 BC to cut off what had been the British peninsula to form the island of Britain, although the Dogger Bank, an upland area, probably remained as an island until at least 5,000 BC. Neolithic spear points, a Neanderthal skull fragment more than 40,000 years old, and other Neolithic objects have been dredged from today's North Sea floor.

The Mesolithic inhabitants of Britain were hunter-gatherers, moving around a heavily forested landscape in small family or tribal groups. Some Mesolithic flint tools dated to 7,000—4,000 BC have been found in the area around Avebury.

Society in Britain underwent radical changes in the 4th millennium BC, bringing about the transition to a Neolithic culture. The huge change was caused by the development of agriculture, either as the arrival of the concept from its origins to the southeast, possibly in today's Turkey, or its independent local invention. The formerly nomadic hunter-gatherer people could settle down and produce their own food in place. Domesticated species of animals and plants were introduced, and new technology such as pottery was introduced or developed. The people cleared land to increase their agricultural activity. This control of the landscape led to the erection of monuments. Between about 3,500 and 3,300 BC the prehistoric Britons had reached the limit of their expansion for a while, and concentrated on further developing the best agricultural areas.
It appears that their religious beliefs and practices changed around that same time. They had been building large chambered tombs, apparently for ancestor veneration. In the 3,500—3,300 BC time frame they began building large circular structures, initially mainly of wood and then from stone. The people who built Avebury clearly had a stable and secure agrarian society, in order to expend so many resources in its construction. Silbury Hill is about 40 meters tall and was built in stages starting around 2,400 BC. It is made mostly from chalk and clay excavated from the surrounding area. Silbury Hill is the largest prehistoric man-made earthen mound in Europe and one of the largest in the world, and it was the largest man-made structure in Europe until the Middle Ages! Archaeologists have calculated that Silbury Hill took 18 million man-hours, or 500 men working 15 years to deposit and shape 248,000 cubic metres of earth and fill. One author says a project of this scope could not be carried out under the Neolithic tribal structure we usually assume was the rule in this time. Instead, there must have been some authoritarian power elite, probably theocratic, with control or at least influence over a broad region. If you're using GPS, this is at UK grid reference SU 100 685 or 51°24'56" N 1°51'27" W. See my page on the National Grid system and OS maps for details on how that works. The base of the hill is circular and 167 meters in diameter. The summit is flat-topped and 30 meters in diameter. A smaller mound was constructed first and much enlarged in a later phase. Recent surveys have shown that the center of the flat top lies within one meter of the center of the outer cone of the hill. The first phase, carbon-dated to 2,400 BC, consisted of a gravel core within a revetment of stakes and sarsen boulders. Alternating layers of chalk rubble and earth were placed on top of the initial core. 
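As an aside on the grid references used throughout this page (SU 100 685 for Silbury Hill, SU 104 677 for the West Kennet Long Barrow, and so on): each two-letter prefix names a 100 km Ordnance Survey square, and the digits give the easting and northing within it. A small Python sketch of the decoding, written for this page rather than taken from any library, shows how it works:

```python
def parse_grid_ref(ref: str) -> tuple[int, int]:
    """Convert a two-letter OS National Grid reference such as
    'SU 100 685' into full easting/northing in meters."""
    ref = ref.replace(" ", "").upper()
    letters, digits = ref[:2], ref[2:]

    def idx(c: str) -> int:
        # The grid letters run A-Z but skip 'I'.
        i = ord(c) - ord("A")
        return i - 1 if c > "I" else i

    l1, l2 = idx(letters[0]), idx(letters[1])
    # First letter picks a 500 km square, second a 100 km square within it.
    e100 = ((l1 - 2) % 5) * 5 + (l2 % 5)       # 100 km squares east
    n100 = (19 - (l1 // 5) * 5) - (l2 // 5)    # 100 km squares north
    # Split the digits in half; 3+3 digits give 100 m precision.
    half = len(digits) // 2
    unit = 10 ** (5 - half)
    easting = e100 * 100_000 + int(digits[:half]) * unit
    northing = n100 * 100_000 + int(digits[half:]) * unit
    return easting, northing

print(parse_grid_ref("SU 100 685"))   # (410000, 168500), Silbury Hill
print(parse_grid_ref("SU 104 677"))   # (410400, 167700), West Kennet Long Barrow
```

Four-digit halves, like the Sanctuary's SU 1180 6805, decode the same way at 10 m precision.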
The second phase involved excavation of an encircling ditch and using the resulting chalk material to extend the core. The ditch was later backfilled, and the mound was extended to its present dimensions using material from elsewhere. Here you see Silbury Hill from the top of the West Kennet Long Barrow. The village of Avebury is hidden behind the hill to the right.

So what was its point? It's huge, but its shape is very simple and doesn't really suggest much. Of course that hasn't stopped people from coming up with all sorts of fanciful ideas! The explanation that seems the most plausible, or at least the most possible to support, is that Silbury Hill and some of the surrounding monuments were designed with a system of inter-related sightlines. From Avebury and various surrounding barrows, a subtle step several meters below its summit aligns with hills on the horizon behind it or with hills in front of it.

West Kennet Long Barrow

The West Kennet Long Barrow is a burial mound for about 50 individuals, dating from around 3,600 BC. It is classified by archaeologists as a chambered long barrow or Neolithic tomb structure. It has two pairs of opposing transept chambers and a single terminal chamber, all used for burial. It's about 104 meters long by 23 meters at its widest. The four people at the left of this picture are standing on top of it looking toward Silbury Hill. The entrance is at one end, between some megaliths. If you're using GPS, this is at UK grid reference SU 104 677. Here you are looking directly toward its entrance and the large sarsen slabs used to seal entry. Construction began about 3,600 BC, some 400 years before Stonehenge, and it was used until around 2,500 BC. The structure is 100 meters long. It has been estimated that about 15,700 man-hours were required for its construction. Excavations allow you to enter behind those sarsen slabs. Excavations have found burials of at least 46 individuals ranging from infants to the elderly.
The skeletons were disarticulated, with some skulls and long bones missing. It has been suggested that the bones were removed periodically for display or transported elsewhere, with the blocking sarsens being removed and replaced each time. It is covered by turf now, but when in use it would have had bare chalk sides and stood out prominently. Here is the view looking in from the entrance. A narrow passage leads back to a larger chamber, with the two transept chambers and the terminal chamber forming a "T" or cross shape. After use for around 1,000 years, the chambers and passage were filled with a variety of grave goods and earth and stone. One archaeologist suggested that the grave goods had been collected from a nearby "mortuary temple", indicating that this site was used for ritual activity long after it was used for burial.

I am in the main chamber. A recently installed small heavy glass skylight provides a little illumination. This is the view back toward the entrance from the main chamber. This is the view from the top of the burial mound, looking roughly east in the direction of the opening. And here, a view in the opposite direction, looking roughly west. The West Kennet Long Barrow is located on a prominent chalk ridge a mile and a half south of Avebury, with Silbury Hill in between.

The Sanctuary and Surrounding Tumuli

The Sanctuary is a stone circle site on Overton Hill around UK grid reference SU 1180 6805. It started as six concentric rings of timbers erected around 3,000 BC. A series of three increasingly larger timber structures were built, and then replaced around 2,100 BC with two concentric stone circles. The site was largely destroyed in 1723, and today you mainly see short concrete posts marking the positions of the stones and timbers. The timber posts may have supported a roof covered in thatch or turf. The resulting structure could have been a high-status dwelling associated with the nearby ritual site of Avebury.
Another possibility is that it was a mortuary where corpses were kept before or after ritual treatment at Avebury. The stones may have been erected after the timber posts and any associated building were removed. Or, the stones may have been erected within the remaining wooden post array. Here you see the markers of the original timber posts and stones. In the distance are two tumuli, seen below.

These tumuli are just across the busy A4 highway from the Sanctuary, near SU 1195 6815. The A4 highway was built on a Roman road. When the Romans built their road, it went past these monuments, which were already over 1,000 years old. The full Moon rises over some tumuli on a ridge. Trees, frequently oaks, tend to grow from the tumuli.

Now, for those who don't mind recklessly mixing their mythologies and prehistories... Norse mythology gave rise to the concept of a warden tree, or vårdträd in modern Swedish. This evolved from the Old Norse concept of a vörðr, meaning "warden" or "watcher" or "caretaker". The vörðr is a spirit that follows the soul or hugr of each person from birth to death. Old Norse vörðr became Old Swedish varþer, leading to modern Swedish vård. Belief in the vörðr remained strong in Scandinavia until the 1700s and 1800s. Under the influence of Christianity, the belief changed to be more like the Christian concept of a good and bad conscience.

As for the warden tree or vårdträd, this would be a very old tree growing on a farm, typically a linden, ash, or elm. It was believed to defend the farm and its family against bad fortune. It was a serious offense to break a leaf or twig from the warden tree, which was so respected that a family housing one could adopt a surname related to it, such as Linnæus like the family name of the great scientist, or Lindelius or Almén.

The West Kennet Avenue

The West Kennet Avenue, or simply "The Avenue", leads off 2.5 kilometers to the south of the Avebury henge toward the Sanctuary.
The B4003 road into Avebury was built parallel to it. The Avenue was originally lined by about 100 pairs of stones dating to about 2,200 BC, based on Beaker-style pottery found buried beneath some of them. Traces of the Beckhampton Avenue extend to the west from the Outer Circle. Continuing along The Avenue toward the Avebury henge, you climb a slight rise before you get to the stone circle. We turn around and look back down The Avenue from near the top of the rise. When we come over the rise we see the Avebury stone circle ahead. It's especially atmospheric if you happen to be there when a nearly full Moon rises around sunset.

As for the sometimes amusing views of pseudoscience and neopaganism: Ross Nichols, described as a "prominent modern Druid" and the founder of the Order of Bards, Ovates and Druids, believed that an astrological axis connected Avebury to Stonehenge. This theorized axis was flanked on one side by the West Kennet Long Barrow and Silbury Hill. He saw the West Kennet Long Barrow as a symbol of the Mother Goddess, and Silbury Hill as a symbol of masculinity. The reality is that the Druids, of whom we really know very little, didn't even appear until well into the Iron Age, some two thousand years after these monuments were built.

Megalithic nonsense had been around at least since the 1800s. The Reverend R. Warner wrote The Pagan Altar in 1840, arguing that the Phoenicians had built both Avebury and Stonehenge. Many in the Victorian era believed that the sea-faring Phoenicians had brought civilization to Britain. The Phoenicians were awfully, well, foreign, and James Fergusson's Rude Stone Monuments in All Countries in 1872 explained that Avebury had been built in the Early Medieval period to commemorate the final battle of King Arthur's warriors, many of whom were obviously buried here. Phoenicians? Arthurian warriors? Nonsense!
It was obviously those Native Americans from the Appalachian Mountains who crossed the Atlantic from North America to Britain back in prehistoric times just to build the great megalithic monuments of Britain. Or so W.S. Blacket explained it in his 1883 Researches into the Lost Histories of America.
(Source: https://cromwell-intl.com/travel/uk/avebury/Index.html)
Cybersecurity has lately become an inseparable part of even the smallest businesses. Everything is available online, and it has become necessary for companies to make their brands visible on the internet for better exposure and productivity. With that online visibility, small businesses and big corporations alike are often hunted down by cybercriminals. It could be a technical vulnerability or a weak security system that leads to such thefts. Basically, cybersecurity is supposed to do one thing: help businesses make better risk decisions. In his article for Dark Reading, Javvad Malik discusses several ways of using psychology as an effective tool to enhance your cybersecurity model.

When conducting a cybersecurity training program, you should understand that human psychology favors consistency over intensity when learning something new. Divide the training schedule into smaller, more engaging chunks. Focus on making your training program more of an interactive session than a lecture. Include the smaller, more intense details that are mandatory to the training process, but ensure it does not get too overwhelming.

Resolve Negative Stigma

Experts believe that most cybersecurity crimes are reported much later, when the damage is already done. One of the reasons for this is the negative stigma attached to calling out the mistake someone has made. Companies should strive to create a cohesive and inclusive environment where employees are not afraid to report that they have experienced a cybersecurity attack. It helps the company in the long run, and it also creates a sense of trust between employees and the enterprise they work for.

If you are conducting training programs, try to make the training simple at the beginning and gradually increase the complexity. This measured shift helps employees increase their knowledge and enhance their self-confidence.

Click on the link to read the article:
(Source: https://cybersecurity-journal.com/2021/11/05/can-psychology-improve-cybersecurity-training-model/)
The timing for a new memory technology couldn't be better as more demanding cognitive applications emerge that require ingesting petabytes of structured and unstructured data, for applications spanning the understanding of cancer metastasis to the forecasting of natural disasters.

Understanding the memory hierarchy

Today, there are three main ways to store digital information, or bits, in the form of "0"s and "1"s: hard disk drives (HDD), dynamic random access memory (DRAM), and flash memory.

HDDs, invented by IBM in the 1950s, continue to be attractive for PCs and data centers due to their low cost. But these devices are intrinsically slower, require quite a bit of power and, due to their mechanical moving parts, are less reliable.

In 1967, IBM Fellow Robert Dennard filed a patent for his world-changing invention known as DRAM. Decades later, it remains a veritable workhorse, driving almost all computing devices today. Its main drawback is that it's volatile, meaning that when the device shuts off power, DRAM loses the data. This is particularly a problem for any number of applications, such as remote sensor devices with small batteries used for the Internet of Things.

Today's top non-volatile memory technology (meaning that it retains data without the need for continued power) is flash, which can be found in devices ranging from USB sticks to the cloud. It is much faster at reading and writing data compared to HDDs, but is a laggard compared to DRAM. It also has an endurance problem, eventually breaking down after several thousand rewrite cycles.

For two decades, IBM scientists have been investigating several non-volatile memory devices, including magnetoresistive random-access memory, or MRAM, which uses the spin of an electron to store bits.
“We’ve been demonstrating IBM OpenPower systems’ value with non-volatile memory and believe non-volatile memory will be one of the biggest cost-performance levers for future datacenters,” said Bradley McCredie, IBM Fellow and vice president of Power Systems Development.

Guohan Hu, manager of the MRAM Materials and Devices group, optimized the perpendicular magnetic materials to enable an ultra-low write-error rate.

What is MRAM?

MRAM is based on a grid of cells with two ferromagnetic layers separated by a thin insulating layer. When the two magnets are both pointing north, this corresponds to a “0”. If the second magnet points south, it stores a “1”. The first generation of MRAM, called field-switched MRAM, was written by applying large currents to generate magnetic fields that switch the second magnet between north and south.

However, Slonczewski saw a better way. When a current is passed through the cell, the electrons transfer their spin from one magnet to the other and thereby switch the second magnet from north to south, or vice versa, thus writing the cell. When introducing this invention in his 1996 paper, Slonczewski coined the term “spin transfer” torque to describe this new switching mechanism. This new generation of MRAM is called Spin Transfer Torque MRAM (often shortened to Spin Torque MRAM or STT-MRAM).

IBM invents Spin Torque MRAM

Whereas first-generation field-switched MRAM has been produced for years, technical challenges, including the large write current required, prevented it from being scalable to high densities and impeded its wider adoption. Slonczewski’s invention opened up the possibility of using much smaller write currents, and hence much smaller write transistors, thus enabling much denser MRAM. However, to realize its full potential, a breakthrough in materials would still be required.
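The cell behavior described above can be sketched as a toy model: a pinned reference magnet, a free magnet that the write current flips, and a read that compares the two orientations. This is an illustration of the storage concept only, with invented class and method names, not device physics or any IBM implementation.

```python
from enum import Enum

class Orientation(Enum):
    NORTH = "north"
    SOUTH = "south"

class MramCell:
    """Toy MRAM cell: parallel magnet layers store 0,
    antiparallel layers store 1. (Conceptual sketch only.)"""

    def __init__(self):
        self.reference = Orientation.NORTH   # pinned layer, never changes
        self.free = Orientation.NORTH        # free layer, flipped to write

    def write(self, bit: int) -> None:
        # Spin-transfer torque: the direction of current through the
        # cell selects the free layer's orientation.
        self.free = self.reference if bit == 0 else Orientation.SOUTH

    def read(self) -> int:
        # Resistance is low when the layers are parallel (read as 0)
        # and high when antiparallel (read as 1).
        return 0 if self.free == self.reference else 1

cell = MramCell()
cell.write(1)
print(cell.read())   # 1
cell.write(0)
print(cell.read())   # 0
```

Note that nothing in the model depends on continued power: the free layer's orientation simply persists, which is the non-volatility the article highlights.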
Inspired by Slonczewski, scientists around the world, including teams at IBM and Samsung, continued to believe in the promise of MRAM because it offers several distinct advantages. MRAM is the only memory (indeed, the only emerging memory) that combines unlimited endurance (read and write cycles) with inherent non-volatility. This combination makes MRAM an ideal technology for always-on devices such as Internet of Things sensors, mobile devices and wearable electronics, offering more storage and longer battery life. In addition, because MRAM uses standard transistors, and is compact and robust, it's more easily embedded on the same chip as logic and other functions, compared to flash memory. Therefore many semiconductor foundries are considering replacing embedded flash with embedded Spin Torque MRAM at the 28-nm node and beyond.

“With these capabilities, a single MRAM chip can replace a combination of SRAM and flash for many ultra-low-power, medium-performance mobile and Internet of Things applications. However, our long-term goal remains to scale Spin Torque MRAM to be so dense and fast it can be used as a cache memory in IBM’s servers,” said Dr. Daniel Worledge, scientist and MRAM team senior manager at IBM Research.

IBM scientist Janusz Nowak co-authored a paper on scaling STT-MRAM down to 11 nm.

In 2011, Worledge and his colleagues published a paper that demonstrated a novel technique to produce MRAM with manufacturing line widths below 20 nanometers. They called it perpendicular magnetic anisotropy (PMA). It was a critical discovery because it would enable Spin Torque MRAM to scale to much higher densities, thus fulfilling what Slonczewski predicted back in 1996. The trick was to develop magnetic materials that allowed the magnet to point perpendicular to the plane, instead of in the plane. Although it was a notable achievement, Worledge and his colleagues knew they could scale even further, resulting in denser MRAM to lower costs even more.
Worledge adds, “Unless we could increase the density and speed, we knew that its use as a cache memory would never happen. So we went back to the drawing board to scale further.”

Reaching 11 nanometers

Appearing today in IEEE Magnetics Letters, Worledge, his IBM colleagues, and partners at Samsung have published a paper demonstrating the switching of MRAM cells for devices with diameters ranging from 50 down to 11 nanometers in only 10 nanoseconds, using only 7.5 microamperes, a significant achievement. “With PMA we are capable of delivering good STT-MRAM performance down to a write-error rate of 7×10⁻¹⁰ with 10-nanosecond pulses, using switching currents of only 7.5 microamperes. This could never be done with in-plane magnetized devices – they just don’t scale. While more research needs to be done, this should give the industry the confidence it needs to move forward. The time for Spin Torque MRAM is now,” said Worledge. “In the 20 years since these scientific discoveries at IBM Research and elsewhere, the phenomenon of spin transfer torque has inspired significant research and development efforts. These have now led to an STT-switched magnetic memory device 11 nm in size, back-end integrated onto CMOS circuits. This advance brings more possibilities to applications, and will accelerate the pace of innovation in the solid-state memory technology space,” said Jonathan Sun, a research staff member on Worledge’s team.

Spin Torque MRAM Symposium

On November 7, IBM is hosting a special 20th Anniversary Spin Torque MRAM Symposium at the Thomas J. Watson Research Center in Yorktown Heights, New York. The one-day event will feature a series of talks, including keynotes by Slonczewski and other leading scientists from around the world.
“Have you seen the price of Bitcoin?”, “You gotta get in on Ripple, it’s going through the roof!”, “Are we in a crypto bubble? Is it all going to crash?” You may have heard all the hype about cryptocurrencies over the past year. But what people aren’t talking about as much is the valuable technology that underpins Bitcoin, Ethereum, Ripple, and all the other cryptocurrencies out there. That technology is blockchain, and it may have a tremendous impact on the future of your business, especially if international financial transactions, supply chain management, and data security are important to your company. What exactly is blockchain, how can it help your business, and how can your business experiment with the technology?

(Infographic Source: Startupmanagement.org)

What is Blockchain?

The fundamental concept of a blockchain is that there is no centralized authority that lords over your transaction data and determines what is true or false, right or wrong. When a transaction is requested, instead of a single, centralized party verifying and executing it, the transaction gets distributed across many nodes (aka other people’s computers), where it is validated and confirmed. Once verified, a new block of data, secured with hash functions and timestamps, is added to the existing chain (hence, blockchain) of transactions, making your transaction immutable and permanent. Now, your transaction is complete.

The simplest example would be in the form of payments. Let’s say you want to pay your friend for tickets to Hamilton via PayPal. I know this is totally unrealistic because no one can get tickets to Hamilton, but let’s play along. So, you go to your PayPal app and send her money.
For this transaction to be verified and executed, PayPal, the centralized entity, must:
- Confirm your and your friend’s identity in their databases
- Make sure that you have enough money in your account (or have a bank account or credit card linked)
- Take money out of your account and transfer it to your friend’s account
- Update their databases to reflect the executed transaction

Now let’s say you wanted to pay your friend via Bitcoin. You make a Bitcoin payment request (usually through a cryptocurrency wallet like Coinbase) and the process goes something like this:
- The transaction request gets distributed to a network of computers called nodes
- The network validates the transaction and updates your and your friend’s balances (a lot more goes into this step, but let’s keep it simple)
- A new permanent and unchangeable block is added to the Bitcoin blockchain
- The transaction is complete

As you can see, this transaction isn’t verified by a single, centralized company like PayPal. Instead, it’s validated by many others who contribute to the blockchain in a way that is permanent and unchangeable. Cryptocurrencies like Bitcoin and Litecoin are often treated as interchangeable with blockchain, but that’s only part of the story. Yes, these cryptocurrencies run on a blockchain, but there are many more uses of blockchain that can help businesses.

Key Benefits of Blockchain to Businesses

Most cryptocurrencies are overhyped, but in our opinion, blockchain is not. While it’s still in its early stages, there are many ways that blockchain can help businesses by providing better tracking capabilities and more reliable accountability measures. Here are some of its key benefits.

With blockchain, transactions are executed without the involvement of a centralized third party, so in many situations they can be processed much more quickly.
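The tamper-evidence described earlier, blocks chained together by hash functions and timestamps, can be sketched in a few lines of Python. This toy omits the distributed consensus part entirely (no nodes, no mining); it only shows why editing an old block breaks every block after it.

```python
import hashlib
import json
import time

# Each block records data plus a timestamp and embeds the hash of the
# previous block, so altering any earlier block invalidates the chain.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64  # genesis placeholder
    chain.append({"data": data, "timestamp": time.time(), "prev_hash": prev})

def is_valid(chain):
    # Every block's stored prev_hash must match a fresh hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, {"from": "you", "to": "friend", "amount": 150})
add_block(chain, {"from": "friend", "to": "vendor", "amount": 150})
assert is_valid(chain)

chain[0]["data"]["amount"] = 9999   # tamper with history...
assert not is_valid(chain)          # ...and the chain no longer verifies
```

In a real blockchain, the same check runs on thousands of independent nodes, which is what makes a quiet edit by one actor impractical.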
Transactions can be completed within seconds or minutes, as opposed to days or even weeks if they were to go through an automated clearing house. Additionally, transaction costs can be minimized when international payments are being made. Many banks charge large international transfer fees, which can be lowered or avoided using cryptocurrencies. Blockchain payments also don’t have to go through foreign exchanges, so you can steer clear of all the exchange rate fluctuations, potentially saving you lots of money. For example, Australian car manufacturer Tomcar pays some of its overseas suppliers with Bitcoin and accepts the cryptocurrency from its customers through a partnership with CoinJar. The company was paying 6-12% in transfer fees to its international suppliers, and now only pays about 1% per transaction. Because of its success with suppliers, the company decided to accept Bitcoin payments from its customers as well.

Traceability and Transparency

Supply chains can be huge beneficiaries of blockchain technology due to the traceability it provides. Large companies’ supply chains are extremely complex networks of vendors located all over the world. Raw materials move across multiple modes of transportation, and it can be difficult to truly understand where product parts were produced and assembled. Hence, guaranteeing product quality and safety is tough. Blockchain can help better track all this supply chain activity. Because each step of the supply chain can be immutably documented and this data can be accessed easily and instantly, companies can trace exactly where their products are sourced from, when each part was manufactured, and by whom. Walmart has partnered with nine food providers and IBM to experiment with blockchain to trace sources of food. They’ve run pilots to track the paths of Chinese pork and Mexican mangos using Hyperledger Fabric, a blockchain framework originally created by IBM but now managed by the Linux Foundation.
If a foodborne illness occurs, Walmart can refer to the blockchain to easily pinpoint the culprit. Similarly, Everledger has used blockchain to build a global registry for diamonds to stop the spread of conflict diamonds. They’re expanding their system to other precious goods like fine wine.

While it isn’t a silver bullet for companies’ security woes, the distributed nature of blockchain is inherently more secure than its centralized database counterparts. If a hacker or a malicious insider gains access to a centralized database, that person can edit the data and use the stored information for nefarious purposes. But because the data on a blockchain is distributed across the world, a single actor can’t make changes to the data without those changes being verified by the many others in the blockchain network. Thus, blockchain is especially useful for securing very sensitive information such as financial, health, and identity data. Blockchain is even being used to improve upon existing cybersecurity technologies, such as public and private key cryptography and the storage of domain name service entries. The Department of Defense’s Defense Advanced Research Projects Agency (DARPA) is experimenting with blockchain to decentralize its back-office infrastructure to better protect its correspondence from exposure to hackers. If national security can be protected by blockchain, imagine how it can help you secure your business’s data.

Key Blockchain-as-a-Service Providers

Blockchain remains a very nascent technology and is still in the experimental phase for enterprises. While you can develop your own blockchain on top of technologies like Ethereum, this can take a lot of time and specific expertise that’s hard to come by because the technology is so new. Hopefully you’re already benefiting from the use of a cloud computing platform.
If so, a few of the major cloud service providers (hopefully one that you’re working with) are making it easier for you to deploy blockchains and experiment with how they can help your business.

As mentioned before, IBM created and supports the open-source Hyperledger effort with the Linux Foundation. The core product is Hyperledger Fabric, which allows you to create core components like consensus mechanisms and membership services in a plug-and-play manner. There’s no cryptocurrency required to create a blockchain, transactions can be public or confidential, and smart contracts can be written in Golang or Java. IBM also provides educational workshops, premium support, and even an accelerator to guide the development of new blockchain solutions. The company has partnered with many customers across a variety of industries. In addition to the aforementioned partnership with Walmart and food manufacturers, IBM is engaging with various banks to build a blockchain-based trade platform, collaborating with Sony to develop a system to manage students’ learning data on the blockchain, and working with AIG on a multinational insurance policy powered by blockchain.

Microsoft’s Blockchain on Azure

Microsoft launched its Blockchain-as-a-Service in 2015 in a partnership with ConsenSys, a company that builds decentralized applications on top of Ethereum. Microsoft developed a framework called Coco that makes blockchain networks enterprise-ready and is compatible with any blockchain protocol. Microsoft allows you to build different types of blockchains through its partnerships with various providers like R3’s Corda, Hyperledger, Quorum, and Chain. Its blockchain offerings also seamlessly integrate with its other cloud computing and software products.
Microsoft has scored contracts with Bank of America Merrill Lynch to speed up trade finance transactions, is partnering with EY and Maersk to build a marine insurance blockchain platform, and is pursuing the use of blockchain in the US Government with its launch of Azure Government Secret.

Oracle has worked hard to catch and keep up with cloud computing giants AWS and Microsoft. The company is betting that its blockchain offering, launched in October 2017, can help close the gap. Blockchain represents a double-edged sword for Oracle, one of the largest database providers in the world. On one hand, the distributed nature of blockchain is diametrically opposed to the centralized nature of databases. On the other hand, Oracle databases are used by over 98% of the Fortune 500, so its blockchain offering has an organic entry point into all of these large companies and can extend the capabilities of their many ERP and database solutions. Even though Oracle is a bit late to the game, when it sees a worthwhile opportunity, it goes after it hard. And Oracle definitely sees opportunity in blockchain.

AWS – maybe?

The top cloud service provider isn’t completely committed to blockchain. At AWS re:Invent 2017, CEO Andy Jassy responded to questions about a blockchain offering by saying: “We don’t build technology because we think it’s cool.” But the company is investing in the technology through its partner ecosystem, with collaborations with Sawtooth, R3 Corda, PokitDok, Samsung Nexledger, Quorum, and more. And companies like T-Mobile and PwC have built blockchains with AWS at their cores. It will be interesting to see if AWS fully jumps on the blockchain bandwagon like many of its competitors.

Blockchain may fundamentally change how the internet works. And there are many ways that your business can benefit from it, including transaction efficiency, transparency, and increased data security.
Cloud service providers like Microsoft, IBM, Oracle, and (maybe) AWS are building and improving upon tools that can help you experiment with blockchain and determine how it can help your company.

By Mike Chan

Mike Chan is the Chief Marketing Officer of Thorn Technologies, an AWS-certified cloud computing and software development firm. Thorn Technologies excels in the development of cloud architectures, cloud migrations, messaging systems, and mobile apps for companies like Sprint, VMware, Experient, and many more.
Computer networks are probably the best example of graphs these days. I therefore started to consider a graph database as an excellent tool for storing the experimental results of my network complexity analysis method. It’s a project that I’m doing (starting to do, really) in which I will try to create a better method of computer network complexity audit by combining a few existing methods and by enhancing some of their algorithms to get more precise results out of the whole thing.

The idea is that most network complexity measurement mechanisms rely strongly on graph theory, in which most metrics for measuring network/graph complexity relate to connectivity, node distance, and similar graph characteristics, but with no particular way of measuring the implementation complexity or the operational complexity of the resulting network. Furthermore, existing methods do not offer any way to evaluate a network system from an economic perspective, which would greatly increase the use cases for this new method, specifically in the planning and design phases. What does that mean? It means that a network complexity evaluation method which can evaluate a future network design from a technical and an economic perspective at the same time could potentially help not only engineers to select the best feasible network solution, but also a company’s executives to select the most cost-effective one.

As you may know, normal relational databases use tables which are related through an index field, in the form of a separate (usually first) column in each table. Another table can refer to that index (usually a number) to fetch a whole row of data from the first table. And that’s it. You can save some data in one table and other data in another, and get out the information you need by combining several tables, without needing to keep all the data inside one huge table; in this way you avoid data redundancy and the creation of too-big, hard-to-search data sets.
A graph has two types of elements: nodes and relationships. Each node is an entity such as a place, thing, or person, and a relationship connects two nodes, describing how they are related. A graph database does not use tables and the relations between them. It focuses instead on the connections between sets of nodes, and those connections carry the most important part of the data. In a relational database, relations are just pointers which relate (usually in only one direction) one table cell to a whole row in another table. In a graph database, the relationships carry the important part of the data, explaining how one entity is connected to another, how strongly, and by which means. Nodes with more connections are usually more important than those with fewer connections, and so on. Nodes can also have attributes, but only those which briefly describe the differences between nodes. The great thing about graph databases is that the data describing most real-world systems, and therefore most computer systems and applications (because they copy real-world systems in the way they work), is more elegantly and more precisely represented, saved, indexed, and searched by a graph database.

Idea – Plan

The idea, and the likely plan for my project, is to build a graph database in which I will save all the attributes from my network design models, in order to get a structure that can be used to evaluate the complexity of the suggested network topologies. Basically, things like graph distances, weight, symmetry, centrality, global complexity, average complexity, normalized complexity, sub-graph count, total walk count, vertex accessibility, etc. can all be part of the complexity evaluation, and the data representing those metrics is probably well suited to being saved into a graph database for advanced handling and investigation.
The research project around the whole thing is just beginning, and I will surely write more about it here. The latest updates, and eventually some work in progress, can be seen at https://www.researchgate.net/project/Measuring-Computer-Network-Complexity, with more updates on the way shortly.
What if your computer could be hacked remotely, by someone who never gains direct access to your system – and never has to? No phishing emails, lost passwords, or physical access necessary; with the Hertzbleed computer chip attack, this kind of impossible hack becomes possible. The Hertzbleed hack is a new kind of CPU-targeting hack that could affect almost every computer in the world, entirely remotely. While currently slow-moving and largely in the exploratory phase, Hertzbleed has some new capabilities that make it especially concerning to cybersecurity experts.

What is the Hertzbleed computer chip attack?

Hertzbleed is a new development in the world of CPU side channel hacks. A side channel attack is a specific kind of data breach that works without the user’s consent or knowledge. Instead of requiring the user to download infected malware or visit unsecured web pages, a side channel attack like Hertzbleed extracts secrets by observing the CPU’s physical behavior: its clock frequency varies slightly with the data being processed, and those variations leak information, defeating the constant-time mechanisms that normally protect sensitive code.

How does a CPU hack work?

In order to understand Hertzbleed, it is first important to understand how it interacts with the CPU. The CPU, or central processing unit, is essentially the brain of every technological device we own. Your computer has a CPU that it uses to understand and execute commands. Functioning like a calculator, the CPU uses strings of 1s and 0s to communicate with other elements like your computer’s memory, RAM, and graphics card. Modern CPUs consist of multiple cores that can handle multiple streams of code at once, making technology function faster and more efficiently.

Enter the Hertzbleed hack. All CPUs leave a kind of physical signature. It might consist of elements like the noise your computer makes when it performs certain tasks, or how much it heats up. With Hertzbleed, hackers may even be able to infer keystrokes and translate sensitive information being transmitted.
Alternatively, they may be able to analyze the amount of CPU being used to perform certain tasks, in order to learn what you’re up to when you turn on your laptop or computer. What’s especially concerning about Hertzbleed is that the entire process can be deployed remotely. Hackers could act as CPU mind readers from afar, learning about the computer’s usage and circumventing most kinds of protections while being based anywhere in the world.

How was the Hertzbleed attack discovered?

Researchers from the University of Texas at Austin, the University of Illinois Urbana-Champaign, and the University of Washington in Seattle banded together to discover and demonstrate the potential of the Hertzbleed attack. Their findings were disclosed to Intel and then published in order to underscore the risks they uncovered: that Hertzbleed renders the mainstay defense against timing attacks, constant-time programming, vulnerable to remote breach.

Who is at risk of being affected by the Hertzbleed computer chip hack?

The Hertzbleed hack has been shown to have the capacity to affect all Intel and AMD processors. Together, Intel and AMD hold anywhere from 60 to 80% of the market share for CPUs.

The good and bad news about Hertzbleed data privacy attacks

The good news about Hertzbleed is twofold: it was discovered by researchers under laboratory conditions, and brought to manufacturers’ attention before it was found being used by hackers in the wild. This gives data security experts a leg up on learning how to circumvent it. Additionally, Hertzbleed is currently slower-moving than other kinds of malware, spyware, or ransomware. It would take a long time to read large amounts of data using Hertzbleed. This means that while Intel processors are at risk, the concern is greatest for cryptographic engineering practices. Stealing smaller amounts of data, such as passwords or individual encryption keys, remains a very real threat using Hertzbleed.
However, accessing the bulk of most users’ activity would likely take up much more time than would be worth it for hackers at the moment. Still, researchers and data privacy experts say not to underestimate the new side channel hack. For one, the ability of Hertzbleed to be carried out entirely remotely is unprecedented, and could herald the creation of other, faster side channel attacks in the future that would be even more dangerous. For another, Hertzbleed is not deterred by time-invariant (constant-time) code. This kind of code is effective against most side channel attacks because it instructs the CPU to take the same amount of time regardless of which command is being executed, which helps obscure larger or more important operations from spyware. Hertzbleed, however, is not fooled by this kind of programming, which has been the main defense of computer processing equipment since 1996. Finally, even after being informed of the very real risk from Hertzbleed early on by researchers, neither Intel nor AMD currently has any plans available to address the possible breach. No microcode patches have been deployed to fix the problem, nor does it seem that the companies are sure how to move forward.

Should you worry about Hertzbleed?

If Intel and AMD’s response is anything to go by, the threat may not be immediate. However, this new kind of remote timing hack has revolutionized the world of data breaches, and shows how quickly cybersecurity has to move in order to keep up with the latest trends from hackers.
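To make the timing side channel idea concrete, here is a toy example of the classic case (not Hertzbleed itself, which is notable precisely because it leaks through frequency changes even when code runs in constant time). A naive byte-by-byte comparison returns early on the first mismatch, so how long it takes leaks where a password guess went wrong; Python’s `hmac.compare_digest` exists specifically to remove that signal.

```python
import hmac

# Naive comparison: runtime depends on the secret data, because it exits
# at the first mismatching byte -- that duration is the side channel.
def naive_equal(a, b):
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:        # early exit leaks the mismatch position
            return False
    return True

secret = b"correct-horse-battery-staple"
wrong_early = b"Xorrect-horse-battery-staple"  # mismatch at byte 0 (fast reject)
wrong_late  = b"correct-horse-battery-staplX"  # mismatch at last byte (slow reject)

# Functionally the two comparisons agree; the difference is that
# hmac.compare_digest takes the same time for wrong_early and wrong_late,
# while naive_equal does not -- which an attacker can measure remotely.
for guess in (wrong_early, wrong_late):
    assert not naive_equal(secret, guess)
    assert not hmac.compare_digest(secret, guess)
assert naive_equal(secret, secret)
assert hmac.compare_digest(secret, secret)
```

The unsettling point of the article is that this defense, which has been sufficient against remote timing attacks for decades, is exactly what Hertzbleed circumvents: constant instruction time no longer means constant observable time when the clock itself shifts with the data.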
Science fiction traditionally depicts robots that reflect or caricature humans, replicating our best or worst traits. As robots are becoming increasingly commonplace and lifelike, is humanoid robotics art or deception? Joanna Goodman reports in the run-up to UK Robotics Week. A fascinating debate took place at the CogX AI festival in London last week: Should robots resemble humans? The argument was intriguing, as it didn’t refer to whether robots should look or sound like humans convincingly enough to deceive us into thinking that they might be alive or sentient, but whether they should resemble us at all. One thing was clear: this isn’t solely a question of humanoid design affecting engagement. Although Amazon’s digital assistant, Alexa, has a woman’s name and a synthetic female voice – the perceived gender of AIs and robots is an equally important debate, as it tends to reflect cultural biases – we engage with it, even though the Echo and Dot devices neither look, nor sound, human. The panel, moderated by Kate Devlin, senior lecturer at Goldsmiths College, University of London, included two creators of human-like robots: David Hanson, of Hanson Robotics, who is best known for the android Sophia, the first robot to be granted (honorary) citizenship (by Saudi Arabia), and Will Jackson director of Engineered Arts, creator of, among others, RoboThespian, a human-sized acting robot. On the other side of the argument were two academics who specialise in AI and ethics: Joanna Bryson, associate professor in the department of Computing at University of Bath, and Alan Winfield, professor of robot ethics at the University of the West of England. Bryson’s interest in robot ethics began when, as part of her PhD, she built a humanoid robot that was designed to have a basic level of understanding so that it could learn from human experiences. But the project ran out of money and the robot, named Cog, didn’t work. 
But that didn’t stop people identifying and empathising with it, she said. “People kept telling me that it was unethical to unplug the robot, even though it wasn’t plugged in and it didn’t work! This was not about intelligence or sentience; it was about anthropomorphism. Human-like AI may be easier to use, but our brain snaps into a set of assumptions when we see something that looks like a person.” “Robots that look like people worry me, because they are a deception,” agreed Winfield, highlighting four ethical objections. “First, robots should not be designed to exploit vulnerable users. We are all vulnerable: we see faces in everything from slices of toast to shadows on Mars. Secondly, we can build lifelike bodies, but we cannot yet equip them with AI to match the expectations created by their appearance. “My third ethical objection is to gendered robots: robots do not have gender, and I find the idea of a man building a mechanical woman deeply troubling. Finally, I worry about the representation of robots in the media. Reports of robots that are ‘alive’ do not advance the cause of robotics and AI.” Hanson was quick to respond. Although his dream is that robots will ‘come to life’ someday, his defence is based on art and utility. “From prehistory and the caves of Lascaux to today’s animated movies and games, we have depicted the human form. We could say that art is deceptive, but it also brings great value into our world. We have created increasingly lifelike behaviour in animated figures in movies and games, and now we are creating AI agents.” Hanson’s ‘characters’ – like Sophia – bring together different technologies, including machine learning and language generation, and are designed to explore the human condition, he said. Hanson’s R&D work includes projects on manufacturing, dealing with autism (where a number of other companies’ robots have been used successfully in classrooms), and medical training. 
However, he acknowledged that there are ethical concerns. “How do we grapple with these issues? We need to bring them into the light, so that we can educate the public and involve them,” he said, adding that Hanson Robotics has fully disclosed the details of Sophia’s technology and asserted that she is definitely not alive. Like movies and other art forms, anthropomorphism is a willing suspension of disbelief, he said. Jackson has been making humanoid robots since 2004, but he didn’t deliberately make them look like people until he was specifically asked to for the Channel 4 series, Humans. “Seeing the emotive responses and how people engaged with the machine totally changed my perspective,” he said. “It’s so easy to get people to believe that something is ‘alive’. It’s not about the skin – it’s about biological motion. If something moves like it’s alive, people will anthropomorphise it.” Jackson illustrated that by referring to reactions to the YouTube video of an engineer kicking Boston Dynamics’ dog robot, in order to test the machine’s balance. “There are so many comments that react to this as if someone were mistreating a real dog, and it doesn’t even have a head!” he said. “But it moves like a dog.” Jackson agreed with Hanson that human-like robots are an exploration of humanity, which involves a willing suspension of disbelief. And like art and cinema, they are also about engaging people, he said. “It is about communicating with people and presenting information in a way that they understand.” Facial expressions make a big difference, he continued. “The biggest bandwidth you have is not your voice, it is your face,” he said. “For example, surprised, expressive eyebrows on a robot are an intuitive means of communication. 
Wouldn’t it be great if you could just nod your head when Alexa asked you to confirm you wanted the light switched on?” Like Hanson, Jackson acknowledged the ethical concerns, however, observing that these reflect the way technology is applied, rather than what it can do. Bryson was not convinced. “People get weird about robots,” she says. “At least a billion people belong to religions in which art isn’t allowed to represent humans, and yet Sophia, who is a representation of a woman who is not covered, is an honorary citizen of Saudi Arabia.” Another consideration is that we soon get used to technology that initially makes us uncomfortable. Bryson referenced how the first King Kong movie terrified cinema audiences, yet now people enjoy horror movies. “I have no problem with the willing and transparent deception of the arts,” said Winfield, but he followed up by asking Hanson whether he regarded his robots as art installations or as scientific instruments. The answer is both, said Hanson. “We are combining the best AI, which includes our own inventions and other readily available applications, into our diverse creations, which include artworks and bio-inspired R&D applications. Some look like cartoons and some look more like people.” Bryson observed that some people are defending Sophia’s human rights, even though she is a robot. Jackson countered that this does not mean that they think she is alive – after all, they can see the motors in the back of her head. Hanson was not aware of his creation’s honorary citizen status until he saw it on the news, he said. “My first reaction was conflicted, but then the team agreed that Sophia could be a platform for human rights in Saudi Arabia.”

Belief and achievement

Although Hanson Robotics fully discloses the workings of its machines, that doesn’t stop people believing in them in the same way that a young child thinks Mickey Mouse is real.
More seriously, he referenced Elizabeth Broadbent’s research at the University of Auckland, showing that more realistic agents create greater empathy. This perspective was reflected by a question from the floor: can robots that look human achieve more than those that do not? Winfield’s experience is that robots don’t need to look convincingly human – a cartoon face is enough to create engagement. Jackson agreed, although skinned – or more realistic – robots (androids) maintain a humanlike presence even when they are switched off. Bryson observed that people identify with Sophia even though Hanson has published ‘her’ workings online, while Google’s sophisticated AI is rarely flagged up as dangerous in the way that robots are, perhaps because it is not shaped like a person.
Return to gender
Hanson flagged up the value of humanoid robots in training healthcare practitioners to save lives, and the discussion returned to gender. While Winfield raised objections on the grounds that a machine cannot have a gender, Jackson observed that although RoboThespian is a “non-gendered lump of metal”, the first question people ask is whether it is a he or a she, and whether it has a name, adding that the French language assigns every inanimate object a gender. This raised the issue of identity. Hanson develops his robots as characters, in the same way as a character in a movie or computer game, and this includes gender, although one is deliberately non-gendered as it represents an abstraction of the human condition. Ultimately there was some consensus that it is acceptable for robots to look like humans, as long as this offers commercial, societal, or artistic value, and there is sufficient transparency so that people are not deceived into thinking that they are engaging with a human, rather than a machine. - UK Robotics Week takes place in the final week of June, although events have been running throughout this month.
Internet of Business says The expectation gap between what people imagine is sentience versus the reality of engaging with humanoid robots is often severe. Some robots, such as SoftBank’s NAO machines, are deliberately designed to appear ‘cute’ – as though they are vulnerable beings, rather than computers with bodies made of plastic and servos. In many senses, that can be seen as manipulative. When Internet of Business editor Chris Middleton met Professor Hiroshi Ishiguro, maker of the Erica android, two years ago, Ishiguro was adamant that his research is designed to explore people’s reactions to humanoid machines, and that in the long-term his aim is to use his work to design less lifelike devices that people still engage with. Nonetheless, the issues surrounding a man designing a machine to be a beautiful, passive woman are real, and Ishiguro is not alone in his pursuit. The question of perceived gender in machines is important, and often troubling. Sage’s Kriti Sharma has done excellent work highlighting the risks of ‘gendered’ machines, which often replicate societal prejudices – assistant, secretary-like AIs that are designed to be female, and professional, decision-making AIs that appear male, and so on. Robots and AIs should celebrate their ‘botness’, she says, rather than perpetuate problems in human society that we are only now starting to unpick. However, the underlying question is a simple one: what are humanoid robots for? Outside of the realm of sci-fi, the answer is that no-one really knows. Arguably, the quest to design and build humanoid machines is an act of hubris first, and an engineering challenge second, although transhumanists would counter that we are all on a journey towards increased mechanisation, merging the worlds of biology, DNA, and chemistry with those of electronics and robotics; our wearables are the next step on that journey, perhaps. 
That said, the coming generation of household robots is already shaping up to be about devices that resemble mobile home hubs or enhanced smartphones, rather than the mechanical men and humanoid companions of lore. Perhaps the truth is that humanoid robots were designed to entertain us all along, while the real work took place elsewhere. Additional analysis and commentary: Chris Middleton
Just a few years ago, Big Data revolutionized how businesses make decisions. With the ability to collect and analyze data from diverse sources (such as IoT sensors, click streams, and application logs), businesses could uncover more valuable insights regarding their daily operations. Most companies now use Big Data to learn more about customer behavior, market trends, and production processes. As it currently stands, Big Data is collected, stored, and then analyzed using various tools. This process turns Big Data into historical information that is only useful for determining future trends. To make Big Data more effective, businesses need to leverage Fast Data. Fast Data is information that flows into your business at high velocity, megabytes of data per second or gigabytes of data per hour, and it typically needs to be analyzed in real time to make your business more responsive.
Why reacting to Fast Data in real time is important
With how valuable Big Data has been over the years, you may wonder why you need to take the extra step of leveraging Fast Data. There are many benefits to considering the velocity, volume, and variety of Fast Data. Fast Data makes you aware of and reactive to real-time events, such as a customer shopping on an e-commerce website. In such cases, the website must be responsive to customers' real-time decisions (such as displaying appropriate product selections based on click-through rates). If your business can capture and react to this data in real time, you can uncover valuable insights and meet customer demand more effectively.
How to capture value in Fast Data
The key to leveraging Fast Data is reducing the time between data arrival and value extraction. There are four key steps you can follow to develop a framework for capturing the value of Fast Data. This architecture is defined by processing individual events as they arrive, often within timeframes of less than a millisecond. The step-by-step process involves:
1.
Designing a data acquisition framework
Fast Data is defined by its volume, variety, and velocity. To capture the value in Fast Data, you need a data acquisition framework that can deliver this data on megabyte-per-second timelines. Simply put, your framework needs an asynchronous data transfer method and a parallel process for data transformation. This allows you to capture only the most relevant data from diverse sources, which can then be streamlined into the correct format for analysis. Apache Storm and Apache Kafka are two technologies you can use for data acquisition.
2. Storing the data temporarily
Storage solutions for Fast Data are very different from those used in your data center. In the context of Fast Data, storage means designing an appropriate model and a temporary storage phase, where data processing platforms can retrieve data in real time and uncover valuable insights. Think of this storage as a holding cell in a police station, where suspects are temporarily placed before being transferred elsewhere.
3. Real-time processing and analysis
Perhaps the most essential step when dealing with Fast Data is real-time processing. Your system needs to be a hybrid of stream and batch processing, so that you can capture the accuracy, complexity, and value of each incoming event. NewSQL systems can deliver the performance and handle the complexity required for your specific needs.
4. Presenting the data in a digestible format
After analysis, Fast Data needs to be presented in an easily digestible format; otherwise, its value will significantly decrease. Your aim should be to present visual data (such as graphs) that your target audience can easily understand. Give preference to high-level data, and have each report summarized in appropriate groupings. Talk to Mactores today for help leveraging your Fast Data.
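The real-time processing step above can be sketched with a simple tumbling-window aggregator. This is a minimal illustration, not a production stream processor; in practice a framework such as Apache Storm, Kafka Streams, or Flink would supply the windowing, fault tolerance, and parallelism, and the event fields and window size here are assumptions for the example.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Aggregate click events into fixed (tumbling) time windows.

    `events` is an iterable of (timestamp, product_id) pairs, e.g. a
    stream of click-through events arriving from a message queue.
    Returns {window_start: {product_id: count}}.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, product_id in events:
        # Map each event to the start of its window (0, 60, 120, ...)
        window_start = int(ts // window_seconds) * window_seconds
        windows[window_start][product_id] += 1
    return {w: dict(counts) for w, counts in windows.items()}

if __name__ == "__main__":
    clicks = [(5, "shoes"), (12, "shoes"), (30, "hats"), (65, "shoes")]
    print(tumbling_window_counts(clicks, window_seconds=60))
```

A real pipeline would emit each window's counts downstream as soon as the window closes, rather than accumulating everything in memory as this sketch does.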
The documented frequency of cyber attacks against the U.S. manufacturing industry increases year over year, as do the financial losses from successful breaches. It is more important than ever that manufacturers and producers undertake continuous vulnerability scans and penetration testing to identify susceptibility and ensure that cybersecurity controls are configured and functioning correctly to minimize losses. The first reason penetration testing is necessary is to reduce the loss magnitude associated with successful security breaches and the resulting business disruption. When a business experiences a data breach, the costs of containment, recovery, public relations, and fines can quickly add up. Depending on the duration and level of business disruption caused by the breach, the cost of failing to manufacture quality products, shipped accurately and delivered on time, can result in net annual losses. In more severe cases, these cyber incidents can be fatal to businesses and family legacies. The second reason penetration testing is necessary is to detect previously unknown vulnerabilities. The worst-case situation is to have exploitable vulnerabilities within your infrastructure or applications while the leadership team assumes assets are protected. The assumption of being unassailable leads to decisions that further reduce awareness, even as attackers probe your assets. Successful attacks, called breaches, can go undetected for months. Another reason penetration testing is important is that it provides feedback on the effectiveness of the security tools manufacturers use in their day-to-day operations. Most manufacturers and producers use some form of security tooling, such as backup software, anti-virus and anti-malware services, and system maintenance tools. While leadership teams may believe these tools are effective, they cannot assign any confidence level until the tools are adequately tested.
Penetration testers also identify misconfigurations and default configurations. These mistakes could allow criminals to disable security tools, allowing attacks to succeed and financial losses to occur. Penetration testing is also essential to manufacturers because of adherence to regulated guidelines. Manufacturers that follow regulated guidelines such as the Defense Federal Acquisition Regulation Supplement (DFARS) or the Cybersecurity Maturity Model Certification (CMMC) to enhance the protection of unclassified information within the supply chain must regularly conduct a penetration test to validate the level of security implemented. Without regular tests, among a list of other requirements, these manufacturers will fail to meet compliance and certification requirements. DoD contractors should begin planning for CMMC certification, because failure to secure an appropriate certification level will render contractors ineligible for new awards starting September 2020.
What is penetration testing?
Penetration testing is a controlled, simulated attack used to identify the potential flaws and weaknesses within a business's network, devices, or applications that could result in a data breach and financial loss. Penetration testing, also known as ethical hacking or pen testing, can be tailored to the business's needs and wants, and can include internal network security testing, external network security testing, web application testing, and mobile application security testing. The purpose of penetration testing is to help business and IT leadership identify vulnerabilities within their environment that could lead to an attacker accessing privately owned networks, systems, and sensitive business information. When vulnerabilities are discovered, penetration testers try to exploit them to access information, elevate the privileges of a user's account, or take control of the business network.
Penetration tests are conducted under strict rules mutually agreed upon by the company performing the penetration test and the company requesting the assessment. In some cases, companies will create “flags,” or proof markers, that penetration testers are asked to capture during the assessment.
What is the difference between internal penetration testing and external penetration testing?
With internal penetration testing, either the device used for the penetration test or the penetration tester is directly connected to the manufacturer’s or producer’s facility network. Internal penetration testing focuses on the vulnerabilities that affect devices at the local network level once one device on the network is compromised, such as an attacker connecting to a computer in accounting. With external penetration testing, the goal for the pen tester is to gain access to the internal network of the business by exploiting external resources, such as company login portals or devices with remote access capabilities accessible from the Internet, or through the use of malicious documents in emails, known as phishing. External penetration tests are performed to simulate an attack from an external entity trying to access your internal assets.
What happens during a penetration test?
During a penetration test, the pen tester will begin the assessment by scanning the environment to better understand what devices are immediately accessible and to learn about the processes and protocols in use. Once the network scan is complete, penetration testers will review the scan results to better understand the network devices, checking useful items such as the operating systems in use and the ports and services exposed by the systems, devices, and machines. Progressively, the penetration tester will review the scan reports to identify vulnerabilities as they test the services in use.
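The initial scanning step described above can be illustrated with a minimal TCP connect scan. This is a simplified sketch for understanding the idea, not a replacement for the tools pen testers actually use (such as Nmap), and it should only ever be run against hosts you are explicitly authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port; return the ones that accept.

    A completed TCP handshake means a service is listening, which is the
    same signal a 'connect scan' relies on. The scanner can then probe
    each open port to identify the service and version behind it.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Only scan hosts you are authorized to test.
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Real scanners add concurrency, service fingerprinting, and stealthier techniques (SYN scans, timing controls), but the open/closed signal is the same.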
Depending on the type of assessment requested, pen testers will either test all of the discovered vulnerabilities or test the vulnerabilities in line with the assessment goals. From there, the penetration tester will begin safely exploiting the vulnerabilities. As the vulnerabilities are exploited, the penetration tester(s) will document their findings for reporting and remediation purposes. As the assessment testing period concludes, the penetration tester will assemble the findings into a report that outlines the vulnerabilities discovered and how the pen tester(s) successfully exploited them.
What are the limitations that can affect the outcome of a penetration test?
While there are various types of penetration tests available to manufacturers and producers, many limitations can also affect penetration testing effectiveness. A blog article from Tutorials Point covers seven limitations that can affect the effectiveness of a penetration test: the length of time given for the penetration test, the scope of the assessment, the limitation of access to the system or network, the methods allowed, the skill set of the penetration tester, access to known exploits, and the inability to experiment with custom exploits.
- Time: Penetration testers are usually given a time window in which the assessment is to be performed. Depending on what is agreed between the business requesting the assessment and the group conducting it, penetration tests usually last for one to two weeks. By comparison, attacks conducted by cybercriminals and hackers focused on exploiting vulnerabilities can last for weeks, months, or even years.
- Scope: The scope defines the penetration test rules, often to prevent accidental damage to or disruption of business operations.
The scope can limit the times of day for conducting the assessment, which machines are allowed to be targeted or exploited, and which employees to target during assessments involving phishing emails. When the assessment allows the penetration tester a wider scope, the tester can find and exploit more of the vulnerabilities that criminals could use in a real cyber security attack.
- Limitation of access: Depending on the simulation or scenario that the penetration tester is given, the tester may be asked to test certain systems’ security but start the assessment from a different portion of the network. In these situations, this limitation is imposed to test the security of the network from various entry points, which can give the manufacturer a realistic representation of how far an attacker can get through the network from different starting points and show what information a hacker could gain access to in these situations.
- Limitation of methods allowed: Limiting the methods and exploits used is generally accepted by penetration testers. This is enforced to prevent accidentally crashing critical systems and affecting productivity. While a penetration test’s primary goal is to find exploitable vulnerabilities, the tester should be wary of any known exploit that could cause a system to shut down unexpectedly. In such cases, the penetration testers must inform the client of the vulnerability and the potential result of exploiting it. If the client does not wish the vulnerability to be exploited, the penetration tester should document the finding and include it in the final deliverable report.
- Known exploits and experimentation: These two limitations directly affect each other, as, without the ability to investigate beyond currently known exploits, an unknown exploit could later be used against the business.
These two limitations stem from the amount of time given for the testing period, as experimental testing may result in unintended damage or a lack of provable results. Penetration testers are also limited to known exploits that have been approved for testing, as this prevents accidental damage to systems or system processes. Additionally, experimental exploits can take time to perfect and may need specific modifications for each scenario. By comparison, malicious attackers can often develop and test custom exploits against the various systems of a targeted environment.
- The penetration testers’ background and experience: Penetration testing can cover many topics and areas of testing, and penetration testers' skill sets vary accordingly. Penetration testers working within environments they are not familiar with may miss commonly exploitable vulnerabilities or may not fully understand the assessment scope. To avoid this limitation, manufacturers and business owners should understand the background and limitations of the person conducting the assessment and address any concerns up front.
What should you do after penetration testing?
Upon completing the assessment and the review of findings, the leadership team should prioritize resources for remediation. Many companies tend to begin by knocking off the easy issues, which often have little material impact on business risk. Some considerations for assigning priorities may include:
- Disclosure of assumptions and biases.
- Identifying the critical assets and workflows.
- Isolating the probable threats.
- The effects of the concerns related to probable threats.
- Determining specific scenarios to be included within the review.
Depending upon your agreed-upon definitions for how vulnerabilities and threat event frequency translate to loss event frequency and risk, teams can define their risk rating to categorize and prioritize remediation.
A "critical" risk rating could mean an annualized loss of $1M-$2M at one company, while at another company it could mean $10M or more. When these labels are translated into agreed dollar ranges, leadership teams can avoid subjective interpretations and assumptions. Resources are limited, and without a strategy and plan to determine priorities, you will likely expend resources with little to no impact on reducing your loss exposure.
How often should I schedule a penetration test?
When manufacturers ask how often they should conduct penetration testing, a few factors come into play. According to the EC-Council, three factors can affect how often a company should conduct a penetration test.
- The first factor is the company's size. Large manufacturing companies and businesses will often integrate newer technologies for their internal and external components, requiring more penetration tests to ensure the security of their networks and applications. Smaller companies need fewer penetration tests than larger enterprises, as new features are not frequently changed or installed. As companies change and adopt new technologies, criminals use new vulnerabilities to access sensitive information or internal networks.
- The second factor is the regulations that a business must follow. For example, companies that use or maintain the Payment Card Industry Data Security Standard, or PCI DSS for short, must complete at least two penetration tests every six months. Manufacturers should understand their requirements for regulated compliance before defining the scope and scheduling a penetration test.
- The final factor is the infrastructure where data is stored.
As cloud environments for data storage continue to become more prevalent, provider rules against external penetration testing can affect who performs the penetration test and when it is completed. Some cloud service providers will allow external penetration testing but require the account owner to inform the provider in advance and wait for a response either approving or denying the penetration test. In some cases, cloud service providers will internally conduct a penetration test against their own infrastructure to prevent accidental harm to businesses using shared resources. In addition to the three factors previously mentioned, manufacturers and producers should conduct a penetration test when making changes to the infrastructure and applications used in the network. After changes such as removing or creating firewall rules, or applying application updates, the network's security should be considered unverified until it is adequately tested. As a proud supporter of American manufacturing, Certitude Security® is working diligently to inform leaders and facilitate essential asset protection priorities for supply chain businesses throughout the United States. If you are interested in learning about the empowering services that Certitude Security® can offer, visit our website or coordinate a time to speak to a team member today.
What is Malware Analysis? Benefits, Types, and Tools
What is Malware?
Malware (malicious software) is software or a program designed to intentionally damage a computer, network, or server. The goal of malware is to disrupt or destroy sensitive data and computer systems by infiltrating computer systems discreetly. The most common types of malware are Trojans, viruses, worms, spyware, malvertising, scareware, keyloggers, backdoors, ransomware, and mobile malware.
Signs of Malware Infection on a Computer
Malware may exhibit obvious or subtle symptoms. Here is a list of some of the most common signs of a malware infection:
- Slow and sluggish computer
- Frequent system crashes
- Rapid battery drain
- Ads and pop-ups appearing in unexpected places
- Unexpected loss of access to files and folders on the computer
- Unexpected deletion of files
- Abrupt loss of disk space
- Antivirus getting disabled
- Random increase in internet connections
- Browser settings changing on their own
- The browser opening on its own
- Strange outgoing messages from your device to your contacts
- Random installation of unknown applications on a mobile device
What is Malware Analysis?
Threat actors leverage malware to exploit and disrupt individuals and organizations. Most advanced malware is designed and developed to operate stealthily inside target systems and networks, avoiding detection by antivirus/anti-malware software. It is extremely difficult to detect malware. In a typical security operations center (SOC), security analysts employ various techniques and tools to analyze suspicious files to detect the presence of malware. This process of analyzing a piece of suspicious software, file, or code to understand its capabilities, functions, purpose, origins, and potential impact is called malware analysis. Malware analysis aims to determine whether the suspicious software is malicious.
The outcome of the malware analysis helps security analysts understand, detect, and mitigate potential threats to the organization.
Benefits of Malware Analysis in Cyber Security
Malware analysis plays a crucial role in enhancing cyber threat detection. Following are some of the benefits of conducting malware analysis in organizations:
- Detect unknown cyber threats
- Detect APTs and other stealthy, persistent malware
- Understand malware capabilities and intent
- Understand malware Tactics, Techniques, and Procedures (TTPs)
- Identify Indicators of Compromise (IoCs) and Indicators of Attack (IoAs)
- Help with SOC investigations and incident triage
- Improve the alerting efficiency of threat detection tools
- Serve as a hypothesis in threat hunting
- Avoid incidents, breaches, and attacks
Types of Malware Analysis
There are three main types of malware analysis:
Static Malware Analysis
In static malware analysis, the components and properties of the malware file are analyzed and examined without executing, running, or installing the malware. Static malware analysis is considered one of the most challenging types of malware analysis. In this type of analysis, a malware analyst examines the static properties of the malware, such as binary-level code, functions, strings, C2 (command-and-control) connections, IP addresses, and domains, by disassembling and debugging it. Since advanced and sophisticated malware can deploy file-less payloads and run-time executables, static analysis alone is not the most reliable way of analyzing malware. It is recommended to perform both dynamic and static malware analysis to better understand a malware threat's capabilities.
Dynamic Malware Analysis
In dynamic malware analysis, the malware is executed in a secure environment called a “sandbox” to analyze and understand its operational capability.
A sandbox is an isolated system typically equipped with all the necessary tools and software to analyze suspicious files. Since this type of analysis is executed in a closed and isolated environment, the risk of infecting the corporate network is minimal. However, since advanced malware can spread through networks in unexpected ways, extreme caution must still be taken while performing this kind of analysis. Unlike static malware analysis, dynamic malware analysis focuses on understanding the malicious file's behavior upon execution. In this type of analysis, a malware analyst examines the dynamic behavior of the malware, such as new process creations, process manipulations, process terminations, new registry key injections, registry key manipulations, file downloads, run-time behavior, lateral movement, run-time C2 connections, and API calls. Adversaries have become smart: they know sandboxes are out there, and they have gotten very good at detecting them, which creates a challenge for dynamic analysis. To trick a sandbox, adversaries hide code inside their malware that remains dormant until certain conditions are met. The actual malicious code runs only when those conditions are satisfied.
Hybrid Malware Analysis
Hybrid malware analysis is a sophisticated, advanced approach that combines static and dynamic malware analysis. As described above, static analysis is ineffective at detecting the behavioral properties of malware, and malware can evade sandboxes during dynamic analysis. By combining both techniques in a hybrid analysis, security analysts can work around these limitations and achieve an adequate understanding of the malware.
Stages of Malware Analysis
Malware analysis methods have evolved, and the following are the different stages or steps of malware analysis, often illustrated by a pyramid diagram representing the complexity of each analysis method.
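Two of the static properties mentioned above, file hashes and embedded strings, can be extracted with a few lines of code. This is a minimal sketch of the idea, not a substitute for the dedicated tools an analyst would use (a disassembler, the `strings` utility, threat-intelligence hash lookups); the sample bytes below are fabricated for illustration.

```python
import hashlib
import re

def file_hashes(data: bytes) -> dict:
    """Compute the hashes commonly recorded as Indicators of Compromise."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

def extract_strings(data: bytes, min_len: int = 4) -> list:
    """Pull printable ASCII runs out of a binary, like the `strings` tool.

    URLs, IP addresses, and suspicious API names surfaced this way can
    hint at C2 infrastructure without ever running the sample.
    """
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

if __name__ == "__main__":
    sample = b"\x00MZ\x90\x00http://example.test/beacon\x00GetProcAddress\x00"
    print(file_hashes(sample)["sha256"])
    print(extract_strings(sample))
```

Hashes identify known samples exactly; strings give a first, cheap look at intent. Neither survives packing or encryption, which is why static analysis is paired with dynamic analysis.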
Open Source Malware Analysis Tools
Here are some of the best-known malware analysis tools:
- Process Hacker
- Process Monitor
- Immunity Debugger
- Windows Sysinternals
- Dependency Walker
- IDA Pro
- Hybrid Analysis
- Joe Sandbox
ThreatResponder's Malyzer – A World-Class Malware Sandbox
Malyzer is NetSecurity ThreatResponder's built-in malware analysis sandbox that helps security teams perform deep analysis of evasive and unknown threats. ThreatResponder is an advanced cloud-native EDR solution with a built-in malware sandbox to provide 361° threat visibility of your enterprise assets regardless of their location. With its diverse features and advanced analysis engine, ThreatResponder can help your team automate malware analysis and reverse engineering processes, making it easy, fast, and hassle-free to analyze malicious and suspicious files. Want to try our cutting-edge Endpoint Detection & Response (EDR) security solution with built-in malware sandbox features in action? Click on the button below to request a free demo of our NetSecurity ThreatResponder platform. The page's content shall be deemed proprietary and privileged information of NETSECURITY CORPORATION. It shall be noted that the contents of this page are copyrighted by NETSECURITY CORPORATION. Any violation, misuse, or unauthorized use of this content, "as is" or "modified," shall be considered illegal, subject to the articles and provisions stipulated in the General Data Protection Regulation (GDPR) and the Personal Data Protection Law (PDPL).
Enumeration techniques are a very fast way to identify registered users. With valid usernames, effective brute-force attacks can be attempted to guess the passwords of those user accounts. The defense is making sure no pages or APIs can be used to differentiate between a valid and an invalid username.
- Login: Return a generic "No such username or password" message when a login fails. In addition, make sure the HTTP response and the time taken to respond are no different when a username does not exist than when an incorrect password is entered.
- Password Reset: Make sure your "forgotten password" page does not reveal usernames. If your password reset process involves sending an email, have the user enter their email address, then send an email with a password reset link only if the account exists.
- Sign-Up: Avoid having your site tell people that a supplied username is already taken. If your usernames are email addresses, send a password reset email if a user tries to sign up with an existing address. If usernames are not email addresses, protect your sign-up page with a CAPTCHA.
- Profile Pages: If your users have profile pages, make sure they are only visible to other users who are already logged in. If you hide a profile page, ensure a hidden profile is indistinguishable from a non-existent profile.
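The login recommendation above can be sketched as follows. This is an illustrative fragment, not a full authentication system: the in-memory user store and dummy credentials are assumptions for the example, and the two key points are returning a single generic error message and doing the same amount of password-hashing work whether or not the username exists, so response times stay indistinguishable.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Assumed in-memory store: {username: (salt, password_hash)}
_salt = os.urandom(16)
USERS = {"alice": (_salt, hash_password("correct horse", _salt))}

# Dummy credentials so unknown usernames incur the same hashing cost,
# closing the timing side channel that reveals whether a user exists.
_DUMMY_SALT = os.urandom(16)
_DUMMY_HASH = hash_password("placeholder", _DUMMY_SALT)

GENERIC_ERROR = "No such username or password"

def login(username: str, password: str) -> str:
    salt, stored = USERS.get(username, (_DUMMY_SALT, _DUMMY_HASH))
    candidate = hash_password(password, salt)
    # compare_digest avoids leaking information through comparison timing
    if username in USERS and hmac.compare_digest(candidate, stored):
        return "OK"
    return GENERIC_ERROR  # same message for unknown user and wrong password
```

The same principle applies to the password-reset and sign-up flows: do the same work and return the same response on both the "exists" and "does not exist" paths.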
This CISA Alert reviews many weak security controls and the techniques and procedures routinely used to gain initial access. The Alert was co-authored by the cybersecurity authorities of the United Kingdom (NCSC-UK), Canada (CCCS), New Zealand (NCSC-NZ), the Netherlands National Cyber Security Center, and the United States (CISA, NSA, and the FBI). The following techniques (in MITRE ATT&CK format) were commonly used to implement the initial access tactic (MITRE ATT&CK Tactic TA0001) against victim networks:
- Exploit Public-Facing Application [MITRE ATT&CK Technique T1190]
- External Remote Services [MITRE ATT&CK Technique T1133]
- Phishing [MITRE ATT&CK Technique T1566]
- Trusted Relationship [MITRE ATT&CK Technique T1199]
- Valid Accounts [MITRE ATT&CK Technique T1078]
Threat actors exploit many of the following poor configurations, poor security practices, and weak security controls to carry out these initial access techniques, as described in the Alert:
- Multi-factor authentication (MFA) is not enforced. MFA, particularly for remote desktop access, can help prevent account takeovers. With Remote Desktop Protocol (RDP) one of the most common infection vectors for ransomware, MFA is a critical tool in mitigating malicious cyber activity. Do not exclude any user, particularly administrators, from an MFA requirement.
- Incorrectly applied privileges or permissions, and errors within access control lists. These mistakes can prevent the enforcement of access control rules and could allow unauthorized users or system processes to be granted access to objects.
- Software is not up to date. Unpatched software may allow an attacker to exploit publicly known vulnerabilities to gain access to sensitive information, launch a denial-of-service attack, or take control of a system. This is one of the most commonly found poor security practices.
- Use of vendor-supplied default configurations or default login usernames and passwords.
Many software and hardware products come “out of the box” with overly permissive factory-default configurations intended to make the products user-friendly and reduce troubleshooting time for customer service. However, leaving these factory-default configurations enabled after installation may provide avenues for an attacker to exploit. Network devices are also often pre-configured with default administrator usernames and passwords to simplify setup. These default credentials are not secure—they may be physically labeled on the device or even readily available on the internet. Leaving these credentials unchanged creates opportunities for malicious activity, including gaining unauthorized access to information and installing malicious software. Network defenders should also be aware that the same considerations apply to optional software features, which may come with pre-configured default settings.
- Remote services, such as a virtual private network (VPN), lack sufficient controls to prevent unauthorized access. In recent years, malicious threat actors have been observed targeting remote services. Network defenders can reduce the risk of remote service compromise by adding access control mechanisms, such as enforcing MFA, implementing a boundary firewall in front of a VPN, and leveraging intrusion detection system/intrusion prevention system sensors to detect anomalous network activity.
- Strong password policies are not implemented. Malicious cyber actors can use a myriad of methods to exploit weak, leaked, or compromised passwords and gain unauthorized access to a victim system. Malicious cyber actors have used this technique in various nefarious acts, most prominently in attacks targeting RDP.
- Cloud services are unprotected. Misconfigured cloud services are common targets for cyber actors. Poor configurations can allow for sensitive data theft and even cryptojacking.
- Open ports and misconfigured services are exposed to the internet.
This is one of the most common vulnerability findings. Cyber actors use scanning tools to detect open ports and often use them as an initial attack vector. Successful compromise of a service on a host could enable malicious cyber actors to gain initial access and use other tactics and procedures to compromise exposed and vulnerable entities. RDP, Server Message Block (SMB), Telnet, and NetBIOS are high-risk services.
- Failure to detect or block phishing attempts. Cyber actors send emails with malicious macros—primarily in Microsoft Word documents or Excel files—to infect computer systems. Initial infection can occur in a variety of ways, such as when a user opens or clicks a malicious download link, PDF, or macro-enabled Microsoft Word document included in phishing emails.
- Poor endpoint detection and response. Cyber actors use obfuscated malicious scripts and PowerShell attacks to bypass endpoint security controls and launch attacks on target devices. These techniques can be difficult to detect and protect against.
The Alert reviews many recommended mitigations, including those associated with access control (such as the use of a Zero Trust security model), credential hardening, more robust and comprehensive centralized log management, the use of antivirus programs, detection tools (endpoint and intrusion), regular search for and assessment of vulnerabilities (penetration testing), and rigorous configuration management programs.
Threat Actors Leverage DNS in the Attack Chain
The song remains the same. Threat actors frequently use DNS to support malware infiltration, command and control, and attack execution. DNS is continually used to set up and execute attack chains. An attack may involve DNS queries when the victim’s system is compromised and infected, and DNS is almost always used when an infected system communicates with its command and control (C&C) servers.
Core networking services such as DNS are central to network security defense and protection. Advanced, real-time threat analytics such as those found in BloxOne Threat Defense, focused on DNS services, are critical to identifying and preventing many of these DNS-based attacks. Threat intelligence is an important part of the defensive mix: it can bring you a very current set of malicious hostnames, domains, and IP addresses that your DNS servers can use to detect and block command and control (C&C) communications to malicious destinations. Advanced techniques such as behavioral analytics and machine learning on real-time DNS queries can rapidly detect and stop zero-day DNS tunneling, DGA, data exfiltration, Fast Flux, lookalike domains, and more. Infoblox DDI (DNS, DHCP, IPAM database) data holds valuable information about device activity and actionable network context (such as what type of device it is, where it is in the network, who it is assigned to, and its lease history). This information can be used for essential visibility into ongoing attacks and for remediation strategy. Visibility is also key. BloxOne Threat Defense leverages DDI to provide pervasive asset visibility and awareness, using additional contextual information about a compromised system such as its location in the network, the type of device, and an audit trail of all activity from that system. This helps administrators quickly identify systems that are attempting to reach suspicious and potentially malicious destinations and take quick action to mitigate those threats. The integration of this data with SIEM and SOAR infrastructure can provide significant reductions in time to detect threats and automate incident response.
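One simple behavioral signal of the kind such DNS analytics draw on, offered here as an illustrative sketch rather than a description of any vendor's algorithm, is the character entropy of a domain label: algorithmically generated (DGA) names tend to look far more random than human-chosen ones. The length cutoff and threshold below are arbitrary:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character in a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_dga(domain: str, min_len: int = 12, threshold: float = 3.5) -> bool:
    """Flag long, high-entropy first labels as possible DGA output.

    Real detectors combine many features (n-gram statistics, query patterns,
    ML models); entropy alone produces false positives, e.g. CDN hostnames.
    """
    label = domain.split(".")[0]
    return len(label) >= min_len and shannon_entropy(label) > threshold
```

A production pipeline would score every query in real time and feed hits to blocking policy and SIEM/SOAR tooling rather than acting on a single heuristic.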
When Infoblox detects something malicious, a new device, or a virtual workload on the network, it automatically shares that event information and context with existing security infrastructure such as endpoint EDR, SIEM, SOAR, and other solutions. This data can trigger those security tools to prevent access to the network or scan for vulnerabilities until the system is deemed compliant with policy. For more information on BloxOne Threat Defense: https://www.infoblox.com/products/bloxone-threat-defense/. The full text of CISA Alert AA22-137A can be found here. To learn more, please reach out to us directly via [email protected].
Russia’s invasion of Ukraine could impact organizations both within and beyond the region, including through malicious cyber activity against the U.S. homeland in response to the unprecedented economic costs imposed on Russia by the U.S. and our allies and partners. Evolving intelligence indicates that the Russian government is exploring options for potential cyberattacks. Every organization—large and small—must be prepared to respond to disruptive cyber incidents. As the nation’s cyber defense agency, CISA stands ready to help organizations prepare for, respond to, and mitigate the impact of cyberattacks. When cyber incidents are reported quickly, CISA can use the information to render assistance and to warn other organizations and entities so they do not fall victim to similar attacks. Organizations should report anomalous cyber activity and/or cyber incidents 24/7 to [email protected] or (888) 282-0870.
The attack chain. It’s a term used often in infosecurity. Also known as the kill chain, it was originally a military concept describing the structure of an attack. It serves the same function in cybersecurity, where it outlines the various stages of malware infiltration, deployment, and execution. To break the attack chain, then, means to preempt the attack. This is of obvious significance to business owners, who’d much rather avoid expensive and time-sucking breach cleanups with programs that prevent attacks altogether. But breaking the attack chain is not as simple as it used to be. Cybercriminals are constantly changing methodologies and deployment vectors to fool endpoint defenses. The attack chain is evolving and multiplying, out-thinking traditional, signature-based endpoint security. In fact, nearly 80 percent of businesses have suffered a security-related breach in the last year. That’s why businesses need to evolve their endpoint protection strategy, using a multi-layered approach to stop malware deployment and execution across multiple attack chains. In the following infographic, we’ve outlined how Malwarebytes does just this, using seven different, complementary technologies. Click here for the full PDF version.
Threat actors are using new phishing techniques to steal credentials to the digital gaming platform Steam with the intent to sell them to other users. The phishing technique used is known as a Browser-in-the-Browser attack, a sophisticated technique involving the creation of fake browser windows within the active window. In these campaigns, targeted users receive direct messages on Steam inviting them to join a tournament for a popular video game. This message includes a link to a website for what appears to be an organization hosting eSports competitions and requires users to log in to their Steam account to sign up. This triggers what appears to be a new browser window to open, containing the login page for Steam. This window, however, isn’t a new browser window and is instead a fake window created within the current page. The fake window is mirrored to look like the Steam login page, including the legitimate Steam URL in the address bar as well as the HTTPS secure lock, but when any credentials are entered, they are sent to the threat actor instead. These pages are sophisticated enough to be able to prompt for and steal MFA codes as well. Once the authentication process has been successful, the webpage redirects the web browser to a legitimate address in an attempt to hide the fact that credentials were just stolen. At this point, the threat actors quickly hijack the Steam accounts, changing passwords and email addresses to make it more difficult for victims to regain access. This phishing method, using Browser-in-the-Browser attacks, is gaining in popularity among threat actors due to its sophisticated nature and users’ difficulty in determining that it is a phishing attempt.
There is no clear-cut way to catalog all the types of computer viruses in the world without also addressing their sheer quantity: some are alive and well, some are dwindling, and some are completely out of circulation. Each features distinctive elements, and just when you begin to understand one on a rudimentary level, the next one you encounter can leave you utterly perplexed. Whatever a virus’s status at any given moment, the most prudent computer users will want to be well versed in the history of computer viruses, to gain an overarching understanding of the capabilities, distinct features, and prowess some viruses have had over others throughout the years. It is of little benefit to be aware only of the short list of currently circulating viruses, which at this moment pose negligible danger to users well equipped with antivirus software and other protective tools. Having a thorough understanding of the brief history of computer viruses, however, will educate computer users immensely. For example, newcomers to the computing world may not recall the infamous “Slammer” virus, which had all of the power of a digital terrorist army and was able to effectively bring down powerhouse institutions like Google, Bank of America, and the U.S. 911 emergency contact system, and even the internet as a whole for fifteen entire minutes. The viruses floating around today are nothing like some of the evil firestorms the world experienced in years past. Educating yourself on these notorious viruses serves you in myriad ways. Primarily, it gives you a clear-cut idea of the potential viruses hold, the creative means by which they are deployed, and the seemingly insurmountable damage they can inflict on any user who doesn’t understand the importance of taking preventative, protective measures.
Wikipedia even bemoans the fact that compiling a comprehensive list of computer viruses (even a list of current ones) is made exceedingly difficult by the many issues involved in naming viruses. To explain further: the appearance of a new virus on the cyber scene triggers a collective rush of personnel (antivirus software development teams, security advisory and standards organizations, and even reporters for tech magazines, among others) to analyze the virus and its potential and, ultimately, to give it a name. What ends up happening, as evidenced time after time in cyber history, is that the virus receives a name of sorts to be reported to the public while countless antivirus companies work around the clock to devise powerful and effective countermeasures to quell the attack. Often, the antivirus company that has made the most headway in capturing and neutralizing the virus ends up giving it an altered name, perhaps to align the visibility of the company’s efforts with the antivirus, or perhaps for other reasons. Whatever the motivations, the entry of a new virus into the cyber world typically goes through multiple iterations of naming, making an accurate historical list of viruses a difficult ordeal indeed. An odd example of this phenomenon is the “Palyh” virus, which was later renamed “Sobig.n.” In this instance, the old name is still commonly used to this day, although cyber analysts note that the quicker the renaming takes place, the more firmly the new name sticks. The internationally renowned company Symantec provides solid numbers on virus counts since the early 2000s: 40,000 viruses in the year 2000, 103,000 in 2003, and well over a million by 2018.
Fortunately, Symantec also informs us that only a very small number of today’s viruses are actually in circulation and worth studying. A virus that is out of circulation is referred to as a “zoo virus,” while an active, collectively recognized virus is referred to as an “in-the-wild” virus; these are actual professional terms used in the cyber industry. Want our advice for becoming a well-versed, savvy pro at virus lexicon and top-caliber protection techniques? Read through our article detailing the most compelling in-the-wild viruses of interest to businesses, government, and individuals today, and check out our accompanying pro tips for protecting all the important data that defines your life. Then, when you have free time, visit, explore, take notes on, and bookmark “The WildList Organization International,” which countless internet denizens have called one of the most important websites of all time. MSNBC has even stated that The WildList Organization International is generally “regarded as the most authoritative collection of viruses that are running around the Internet.” With exhaustively detailed lists going back to 1996, this one-of-a-kind resource categorizes and describes in detail the earliest viruses of our technological times, including the ones that wielded significant destruction as we neared Y2K. Brimming with research-rich detail, this site is a true wonder to behold. The one irony? It’s no longer in operation. Thanks to internet technology like the Wayback Machine, however, we can still see the site’s contents in their entirety, with the ultra-detailed virus listings from the ’90s and early 2000s.
Chiam Patak once uttered some of the truest words ever heard: “If you don’t know the past, you can’t understand the present and plan properly for the future.” Honest, prudent, timeless, classic advice that applies to everything in life – not just viruses, but especially viruses. Read on below to learn more about the top computer viruses you need to be aware of in 2018 and how to protect yourself with multifaceted fighting techniques like a champion MMA pro!
2018 has yet to come to a close. However, there has already been an abundance of viruses and hacking attempts that have ultimately defined the year. Symantec has compiled a running list, currently with 42 entries: most are Trojans, four are classified as pure viruses, and five entries note worm-related activity. The Center for Internet Security has also released a report and accompanying infographic highlighting the top ten threats of 2018, shown below.

| Virus name | Virus class | Virus features |
| --- | --- | --- |
| Emotet | Modular banking Trojan | Infection occurs via email, PDF or Word attachments |
| Redyms | Click-fraud Trojan | Infection occurs via download exploitation tactics |
| NanoCore | Remote Access Trojan | Spread via malspam as a malicious Excel XLS spreadsheet; as a RAT, NanoCore can accept commands to download and execute files, visit websites, and add registry keys for persistence |
| Gh0st | Remote Access Trojan | Creates a backdoor into a device, allowing an attacker to fully control the infected device and infected endpoints |
| ZeuS/Zbot | Modular banking Trojan | Keystroke logging compromises credentials at banking websites |
| CoinMiner | Trojan | Uses PCs to generate Bitcoins and installs software that slows systems down |
| Mirai | Malware botnet | Designed to conduct DDoS attacks after a successful exploit |
| Ursnif and Dreambot (identical) | Banking Trojans | Known for weaponized documents and web-injection attacks; recently upgraded to include TLS callbacks in order to thwart antimalware and antivirus software |
| WannaCry | Ransomware worm | Uses the EternalBlue exploit method to spread |
| Kovter | Trojan | Acts as click-fraud malware or a ransomware downloader |

Did I forget to mention that, while you should learn the background and distinctive details of the old-school powerhouse viruses that shook the cyber world, you should also be well versed in the assorted types of viruses in today’s tech world? Yes, believe it or not, a computer virus is not just a dangerous threat to your identity, data, and personal details with a “virus name” slapped on it. Rather, there are different kinds of viruses, many of which want your identity, data, and personal details regardless of what each is called or how it is classified. Fear not, ladies and gentlemen! This may begin to feel like school again, but it is very much not. It is life. Your life. The lives of your family, spouses, partners, children, and other loved ones, whose personal details must be safeguarded in the most comprehensive manner you can manage. Computer viruses threaten and harm the identity details and sensitive data on your computer.
To fend them off, antivirus software is developed to protect your personal, financial, and most sensitive data. The major types of viruses include:
1. Boot Sector Virus – a recurring virus that does most of its damage in the partitioned storage area of your computer
2. Web Scripting Virus – transmitted through the clicking of alluring links
3. Browser Hijacker – auto-redirects your homepage to an unsafe site, essentially hijacking your browser
4. Resident Virus – installs itself in memory, destroys an array of elements, and can return even after removal
5. Direct Action Virus – executed through the downloading of illicit files or attachments, after which it infects your entire system
6. Polymorphic Virus – this oddball changes its own code as it spreads, making it difficult for signature-based detection tools to find and remove
7. File Infector Virus – attaches to executable files and rewrites your files
8. Multipartite Virus – a dual-mechanism virus that uses single or double payloads to deliver damage
9. Macro Virus – exploits the user’s details as well as the details of their friends, family, and contacts
You might be asking yourself how you are supposed to protect yourself from so many different kinds of viruses, each with different attack mechanisms and interests. The answer is easy. There is indeed a scary, seemingly countless number of viruses in the world: ones we have never heard of, ones that seek to exploit not only our details but the details of our loved ones, ones with weird names and seemingly unreal abilities, like being able to run in and out of your system creating havoc as they please. All this is very true. But remember: you didn’t know about all of these viruses before today either. At a basic level, you knew about the dangers of viruses, the need to avoid them, and the investments (antivirus protection) you are required to make to protect yourself. The same remains true here.
Just because you are now aware of the immense scope of viruses in existence doesn’t change any element of your protection game. Keep doing what you’re doing: don’t click on bad links, don’t open suspicious emails, never download anything you aren’t well familiar with, and, most importantly, keep your antivirus software set up, your firewall on, and your internet behavior smart. As a reminder, take a look at the vintage viruses of the last few decades. Great historians tend to believe, on some level, that history repeats itself. Learning about the earliest viruses and their significant impact will serve you well in the future by adding dimension to your plan of action in the event of an attempted hack.
Programmers need to understand the principles and practices common to production software development. Production considerations include such things as version control, code libraries, source control systems, documentation, code reviews, testing tools and methodology, and software release management. Whether you're a one-man shop writing custom applications, or a member of a 100-developer team, understanding these principles will make your life as a programmer much easier. Many companies now use specific methodologies for managing the software development lifecycle, and programmers should be familiar with those methodologies and how they relate to the production cycle. Some common methodologies include Agile, Lean, Scrum, Spiral, and Waterfall. Certifications and workshops are available for many of these methodologies. Take time to review the job postings for your dream company to find out what methodology is used; then make sure that you understand the particulars of that process.
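Testing tools and methodology are among those production practices. As a small illustration of the idea (the function and values here are invented), Python's built-in unittest module frames expectations as repeatable, automated checks that a CI system can run on every commit:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically; in production this runs in the CI pipeline.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Whatever the lifecycle methodology in use, automated tests like these are what make frequent, safe releases possible.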
What is a UDP flood attack
A UDP flood is a type of denial-of-service attack in which a large number of User Datagram Protocol (UDP) packets are sent to random ports on a targeted host. The receiving host checks for applications associated with these datagrams and—finding none—sends back a “Destination Unreachable” packet. As more and more UDP packets are received and answered, the system becomes overwhelmed and unresponsive to other clients. In the framework of a UDP flood attack, the attacker may also spoof the IP address of the packets, both to make sure that the return ICMP packets don’t reach their host and to anonymize the attack. There are a number of commercially available software packages that can be used to perform a UDP flood attack (e.g., UDP Unicorn).
User Datagram Protocol (UDP) is a connectionless and sessionless networking protocol. Since UDP traffic doesn’t require a three-way handshake like TCP, it runs with lower overhead and is ideal for traffic that doesn’t need to be checked and rechecked, such as chat or VoIP. However, these same properties also make UDP more vulnerable to abuse. In the absence of an initial handshake to establish a valid connection, a high volume of “best effort” traffic can be sent over UDP channels to any host, with no built-in protection to limit the rate of the UDP DoS flood. This means that not only are UDP flood attacks highly effective, but they can also be executed with relatively few resources.
Some UDP flood attacks can take the form of DNS amplification attacks, also called “alphabet soup attacks.” UDP does not define specific packet formats, so attackers can create large packets (sometimes over 8 KB), fill them with junk text or numbers (hence the “alphabet soup”), and send them to the host under attack. When the attacked host receives the garbage-filled UDP packets on a given port, it checks for an application listening at that port, associated with the packet’s contents. When it sees that no associated application is listening, it replies with an ICMP Destination Unreachable packet.
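The connectionless behavior described above is easy to see with Python's standard socket module. This is a loopback sketch, nothing resembling flood tooling: note that the client simply fires a datagram at an address with no connection setup of any kind.

```python
import socket

def udp_round_trip(payload: bytes = b"hello") -> bytes:
    """Send one datagram to a loopback 'server' and echo it back uppercased.

    There is no accept()/connect() handshake anywhere: UDP delivers
    (or silently drops) each datagram independently.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))               # let the OS pick a free port
    port = server.getsockname()[1]

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(payload, ("127.0.0.1", port))  # fire-and-forget send
    data, addr = server.recvfrom(1024)
    server.sendto(data.upper(), addr)
    reply, _ = client.recvfrom(1024)
    client.close()
    server.close()
    return reply
```

Because nothing validates the sender before the datagram is accepted, the same `sendto` call works just as well with a spoofed source address at the IP layer, which is exactly what flood attacks exploit.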
It should be noted that both amplified and non-amplified UDP floods can originate from botnet clusters of various sizes. The use of multiple machines classifies the attack as a Distributed Denial of Service (DDoS) threat. With such an attack, the offender’s goal is to overwhelm firewalls and other components of even the more resilient network infrastructures.
Methods of mitigation
At the most basic level, most operating systems attempt to mitigate UDP flood attacks by limiting the rate of ICMP responses. However, such indiscriminate filtering affects legitimate traffic as well. Traditional UDP flood mitigation also relied on firewalls to filter out or block malicious UDP packets. Yet such methods are now becoming irrelevant, as modern high-volume attacks can simply overwhelm firewalls, which are not designed with overprovisioning in mind.
Imperva DDoS protection leverages Anycast technology to balance the attack load across its global network of high-powered scrubbing servers, where it undergoes deep packet inspection (DPI). Using proprietary scrubbing software specifically designed for inline traffic processing, Incapsula identifies and filters out malicious DDoS packets based on a combination of factors such as IP reputation, abnormal attributes, and suspicious behavior. The processing is performed on-edge with zero delay, allowing only clean traffic to reach the origin server.
Learn more about Imperva DDoS Protection services.
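The ICMP rate limiting that operating systems apply is commonly implemented as a token bucket. A generic sketch follows; the parameters are arbitrary, and kernels implement this in C at the network layer, not in Python:

```python
import time

class TokenBucket:
    """Allow bursts up to `burst` events, refilling `rate` tokens per second."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # under the limit: send the ICMP response
        return False      # over the limit: suppress the response
```

This is why the filtering is indiscriminate: once the bucket is empty, legitimate senders' responses are suppressed along with the flood's.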
The U.S. Department of Energy (DoE) awarded $70 million in funding to seven research projects that will assist in the continued development of a supercomputer model of Earth’s climate system, according to an Aug. 30 press release. The Energy Exascale Earth System Model (E3SM) provides climate simulations and predictions through an ultra-high-resolution model of Earth that is run on exascale supercomputers. These computers are millions of times more powerful than modern personal computers, with DoE’s technology being recently named the fastest in the world. “The model is constantly being improved to provide the best simulation and prediction possible to researchers in Earth system science,” the agency said. This technology enables scientific discovery through collaborations between climate and computer scientists, as well as mathematicians. Data from this model enhances scientists’ understanding of climate change. According to the award list, three of the research projects will take place at the Los Alamos National Laboratory in New Mexico, three others will be conducted at the Pacific Northwest National Laboratory in Washington, and the seventh project will be conducted at the University of New Mexico. The studies on the E3SM will range from simulations of ocean circulation in the Atlantic to the dynamics of compound flooding. These projects will give university and National Lab researchers deep insight into the oceans, air and climate, said U.S. Secretary of Energy Jennifer M. Granholm. It will also demonstrate how emissions are impacting the world around us right now and in the future. “Being able to understand and predict what is happening in a system as complex as planet Earth is crucial to finding solutions to climate change,” Granholm said.
November 8, 2013

Three Mistakes Schools Make With BYOD

Mobile devices and education are nearly synonymous these days. Higher education students bring more than three Wi-Fi-enabled devices with them to campus. K-12 students bring smartphones, iPod touches, Kindles, tablets and laptops, which are used for digital textbooks and online testing. The explosion of these powerful mobile devices puts desktop applications into the hands of students, while the latest Wi-Fi standards, such as 802.11n and the newly introduced 802.11ac, eliminate the need for wires. All these Wi-Fi devices can wreak havoc on an educational institution's wireless LAN and overload an IT department or administrator. The challenge for IT is how to onboard all these devices securely and apply the appropriate network access policy to protect the network, its resources, and the individuals using it. This led to BYOD (Bring Your Own Device), which started simply enough as a question of how to onboard all these devices without manual setup or registration of MAC addresses by users or IT staff. But BYOD goes beyond simple onboarding: it is about identifying a student, authenticating that student, and then onboarding the student's devices with secure connections while provisioning each device with the appropriate access. View the Full Article from Business Solutions
Machine learning and your business: How seemingly intelligent machines are changing the world The field of machine learning is advancing rapidly, and it won’t be long before it touches your business. Just as the advent of mobile apps upended business, and as the advent of the Internet of Things changed industries, so too will advances in machine learning require adaptation by workers, managers, and executives. Machine learning is no longer a futuristic concept, nor an empty buzzword. Its most visible incarnation is in self-driving or assisted-driving cars – Google, Uber, Apple, and even Ford are all making headlines with their investments and advances. These are likely to eliminate many transportation jobs as they become more effective and economical – and their astonishing capabilities are made possible by machine learning. And in case you think this is only about robot cars, or robots building cars, consider how JPMorgan Chase recently unveiled an application that takes away 360,000 hours of work each year from lawyers and loan officers. This field is still in its infancy, yet is already having profound effects. In the next few years, we can expect to see changes that will affect business all over the world. The nature of machine learning Machine learning refers to any computer that can draw insights from data without explicitly being told where to look or what to do. The software learns through iterative modeling. As new data are discovered, the models are adapted to account for it. With enough iterations, a machine can understand complex tasks and situations to a degree that would be impossible without machine learning. One special type of machine learning that exemplifies the qualities and nature of the field involves the use of “neural networks.” These are systems of software modeled on the human brain, and designed to emulate some of its capabilities. 
This technique is also known as “deep learning.”

How it works: Neural networks simulate layers of “neurons,” which individually analyze data. These neurons are interconnected, so the output of each “layer” of calculation affects how the next layer goes about interpreting its inputs. Over time, these layers can practice interpreting a set of data, and learn to perform a task successfully.

This is a fairly abstract explanation; an example will help. Let’s say you are creating software designed to recognize and read road signs. Tasks such as these have traditionally been very difficult for machines – hence the effectiveness of old-style reCAPTCHA technology. While it’s pretty easy for us humans to recognize and parse those warped letters, it has traditionally been hard to do in software. Machine learning is changing that.

While the machine is learning, images of road signs, tagged with what they say, are fed into the machine together with images of objects that are not road signs. Once the data has passed through each layer, the machine makes a prediction about what is in the image. These predictions will initially be very inaccurate, but will improve as the machine practices. Once it is good enough at recognizing the learning data we gave it, it’s ready to start working on things it has never seen before.

As a side note, you may have noticed lately that reCAPTCHA has moved from warped text to asking you to identify things in images. That’s because hackers have gained the ability to “read” that warped text. As an interesting aside, your responses to those image-based reCAPTCHA challenges are in fact being used to further train Google’s own image recognition engine.

The underlying processes of machine learning are complex, and heavy with math and statistics. But it may help for you to understand the basic flow. Data will pass through four layers:

- Convolution: The image is broken down into different features – yellow pixels, straight lines, edges, etc.
- Activation: Less-obvious features are picked out from the pool of basic features. Words and letters start to emerge.
- Pooling: A huge amount of data is generated in the first two layers. The pooling layer prunes this data; only the best examples of each feature are preserved.
- Fully connected: All the data are collated and a final determination is made as to the nature of the image.

The potential of machine learning

If machine learning has not touched your industry yet, there is a surprisingly good chance that it will one day, and soon. Here are just a few of the industries that are making use of machine learning today:

- Financial services: Machine learning is being used to prevent fraud and to identify market opportunities by learning from customer data. And, as noted earlier in the JPMorgan Chase example, it’s being used to interpret contracts and loan agreements in great volume.
- Government: Public utilities are using data mining and machine learning to pinpoint insights in huge volumes of usage data – insights that humans would be unlikely to identify on their own in such massive datasets.
- Healthcare: In this field, machine learning is starting to overlap with another notable trend in technology: the Internet of Things. Wearable devices and sensors are being used to gather data about patient health in real time, and machine learning is used to analyze that data to gain insights into its meaning.
- Marketing: The ability of sites such as Amazon to analyze your buying history and make accurate targeted recommendations, or to dynamically adjust pricing, is due in large part to the power of machine learning.
- Oil and gas: Machine learning has a tremendous number of applications in this field, from finding new sources of oil and gas, to analyzing the potential failure of equipment, to streamlining oil distribution.
- Transportation: On top of the obvious impact of assisted or autonomous driving, machine learning technology is revolutionizing this field through better mapping, scheduling, fleet planning, route optimization, and much more.

Like many innovations, machine learning did not just spring into existence. The mathematics are well understood, and neural network research has been going on for decades. What makes today the era of machine learning is that we can now realize its potential, thanks to the massive amounts of data available to us – the proliferation of free data online, data from your customer interactions, IoT sensors gathering data from your systems, and advances in data storage to hold it all cheaply – coupled with scalable processing power available in the cloud. These technologies are only likely to become more powerful over time; the long-term effects are very difficult to predict.

Machine learning and your business

It’s time to start thinking about how machine learning will affect your field. How could it make your life easier? More importantly, how might competitors start using it against you? What opportunities are there for your company to take advantage of it?

As long-time software developers, we’ve seen how software is increasingly used to solve business problems. Even if software isn’t “what you do,” your business runs on software and your products are partially (if not fully) software-based. The same trend is already happening with machine learning. Market leaders are finding ways to incorporate machine learning into their products and services, or to develop new products and services that depend completely on machine learning. Solving a business problem or opening new opportunities with machine learning may be a matter of choosing to invest in it. The advantage will lie with those who stay on the bleeding edge.
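The four-layer flow described above can be sketched end to end in a few dozen lines. This is a toy forward pass with random, untrained weights – the image size, kernel, pooling window and two-class output are illustrative assumptions, not details from the article – but it mirrors the convolution, activation, pooling and fully connected stages in order:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 8x8 grayscale "image"
image = rng.random((8, 8))

# 1. Convolution: slide a 3x3 filter over the image to extract a feature map
kernel = rng.standard_normal((3, 3))
conv = np.array([[np.sum(image[i:i + 3, j:j + 3] * kernel)
                  for j in range(6)] for i in range(6)])

# 2. Activation: ReLU keeps only positive responses
activated = np.maximum(conv, 0.0)

# 3. Pooling: 2x2 max pooling keeps the strongest response in each region
pooled = activated.reshape(3, 2, 3, 2).max(axis=(1, 3))

# 4. Fully connected: flatten and combine everything into class scores
weights = rng.standard_normal((pooled.size, 2))  # 2 hypothetical classes
scores = pooled.flatten() @ weights
prediction = int(np.argmax(scores))
```

In a real system, training would iteratively adjust `kernel` and `weights` so that predictions improve with practice – exactly the iterative modeling the article describes.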
High-bit-rate Digital Subscriber Line (HDSL) technology is a transparent replacement for a repeatered T-1 line. It allows DS-1 signals to be transported over distances of up to 12,000 feet (3,700 meters) on unconditioned copper cable – that is, cable without loading coils or other line conditioning. HDSL is easier to maintain and provision than conventional T-carrier span designs because it requires equipment only at the two ends of the line, rather than a repeater every 1,800 meters (6,000 feet) as conventional T1 lines do. HDSL was designed as an alternative to T-carrier services (such as T1 and E1 lines). It essentially operates in the same way as ADSL, except that it is always symmetrical, meaning data speeds are the same upstream and downstream. HDSL can carry both voice and data over a single communication link.
An OTDR, short for optical time-domain reflectometer, is a fiber optic tester used to characterize the optical networks that support telecommunications. It can measure loss, optical return loss (ORL) and optical distance on a fiber optic link. By providing pictorial trace signatures of the fibers under test, an OTDR can also offer a graphical representation of the entire link. There are many OTDR brands on the market, however, and choosing the right one for your application can be challenging, so this post offers some guidance for selecting an OTDR.

Fiber testing is an essential procedure for ensuring that a network is optimized to deliver reliable, robust service without faults, and there are two main reasons an OTDR is needed. First, service providers and network operators want to ensure that their investments in fiber networks are protected, and installers need to perform bi-directional OTDR tests and provide accurate cable documentation to certify their work. OTDRs can also be used for troubleshooting problems such as break locations caused by dig-ups. Second, premises fiber networks have tight loss budgets and little room for error. Installers therefore have to verify the overall loss budget, which is a big task with only a light source and power meter. An OTDR can easily pinpoint the causes of excess loss and verify that splices and connections are within appropriate tolerances, which saves a great deal of time. It is also the only way to determine the exact location of a fault or break.

Before choosing an OTDR, ask yourself the following two questions. First, what are you going to test: loss, reflectance, splice alignment or distance? Make sure the OTDR you choose can do what you need easily, quickly and accurately.
If you need to test a “live” fiber (for example during a “hot cut” – splicing fibers in a working cable), you need an OTDR that can make an active splice loss measurement in real time. Second, where are you going to do the testing? A good understanding of an OTDR's applications will help you make the right choice for your specific needs. What kind of networks will you test – LAN (local area network), metro or long haul? What is the maximum distance you might have to test – 700 m, 25 km, 150 km?

Many people are familiar with OTDRs but do not know how to choose the right one. Beyond quality, which must always be a focus, three other factors deserve close attention. Maintaining fiber health is challenging and makes fast troubleshooting critical, yet almost every OTDR on the market today is designed for carrier applications. As a result, many OTDRs have very complex user interfaces that require the user to make sense of numerous buttons and controls and navigate cumbersome multi-level menus, which hurts operating efficiency; a simplified, task-focused user interface is therefore important. With the wide use of short patch fibers and various types of fiber connectors, details of the network link – loss, connectors and reflectance – are critical to ensuring performance, and OTDRs with an attenuation dead zone of more than 3 m are no longer suitable for testing data center fiber. When problems arise, an OTDR that provides precise per-fiber information helps users of various skill levels troubleshoot efficiently and accelerate network recovery. And as data centers grow and change, it is challenging to ensure that every fiber is installed with certified quality, so integrated project management capabilities with cable-by-cable granularity can save time and planning effort.
An OTDR with built-in project management capability allows users to plan day-to-day activities without using a personal computer or laptop.

Selecting the proper OTDR to test your network not only strengthens the network's reliability but also improves the efficiency of the testing job and documents the quality of the work. Before selecting an OTDR, therefore, consider the applications and specific needs of your testing work to ensure the instrument is suited to them. FS.COM provides various types of OTDR at different wavelengths, such as 850 nm, 1310 nm, 1550 nm and 1625 nm, so you can find one that best suits your network.
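One of the measurements mentioned above – optical distance – comes directly from the round-trip time of the reflected pulse. A minimal sketch of that calculation (the group index value is an assumption on my part; roughly 1.468 is typical for standard single-mode fiber, and real OTDRs let you set it per fiber type):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def event_distance_m(round_trip_time_s, group_index=1.468):
    """One-way distance to an event: d = (c * t) / (2 * n).

    The factor of 2 accounts for the pulse travelling out and back;
    n is the fiber's group index, which sets the speed of light in glass.
    """
    return C * round_trip_time_s / (2.0 * group_index)

# A reflection arriving 100 microseconds after launch sits roughly 10.2 km out.
distance = event_distance_m(100e-6)
```

Because the group index must be supplied, an inaccurate index setting shifts every event's reported location – one reason accurate cable documentation matters.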
Internet usage around the world is soaring to unprecedented heights, yet billions of people – including most of the world’s poorest – remain unconnected, and this situation is unlikely to change in the near future, according to Cisco’s latest Visual Networking Index (VNI) research. Currently, there are thought to be a little over 3.4 billion internet users around the world (based on 2017 figures), representing 45% of the world’s population, and while this will grow to 4.8 billion users by 2022, around 60% of the global population, this will still leave at least three billion offline.

In a blog posted to Cisco’s website, Thomas Barnett, the organisation’s director of service provider thought leadership, wrote that realistically, the internet would never be a priority for many people. He noted that the internet alone cannot possibly solve all of the pressing social, economic and environmental problems that humans face. “That being said, there are some compelling correlations that have been associated with internet access and better living conditions (or prosperity) in general,” he wrote.

In May 2018, the World Economic Forum (WEF) said the internet was in danger of creating an “unequal wealth explosion” that would serve to exacerbate existing divides between the richest and poorest. It pointed to the success of Vodafone’s M-Pesa micropayment service in parts of sub-Saharan Africa as evidence of how securing internet access can bring socioeconomic benefits to people in the developing world.

In spite of the digital divide, by 2022 there will be three devices or connections per person globally (with half of those being internet of things connections), rising from 18 billion in 2017 to 28.5 billion four years from now, while some of the world’s richest users will own and operate up to 10 connections each. The vast proliferation of connections over the next four years also heralds a massive spike in data generation and consumption.
Already, global internet protocol (IP) traffic has hit 122 exabytes per month and will hit 396 exabytes per month, or 4.8 zettabytes per year, by 2022. For context, only around 4.7 zettabytes of IP traffic in total have crossed the internet since the early 1980s.

“The size and complexity of the internet continues to grow in ways that many could not have imagined. Since we first started the VNI Forecast in 2005, traffic has increased 56-fold, amassing a 36% CAGR with more people, devices and applications accessing IP networks,” said Jonathan Davidson, senior vice-president and general manager of Cisco’s service provider business. “Global service providers are focused on transforming their networks to better manage and route traffic, while delivering premium experiences. Our ongoing research helps us gain and share valuable insights into technology and architectural transitions our customers must make to succeed.”

As forecast in previous editions of the VNI, video, gaming and multimedia will form the bulk of the traffic transiting the internet, up to 85% of the total, with video representing 82% of this. To cope with this demand, Cisco predicts operators will ramp up the pace of their fixed and mobile broadband network investments over the next few years, with average global fixed speeds expected to nearly double from 39Mbps to 75Mbps by 2022, and average mobile network speeds expected to triple from 8.7Mbps to 28.5Mbps.
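The forecast's headline figures hang together arithmetically, which is easy to check (the 13-year window from 2005 to the 2018 report is my reading of "since we first started the VNI Forecast in 2005"):

```python
# 36% CAGR compounded over 13 years (2005 -> 2018)
fold_increase = 1.36 ** 13           # ~54.5x, in line with the quoted "56-fold"

# 396 exabytes/month, annualized and expressed in zettabytes (1 ZB = 1000 EB)
annual_zettabytes = 396 * 12 / 1000  # 4.752 ZB, matching the quoted ~4.8 ZB/year

# 39 Mbps -> 75 Mbps is the quoted near-doubling of fixed broadband speed
fixed_speed_ratio = 75 / 39          # ~1.92
```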
The 20-day, 150,000-hand Brains vs. Artificial Intelligence Texas Hold’em poker tournament, held at the Rivers Casino in Pittsburgh, is about half-done. And while it’s not a total surprise that an artificial intelligence platform created at nearby Carnegie Mellon University is beating its human counterparts, the size of the lead and the way it’s winning are definitely raising some eyebrows. In 2015, four leading pros amassed more chips than their AI opponent, called Claudico, but the margin of victory wasn’t large enough to prove whether the humans or the AI were actually the better poker players. Much of this was attributed to the humans’ ability to adapt to Claudico’s strategies, an ability the AI lacked. Heads-up, no-limit Texas Hold’em is seen as an ultimate test for artificial intelligence: it hides far more information from the players than the limit version of the game that AI “solved” back in 2015, and with only two head-to-head competitors, there’s more guessing and anticipating what an opponent might do. The AI being used this time around is called Libratus and was developed by Carnegie Mellon computer science professor Tuomas Sandholm and Ph.D. student Noam Brown. They equipped Libratus (which is Latin for balance) with algorithms that allow it to analyze the rules and establish its own strategy. A powerful supercomputer called Bridges also allows Libratus to refine its poker-playing skills by sifting through past games and performing calculations in real time, helping it compute strategies for each hand based on what it has learned from its opponents, as well as from its own mistakes. Each night, the Pittsburgh Supercomputing Center’s Bridges computer performs computations that further sharpen the AI’s strategy. Heading into gameplay today, Libratus had built a lead of over $700,000 on four of the game’s best professional players. This level of problem-solving, learning from mistakes and critical “thinking” could transport AI to a whole other level.
On the manufacturing front, this could lead to better troubleshooting, fewer errors and greater plant-floor safety as machines learn to be more efficient and more aware of their interactions with human counterparts.
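The article doesn't describe Libratus's algorithms, but the published research behind this generation of poker AIs centers on counterfactual regret minimization. A minimal regret-matching loop for rock-paper-scissors (a toy example of mine, not Libratus's actual code) illustrates the core idea of a program establishing its own strategy by accumulating regret over self-play:

```python
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats action b, -1 if it loses, 0 on a tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from(regrets):
    """Regret matching: play each action in proportion to its positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=10_000):
    # Slightly asymmetric starting regrets so symmetric self-play doesn't sit
    # at the uniform fixed point from the very first iteration.
    regrets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy_from(regrets[p]) for p in range(2)]
        for p in range(2):
            opp = strats[1 - p]
            # Expected value of each pure action against the opponent's mix,
            # and of the mixed strategy actually played.
            ev = [sum(payoff(a, b) * opp[b] for b in range(ACTIONS))
                  for a in range(ACTIONS)]
            played = sum(ev[a] * strats[p][a] for a in range(ACTIONS))
            for a in range(ACTIONS):
                regrets[p][a] += ev[a] - played   # accumulate regret
                strat_sum[p][a] += strats[p][a]   # track average strategy
    total = sum(strat_sum[0])
    return [s / total for s in strat_sum[0]]

# The *average* strategy converges toward the Nash equilibrium, which for
# rock-paper-scissors is uniform: (1/3, 1/3, 1/3).
average_strategy = train()
```

Each iteration's strategy keeps shifting, but the running average approaches the game's equilibrium; scaled up enormously and combined with abstraction and endgame solving, this family of techniques is the foundation the era's poker AIs were built on.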
The latest tools from Apple have the potential to drive education's digital shift and transform classroom practices. But, as with any advancement in education technology, there are concerns about misuse or the propagation of inferior teaching habits. When technology like Apple’s Classroom app can improve the way teachers teach and students learn, these concerns should be mitigated rather than used as an excuse not to implement the technology's advantages. Here are a few benefits to share with the skeptics, each of which provides more freedom for teachers.

Apple’s Classroom app includes the ability to view a student’s screen while in the classroom. This functionality allows teachers to be mobile while still checking in on students’ progress. Untethering teachers from their desk, whiteboard, or podium enables them to meet students' learning needs: teachers are free to move about the room, working one-on-one or with small groups of students.

With screen view, the possibilities are endless. In addition to increased mobility, the Classroom app comes with a variety of features that promote positive and effective teaching practices:

- Real-time checks for understanding. Acting as a student response system, Classroom lets teachers see student progress, notes, or answers to questions displayed on their iPads in real time. Instead of waiting until test day, teachers can check for understanding multiple times throughout a lesson to ensure students are on track.
- Academic achievement for every student. Seeing the progress of individual students through screen view helps teachers recognize which students are progressing adequately and which may need more assistance. Identifying students who may fall behind earlier in a lesson increases the likelihood that they’ll get the help they need.
- More student-to-student engagement.
With the ability to AirPlay screens, teachers who observe students’ screens can recognize opportunities to spontaneously share student work with the class, setting students up to be better lifelong contributors and collaborators.
- Fewer interruptions and less conflict when students get off task. If a teacher suspects a student is off task, they can quickly and unobtrusively check in and pause the student's screen if necessary, reducing the escalation, frustration, and conflict that could arise from a negative encounter. This streamlined interaction ultimately leads to more active learning and a better experience for students and teachers.

Fear, uncertainty, and doubt should not drive decisions. Identify concerns and risks and put plans in place to mitigate them, especially when tools enable a more engaged environment that supports student learning. While these tools bring a change to the classroom environment, they support a teacher’s need to check for understanding and ensure students are progressing as expected – all while minimizing interruptions and distractions. While the fear of new technology or of the unknown may come into play, schools and teachers should consider how the benefits of creating a more engaged environment for students far outweigh any uncertainty.
Sewanhaka: Supporting individual learners with Apple technology

Creating equity among students with Apple

Being a teacher is hard. Classrooms are filled with kids who come from a variety of socioeconomic backgrounds and have varied levels of learning abilities. These factors alone create challenges for educators who strive to deliver the same quality education to all of their students. Sewanhaka Central High School District, too, struggled with this reality. Then they implemented iPad devices.

Comprised of five high schools that serve a diverse population of 8,600 students on Long Island, New York, Sewanhaka knew they needed to shift their teaching mentality to better meet the needs of their student population. An extensive search for the most comprehensive tool brought them to Apple - a progressive change for the district’s teachers and students.

Selecting the tool for student learning

“I was told this would never be an iPad district,” said Brian Messinger, district coordinator of classroom instructional technology and student achievement, Sewanhaka Central High School District. At the time, in 2015, the district was piloting Microsoft Surface tablets with teachers. But Messinger said they knew they needed a better option. That’s when they began a decision analysis - a process where teachers, students, parents and administrators weigh in on which devices would ultimately best serve the students. During a technology expo at the district, representatives from Google, Microsoft and Apple presented to the 250 members of the selection committee how their technology would meet the district’s specific needs.
“We went through a detailed, mathematical process to rank the devices against our criteria, and the iPad won by a landslide,” Messinger said. Sewanhaka started a pilot with teachers and a small number of students in each school. The following year, all seventh and eighth grade students received iPad devices. Each year, two more grades got devices, until the final rollout in the fall of 2018. Now, every Sewanhaka student has an iPad - a total of nearly 9,000 across the district.

Seeing the impact in special education

While there’s no doubt that implementing a one-to-one iPad program helps bridge the socioeconomic gap between students, Sewanhaka special education math teachers Caitlin Wheeler and Susan Bach said the devices arguably benefit their students the most. And when it came to defining the biggest benefit to this student population, they commented in resounding agreement, “The iPads help our kids with organization.” “Some of our students tend to need extra support with organization and executive functioning,” Bach explained. This all changed with the iPad devices. Prior to the implementation, Bach, who’s been with the district for 11 years, said she received many phone calls a year from parents looking for answers. “Most of the time they were calling to say they couldn’t find their child’s notes,” she said. “They asked, ‘Without them, how can I help my child?’ And it was a fair question.” She sought out the answer and found it with OneNote, a digital note-taking app. “Now with this technology, it allows us to not only have notes all the time, but it also enables us to make tutorial videos. There’s a lot more support with the organization piece, which is fantastic for our students.” The parents are grateful, too. Bach provides all of her students’ parents with the iPad login information so they can easily access their child’s homework and notes - a simple, yet valuable way to help parents make sure their child is achieving a higher level of success.
And all those calls from parents... Bach said they dropped to just one in the past two years.

Giving teachers the gift of time

As an additional bonus, Wheeler said digital notes save her loads of valuable time. She explained that almost all of her students have a modification on their individualized education program (IEP) that allows them access to class notes. “You used to have to make a photocopy, give it to the student, and the student puts it in their folder never to be seen again,” she explained. “Now students can find our class notes, and they’re categorized and organized by the teachers.” Wheeler said this functionality alone gives her more time to focus on other areas of teaching. Both Wheeler and Bach said the iPad is also a valuable tool when it comes to grading. “If I can do their 20 multiple-choice questions on the iPad, and they get their feedback right away, then I can grade their long-answer problems with extra time,” Wheeler said. She then uses the extra time to build lessons that support the areas where most students struggled on the test. She said, “That freedom for us as educators is huge.”

Eliminating the stigma of personal differences

While the benefits the iPad devices provide teachers are undeniable, Wheeler said it’s more important to recognize that the iPad can help eliminate the stigma students face when they differ from their peers. Many students, she said, previously had to leave the room to have readers for tests and quizzes - a modification on their IEP. “Now they just put in their headphones and listen to the question,” she explained. “This not only allows those students to remain in class, but it also levels the playing field for students with different reading levels.” Christopher Carmody, the assistant principal at H. Frank Carey High School, part of the Sewanhaka Central High School District, couldn’t agree more.
He said, “Whether it’s a differentiation in terms of a learning style, the ability to accommodate certain disabilities, a hearing impairment or a vision impairment, teachers are instantly able to cater to all different learners through the iPad.”

Implementing a new form of old technology

But the benefits of the iPad, Wheeler said, span even further. When it comes to testing, she explained, having the right technology can make or break a student’s success. Last year, for instance, Wheeler said she didn’t give any homework to her algebra class - not because they didn’t need the practice, but because most of her students couldn’t afford graphing calculators. “And some of them didn’t have Wi-Fi at home,” she added. Thankfully, things changed with the addition of GeoGebra in 2018. The district added the GeoGebra graphing calculator app to the district’s iPad devices as a way to encourage digital equity for all students. Their first large success with the free app came in early 2019 when more than 250 students completed their Algebra I, Geometry and Algebra II New York State Regents exams with support of the app. The district used Jamf Pro to lock each device into exam mode and restrict all other device functionality. “Without Jamf, this wouldn’t have worked,” Messinger said. “From the start, Jamf recognized this was a unique project, and they invested resources to help us succeed. We never could have done it without that partnership.” As the first district to have a significant number of students take a paper-based New York state exam with iPad devices, Messinger said he’s proud of Sewanhaka for using technology to break down equity barriers. Robert Pontecorvo, coordinator of mathematics, Sewanhaka Central High School District, said in one of the district’s schools, 300 kids didn’t have graphing calculators for a previous test - an unacceptable disparity between students.
“It’s a very different feeling to be a kid who has to borrow a calculator after school in order to complete homework, versus that kid who goes home with a calculator,” he said. “That wasn’t right, and that’s thankfully changed now that we have GeoGebra.”

Ensuring streamlined device management

A small team of three uses Jamf Pro on a daily basis - the only way, Messinger said, the district can support their nearly 10,000 devices. “Jamf allows us, in a moment’s notice, to provide our teachers and students with whatever they need, when they need it,” he said. “So when we make a decision that’s in the interest of students for education, Jamf allows us to do that quickly and seamlessly.”

Messinger and his small team live in configuration profiles, as well as device and app records. He said it’s these functionalities that allow them to streamline the district’s device management in ways no other competing MDM could.

But there’s more to Jamf than the power of Jamf Pro, Messinger said. “Jamf has become more than just another company we do business with,” he explained. “It’s become a true partnership, where if we have something we need to figure out, Jamf offers us the solution and works with us.”

Whether it’s in the IT office or the classroom, district administrators and educators continuously work together to deliver an elevated level of learning to their students. And while they agree the iPad devices support their mission of creating modern classrooms, they also recognize it’s important to continually examine how technology can enhance traditional teaching methods. Only then, they said, will their students truly benefit from the district’s iPad implementation.

Encouraging teachers to embrace technology

While Kevin Dougherty, the principal at Elmont Memorial High School, part of Sewanhaka Central High School District, said they’ve made big steps in providing students with 21st century skills, they need to continue to live in a cloud-first environment.
“Giving them that space to work on projects, to collaborate and communicate on the iPad, gives students a sense of what they’re going to have to do after they leave us, in college, but more importantly, after college when they’re in the workplace,” he said. “They’re going to have a leg up on other students, because they will have some of that experience under their belt, as opposed to students who are still working with just paper and pencil.”

Of course, setting the paper and pencil aside for an iPad is easier for some than others. Even still, Bach tells hesitant teachers, “Once you take that first step and get out of your comfort zone, the benefit that will come from it is invaluable.”

Like she often does in her own classroom, Bach encourages her fellow educators to test new apps and explore the extensive functionality of the iPad. While everything she does isn’t perfect, it’s still part of the educational journey.

“We’re becoming learners, so we need to change too,” she said. “We ask students to do it all the time, so now it’s our turn.”
HARD DRIVE ERASER

When it comes to data erasure, it would seem many people think that clicking “empty trash” is adequate. That’s unfortunate, because not only is that thinking inadequate in most typical cases, it’s also extremely dangerous. What we typically consider an act of “data erasing” in the modern workspace is often fundamentally insecure, and relying upon these methods can expose you, your clients, and your work to monumental threats.

Adopting a secure data erasure process is not only important in terms of your ethical and moral obligations as a data provider, it’s also extremely important to your security posture. Add to this the fact that your company image is represented entirely by what you’re able to do - and what damage you can prevent - and adopting a more secure method of data erasure becomes that much more important and fundamental to success.

Let’s look at why typical data deletion is not enough, and what systems can be implemented to negate these issues.

WHY DELETION IS NOT ENOUGH

To understand why simple deletion is not enough, we should consider how data is stored on a hard drive. Drives store data as a series of 1s and 0s in a magnetic medium on the platter. This storage method creates patterns on the platter that represent the data structure of the drive’s contents, allowing the data to be read back magnetically.

When data is deleted, what actually happens is not “deletion” in the true sense of the word. The data is simply marked for deletion, and as more space is required by the operating system, that section of the drive is overwritten and reused for storage. This makes for extremely fast data handling but, unfortunately, does not actually erase any data. What you end up with instead is forensic data: data that is left on the drive and marked as “deleted”, but is still recoverable by anyone who knows what they’re doing.
This results in an incredible amount of insecurity, partly because the data is recoverable, and partly because the owner assumes the data is erased when it’s not.

METHODS OF COMPLETE ERASURE

Before we dive into strong solutions for data erasure, we need to clarify some terminology. First and foremost, we need to distinguish erasure from destruction, as these terms are often conflated with one another.

DRIVE DESTRUCTION IS NOT ERASURE

While destruction is often conflated with erasure when talking about data handling processes, the fact is that they are fundamentally different. Methods such as using magnetic fields to render drives unusable (degaussing), shredding the drives, or even pulverizing them are effective at removing the data from a physical standpoint, and in this sense we can call them “data destruction”. The problem is that the drive is destroyed in all of these processes, making them a very expensive way of dealing with the issue of stored data.

Additionally, in some cases this destruction doesn’t even remove the security concern at hand. First, you typically have to send the drive out of a secure environment to a second company that processes the drives, which leaves the data insecure in transit. Second, the drive is destroyed, but in some cases (such as with SSDs and other flash-based memory) even a single surviving chip could contain a great amount of data, exposing it in an insecure manner.

The simple fact is that drive destruction is too expensive, too risky, and too time-consuming for most low-security data concerns.

SOFTWARE-BASED DATA ERASURE

Thankfully, we have some less expensive and less time-consuming methods at hand. One of the best approaches is software-based drive erasure. In this case, the software overwrites the actual data itself with a set pattern.
This pattern, usually all zeroes, all ones, or a semi-randomized sequence, “resets” the drive into a forensically empty state, resulting in a clean drive that can be put to many different uses. This is fundamentally more secure than simple deletion and formatting because you’re not just marking the data for removal or changing the file system structure; you’re actually changing the magnetic state of the platter, destroying the data without harming the drive itself.

That being said, this approach is also less secure than shredding or destruction. The trade-off is that, while you lose some measure of absolute security, you end up with a drive that can still be used and an auditable trail of data destruction.

ELEMENTS OF AN EFFECTIVE ERASURE SYSTEM

Now that we know what type of solution we want to use, what specific implementation is best? We can identify the optimal system by breaking down what we expect our solution to offer.

First, our solution should be able to use patterns. Utilizing different patterns for different data sets, or even for the same data set over multiple passes, helps ensure that data is properly erased. While software-based tools are very powerful, magnetic storage can leave traces of erased data through partially set bits that were “skipped” during initial passes, so support for multiple passes can result in more secure erasure.

Second, we need a solution that complies with the specific legal requirements that may apply to the type of data we handle. For instance, PCI DSS and the Data Protection Directive both provide relatively strong legal protections for data, and ensuring that your solution is compliant with such regulations is extremely important.

Third, our solution should be simple to use, with minimal training and hardware cost to set up, while at the same time being very powerful.
We need a solution that is adaptable without requiring extensive professional education and training to leverage its capabilities.

ClaraWipe offers all of the elements that we require. Not only is ClaraWipe extremely powerful, offering multiple passes, random character substitution, and other advanced data processing modes, it is also easy enough to use without extensive training. ClaraWipe can be integrated into existing solutions and your current stack without massive restructuring or rebuilding. Most importantly, ClaraWipe meets or exceeds all major national, international regulatory, and technical standards, including:

• Sarbanes-Oxley Act (SOX)
• HIPAA & HITECH
• The Fair and Accurate Credit Transactions Act of 2003 (FACTA)
• US Department of Defense 5220.22-M
• CSEC ITSG-06
• Payment Card Industry Data Security Standard (PCI DSS)
• Personal Information Protection and Electronic Documents Act (PIPEDA)
• EU Data Protection Directive of 1995
• Gramm-Leach-Bliley Act (GLBA)
• California Senate Bill 1386

Simply put, ClaraWipe matches every one of our considerations, delivering value and security at every stage of the data erasure process.

As we said at the beginning of this piece, implementing a solution for secure data erasure is hugely important. Even setting aside your moral, legal, and ethical obligations, adopting a secure posture for data destruction can only improve your business image - and, frankly, not adopting this process does more harm than the relatively small savings are worth. Failure to adopt it creates massive insecurity, adds cost to data processing and operation, and greatly increases the potential for massive security failures.
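The multi-pass, pattern-based overwrite described above can be sketched in a few lines of Python. This is an illustration of the technique only, not a certified erasure tool: it overwrites one file's contents in place, and on SSDs, flash media, and copy-on-write or journaling filesystems an in-place overwrite does not guarantee the physical blocks are actually erased.

```python
import os
import secrets

def overwrite_file(path, passes=(b"\x00", b"\xff", None)):
    """Overwrite a file in place with a sequence of pattern passes.

    passes: single-byte patterns to repeat across the file;
    None means a pass of cryptographically random bytes.
    Illustration only -- a real erasure tool works at the raw
    device level and verifies each pass.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 64 * 1024)
                if pattern is None:
                    data = secrets.token_bytes(chunk)   # random pass
                else:
                    data = pattern * chunk              # fixed pattern pass
                f.write(data)
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())  # push each completed pass to the device
    os.remove(path)
```

Certified tools additionally verify the written patterns and produce the auditable trail of destruction discussed above.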
For people who have hypertension and certain other conditions, eating too much salt raises blood pressure and increases the likelihood of heart complications. To help monitor salt intake, researchers have developed a flexible and stretchable wireless sensing system designed to be comfortably worn in the mouth to measure the amount of sodium a person consumes.

Based on an ultrathin, breathable elastomeric membrane, the sensor integrates with a miniaturized flexible electronic system that uses Bluetooth technology to wirelessly report sodium consumption to a smartphone or tablet. The researchers plan to further miniaturize the system—which now resembles a dental retainer—to the size of a tooth.

“We can unobtrusively and wirelessly measure the amount of sodium that people are taking in over time,” explained Woon-Hong Yeo, an assistant professor in the Woodruff School of Mechanical Engineering at the Georgia Institute of Technology. “By monitoring sodium in real-time, the device could one day help people who need to restrict sodium intake learn to change their eating habits and diet.”

Details of the device are reported May 7 in the early edition of the journal Proceedings of the National Academy of Sciences. The device has been tested in three adult study participants who wore the sensor system for up to a week while eating both solid and liquid foods, including vegetable juice, chicken soup and potato chips.

According to the American Heart Association, Americans on average eat more than 3,400 milligrams of sodium each day, far more than the limit of 1,500 milligrams per day it recommends. The association surveyed a thousand adults and found that “one-third couldn’t estimate how much sodium they ate, and another 54 percent thought they were eating less than 2,000 milligrams of sodium a day.”

The new sodium sensing system could address that challenge by helping users better track how much salt they consume, Yeo said.
“Our device could have applications for many different goals involving eating behavior for diet management or therapeutics,” he added.

Credit: Rob Felt, Georgia Tech

Key to the development of the intraoral sensor was the replacement of traditional plastic- and metal-based electronics with biocompatible, ultrathin components connected using mesh circuitry. Sodium sensors are available commercially, but Yeo and his collaborators developed a flexible micro-membrane version to be integrated with the miniaturized hybrid circuitry.

“The entire sensing and electronics package was conformally integrated onto a soft material that users can tolerate,” Yeo explained. “The sensor is comfortable to wear, and data from it can be transmitted to a smartphone or tablet. Eventually the information could go to a doctor or other medical professional for remote monitoring.”

The flexible design began with computer modeling to optimize the mechanical properties of the device for use in the curved and soft oral cavity. The researchers then used their model to design the actual nanomembrane circuitry and choose components.

The device can monitor sodium intake in real-time and record daily amounts. Using an app, the system could advise users planning meals how much of their daily salt allocation they had already consumed. The device can communicate with a smartphone up to ten meters away.

Next steps for the sodium sensor are to further miniaturize the device and test it with users who have the medical conditions it is meant to address: hypertension, obesity or diabetes. The researchers would also like to do away with the small battery, which must be recharged daily to keep the sensor in operation. One option would be to power the device inductively, replacing the battery and complex circuit with a coil that could obtain power from a transmitter outside the mouth.

The project grew out of a long-term goal of producing an artificial taste system that can sense sweetness, bitterness, pH and saltiness.
That work began at Virginia Commonwealth University, where Yeo was an assistant professor before joining Georgia Tech.

More information: Yongkuk Lee et al., “Wireless, intraoral hybrid electronics for real-time quantification of sodium intake toward hypertension management,” PNAS (2018). www.pnas.org/cgi/doi/10.1073/pnas.1719573115

Provided by Georgia Institute of Technology
Interacting with computers and robots using normal, everyday language has been a mainstay of sci-fi movies since the 1950s. However, it’s only in the last five years or so that it has become an everyday reality, thanks to innovations such as Apple’s Siri, Amazon’s Alexa, and the widespread rollout of web-based instant messaging ‘chat’ platforms.

These platforms connect people to chatbots – computer programs that can mimic human conversations using artificial intelligence – to handle a range of interactions between people and software, from following simple instructions to maintaining a quasi-conversation. Chatbots have been widely deployed in consumer-facing business sectors such as retail, insurance and financial services, providing additional support to call centre staff to reduce enquiry resolution times and deliver cost savings. Indeed, recent research from analyst Juniper estimates that in some business sectors, chatbots can deliver average time savings of around 4 minutes per enquiry.

The potential of chatbots

In addition to this successful implementation, I believe there is also tremendous potential for chatbots in enterprise applications. Enterprises could utilize chatbots to accelerate and automate information-sharing across areas of the business in which data has traditionally been siloed and hard to access – such as between IT and security teams, and business application owners.

For example, getting an answer to the simple question “Is network traffic currently allowed from this specific server to another specific server?” can be complicated. If the enterprise does not have a Network Security Policy Management (NSPM) solution that can automatically discover and map network flows, getting a definitive response would be a laborious process, involving several different stakeholders and multiple firewall and device management consoles. Furthermore, even if the organization uses an NSPM solution, a user might not get an immediate answer.
They would have to either access the NSPM system and know how to use it, or request the information from a member of the IT or security team – which may take time and interrupt more pressing tasks.

Making network security accessible

So, imagine if it was possible to have access to expert security knowledge about the enterprise network – such as the status of a business application’s connectivity, which firewalls protect that application, or whether traffic is being allowed to certain servers – without needing expertise in security management tools, or distracting busy networking or security staff.

A chatbot can make this a reality across the organization, enabling users outside the network and security teams – such as application owners, developers or other roles who may not have access or permissions to use an NSPM system – to obtain the answers they need about network and application flows. This helps break down information silos and democratizes access to critical network and security data for non-specialist users, in non-technical language (based on access rights, of course).

Accelerating the business

By making important network and security information accessible to a wide range of internal stakeholders, chatbots enable faster decision-making and speed up processes. This in turn will accelerate business productivity, by helping to ensure that security processes don’t unnecessarily delay new initiatives and innovations.
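As a toy illustration of the idea – not any vendor’s actual bot – the “is traffic allowed from this server to that server?” question can be answered by matching the request against a policy table. The rule set, server names and phrasing below are all hypothetical; a real deployment would query the NSPM system’s API and enforce the asker’s access rights.

```python
import re

# Hypothetical allow-rules: (source, destination, port) tuples.
# A real bot would fetch these from the NSPM system's API.
POLICY = {
    ("web-01", "db-01", 5432),
    ("web-01", "cache-01", 6379),
}

def answer(question: str) -> str:
    """Answer 'Is traffic allowed from <src> to <dst> on port <n>?'"""
    m = re.search(r"from (\S+) to (\S+) on port (\d+)", question, re.I)
    if not m:
        return ("Sorry, try: 'Is traffic allowed from <src> to <dst> "
                "on port <port>?'")
    src, dst, port = m.group(1), m.group(2), int(m.group(3))
    allowed = (src, dst, port) in POLICY
    verdict = "allowed" if allowed else "blocked"
    return f"{'Yes' if allowed else 'No'} -- traffic from {src} to {dst} on port {port} is {verdict}."
```

For instance, `answer("Is traffic allowed from web-01 to db-01 on port 5432?")` returns a “Yes” response, while an unlisted flow returns “No” – the chat platform simply becomes a friendlier front end to the policy data.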
Wi-Fi is changing the world as we know it – many of us cannot live without it, if we’re being 100% honest with ourselves. Today it is common to have Wi-Fi access not only in homes, coffee shops, hotels and airports, but also on airplanes and subways. There has been a dramatic increase in the number of devices that connect to the Internet using Wi-Fi; in 2016 alone, more than 15 billion Wi-Fi devices were shipped around the world. And with the rise of the Internet of Things – not just mobile phones or TVs, but smart home devices like refrigerators, thermostats, speakers, security cameras and even light bulbs – Wi-Fi is going to dominate the airwaves.

It is more important than ever for IT admins and engineers to understand what is going on in their networks, especially the wireless network – this is where a Wi-Fi analyzer comes into play.

Wi-Fi can be inconsistent and laggy at times due to external factors that you cannot always control. People using the technology in closed spaces like buildings usually suffer from congestion due to overlapping Wi-Fi signals, walls, microwaves and other signal-interfering items. At times, neighboring Access Points (APs) use the same frequency range, which also decreases wireless speed and performance.

Many other devices use radio frequencies (“RF”) similar to Wi-Fi’s. Microwave ovens, Bluetooth devices, baby monitors and wireless home phones are some of the devices that can interfere with your Wi-Fi signal and weaken it. We haven’t even discussed physical barriers such as concrete walls, wood, brick, plaster, glass and other common building materials that further impede signal strength and network throughput.

But signal noise and interference are not always the worst enemies of Wi-Fi. Hackers can perform Wi-Fi attacks by setting up an insecure rogue AP inside the network that creates a backdoor into a trusted network.
A hacker can also create an Evil Twin AP, an attack that masquerades as a trusted AP by displaying its SSID to lure victims.

Monitoring and analyzing Wi-Fi networks is a critical move to keep up with speed and security. Wi-Fi analyzing software allows you to analyze wireless Access Points (APs) and channels working on the 2.4 and 5GHz Wi-Fi bands, and gives you the information needed to adjust your APs and extenders for the best possible coverage, speed and security. Wi-Fi analyzers provide an easy way to find the right wireless channel to avoid interference from neighboring APs, find hidden networks or rogue APs, locate dead spots in your building and much more.

The following list shows a baseline of what a Wi-Fi analyzer can do:

- AP Scanner: Scan reachable Wi-Fi APs and get information like channel, dB signal strength, MAC address, vendor, etc.
- Client Scanner: Scan devices connected to your AP and block unauthorized users.
- Visualize APs: Show graphs for signal strength, connected users, signal-to-noise ratio, etc.
- Security Mechanisms: Manage security for WEP, WPA, WPA2 and Enterprise.
- Historical Data: Display historical data and test results for easy troubleshooting.

Here's a List of the Best Wi-Fi Analyzer Tools & Software of 2022:

1. Solarwinds Wifi Analyzer

SolarWinds is one of the leaders in IT infrastructure management software. The first release of SolarWinds’ popular Network Performance Monitor (“NPM”) was in 2001. Since then, they have been improving and upgrading their software tremendously to keep up with the latest technologies to hit the market.

SolarWinds’ Wi-Fi monitor and analyzer comes with the full NPM product. The software is very easy to use and install, but is limited to Windows systems only. With the Wi-Fi monitor, you can manage numerous types of networks, from SMB to enterprise-level Wi-Fi infrastructure. NPM allows you to have control of every detail of your network anywhere you are.
You can monitor your wireless network from virtually anywhere by accessing NPM through its customizable web console. Within the console, you can periodically query APs (availability, number of clients, signal strength, security type, etc.), controllers and connected devices. You can see active alerts right on the main dashboard and configure the alert system to send you an SMS or e-mail when a critical alert is active. The software can also generate reports with granular details including IP address, device type, SSID, channels, number of clients connected and even details on the connected clients themselves.

When you first install NPM, it will dynamically discover wireless APs and other network devices. When a new AP is added to your network, NPM will quickly prompt you to monitor it.

With NPM v11.5 or later, you can create customizable network maps to display your Wi-Fi environment in a heat-map view. NPM creates the map by automatically polling the signal strength of each AP and displaying it on your map. Wireless heat maps allow you to see the signal range of each AP and find dead zones between them. With detailed client information and heat maps, you can find the physical location of rogue APs or missing devices in your network.

Try the SolarWinds Wi-Fi monitor for free by downloading an NPM trial from their official site from the link below:

Free Download (30 Day Trial, No CC needed)

2. PRTG

PRTG is a monitoring tool built by Paessler, a leading network monitoring software developer. The solution can be used by SMBs or large enterprises. Paessler calls PRTG “The Swiss Army Knife For Sys Admins” because it can adapt to specific requirements using its powerful API. PRTG Professional Wi-Fi Analyzer, a component of PRTG, can analyze APs, load, traffic, signal strength and availability in your wireless network.
The tool is easy to set up and can be configured in no time with relative ease – we've recently reviewed PRTG as well as set it up to monitor Windows within minutes here. With the help of auto-discovery, your Wi-Fi network can be found and displayed in a matter of minutes. From there, you can set periodic queries to check the status of each device with SNMP system uptime and ping sensors.

PRTG comes with built-in sensors that help monitor bandwidth load. It can also help you query signal strength with an SNMP advanced sensor. PRTG sensors can quickly alert you when a Wi-Fi disruption occurs or a bandwidth limit is reached. You can get notified via email, SMS or Syslog. You can also generate reports to analyze historical data in HTML, PDF, CSV and XML formats.

PRTG is a well-balanced Wi-Fi monitoring tool, for Windows systems only. But the tool is aimed at general network monitoring, which leaves it lacking some pre-defined Wi-Fi features. Features like “number of connected clients” or “limiting a certain client” are missing unless you manually create them.

The most compelling benefit of PRTG, though, is its flexible pricing. You can download the free trial with unlimited sensor monitoring for 30 days, or the fully functional freeware version, which allows you to monitor up to 100 sensors.

Download a Free Trial of PRTG software below and try it out on your Wi-Fi network:

3. NirSoft WifiInfoView

This tool is very lightweight and portable, but limited to Windows systems only. It can scan nearby wireless APs and give out complete information such as SSID, MAC, PHY type, signal quality, RSSI, channel, vendor, security, maximum speed, station count, etc. The discovery process can also show you the date a network was first and last seen. For advanced wireless engineers, the tool shows extended information in hexadecimal format.
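The kind of per-AP data these scanners display (SSID, signal, channel) can also be pulled ad hoc on Windows from the built-in `netsh wlan show networks mode=bssid` command and parsed with a short script. The sketch below parses an abbreviated, illustrative sample of that output; a real run would feed in `subprocess.check_output(["netsh", "wlan", "show", "networks", "mode=bssid"])` instead.

```python
import re

def parse_netsh(output: str):
    """Parse `netsh wlan show networks mode=bssid` text into a list of
    {ssid, signal, channel} dicts, one per BSSID."""
    networks = []
    ssid = None
    for line in output.splitlines():
        line = line.strip()
        if m := re.match(r"SSID \d+\s*:\s*(.*)", line):
            ssid = m.group(1)                      # remember current SSID
        elif m := re.match(r"Signal\s*:\s*(\d+)%", line):
            networks.append({"ssid": ssid, "signal": int(m.group(1))})
        elif m := re.match(r"Channel\s*:\s*(\d+)", line):
            networks[-1]["channel"] = int(m.group(1))
    return networks

# Abbreviated sample output for illustration (real output has more fields):
sample = """\
SSID 1 : HomeNet
    Network type            : Infrastructure
    BSSID 1                 : aa:bb:cc:dd:ee:ff
         Signal             : 82%
         Channel            : 6
SSID 2 : CoffeeShop
    BSSID 1                 : 11:22:33:44:55:66
         Signal             : 45%
         Channel            : 11
"""
print(parse_netsh(sample))
# → [{'ssid': 'HomeNet', 'signal': 82, 'channel': 6},
#    {'ssid': 'CoffeeShop', 'signal': 45, 'channel': 11}]
```

This obviously replaces none of the tools above – there are no graphs, history or alerts – but it shows how little is needed to get at the raw scan data the analyzers build on.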
If you are looking for a Wi-Fi analyzer with many features, alarm systems, graphs, reporting, etc. – you should consider SolarWinds or PRTG (as seen above), as NirSoft's tool is very basic. While there is nothing fancy about it, WifiInfoView is very portable, easy to use, gives out the most useful information, and is completely free.

NirSoft's WifiInfoView is aimed at personal use, but can be used by SMBs and even for enterprise-level troubleshooting. WifiInfoView provides the right amount of information that anybody needs for troubleshooting a Wi-Fi network.

Unfortunately, some anti-virus software manufacturers have flagged NirSoft's software as potentially unwanted software. Even though the site and software are 100% safe, this still causes reputation issues. The free software comes as-is, meaning there is no support. If you need to troubleshoot, you can try finding the answer in forums and notes online.

Download the free, full and unlimited version of WifiInfoView on NirSoft's official site.

4. NetStumbler

Probably the first of its kind, NetStumbler remains one of the preferred tools for wireless engineers and general wireless enthusiasts. NetStumbler can detect wireless networks using common WLAN standards. NetStumbler scans the network and gives valuable information such as SSIDs, channels, signal strength, SNR, vendor, type of security, etc.

NetStumbler is targeted at wireless professionals but is widely used by anyone with basic networking skills. It is limited to Windows, but there are variations for Linux and Mac systems. NetStumbler is a very simple piece of software, lightweight and easy to use. Unfortunately, the software hasn't changed or improved much; the latest version, 0.40, was released in 2005. Since then, many wireless cards have become incompatible with this software package. What makes it stand out among other Wi-Fi analyzers is its strong reputation.
Still, the software can help you analyze wireless networks by creating signal/noise graphs, exporting CSV network reports, and testing connection quality. Another unique feature of NetStumbler is that it can be connected to a GPS in order to track the precise geographical location of APs.

Since NetStumbler is 100% free, there is no support. Although NetStumbler is free, they accept donations if you intend to use it for commercial or government use.

Download a full, free and unlimited version of NetStumbler on their official site.

5. Acrylic Wifi

Acrylic Wi-Fi Home, developed by Tarlogic Security, claims to be "the most advanced WLAN scanner" on the market. The software is free and intended for non-commercial use only, but with a paid upgrade it can be used by SMBs and enterprises.

Acrylic Wi-Fi Home is a WLAN scanner limited to Windows systems only. It can analyze Wi-Fi traffic and visualize APs and clients at any range. The software scans local APs and connected clients, and displays a table with information such as SSID, MAC address, vendor, PHY type, RSSI, signal strength, channel, client details, type of security, and when each network was seen first and last. The software can also track signal strength over time and generate graphs to allow easier troubleshooting. It also supports GPS, to see the geo-location of the Wi-Fi network on services like Google Maps.

Acrylic Wi-Fi Home doesn't have any reporting or exporting capabilities except for a screenshot button that lets you post to Twitter. But if you upgrade to the paid version, Acrylic Wi-Fi Professional can create reports and has other useful features.

Acrylic Wi-Fi Professional includes a built-in connectivity module that allows password strength testing. The module runs a brute-force password test using a built-in dictionary of potential passwords, something not common in wireless analyzers.
Another unique feature in Acrylic Wi-Fi Professional is that it supports monitor mode (promiscuous mode) to capture Wi-Fi traffic using its own driver (with the help of wireless analysis hardware or AirPcap cards). Monitor mode can help you test your network by monitoring all wireless traffic received from the AP.

To try the product, download a free, full and unlimited version of Acrylic Wi-Fi Home from their official website.

Finding the right Wi-Fi analyzer comes down to the level of control and analysis you want in your network and business. For enterprise users, the answer is simple: go with either SolarWinds or PRTG, for their long-standing reputations in the network monitoring industry. Both have commercial support teams that can assist you with any issues that may arise, and both will receive consistent software updates over the years.

SolarWinds offers a 30 Day Free Trial (no CC needed!) from the link below:

PRTG also offers a 30 Day Unlimited Trial as well from the link below:

If you're a home user, then we definitely recommend any of the last three solutions – NirSoft, NetStumbler or Acrylic – for analyzing your wireless network quickly and cheaply.
The March on Washington, in which Martin Luther King Jr. delivered his famous "I Have a Dream" speech, is one of the most iconic events in American history. So it shouldn't be surprising that when anybody wants to drive change in the United States, they often begin by trying to duplicate that success. Yet that's a gross misunderstanding of why the march was successful.

As I explain in Cascades, the civil rights movement didn't become powerful because of the March on Washington; the March on Washington took place because the civil rights movement became powerful. It was part of the end game, not an opening shot.

Unfortunately, many corporate transformations make the same mistake. They try to drive change without preparing the ground first. Little wonder, then, that McKinsey has found that only about a quarter of transformational efforts succeed. Make no mistake, transformation is a journey, not a destination, and it starts with preparing the ground.

Start With A Keystone Change

Every successful transformation starts out with a vision, such as racial equality in the case of the civil rights movement. Yet to be inspiring, a vision needs to be aspirational, which means it is rarely achievable in any practical time frame. A good vision is more of a beacon than it is a landmark. That's probably why every successful transformation I found in my research first had to identify a keystone change, which has a tangible and concrete objective, involves multiple stakeholders and paves the way for future change.

In some cases, multiple keystone changes are pursued at once, each seeking to influence a different institution. For example, King and his organization, the Southern Christian Leadership Conference (SCLC), mobilized southern blacks, largely through religious organizations, to influence the media and politicians.
At the same time, through their work at the NAACP, Charles Hamilton Houston and Thurgood Marshall worked to influence the judicial system to eliminate segregation. The same principle holds for corporate transformations. When Paul O'Neill set out to turn around Alcoa in the 1980s, he started by improving workplace safety; more recently at Experian, when CIO Barry Libenson set out to move his company to the cloud, he started with internal APIs. In both cases, the stakeholders won over in achieving the keystone change also played a part in bringing about the larger vision.

Lead With Values

Throughout his career, Nelson Mandela was accused of being a communist, an anarchist and worse. Yet when confronted with these accusations, he would always point out that nobody needed to guess what he believed, because it was all written down in the Freedom Charter way back in 1955. Those values signaled to everybody, both inside and outside of the anti-apartheid movement, what they were fighting for.

In a similar vein, when Lou Gerstner arrived at IBM in the early 90s, he saw that the once great company had lost sight of its values. For example, its salespeople were famous for dressing formally, but that was merely an early manifestation of a value. The original idea was to be close to customers and, since most of IBM's early customers were bankers, salespeople dressed formally. Yet if customers were now wearing khakis, it was okay for IBMers to do so as well.

Another long-held value at IBM was a competitive spirit, but IBM executives had started to compete with each other internally rather than working to beat the competition. So Gerstner worked to put a stop to the bickering, even firing some high-placed executives who were known for infighting. He made it clear, through personal conversations, emails and other channels, that in the new IBM the customer would come first.
What's important to remember about values is that, if they are to be anything more than platitudes, you have to be willing to incur costs to live up to them. When Nelson Mandela rose to power, he couldn't oppress white South Africans and live up to the values in the Freedom Charter. At IBM, Gerstner was willing to give up potential revenue on some sales to make his commitment to the customer credible.

Build A Network Of Small Groups

With attendance at its weekend services exceeding 20,000, Rick Warren's Saddleback Church is one of the largest congregations in the world. Yet much like the March on Washington, the mass of people obscures the networks that underlie the church and are the source of its power. The heart of Saddleback Church is the prayer groups of six to eight people that meet each week, build strong ties and support each other in matters of faith, family and career. It is the loose connections between these small groups that give Saddleback its combination of massive reach and internal coherence, much like the networks of small groups that convened in front of the Lincoln Memorial during the civil rights movement.

One of the key findings of my research into social and political movements is that they are driven by small groups, loosely connected, but united by a common purpose. Perhaps not surprisingly, research has also shown that the structure of networks plays a major role in organizational performance. That's why it's so important to network your organization by building bonds that supersede formal relationships.

Experian, for example, has built a robust network of clubs, where employees can share a passion such as bike riding, and employee resource groups, which are more focused on identity. While these activities are unrelated to work, the company has found that they help employees span boundaries in the organization and collaborate more effectively. All too often, we try to break down silos to improve information flow. That's almost always a mistake.
To drive a true transformation, you need to connect silos so that they can coordinate action.

Make The Shift From Hierarchies To Networks

In an earlier age, organizations were far more hierarchical. Power rested at the top. Orders went down, information flowed up, and decisions were made by a select priesthood of vaunted executives. In today's highly connected marketplace, that's untenable. The world has become fast, and hierarchies are simply too slow.

That's especially true when it comes to transformation. It doesn't matter if the order comes from the top. If the organization itself isn't prepared, any significant transformation is unlikely to succeed. That's why you need to lead with vision, establish a keystone change that involves multiple stakeholders and work deliberately to network your organization.

Yet perhaps most importantly, you need to understand that in a networked world, power no longer resides at the top of hierarchies, but emanates from the center of networks. You move to the center by continually widening and deepening connections. That's how you drive a true transformation. None of this happens overnight. It takes time. That's why the desire for change is not nearly as important as the will to prepare for it.
What is Ransomware

In 2022, ransomware is becoming a bigger threat than ever. Businesses run the risk of being attacked by cyber criminals every day and must ensure that their cybersecurity can stand up to all types of direct and indirect attacks. If your business is not sufficiently protected, or has previously given in to a cyber criminal's demands, you might find yourself under repeat attacks, especially if your systems have not been suitably cleaned and repaired.

In this article, we will go over what ransomware is, the different types of ransomware attacks you might face, how ransomware works, the cost of ransomware to your business and proven strategies to defend your business against any attack you might face.

What is a Ransomware Attack?

In simple terms, ransomware is a type of malicious software, often referred to as malware, that threatens to block access to, or publish, sensitive data your business might have. This is done by encrypting the data or system until a ransom is paid to the attacker within a certain deadline. The main aim of ransomware is to extort funds from companies by locking important data behind encryption keys. Cyber criminals will look to extort companies over private citizens, as a business is much more likely to pay the ransom. It has been found that businesses that do pay are then much more likely to be targeted in the future.

Ransomware attacks date back to 1989 with what was known as the AIDS trojan. In 1996, ransomware was further developed and introduced at the IEEE Security and Privacy conference. The virus shown at the conference contained the attacker's public key and encrypted the victim's files. The malware then prompted the victim to send payment to the attacker to decipher and return the decryption key.
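That 1996 design — encrypt the files with a random session key, then lock the session key with the attacker's public key — can be sketched in a few lines. The toy below is deliberately weak and for illustration only (a hash-based stream cipher and textbook RSA with tiny hard-coded primes, not real cryptography); it just shows why only the holder of the private key can recover the data:

```python
import hashlib

def stream_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 keystream (same call encrypts and decrypts)."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        keystream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, keystream))

# Textbook RSA with tiny primes (p=61, q=53) -- illustration only.
N, E, D = 3233, 17, 2753  # modulus, public exponent, attacker's private exponent

session_key = b"s3cret"  # a real scheme would generate this randomly per victim
ciphertext = stream_crypt(b"quarterly-accounts.xlsx contents", session_key)
locked_key = [pow(b, E, N) for b in session_key]  # encrypted with the public key

# The victim, holding only N and E, cannot recover session_key.
# The attacker, holding D, unlocks it trivially:
recovered = bytes(pow(c, D, N) for c in locked_key)
assert recovered == session_key
assert stream_crypt(ciphertext, recovered) == b"quarterly-accounts.xlsx contents"
```

The asymmetry is the whole trick: the ransomware never needs to carry the private key, so analyzing an infected machine reveals nothing that helps with decryption.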
So in essence, a ransomware attack is any sort of malware that is used to encrypt data such as files, applications, systems and databases so they can no longer be accessed without paying for a decryption key within a certain period of time. In many cases, when the ransom is not met, the data will be released or the ransom will be increased.

How Ransomware Works

Ransomware can work in one of two ways. The first is by encrypting data so it is no longer accessible. With the data out of reach, the business is contacted with the requested ransom and the promise that, once payment is made, a decryption key will be provided so the business can regain the hidden data. However, there is no guarantee that the decryption key provided will actually work, and paying can lead to further ransoming of the data or to more attacks.

The second way ransomware can work is by blocking access to the system with a lock screen. This lock screen will contain the details of the ransom. Again, once the fee is paid there is no guarantee that the block will be removed.

Ransomware often starts in malicious emails and begins to infect a system or database once an unsuspecting user opens an attachment or clicks on a URL that has been compromised with the malware. After that, the ransomware agent is installed and begins to encrypt key files. Once the encryption is complete, you will see the message explaining what has happened and the demands to unlock the data.

Virus vs Malware vs Ransomware - What's the Difference?

Malware encompasses any sort of program designed to gain access to computer systems without the user's permission. Malware covers a range of programs, from viruses, trojan horses, ransomware and spyware to any other malicious program you can think of. A virus is a piece of code that attaches itself to another program and can either be harmless or can modify and delete data.
When a program with a virus attached runs, the virus will begin to perform an action, such as deleting a file. Viruses cannot be controlled remotely like other pieces of malware.

So what is the difference between a virus and ransomware? The first difference is in how they operate. A virus attaches to another program and waits to be activated by running that program; ransomware looks to encrypt data or block access until a fee is paid. Ransomware is also much more harmful than a virus: it can generally only be removed by its creator, through payment of the requested fee, whereas many viruses are blocked by antivirus software. They differ in their main objectives too. Viruses look to modify or delete information; ransomware looks to take money from businesses by gaining access to their systems and blocking that access. Finally, the two types of malware differ in how they gain access: ransomware generally arrives through phishing emails that carry malicious attachments or links, while viruses come in as part of executable files.

Cost of Ransomware Attacks

As previously stated, ransomware can be extremely harmful to your business. It has been estimated that the cost of ransomware attacks on businesses will exceed $20 billion by the end of 2022, with the average cost of a ransomware attack having doubled in 2022. These are shocking statistics that really put the issue into context. For 2022, ransomware has been identified as a major threat to businesses. Many cybersecurity firms have predicted that a business will face a ransomware attack every 11 seconds. At this rate, it is further estimated that by the year 2031 ransomware will reach a cost of $265 billion. All of these estimated costs are not limited to the ransom payments themselves, but include what it will cost companies to mitigate damage and restore data after an attack.
This issue should be of the utmost importance when budgeting and planning strategies for cybersecurity.

Strategies to Protect Against Ransomware Attacks

91% of cyber attacks begin with a phishing email, the delivery method for ransomware. One of the best strategies to combat and prevent ransomware is to train all staff on how to recognise a phishing email, then conduct regular unannounced tests to check the training's effectiveness and identify who may require further training.

Another effective strategy is to implement ransomware prevention best practices, which include the following:

- Introduce cyber security user awareness training for all staff
- Introduce email filters to identify suspect emails
- Maintain offline, encrypted backups of data
- Regularly test and verify backups
- Create and maintain a cyber incident response plan
- Regularly patch and update all software and operating systems
- Consult with cybersecurity professionals

Protect your business against ransomware

Ransomware is an extremely dangerous threat to any business. Blocking and encrypting your company's data can be extremely harmful in many ways, and paying the requested ransom fee does not guarantee that you will regain access to that data. In 2021, ransomware cost businesses an estimated $20 billion, with an attack taking place every 11 seconds. Without the right strategy in place, a successful ransomware attack can seriously impact your business.

By working with cybersecurity experts such as ourselves, you can begin to protect your business, educate your employees and keep your data secure. Get in touch with our team to work on your cybersecurity strategy and overcome any challenges you might have. Call us on Tel: 01252 917000, email [email protected] or get in touch with us via our contact form.
William is a Data Scientist at Quora, interested in data-driven decision making to improve both product and business. Always interested in learning new things and exploring the ubiquity of data in everyday life.

1. Design and interpret experiments to inform product decisions

Observation: Advertisement variant A has a 5% higher click-through rate than variant B.

Data scientists can help determine whether or not that difference is significant enough to warrant increased attention, focus, and investment. They can help you understand experimental results, which is especially useful when you're measuring many metrics, running experiments that affect each other, or have some Simpson's Paradox happening in your results.

Let's say you're a national retailer and you're trying to test the effect of a new marketing campaign. Data scientists can help you decide which stores to assign to the experimental group to get a good balance between the experimental and control groups, what sample size the experimental group needs to get clear results, and how to run the study spending as little money as possible.

Statistics Used: Experimental Design, Frequentist Statistics (Hypothesis Tests and Confidence Intervals)

2. Build models that predict signal, not noise

Observation: Sales in December increased by 5%.

Data scientists can tell you potential reasons why sales have increased by 5%. They can help you understand what drives sales, what sales could look like next month, and which trends to pay attention to. This is also why it's important that models fit only the signal, not the noise.

Statistics Used: Regression, Classification, Time Series Analysis, Causal Analysis

3. Turn big data into the big picture

Observation: Some customers only buy healthy food, while others only buy when there's a sale.

Anyone can observe that the business has 100,000 customers buying 10,000 items at your grocery store.
Data scientists can help you label each customer, group them with similar customers, and understand their buying habits. This allows you to see how business developments can affect certain groups of the population, instead of looking at everyone as a whole or at everyone individually. Dunnhumby, for example, breaks down grocery shoppers into groups including Shoppers On A Budget, Finest, Family Focused, Watching the Waistline, and Splurge and Save.

Statistics Used: Clustering, Dimensionality Reduction, Latent Variable Analysis

4. Understand user engagement, retention, conversion, and leads

Observation: A lot of people are signing up for our site and never coming back.

Why do your customers buy items from your site? How do you keep your clients coming back? Why are users dropping out of your funnel? When will they come back next? What kinds of emails from your company are most successful at engaging users? What are some leading indicators of engagement, activity, or success? What are some good sales leads?

Statistics Used: Regression, Causal Effects Analysis, Latent Variable Analysis, Survey Design

5. Give your users what they want

Given a matrix of users (customers, clients, users) and their interactions (clicks, purchases, ratings) with your company's items (ads, goods, movies), can you suggest what items your users will want next?

Statistics Used: Predictive Modeling, Latent Variable Analysis, Dimensionality Reduction, Collaborative Filtering, Clustering

6. Estimate intelligently

Observation: We have a banner with 100 impressions and 0 clicks. Is 0% a good estimate of the click-through rate?

Data scientists can incorporate local data, global data, and prior knowledge to get a desirable estimate, tell you the properties of that estimate, and summarize what the estimate means. A Bayesian treatment offers a better approach to estimating the click-through rate.

Statistics Used: Bayesian Data Analysis

7. Tell the story with the data

The data scientist's role in the company is to serve as the ambassador between the data and the company. Communication is key, and the data scientist must be able to explain their insights in a way the company can get on board with, without sacrificing the fidelity of the data. The data scientist does not simply summarize the numbers, but explains why the numbers are important and what actionable insights can be drawn from them. The data scientist is the storyteller of the company, communicating the meaning of the data and why it is important to the company. The success of the previous six points can be measured and quantified, but this one cannot. I'd say this role is the most important.

Statistics Used: Presenting and Communicating Data, Data Visualization

TL;DR – With statistics, data scientists derive insights to encourage decisions that improve product or business, distilling the data into actionable insights that promote the vision of the company.

(This post was originally published on Quora)

(Image Credit: Simon Cunningham)
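The banner example in point 6 is a natural fit for a Beta-Binomial model: treat the click-through rate as a Beta-distributed unknown, start from a prior informed by global data, and update with the observed impressions and clicks. A minimal sketch (the 2% baseline CTR and prior strength below are assumed, illustrative figures):

```python
def posterior_ctr(clicks, impressions, prior_ctr=0.02, prior_strength=100):
    """Posterior mean of the click-through rate under a Beta(a, b) prior.

    The prior encodes global knowledge as `prior_strength` pseudo-impressions
    observed at rate `prior_ctr`.
    """
    a = prior_ctr * prior_strength          # pseudo-clicks
    b = (1 - prior_ctr) * prior_strength    # pseudo-non-clicks
    return (a + clicks) / (a + b + impressions)

# 100 impressions, 0 clicks: the naive estimate is 0%, the posterior mean is not.
naive = 0 / 100
bayes = posterior_ctr(clicks=0, impressions=100)
print(f"naive: {naive:.1%}, Bayesian: {bayes:.1%}")  # naive: 0.0%, Bayesian: 1.0%
```

With no data at all the estimate falls back to the prior (2%), and as impressions accumulate it converges to the observed rate, which is exactly the "incorporate local data, global data, and prior knowledge" behavior described above.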
Passwords are the touchstone for securing access to sensitive data, both personal and professional. They are the main defense against computer hackers, protecting our identities on websites, e-mail accounts and more. They are also used for bank transactions and making secure purchases. With all of this sensitive data at stake, creating good passwords is extremely important.

Hackers typically try to break into a computer or secure account either by guessing passwords one at a time or by using an automated tool to repeatedly guess passwords from a database of common words or other information. Even the best passwords can be conquered with enough time, skill, and computer processing power. This means a strong password is vital to prevent attacks by less determined hackers and to buy time by sending up red flags that can help catch hackers in the act.

The problem, though, is that users often find passwords cumbersome, have trouble remembering them, and try to make them simple while reusing them over and over again. For example, a 2010 study found that users will simply capitalize the first letter of their password and add a "1" or "!" to the end, making the password no harder to crack, since hackers have identified and come to expect these patterns. And in 2016, Experian found that millennials had, on average, 40 services registered to a single email account, and only five distinct passwords.

To combat some of these issues, the National Institute of Standards and Technology (NIST) recently released new password guidelines. A non-regulatory federal agency, NIST produces guidance documents and recommendations that often become the foundation for best practice across the security industry and are incorporated into other standards. They are essentially the rule of thumb that most of us in IT live by.
In their latest revision of the Digital Identity Guidelines (SP 800-63-3), NIST calls for a more user-friendly approach to password requirements. This includes not requiring periodic password changes and modifying password complexity rules. Many see these guidelines as a stark contrast to previous recommendations, and while they do vary somewhat, they really aren't much different from what we here at M.A. Polce Consulting currently train end users on. So what does this mean for you and your end users? Let's break down the new guidelines to see what they really mean.

The key principles in NIST's new standards for password complexity are:

- Passwords should be a minimum of 8 characters and a maximum of at least 64.
- Every new password should be checked against a "blacklist" that can include dictionary words, repetitive or sequential strings, passwords exposed in prior security breaches, and variations on the site name.
- Don't use password hints or knowledge-based authentication. With the constant dissemination of personal information on social media or through social engineering, the answers to such questions or hint prompts can be easy to find.
- Limit the number of password attempts. There is a noticeable difference between the number of guesses a typo-prone user needs and the number an attacker needs, so there's no reason not to include a cutoff or delay.
- Use a passphrase. A phrase or string of words helps create longer passwords that are harder to break and easier to remember.
- Do not require password resets unless there is reason to suspect compromise.

NIST's premise that incorporating variety and using a longer password will be more secure is spot on. Passphrases and greater password length are something we believe strongly in and seek to educate users on. A 21-character password with upper-case, lower-case and special characters would resist a brute-force cracking attack for about 1 quintillion years.
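The first two principles are straightforward to enforce at registration time. A minimal sketch (the blacklist entries here are illustrative; in practice NIST suggests sourcing them from breach corpora, dictionaries, and site-specific terms):

```python
# Illustrative blacklist -- a real deployment would load this from breach
# corpora, dictionaries, and variations on the site name.
BLACKLIST = {"password", "password1", "qwerty123", "letmein", "examplesite"}

def check_password(candidate: str) -> list:
    """Return a list of NIST-style complaints; an empty list means acceptable."""
    problems = []
    if len(candidate) < 8:
        problems.append("shorter than 8 characters")
    if len(candidate) > 64:
        problems.append("longer than 64 characters")
    if candidate.lower() in BLACKLIST:
        problems.append("found in blacklist")
    if len(set(candidate)) == 1:  # e.g. "aaaaaaaa"
        problems.append("repetitive string")
    return problems

print(check_password("Password1"))                   # ['found in blacklist']
print(check_password("correct horse battery staple"))  # []
```

Note what the check deliberately does not do: there is no composition rule demanding digits or symbols, since length and a blacklist do more for security than forced "P@ssw0rd"-style substitutions.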
The goal of moving users away from the methodology of shifting just one character in a password that may consist of one short word is one we wholeheartedly embrace.

Making users reset their passwords every few months is a classic security measure. The thinking is that any unauthorized person who obtained a user's password will soon be locked out. And while this is true, NIST found that frequent mandatory password resets can actually make security worse. It's hard enough to remember one good password a year. When users have to create new passwords regularly, they tend to make them weaker from the start.

The overall concept behind these new standards is that easier and more convenient security for users will translate into more people taking proper security precautions. The bottom line is, setting a firm password policy in your organization and using technical controls to implement it is the first step to securing your network. Educating your users on the policy, as well as on basic principles like password storage, is crucial in today's threat landscape. How will you implement these new standards? Let us know today!
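Figures like the "1 quintillion years" estimate earlier depend entirely on the assumed guessing rate, but the underlying arithmetic is simple. A back-of-the-envelope sketch (the 10^12 guesses per second is an assumed, illustrative rate for an offline attack; real rates vary enormously by hash algorithm and hardware):

```python
def brute_force_years(length, alphabet_size=95, guesses_per_second=1e12):
    """Worst-case years to exhaust every password of the given length.

    95 is the count of printable ASCII characters (upper, lower, digits,
    symbols, space).
    """
    guesses = alphabet_size ** length
    seconds = guesses / guesses_per_second
    return seconds / (365.25 * 24 * 3600)

for n in (8, 12, 21):
    print(f"{n} chars: {brute_force_years(n):.3g} years")
```

At this rate the 8-character case falls in hours while the 21-character case comes out around 10^22 years; slower online-attack rates push every figure up, which is why published estimates vary so widely. The takeaway matches NIST's: each extra character multiplies the search space by the alphabet size, so length beats clever substitution.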