Distributed Denial of Service (DDoS)

A Distributed Denial of Service (DDoS) attack is a cyber attack in which multiple compromised computer systems attack a target, such as a server, website, or other network resource, causing a denial of service for users of the targeted resource. The flood of incoming messages, connection requests, or malformed packets forces the target system to slow down, crash, or shut down, thereby denying service to legitimate users or systems. DDoS threats come in many forms, but by far the most common is the volumetric attack, in which devices on the public internet are hijacked to overwhelm a specific target. These attacks can cripple an organisation's internet access, freezing web, email, and other internet interactions, and traditional security devices such as firewalls, switches, and routers simply cannot cope.
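To make the volumetric pattern concrete, here is a minimal sketch of the naive per-source rate check a detection tool might start from. The window size, threshold, and traffic samples are all hypothetical, and real mitigation happens upstream (for example in scrubbing centers), since no single host can absorb a true volumetric flood.

```python
from collections import defaultdict

WINDOW_SECONDS = 10
THRESHOLD = 100  # max requests per source per window (illustrative)

def flag_flooders(events):
    """events: iterable of (timestamp_seconds, source_ip) tuples.
    Returns sources whose request count in any window exceeds THRESHOLD."""
    buckets = defaultdict(lambda: defaultdict(int))
    for ts, src in events:
        buckets[int(ts) // WINDOW_SECONDS][src] += 1
    return {src
            for counts in buckets.values()
            for src, n in counts.items() if n > THRESHOLD}

# Toy traffic: one chatty source, one normal one.
traffic = [(t * 0.01, "203.0.113.7") for t in range(2000)]
traffic += [(t * 1.0, "198.51.100.2") for t in range(20)]
print(flag_flooders(traffic))  # {'203.0.113.7'}
```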
The University of Manchester is leading a £4.6 million robotics research project to develop systems capable of working collaboratively and autonomously on hazardous nuclear sites too dangerous for human workers.

Decommissioning dated nuclear plants, disposing of nuclear waste, and remediating the surrounding areas are challenging enough. Just as big a threat is the potential exposure to dangerous levels of radiation, which means that human access is restricted and the majority of that work needs to be completed by robots.

A consortium led by the University of Manchester, which includes the University of Birmingham, the University of the West of England (UWE) and industrial partners Sellafield, EDF Energy, UKAEA and NuGen, has been funded with £4.6 million from the Engineering and Physical Sciences Research Council to develop more sophisticated robotic solutions. The group is aiming to develop autonomous robots that can handle tasks with dexterity and collaborate with each other. The project will build on the University of Manchester's pioneering work in robotics to date, which includes the MIRRAX 150, an adaptable robot designed for use on nuclear sites.

Safe decommissioning of nuclear plants requires investment

The cost of cleaning up the UK's historic nuclear sites now stands at £117 billion ($154 billion). The University of Manchester's Professor Barry Lennox, who is leading the nuclear robotics research project, highlighted the importance of investment and pointed out that his team's research could prove useful beyond nuclear projects.

"If we are to be realistic about clearing up contaminated sites, then we have to invest in this type of technology," he said. "These environments are some of the most extreme that exist, so the benefits of developing this technology can also apply to a wide range of other scenarios."

"This programme of work will enable us to fundamentally improve [robots and autonomous systems] RAS capabilities, allowing technologies to be reliably deployed into harsh environments, keeping humans away from the dangers of radiation."

The project will aim to develop robots that solve the challenges faced by previous models, which include an inability to grasp and manipulate objects effectively, as well as difficulties with computer vision and perception. Central to the research is the introduction of autonomy; these robots need to be able to operate without direct supervision.

After announcing the funding for the consortium's nuclear robotics project, Professor Philip Nelson, chief executive of the Engineering and Physical Sciences Research Council (EPSRC), said, "For several decades, EPSRC has been at the forefront of supporting the UK's research, training and innovation in robotics, automation and artificial intelligence systems.

"Throughout the world, however, from the United States to South Korea, China to Japan, governments are investing billions of dollars into these new technologies. We are faring very well against this global competition, and we should not slow the momentum. These investments are vital for continuing the pipeline that transforms research into products and services."

The challenges of building a robotics network in a nuclear environment

Speaking exclusively to Internet of Business, Professor Barry Lennox outlined a few of the many challenges the project will face. While nuclear environments can be deadly to humans, working in them isn't plain sailing for robots either.

"A major problem with using robotic systems in nuclear environments is that the electronics can be damaged by gamma radiation – several robots have failed at the Fukushima Daiichi power plant, for example," he explained. A priority, then, will be developing techniques to help robotic systems survive for longer periods of time. The MIRRAX 150 was designed to fit through a tiny access port before mapping the facility. "We will be investigating techniques that allow it to map its environment whilst avoiding some of the high dose rate areas within the facility and recharge itself by energy harvesting," said Lennox. "We will also be developing fault tolerant control systems that will enable the robot to survive in the event that some of its electronics become damaged."

Beyond simple survival, Lennox and his team, with help from vision system experts at the University of Birmingham, will need to give their robots spatial awareness and the ability to understand their environment, "so that they are able to make decisions autonomously and not have a heavy reliance on human operators."

And as for the single biggest challenge ahead for the project? It's impossible to name just one. "This is the hardest question as there are so many engineering challenges," said Lennox. "An ideal robot would be a submersible ROV that is untethered and able to transmit HD video underwater – this is extremely challenging and not possible using existing technology. We will be developing manipulators that are able to grasp objects – how will they be able to autonomously determine the most appropriate way to grasp objects of varying dimension? However, possibly the biggest challenge is to ensure that if we develop a robotic tool that is useful, can we actually convince the nuclear industry, which is naturally highly conservative, to adopt it?"
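Lennox's point about mapping the environment "whilst avoiding some of the high dose rate areas" is, at its core, a cost-weighted path-planning problem. The sketch below is purely illustrative and not the project's actual method: it assumes a grid map with per-cell dose-rate estimates and runs Dijkstra's algorithm with a tunable dose penalty.

```python
import heapq

def plan_path(dose_map, start, goal, dose_weight=10.0):
    """Find a route that trades distance against accumulated dose.
    Each step costs 1 (distance) plus dose_weight * dose at the cell
    entered. Dose units and weights are hypothetical."""
    rows, cols = len(dose_map), len(dose_map[0])
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost, path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ncost = cost + 1.0 + dose_weight * dose_map[nr][nc]
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    heapq.heappush(frontier, (ncost, (nr, nc), path + [(nr, nc)]))
    return None

# Toy 4x4 facility: high dose rates block the direct diagonal route.
dose = [[0.0, 0.0, 0.0, 0.0],
        [0.0, 5.0, 5.0, 0.0],
        [0.0, 5.0, 5.0, 0.0],
        [0.0, 0.0, 0.0, 0.0]]
print(plan_path(dose, (0, 0), (3, 3)))
```

Raising dose_weight pushes the planner onto longer but colder routes, mirroring the trade-off between mission time and radiation damage to the electronics.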
Information security can be confusing to some people. Okay, maybe most people. What is infosec, and why is information security confusing? Maybe it's because we miss some of the basics. Understanding information security comes from gathering perspective on the five Ws of security: what, why, who, when, and where.

Understanding InfoSec Through the Five Ws

- What is infosec?
- Why do you need information security?
- Who is responsible for information security?
- When is the right time to address information security?
- Where does information security apply?
- We could also include the sixth W, which is actually an "H" for "how." The "how" is why FRSecure exists.

What is Information Security?

The most important thing to understand when asking, "What is infosec?" is this: Fundamentally, information security is the application of administrative, physical, and technical controls in an effort to protect the confidentiality, integrity, and/or availability of information. Simplified, that's understanding our risks and then applying the appropriate risk management and security measures. In understanding information security, we must first gain an understanding of these well-established concepts.

Administrative controls address the human factors of information security. Typically, administrative controls come in the form of management directives, policies, guidelines, standards, and/or procedures. Good examples of administrative controls are:

- Information security policies
- Incident response plans
- Training and awareness programs
- Business continuity and/or disaster recovery plans
- Hiring and termination procedures

Physical controls address the physical factors of information security. Physical controls are typically the easiest type of control for people to relate to: they can usually be touched and/or seen, and they control physical access to information. Good examples of physical controls are:

- Building alarm systems
- Construction materials

Technical controls address the technical factors of information security—commonly known as network security. Technical controls use technology to control access. Much of the information we use every day cannot be touched, and often the control cannot be either. Good examples of technical controls are (see the sketch at the end of this article):

- Access control lists
- File permissions
- Anti-virus software

Confidentiality, Integrity, and Availability

As mentioned previously, these concepts are what our controls aim to protect. This is how we define them:

- Confidentiality: Confidentiality is keeping information secret, allowing only authorized disclosure.
- Integrity: Data integrity is ensuring that information is accurate. Accurate data is critical to making important decisions soundly.
- Availability: Availability is making sure that information is accessible when it needs to be accessed.

Basically, we want to ensure that we limit any unauthorized access, use, and disclosure of our sensitive information.

Why Do You Need Information Security?

In addition to asking, "What is infosec?", it's also important to ask why your organization needs to work on understanding information security in the first place. This is sometimes tough to answer because the answer seems obvious, but it doesn't typically present that way in most organizations. As we know from the previous section, information security is all about protecting the confidentiality, integrity, and availability of information. So, answer these questions:

- Do you have information that needs to be kept confidential (secret)?
- Do you have information that needs to be accurate?
- Do you have information that must be available when you need it?

If you answered yes to any of these questions, then you have a need for information security. Understanding information security and how it can reduce the risk of unauthorized information access, use, disclosure, and disruption is key. We need information security to reduce risk to a level that is acceptable to the business (management). We need information security to improve the way we do business.

Who is Responsible for Information Security?

This is an easy one. Everyone is responsible for information security! A better question might be, "Who is responsible for what?" A top-down approach is best for understanding information security as an organization and developing a culture with information security at the forefront.

Senior Management

First off, information security must start at the top. The "top" is senior management and the "start" is commitment. Senior management must make a commitment to understanding information security in order for information security to be effective. This can't be stressed enough. Senior management's commitment to information security needs to be communicated to and understood by all company personnel and third-party partners. The communicated commitment often comes in the form of policy. Senior management demonstrates the commitment by being actively involved in the information security strategy, risk acceptance, and budget approval, among other things. Without senior management commitment, information security is a wasted effort.

Business Unit Leaders

Keep in mind that a business is in business to make money. Making money is the primary objective, and protecting the information that drives the business is a secondary (and supporting) objective. Information security personnel need to understand how the business uses information. Failure to do so can lead to ineffective controls and process obstruction. Arguably, nobody knows how information is used to fulfill business objectives better than employees. While it's not practical to incorporate every employee's opinion into an information security program, it is practical to seek the opinions of the people who represent every employee. Establish an information security steering committee comprised of business unit leaders. Business unit leaders must see to it that information security permeates their respective organizations within the company.

Employees

All employees are responsible for understanding and complying with all information security policies and supporting documentation (guidelines, standards, and procedures). Employees are responsible for seeking guidance when the security implications of their actions (or planned actions) are not well understood. Information security personnel need employees to participate, observe, and report.

Third Parties

Third parties such as contractors and vendors must protect your business information at least as well as you do yourself. Information security requirements should be included in contractual agreements. Your right to audit the third party's information security controls should also be included in contracts whenever possible. The responsibility of the third party is to comply with the language contained in contracts.

When is the Right Time to Address Information Security?

On the surface, the answer is simple. The right time to address information security is now and always. There are a couple of characteristics of good, effective data security that apply here.

Information security must be holistic. Information security is not an IT issue any more or less than it is an accounting or HR issue. Information security is a business issue. A disgruntled employee is just as dangerous as a hacker from Eastern Europe. A printed account statement thrown in the garbage can cause as much damage as a lost backup tape. You get the picture. Information security needs to be integrated into the business and should be considered in most (if not all) business decisions. This point stresses the importance of addressing information security all of the time.

Information security is a lifecycle of discipline. In order to be effective, your information security program must be ever-changing, constantly evolving, and continuously improving. Businesses and the environments they operate in are constantly changing. A business that does not adapt is dead. An information security program that does not adapt is also dead. Your information security program must adjust all of the time.

Perhaps your company hasn't designed and/or implemented an information security program yet, or maybe your company has written a few policies and that was that. When is the right time to implement an information security program? When is the right time to update your existing program? You have the option of being proactive or reactive. Proactive information security is always less expensive. Less expensive is important if your company is into making money.

Where Does Information Security Apply?

You may recall from our definition in "What is Information Security?" that fundamentally information security is the application of administrative, physical, and technical controls in an effort to protect the confidentiality, integrity, and availability of information. In order to gain the most benefit from information security, it must be applied to the business as a whole. A weakness in one part of the information security program affects the entire program. Now we are starting to understand where information security applies in your organization: it applies throughout the enterprise. An information security assessment will help you determine where information security is sufficient and where it may be lacking in your organization.

Hopefully, we cleared up some of the confusion. If you have questions about how to build a security program at your business, learn more at frsecure.com.
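To ground the technical controls listed earlier, here is a minimal access-control-list check in Python; the users, files, and permissions are hypothetical. Note how authorization presupposes authentication: the check only makes sense once the user's identity has already been established.

```python
# Minimal access-control-list check, one example of a "technical
# control." All names here are hypothetical.
ACL = {
    "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
    "incident-response-plan.docx": {"alice": {"read"}},
}

def is_authorized(user, resource, action):
    """Authorization happens after authentication: the caller is
    assumed to have already verified the user's identity."""
    return action in ACL.get(resource, {}).get(user, set())

print(is_authorized("bob", "payroll.xlsx", "write"))    # False
print(is_authorized("alice", "payroll.xlsx", "write"))  # True
```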
By Ali Qamar

In May, a debilitating ransomware attack crippled the U.S. pipeline company Colonial Pipeline. The attack paralyzed their operations and forced the company to shut down its 5,500-mile pipeline. As a result, half of the gasoline supply normally distributed to the East Coast couldn't be delivered. The attack caused panic as people scrambled to find gasoline, resulting in a rise in gas prices throughout the United States.

The attackers were DarkSide, a Russian criminal group. Colonial Pipeline ultimately paid a reported $5 million ransom in bitcoin to DarkSide in return for a decryption key. (Some of that ransom was eventually recovered by the U.S. Department of Justice.) The gasoline shortage persisted for three weeks even after the ransom was paid.

In addition to performing its own attacks, DarkSide operates as a ransomware-as-a-service (RaaS) gang, leasing its malware to others for a cut of the profits from any successful attack. This has opened the door for an exponential increase in attacks. Just what is ransomware-as-a-service, and why has this threat grown so much recently? We're going to give readers an overview of how ransomware-as-a-service works — and why it's become such a threat.

DarkSide emerges — and so does ransomware-as-a-service

DarkSide emerged in August 2020 and went on an unprecedented crime spree. It targeted organizations in more than 15 countries and locked the computer files of hundreds of accounts. The attacks compelled firms to pay large ransoms for decryption keys. DarkSide also threatened victims that it would publish the stolen data online. It was a risky gamble for the cybercriminals, but it paid off for them. As DarkSide increased its technological know-how, it leased out its software to other cybercriminals. DarkSide raked in millions of dollars from its own ransomware attacks and received payments from affiliates using its ransomware. Hence the term ransomware-as-a-service.

Ransomware-as-a-service: Kits for sale

Ransomware-as-a-service is as lucrative, if not more lucrative, than the traditional ransomware business. However, only those with technical skills can build their own kits. Others can buy kits outright from other criminals. Kits are available on the Dark Web for one-time fees or monthly subscriptions. When you buy a paid plan, you may have access to technical documentation and even customer support.

Modern ransomware is a well-paid business. As a result, many developers are turning to the Dark Web to advertise their services. These providers offer upsells such as portals for their clients. These portals allow subscribers to see the level of infection, files encrypted, total payments, and other information about the target.

The ransomware industry is booming. Cybercriminals like DarkSide are creating more and more new opportunities for affiliates. Competition among ransomware software developers leads to progressively refined malware loads. This rivalry also promotes ever-rising demands from hackers looking to make more money. Hackers do not decrypt your data or files until you pay a ransom fee, typically in bitcoin. The total payout to ransomware criminals is expected to top $20 billion worldwide this year and could hit more than $250 billion a year by 2031.

The ransomware-as-a-service future

DarkSide's attack on a critical pipeline raised the ante on the potential damage cybercriminals can inflict. Indeed, attacks on critical infrastructure are growing.

As for DarkSide, expect them to do even more damage in the future. Fortinet's FortiGuard Labs published a report in May that said a new function was discovered in the DarkSide ransomware variant that can target disk partitions. The finding was made before the DarkSide attack on Colonial Pipeline. This new variant can expose and compromise hard drive partitions and detect any hidden files in masked partitions.

How to stay safe from ransomware

We all know that online security is paramount for every Internet user, whether you are an individual or a giant enterprise. Therefore, you need to act carefully while online. For example, protecting your Microsoft Office 365 account from a ransomware attack requires a few safety measures. But many people and companies do not take the necessary precautions against ransomware attacks because they're too lazy to install software, update their OS, or patch known vulnerabilities. Here are a few tips to protect yourself from ransomware:

- Upgrade your devices. Keep your devices constantly updated; never delay the updates, as they might carry crucial patches against malware and vulnerabilities.
- Look out for phishing emails. Malicious actors send emails with links designed to entice recipients to open them and unknowingly give up their data. If a message looks like spam or is from somebody you do not know, be careful.
- Only download from trusted sources. One easy way to avoid getting malware is to download all your software from the official source. You'll be able to prevent dangerous infections if you download it directly from a company's website (see the checksum sketch after this list).
- Apply a firewall. A firewall can keep your data secured and private. As the name suggests, it prevents anyone from accessing your device without authorization by implementing a protective barrier. Firewalls block unauthorized traffic in or out of the system, so anyone who tries to get in will be denied access.
- Use a VPN. While a virtual private network (VPN) is a security and privacy tool, it can also act as a firewall. When connected to a VPN, you connect to an outside network through a VPN server located away from your actual location. Thus, it acts as a "secure gateway" between the Internet and your device, ultimately keeping you and your business safe online.
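One way to harden the "trusted sources" tip above: when a vendor publishes a SHA-256 digest for a download, verify it before running the installer. A minimal sketch; the filename and the expected digest are hypothetical placeholders.

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large installers don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# The expected digest would come from the vendor's official site
# (hypothetical value shown here).
expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
if sha256_of("installer.dmg") == expected:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch: do not run this installer.")
```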
Digital currency (or digital money) uses an Internet-based medium of exchange instead of physical money, i.e. notes and coins. This could be used to buy and sell physical goods and services, or it could be a money substitute that is only accepted within a specific virtual community (e.g. an online game). A crypto-currency uses encryption to secure transactions and generate units of currency: Bitcoin is probably the best-known crypto-currency.
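To make "generate units of currency" concrete: Bitcoin issues new coins to miners who find a nonce that makes a block's hash satisfy a difficulty target. The sketch below is heavily simplified (real Bitcoin hashes a binary block header twice with SHA-256 and compares against a numeric target, not a zero-prefix string), but it shows the brute-force search at the heart of mining.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce so that SHA-256(block_data + nonce) starts with
    `difficulty` zero hex digits -- a toy stand-in for the puzzle
    Bitcoin miners race to solve."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("toy-block: alice pays bob 1 coin")
print(nonce, digest)
```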
We live in an era of ever-increasing cyber security threats and data breaches. As security professionals we know that the question is not whether our systems have been compromised so much as when we will find out they have been, and to what extent. Add to this the aggregation of data sets at geometric rates and the amplified concerns about how well organizations can protect all of the data entrusted to them, and there is clearly a problem.

Data is at the center of all we do. In this ever more digital and interconnected world, simple perimeter security or anti-virus has proven insufficient. Additionally, we are fumbling in the drive to integrate our security program efforts. We all know the axiom that defenders have to be vigilant 24/7, while adversaries only have to be lucky once. This problem is not one of security for security's sake; the fear of a data breach impacts trust in our ability to protect an individual's assets, privacy, and ability to operate their business functions.

It is more important than ever to stay connected and well informed on all topics related to cyber security, information risk management, and privacy concerns. Each of these is interconnected, and that places a requirement on today's cyber security professional to maintain better knowledge of what they are protecting, why they are protecting it, how they are protecting it, and for whom they are providing this protection.

Privacy concerns are obviously interwoven with security concerns; it is unimaginable to have privacy in this environment without strong security. Additionally, organizations like the Electronic Frontier Foundation and the American Civil Liberties Union have joined with other lobby groups to strengthen demands for increased privacy protection in legislation and technology advancement. Legislation creates requirements for stronger governance and equally demanding reporting, which already takes weeks, months, or years to prepare and submit to oversight bodies for both privacy and security.

There will be data breaches. A more robust emphasis will be placed on data loss prevention and tagging data at the appropriate classification levels. This will promote the application of the appropriate controls, add greater confidence in knowing where the data resides, and make it possible to know precisely what has been lost during a breach. Requirements will be placed on the speed of identifying whom, how, and what has been compromised in order to notify the impacted persons and business functions, so that they can address the damage and mitigate the impact. Increased knowledge of the attackers will assist in faster Cyber Kill Chain responses and smarter solutions for internal network defense.

Cyber security concerns will embrace privacy concerns with monitoring, data storage, and data access rights. There will be a stronger push to integrate cyber security with privacy, legal, and human resources in order to ensure that cyber vigilance and insider threat programs are monitored in accordance with policy and legislative requirements. This integration will not be easy, due in large part to the cultural distinctions existing between these highly specialized areas, but organizations will have no choice but to consolidate areas of privacy into their overall cyber security governance models. Full adoption of an information technology risk-based methodology across sections will take a significant amount of awareness and education.

Integration of privacy and cyber security controls and processes, along with the adoption of a risk-based approach, is required, and should be desired for maximum efficacy and efficiency of scale: true enterprise risk management. Organizations like the Electronic Frontier Foundation and the American Civil Liberties Union will not cease to exist but rather grow stronger in their demands for increased privacy protection during this period. In the end, data protection will be applied on all sides of the data. The reason the data exists, namely to enable functional and business operations, will be balanced with a variety of other impactful concerns such as compliance with federal and state information sharing and data laws, and protection of civil liberties and individual privacy rights.

2019 – How do we get here?

There will be data breaches. The complexities of the environment and the human factor practically guarantee this to be true. The damage can be mitigated through early adoption of a holistic cyber security approach, leaning forward on implementation.

Apply a holistic approach; integrate the strategies for both privacy and cyber security to reach a confluence. No matter the maturity of a cyber security or privacy program, there should be a viable combined program strategy and a complementary implementation plan to ensure the goals and objectives within the strategy are being driven to achievement. This solution should drive an increase in the knowledge base through targeted requirements, closer attention to processes and process improvements that support the strategic goals and objectives, and smartly aligned investment in technologies.

Manage the "wet ware." As an organization, we must view the user interacting with our systems as part of the system, so integrate cyber security and privacy training to the greatest extent possible. We cannot forget to proactively educate the public on threats to privacy and to cyber security. All that the public knows is what they may hear from mainstream media, which is practically insufficient. Embracing a planned public awareness program that delivers awareness, teaches, and advises will support a more security- and privacy-aware society.

Shift from prevention solutions to detection tactics, techniques, and procedures. Focus on response to cyber security threats and increase threat assessments to create preliminary damage assessments and swifter response times. Preliminarily evaluate all possible impacts on the security architecture, applicable security laws, and privacy implications. Measure the operational effectiveness of current controls and use Red Teams to simulate an attack, with the defenders prepared to map the attacker's life cycle. How fast do they operate within the cyber kill chain? Measure the performance of the defenders against these simulated impromptu incidents. What was the response time? Did the Red Team achieve their goal before they were noticed? Real-world attackers evolve, and so should we!

It is all about the data. Work with all information owners to properly classify data based on confidentiality, integrity, and availability impact-level determinations, to smartly apply the correct amount of security to the data. Knowledge of what has been compromised is required, and a proper data classification schema supports greater visibility into the enterprise and connected networks.
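One common way to implement the impact-level determination just described is the high-water-mark rule from FIPS 199: the overall classification is the worst of the three CIA ratings. A minimal sketch; the levels and example values are illustrative.

```python
# High-water-mark classification (FIPS 199 style): overall impact is
# the maximum of the confidentiality, integrity, and availability
# ratings. Levels and inputs here are illustrative.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def overall_impact(confidentiality, integrity, availability):
    worst = max(LEVELS[confidentiality], LEVELS[integrity], LEVELS[availability])
    return {v: k for k, v in LEVELS.items()}[worst]

# A data set whose availability loss would be severe is rated high
# overall, even if confidentiality impact is low.
print(overall_impact("low", "moderate", "high"))  # high
```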
We will need to use diligence not only to protect the data's confidentiality, integrity, and availability, but also to understand the data's purpose and to recognize that many data sets require protection to support individuals' civil liberties, thus further supporting a holistic perspective on the management of the data and the systems in which it resides.

Organizational change management is needed. Balance can be achieved, and while not all parties may be happy during the transition, it is a necessity to move toward integration sooner rather than later. It will be more important than ever to provide balanced, secure solutions to support the business of the organization.
If you have completed the previous lessons, you now know why and how to create strong passwords, why you should never share them with anyone or use the same password for different accounts, and how to keep them safe from cyber thieves. However, there are situations when protecting your password proves impossible, in which case it is vital to change it as soon as possible so as not to jeopardize your account and all the data in it.

What are those situations? For example, when your credentials are not stolen from you personally, but leaked from a service where you have an account. Sometimes developers overlook certain security aspects or make errors when setting up the system. Attackers, for their part, never give up trying — they know they only have to be lucky once. Alas, such thefts (aka leaks) are common, and often quite large. One of the biggest and most high-profile occurred in 2013, when hackers gained access to more than two billion Yahoo! mail accounts. Leaks have also hit the databases of well-known services like LinkedIn and Dropbox — that is, a famous name is no guarantee of security.

Such database records often show up for sale on the black market later, while the victim company itself may not immediately realize it has been hacked. Likewise, the hackers do not always seek to hijack the affected accounts straight away, and sometimes that is not their goal at all. Instead, they might spend years monitoring how you use the service in search of tasty morsels that can be sold or used for blackmail or phishing purposes. Or they could use your account to distribute malware or spam — for this they don't need to change the password, and you may not even know that you have a "squatter" in your account. So it's important to detect leaks as soon as possible in order to take action. How?

First off, read emails carefully. Big-name services, with a reputation and more to protect, try to inform affected users of incidents as early as possible. So don't dismiss requests to change your password — it's in your own interest to do so. And don't forget that it is better not to follow a link in an email, but to enter the website address manually. That way you will guard against potential phishing — scammers like to send emails in the name of well-known sites with fake password change requests in an attempt to find out your current password.

However, not all services are equally responsible: some may put off notifying users, try to hide a leak, or simply not know about it themselves. An additional source of information can be messages from cybersecurity experts monitoring the appearance of account credentials on the black market. But there are many such experts, they sometimes write in their own professional jargon, and most of the information will have nothing to do with you.

To make it easier for you to learn about leaks, as well as other cybersecurity issues, we created a special notification service for Kaspersky Security Cloud. Our experts monitor all leaks and hacks, while the solution identifies the ones of relevance to you personally and sends notifications only about them. These messages describe what happened, what the risk is, and what needs to be done in clear and simple terms. They ensure that you change the password promptly, remain alert, and do not swallow the bait of scammers trying to exploit stolen information.

Where possible, it is also better to protect accounts with two-factor authentication — this will complicate matters for a scammer who somehow gets hold of your username and password.

So, your accounts are now protected with strong passwords, and you know to change them in the event of a leak. Cybercriminals will not be able to get inside and learn things you'd rather keep private. Now it's time to find out if you yourself are revealing more than you should. In the following lessons, together we'll check whether your pages on popular social networks and other services are a target for prying eyes.
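Beyond vendor notifications, you can also check whether a password has appeared in known leaks yourself. The sketch below queries the third-party Pwned Passwords API from Have I Been Pwned, which is designed around k-anonymity: only the first five hex characters of the SHA-1 hash ever leave your machine. Note that it checks the password itself, not whether your specific account was caught in a particular breach.

```python
import hashlib
import urllib.request

def times_pwned(password):
    """Return how many times a password appears in known breach data,
    using the Pwned Passwords k-anonymity range API."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_pwned("password123"))  # a very large number, unsurprisingly
```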
The remote procedure call, or RPC, might be the single most important invention in the history of modern computing. The ability to reach out from a running program and activate another set of code to do something — get data or manipulate it in some fashion — is a powerful and pervasive concept, and has given rise to modular programming and the advent of microservices. In a world that is so unlike the monolithic code of days gone by, latency between code chunks and elements of the system running across a cluster means everything. And reducing that latency has never been harder. But some innovative researchers at Stanford University and Purdue University have come up with a co-designed network interface card and RISC-V processor that provides a fast path into the CPUs, one that can significantly reduce the latencies of RPCs and make them more deterministic at the same time.

This research, which was presented at the recent USENIX Symposium on Operating Systems Design and Implementation (OSDI '21), shows how a nanoPU hybrid approach might be the way forward for accelerating at least a certain class of RPCs — those with very small message sizes and usually a need for very small latencies — while leaving other classes of RPCs to use the normal Remote Direct Memory Access (RDMA) path that has been in use for several decades and that has been pushed to its lower latency limits.

Stephen Ibanez, a postdoc at Stanford University who studied under Nick McKeown — the co-developer of the P4 network programming language and the co-founder of Barefoot Networks who is now general manager of the Network and Edge Group at Intel — presented the nanoPU concept at the OSDI '21 conference. He explained why it is important and perhaps blazes a trail for other kinds of network acceleration for all manner of workloads in the future.

Here is the problem in a nutshell. Large applications hosted at the hyperscalers and cloud builders — search engines, recommendation engines, and online transaction processing applications are but three good examples — communicate using remote procedure calls, or RPCs. The RPCs in modern applications fan out across these massively distributed systems, and finishing a bit of work often means waiting for the last bit of data to be manipulated or retrieved. As we have explained many times before, the tail latency of massively distributed applications is often the determining factor in the overall latency of the application. And that is why the hyperscalers are always trying to get predictable, consistent latency across all communication across a network of systems, rather than trying to drive the lowest possible average latency and letting tail latencies wander all over the place.

The nanoPU research set out, says Ibanez, to answer this question: What would it take to absolutely minimize RPC median and tail latency as well as software processing overheads?

"The traditional Linux stack is terribly inefficient when it comes to managing tail latency and small message throughput," Ibanez explained in his presentation, which you can see here. (You can download all of the proceedings and really hurt your head at this link.) "And the networking community has realized this and has been exploring a number of approaches to improve performance."

These efforts are outlined in the sections that follow. The definition of wire-to-wire latency used in the nanoPU paper, to be crystal clear, for any of these methods is the time from the Ethernet wire to the user space application and back to the wire.
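Before walking through those approaches, it helps to see why the article keeps separating median from tail latency. Here is a toy, self-contained sketch that times round trips for a tiny echo "RPC" over localhost UDP and reports the 50th and 99th percentiles. The absolute numbers are host-dependent and far above hardware wire-to-wire figures; the point is the shape of the distribution, where p99 typically sits far above p50.

```python
import socket
import threading
import time

def echo_server(sock):
    # Trivial echo loop standing in for an RPC handler.
    while True:
        data, addr = sock.recvfrom(64)
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # OS picks a free port
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
samples = []
for _ in range(5000):
    t0 = time.perf_counter_ns()
    client.sendto(b"x" * 8, server.getsockname())  # 8 B "RPC" payload
    client.recvfrom(64)
    samples.append(time.perf_counter_ns() - t0)

samples.sort()
print("p50:", samples[len(samples) // 2], "ns")
print("p99:", samples[int(len(samples) * 0.99)], "ns")
```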
There are custom dataplane operating systems with kernel bypass techniques. But the software overhead of trying to get a specific RPC onto a specific available CPU thread is onerous, and it ends up being very coarse-grained. You can get a median latency of between two microseconds and five microseconds, but the tail latencies can be anywhere from 10 microseconds to 100 microseconds. So this doesn't work for fine-grained tasks like those that are becoming more and more common with RPCs in modern applications.

There are also specialized RPC libraries that reduce latencies and increase throughput — eRPC out of Carnegie Mellon University and Intel Labs is a good example — but Ibanez says that they don't rein in tail latencies enough, and that is an issue for overall performance. So there are other approaches that offload the transport protocol to hardware but keep the RPC software running on the CPUs (such as Tonic out of Princeton University), which can boost throughput but which only address part of the problem. And while it is great that many commodity network interface cards (NICs) support the RDMA protocol, and it is also great that the median wire-to-wire latency for RDMA can be as low as 700 nanoseconds, the issue is that bazillions of small RPCs flitting around a distributed computing cluster need remote access to compute, not memory.

We need, therefore, Remote Direct Compute Access. (That is not what Ibanez called it, but it is what we are calling it.) And to do that, the answer, as the nanoPU project shows, is to create a fast path into the CPU register files themselves and bypass all of the hardware and software stack that might get in the way. To test this idea, the Stanford and Purdue researchers created a custom multicore RISC-V processor and an RPC-goosed NIC and ran some tests to show this concept has some validity.

This is not a new concept, but it is a new implementation of the idea. The Nebula project, mentioned above, did this by integrating the NIC with the CPU, and got median latencies down into the range of 100 nanoseconds, which is very good; tail latencies were still at two microseconds to five microseconds (what dataplane operating systems do for their median latencies), but the Stanford and Purdue techies said there was more room to drive latency and throughput. (The Nebula NIC came out of a joint effort between EPFL in Switzerland, Georgia Tech in the United States, and the National Technical University of Athens, working with Oracle Labs.)

Here is what the nanoPU looks like: it implements multicore processors with hardware thread schedulers, as you might imagine, each with their own dedicated transceiver queues, integrated with the NIC, and, like Tonic, it has a programmable transport implemented in the NIC hardware. The DMA path from the hardware transport in the NIC to the last level cache or main memory of the CPU is still there, for those applications that have much longer latencies and do not need the fast path directly from the network into the CPU registers. The hardware thread schedulers operate on their own and do not let operating systems do the scheduling, since that would add huge latencies to go all the way up the OS stack and back down again. The nanoPU prototype, implemented in FPGAs, is simulated to run at 3.2GHz, which is about what a top-speed CPU does these days, and uses a modified five-stage "Rocket" RISC-V core. The wire-to-wire latency comes in at a mere 69 nanoseconds, and a single RISC-V core can process 118 million RPC calls per second.
(That's with 8 B messages in a 72 B packet.) Running one-sided RDMA with legacy applications, the nanoPU compares favorably against a NIC supporting RDMA. We presume the comparison NIC is InfiniBand, but do not know the vintage; it could be Ethernet with RoCEv2. The researchers also benchmarked the nanoPU running the MICA key-value store, which also came out of Carnegie Mellon and Intel Labs. On this key-value store, the nanoPU can boost throughput compared to traditional RDMA approaches as well as significantly lower latencies — and do both at the same time.

It is looking like we may need a new standard to allow all CPUs to support embedded NICs without making anyone commit to any specific embedded NIC. Or, more likely, each CPU vendor will have its own flavor of embedded NIC and try to tie the performance — and the revenue — of the two together. The market always goes proprietary before it goes open. That's the best way to maximize sales and profits, after all.

By the way, the nanoPU work was funded by Xilinx, Google, Stanford, and the US Defense Advanced Research Projects Agency.

And one final thought: Would not this approach make a fine MPI accelerator? Way back in the day, according to the nanoPU paper, the J-Machine supercomputer and then the Cray T3D had support for low-latency, inter-core communication akin to an RPC, but because they required atomic reads and writes, it was hard to keep threads from reading and writing things that they were not supposed to. The integrated thread scheduler in the nanoPU seems to solve this problem, with its register interface. Maybe all CPUs will have register interfaces one day, like they have integrated PCI-Express and Ethernet interfaces, among others.
As is the case with any valuable resource, there must be limitations on who can access and use your wireless medium. In some situations, such as when offering wireless access to attract customers, these limitations will be minimal. In others, we want the greatest possible protection available. Controlling access to computer resources is best illustrated by the AAA framework: Authentication, Authorization, and Accounting.

Authentication is the ability to identify a system or network user through the validation of a set of assigned credentials. If you have ever been prompted for a username and password when turning on your computer, you have experienced authentication first hand. Authorization defines the ability of a specific user to perform certain tasks, such as deleting or creating files, after the authentication process has taken place. Finally, accounting allows us to measure and record the consumption of network or system resources. The AAA framework lends itself well (as it does to any computer resource) to wireless network access control.

RADIUS

Based on the AAA framework, RADIUS is a popular client-server approach for authenticating remote users. In order to do this, the RADIUS protocol challenges, or prompts, end users for their credentials through a Network Access Server, or NAS. The NAS is actually a client of a RADIUS server, which centrally controls user access to its clients' (the NAS) services. A RADIUS server is responsible for receiving end user requests, authenticating the user, and then providing the NAS with all of the information necessary for it to deliver services. RADIUS can use several database management systems and directory protocols to manage the list of network users and their privileges. As you can see, this method of authentication provides a secure and centralized way to control access to network resources. But what does it have to do with wireless networking?

EAP

Extensible Authentication Protocol (EAP) is used by wireless access points to facilitate authentication. When a user requests access to an AP, EAP (if enabled) will challenge the user for his or her identity. EAP then passes the credentials to an authentication server such as RADIUS, which will allow or deny access to its resources. EAP can be easily implemented because it can be used with a back-end authentication server such as RADIUS, and it supports multiple authentication methods such as Kerberos and Public Key Infrastructure (PKI). There are several different types of EAP, which employ different methods of passing authentication information, but for our purposes it is only important to know that EAP is the component of the authentication process that lies on the wireless tier.

LDAP

Lightweight Directory Access Protocol, or LDAP, is a straightforward technology that defines the way information is organized and accessed. As a protocol, it is inherently a set of rules for communication. By implementing LDAP, network administrators can centralize and secure user information for easy management. LDAP can work in conjunction with RADIUS in order to authenticate users (see the sketch below).

RADIUS, EAP, and LDAP: Solid Wireless Authentication

Though there are other solutions for authenticating wireless clients, the combination of RADIUS, EAP, and LDAP is the most common and available solution in use in business today. Each component has associated open-source software that is freely available for network administrators to download, configure, and use. Thus, with the hardware in place, installation of an authentication system is inexpensive.
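As a concrete taste of the LDAP leg of this stack, here is a minimal sketch that validates a user's credentials with a simple bind, using the third-party Python ldap3 library (pip install ldap3). The host, base DN, and credentials are hypothetical placeholders; in a real deployment the RADIUS server, not application code, would typically perform this lookup.

```python
from ldap3 import Server, Connection, ALL

def ldap_authenticate(username, password):
    """Attempt a simple bind as the user; success means the directory
    accepted the credentials. Host and DN layout are hypothetical."""
    server = Server("ldap://directory.example.com", get_info=ALL)
    user_dn = f"uid={username},ou=people,dc=example,dc=com"
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()  # returns True only on successful authentication
    conn.unbind()
    return ok

print(ldap_authenticate("alice", "s3cret"))
```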
There are other authentication frameworks and methods that you can employ, and they perform in different ways. Another popular method is NoCat, which was initially developed as a community project: an amateur wireless network authentication scheme that does not require a time- and resource-consuming RADIUS server and user database setup. NoCat uses a wireless access point and a Linux router or gateway box to control access. Whatever authentication method you decide to employ, if you decide to employ one at all, remember that the number one goal is to protect your valued resources effectively and within your specific business's constraints.
With IBM leading in deep learning AI technology at scale – and one of the most visible players in quantum computing research – a lot of us were wondering when we'd see a presentation from the company combining the two technologies. Well, this week that wondering ended as IBM briefed us on what appeared to be the beginning of a new hybrid computer, one that combines the power of Watson with the power of IBM's quantum effort to create something very new and different.

What made this particularly interesting is that the two technologies are very different. Watson largely sprang out of a neural networking effort, partially focused on emulating the human brain, and initially won game shows as a showcase. So, at its heart, Watson is kind of an electronic computer emulating an organic computer. But quantum technology is vastly different: it really didn't even come from the technology market but from physics theory. And it not only has little in common with existing computers, it has pretty much nothing in common with organic computers – meaning that combining the two technologies is monumentally difficult. But IBM has apparently figured out a path to this future. Let's talk this week a bit about what that means.

The Quantum Computer Superpower

We had a lot of issues when we moved from single-core processors to multi-core solutions. The problem was that most programs were meant to execute sequentially, which meant that when you put them on a typical multi-core computer you pegged one of the cores, leaving the rest idling unused. It took us a while to figure out how to write and rewrite code so that it could be executed in parallel, and performance jumped dramatically.

The change to quantum computing makes that arduous process look exceedingly simple in comparison, because quantum computers deal with data far differently. Their computing elements don't even have the same states, which both allows for more flexibility and creates a huge problem with regard to writing optimized code, because few coders understand this difference. Even taking a simple program and converting it would be problematic. And current thought is that you'd generally have to start from scratch with one of the handful of folks who might be able to write code for this platform, creating an application vastly different from anything that had been seen before.

Now, what motivates you to do this is that the processing potential of a quantum computer is massively higher than anything we have yet experienced, and its potential performance growth rate makes Moore's law look frozen in place in comparison. This means taking a platform like Watson and converting it to run optimally on a quantum computer will likely be beyond our skill set for the foreseeable future. But you could create a hybrid, and have the quantum computer do a task that the AI doesn't do well, allowing the AI to perform more quickly.

Think of it like a turbocharger for a car. A turbocharger is a compressor, closer to a jet engine in design, and turbines alone didn't work in cars. But tied into an engine, it compresses the air/fuel mixture and makes the car far faster. Together they are better than they are separately, and that is similar to what I'm talking about here. If a quantum computer can turbocharge Watson, the result should be a significant performance boost.

The Quantum AI Turbocharger

One of the things IBM has discovered that quantum computers do very well is structure unstructured data like images. They can, with an incredibly high degree of accuracy, sort highly complex objects. Now, Watson needs to be able to make decisions from unstructured data, and traditional CPUs typically aren't great, from a performance standpoint, at structuring unstructured data. That is why we use GPUs instead for high-performance efforts. Watson uses a lot of GPUs in its most advanced form, but quantum computers are potentially far faster than a GPU. Thus, combining the two systems should result in vastly faster unstructured data analysis. This won't obsolesce GPUs, because they will still be needed in the decision-making process. But now they will be fed data at far higher speeds than they could have previously accepted and, much like compressing the charge in a high-performance engine, the result should be a vastly more capable system.

Now, there apparently needs to be an intermediate computer that bridges what the quantum computer structures and what the AI accepts. That intermediate computer is called a NISQ (noisy intermediate-scale quantum) computer, and it creates something like a translation bridge between the quantum computer and the AI. But the result should be massively beyond, in terms of both data complexity and performance for unstructured data, what a classical computer can accomplish.

AI is by nature performance limited, particularly with regard to unstructured data. What IBM proposes is kind of a quantum turbocharger for unstructured data, which could be applied not only to AIs but to any computer solution that uses unstructured data, using a two-step approach: the quantum computer structures the data so it can be consumed more quickly, a NISQ computer further organizes the data so it can be consumed at the primary computer's full speed, and the primary computer, which may or may not be an AI, does the rest. The result should be, according to IBM, exponentially faster than anything we have on the market today: a true revolutionary game changer. If you think things are moving fast now, just wait until this puppy ships in the mid-to-late 2020s.
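As a small aside on why quantum code is so different: the point that quantum computing elements "don't even have the same states" can be seen in a few lines of Qiskit. This is a minimal, illustrative sketch (assuming the qiskit and qiskit-aer packages are installed), not anything resembling IBM's Watson integration: it entangles two qubits, whose joint state has no classical-bit equivalent until measurement.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Two qubits in a Bell state: a superposition of 00 and 11.
qc = QuantumCircuit(2, 2)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1000).result().get_counts()
print(counts)  # roughly half '00' and half '11', never '01' or '10'
```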
When it comes to the forensic investigation of Apple devices, a Keychain analysis is of particular importance. Not only does Keychain contain passwords from websites and applications, but it can also provide computer forensics with access to the same user’s other Apple devices. Let’s take a closer look. Types of Keychains Keychain or Keychain Services is the password management system in macOS and iOS. It stores account names, passwords, private keys, certificates, sensitive application data, payment data, and secure notes. These records are dynamically linked to users’ particular login passwords so that, when they log on to a Mac device, all of their various accounts and passwords are made available to the operating system and select applications. The Keychain storage is located in: - ~/Library/Keychains/ (and subfolders) There are three types of Mac Keychains: Login Keychain, System Keychain, and Local Items (iCloud) Keychain. The Login Keychain is the default Keychain file that stores most of the passwords, secure notes, and other data. The data is stored in a file named login.keychain located in /Users/<UserName>/Library/Keychains. By default, the Login Keychain password is the same as the Mac user password. The password recovery process for this Keychain is time-consuming, but it can be accelerated by using GPU, reaching speeds of up to 1,200,000 passwords per second on an AMD 6900 XT. The System Keychain stores items that are accessed by the OS, such as Wi-Fi passwords, and shared among users. The file, which is usually located in /Library/Keychains/, can be decrypted instantly if a “Master Key” file is available (usually located in /private/var/db/SystemKey). Local Items (iCloud) Keychain The Local Items Keychain is used for keychain items that can be synced with iCloud Keychain. It contains encryption keys, applications data, webform entries, and some iOS data synced with iCloud. It presents two files: a keybag (user.kb file) and an SQLite database with encrypted records (keychain-2.db). If the iCloud synchronization is turned on, the keychain-2.db may contain passwords from other devices as well. Passware Kit recovers a password for the user.kb file and then decrypts the keychain-2.db database. By default, the user.kb password is the same as the macOS user password. To recover the user.kb password on a Mac without a T2 chip, Passware Kit requires the 128-bit universally unique identifier number (UUID), which is the same as the name of the Keychain folder. Unfortunately, the password recovery for Local Items Keychain cannot be accelerated on GPU. After the successful recovery of a password, Passware Kit extracts all records that appear readable and saves the rest of the data in a file. Strings shorter than 128 symbols are considered passwords and saved to a Passwords.txt file, while json and bplist binary files are extracted as-is. Passware Kit also creates an extracted-records.json file with the complete extracted data. It is extremely important to analyze as many Apple devices linked to the same iCloud account as possible. A decrypted Keychain from one device can gain entry into a device with stronger encryption, such as a Mac with a T2 chip. If there are no additional devices to extract the Keychain from, Passware offers a T2 Decryption Add-on to decrypt APFS disks from Mac computers protected with an Apple T2 security chip. Read the full article to see the cases in which Passware Kit facilitates the extraction of data from locked devices. 
To summarize the decryption and password recovery options for each type of Keychain:
- Login Keychain: password recovery required; by default the password matches the macOS user password, and the attack can be accelerated on a GPU.
- System Keychain: instant decryption if the SystemKey "Master Key" file is available.
- Local Items (iCloud) Keychain: recovery of the user.kb password (by default the macOS user password), which cannot be GPU-accelerated; on Macs without a T2 chip, the Keychain folder's UUID is also required.

A comprehensive forensic investigation involves the analysis of multiple devices and artifacts. Starting from the least-secure sources (e.g., memory images, iTunes backups, and Macs without a T2/M1 chip), Passware Kit extracts and decrypts a Keychain that can then be used to access data from other devices.
<urn:uuid:2b9861e9-8274-40ef-b274-cc6ec34324c7>
CC-MAIN-2022-40
https://www.forensicfocus.com/articles/a-deep-dive-into-apple-keychain-decryption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00441.warc.gz
en
0.905639
901
2.71875
3
SIM Swap Fraud is on the Rise. These Tips Will Help You Avoid It

Like many other types of cyber crime, SIM swap attacks have been on the rise in recent years. You may have heard of SIM swapping when Twitter CEO Jack Dorsey was targeted in 2019, which made major headlines. After all, if the CEO of a major social media company could fall prey to such an attack, then how can the average person prevent it from happening to them?

Despite the dangers, using the Internet safely and confidently is possible. It's all about educating yourself and taking proactive measures to protect your data. Here's everything you should know about SIM swapping fraud and how to prevent it from happening to you.

What is SIM swapping?

SIM swapping fraud involves a few steps. First, a cyber criminal acquires private information about a victim, typically by impersonating their phone service provider via phishing emails or fraudulent phone calls. They may also buy this information on the dark web if the targeted individual has been involved in a data breach, and many people have, whether they know it or not.

Next, the scammer calls the victim's mobile carrier, using the stolen personal data to impersonate the victim and report their phone's SIM card as stolen or missing. While many phone service providers require customers to use PINs on their accounts in order to prevent fraudulent account access, SIM swap perpetrators often insist they've forgotten their PIN if they weren't able to retrieve this code when stealing the victim's personal information. If successful, the attacker convinces the mobile carrier to transfer the victim's phone number to a new SIM card in their possession. There have even been cases in which employees of phone providers have collaborated with criminals to perpetrate SIM swap fraud.

Dangers of SIM swapping

Here's why SIM swapping is dangerous for victims: it enables the attacker to access any of your accounts that use SMS or phone call verification, by allowing them to request password resets or bypass two-factor authentication (2FA) and multi-factor authentication (MFA). 2FA and MFA are security measures commonly implemented on online accounts, especially those containing sensitive or financial information. They offer an extra line of defense against cyber attacks so that if your password is compromised, there's another layer of authentication to prevent hackers from accessing your accounts.

However, SIM swap attacks exploit a critical weakness in 2FA and MFA. Because these cyber security protocols often rely on SMS or phone call authentication, a scammer performing a SIM swap can bypass this added protection and do things like access individuals' bank accounts to steal money, or sell access to those accounts to other bad actors on the dark web. If you're the target of SIM swap fraud, you could not only lose access to your accounts, but you could also have your personal data leaked, your social media accounts hacked, your cryptocurrency transferred out of your digital wallet, or money stolen from your bank account.

How to tell if you've been SIM swapped

If your SIM card suddenly becomes inactive, then you could be the victim of a swap. The most immediate effect you'll notice if your SIM is deactivated is that you unexpectedly and completely lose service on your mobile phone and are unable to send or receive texts and calls. You might also get a text message alerting you that the SIM card for your phone number has been changed.
If either of these things happens but you did not request a new SIM, then you should call your phone service provider immediately and take steps to protect your online accounts; more details on this below.

How common is SIM swapping?

Unfortunately, like many digital crimes, SIM swap attacks are increasing in frequency. According to the FBI, these types of cyber attacks resulted in more than $68 million in losses in the US in 2021 alone, a significant escalation compared to prior years. A huge cyber attack on T-Mobile in 2021 compromised the PINs, SSNs, and other sensitive personal information of current and former T-Mobile customers. This particular attack leaked the data of millions of people, opening the way for SIM swap fraud and showing that anyone with cell phone service could be susceptible to an attack of this kind.

There have even been high-profile victims of this type of fraud, most notably Twitter CEO Jack Dorsey. It was this particular event in 2019 that brought public awareness to SIM swapping, because a group of attackers actually took control of Dorsey's own Twitter account via SIM swap fraud, tweeting out offensive messages to his more than 4.2 million followers. In short, anyone, even CEOs or celebrities, can be the victim of this type of cyber attack, and the associated costs are often high. That's why it's worth taking proactive measures to protect yourself.

How to protect against SIM swapping

Although SIM swapping and other types of cyber attacks are on the rise, you aren't helpless. There are certain actions you can take to strengthen your digital privacy and security, and they're all simple enough for anyone with any level of digital proficiency to take on.

Beware of phishing attempts

This one is simple: be careful what you click on. Phishing is a very common entry point for perpetrators of SIM swap attacks. Hackers have gotten very good at sending fraudulent messages that seem legitimate, including text messages or emails that appear to be from your mobile carrier, bank, or another account. These messages may include links to what appears to be a legitimate login screen for one of your accounts but that actually captures and steals your login credentials. They may also include links that install malware onto your device that can steal your passwords and other sensitive data, or personal questions that help a scammer to answer the security questions on your accounts.

Whatever form they take, these types of phishing messages can give attackers everything they need to commit SIM swap fraud. Always verify that the email address or phone number you've gotten a message from matches the one in your contacts or on the business's site. As an added security measure, simply avoid clicking on links received in messages. Instead, go directly to the desired website in your browser, log in, then locate the page you need.

Don't post personal information online

Avoid posting personal details online that could help bad actors guess answers to security questions on your accounts. This recommendation holds true even if you have private social media accounts or if you're sending information via direct messages, because large social media companies can and have experienced data breaches, and because your accounts could be hacked. This includes practicing caution when sharing photos online or via direct message.
For example, a picture of your driver's license or vaccination card may show your address or date of birth, which can give an attacker the information they need to hack your accounts or commit SIM swap fraud. Additionally, don't make any details of your financial assets public online, including any investments, cryptocurrency, and the like. Advertising these assets can draw unwanted attention and make you a target for cyber attacks.

Protect your cellular account

Because your cell phone number can be a single point of failure when it comes to your cyber security, it's important to add multiple layers of protection to this account. Most mobile carriers offer various options for account protection, but there are also additional steps you can take. Here are some of the best practices for protecting your cellular account:
- Use a strong, unique password. Password security is essential for all your accounts, but especially for sensitive accounts like your cell provider and bank.
- Add a PIN or passcode. Most cell carriers provide a built-in option for a second layer of protection on your account, usually a PIN or passcode. Always opt to add this layer of security, but be sure not to make this code too easy to guess (avoid numbers from your home address, phone number, or SSN), and make sure not to share the number through social media, text, or email.
- Be careful with your information. If you receive a phone call from someone claiming to be with your phone provider, you shouldn't volunteer personal details like your PIN or login credentials. Instead, hang up and call your mobile carrier's customer service line to verify that you aren't sharing access to your account with a scammer.

Strengthen your authentication

SIM swapping exploits a significant vulnerability of 2FA and MFA protections, but this doesn't mean you should forsake authentication altogether. Instead, opt for layered protections that don't involve SMS or phone call verification, such as one-time codes generated by an authenticator app on the device itself (a minimal sketch of how such codes work follows below). Biometrics are the future of authentication, and biometric-based authentication offers an easy way to secure your accounts from attackers. Because this type of protection relies on something that can't be taken from you in phishing or SIM swap attacks, i.e., your face, fingerprints, or other biomarkers, you can navigate the Internet with more confidence when you have biometrics in place to protect your personal data.

What to do if you've been SIM swapped

If you're the target of SIM swap fraud, take action immediately. Here are the next steps to take as soon as you become aware of an attack:
- Immediately contact your cell phone service provider to recover control of your number.
- Change passwords on all your accounts, making sure to use strong, unique passwords for each account. Implement authentication that doesn't rely on SMS or phone call verification, such as biometrics, physical security tokens, or separate authentication apps.
- Contact your bank(s) and other financial institutions to place alerts on all accounts, monitoring for suspicious activity, unknown login attempts, or fraudulent transactions.
- Reach out to local law enforcement or your local FBI field office to report all suspicious activity. Additionally, report the information to the FBI's Internet Crime Complaint Center.

Being the victim of any type of fraud or cyber attack is scary, and it can be challenging to regain your confidence.
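On the authenticator-app option mentioned above: app-generated one-time codes are derived from a shared secret and the clock, not from your phone number, so a SIM swap cannot intercept them. Here is a minimal sketch of the standard TOTP algorithm (RFC 6238) in Python; it is illustrative only, and the example secret is made up.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, step=30, digits=6):
    """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step            # moving factor: the clock
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

# Example secret (made up); real secrets come from the service's setup QR code.
print(totp("JBSWY3DPEHPK3PXP"))

Nothing in the computation involves the phone number or the SIM, which is exactly why this form of 2FA survives a SIM swap.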
But if you’re the victim of SIM swapping, there are actions you can take to recover your accounts and measures you can enact to make sure it never happens again.
<urn:uuid:745110ec-a2c7-47ea-b71e-1ade947f2cc0>
CC-MAIN-2022-40
https://blog.ironvest.com/sim-swap-fraud-is-on-the-rise.-these-tips-will-help-you-avoid-it
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00441.warc.gz
en
0.915541
2,113
2.53125
3
As the uses for High Performance Computing (HPC) continue to expand, the requirements for operating these advanced systems have also grown. The drive for higher-efficiency data centers is closely tied to a building's power usage effectiveness (PUE, defined as total facility energy divided by IT equipment energy). HPC clusters and high-density compute racks are consuming power densities of up to 100kW per rack, sometimes higher, with an estimated average density of 35kW per rack. Building owners, co-location facilities, enterprise data centers, webscale companies, governments, universities and national research laboratories are struggling to upgrade cooling infrastructure, not only to remove the heat generated by these new computer systems but also to reduce or eliminate their effects on building energy footprints and PUE.

The new and rapid adoption of Big Data HPC systems in industries such as oil and gas research, financial trading institutions, web marketing and others is further highlighting the need for efficient cooling, because the majority of the world's computer rooms and data centers are not equipped to handle the heat loads generated by current and next-generation HPC systems. If one considers that 100 percent of the power consumed by an HPC system is converted to heat energy, it's easy to see why removing this heat in an effective and efficient manner has become a focal point of the industry. New high-performance computer chips allow an HPC system designer to develop special clusters that can reach 100kW per rack and exceed the capacity of almost all server cooling methods currently available.

Submersion cooling

Submersion cooling systems offer baths or troughs filled with a specially designed nonconductive dielectric fluid, allowing entire servers to be submerged in the fluid without risk of electrical conductance across the computer circuits. These highly efficient systems can remove up to 100 percent of the heat generated by the HPC system. The heat, once transferred to the dielectric fluid, can then be easily removed via a heat exchanger, pump and closed-loop cooling system. To apply this approach, the traditional data center is typically renovated to accept the new submersion cooling system. Legacy cooling equipment such as CRACs, raised flooring and vertical server racks are replaced by the submersion baths and an updated closed-loop warm-water cooling system. These baths lie horizontally on the floor, which provides a new vantage point for IT personnel, although at the cost of valuable square footage. Servers are modified either by the owner or a third party by removing components that would be negatively affected by the dielectric fluid, such as hard drives, and other components that might not be covered under original equipment manufacturer (OEM) warranties. Special consideration should be paid to future server refresh options, considering that such a monumental shift in infrastructure greatly limits OEM server options in the future and limits the overall uses of a server room dedicated solely to submersion cooling technology.

While submersion cooling offers extreme efficiencies for the world's most extreme HPC systems, the general scarcity of such HPC systems, combined with the required infrastructure upgrades and maintenance challenges, poses an issue for market-wide acceptance at this time.

Direct-to-chip and on-chip cooling

Direct-to-chip or on-chip cooling technology has made significant advances recently in the HPC industry.
Small heat sinks are attached directly to the computer CPUs and GPUs, creating high-efficiency, close-coupled server cooling. Up to 70 percent of the heat from the servers is collected by the direct-to-chip heat sinks and transferred through a system of small capillary tubes to a coolant distribution unit (CDU). The CDU then transfers the heat to a separate closed-loop cooling system to reject the heat from the computer room. The balance of the heat, 30 percent or more, is rejected to the existing room cooling infrastructure.

The warm-water cooling systems commonly used for direct-to-chip cooling are generally systems that don't utilize a refrigeration plant, such as closed-loop dry coolers (similar to large radiators) and cooling towers. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) classifies the water they produce as "W3" or "W4", ranging from 2°C to 46°C (36°F to 115°F). These systems draw significantly less energy than a typical refrigerated chiller system and provide adequate heat removal for direct-to-chip cooling systems, as those can operate with cooling water supply temperatures in the W3-W4 range. Direct-to-chip cooling solutions can also be used to reclaim low-grade water heat that, if repurposed and used correctly, can improve overall building efficiencies and PUE. The advantages of this form of heat recovery are limited by the ability of the building's HVAC system to accept it. HVAC building design varies around the world. Many parts of Europe can benefit from low-grade heat recovery because of the popular use of water-based terminal units in most buildings. In contrast, most North American HVAC building designs use central forced-air heating and cooling systems with electric reheat terminal boxes, leaving little use for low-grade heat recovery from a direct-to-chip or on-chip cooling system. The feasibility of distributing reclaimed warm water should also be studied in conjunction with the building's hydronic infrastructure prior to use.

A recent study performed by the Ernest Orlando Lawrence Berkeley National Laboratory, titled "Direct Liquid Cooling for Electronic Equipment", concluded that the best cooler performance achieved by a market-leading direct-to-chip cooling system reached 70 percent under optimized laboratory conditions. This leaves an interesting and possibly counterproductive result for such systems, because large amounts of heat from the computer systems must still be rejected to the surrounding room, which must then be cooled by more traditional, less efficient means such as computer room air conditioners (CRACs) and/or computer room air handlers (CRAHs). To better understand the net result of deploying a direct-to-chip or on-chip cooling system, one must consider the HPC cluster as part of the overall building energy footprint, which can then be directly tied to the building PUE. Given that a 35kW HPC rack with direct-to-chip cooling will reject at least 10.5kW (30 percent) of its heat to the computer room, and that an average HPC cluster consists of six compute racks (excluding high-density storage arrays), a direct-to-chip or on-chip cooling system will reject at least 60kW of heat load to a given space.
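The arithmetic behind those figures is easy to check. The short Python sketch below uses the article's own numbers; the script itself is only an illustration.

# Residual heat from direct-to-chip cooling, per the figures above.
rack_power_kw = 35.0        # average HPC rack density
capture_ratio = 0.70        # best-case heat captured at the chip
racks_per_cluster = 6       # typical cluster, excluding storage arrays

residual_per_rack = rack_power_kw * (1 - capture_ratio)    # 10.5 kW
cluster_residual = residual_per_rack * racks_per_cluster   # 63 kW, i.e. "at least 60 kW"
print(f"Residual heat per rack: {residual_per_rack:.1f} kW")
print(f"Residual heat per cluster: {cluster_residual:.1f} kW")

# That residual load must be removed by room-level CRAC/CRAH units,
# which erodes the efficiency gain and shows up in the facility PUE
# (total facility energy / IT equipment energy).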
Utilizing the most common method of rejecting this residual heat, by CRAC or CRAH, results in a significant setback to the original efficiency gains. Additional challenges arise when considering the actual infrastructure needed inside the data center, and more importantly inside the server rack itself, with an on-chip cooling system. In order to bring warm-water cooling to the chip level, water must be piped inside the rack through a large number of small hoses, which in turn feed the small direct-to-chip heat exchangers/pumps. This leaves IT staff looking at the back of a rack filled with large numbers of hoses, as well as a distribution header for connecting to the inlet and outlet water of the cooling system. Direct-to-chip cooling systems are tied directly to the motherboards of the HPC cluster and are designed to be more or less permanent. Average HPC clusters are refreshed (replaced) every 3-5 years, typically based on demand or budget. With that in mind, one must consider the costs of replacing the direct-to-chip or on-chip cooling infrastructure along with the server refresh. Direct-to-chip cooling offers significant advances in efficiently cooling today's high-performance computer clusters; however, once put into the context of a larger computer room or building, one must consider the net result for overall building performance, cost implications and useful life on total ROI.

Active rear door heat exchangers

Active rear door heat exchangers (ARDH) have grown in popularity among those manufacturing and using HPC clusters and high-density server racks. An ARDH's ability to remove 100 percent of the heat from the server rack with little infrastructure change offers advantages in system efficiency and convenience. These systems are commonly rack-agnostic and replace the rear door of any industry-standard server rack. They utilize an array of high-efficiency fans and tempered cooling water to remove heat from the computer system. The electronically commutated (EC) fans work to match the servers' air flow rate in CFM to ensure all heat is removed from the servers. An ARDH uses clean water or a glycol mix between 57°F and 75°F, which is often readily available in most data centers and, if not, can be produced by a chilled-water plant, closed-loop cooling systems such as cooling towers and dry fluid coolers, or a combination of these systems. Utilizing an ARDH allows high-density server racks to be installed in existing computer rooms, such as co-location facilities or legacy data centers, with virtually no infrastructure changes and zero effect on surrounding computer racks.

Active rear door heat exchangers are capable of removing up to 75kW per compute rack, offering users large amounts of scale as clusters go through multiple refresh cycles. Once deployed, these systems also offer advantages to owners by monitoring internal server rack temperatures and external room temperatures, ensuring that a heat-neutral environment is maintained. Recent laboratory tests by server manufacturers have found that the addition of an ARDH actually reduces the fan power consumption of the computers inside that rack, more than offsetting the minimal power consumption of the ARDH fan array.
While counterintuitive at first glance, in-depth study has shown that ARDH fans assist the servers' fans, allowing them to draw less energy and perform better even at high-density workloads. Tests also show that hardware performance improves, resulting in increased server life expectancy. ARDHs offer full access to the rear of the rack and can be installed in either top- or bottom-feed water configurations, offering further flexibility to integrate into new or existing facilities with or without the use of raised floors.

As densities in high-performance computer systems continue to increase, the method used to cool them has become increasingly important. Continued expansion in the use of HPC and high-density servers in ever-broadening applications will see these types of computers installed in more traditional data centers, bringing overall data center PUE conversations into focus where, in the past, HPC clusters have been largely overlooked or exempt from the conversation due to their previously more limited use in government labs and large research facilities. There are several reliable technologies currently available to cool today's HPC and high-density server systems, and one must select an efficient and practical system that works within the associated building's cooling infrastructure, future refresh strategy and budget. The engineering and design plans for cooling these advanced computers should take place prior to, or in parallel with, the purchase of the computer system, as the cooling system itself is now often leveraged to ensure optimal and guaranteed computer performance.

Rich Whitmore is the President and CEO of Motivair.

For more on cooling, watch for the June issue of DCD magazine.
<urn:uuid:b41f6990-024b-49e4-bafc-e3f10c73dc40>
CC-MAIN-2022-40
https://www.datacenterdynamics.com/en/opinions/cooling-solutions-for-hpc/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00441.warc.gz
en
0.939822
2,309
2.890625
3
NBC News has collected and analyzed a trove of children's personal information it discovered on the Dark Web. Even though this information may not be as useful to cybercriminals as credit card details or login credentials, the information is still out there, where we don't want it. So what is it, and how did it get there?

Modern ransomware gangs don't just encrypt data, they frequently steal it too. If their ransom demands aren't met, they leak the stolen data via their Dark Web sites. These data leaks have led to information about (amongst others) businesses, police officers, hospital patients, and school children ending up on the Dark Web. And schools and school districts have been very popular targets for ransomware attacks. In 2021, ransomware gangs published data from more than 1,200 American K-12 schools, according to a tally provided to NBC News by a ransomware analyst. Ransomware threat actors are always looking for low-hanging fruit, and schools have always been easy targets for ransomware because of their limited budgets, especially for security. All of this was made worse by the demand for distance learning created by the Coronavirus pandemic.

What information is out there?

Some schools may not be able to tell you how much, and what, information they have about your child if you ask them. But the evidence says it's even worse than you might expect; it isn't just the information you may have handed over to the school when you filled out the application. Over time, information like medical conditions or your family's financial status may get added. Some information, like social security numbers or birthdays, will be a constant in the child's life, and that information in the wrong hands can set up a child for identity theft at any time in their life.

The NBC article provides a few examples that may raise your eyebrows. A few months after a ransomware attack on Toledo Public Schools in Ohio, which led to students' names and social security numbers being published online, a parent discovered that someone had started trying to take out a credit card and a car loan in his elementary school-aged son's name. Following an attack on Weslaco Independent School District, data relating to approximately 16,000 students was leaked, including their names, dates of birth, race, social security numbers, gender, immigration status, whether they were homeless or economically disadvantaged, and whether they'd been flagged as potentially dyslexic.

Can the information be removed?

The chances of permanently removing information from a ransomware leak site are slim to none. By the time the victim of a ransomware attack pays the ransom, their data has already been stolen, so they have nothing more than the word of criminals that it will be destroyed or kept safe. There is little incentive for ransomware gangs not to trade the data of payers and non-payers alike on some Dark Web forum. And once data has been shown on a leak site, anyone could have grabbed a copy.

What is the Dark Web?

Maybe it's a good idea to clear up some of the misconceptions about the Dark Web. There are two "dark" regions on the World Wide Web: the Deep Web and the Dark Web. The Deep Web is an unindexed part of the web, which includes anything behind a login screen, for example. The indexed part of the web (the part that can be found by search engines) is likely to be a small fraction of the entire web, which makes the Deep Web enormous. The Dark Web is a part of the web that can only be accessed via Tor.
The Dark Web is designed to hide the location (strictly, the IP address) of everyone and everything on it. And if you can't trace the real IP address of a user or a website, you can't find them, arrest them, or shut them down, which is why the Dark Web is where you'll find ransomware leak sites. Unlike the Deep Web, the Dark Web is extremely small, but it is very popular with criminals, for obvious reasons. Alongside ransomware leak sites, the Dark Web also hosts forums where cybercriminals can buy and exchange information, and marketplaces that sell anything and everything that's illegal.

What can you do?

School cybersecurity is increasingly important, and parental pressure makes a difference. Ask your school about its approach to cybersecurity and what information about your child it keeps. Should your or your children's information become part of a data breach, you may want to read more about identity theft and credit monitoring.
<urn:uuid:fc886d5e-8883-4eed-ad0b-9a257cca7efc>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2021/09/how-the-dark-web-became-a-haven-for-childrens-data
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00441.warc.gz
en
0.960723
956
2.828125
3
Customers have moved past the search engine. At the turn of the 21st century, a search engine was the most valuable tool for finding what you needed on the internet. Websites such as Netflix, eBay and Amazon grew due to their powerful and persuasive search engines that enabled users to freely mine site content and discover solutions to their wants and needs. However, this simple "find what I'm looking for" capability that was once considered so valuable now seems almost antiquated.

A search engine works reactively, only producing results when the user requests something. In contrast, a recommendation engine predicts what a user might want and preemptively provides results that aid in a discovery process. A recommendation engine presents relevant content that users did not necessarily search for or of which they might not already be aware.

Recommendation engines are a branch of information retrieval that uses artificial intelligence. These engines provide powerful tools and techniques to analyze volumes of data, especially product information and user information, and are designed to correlate that data with user profile themes and characteristics and then provide relevant suggestions. In technical terms, a recommendation engine is a mathematical model that can predict how much a user will prefer an item. A key difference from the search engine is that the underlying goal is not to sell more but to learn more about prospects and then offer a great recommendation that becomes a catalytic precursor to insight. In short, recommendation engines are automated "hypothesis recommenders" that identify correlations which might merit real-world exploration. And they can be used in many sorts of virtual interactions beyond a retail sale.

We think it is time for government agencies to embrace the potential of recommendation engines. As an example, consider a competitive grant maker such as the National Science Foundation. NSF conducted more than 240,000 reviews in 2017 of about 50,000 grant application proposals. Every proposal received includes a written abstract. NSF staff managing the proposals often spend considerable time finding suitable reviewers for an application by looking through CV databases, academic credentials, availability and history. But what if NSF's grants management system could automatically "recommend" reviewers for grant applications by understanding proposal abstracts and correlating the abstracts to reviewers' backgrounds and work history? Such a capability could considerably decrease the time needed to find and assign reviewers and complete the peer review process.

Similarly, consider the Food and Drug Administration's imports processing system. FDA electronically screens foreign-made food, drugs and medical devices before they enter the U.S. There were about 40,000,000 product lines imported into the U.S. in 2017. FDA uses PREDICT, a rules engine that forecasts issues in import lines using data from several sources, such as U.S. Customs and Border Protection's IT systems, FDA's own inspection results, and inherent product risks (i.e., spoilage of perishable food or drugs). The rules-based system employs proprietary coding developed by a contractor that supports PREDICT. Changes in that programming to accommodate revisions to regulations can be costly.
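Before turning to the alternative, it helps to make the earlier definition concrete: a recommendation engine is, at bottom, a model that predicts how much a user will prefer an item. The toy user-based collaborative filter below, in Python, is purely illustrative; the ratings matrix is invented and no agency system works exactly this way.

import math

# Toy ratings matrix: users x items (0 = not yet rated).
ratings = {
    "ana":  {"item1": 5, "item2": 3, "item3": 0},
    "ben":  {"item1": 4, "item2": 0, "item3": 2},
    "cara": {"item1": 1, "item2": 5, "item3": 4},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    dot = sum(u[i] * v[i] for i in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(user, item):
    """Weight other users' ratings of the item by their similarity to this user."""
    sims = [(cosine(ratings[user], ratings[o]), ratings[o][item])
            for o in ratings if o != user and ratings[o][item] > 0]
    total = sum(abs(s) for s, _ in sims)
    return sum(s * r for s, r in sims) / total if total else 0.0

print(predict("ana", "item3"))  # predicted preference for an unseen item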
In place of such hand-coded rules, an AI-based recommendation engine could predict issues in both new and existing import lines without needing explicit programming or rules definitions, by reviewing imports and inspection data, historical data for similar products, and compliance data, then recommending the import lines most likely to have issues and require inspection.

Another federal activity that could benefit from recommendation engines is the procurement and grant-making function. Government agencies collect and report data on federal procurements and grants through the Federal Procurement Data System and the System for Award Management. Using past performance and capability statements, plus bids, proposals and grant applications received, a recommendation engine could identify suitable vendors that the agency should invite to compete for a contract or other award. This might dramatically reduce the market research effort needed by acquisition staff.

While there are many benefits to recommendation engines, building a good one poses challenges to all the actors in the system. For example, deep learning algorithms cannot provide a rationale for any particular recommendation. Since government agencies favor transparency, we suggest supplementing deep learning with machine learning techniques that can offer a better rationale for the recommendations. From a government perspective, transparency is essential if decision makers are to trust and act on relevant suggestions, so the recommendation engine needs to be built in a way that ensures the confidence of its users. From a data perspective, organizations should evaluate their data strategy to ensure that they can systematically organize and access vast amounts of structured and unstructured input and output data.

To conclude, government organizations should explore AI-based recommendation engines by understanding and investing in AI technologies and techniques, and by considering their implications, challenges and constraints. AI capabilities, uses and experience are rapidly evolving, so agencies may want to identify a trusted partner to help develop the infrastructure and processes needed to explore the potential of AI.

Sanjeev Pulapaka is a solution architect and Srikanth Devarajan is an enterprise architect at REI Systems.
<urn:uuid:82b95af4-611b-47b3-86d8-bacc74ac826f>
CC-MAIN-2022-40
https://www.nextgov.com/ideas/2018/11/what-government-can-learn-netflix/153062/?oref=ng-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00441.warc.gz
en
0.937627
1,003
2.8125
3
In the 1950s, the vision of what technology would be like in the year 2000 read like a sci-fi novel. Fanciful ideas of an artificial planet circling Earth or automatic food generators seem laughable to us now. But back then, experts believed that vertical cities would be everywhere, or even that airplanes would be able to fly at 25,000 mph.

Given how rapidly technology advances every year, it is easy to get carried away with what the future might hold. Some predictions from years past have come to fruition. And now, Toyota is building a "smart city" that takes us into a world of technological dreams. Let's take a look at this incredible design.

Fingerprints and Smell-O-Vision

Strikingly accurate is the 1950s description of what was called the Push-Button Home. We take virtual assistants and Wi-Fi-connected devices for granted, but in 1950 it was unthinkably futuristic. "People will live in houses so automatic that push-buttons will be replaced by fingertip and even voice controls. All homes will have temperatures maintained at constant comfortable levels the year-round for human efficiency," according to an Associated Press article from 1950.

But while we are still waiting for Smell-O-Vision while watching TV, Toyota is already building a city of the future. And even by expert predictions in 2021, the "Woven City" will be a marvel of futuristic engineering. The "Woven City" name comes from Toyota's collaboration with software developer Woven Planet Holdings. The more than 400-square-mile project will be built around artificial intelligence, robotics and environmental sustainability.

Constructed next to the former Higashi-Fuji Plant, the city will incorporate three types of streets for transportation. One street will be solely for automated vehicles, another for pedestrians, and the third for personal mobility vehicles. There will be a fourth street as well, for deliveries, but this one will be underground. Toyota's e-Palette autonomous cars will do most of the driving and take care of deliveries. They will also serve as mobile shops or workspaces. Homes will have a personal robot assistant to help restock the fridge, handle cleaning tasks or run errands. Covered in an array of sensors, robots and autonomous cars will know where to go.

Building a city of the future

Environmentally conscious construction will be at the forefront of the city's creation. All buildings will be constructed using robot-made wood and will get power from the latest renewable energy methods. "In Toyota's shift from an automobile manufacturer to a mobility company, the project will bring new technology to life in a real-world environment across a wide range of areas, such as automated driving, personal mobility, robotics, and artificial intelligence (AI)," explained Toyota in a statement.

The city will initially house 350 senior citizens, families with young children and investors. This will expand to over 2,000 residents, including Toyota employees, as the project grows in size and need.
<urn:uuid:092551bc-3927-4579-98fd-45fbd808cbd4>
CC-MAIN-2022-40
https://www.komando.com/technology/toyota-woven-city/781163/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00641.warc.gz
en
0.929777
675
2.6875
3
Robots haven’t taken over the world just yet and may still be a product of sci-fi movies, but there’s no denying the fact that humanity will be more exposed to them over the next few decades. Although machines are constantly evolving and picking up new skills, many people remain sceptical of the role they’re going to play in the future. And getting humans to like them is challenging. However, a team of scientists may have found a solution. They claim that by creating robots that are awkward and clumsy, humans will feel more confident around them. Flawed robots win That may sound bizarre, but scientists from the Center for Human-Computer Interaction, University of Salzburg, Austria, Bristol Robotics Laboratory, University of the West of England, UK, and the Center for Technology Experience, Austrian Institute of Technology, claim they have found proof that people are more likely to prefer and get on with robots if they behave awkwardly and make mistakes regularly – much like many of us humans. This research, published in the Frontiers in Robotics and AI journal, shows that humans aren’t convinced by robots that act flawlessly. In fact, this is something people can fear. And there have been countless examples of imperfect robots. Recently, a robot managed to escape from a research facility in Russia, and a security robot drowned itself in a fountain last month. Events like these hardly strike confidence in the hearts of the sceptics. Mistakes are normal Nicole Mirnig, who’s a PhD candidate at the Center for Human Computer Interaction at the University of Salzburg in Austria, said this research proves a theory that claims humans are more attractive if they make mistakes. “Our results showed that the participants liked the faulty robot significantly more than the flawless one. This finding confirms the Pratfall Effect, which states that people’s attractiveness increases when they make a mistake,” she said. During the experiment, the researchers encouraged robots to interact with humans and complete several LEGO building tasks. Following this, they asked the humans to rate the robots on their likability, anthropomorphism and perceived intelligence. The scientists took care to note the participants’ reactions when the robots made mistakes. When mistakes were made, laughter was the most common emotion elicited from the candidates, much like humans do when we see our friends and family do something clumsy. Mastering social intelligence However, while robots making mistakes aren’t exactly suitable for practical tasks, the researchers claim that they could be taught to master these errors to develop social intelligence skills. At the end of the day, humans aren’t perfect, either, but maybe robots can learn from their mistakes. Just like we do. “Specifically exploring erroneous instances of interaction could be useful to further refine the quality of human-robotic interaction,” Mirnig added. “For example, a robot that understands that there is a problem in the interaction by correctly interpreting the user’s social signals could let the user know that it understands the problem and actively apply error recovery strategies.”
<urn:uuid:7c871a3a-5969-4c43-816e-eee1d38f2317>
CC-MAIN-2022-40
https://internetofbusiness.com/scientists-humans-awkward-robots/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00641.warc.gz
en
0.948349
645
3.09375
3
Infamous groups like Maze, REvil and DarkSide have made "ransomware" a household word. No company is too small or too large to be selected as a victim of these insidious attacks, which can have catastrophic consequences. From disrupted services and expensive recovery to fines from regulators and loss of reputation, ransomware attacks are a serious issue. In the next sections, we'll take a closer look at ransomware and how attackers use it to profit at the great expense of their targets.

What is ransomware?

A type of malicious software often described as crypto-malware, ransomware is designed to encrypt a victim's key systems and data files, effectively locking them out.

How is ransomware deployed?

Ransomware attacks use a variety of different vectors to hit companies. Among the most common ways to deploy this malicious software is via phishing messages. An unsuspecting employee clicking on a link embedded in a phishing email, or downloading its attachment, will activate the ransomware payload. Alternatively, the employee may be led to a phishing site via a fake link, where their company credentials are stolen. The ransomware operators can then access the enterprise's system and infect it with ransomware.

How does a ransomware attack work?

The ransomware encrypts the victim's systems, servers, and data files. This means the target cannot access the data it needs in order to operate as a business or, in the case of a local authority, to provide services to the local community. Those behind the attack leave a digital ransom note requesting a payment in exchange for the safe return of access. Payment is typically demanded in a cryptocurrency, like Monero or Bitcoin, as it is difficult to trace or reclaim.

What is double extortion?

To increase their chances of success, most ransomware gangs now use double extortion tactics. While locking victims out of data records, they simultaneously steal copies of the files. If the victim refuses to pay, the ransomware gang threatens to disclose the private data online. This typically happens on dedicated leak sites located on the dark web. It's worth noting that even if a company pays a ransom and its files are decrypted, the attack still constitutes a data breach, as private information was exposed. Additionally, there is no way to guarantee that any data stolen by ransomware gangs has actually been destroyed.

Experts in data security solutions

Our secure platform at Galaxkey was designed to provide local councils, educational institutions, and enterprises of all sizes with a safe workspace free from cyber threats. It has no back doors that ransomware attackers can exploit, and no passwords are stored where they can be stolen. Our powerful three-layer encryption locks all but authorised personnel out of data. Whether you are sharing files with fellow collaborators or storing records on staff and suppliers on your server or in the cloud, our encryption will ensure information remains out of reach of ransomware operators. Get in touch with our technical team today for a free 14-day trial of our system, and keep your data secure and compliant.
<urn:uuid:758b622f-6c75-494a-811d-ed0bea92cda8>
CC-MAIN-2022-40
https://www.galaxkey.com/blog/how-do-ransomware-attackers-operate/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00641.warc.gz
en
0.93981
625
2.671875
3
SpaceX has secured a $97 million contract from NASA to launch a satellite that will continue a long-term effort to measure the height of the global sea surface. NASA said Friday it expects the Sentinel-6A spacecraft to lift off by November 2020 from Vandenberg Air Force Base, California, aboard SpaceX's Falcon 9 rocket. Sentinel-6A, also called Jason Continuity of Service, is a joint effort of NASA, the National Oceanic and Atmospheric Administration, the European Space Agency and the European Organisation for the Exploitation of Meteorological Satellites. The mission is designed to provide ocean topography measurements to continue the record of global sea surface height data that began in 1992. NASA added that Sentinel-6A will also employ the Global Navigation Satellite System radio-occultation sounding technique to measure temperature changes in the troposphere and stratosphere and to aid numerical weather prediction.
<urn:uuid:5c193cd0-fb35-443b-b5f3-74744a8f817b>
CC-MAIN-2022-40
https://www.govconwire.com/2017/10/spacex-lands-97m-nasa-contract-for-ocean-topography-mission-launch-services/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00641.warc.gz
en
0.897939
188
2.671875
3
Jessica Shankleman (Bloomberg) -- By the start of the next decade, Google wants to make sure all the electricity it uses for its data centers and offices is truly 100% renewable. Under its previous pledge, the tech giant has mostly been offsetting its electricity use with renewable energy certificates, as well as buying power directly from some projects. On paper, that allowed it to claim 100% carbon-free electricity. But when the sun doesn't shine or the wind doesn't blow, it still draws power from polluting fossil fuels. The new policy will ensure that doesn't happen again.

As part of a series of commitments on Monday, Google also said it will leverage more than $5 billion of investment in 5 gigawatts of new clean-energy projects across its supply chain over the next decade. It will also create 20,000 new jobs. "The science is clear: The world must act now if we're going to avert the worst consequences of climate change. We are committed to doing our part," Sundar Pichai, Google's chief executive officer, said in a blog post. The announcement comes as wildfires rage across California and huge blazes darken skies in the Bay Area, where Google is headquartered.

To deliver clean energy around the clock, Google will use a bundle of measures including pairing wind and solar projects, using more batteries to store power, and investing in artificial intelligence to improve forecasting of power demand. The decision shines a light on the inadequacy of companies using renewable energy certificates to meet their climate targets instead of directly buying power from projects. For every renewable energy certificate bought, a company is guaranteed that someone somewhere will generate one unit of electricity using renewables. But that doesn't mean the electricity it uses, say at night, won't have emissions attached to it. So despite the certificates officially covering all of Google's demand, its data centers run on clean energy for only 65% of the day on average, the company estimated. "At the most basic level, this company is going to need to double-up, or even triple-up, on clean energy purchases," said Kyle Harrison, an analyst at BloombergNEF.

Pichai also announced Monday that Google has offset its historical emissions, effectively clearing its carbon debts for the past 22 years. But those legacy emissions, from 1998 to 2006, are estimated to be smaller than one year of its current net operational emissions - less than 1 million tons of carbon dioxide equivalent. "We've already seen Google begin to purchase complementary solar in markets where it has existing wind deals, and vice versa," Harrison said. "It will need to continue with this strategy until storage is cost-competitive to develop and pair with renewables at such a large scale."

Microsoft Corp., which is also one of the world's biggest buyers of clean power, announced a similar plan to fine-tune its carbon-free target and has also pledged to eliminate its historical emissions debt. But unlike Google, it has created new financial products that can be added onto existing power purchase agreements and reduce the risk of intermittent renewable technologies. It has also decided to find technological and nature-based solutions to erase its past debts, instead of relying on offsets, which have proved controversial. Future carbon goals are more important for tackling climate change, particularly at expanding companies like Google, which has seen its power demand soar by 450% since 2010, according to BloombergNEF.
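Returning to the hourly-matching point: the gap between annual certificate matching and round-the-clock matching is easy to illustrate. The Python sketch below uses invented toy numbers; only the accounting logic matters.

# Hour-by-hour carbon-free matching vs. annual certificate matching.
# Toy profiles (MWh per hour over one day); the figures are made up.
load  = [100] * 24                        # flat data-center demand
solar = [0] * 6 + [220] * 12 + [0] * 6    # generation only in daylight hours

annual_match = sum(solar) / sum(load)                              # 1.1: "100%+" on paper
hourly_match = sum(min(l, s) for l, s in zip(load, solar)) / sum(load)

print(f"Annual (certificate) matching: {annual_match:.0%}")   # 110%
print(f"Hour-by-hour carbon-free:      {hourly_match:.0%}")   # 50%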
BloombergNEF's analysts estimate Google will need to buy 15.5 terawatt-hours of clean power by 2030 just to keep meeting its existing 100% renewable power target. Much more will be needed to meet its new goal of round-the-clock carbon-free energy. Google said it hopes more companies will follow suit. "A big part of what we are aiming to do is provide a template and a blueprint, talk to people and show that it's possible to get to carbon-free operations," said Google spokeswoman Jenny Jamie.

--With assistance from Akshat Rathi.
<urn:uuid:5ea730df-e7a6-42fd-8963-c9798d4a83cc>
CC-MAIN-2022-40
https://www.datacenterknowledge.com/energy/google-targets-100-percent-renewable-energy-its-data-centers-2030
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00041.warc.gz
en
0.948586
852
2.578125
3
Input, Output, Redirections and Pipes

Every program in Linux can get, process and finally output some data. Among others, data sources can be files or pipes. For example, the command cat /etc/passwd uses the file /etc/passwd as its data source, and our terminal screen is used as the output destination, also known as standard output (stdout). Programs may also produce error messages, which are likewise printed on our terminal screen but via another channel, known as standard error (stderr). One program's output (stdout/stderr) can be transferred to another program's standard input (stdin) for further processing. In short, every program has two output channels, stdout and stderr, and one input channel, standard input (stdin).

It is possible to redirect the stdout of a command to a file using the > (greater-than) sign, and, conversely, to feed file contents to a command's standard input (stdin) using the < (less-than) sign.

cat /etc/passwd > /tmp/outputdata

Instead of printing the file contents on our terminal screen, we redirect stdout to the file /tmp/outputdata; as a result, the destination file is created with stdout's contents.

The opposite example:

base64 < /etc/passwd

Here the contents of /etc/passwd are redirected to the stdin of the base64 command.

Each of these streams (stdin, stdout, stderr) has a corresponding file descriptor number, which can be used to redirect a specific channel:

stdin = 0
stdout = 1
stderr = 2

Let's produce an error message:

id nosuchuser

After issuing this command we will see an error on our terminal screen:

id: 'nosuchuser': no such user

If we execute id nosuchuser > /tmp/outputdata, the output file will be empty, because the id command writes this message to stderr. But id nosuchuser 2>/tmp/outputdata will successfully redirect the stderr stream.

By searching the internet for commands, you may notice redirections like 2>&1 or 2>/dev/null, and I don't want you to be confused: 2>&1 is a redirection of stderr to stdout, and /dev/null is a device that discards all data sent to it. So it is possible to suppress (ignore) any chosen output by redirecting it to /dev/null:

some_command >/dev/null 2>/dev/null

>/dev/null suppresses stdout
2>/dev/null suppresses stderr

And finally, 2>&1 redirects the stderr stream to wherever stdout currently points, so both streams can be saved in one file. Note that the order of redirections matters: the stdout redirection must come first.

id nosuchuser >/tmp/outputdata 2>&1

/tmp/outputdata will contain both stdout and stderr content, and there will be no output at all on our terminal screen. (With the reversed order, id nosuchuser 2>&1 >/tmp/outputdata, stderr would be duplicated onto the terminal before stdout is redirected, so the error message would still appear on screen.)

To transfer the output of one program to another we use a PIPE. A pipe is a kind of virtual connector from one channel to another. For a pipe to work, one end must have a sender and the other a receiver; it is not possible to send data into a pipe without a receiver on the other side. To pipe data from one program (command) to another, we use the special character | (vertical bar) between the two commands:

cat /etc/passwd | grep root

In this example, the `stdout` of `cat /etc/passwd` is transferred to the `stdin` of the `grep root` command, which in turn outputs to our terminal screen (stdout) one line containing the 'root' string. It is possible to construct a very long and complex command chain, where each piped command's output is caught and processed by the next command's stdin. For example, getting the login shell of the root user:

cat /etc/passwd | grep root | cut -d ':' -f7
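For completeness, the same plumbing can be driven from a program. The minimal Python sketch below (illustrative only, assuming Python 3 on a Linux host) wires one process's stdout to the next one's stdin, just as the shell pipeline above does:

import subprocess

# Equivalent of: cat /etc/passwd | grep root | cut -d ':' -f7
cat = subprocess.Popen(["cat", "/etc/passwd"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "root"], stdin=cat.stdout, stdout=subprocess.PIPE)
cat.stdout.close()   # let cat receive SIGPIPE if grep exits first
cut = subprocess.Popen(["cut", "-d", ":", "-f7"], stdin=grep.stdout, stdout=subprocess.PIPE)
grep.stdout.close()
print(cut.communicate()[0].decode(), end="")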
<urn:uuid:bbace535-c57f-40f4-9094-8f9542c2b00c>
CC-MAIN-2022-40
https://www.korznikov.com/2022/04/howto-linux-input-output-redirection.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00041.warc.gz
en
0.852245
908
4.28125
4
July 27, 2017 | Written by: Preetam Kumar
Categorized: Data Analytics | Internet of Things

Diabetes is one of the greatest global health threats. Today, 415 million adults worldwide have type I or type II diabetes, and that total is expected to grow to more than 600 million by 2040. It's a scourge for individuals, a challenge for the healthcare community and a huge financial burden for society. In the United States alone, $240 billion is spent every year on diabetes care.

Individuals with diabetes take the brunt of it. They see their doctors for a few minutes every few months, so it's largely up to them to manage their conditions, finding a balance between not having enough sugar in their blood and having too much. If their glucose level drops too low, they face the threat of hypoglycemia, which can cause confusion or disorientation and, in its most severe forms, loss of consciousness, coma or even death. If it's too high over a long period of time, they risk cardiac disease, blindness, renal failure and amputation of fingers and limbs.

What is Medtronic doing to help?

With cases of both type I and type II diabetes rising, Medtronic recognized the need to create a new generation of glucose monitoring solutions that would give people the tools to manage their diabetes more easily, in combination with routine support from healthcare professionals. Traditionally, Medtronic has provided systems such as continuous glucose monitors and insulin pumps, which are physical devices used predominantly by people on insulin therapy to monitor glucose levels and administer insulin directly to the body. Now, Medtronic wants to harness these devices and provide continuous feedback on individuals' glucose levels to support millions of people in the daily management of their condition. The growing popularity of wearable technology has made it easier to capture biometric data on diet, exercise, sleep and medication, but the challenge was to gain actionable insight from this massive volume of data and deliver it to users quickly enough for them to make appropriate decisions.

To bring its new solution to market rapidly, Medtronic worked with IBM Streams and IBM Watson Health to develop Sugar.IQ with Watson, a cognitive mobile personal assistant app that aims to provide real-time actionable glucose insights and predictions for individuals with diabetes, helping to make daily diabetes management easier. They are also designing a solution that will process the factors that affect each individual's personal glucose levels (food, sleep, stress and so on). The aim of this app is to coach each individual by helping them make smarter glucose-related decisions, for example avoiding foods and habits that tend to cause problems for them, so that they can live their lives to the fullest.

Medtronic is leveraging IBM Streams to analyze the data as it flows in from the devices, using predictive models to assess each person's current situation and the risk of their glucose levels falling outside safe thresholds. As a result, Medtronic can provide meaningful, personalized tools and insights, and alert users when the models detect significant patterns.

Learn more in the Medtronic case study about how the company worked with IBM to build Sugar.IQ with Watson, a cognitive mobile personal assistant app that provides real-time actionable glucose insights and predictions. In addition, get more information on IBM Streams and try the cloud services or the IBM Streams Quick Start Edition.
Also, be sure to join the IBM Streams community.
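To make the streaming idea above concrete, here is a minimal sketch of the kind of rolling risk check a stream-processing job might apply to glucose readings. This is purely illustrative: the thresholds, field names, and the crude moving-slope "model" are assumptions for the example, not Medtronic's or IBM's actual logic (IBM Streams applications are typically written in SPL rather than plain Python).

```python
from collections import deque

LOW_MG_DL, HIGH_MG_DL = 70, 180  # illustrative clinical thresholds (assumed)

def risk_alerts(readings, window=6):
    """Yield alerts from a stream of (minute, mg_dl) glucose readings."""
    recent = deque(maxlen=window)
    for minute, mg_dl in readings:
        recent.append(mg_dl)
        # Crude trend estimate: change per reading across the window
        trend = (recent[-1] - recent[0]) / len(recent)
        if mg_dl < LOW_MG_DL or (trend < -5 and mg_dl < 100):
            yield minute, "hypoglycemia risk"
        elif mg_dl > HIGH_MG_DL:
            yield minute, "hyperglycemia risk"

# Simulated feed: one reading every 5 minutes from a continuous glucose monitor
stream = [(0, 140), (5, 120), (10, 98), (15, 82), (20, 68)]
for minute, alert in risk_alerts(stream):
    print(f"t={minute}min: {alert}")
```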
<urn:uuid:2e1098ec-175f-43b0-95c9-b39b27644078>
CC-MAIN-2022-40
https://www.ibm.com/blogs/cloud-archive/2017/07/medtronic-makes-diabetes-management-easier-real-time-insights-ibm-streams/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00041.warc.gz
en
0.94944
716
2.640625
3
New Training: Advanced Networking Devices

In this 9-video skill, CBT Nuggets trainer Keith Barker teaches you about advanced networking devices, such as multi-layer switches; intrusion detection systems (IDS) and intrusion prevention systems (IPS); as well as AAA and RADIUS servers. Watch this new CompTIA Networking training.

Watch the full course: CompTIA Network+

This training includes:
39 minutes of training

You'll learn these topics in this skill:
Advanced Networking Devices: Multi-Layer Switch Overview
Advanced Networking Devices: Configure Multi-Layer Switch: Lab
Advanced Networking Devices: Load Balancer Overview
Advanced Networking Devices: IDS and IPS
Advanced Networking Devices: Proxy Servers and Content Filtering
Advanced Networking Devices: Security Network Devices
Advanced Networking Devices: AAA/RADIUS Server
Advanced Networking Devices: VoIP PBX and Gateway
Advanced Networking Devices: Wireless Controller

What is a Multilayer Switch?

In the networking environment, a switch is a device that forwards traffic between various devices attached to the network. A traditional switch examines data at the data link layer to push traffic where it needs to go. On the other hand, multilayer switches can make forwarding decisions based on layer 2 or layer 3 of the OSI networking model. By examining traffic at multiple layers, multilayer switches can be used as both a switch and a router. Likewise, because one device can perform both tasks, it can be more convenient than using two separate devices: a layer 2 switch and a layer 3 router.

Another benefit of multilayer switches is that they are typically ASIC-based rather than built around a general-purpose microprocessor. That can sound confusing, since an ASIC is itself an integrated circuit that processes data. The difference is that an ASIC is designed for one specific task, while a general-purpose processor is not: a typical processor runs software written to target its general instruction set, whereas an ASIC has its task built directly into the silicon. This can significantly increase the efficiency of the computing system in the device.
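To see what a layer 2/layer 3 forwarding decision looks like in principle, here is a toy sketch. The tables and addresses are invented, the prefix match is deliberately naive, and real multilayer switches do all of this in ASIC hardware rather than in software:

```python
# Hypothetical forwarding tables for illustration only
MAC_TABLE = {"aa:bb:cc:00:00:01": "port1", "aa:bb:cc:00:00:02": "port2"}
ROUTE_TABLE = {"10.0.1.": "port3", "10.0.2.": "port4"}  # naive /24 prefixes

def forward(frame):
    # Layer 2: destination MAC known locally, so switch it directly
    if frame["dst_mac"] in MAC_TABLE:
        return MAC_TABLE[frame["dst_mac"]]
    # Layer 3: otherwise route on the destination IP prefix
    for prefix, port in ROUTE_TABLE.items():
        if frame["dst_ip"].startswith(prefix):
            return port
    return "drop"

print(forward({"dst_mac": "aa:bb:cc:00:00:02", "dst_ip": "10.0.9.9"}))  # port2
print(forward({"dst_mac": "ff:ff:ff:00:00:99", "dst_ip": "10.0.2.7"}))  # port4
```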
<urn:uuid:edcee050-d899-4916-b17e-7c3adfcff453>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/new-skills/new-training-advanced-networking-devices
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00041.warc.gz
en
0.886913
446
2.609375
3
The last of three hearings on artificial intelligence examined challenges to the technology and government's role in driving AI forward.

In the history of driverless cars, the Defense Advanced Research Projects Agency's 2004 and 2005 Grand Challenges contributed to improving the tech to the point of viability. The government could play a similar role in the development of artificial intelligence, said Jack Clark, the director of OpenAI, at an April 18 congressional hearing focusing on government's role in artificial intelligence.

DARPA's 2016 Cyber Grand Challenge concentrated on autonomous systems and is a model other agencies can adopt, Clark said during a hearing of the House Oversight Subcommittee on Information Technology.

"Every single agency has … problems it's going to encounter, and it has competitions that it can create to spur innovation, so it's not one single moonshot, it's a whole bunch of them," he told lawmakers during the hearing. "I think every part of government can contribute here."

Ranking Member Rep. Robin Kelly (D-Ill.) said securing the large amounts of data AI needs is critical. With recent news about a political research firm gaining access to information on Facebook users and massive data breaches at companies like Equifax, what can be done to secure the data used to inform these AI systems? Kelly asked.

"If a company or government organization cannot protect the data, they should not collect the data," said Ben Buchanan, a postdoctoral fellow at Harvard Kennedy School's Belfer Center who focuses on technology deployment and government.

However, there are some technologies that could help ensure privacy while allowing for the data needs of artificial intelligence, Buchanan said. One of these is a mathematical technique called differential privacy.

"Differential privacy is the notion that before we put [an individual's] data into a big database, we add a little bit of statistical noise to that data, and that obscures what data comes from which person," he said. "In fact, it obscures the records of any individual person. But it preserves the validity of the data in the aggregate." (A small numeric sketch of this idea appears at the end of this article.)

Another technology that could help increase user privacy is on-device processing, Buchanan said.

"If we're going to have [users] interact with an AI system, it is better to bring the AI system to them rather than bring their information to some central repository," Buchanan said. "So if an AI system is going to be on your cell phone, you can interact with the system on your own device rather than at a central server where the data is aggregated."

Subcommittee Chairman Rep. Will Hurd (R-Texas) said a summary of the three hearings will be released in the coming weeks that will include the lessons learned and what steps the committee expects to take going forward.
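As promised above, here is a small numeric illustration of the differential privacy idea Buchanan describes: Laplace noise is added to individual records before storage. It is a toy example with illustrative parameters, not a production-grade privacy mechanism:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def privatize(values, sensitivity=1.0, epsilon=0.5):
    """Add noise calibrated to sensitivity/epsilon to each record."""
    scale = sensitivity / epsilon
    return [v + laplace_noise(scale) for v in values]

ages = [34, 51, 29, 42, 60]      # raw individual records
noisy = privatize(ages)           # what would actually be stored
print(sum(noisy) / len(noisy))    # the mean stays close to the true mean
```

Any single noisy record tells you little about the person behind it, but averages computed over many records remain usable, which is exactly the trade-off Buchanan outlines.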
<urn:uuid:db723147-d9b6-423e-883c-49b30a663772>
CC-MAIN-2022-40
https://gcn.com/2018/04/advancing-ai-with-grand-challenges-greater-security/299959/?oref=gcn-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00041.warc.gz
en
0.946813
589
2.796875
3
CSFB (Circuit-Switched Fall-Back): Explained

Circuit-Switched Fallback (CSFB) is a technology whereby voice and SMS services are delivered to LTE devices through the use of GSM (global system for mobile communications) or another circuit-switched network. LTE is an all-IP technology and therefore cannot transport circuit-switched services such as voice or SMS, so CSFB is needed. When an LTE device is used to make or receive a voice call or SMS, the device "falls back" to the 3G or 2G network to complete the call or to deliver the SMS text message.

CSFB has become the principal global solution for voice and SMS interoperability in early LTE devices, primarily due to the inherent cost, size and power advantages of single-radio solutions on the device side. In 2011, CSFB was commercially launched in several regions around the world, and it is the first step toward subsequent LTE voice evolution phases, which are also based on single-radio solutions. CSFB requires a software upgrade of the operator's core and radio network. It is often seen as an interim solution for LTE operators. Voice over LTE (VoLTE) is considered to be the long-term goal for the delivery of voice services on LTE networks.

The network architecture of CSFB

Legacy 2G and 3G networks and the LTE network co-exist within mixed networks. This means that when a call is made, it reaches a mobile switching centre (MSC), which communicates with a mobility management entity (MME) to identify network compatibility. If it is an LTE network attempting to connect to a legacy network, the MME recognises this and subsequently routes the call through a 2G or 3G network.

CSFB is seen as a temporary measure, but it is expected to be used for some time yet, as operators need more money and time to fulfil the idea of all-IP, LTE and IMS, and issues have been raised over possible timing and delays in the network that could affect the reliability of calls. CSFB enables mainstream use of LTE by providing a stepping stone towards VoLTE and other services that will benefit both the operator and the customer alike.
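The decision logic is simple in principle, as the illustrative pseudologic below shows. This is not real signaling code; actual CSFB uses SGs-interface messages between the MME and the MSC, and the data structures here are invented for the sketch:

```python
def handle_service_request(device, service):
    """Toy model of an MME deciding whether circuit-switched fallback is needed."""
    if service in ("voice_call", "sms") and device["attached_rat"] == "LTE":
        # LTE is packet-only, so redirect the device to a circuit-switched RAT
        legacy = "3G" if device["supports_3g"] else "2G"
        device["attached_rat"] = legacy
        return f"fallback: completing {service} over {legacy}"
    return f"serving {service} over {device['attached_rat']}"

phone = {"attached_rat": "LTE", "supports_3g": True}
print(handle_service_request(phone, "voice_call"))  # fallback: ... over 3G
print(handle_service_request(phone, "data"))        # data now rides on 3G too
```

A side effect visible even in this toy model: while the device is fallen back, its data sessions drop to the legacy network too, which is one of the reliability and performance concerns that makes VoLTE the preferred long-term solution.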
<urn:uuid:95be3ab5-b82e-41e9-9aca-458ebf3ad200>
CC-MAIN-2022-40
https://www.carritech.com/news/circuit-switched-fall-back-explained/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00041.warc.gz
en
0.946771
476
2.53125
3
The Emerging Role of AI and Machine Learning in the Enterprise

Artificial intelligence (AI) and machine learning have advanced significantly in recent years. Once the stuff of science fiction novels, AI and machine learning are gaining traction in the enterprise, offering tremendous promise for improving organizational profitability and efficiency.

First, it's important to distinguish the differences between AI, machine learning, and their various subsets. AI, which had been focused around the use of inference engines in the 1980s and 1990s, is the use of computers to simulate human reasoning. A familiar current example of this is IBM Watson, which can examine thousands of pieces of text to identify trends and offer up conclusions. For its part, IBM recently announced that Watson will be involved in a year-long research project to detect and respond to cybercrime threats.

Today, there are two common models for AI. In supervised models, the systems are trained to tackle a particular problem. A good example of this is facial recognition, where an AI system is provided examples of faces to help make a match based on facial features. Meanwhile, unsupervised models are provided no such background information and must learn on their own.

By contrast, machine learning involves the use of algorithms that iteratively learn from and adapt to data, enabling computers to find hidden insights without being instructed where to look. Artificial intelligence and machine learning have progressed from the annals of science fiction to real-world applications, delivering tangible business impact.

For its part, deep learning involves a set of techniques that stack many layers of neural networks, with each layer learning progressively more abstract features of the data. Still in its early days for enterprise use, deep learning is showing tremendous promise in applications such as fraud prevention and speech/image recognition.

The Business Upside of Machine Learning

The methodology where enterprises can move the needle the most is with predictive machine learning algorithms. Machine learning technology providers are beginning to focus on specific business challenges, such as identifying and responding to top-line opportunities for sales and marketing teams.

Predictive machine learning algorithms also offer great opportunities for sales, marketing, and customer service teams to identify and immediately take the next best action with a customer or prospect. For instance, if a customer calls into the contact center for a wireless service provider to cancel his service, the agent can be prompted to offer a bundle of services that are aimed at retaining the customer. Meanwhile, machine learning can enable a sales associate to determine the most effective content and messaging to share with a prospect or customer based on their current position in the sales pipeline.

A growing number of Platform-as-a-Service (PaaS) providers are now emerging, offering support for both general-purpose platforms (applications which include self-service tools) as well as vertical industry applications. Vertical-focused applications in this space include:
• Risk modeling for customer loans in financial services.
• Detection of fraudulent credit card usage.
• Recommendation engines used by companies such as Amazon and Netflix to determine the products and services that 'lookalike' customers most likely will want.
• Calculating which prospective members will deliver the greatest potential long-term customer value to healthcare insurers.
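As a rough illustration of what such a predictive model looks like in code, here is a minimal churn-scoring sketch using scikit-learn. The features, data, and customer records are fabricated for the example; a real deployment would train on thousands of labeled records:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical features: [monthly_spend, support_calls, months_as_customer]
X = [[80, 0, 36], [30, 5, 4], [95, 1, 48], [25, 7, 2], [60, 2, 18], [20, 6, 3]]
y = [0, 1, 0, 1, 0, 1]  # 1 = customer churned

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# The churn probability for a new customer drives the "next best action"
print(model.predict_proba([[28, 6, 5]])[0][1])
```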
PaaS-based predictive machine learning algorithms offer a number of operational and business benefits to enterprise companies. For starters, by using a cloud-based approach to PaaS-based predictive machine learning, enterprise companies can focus their resources on solving business problems and not have to worry about coding algorithms on their own. Moreover, by utilizing machine learning as a service, enterprises don't need to hire or retain a pool of in-house data scientists and other costly specialists. Plus, under a cloud-based model, enterprises pay only for the resources and services that they use. In addition, by bypassing the setup that's normally required for development, enterprise companies can also achieve faster time to value. Furthermore, a cloud-based model enables easier integration with existing data sources.

Another benefit of using a cloud-based predictive machine learning algorithm is that it can scale to handle the biggest time sink in the data science timeline, which is data preparation. This includes gathering, cleansing, and extracting data, which together represent up to 80 percent of the time consumed in data prep.

Tapping Open Source

Open source AI and machine learning frameworks also offer cost-effective alternatives for enterprises. Thanks to open source offerings such as Google TensorFlow, OpenAI, and PredictionIO, AI and machine learning initiatives can be less expensive for enterprises by leveraging a larger pool of servers for compute power via the cloud. Meanwhile, as more people contribute to an open source AI platform, this helps accelerate the evolution of deep learning systems.

A terrific example of a crowdsourced predictive modeling platform is Kaggle. Launched in 2010, Kaggle hosts crowdsourced predictive modeling and analytics competitions in which companies and researchers post a business or technical challenge and data miners known as Kagglers compete to produce the best models. Kaggle is a novel way for enterprises to engage an army of data miners without having to build their own models.

Another intriguing approach to machine learning in the cloud is DataScience.com. Effectively positioned under a data-science-as-a-service model, DataScience.com can unleash its team of on-demand data experts to tackle a company's pressing business challenge (e.g. identifying the root cause behind churn with a specific set of customers). Under this model, enterprise companies don't have a dedicated team of data scientists. But they can obtain resources as needed through a monthly subscription model.

The potential for applying AI and machine learning in the enterprise has been talked about for years. Thanks to advances in the accuracy and power of AI and machine learning engines, combined with wider availability of resources, predictive modeling is now within reach of enterprise users, without having to hire and build from scratch.

Check This Out: Top Machine Learning Solution Companies
Check This Out: Top Machine Learning Consulting/Services Companies
<urn:uuid:3d811cca-9d45-4d3f-813c-8e2b34078244>
CC-MAIN-2022-40
https://ca.cioreview.com/cxoinsight/the-emerging-role-of-ai-and-machine-learning-in-the-enterprise-nid-15037-cid-72.html?utm_source=google&utm_campaign=cioreview_topslider
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00041.warc.gz
en
0.936938
1,224
2.6875
3
In the throes of a military action, everything is heightened. In February 1998, the U.S. and President Bill Clinton were preparing to bomb Iraq, as the country's then-President Saddam Hussein refused to comply with United Nations Security Council inspectors who were searching for weapons of mass destruction. Just as tensions in the Gulf were coming to a head, a systematic cyberattack, which would come to be known as Solar Sunrise, was launched against the U.S.

In all, this attack — which was called "the most organized and systematic attack the Pentagon has seen to date" by Deputy Secretary of Defense John Hamre — took control of more than 500 government and private computer systems. Institutions like NASA, the Air Force, the Navy and MIT were all impacted. Because of the lingering tensions in the Middle East, it was immediately assumed that this highly professional-looking attack was coordinated by Iraqi operatives looking to strike back at the U.S. But that couldn't have been farther from the truth.

The Solar Sunrise attack

Alarm bells were raised in early 1998 when several Department of Defense (DoD) networks were attacked by exploiting a well-known vulnerability in Sun Solaris — thus the name Solar Sunrise — a UNIX-based operating system from Sun Microsystems. Ultimately, what the attackers did wasn't overly complicated; they probed DoD systems to look for a vulnerability, found one, exploited it, planted a sniffer program to mine data and then came back later to collect that data. In the process, they accessed military defense networks, where they were able to steal sensitive passwords and other confidential information.

Once the military detected the intrusion, the U.S. government mobilized quickly, assembling the FBI, CIA, U.S. Department of Justice, National Security Agency and others to investigate the digital assault. According to the Washington Post, the U.S. Central Command out of Tampa, Florida, also called in a new defense system. It "had just tested a new Defensive Information Operations (DIO) plan in a mock military exercise called Internal Look 98 when it discovered the intrusion. Gen. Anthony Zinni ordered the DIO plan into effect for real. The Air Force's 609th Information Warfare Squadron saw first combat, erecting a complex cyber intrusion detection system."

The government had long known that cyberattacks could be one of the next frontiers in modern warfare, and this hit confirmed many of their fears about impending information warfare. While the intrusion also impacted private computer systems, including commercial and educational sites, it seemed "systematic" and targeted toward the U.S. government and military.

"For days, critical days, as we were trying to get forces to the Gulf, we didn't know who was doing it. We assumed, therefore, it was Iraq," said Richard Clarke, national coordinator for security, infrastructure protection and counterterrorism in the White House, in the same Washington Post article.

Think local, act global

As it turns out, the real threat was much smaller and much more local. Iraqi operatives were not involved; nor was any nation-state out to steal U.S. government secrets. Within a few weeks of the attack, the FBI raided the homes of two high school students from Cloverdale, California, who were arrested and pled guilty to the crime.
Then-Attorney General Janet Reno said the arrests "should send a message to would-be computer hackers all over the world that the United States will treat computer intrusions as serious crimes."

The two California teens did have some help, however. In March 1998, a third teen, 18-year-old Israeli hacker Ehud Tenenbaum, was arrested by Israeli police after they were given evidence of his activities by U.S. authorities. Tenenbaum, who goes by the hacker name The Analyzer, pled guilty in 2001, but claimed he was not after government secrets and only wanted to prove how insecure the systems were. He eventually went on to form his own security company but was arrested again in 2008 for credit card fraud.

More than a prank

This was, of course, not the first or last time major institutions harboring classified information have been hacked by teenagers. Shortly after Solar Sunrise made headlines, in 1999, a Florida teen made a name for himself when he infiltrated DoD and NASA computers. While it's easy to dismiss attacks like these as the work of idle youth, especially since the Justice Department claimed no classified information was compromised, they should serve as a warning as to how vulnerable even major networks are to motivated threat actors. Government bureaucracy often impedes a quick and effective response. In this case, the teens were able to take advantage of a well-known and unpatched vulnerability.

In the wake of this attack, the Clinton administration established several new agencies to defend against cyber warfare. But the defenders are often playing catch-up to the attackers, and critical infrastructure continues to find itself in the crosshairs of motivated hacking groups and nation-state adversaries to this day.
<urn:uuid:7c6b4afe-aa72-4f1b-99e7-dca9268d8162>
CC-MAIN-2022-40
https://www.industrialcybersecuritypulse.com/threats-vulnerabilities/throwback-attack-three-teens-stoke-fears-of-a-cyber-war-with-the-solar-sunrise-attack/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00041.warc.gz
en
0.976456
1,055
2.8125
3
SAN FRANCISCO (Reuters) – International Business Machines Corp is looking to the building blocks of our bodies — DNA — to be the structure of next-generation microchips.

As chipmakers compete to develop ever-smaller chips at cheaper prices, designers are struggling to cut costs. Artificial DNA nanostructures, or "DNA origami," may provide a cheap framework on which to build tiny microchips, according to a paper published on Sunday in the journal Nature Nanotechnology. Microchips are used in computers, cell phones and other electronic devices.

"This is the first demonstration of using biological molecules to help with processing in the semiconductor industry," IBM research manager Spike Narayan said in an interview with Reuters.

"Basically, this is telling us that biological structures like DNA actually offer some very reproducible, repetitive kinds of patterns that we can actually leverage in semiconductor processes," he said.

The research was a joint undertaking by scientists at IBM's Almaden Research Center and the California Institute of Technology.

Right now, the tinier the chip, the more expensive the equipment. Narayan said that if the DNA origami process scales to production level, manufacturers could trade hundreds of millions of dollars in complex tools for less than a million dollars of polymers, DNA solutions, and heating implements.

"The savings across many fronts could add up significantly," he said.

But the new processes are at least 10 years out. Narayan said that while the DNA origami could allow chipmakers to build frameworks that are far smaller than possible with conventional tools, the technique still needs years of experimentation and testing.

Copyright 2009 Reuters. Click for restrictions.
<urn:uuid:6a44f2c6-7d6d-47dd-a343-33c70077c2f4>
CC-MAIN-2022-40
https://www.datamation.com/networks/ibm-to-use-dna-in-microchips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00242.warc.gz
en
0.914962
359
3.359375
3
Chances are good you've spent at least a moment or two thinking about your internet speed — especially if you've experienced a slow or spotty connection. You may have run a speed test, or you may have no idea what your internet speed is. Either way, if you're like a lot of people, you simply want to know: "Is my connection fast enough?" To really answer that question, you must understand a bit more about internet speed, WiFi speed, and network connections. So let's jump in.

What's the difference between speed and bandwidth?

You may hear "bandwidth" and "speed" used interchangeably, but they do refer to slightly different aspects of internet service.
- Bandwidth is the maximum volume of data that can be transmitted over an internet connection, measured in Megabits per second (Mbps).
- Speed is the rate at which information or content reaches your device (tablet, laptop, smartphone, etc.) from the internet. This is also measured in Mbps.

There's a plumbing metaphor that can help us understand these two concepts. Think of bandwidth as the width of a water pipe, which determines the volume of water that can flow through the pipe at any given moment. The speed, then, is how fast that water comes out of the tap when you turn it on. So, if your internet plan has an advertised bandwidth of up to 20 Mbps, this is the highest volume of information that can be sent over the network to your router. From the router to your devices, then, the speed you see will typically be a bit lower. That's because once the connection is divvied up by the devices in your home and sent over WiFi, some of that network speed is lost.

What does your speed test tell you?

Many internet speed tests out there, including the one we provide here, on the CenturyLink site, measure internet speed on a particular device. In this case, you have to take a few things into consideration before you can determine if the test results are "good" or "bad" for your situation. Note that another type of speed test, such as those offered by certain modem manufacturers, measures the speed from the network directly to the modem. In that case, the information below would not apply, as the results would not change based on these factors.

Factors that impact your test results

The speed test measures the flow of data at one moment in time. The results tend to fluctuate depending on a number of factors:
- Whether you're testing over WiFi or over an Ethernet (wired) connection to the router
- How many devices are running on your home network (and how intensive your online activities are at that moment)
- The time of day
- Which device you run the test on, and how old it is or what its speed capacity is
- Which test server is used

How does an internet speed test work?

When you run an internet speed test, the testing site transfers a file from a nearby test server over the internet to your computer and measures how long it takes. That gives you your download speed. Then that same file is transferred back to the server again to measure the upload speed. The test will automatically select the closest server, which will typically be in a nearby city. When the test is done, you'll see your results as two numbers reflecting the download and upload speeds. One megabit equals 1,000 kilobits, which means 1 Mbps is 1,000 times faster than 1 Kbps. Similarly, 1 Gbps is 1,000 times faster than 1 Mbps. (A bare-bones code sketch of this measurement appears at the end of this article.)

On your internet plan, you may see something like 40/20 Mbps (or "40 by 20 Megs" in the industry lingo).
This is shorthand for 40 Mbps of download bandwidth and 20 Mbps of upload bandwidth. These "asymmetrical" speeds put the "A" in ADSL connections, which have higher downstream bandwidth than upstream bandwidth. One of the awesome advancements in internet technologies is the increased capacity for upstream bandwidth. This is leading to more internet plans that have what they call "symmetrical speeds," or equal download and upload speeds. With high-speed fiber internet, for instance, you can get speeds up to 940/940 Mbps. That means information can pass into and out of your devices at the same super-fast rate.

While the speed test runs, it also measures several other aspects of internet speed. Each speed test may have slightly different features, but many of them measure ping and jitter.

What is ping (latency)?

Ping is the measure of latency from your device to the server and then back to your device. Essentially, it is one of the elements of connection speed that measures any lag you might experience while online. Lags are typically most noticeable when streaming music or HD video, gaming, videoconferencing, or doing other high-bandwidth activities. Ping is measured in milliseconds (ms), and the lower the number, the less lag you'll get. Generally, a ping value under 50 ms is good and over 100 ms is poor.

What is jitter?

Jitter tends to be less important to many internet users, but gamers are often concerned about both ping and jitter. Jitter is essentially a measure of the variation in ping over time. This is given as a percentage, and a lower percentage is better, because it means the connection is more stable. Some sources say you want to have a ping of less than 200 ms and a jitter of less than 15%, but this can depend a lot on what types of online activities you like to do. Avid online gamers, for instance, will want to shoot for even lower ping and jitter numbers.

What impacts your internet speed?

Not surprisingly, the factors that affect the results of your speed test are the same factors that impact the real-life speeds you experience while working, schooling, streaming or surfing. Below are some of the key factors that contribute to your internet connection speed:

1. The network speed from your provider. This is the first one that many people think of, and it definitely plays a role. Most of the time, though, the connection speed of the CenturyLink network (or any internet provider) stays pretty consistent within a given range. In other words, the network speed that comes into your router from outside doesn't fluctuate as much as the speed you experience on your side of the router (on your devices).

2. The speed of your devices. Each device, from desktops to tablets, and from smartphones to smart televisions, has its own speed limit, which in some cases may not be as fast as your internet service. The newer the device, the more likely it is to have a faster processor, as well as additional wireless antennas that allow it to send and receive data over WiFi much faster. Older devices can even slow down the speed you get on another device. How? All the information traffic from your entire home network has to wait in line to pass through your router, and older devices with slower connections can cause delays for any traffic lined up behind them. The same is true for your router, which is why it is recommended that you replace this essential equipment if it is more than 3-4 years old.
The type of device also matters; smartphones today typically have one to two wireless antennas built in, while laptops are likely to have three to four. In other words, your laptop is likely to have a slightly faster WiFi connection than your smartphone.

3. Your WiFi use. Wireless connections have become such a standard that many people forget there's any other way to get online. But there are benefits to accessing the internet over cables instead of over airwaves. We all love WiFi because it gives us the ultimate mobility, but some connection speed is always lost in translation from the router to the wireless signal. This is why many blogs and tech experts out there (including us) suggest using a wired connection — a LAN or Ethernet cable plugged into the router from your device — when you can. This will give you a faster, more stable connection, especially for online gamers and others with particularly high bandwidth demands. Alternately, WiFi extenders and signal boosters are popular for their ability to help overcome signal loss. And, when you're using WiFi, you will get the strongest signal by being closest to your router with as few devices connected as possible.

Using a 5 GHz WiFi frequency can help too. Why? WiFi is just one of many radio signals all around us these days. Devices that use the same 2.4 GHz frequency range as your WiFi signal, such as microwaves, cordless phones and more, can interfere with it and hurt your internet speed. Many newer routers also support automatic band-switching, meaning they will detect and switch devices to a faster frequency without you having to go in and select a different network manually. Just keep in mind that with a 5 GHz WiFi frequency, the signal is faster but covers a smaller area. So it's even more important to close the distance between your device and the router as much as possible, and to make sure there are no major physical obstructions blocking the signal's path.

4. The number of devices using up your bandwidth. All the devices in your home share your internet connection. So the number of devices running at the same time impacts your internet speed significantly. Imagine your total bandwidth is a pie, and each device that is connected to the internet at one time takes a piece of that pie. The fewer the devices, the bigger each piece of pie, meaning the faster the speed. But the more devices you start adding to the network, the less speed each one of those devices will display.

5. The speed of the sending party. Especially during peak hours, certain websites or apps can get bogged down due to high demand on their servers. These high-traffic platforms can make it seem like you have a slower connection when you're visiting them. Similarly, content occasionally has to pass through peer networks that have data caps or bandwidth limitations, which could cause you to see less than top download or upload speeds as you send and receive information.

Going back to our plumbing idea, imagine a massive network of pipes connecting to your computer. There are different sizes of pipes linking multiple servers and routers that make up the world wide web. As the water (content) flows through all the different pipes, a narrow pipe anywhere along the line will limit its total speed as it passes through. Similarly, if any part of the network has lower bandwidth or is congested, this can impact your internet speed.
A number of other factors can impact your internet speed as well, including the distance from your router to your device, the age and type of modem/router, the type of technology used for your internet connection (copper, fiber-optic, etc.), and even the age of wiring inside your home or building.

See for yourself

It can be helpful to run multiple speed tests while changing some of the key factors mentioned above. This may give you a more complete view of your speed in different situations. For instance, try some of these variations and compare your results:
- Test on different devices (tablet, laptop, mobile phone).
- Instead of testing over WiFi, test on a computer plugged into the router with an Ethernet/LAN cable.
- Test first thing in the morning, again in the middle of the day, and again late at night.
- First, test when the whole family is online. Then run it again when only one or two devices are connected to the internet.

If you're not up for experimenting, simply understand that every one of these factors comes into play when you consider your connection speed at any moment.

So… is my internet speed good or not?

We circle back to where we started — that ultimate question every internet user really wants to answer: is my internet connection fast enough? Now that you understand how the pieces fit together, you can answer that question for yourself. As you do, here are a few final points to keep in mind:
- The FCC presents a general guideline to household internet usage. The definitions of basic, medium and advanced service relate to the number of users/devices and the degree of internet use.
  Basic service (Low speed) = 3-8 Mbps
  Medium service (Average speed) = 12-25 Mbps
  Advanced service (High speed) = Over 25 Mbps (this is the FCC's definition of broadband internet)
- A user often won't notice the difference between, say, a 20 Mbps and 5 Mbps connection when doing most online activities. However, as you add people and more devices all running at once, you are more likely to notice buffering, lags, or congestion at lower speeds.

If you determine that your speed is not sufficient for your home's needs, then there are a few actions you can take:
- Optimize everything you possibly can, from your router to your devices.
- Consider strategies to manage your home's internet usage, as a way of improving speed performance.
- Consider upgrading to a faster internet speed tier, if available in your area.
- Consider adding a second line of the same speed to double your bandwidth.
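Finally, for the technically curious, here is the bare-bones code sketch promised earlier: the core of a speed test is nothing more than timing a transfer of known size and converting bytes per second into megabits per second. The sketch shows the download half; the URL is a placeholder, and real tests refine this with multiple parallel streams and nearby servers:

```python
import time
import urllib.request

def download_mbps(url):
    """Time a download and return the throughput in Mbps."""
    start = time.monotonic()
    data = urllib.request.urlopen(url, timeout=30).read()
    elapsed = time.monotonic() - start
    return (len(data) * 8) / elapsed / 1_000_000  # bits per second -> Mbps

# Example (placeholder URL -- substitute a real test file):
# print(download_mbps("https://example.com/test-file.bin"))
```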
<urn:uuid:8ce06895-deb1-4c66-aa9e-91e2b9a2c01d>
CC-MAIN-2022-40
https://discover.centurylink.com/understanding-internet-speed.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00242.warc.gz
en
0.947934
2,776
3.1875
3
Copper Switch Off is the new norm. In short, the Copper Switch Off means investing in fiber-optic communication technology and halting investment in copper-based communication systems. How and why did telecommunication companies decide to shift their investment towards fiber optics? This article looks into the history of telecommunication and the shift towards optical technology.

Over the past 30 years, an undercurrent of developments has been reshaping our world of communication. This change has started affecting our daily lives, which we can notice in the flood of enhanced communication devices that have reduced terrestrial distances. Transmission of large amounts of information over vast distances with extreme clarity and reliability is now a reality. This change in the communication scenario over 30 years was made possible by the development and deployment of fiber optic cables. It is arguably the greatest revolution in communication that humankind has ever gone through. This revolution was started by the replacement of copper communication wires with thin strands of glass called optical fibers.

The use of light for communication is obviously not a new thing for humankind. Light has been used to communicate since the earliest recorded history. Earlier communication using light was slow, the techniques employed were not sophisticated, and communication was limited by climatic conditions. Ancient Greeks, Phoenicians, Chinese, and Indians reflected sunlight off mirrors or shiny objects to send specific signals that only their group members could decode. Sunlight was later replaced by artificial light, and over a period of time ON/OFF switching was introduced, but the overall concept remained the same. Some military ships still use a variation of this old technique for low-speed communication.

Alexander Graham Bell developed a photophone to send voice signals over a light beam. In his photophone, sunlight was reflected off a mirror that vibrated to voice sound waves. He placed a photocell, which acted as the receiver, in an electric circuit connected to a speaker. He could successfully send the signal, and the ideas were good, but the technology was never deployed practically.

The invention of the laser was a turning point in the history of communication. Lasers pushed forward further studies of light communication through the air, as they provided a narrow beam of light that could be steered with reflecting mirrors. Since light communication through the air requires a clear line of sight, practical laser communication was limited by fog and rain. Scientists soon realized the necessity of a low-loss guiding medium.

The scientific community continued its effort to develop glass fiber that guides light. The first low-loss glass optical fiber was developed in 1970. Commercial manufacturing of 250-micrometer diameter fibers was done at Corning's factory, and the first fiber optic cable was made in the mid-1970s. Many telecommunication companies started using optical fibers for shorter-distance communication, and many companies pursued the refinement of optical glass fiber technology that eventually led to the use of optical fibers for long-distance communication. Bell announced the installation of approximately 978 kilometers of fiber optic cable in its Northeast U.S. corridor in 1980. In Canada, Saskatchewan Telephone installed 3,600 kilometers of fiber optic cable.
Fiber optic cables were first used for television signal transmission at the 1980 Lake Placid Winter Olympics. There was no looking back for the fiber optic industry after that. From 1980 onward, fiber optics gained popularity in the telecommunications world and has reached the status of a widely accepted and proven technology today. Replacing copper wires with optical fiber cable is the new norm.
<urn:uuid:c0d077ff-0d6d-4056-bb44-577f62f09d3e>
CC-MAIN-2022-40
https://www.fiberlogs.fomsn.com/fiber/optical-communication/copper-switch-off-is-the-new-norm/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00242.warc.gz
en
0.963242
716
3.109375
3
Cabling doesn't receive as much attention as other parts of the infrastructure, but it's an integral part of your network. It's essential to choose your cabling wisely when building your data center and designing your network infrastructure. However, this is impossible if you are unfamiliar with different cables and their components. Two of the most common cabling methods are backbone and horizontal cabling. This article will take you through what goes into backbone cabling.

What is Backbone Cabling?

Backbone cabling is a set of cables used to connect networks. It can be installed between entrance facilities, telecommunication rooms, and equipment rooms. These cables deliver communication between floors of the same building or between different rooms on the same floor. Backbone cables help manage traffic problems and serve as an effective troubleshooting tool if things go awry.

Components of Backbone Cabling

A backbone cable consists of the following parts.

Cable Pathways

These are the routing channels for the cabling process. They include floor penetrations, shafts, raceways, or conduits. Cable pathways are designed to allow you to efficiently run cables through walls and ceilings without damaging existing infrastructure.

Connecting Hardware

This includes connectors, cabling, and hardware such as patch panels. Connecting hardware allows information to be transmitted along a network, which is an integral part of networking. Each network device connects to another network device by using connecting hardware.

Wiring

There are many types of wires in backbone cabling, but the main ones are copper and fiber. Copper wiring is typically composed of copper-wire twisted pairs that send digital pulses at high frequencies. On the other hand, fiber wiring consists of a thin glass filament wrapped in multiple layers of protection.

Each wiring system has its pros and cons. Copper wiring is easy to install, durable, affordable, and sturdy enough to handle large amounts of data. However, it might not be efficient in transmitting data over long distances. Fiber-optic wiring can be installed with virtually any network hardware. Because it is not affected by electromagnetic interference, this wiring is reliable for transmitting data over long distances. Its speed and reliability increase network security and make fiber-optic cables ideal for communication equipment.

Trust Guidance From the Professionals

While backbone cabling can seem like an overly complex topic, it's much simpler than you might think. The system breaks down to cable pathways, connecting hardware, and wiring. Knowing what goes into your cabling system enables you to use it optimally. FiberPlus can help you understand the best cabling for your property.

Get in Touch with FiberPlus

FiberPlus has been providing data communication solutions for over 25 years in the Mid Atlantic Region for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure.
Our solutions now include:
- Structured Cabling (Fiberoptic, Copper and Coax for inside and outside plant networks)
- Electronic Security Systems (Access Control & CCTV Solutions)
- Wireless Access Point installations
- Public Safety DAS – Emergency Call Stations
- Audio/Video Services (Intercoms and Display Monitors)
- Support Services
- Specialty Systems
- Design/Build Services
- UL2050 Certifications and installations for Secure Spaces

FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.
<urn:uuid:5a1e6d24-231a-4ee4-bcd9-cbe791df1d1d>
CC-MAIN-2022-40
https://www.fiberplusinc.com/systems-offered/what-goes-into-backbone-cabling/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00242.warc.gz
en
0.927555
730
2.90625
3
The first American woman to walk in space shares her experience, three decades later.

Almost 30 years ago, Kathryn Sullivan – now the administrator of the National Oceanic and Atmospheric Administration – became the first American woman to walk in space as a crew member of the space shuttle Challenger. Sullivan was among six women who comprised NASA's first class of female astronauts and is widely recognized as a trailblazer for women in science, technology, engineering and mathematics fields.

Millions watched Sullivan stroll around several hundred miles above the Earth's surface on Oct. 11, 1984. And those who didn't see the event didn't have to look far to read about it.

"The picture and story were on the same page as [vice presidential candidate] Geraldine [Ferraro] in The New York Times," Sullivan said Wednesday at an event hosted by Pew Charitable Trusts. "Front page, above the fold."

As the first female vice presidential candidate, Ferraro's debate that same day with then-Vice President George H.W. Bush was perhaps an equally momentous occasion. Yet Sullivan's efforts were not the pinnacle of her achievements, but rather a glimpse of what was to come for the esteemed scientist. The first American woman to walk in space went on to document Earth's oceans before her current dual role as NOAA administrator and undersecretary of commerce for oceans and atmosphere – titles that led to TIME magazine dubbing her "The World's Weatherwoman."

In her NOAA role, Sullivan oversees programs, satellites and instruments that essentially take the Earth's pulse each day. All told, they produce 20 terabytes of data per day – twice the printed volume of the Library of Congress – that feed into global weather forecasts and an assortment of other measurements.

"It was pretty crazy back then," Sullivan said, reflecting on her days as one of NASA's first female astronauts. "NASA had not welcomed astronauts for nine years – NASA was figuring this out as well. And I don't think NASA even realized what the reaction was going to be."

It was, she said, an on-the-job education. Prior to Sullivan, the only females NASA had launched into space were two spiders and a monkey, so NASA's astronaut facilities weren't exactly equipped for women. At least, not human women.

Among the interesting tidbits Sullivan shared Wednesday:
- Initially, ground spaceflight facilities where astronauts trained didn't have female locker rooms. Sullivan credits Carolyn Huntoon, who later went on to serve as the Johnson Space Center director, for handling several gender-related issues that sprang up for NASA's first class of female astronauts. Huntoon was an integral mentor to the astronauts.
- Apparently, Sullivan said, some NASA officials wondered about a dress code for the new female astronauts. Huntoon, she said, again stepped in. If the astronaut men didn't have a dress code, then the female astronauts wouldn't, either.

That Sullivan's story – overcoming gender barriers and busting down the good old boys' club door – still resonates today indicates larger problems within the tech- and science-heavy sides of government. Recent statistics reported by Nextgov showed that women hold just 31 percent of information technology positions across government. Even fewer women – about 28 percent – occupy physical science positions, according to the Equal Employment Opportunity Commission. And only 15 percent of the government's engineering and architecture positions are held by women.
Problems are compounded by the fact that tech and science workplaces apparently don't do much to attract or retain female talent. Statistics in a 2014 multinational report by the Center for Talent Innovation suggest women are fleeing science and tech fields in droves. It's still big news when a woman such as former Google executive Megan Smith fills a high-ranking tech or science position – not because of her tech chops or acumen, but because she's a she.

Still, Sullivan's efforts illuminate a path that, while less traveled, has been astoundingly rewarding to her.

"These women carried the hopes and dreams of millions who saw their bold journeys as their own tickets to success," said Lynn Sherr, a journalist who has followed America's space program for several decades. Sherr spoke about Sullivan and the original class of female NASA astronauts at the Pew event.

"Kathy's great accomplishment is not only that she helped shatter that glass ceiling, but that she did it so magnificently," Sherr said. "She continues to serve people and the planet."
<urn:uuid:7776700f-cd1c-46f4-8f7a-e86d28803a1a>
CC-MAIN-2022-40
https://www.nextgov.com/technology-news/2014/10/30-years-later-noaa-administrator-talks-about-her-spacewalk/96110/?oref=ng-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00242.warc.gz
en
0.954953
986
3.0625
3
Bluetooth is a short-range wireless technology, first introduced by Ericsson in 1994 after development began in 1989. This technology operates in the ISM band, which is reserved for industrial, scientific, and medical applications. This is a free band and does not require lengthy licensing procedures and fees. It operates in two frequency ranges – 2400 to 2483.5 MHz and 2402 to 2480 MHz – via frequency-hopping spread spectrum radio technology. The Bluetooth transmission band is divided into 79 channels for transmitting data packets. The latest version of this communication technology, known as Bluetooth Low Energy or BLE, accommodates 40 channels because it uses 2 MHz channel spacing.

Bluetooth Low Energy (BLE) was developed by the Bluetooth Special Interest Group (BSIG) for high energy efficiency and performance. This technology is commonly marketed as Bluetooth Smart. Devices enabled with BLE can operate for many months, even years, on a single button cell battery, which makes the technology very suitable for the Internet of Things (IoT). BLE can operate at speeds up to 2 Mbit/s over an effective distance of up to 800 ft (240 meters).

Bluetooth Technology Versions

Bluetooth technology has evolved through multiple versions since its inception. There are five major versions, each with its sub-versions, released so far, as given below.

- Bluetooth 1 (with multiple sub-releases). This represented the first major version of the technology, but it was plagued by compatibility issues and a comparatively slow data transfer limit of 1 Mbps.
- Bluetooth 2 (with multiple sub-releases). This addressed some compatibility issues and increased the data limit to 3 Mbps.
- Bluetooth 3. This was a major boost, introducing the HS protocol that increased the data transfer limit to 24 Mbps and allowing Bluetooth to run over an alternate radio, like the one used by Wi-Fi devices. This allowed it to pervade the market significantly more.
- Bluetooth 4 (with multiple sub-releases). This, among other fixes, introduced the LE (low energy) technology. This innovation allowed Bluetooth to be deployed in all sorts of wearable and portable systems, as they would need to be charged much less often (or would require far fewer battery changes). It allowed Bluetooth to break into the physical security market, as it could now be installed in all sorts of sensors and readers. This was the beginning of Bluetooth security systems.
- Bluetooth 5 is still in development. According to BSIG, it will be optimized for IoT devices by increasing range, speed, and broadcast capacity. It will truly enable Bluetooth to be deployed in all physical security systems.

Bluetooth in Access Control Applications

Like other short-range wireless technologies such as Zigbee, Bluetooth can also be used effectively for access control. The range of Bluetooth security systems has been growing at an ever greater pace in recent years, with the advent of smart homes and modern access control. The main applications of Bluetooth access control include home automation and access security systems. There are large numbers of software applications available in the market that can be configured – a process commonly known as profiling – on Bluetooth-enabled mobile phones to establish short-range communication with Bluetooth-enabled proximity-sensing devices.
Other than those software applications, this technology can be implemented in an autonomous Bluetooth access control system too. In a typical iPhone-based access control system, you need a Bluetooth-enabled proximity reader. A software application is installed on the iPhone, and the application is then profiled for access security use. When the smartphone is brought near the proximity reader, the application communicates with the reader and exchanges the security key to authenticate and open the door lock. Similarly, in an automated and networked house or office, your phone communicates with the centralized access controller, which controls the door locks in a networked environment. The security key is authenticated, and the authorized door is opened through signals from the main access controller.

The introduction of the iBeacon protocol is another important milestone in applying Bluetooth to access control and security. It can be used extensively in location-finding applications, which in turn opens up major potential for marketing services. Recently, iBeacon has been implemented in multiple BLE-enabled devices that can easily communicate with mobile applications in their respective ranges over Bluetooth. The power consumption of an iBeacon transmitter is very low, and it can operate continuously for many months on a small battery.

Bluetooth and Smart Locks

Against the backdrop of changing lifestyle habits and the shift to doing everything on mobile, there is now a general push within the access control market to go from cards to mobile too. With unlocking doors from mobile becoming an ability people expect to have, it is poised to become the main method of unlocking doors in the next few years. The latest example is a new generation of Bluetooth Low Energy (BLE)-enabled smart locks that are able to interact directly with the phone to unlock doors. Allegion has a range of such locks -- and those looking to utilize the BLE function in their own apps will be able to integrate the functions provided by Allegion.

For those who would like to utilize BLE to unlock doors without building an app from scratch, Kisi is integrated with Allegion to enable just that. Users will be able to install Allegion door locks and unlock their doors through the Kisi app on their mobile.

The best use cases for BLE locks are organizations looking for an access control security solution, since these locks can provide real-time communication and feedback. Among other benefits, this also means being able to grant or revoke access immediately and review real-time activity logs. Organizations with high traffic inflow and outflow will find such features most useful in streamlining their daily operations and strengthening their security. As users will be required to swipe in-app to unlock, this also presents opportunities for any facility or workspace management apps looking to drive usage and engagement. By requiring users to unlock doors through their app, the app will become a daily necessity rather than a possibility.

Access control over Bluetooth uses mobile devices in association with third-party controllers and readers, which reduces the hardware cost for this particular application. Only software profiling is needed to use your mobile devices with different access control systems.
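Stripped of the radio layer, the authentication step in such a system often boils down to a challenge-response exchange. The sketch below is an invented illustration of that logic: key provisioning, message formats, and the BLE transport itself are all assumed or omitted, and no specific vendor's protocol is implied.

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # in practice, provisioned to both app and reader

def reader_challenge():
    """The reader issues a fresh random challenge to the phone."""
    return os.urandom(16)

def phone_response(challenge):
    """The app signs the reader's challenge with the provisioned key."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def reader_verify(challenge, response):
    """The reader recomputes the signature; a match unlocks the door."""
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

c = reader_challenge()
print(reader_verify(c, phone_response(c)))  # True -> unlock
```

Using a fresh random challenge each time means a captured response cannot simply be replayed to the reader later, which is one reason challenge-response designs are preferred over transmitting a static key.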
With the advent of BLE technology in the market, Bluetooth access control has a bright future in this industry. To learn more about Kisi, take a look at our product overview or get in touch with our team.
<urn:uuid:08e106ff-b404-4064-ac35-55a52c56116d>
CC-MAIN-2022-40
https://www.getkisi.com/guides/bluetooth-access-control
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00242.warc.gz
en
0.943346
1,355
3.203125
3
Many organizations are experimenting with IoT projects, but these bring significantly different security challenges, which can have far-reaching consequences. An appreciation of these unique challenges is important for the effective rollout of IoT projects.

These challenges differ from those that arise in more conventional technology infrastructures. Strategies that involve ring-fencing core systems and applications behind tightly controlled access do not work with IoT projects. Here the scale is exponentially multiplied, as you're dealing in real time with potentially tens or hundreds of thousands of small devices spread across large areas. And unlike traditional cybersecurity incidents, which mostly result in data compromise, security failures in real-time IoT networks can have far-reaching implications for human security and safety.

Popular IoT deployments vary from building automation systems and sensor networks to critical connected healthcare solutions, connected vehicles, and industrial robotics. Such deployment scenarios can automate device management, improve efficiencies and reduce operational costs while improving the customer experience. There are opportunities in every business sector, and early-adopter organizations are racing to secure a first-mover advantage.

IoT systems' security challenges

Security challenges of IoT systems can be broadly categorized along the three-tier IoT security architecture:

- Security of devices: It is important that each device only does what it is intended to do and offers no scope for anyone or anything to infiltrate and reprogram it. With the wide range of IoT devices, there are large sections of code to be protected, either through encryption or access control. While essential for speed and efficiency, OTA (over-the-air) update capabilities for software and firmware can compromise the security of the system. These IoT devices face numerous vulnerabilities because of the way they operate.

- Security of communications: IoT communications happen over both public and private networks, industrial networks, and IT networks. Securing network protocols is an important challenge. As many IoT devices have sensors with low computational power, providing data and network-level encryption will fall on gateways, which in turn will need to secure vast amounts of structured and unstructured data in addition to supporting different types of connections (Wi-Fi, Bluetooth, Cellular, Zigbee, NFC, etc.) and device architectures. (A short code sketch of this gateway-side encryption appears at the end of this post.)

- Security of cloud/data center: Data from IoT devices goes into the cloud and applications. Insecure cloud and mobile interfaces for these applications are huge challenges, as they most often use open-source libraries and technologies. Furthermore, all types of IoT devices and users connect to the cloud remotely, and securing these connections is very important. Rather than securing the entire data store, one needs to secure every data packet, as there are innumerable sources with different levels of security.

The challenge of IoT devices

As more devices are added to IoT networks, the security challenge grows. According to Gartner, around 26 billion IoT devices will be connected by 2020. This gives hackers 26 billion potential targets, posing three key challenges:

- Limitations of ring-fencing: A significant proportion of the security challenges surrounding IoT deployments come from the nature of the devices being connected.
The traditional ring-fencing model is already proving to be a struggle with intermittently connecting, roaming personal devices such as smartphones and tablets; IoT devices, which are always connected and periodically transmitting data, strain it even further. The small size, large scale, and distributed nature of IoT devices will overwhelm such cybersecurity models. This is further exacerbated by the expectation that the device will be owned by the customer while the onus of its security is on the manufacturer, which renders the ring-fence concept moot.
- Limited compute capability of IoT devices: Many such sensors and other monitoring devices have very limited computational capabilities. As a result, the security tools that work on computers often simply can't be installed due to a lack of CPU power and data storage capacity. Most such tools are written for computer architectures significantly different from those in the devices, nor can the devices rely on the digital certificates that conventional cybersecurity models mandate. Also, many have not been designed to readily accept updates and patches, which makes ongoing security maintenance problematic. Some also have configuration and security settings baked into the firmware that simply can't be updated. Furthermore, as more insights are gained from the data collected from IoT monitoring devices, these devices are being enhanced to perform corrective actions, which adds to the challenges.
- Irregular communication patterns: The sheer volume of IoT devices, together with their irregular communication patterns, can overwhelm many security tools. Data patterns that would indicate a compromise or attack in a conventional IT infrastructure are likely to be common in an IoT infrastructure. One major reason for such irregular data patterns is that device communication is logical only in the context of local conditions. Another is that IoT networks are going beyond simply connecting devices to hosting increasingly smart devices, which trigger contextually adaptive communication patterns. The conventional static models deployed in the infrastructure are bereft of this context and hence unlikely to handle such dynamic situations correctly. In addition, the knee-jerk reaction of cybersecurity experts to deny access to ring-fenced assets further aggravates the situation.
In the second half of this blog post, we'll move on to explore three specific industry examples of security challenges that IoT deployment can pose and underline the importance of thinking strategically to head off IoT security threats.
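As a concrete aside before part two: the limited compute capability described above rules out heavyweight security tooling, but even constrained devices can usually afford a keyed hash. The following is a hedged Python sketch – the device ID, key handling, and payload format are invented for illustration – of HMAC-authenticated sensor readings that a gateway can verify before trusting the data.

import hashlib
import hmac
import json
import time

# Placeholder: in practice this would be a per-device secret provisioned
# securely at manufacture, never a literal in source code.
DEVICE_KEY = b"per-device-secret"

def signed_reading(device_id: str, value: float) -> dict:
    """Build a sensor payload and attach an HMAC-SHA256 tag."""
    payload = json.dumps({"id": device_id, "ts": int(time.time()), "v": value})
    tag = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": tag}

# The gateway recomputes the tag and drops any message that fails to verify.
msg = signed_reading("sensor-042", 21.5)
expected = hmac.new(DEVICE_KEY, msg["payload"].encode(), hashlib.sha256).hexdigest()
assert hmac.compare_digest(expected, msg["mac"])

An HMAC costs only a hash computation, which is why it is often feasible on devices where full certificate-based TLS is not.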
<urn:uuid:d18de335-8852-4e9f-9b97-d911ef3074ff>
CC-MAIN-2022-40
https://www.hcltech.com/blogs/ghrao/security-challenge-posed-internet-things-part-1
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00442.warc.gz
en
0.937441
1,076
2.734375
3
In this article we explain:
- What is a DKIM Record?
- How do I create a DKIM record for a domain?
- How do I add a DKIM record?
- How can I test my DKIM record?
- Can I have multiple DKIM records?
What is a DKIM Record?
Like SPF, DKIM is an open standard for email authentication that is used for DMARC alignment and exists in the DNS record of the domain, but it is a bit more complicated than SPF. DKIM adds a signature header to the email, secured with a public/private key pair. This DKIM signature acts like a watermark for email, so that email receivers can verify that the email actually came from the domain it says it does and hasn't been tampered with.
Each DKIM signature contains all the information needed for an email server to verify that the signature is real, and it is generated and verified with a pair of DKIM keys. The originating email server holds what is called the "private key," with which it signs outgoing mail; the receiving mail server or ISP verifies that signature with the other half of the keypair, called the "public key." The public key exists in the DKIM record in your domain's DNS as a text (TXT) record. In order to locate the right public key and verify these signatures, a DKIM selector is used. More information about DKIM selectors, and discovering which ones your domain uses, can be found here.
How do I create a DKIM record for a domain?
1 – Create a list of all domains and sending services (such as marketing campaign platforms or invoice generators, also referred to as ESPs) that are authorized to send email on your behalf. Contact them, request that DKIM be configured, and ask for a copy of the public key.
2 – Generate the key pairs. Here are a few options:
- If your organization has its own email server, it may have native DKIM functionality. Check the available documentation for the public/private key generation and policy record creation (or check in with your IT staff who are responsible for the server).
- There are third-party tools available to generate the DKIM record. Note: check with your organization's security policy prior to utilizing third-party tools.
- To create the keys without a third party, an open-source project called opendkim is available.
- DKIM keys can also be generated via openssl.
How do I add a DKIM record?
1 – Publish your public key to your DNS record as a text (TXT) record. Check whether your DNS provider allows more than 255 characters in the input field, as you may have to work with your provider to increase the size or to create the TXT record itself.
2 – Save the private key to your SMTP server / MTA (mail transfer agent).
How can I test my DKIM record?
Once the record is published, you can verify it with a DKIM record lookup or validation tool; dmarcian provides one for exactly this purpose.
Can I have multiple DKIM records?
A domain can have as many DKIM records for public keys as it has servers that send mail. Just make sure that they use different selector names. Read about the importance of rotating your DKIM keys and automating that process here.
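For readers who want to see what key generation looks like in practice, here is a minimal sketch – not dmarcian tooling – using Python's "cryptography" package to generate a 2048-bit RSA keypair and format the public half as a DKIM TXT record value. The selector ("selector1") and domain ("example.com") are placeholders.

import base64
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the keypair; 2048 bits is a common choice for DKIM today.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Save the private half for the MTA (keep this file secret).
with open("dkim_private.pem", "wb") as key_file:
    key_file.write(private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    ))

# DER-encode the public half and base64 it for the DNS TXT record.
der = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
p = base64.b64encode(der).decode("ascii")
print(f'selector1._domainkey.example.com TXT "v=DKIM1; k=rsa; p={p}"')

The printed value is what you would publish in step 1 of "How do I add a DKIM record?", and the PEM file is what you would install on the mail server in step 2.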
<urn:uuid:d5f18ce9-3863-4ebc-915e-6bb927457f54>
CC-MAIN-2022-40
https://dmarcian.com/create-a-dkim-record/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00442.warc.gz
en
0.925833
719
2.84375
3
Employees of federal, state and local governments; and businesses working with the government.
In this course, students will learn how to plan and implement an operating system deployment strategy using modern deployment methods, as well as how to implement an update strategy. Students will be introduced to key components of modern management and co-management strategies. This course also covers what it takes to incorporate Microsoft Intune into your organization. Students will also learn about methods for deploying and managing apps and browser-based applications. They will be introduced to the key concepts of security in modern management, including authentication, identities, access, and compliance policies. Students will be introduced to technologies such as Azure Active Directory, Azure Information Protection and Windows Defender Advanced Threat Protection, as well as how to leverage them to protect devices and data.
After completing this course, learners should be able to:
- Plan, develop, and implement an Operating System deployment, upgrade, and update strategy.
- Understand the benefits and methods of co-management strategies.
- Plan and implement device enrollment and configuration.
- Manage and deploy applications and plan a mobile application management strategy.
- Manage users and authentication using Azure AD and Active Directory DS.
- Describe and implement methods used to protect devices and data.
1 – MODERN MANAGEMENT
- The Enterprise Desktop
- Azure AD Overview
- Managing Identities in Azure AD
2 – DEVICE ENROLLMENT
- Manage Device Authentication
- Device Enrollment using Microsoft Endpoint Configuration Manager
- Device Enrollment using Microsoft Intune
3 – CONFIGURING PROFILES
- Configuring Device Profiles
- Managing User Profiles
4 – APPLICATION MANAGEMENT
- Implement Mobile Application Management (MAM)
- Deploying and updating applications
- Administering applications
5 – MANAGING AUTHENTICATION IN AZURE AD
- Protecting Identities in Azure AD
- Enabling Organization Access
- Implement Device Compliance Policies
- Using Reporting
6 – MANAGING SECURITY
- Implement device data protection
- Managing Windows Defender ATP
- Managing Windows Defender in Windows 10
7 – DEPLOYMENT USING MICROSOFT ENDPOINT MANAGER – PART 1
- Assessing Deployment Readiness
- On-Premise Deployment Tools and Strategies
8 – DEPLOYMENT USING MICROSOFT ENDPOINT MANAGER – PART 2
- Deploying New Devices
- Dynamic Deployment Methods
- Planning a Transition to Modern Management
9 – MANAGING UPDATES FOR WINDOWS 10
- Updating Windows 10
- Windows Update for Business
- Desktop Analytics
10 – MANAGING WINDOWS 10 SECURITY AND FEATURE UPDATES
- Describe the Windows 10 servicing channels
- Configure a Windows update policy using Group Policy settings
- Configure Windows Update for Business to deploy OS updates
- Use Desktop Analytics to assess upgrade readiness
<urn:uuid:7b644be0-2080-4f96-8c39-cc43e152b973>
CC-MAIN-2022-40
https://www.itdojo.com/courses-microsoft/md-100t00-windows-10-copy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00442.warc.gz
en
0.790006
640
2.515625
3
Digital Asset Management – what is it, and why does my organisation need it?
In brief, Digital Asset Management (DAM) is the collection of data, images, files, and associated material in one central repository to make it accessible. A DAM platform allows users to search for all of these assets within a single repository rather than searching through disparate solutions. This has the immediate effect of getting information out of siloes and allowing it to be widely available. The core concept behind these solutions is that an employee can log into the platform and easily search for assets like images and files rather than searching multiple legacy solutions.
An advanced, modern DAM platform sits in the same broad category as a content services platform. Information in an advanced DAM platform is stored as data rather than documents and files, and therefore becomes easier to search. By categorising and storing it as data in one central repository, the information is then retrievable through its metadata – in other words, the data that exists about the data itself: its digital footprint.
Doing this democratises the data in a way, breaking down barriers between, say, an image, an invoice and a barcode. The DAM platform will instantly make connections between them if, indeed, they are related in any way, and then present the information to the user. By so doing, it is possible to improve many business processes, tying together otherwise loose threads of information into one cohesive unit. For example, a major global fashion group realised that it could repurpose existing marketing images across various labels rather than spend tens of thousands of dollars each season on separate photography sessions. By helping to connect the dots, their DAM platform saved them hundreds of thousands of dollars simply by presenting their existing assets in a simple, unified way.
DAM technology is more than a repository, of course. Picture it as a framework that holds a company's assets, on top of which sits a powerful AI engine capable of learning the connections between disparate data sets and presenting them to users in ways that make the data more useful and functional. Advanced DAM platforms can scale up to storing more than ten billion objects at the same time – all of which become tangible assets, connected by the in-built AI. This can deliver a huge rise in efficiency in how assets and objects are used.
Take, for example, a busy modern media marketing agency. In the digital world, they are faced with a massive expansion of content at the same time as release windows are shrinking – coupled with the issue of increasingly complex content creation and delivery ecosystems. A DAM platform can manage those huge volumes of assets – each with their complex metadata – at speeds and scale that would simply break a legacy system.
Another compelling example of DAM in action includes a large U.S.-based film and TV company, which uses it for licencing management. Its DAM solution ensures that it retrieves and uses the correct image in the right market at the right time – despite instantaneous releases across multiple countries and languages, each with its own set of IP and licencing agreements. Similarly, another TV giant uses its DAM platform for storing and retrieving stock footage – ensuring there is little to no duplication across the many productions the company works on simultaneously.
The company can then locate and re-use iconic images such as a panorama of the Brooklyn Bridge or the famous Hollywood sign – identifying contract details to avoid licencing issues and saving itself otherwise costly re-shoots. Companies in the beauty industry are also beginning to use DAM to streamline the digital supply chain and ensure all products sold on eCommerce sites are featured alongside complete product information such as ingredients, local distributors and sales rights in a specific region.
Aside from creating a centralised, smart repository for a company's digital assets, a DAM platform can help empower better collaboration through improved workflows. For example, this might make acquiring approvals for marketing materials faster and easier, and generate a trigger to engage automated quality assurance processes.
As the world re-emerges from pandemic business conditions, it becomes apparent that digital transformation has only accelerated. In fact, the Australian government's Digital Transformation Agency suggests that as many as 90 per cent of Australian businesses have adopted new technologies to maintain business continuity since the start of 2020. Similarly, in Singapore, 84 per cent of businesses increased their tech spend in the last year to meet new challenges and speed up digital delivery, according to a different report. New Zealand has also experienced a dramatic evolution, with one report suggesting that as much as five years of advances in digital transformation occurred in the country in a period of just eight weeks.
All this suggests that enterprises must adapt, bring new technologies online and reduce the time to get new products and services to market. A comprehensive content plan, which focuses on reducing siloes and collating all a company's assets in one easily searchable, AI-powered platform, will help immeasurably.
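As a toy illustration of the metadata-driven retrieval described above – the asset records and fields below are invented for this sketch, not any vendor's data model – searching by tags and licence territory might look like this:

# Invented asset records; a real DAM indexes billions of these.
assets = [
    {"id": 1, "tags": {"brooklyn bridge", "panorama"}, "territories": {"worldwide"}},
    {"id": 2, "tags": {"hollywood sign"}, "territories": {"US"}},
]

def search(required_tags, territory):
    """Yield assets matching all required tags and cleared for the territory."""
    for asset in assets:
        cleared = territory in asset["territories"] or "worldwide" in asset["territories"]
        if required_tags <= asset["tags"] and cleared:
            yield asset

# Find Brooklyn Bridge imagery cleared for use in the UK.
print(list(search({"brooklyn bridge"}, "UK")))

The point is not the code but the shape of the problem: once every asset carries structured metadata, a question like "which panoramas can we legally reuse in this market?" becomes a simple query rather than a manual search.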
<urn:uuid:2cbbf8a9-5e66-4e5f-b3b3-287ae2c1c815>
CC-MAIN-2022-40
https://datacenternews.asia/story/digital-asset-management-what-is-it-and-why-does-my-organisation-need-it
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00442.warc.gz
en
0.940558
998
2.578125
3
What Is a Safe Work Environment?
A safe work environment prevents the spread of diseases, eliminates recognized hazards and meets safety regulations. Employee safety has been the biggest concern for organizations of all sizes over previous decades. The COVID-19 pandemic has further intensified these concerns. Statistics from the Department of Labor depict vast change within the workplace, including:
- 2.7 million nonfatal injuries and illnesses were reported in the private sector in the U.S. in 2020.
- Workplace illness cases increased from 127,200 cases in 2019 to 544,600 cases in 2020.
- Employer-reported respiratory illness cases increased by nearly 4000% from 2019 to 2020.
Organizations could have avoided many of these nonfatal injuries and illnesses if they had adopted the best practices for a secure environment and implemented the safety manuals of the Occupational Safety and Health Administration (OSHA). This article provides an overview of the five best practices for workplace safety that every organization should implement to keep employees safe at the workplace.
Physical Security vs. Cybersecurity
Two major aspects exist when it comes to workplace security: physical security and cybersecurity.
- Physical security: Physical security refers to the preventive measures an organization takes to protect employees from injury and to protect facilities and equipment from theft or destruction.
- Cybersecurity: Cybersecurity refers to the practice of protecting electronic data from being stolen or destroyed.
Organizations should give equal priority to both these security aspects. A recent research study suggested breaches in physical security or cybersecurity can cost huge amounts of money:
- The global average cost of a data breach was $4.24 million.
- The average cost of a data breach in the United States was $9.05 million.
- The average cost of physical security compromise was $3.54 million.
To avoid such huge losses, organizations should implement access control systems that help monitor and protect both physical and digital assets.
5 Best Practices for Workplace Safety
Best practices such as creating a robust workplace safety plan, developing office security measures, unifying software systems with API integrations and implementing access control systems help organizations boost workplace safety.
1. Create a robust workplace security plan
Every organization is vulnerable to physical and cyber threats, which may include tailgating, identity theft, piggybacking, social engineering, unaccounted visitors, malware, ransomware, phishing and more. Since every organization is unique, physical and cyber security vulnerabilities vary from one organization to another. The only way to identify these vulnerabilities is to conduct a risk assessment at regular intervals. Without assessing risks, organizations will fail to develop effective security plans.
Workplace security should include policies that address potential risks. For instance, the security plan of an educational institution should include emergency lockdown and active shooter procedures, including:
- Implementing a cloud-based access control system
- Integrating smart locks so that security teams can select the doors that should open or close automatically.
- Setting role-based permissions to override the lockdown plans. With these role-based permissions, exceptions can be made within the lockdown plan to allow law enforcement officials to enter the college premises and take control of the situation.
2. Establish health and safety programs
Prevention is the best medicine. Business owners should establish health and safety programs to protect employees from contagious diseases, hazardous items and workplace injuries. A health and safety program provides safety guidelines, action plans and checklists that can make work environments safer and prevent mishaps:
- Safety guidelines: Guidelines may include wearing protective equipment, reporting unsafe conditions, and using machines properly.
- Action plans: Action plans may include implementing an innovative security solution, offering safety training to employees, and making every employee accountable for their safety.
- Checklists: Safety checklists are the documents used by frontline workers to identify and verify potential hazards.
During the COVID-19 pandemic, organizations must follow safety procedures such as social distancing, frequent sanitization, and touchless business operations. This is where innovative solutions such as touchless visitor management systems and touchless access control systems help organizations follow guidelines and protect employees from contagious diseases.
A touchless visitor management system helps ensure that employees don't get infected by unknown people who visit the office. This system generates an entry QR code or bar code for outsiders only after they fill out a health questionnaire, provide the purpose of their visit, and upload proof of vaccination. This solution makes contact tracing easier for organizations and enables them to deal effectively with pandemics like COVID-19. Organizations can also create touchless workspaces by implementing technologies like Mobile Access Control that eliminate the need to physically touch doors while moving in and around the office.
3. Build out office security measures
Organizations should embed physical security and cybersecurity into the organization's culture. Human resource teams should schedule safety training programs at regular intervals to equip employees with sufficient knowledge on:
- Dealing with emergencies.
- Handling hazardous items.
- Reporting hazardous conditions to relevant authorities at the right time.
- Whistleblowing procedures.
- Occupational health guidelines.
The training goes a long way toward helping employees prepare for emergencies and deal with occupational safety issues. Furthermore, video management systems are a helpful tool to increase security in the workplace. IT teams should protect server rooms, electrical closets, and other sensitive spaces with the help of IoT devices such as smart sensors, AI-enabled security cameras, and video management tools, as unauthorized entry into these spaces could prove costly to the organization.
4. Unify Software Systems with API Integrations
Organizations use different types of software systems, innovative solutions, and security tools to protect human, physical, and digital resources at the workplace. However, the existing IT infrastructure may not be sufficient to protect the sensitive data that organizations use for informed decision-making. IT teams should actively look out for third-party applications that can easily be integrated into the existing IT infrastructure to protect sensitive information.
Here is a list of API integrations IT teams should consider implementing to create a secure workplace:
- Video Management: Integrated access control and video management enables security teams to sync AI-enabled video surveillance cameras to the access control system with an API token.
This API integration triggers a warning when unauthorized door access events occur.
- Identity Management: Integrated access control and identity management enables security teams to give role-based permissions to employees and to deactivate access rights for former employees immediately after their resignation is accepted.
(A purely illustrative sketch of such an API integration appears at the end of this article.)
5. Implement a great access control system
A cloud-based access control system can overhaul the entire security system of office buildings. It provides a centralized dashboard where security teams can manage both the physical security and cybersecurity of the organization. By implementing a cloud-based access control system, security teams can do the following with ease:
- Assign physical key fobs or mobile keys that streamline the entry and exit of employees.
- Assign role-based access permissions to employees to restrict unauthorized employees from accessing sensitive information like intellectual property.
- Add, activate, deactivate, and remove access rights of employees with a single mouse click.
- Monitor access activities of all employees in real time.
- Integrate APIs for visitor management, critical event management, elevator controls, and single sign-on (SSO).
Genea is Here to Help
Organizations should not leave the workplace vulnerable to physical and cyber threats. A lack of security procedures not only ruins the reputation of the organization but can also lead to heavy data and productivity losses. Implementing a cloud-based access control system can eliminate most of these security threats and save organizations a lot of money.
If your organization is facing physical and cyber threats, you may want to look into the Modern Access Control Solution from Genea, which can help manage your entire workplace security from a centralized dashboard. Leaders across industries such as healthcare, education, commercial real estate, and retail have implemented Genea's innovative solutions such as cloud-based access control, mobile access control, a touchless visitor management system, and innovative API integrations for visitor management, single sign-on, and critical event management to streamline their security operations.
Learn more about how Modern Access Control Solutions from Genea can help you create a secure work environment that meets regulatory standards and improves the wellbeing of employees.
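As promised above, here is a purely illustrative sketch of the kind of API integration described in practice 4. The endpoint, token handling, and field names are invented for illustration and are not Genea's or any other vendor's actual API.

import requests

API_BASE = "https://access-control.example.com/api/v1"  # hypothetical endpoint
TOKEN = "replace-with-integration-token"                # issued per integration

response = requests.get(
    f"{API_BASE}/door-events",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"authorized": "false", "since": "2022-01-01T00:00:00Z"},
    timeout=10,
)
response.raise_for_status()

for event in response.json().get("events", []):
    # Hand off to the video management system to bookmark matching footage.
    print("ALERT:", event.get("door"), event.get("timestamp"))

In a production integration, the token would come from a secrets store, and alerts would flow into the video management and incident response systems rather than being printed.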
<urn:uuid:7e070a05-8d47-4abc-8148-db747f705c21>
CC-MAIN-2022-40
https://www.getgenea.com/blog/is-your-office-safe-5-tips-to-improve-workplace-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00442.warc.gz
en
0.916484
1,716
2.890625
3
On Monday, November 30, specialists from Ben-Gurion University of the Negev (Israel) presented a new type of cyber-biological attack that can bring biological warfare to a new level. The attack they presented could cause biotechnologists working with DNA to inadvertently create dangerous viruses and toxins.
The researchers described how an attacker can use malware on a biotechnologist's computer to spoof DNA sequences. In particular, vulnerabilities in the Screening Guidelines for Suppliers of Synthetic Double-stranded DNA and the Harmonized Screening Protocol 2.0 make it possible "to bypass the protocols using a common obfuscation procedure."
According to the US Department of Health and Human Services, specific screening protocols must be followed to identify potentially harmful DNA in synthetic gene orders. However, the researchers were able to bypass these protocols by using obfuscation: 16 out of 50 obfuscated DNA samples were not detected by "best match" DNA screening.
The software used to develop and manage synthetic DNA projects is also vulnerable to man-in-the-browser attacks. With these attacks, attackers can inject arbitrary strands of DNA into gene sequences – what the researchers have called an "end-to-end cyber attack."
To demonstrate the feasibility of their attack, the researchers cited a Cas9 protein residue, using malware to convert this sequence into active pathogens. According to the scientists, using the CRISPR protocols, the Cas9 protein can be used to "deobfuscate harmful DNA in host cells." For an unsuspecting scientist processing the sequence, this could mean the accidental creation of hazardous substances, including synthetic viruses or toxins.
As I mentioned previously, cybercriminals attacked UCSF, the leading US COVID-19 vaccine developer.
<urn:uuid:556d2263-0219-46a0-9795-e61ac0643abc>
CC-MAIN-2022-40
https://gridinsoft.com/blogs/apocalypse-now-experts-presented-a-new-type-of-cyber-biological-attack/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00442.warc.gz
en
0.923663
374
3.015625
3
AES encryption, also known as the Advanced Encryption Standard, is a symmetric block cipher used by the U.S. government to protect classified information. In 1997, the National Institute of Standards and Technology (NIST) commissioned the development of a replacement for the Data Encryption Standard (DES), which had become vulnerable to brute-force attacks as computational power increased. A contest was held, cryptographers made proposals, and NIST eventually crowned AES the winner and the new encryption standard in 2001. AES won because it was a computationally efficient encryption algorithm that improved enormously on the security and encryption speed of DES, 3DES, and rival proposed algorithms.
AES is implemented in both software and hardware throughout the world to encrypt sensitive data. It is a block cipher supporting three key lengths, providing greater strength and flexibility as computational power was predicted to increase in accordance with Moore's Law.
- AES-128 uses a 128-bit key length
- AES-192 uses a 192-bit key length
- AES-256 uses a 256-bit key length
There are 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys. A round consists of several processing steps that include substitution, transposition, and mixing of the input plaintext to transform it into the final output of ciphertext.
The AES encryption algorithm defines numerous transformations that are to be performed on data stored in an array. The first step of the cipher is to put the data into an array, after which the cipher transformations are repeated over multiple encryption rounds. The first transformation in the AES encryption cipher is the substitution of data using a substitution table. The second transformation shifts data rows. The third mixes columns. The last transformation is performed on each column using a different part of the encryption key. Longer keys need more rounds to complete. The end result of these computations is encryption that today remains largely unbreakable, even with supercomputer resources. However, in the future, advances in quantum computing may require increases in key lengths and possibly a change in the algorithm to continue to secure data requiring encryption.
What does this mean for an SMB?
Encryption and cryptography are important to all SMBs and MSPs in order to protect the confidentiality and integrity of critical and sensitive information. Encryption also plays a role in protecting data availability, in that backups need to be protected with encryption if they contain critical and sensitive data. SMBs or MSPs may fall under legislative controls such as CMMC, HIPAA, or PCI, all of which require specific forms of data encryption. Examples of these legislative requirements include individual healthcare records under the Health Insurance Portability and Accountability Act (HIPAA), credit card PAN information under the Payment Card Industry Data Security Standards (PCI-DSS), Controlled Unclassified Information (CUI) under the CMMC legislative controls, and even Non-Public Personal Information (NPPI) under the General Data Protection Regulation in the EU or, here in the US, the California Consumer Privacy Act. One strategy for SMBs to deal with industry compliance requirements is NOT to have such data in their possession to begin with.
For example, PCI compliance obligations can often be avoided by partnering with an online payment service that performs the credit authorization outside of your control and simply provides an approval or authorization code back. However, in cases where an SMB/MSP must collect and store critical and sensitive data, they must protect it with encryption. Today, that means using Advanced Encryption Standard (AES) encryption, currently the strongest widely available algorithm for protecting your data from compromise and exposure.
SMBs/MSPs should encrypt laptops and tablets with Microsoft's BitLocker or Apple's FileVault to protect the critical and sensitive data they contain from compromise. This limits a stolen or lost device to a financial loss or cost, instead of the larger financial fines that follow a breach of regulated critical or sensitive data (HIPAA records, PCI, or CMMC).
As with physical keys, logical key management is important. Be certain you store decryption keys in a secure place, not on the device whose data the key decrypts. Make sure you protect the use of the key with a strong, long, and unique password, stored in a Password Manager that is itself protected with equally strong password controls AND two-factor authentication.
Beyond encryption and key management, companies can do the following things to further protect themselves from compromise.
Additional Cybersecurity Recommendations
The recommendations below will help you and your business stay secure against the various threats you may face on a day-to-day basis. All of the suggestions listed below can be gained by hiring CyberHoot's vCISO services.
- Govern employees with policies and procedures. You need a password policy, an acceptable use policy, an information handling policy, and a written information security program (WISP) at a minimum.
- Train employees on how to spot and avoid phishing attacks. Adopt a Learning Management system like CyberHoot to teach employees the skills they need to be more confident, productive, and secure.
- Test employees with phishing attacks for practice. CyberHoot's phish testing allows businesses to test employees with believable phishing attacks and put those that fail into remedial phish training.
- Deploy critical cybersecurity technology, including two-factor authentication on all critical accounts. Enable email SPAM filtering, validate backups, and deploy DNS protection, antivirus, and anti-malware on all your endpoints.
- In the modern work-from-home era, make sure you're managing personal devices connecting to your network by validating their security (patching, antivirus, DNS protections, etc.) or prohibiting their use entirely.
- If you haven't had a risk assessment by a 3rd party in the last 2 years, you should have one now. Establishing a risk management framework in your organization is critical to addressing your most egregious risks with your finite time and money.
- Buy Cyber-Insurance to protect you in a catastrophic failure situation. Cyber-Insurance is no different than car, fire, flood, or life insurance. It's there when you need it most.
All of these recommendations are built into CyberHoot the product or CyberHoot's vCISO Services. With CyberHoot you can govern, train, assess, and test your employees. Visit CyberHoot.com and sign up for our services today. At the very least, continue to learn by enrolling in our monthly Cybersecurity newsletters to stay on top of current cybersecurity updates.
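To make the encryption guidance above concrete, here is a minimal sketch of AES-256 in an authenticated mode (GCM) using Python's "cryptography" package. In a real deployment, the key would come from a key management system rather than being generated inline.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # an AES-256 key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # must be unique per encryption

plaintext = b"Example sensitive record"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# GCM authenticates as well as encrypts: decryption raises an exception
# if the ciphertext or nonce has been tampered with.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext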
To learn more about the Advanced Encryption Standard (AES), watch this short 2-minute video:
CyberHoot does have some other resources available for your use. Below are links to all of our resources; feel free to check them out whenever you like:
- Cybrary (Cyber Library)
- Infographics
- Press Releases
- Instructional Videos (HowTo) – very helpful for our SuperUsers!
Note: If you'd like to subscribe to our newsletter, visit any link above (besides infographics) and enter your email address on the right-hand side of the page, and click 'Send Me Newsletters'.
<urn:uuid:c0b3ffd7-3603-4a14-bf6f-3fb964196baa>
CC-MAIN-2022-40
https://cyberhoot.com/cybrary/aes-encryption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00442.warc.gz
en
0.917708
1,538
3.890625
4
Unlike earlier "feature length" articles, today's article is designed as a bit of a security primer. This is really a heads-up on some newer thoughts circulating in the security field, and how those thoughts apply to you as an individual. The topic at hand is that of passwords.
What is a password? A password is a token that allows someone to positively identify themselves. Combined with a username, you get Authentication: Alice is who she says she is, because she has something only Alice has. A password is only good as long as it's something only Alice knows. If Bob knows Alice's password, it really doesn't authenticate Alice anymore. Our goal in choosing a password is to make sure that only Alice will know it.
The way security professionals measure the strength of a password is by how random it is, or how much "entropy" the password has. Password length is great, but a password which is nothing but "1" typed 100 times isn't very random, and is very easy for password-cracking programs to penetrate.
In the old days, the goal of a password was to make it difficult for someone else to guess when you were typing, as well as difficult for someone to guess when you weren't around. While that goal still holds true today, one of the single largest goals of a password in today's age – the age of interconnected, high-powered, remotely accessible systems – is to prevent people with malicious intent and sophisticated software from gaining access to your profile – i.e., trying to make a system think they are you, when obviously they aren't.
The Entropy of the Matter
As I mentioned earlier, entropy is the key to determining how secure a password is. In simple terms, this is usually expressed as 2 raised to a certain power, i.e., how many "bits" it has. For instance, a random number between 0-15 would have 4 bits of entropy, a strength of 2^4. A single Roman letter, a-z (no case differences), has 4.7 bits of entropy if chosen randomly (log2(26) ≈ 4.7). These numbers are theoretical, though, as in reality few sequences of anything are truly random.
Traditional logic holds that if you have a 9-character password like monkeyboy, it is made more random by swapping out certain characters: 0 for o, 3 for e and so forth. However, there is nothing truly random about this: you have applied simple logic to create the password, and all someone who is trying to break your password needs to do is apply simple logic to break it. Therefore, you really have two options: make really random short passwords, or make "less random" longer passwords.
In an ideal world, we could give users 5-7 character passwords which were both completely random and totally easy to remember. In addition, we could give users new passwords every 2-4 weeks. However, this isn't a perfect world.
The second option is to go for length: 50+ characters of words are inherently more secure than 10 characters of words. Since most users, or most anyone for that matter, can't actually remember 50 characters of random "text", more and more security consultants and system administrators are turning to what are commonly known as "Passphrases".
A Phrase is a Phrase
Much like a "password" – which is a secret made up of a word – a Passphrase is a secret made up of several words, the more the merrier. A passphrase could be as simple as "to be or not to be, that is the question". This 40-character passphrase should be easy for most anyone to remember.
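To put some numbers behind the entropy discussion above, here is a short sketch of the arithmetic. Note that these figures assume every choice is uniformly random, which real human-chosen phrases are not – that is precisely why the practical strength of a passphrase is lower than the raw math suggests.

import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy of `length` independent, uniform random choices from a pool."""
    return length * math.log2(pool_size)

print(entropy_bits(26, 9))     # 9 random lowercase letters: ~42.3 bits
print(entropy_bits(95, 9))     # 9 random printable ASCII characters: ~59.1 bits
print(entropy_bits(10000, 6))  # 6 words from a 10,000-word vocabulary: ~79.7 bits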
The advantage of a passphrase is that the current generation of cracking programs is effectively useless against anything longer than 10 characters. This means that for the next 2-3 years, your data is basically safe – from a passphrase perspective.
The downside to passphrases is that while they are longer, their length doesn't guarantee security. For example, while there are roughly 250,000 words in the English language, most people's vocabulary is only about 50,000 words. When asked to think up a simple phrase, that dictionary drops to a mere 10,000 words. In a purely random sense, 10,000 is a fantastic pool to draw from. However, some words occur much more often than others, and there are certain language constructs which make guessing a sentence easier than guessing a single string of 50 random characters. So while security professionals are currently evangelizing passphrases, it is with the knowledge that as soon as there are programs which can incorporate the English dictionary, learn to parse sentences, and pick apart words instead of characters, we may be back to the proverbial drawing board.
However, even in the long term there are some simple steps that can be taken to make your passphrase more secure. By doing to passphrases what many people do to passwords – swapping out characters – the inherent security of the passphrase becomes much greater, largely because it makes the dictionary of the cracker practically useless. Another commonly held suggestion is to simply spell one or two words in your passphrase wrong, which will also make the passphrase far harder to guess. After all, you only need to fool a cracker on one word to keep your passphrase secure.
From the Experts
Obviously no security article would be complete without input from true experts. The biggest splash made on the subject of passphrases belongs to Jesper M. Johannson, Ph.D., ISSAP, CISSP, who recently finished one of the defining pieces on the subject. Another industry expert is Robert Hensing. His blog is full of useful information on the subject, and he encourages readers to email him with questions.
What does this all boil down to? To quote Jesper's conclusion to his fantastic 3-part series:
While no one can conclusively answer the question of whether pass phrases are stronger than passwords, math and the logic appear to show that a 5- or 6-word pass phrase is roughly as strong as a completely random 9-character password. Since most people are better able to remember a 6-word pass phrase than a totally random 9-character password, pass phrases seem to be better than passwords. In addition, by adding some substitutions and misspellings to a pass phrase, users can significantly strengthen it, which is not possible with a totally random 9-character password.
Contrary to what your grade school teacher taught you, there actually is value in misspelling things!
<urn:uuid:7b5aa129-dab6-4f2b-9a1c-d9f308834ddc>
CC-MAIN-2022-40
https://it-observer.com/why-passwords-dont-work.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00442.warc.gz
en
0.962984
1,393
3.609375
4
The inception of passwords in the 1960s changed the digital world as we know it. Passwords are now an unconscious standard practice in the lives of most, and from your first pet to the street you grew up on, they are deeply ingrained in our minds. The first passwords introduced the concept of authentication to cybersecurity, or the ability to prove one's identity through a passcode, PIN, security question, or other secret means of identification. But passwords are not secure, and never have been: almost immediately after passwords were invented, the first breach occurred. The history of passwords has been a strange, inconvenient journey, but one that has led us to much better authentication solutions.
Fernando Corbató first presented the idea of passwords at MIT in 1960, without any idea of the huge societal and security impact it would have. At the time, the Compatible Time-Sharing System (CTSS) had recently been developed and was available for research use, but lacked a way to secure private files by user. Enter the password. For years, the password was something only used in research and academic circles, without any major real-world applications, but as computers became more accessible, hackers attacking operating systems became more prolific, frequent, and targeted. When computers began to make their way into homes and offices, the true weakness of passwords was discovered. Even Beyond Identity founder Jim Clark recognized his role in making the password a commonplace form of authentication. But there is good news on the horizon: what was originally considered a pitfall of owning a device is now something we can fight back against with passwordless technology.
Since the early years of passwords, we have seen many transformations in digital identity and authentication, but some things, unfortunately, remain the same. In 2020, the Verizon DBIR reported that over 80% of data breaches involved the use of lost or stolen credentials, further proving that passwords are just as insecure as they were in the 1960s. But we are no longer relegated to the CTSS or insecure authentication methods, and that's where Beyond Identity comes in. Rather than trying to enhance password security or add additional factors or security questions, we eliminate the insecure factor altogether: passwords.
Check out an in-depth timeline of the history of passwords.
<urn:uuid:01f99121-6560-4bd6-b6bd-b42239afbf4d>
CC-MAIN-2022-40
https://www.beyondidentity.com/blog/history-and-future-passwords
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00642.warc.gz
en
0.967739
472
3.40625
3
Behavior Driven Development (BDD) is a method invented by Dan North that focuses on describing application behavior in a formalized notation using concrete examples. When it comes to implementing BDD in a real project, it has many advantages but also some pitfalls. I had the opportunity to gain experience with BDD in a couple of projects. In this blog, I will share my thoughts and learnings from one such project on how to approach applying behavior driven development in an agile software development team.
An overview of the project
I was a test manager in the project. Our goal was to modernize a web portal through service-oriented architecture. Independent services were grouped into modules to enable functionality that was used by the business processes. For developing and testing such services, we decided to use BDD methods. We did not implement BDD by the book – it was a selection of the core aspects Dan North describes that worked well for us as a team. The following learnings and explanations do not, by any means, describe a perfectly executed implementation of BDD. These are just my opinions, based on my experience in this project.
The rationale behind implementing BDD
To implement the required modules and services, we chose an iterative and agile approach with a 2-week delivery cycle. This meant we had a steady flow of stories within the team, where one story usually described the scope of one service to be developed. We used behavior driven development to:
- Clearly illustrate the application behavior by describing specific use cases
- Create a shared understanding of the requirements within the project team
- Cover the acceptance criteria with BDD scenarios
- Determine a basis for test automation
After a couple of sprints, we identified various useful practices and methods that worked in our project. Since a detailed description of our approach would be too much to share in a blog, I'd rather focus on some of the key aspects and the learnings. If you are interested in knowing more, feel free to contact us at Nagarro.
Ten eyes see more than two
One topic, many perspectives – that's what matters.
The suggested "Given, When, Then" syntax for behavior driven development helps the team to explicitly think about the behavior of a system or the desired outcome of a described functionality. During our story refinement sessions, where someone from every role (tester, developer, analyst, product owner, software architect, etc.) was present, we quickly realized that BDD enabled us to create a common understanding of the requirement. This also benefited us in strengthening confidence in the story's purpose and in understanding user requirements.
Test data availability
When describing the scenarios as a tester or a test manager, it is important to consider the test data you need for execution. If possible, try to include real and anonymized data from a test or production system. Later, this helps with implementing the scenarios as automated test cases and integrating them into the test environment. For example, a test that deals with test data for a certain shipping address like "Test street 7, Test City" might be easy to integrate on a stage that uses mockups but will never run on a stage that uses real data. Including the business or end-user in the process of creating or reviewing the BDD scenarios might uncover some edge cases or special data constellations that could be covered with tests too.
By defining the necessary test data, we often stumbled upon cases and scenarios that were not covered in the description of the story. Dealing with specific date/time formats for different countries is just one such example.
It is useful to tag the scenarios right away when creating them, so you know at first glance what type of scenario you are dealing with (e.g. @Regression, @Smoketest, @System-Integration, etc.). Doing so will enable you to find certain tests more easily and group them together to execute them in one run, especially if they are automated. For example, after deployment, all tests with the category "Smoketest" should pass before engaging in any further test activities.
Keep the prerequisites to a minimum
One of the most important learnings during the project was to keep the scenario outline (which describes the scope of the scenario) and the "Given" blocks as small as possible. Otherwise, dealing with and automating the scenarios quickly becomes very complex and cluttered. If the prerequisites of a scenario are getting out of hand, it is often better to split the scenario into smaller chunks, thereby reducing the prerequisites per scenario. Overly big or complex scenarios can also reflect on the story's quality and size. Usually, if the story description is not detailed or comprehensive enough, the scenarios tend to be inaccurate. A good rule of thumb is to split a story if it has more than 8-10 acceptance criteria.
Too many conditions? Too complex? The key is to focus on the behavior!
Manage the complexity
With highly integrated software (comprising a lot of connections to third-party systems), the scenarios naturally become complex. The external systems need to be up and running, and the test data gets manipulated by a lot of other systems before being tested or validated. Story-splitting can be a solution here, where we have only the minimal, necessary connections to an external system covered in one story.
Another learning was about the importance of focusing entirely on the behavior of the system under test. We do not need to describe the behavior of external systems in the scenarios – sometimes, our scenarios also went on to cover external system behavior, when all that was needed was a simple input or output from the system! Even if you try hard to manage the complexity, some stories are complex by nature – in that case, BDD helps in at least arriving at a common understanding as a team, by discussing the scenarios together.
Another thing about complexity is that it is a risk for test automation. If there are a lot of dependencies, it is difficult to run the tests reliably and in isolation. You can try to use mocks or service virtualization to mitigate that risk. But, at the same time, you might find data-related errors only at a later stage in testing, when the tests are run on a stage with integrated real services.
Keeping a consistent notation
While the notation or syntax of behavior driven development is quite clear, there are cases where you have multiple possibilities to describe certain constellations of your scenarios. For example, one scenario could look something like this:
Scenario: Edit Size in 'Choose Size' UI
  Given I am on the "Choose Size" tab in the UI
  And the Size is 0
  When I click the "Edit" Button
  And I click in the Input field
  And I enter the value "50"
  And I click the save Button
  Then the new Size should be 50
This description is very detailed and relies heavily on UI elements such as "button" or "input field".
It is also more of a description of the process than of the behavior. Keeping it like this would mean a lot of maintenance effort if some UI buttons change or if the functionality moves to another tab in the UI. The following can be a better approach:
Scenario Outline: User wants to edit Size
  Given the Size is <initial>
  When the User changes the Size to <newSize>
  Then the new Size should be <ResultSize>
In this case, it doesn't matter what the buttons are called or where the functionality lives – it's just about the behavior, illustrated with three example value sets supplied in an Examples table (a complete version appears at the end of this post). The behavior will be the same regardless of any implementation details that are described in the story. In the project, it took everyone a few sprints to think on a more abstract level that only focuses on behavior, as we constantly reviewed the scenarios and agreed upon a common notation.
Define the responsibilities
The last insight that I'd like to share is that BDD helps a lot in understanding the responsibilities of various roles in a software development team. In this project, we had a distributed team, which sometimes makes communication a little tricky. With BDD, you can derive tasks for different team members from the scenarios, thus making things clearer:
- the tester is responsible for providing test data
- the developer or automation engineer is responsible for implementing the scenario as code
- the product owner takes care of getting input on edge cases from the users
- the software architect ensures that any functional gaps discovered in the requirements are considered in a new service
Behavior driven development helped a lot in increasing the understanding between team members, especially when it came to having clarity about different responsibilities and discussing the scenarios among the different roles.
Starting out with BDD can be a bit daunting, but sticking to it really pays off, provided you always adapt and improve how you use it as a team. For me, the most prominent benefit that BDD has to offer is that it encourages and requires communication and coordination within the team, thus ensuring high quality right from the beginning of the project.
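For completeness, here is the abstract scenario above written out as a full Gherkin scenario outline; the three value rows are illustrative placeholders, not data from the original project.

Scenario Outline: User wants to edit Size
  Given the Size is <initial>
  When the User changes the Size to <newSize>
  Then the new Size should be <ResultSize>

  Examples:
    | initial | newSize | ResultSize |
    | 0       | 50      | 50         |
    | 50      | 0       | 0          |
    | 10      | 999     | 999        |

Tools such as Cucumber or SpecFlow run the outline once per row, which is what makes scenario outlines a convenient bridge from a shared understanding to automated tests.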
<urn:uuid:3a23cec8-c40a-4b21-8fd5-aca479a99fbf>
CC-MAIN-2022-40
https://www.nagarro.com/en/blog/bdd-behavior-driven-development-agile-project-experience
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00642.warc.gz
en
0.93856
1,985
2.59375
3
How can you identify prejudice? Sometimes you discover prejudice by listening to the words people use and watching the actions they take. A prejudiced person may speak or act in a way that shows explicit favoritism. This is the easiest and safest way to identify prejudice, but prejudices can also be revealed by the omission of words and the failure to act. There are many kinds of prejudice, but few which are put on open display because most people want to be seen to be fair. This article is about professional prejudice, and how it hurts revenue assurance professionals and so ultimately hurts the communications providers and customers they are trying to help. Suppose we watched a Hollywood television series set in a modern US city which only has characters that are white and male. The omission of black and female characters would lead many to infer some prejudice on behalf of the show’s makers, though we should still be wary of jumping to conclusions because there will be real-life locations and activities where the gender and racial mix does not represent society as a whole. I make this point to illustrate the professional prejudices of the GSMA, an association that says it “represents the interests of mobile network operators worldwide”. To be precise, they represent some interests but not others. They get away with this by simply ignoring subjects that do not suit them. This is not so different from a prejudiced white man choosing not to listen when a woman speaks, or changing the TV channel whenever a show features black actors. For many years I have suspected the GSMA has an irrational prejudice against the work done by revenue assurance professionals, but this was impossible to prove because I could only point to the omission of revenue assurance from the GSMA’s publications and activities. However, I think it can now be proven by looking at a specific document where the omission of revenue assurance is jarring to anyone who has not internalized the same prejudices as the GSMA. That document is the GSMA Mobile Money Programme: Mobile Money Policy and Regulatory Handbook, which was published in October of this year. Before I go any further, I want to be clear that I find the content included in this document to be excellent. My argument relates to the importance of what has been left out. I do not disagree with what has been written by the author, Rishi Raithatha, who was the Senior Advocacy Manager at the GSMA until recently. Nor do I feel the document exhibits any explicit racial or gender bias; the cover features a black woman, as perhaps representative of the most dramatic societal benefits delivered by mobile money. But I do believe the author has been influenced by a working environment that wants to talk about some risks whilst never acknowledging related risks of similar importance. Consider the content from the perspective of an objective and impartial risk manager, concerned that his or her business should perform its duties well and serve its customers appropriately. Some of the sections of this document are as follows: - Know your customer - Anti-money laundering - Privacy and data protection Revenue assurance is omitted from this document. All of the above-mentioned topics are important. Mobile money providers and regulators need to address them. But why should we care more about customers losing money because of an external fraudster or a denial of service attack than because the money was lost as a result of a screw-up with the company’s internal processes? 
How foolish would we be if we assumed there is a neat dividing line between internal fraud and innocent mistakes that have the same effect? And whilst customers will rightly be concerned about leaks involving their data, they will be even more concerned about leaks involving their money! I believe there is one clear difference between revenue assurance and the topics that were included in the GSMA’s mobile money guide. Businesses do not want to talk about assurance because they do not want to admit how often they make basic mistakes. Not wanting to talk about assurance does not mean they should neglect the need to talk about assurance. This document is written for regulators too. Sometimes mobile money is addressed by banking regulators, sometimes by communications regulators and sometimes by both, but no banking or comms regulator should assume transactions are always processed according to the customer’s instructions, or that customers are never charged for services they did not receive. This guide cannot be considered complete whilst omitting any mention of policies to tackle such fundamental risks. The failing does not belong with the author, who has since left the GSMA. Even the best author relies heavily on synthesizing information received from other sources. If nobody in the GSMA ever talks about revenue assurance then nobody in the GSMA is ever going to write about it either. The GSMA does not expose themselves to revenue assurance, so they do not talk about it, so they do not write about it. This is galling because there are very many revenue assurance managers who interact with the GSMA, through the GSMA’s Fraud and Security Group. It just happens to be the case that these revenue assurance managers are also fraud managers. The GSMA is effectively conditioning them to ignore revenue assurance and billing accuracy risks, and to concentrate their efforts elsewhere. The results of the new RAG RAFMCS Survey show that about half of all comms providers have a joint revenue assurance and fraud management function, whilst considerably fewer have common management for fraud and cybersecurity. However, the GSMA expects fraud managers in mobile operators to take an active interest in cybersecurity, and none in revenue assurance. This also reflects the GSMA’s prejudices about what should matter to risk professionals. Revenue assurance is not new. 20 years ago it was difficult to find telcos that had an RA function; now they all have them. The GSMA made a terrible error about a decade ago when it was proposed that their fraud forum might also discuss revenue assurance from time to time. They consciously decided to exclude revenue assurance. The story reported to me by a trusted source was that the usual corporate envy and enmity came into play on that occasion. If half of all telcos have joint RAFM departments, then the other half of telcos may sometimes possess fraud managers with their own selfish reasons for wanting to keep these related functions separate. A fraud manager who participates in GSMA events can enjoy some one-upmanship over a rival RA manager excluded from them. But this would be a petty and stupid reason not to address the risks covered by RA teams. I believe this is the reason why the GSMA remains ignorant of the importance of revenue assurance and so exhibits prejudice in the advice given to external parties too, including national regulators of services that have gained popularity since that terrible decision about revenue assurance. 
The GSMA's prejudices do not reflect the realities of how regulators and governments think either. London-based professionals working for the GSMA will rarely hear white Western governments talking about revenue assurance. That is why it is striking to hear African governments and African regulators casually and habitually referring to the importance of revenue assurance. Those same African countries and regulators have often been at the forefront of the mobile money revolution. The GSMA would benefit from occasionally pausing for breath and listening to the words actually used by the people they seek to influence. If the regulators of the biggest mobile money markets all treat revenue assurance as a priority, then how can the GSMA pretend to give them credible advice whilst never once acknowledging the need for revenue assurance?

I have seen many people progress their careers by leaving a revenue assurance job in a telco for the equivalent job in a payments processing business. Electronic payments is a growth industry and hence a source of new opportunities for ambitious assurance professionals with experience gained from telcos. If the payments industry needs revenue assurance, then why would the mobile money industry not need it too? Even more obviously, the business of mobile money is conducted by businesses that are also mobile operators. The GSMA is keen to address a valuable source of additional revenues for companies that are already GSMA members. What kind of twisted risk analysis would conclude that a mobile operator has such unreliable systems, processes and staff that it must employ a specialized second-line function like revenue assurance for its communications services, but that the same company need do nothing to check for errors in its mobile money transactions? It would be absurd to argue that these companies must do more to address fraud, AML and data protection without doing anything to ensure the underlying integrity of transaction processing. That is why the GSMA's prejudice is so pernicious. They are not explicitly arguing that revenue assurance is unimportant. They undermine revenue assurance more effectively by simply ignoring it.

If I could write just one page of advice to be added to this guide, then I would remind comms and banking regulators that they all have rules to protect customers from being overcharged. The GSMA guide has a section on consumer protection, but it is scanty. The author assumes the integrity of transaction processing instead of discussing the work that should be done to ensure it. The countries where mobile money is popular are also likely to be countries where the comms regulator uses independent test services to verify the accuracy of charges for comms services. Some of these regulators employ specialist auditors and monitoring services to check the reporting of revenues for communications and mobile money services. Is the GSMA simply unaware of this form of risk management, or of the reasons why these regulators are spending large amounts of money on verifying revenues? In countries that use less sophisticated techniques to verify billing accuracy, such as the UK, it is not hard to find examples of GSMA members who have been fined by the regulator for incorrectly charging customers. Would we feel confident that these Western mobile operators would adopt an infallible approach to mobile money if they chose to offer this service to customers in Western countries?
Many of these Western operators belong to international groups that provide mobile money services in developing economies. Banking regulators tend to use different methods, but they are just as interested in guaranteeing that a bank statement is an accurate reflection of the transactions that occurred. Mistakes do happen, even with the most basic banking services. I once moved money between two accounts in my own name, only to have to chase both banks after the money left one account but did not arrive in the other!

We should be especially conscious of risks associated with services that are new or growing rapidly. That is why the GSMA is right to issue advice relating to mobile money policies. However, an insular approach to research leaves them oblivious to their own blind spots. The risk universe for mobile money providers and their regulators is more extensive than the risks the GSMA is willing to discuss. Instead of learning the lesson of revenue assurance, which is that mistakes do happen, especially when novel automation is used to process billions of transactions, the GSMA appears oblivious to a salient aspect of the recent history of comms providers. That is why I say the GSMA's advice is bad: it is prejudiced against the vital work of revenue assurance professionals.
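For readers wondering what the omitted discipline actually involves: the heart of revenue assurance is reconciliation, checking that the charges and ledger entries produced by one system match the events recorded by another. The sketch below is deliberately simplified; the record formats and field names are invented for illustration, and a real RA control would cope with far messier data.

```python
# Toy reconciliation: compare a platform's transaction log against the
# ledger entries actually posted to customer wallets. Record formats
# and field names are hypothetical.

def reconcile(transaction_log, wallet_ledger):
    """Return transactions that are missing, duplicated, or mispriced."""
    ledger_by_id = {}
    for entry in wallet_ledger:
        ledger_by_id.setdefault(entry["txn_id"], []).append(entry)

    discrepancies = []
    for txn in transaction_log:
        posted = ledger_by_id.get(txn["txn_id"], [])
        if not posted:
            discrepancies.append(("missing_from_ledger", txn))
        elif len(posted) > 1:
            discrepancies.append(("posted_more_than_once", txn))
        elif posted[0]["amount"] != txn["amount"]:
            discrepancies.append(("amount_mismatch", txn, posted[0]))
    return discrepancies

log = [{"txn_id": "T1", "amount": 100}, {"txn_id": "T2", "amount": 50}]
ledger = [{"txn_id": "T1", "amount": 100}]  # T2 was never posted

for issue in reconcile(log, ledger):
    print(issue)  # ('missing_from_ledger', {'txn_id': 'T2', 'amount': 50})
```

Trivial as it looks, this is exactly the class of check whose absence lets customers be silently overcharged or short-changed at scale.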
Most of the recent changes don't make voting easier or harder. They make the process easier for the state and local officials who run elections.

There's been a good deal of crying foul about what are being called anti-democratic new state laws that make it harder to vote. But it turns out such laws might have little impact on voter turnout and vote margins in an election. That's according to a February 2022 analysis by Sabato's Crystal Ball, a newsletter that provides nonpartisan election analysis.

The 2020 presidential election had the highest voter turnout of the past century, with 66.8% of citizens 18 years and older voting in the election. Robust voter turnout could be the difference maker in the 2022 midterm election, even though turnout for midterm elections is typically lower than for presidential elections. All 435 seats in the House of Representatives and 35 of the 100 Senate seats will be up for grabs during the election. Democrats hold slim majorities in both chambers; if Republicans gain enough seats, they will flip control of one or both chambers in their favor.

As a political scientist, I study state voting and elections rules. Voting is important to a healthy democracy because it is how we consent to being governed and let elected officials know what policies we want. Voting laws also protect against voter fraud. Thanks to these laws, election fraud in the United States is very rare and typically has no impact on the election outcome. But sometimes these rules can create burdens for citizens who want to vote, and that can lead to citizens losing trust in their government, which is harmful to our democracy. It is too soon to say what full effect these new voting laws will have in shaping the 2022 elections.

New laws might not make voting easier or harder

Most of these recent laws don't make voting easier or harder. They make the process easier for the state and local officials who run elections. For example, a new law in Utah improves communication between the Social Security Administration and election officials to ensure that dead voters are removed from voter registration lists.

States making it harder to vote

Nineteen states, meanwhile, enacted 33 laws that can make it harder for Americans to vote, according to the Brennan Center for Justice. In Texas, for example, a 2021 law requires voters to provide part of their Social Security or driver's license number on their mail-in ballot request. The number must match the one voters used when they registered to vote, yet two million registered voters in Texas lacked one of the two numbers in their voter file. A large number of mail-in ballot requests for Texas' March 2022 primary were rejected because of this change. Texas has also limited the hours for early voting locations and banned the popular trend of drive-through voting.

In Georgia, voters requesting absentee ballots must now provide a photo ID when they request a mail ballot and when they return it. Georgia also joined Texas, Iowa and Kansas in passing a law forbidding county and state election officials from automatically sending mail-in or absentee ballot requests to registered voters.

In some cases, things are getting easier

Twenty-five states, meanwhile, have passed 62 laws since 2020 that could make voting easier. Delaware and Hawaii joined 20 other states that now automatically register citizens to vote when they turn 18.
Early research shows that automatic voter registration may modestly increase voter turnout. Some states made it easier for specific groups of voters. For example, in Maine, students can use their student photo ID to vote. In North Dakota, students can share a letter from a college or university to vote. Indiana now allows a document issued by a Native American tribe or band to serve as valid ID to vote.

Ten states – including California, Connecticut, Hawaii, Illinois and Kentucky – increased access to mail ballot drop boxes and locations in 2021. Hawaii, Illinois, Maryland, New Mexico, Nevada and Vermont passed bills that protect or ease voter access to polling places. Maryland's bill requires counties to offer a minimum number of early voting centers based on population. Vermont will now allow outdoor and drive-up voting. Starting with the 2022 primary election, all voting in Hawaii will be by mail. These changes give voters more options or simply make it easier to vote, and they may help increase turnout.

What will happen in 2022?

Many voting rights activists expect that turnout will decrease in states that made voting harder and increase in states that made it easier. The answer may not be that simple. Scholars don't agree about how voting rules affect voter turnout. Studies don't consistently show that individual voting laws lower voter turnout, but a state's overall collection of voting laws can have more sway during elections. Scholars call the combined effect of voting laws "the cost of voting." When the cost of voting grows higher, overall turnout decreases.

Voting laws are not the only influencers of voter turnout. But adding extra hurdles to voting may lead to frustration that keeps some voters at home. The upcoming midterm elections will provide clarity about whether these new voting laws have a measurable impact on voter turnout.
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding artificial intelligence.

In October, Amazon had to discontinue an artificial intelligence–powered recruiting tool after it discovered the system was biased against female applicants. In 2016, a ProPublica investigation revealed that a recidivism assessment tool that used machine learning was biased against black defendants. More recently, the US Department of Housing and Urban Development sued Facebook because its ad-serving algorithms enabled advertisers to discriminate based on characteristics like gender and race. And Google refrained from renewing its AI contract with the Department of Defense after employees raised ethical concerns.

Those are just a few of the many ethical controversies surrounding artificial intelligence algorithms in the past few years. There's a six-decade history behind AI research, but recent advances in machine learning and neural networks have pushed artificial intelligence into sensitive domains such as hiring, criminal justice and health care. In tandem with these advances, there's growing interest in establishing criteria and standards to weigh the robustness and trustworthiness of the AI algorithms that are helping or replacing humans in making important and critical decisions. With the field being nascent, there's little consensus over the definition of ethical and trustworthy AI, and the topic has become the focus of many organizations, tech companies and government institutions. In a recently published document titled "Ethics Guidelines for Trustworthy AI," the European Commission has laid out seven essential requirements for developing ethical and trustworthy artificial intelligence. While we still have a lot to learn as AI takes a more prominent role in our daily lives, the EC's guidelines, unpacked below, provide a useful roundup of the kinds of issues the AI industry faces today.

Human agency and oversight

"AI systems should both act as enablers to a democratic, flourishing and equitable society by supporting the user's agency and foster fundamental rights, and allow for human oversight," the EC document states. Human agency means that users should have a choice not to become subject to an automated decision "when this produces legal effects on users or similarly significantly affects them," according to the guidelines.

AI systems can invisibly threaten the autonomy of humans who interact with them by influencing their behavior. One of the best-known examples in this regard is Facebook's Cambridge Analytica scandal, in which a political consulting firm used the social media giant's advertising platform to send personalized content to millions of users with the aim of affecting their vote in the 2016 U.S. presidential elections. The challenge of this requirement is that we already interact with hundreds of AI systems every day: the content in our social media feeds, the trends we view on Twitter, the results we get when we Google a term, the videos we search for on YouTube, and more. The companies that run these systems provide very few controls over the AI algorithms. In some cases, such as Google's search engine, companies explicitly refrain from publishing the inner workings of their AI algorithms to prevent manipulation and gaming. Meanwhile, various studies have shown that search results can have a dramatic influence on the behavior of users.
Human oversight means that no AI system should be able to perform its functions without some level of control by humans. This means that humans should either be directly involved in the decision-making process or have the option to review and override decisions made by an AI model. In 2016, Facebook had to shut down the AI that ran its "Trending Topics" section because it pushed out false stories and obscene material. It then put humans back in the loop to review and validate the content the module was flagging as trending.

Technical robustness and safety

The EC experts state that AI systems must "reliably behave as intended while minimizing unintentional and unexpected harm, and preventing unacceptable harm" to humans and their environment. One of the greatest concerns with current artificial intelligence technologies is the threat of adversarial examples, which manipulate the behavior of AI systems by making small changes to their input data that are mostly invisible to humans. This happens mainly because AI algorithms work in ways that are fundamentally different from the human brain. Adversarial examples can happen by accident, such as an AI system that mistakes sand dunes for nudes. But they can also be weaponized into harmful adversarial attacks against critical AI systems. For instance, a malicious actor can change the coloring and appearance of a stop sign in a way that will go unnoticed by a human but will cause a self-driving car to ignore it, creating a safety threat. Adversarial attacks are especially a concern with deep learning, a popular branch of AI that develops its behavior by examining thousands or millions of examples. There have already been several efforts to build robust AI systems that are resilient to adversarial attacks. AutoZOOM, a method developed by researchers at the MIT-IBM Watson AI Lab, helps detect adversarial vulnerabilities in AI systems.

The EC document also recommends that AI systems should be able to fall back from machine learning to rule-based systems, or ask for a human to intervene. Since machine learning models are based on statistics, it should be clear how accurate a system is. "When occasional inaccurate predictions cannot be avoided, it is important that the system can indicate how likely these errors are," the EC's ethical guidelines state. This means that end users should know about the confidence level and the general reliability of the AI system they're using.

Privacy and data governance

"AI systems must guarantee privacy and data protection throughout a system's entire lifecycle. This includes the information initially provided by the user, as well as the information generated about the user over the course of their interaction with the system," according to the EC document.

Machine learning systems are data-hungry: the more quality data they have, the more accurate they become. That's why companies have a tendency to collect more and more data from their users. Companies like Facebook and Google have built economic empires by building and monetizing comprehensive digital profiles of their users. They use this data to train their AI models to provide personalized content and ads, and to keep users glued to their apps to maximize profit. But how responsible are these companies in maintaining the security and privacy of this data? Not very much. They're also not very explicit about the amount of data they collect and the ways they use it.
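A brief aside to make the robustness discussion above concrete: adversarial perturbations are easy to demonstrate on a toy model. Every value below is synthetic; real attacks such as the fast gradient sign method (FGSM) apply the same sign-of-gradient step to deep networks.

```python
import numpy as np

# Toy adversarial perturbation against a linear classifier. For a
# linear model the gradient of the score with respect to the input is
# the weight vector itself, so the most damaging small change moves
# each feature against sign(w).

rng = np.random.default_rng(0)
w = rng.normal(size=100)             # weights of the toy model
x = rng.normal(size=100)
x += w * (5.0 - w @ x) / (w @ w)     # shift x so the model scores it exactly +5

def predict(v):
    return "class A" if w @ v > 0 else "class B"

margin = w @ x                        # +5.0: confidently class A
eps = 1.2 * margin / np.abs(w).sum()  # per-feature budget just big enough to flip it
x_adv = x - eps * np.sign(w)

print(predict(x))                                 # class A
print(round(float(np.abs(x_adv - x).max()), 3))   # each feature moved only ~0.08
print(predict(x_adv))                             # class B: the decision flipped
```

The point of the sketch is the asymmetry: a change too small to matter to a human is precisely aligned with the model's weaknesses, which is why robustness has to be tested rather than assumed.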
Returning to privacy and data governance: in recent years, general awareness about privacy and new rules such as the European Union's General Data Protection Regulation (GDPR) and California's Consumer Privacy Act (CCPA) have been forcing organizations to be more transparent about their data collection and processing practices. In the past year, many companies have offered users the option to download their data or to ask the company to delete it from its servers. However, more needs to be done. Many companies share sensitive user information with their employees or third-party contractors to label data and train their AI algorithms. In many cases, users don't know that human operators review their information, and they falsely believe that only algorithms process their data. Very recently, Bloomberg revealed that thousands of Amazon employees across the world access the voice recordings of users of its Echo smart speakers to help improve the company's AI-powered digital assistant, Alexa. The idea does not sit well with users, who expect to enjoy privacy in their homes.

Transparency

The European Commission experts define AI transparency in three components: traceability, explainability and communication. AI systems based on machine learning and deep learning are highly complex. They develop their behavior based on correlations and patterns found in thousands or millions of training examples. Often, the creators of these algorithms don't know the logical steps behind the decisions their AI models make, which makes it very hard to find the reasons behind the errors these algorithms make. The EC specifically recommends that developers of AI systems document the development process and the data they use to train their algorithms, and explain their automated decisions in ways that are understandable to humans. Explainable AI has become the focus of several initiatives by the private and public sector, including a widespread effort by the Defense Advanced Research Projects Agency (DARPA) to create AI models that are open to investigation and methods that can explain AI decisions.

Another important point raised in the EC document is communication. "AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system," the document reads. Last year, Google introduced Duplex, an AI service that could place calls on behalf of users and make restaurant and salon reservations. Controversy ensued because the assistant refrained from presenting itself as an AI agent and duped its interlocutors into thinking they were speaking to a real human. The company later updated the service to present itself as Google Assistant.

Diversity, non-discrimination and fairness

Algorithmic bias is one of the well-known controversies of contemporary AI technology. For a long time, we believed that AI would not make subjective decisions based on bias. But machine learning algorithms develop their behavior from their training data, and they reflect and amplify any bias contained in those data sets. There have been numerous examples of algorithmic bias rearing its ugly head, such as the examples listed at the beginning of this article. Other cases include a study that showed popular AI-based facial analysis services being more accurate on men with light skin and making more errors on women with dark skin. To prevent unfair bias against certain groups, the EC's guidelines recommend that AI developers make sure their AI systems' data sets are inclusive.
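How would such hidden bias be detected in the first place? One common probe checks what a trained word-embedding model associates with gendered words. The sketch below assumes the gensim package, whose downloader fetches a small pretrained GloVe model on first use; the word lists are illustrative, not a validated bias benchmark.

```python
# Probing a pretrained word embedding for gender associations.
import gensim.downloader

vectors = gensim.downloader.load("glove-wiki-gigaword-50")

def gender_lean(word):
    """Positive = sits closer to 'he'; negative = closer to 'she'."""
    return vectors.similarity(word, "he") - vectors.similarity(word, "she")

for word in ["programmer", "engineer", "nurse", "homemaker"]:
    print(f"{word:12s} {gender_lean(word):+.3f}")
```

Occupation words typically do not score neutrally on probes like this, which is the footprint of the training text rather than anything the model was told about gender.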
The problem with that recommendation is that AI models often train on data that is publicly available, and this data often contains hidden biases that already exist in society. For instance, a group of researchers at Boston University discovered that word embedding algorithms (AI models used in tasks such as machine translation and online text search) trained on online articles had developed hidden biases, such as associating programming with men and homemaking with women. Likewise, if a company trains its AI-based hiring tools on the profiles of its current employees, it might unintentionally push its AI toward replicating the hidden biases and preferences of its current recruiters. To counter hidden biases, the EC recommends that companies developing AI systems hire people from diverse backgrounds, cultures and disciplines.

One consideration to note, however, is that fairness and discrimination often depend on the domain. For instance, in hiring, organizations must make sure that their AI systems don't base decisions on attributes such as gender and ethnicity. But in another field like health care, parameters like gender and ethnicity must be factored in when diagnosing patients.

Societal and environmental well-being

"[The] broader society, other sentient beings and the environment should be also considered as stakeholders throughout the AI system's life cycle," the EC's guidelines state. The social aspect of AI has been deeply studied. A notable example is social media companies, which use AI to study the behavior of their users and provide them with personalized content. This makes social media applications addictive and profitable, but it also has a negative impact on users, making them less social, less happy and less tolerant of opposing views and opinions. Some companies have started to acknowledge this and correct the situation. In 2018, Facebook declared that it would make changes to its News Feed algorithm to show users more posts from friends and family and fewer from brands and publishers. The move was aimed at making the experience more social.

The environmental impact of AI is less discussed, but it is equally important. Training and running AI systems in the cloud consumes a lot of electricity and leaves a huge carbon footprint. This is a problem that will grow worse as more and more companies use AI algorithms in their applications. One solution is to use lightweight edge AI that requires very little power and can run on renewable energy. Another is to use AI itself to help improve the environment; for instance, machine learning algorithms can help manage traffic and public transport to reduce congestion and carbon emissions.

Accountability

Finally, the EC calls for mechanisms "to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use." Basically, this means there should be legal safeguards to make sure companies keep their AI systems conformant with ethical principles. U.S. lawmakers recently introduced the Algorithmic Accountability Act which, if passed, will require companies to have their AI algorithms evaluated by the Federal Trade Commission for known problems such as algorithmic bias as well as privacy and security concerns. Other countries, including the UK, France and Australia, have passed similar legislation to hold tech companies to account for the behavior of their AI models. In most cases, ethical guidelines are not in line with the business model and interests of tech companies.
That’s why there should be oversight and accountability. “When unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress. Knowing that redress is possible when things go wrong is key to ensure trust,” the EC document states.
Benefits Of Blockchain Technology In The Education Industry

Blockchain technology will contribute to the development of more efficient online learning. Colleges can develop user-friendly educational platforms and initiatives that link students and teachers.

Fremont, CA: Blockchain is the digital ledger that powers bitcoin and other cryptocurrencies, and it is without question the most relevant technology behind them. Thanks to the safety and integrity it offers, blockchain technology is also making its way into the education field, where there are several advantages to implementing it today. Let's look at the benefits of blockchains used in education.

- Helps in the verification and accreditation of student records

Blockchain technology is transforming the storage of certificates and student credentials in educational institutions. With blockchain technology, there is no need for a middleman to certify degrees, certificates, diplomas, and other academic papers.

- Reduces the number of cases of educational fraud

Education is among the sectors most affected by fraud and cybercrime. Given the chance, hackers will alter or erase data in educational systems, particularly to award forged certifications to clients such as politicians. Users can avoid this academic fraud with blockchain technology, which guarantees a consistent and transparent ledger for all academic qualifications. It is very difficult to alter student information once a college has recorded it on the online ledger, because influencing the network would require the authorization of the other network users.

- Decentralizing online learning

Educational institutions may use blockchain technology to enable decentralized online learning, allowing students and teachers to share knowledge in real time. When blockchain technology decentralizes online learning, no single institution decides which courses to publish or how much to charge for each online course.

- Improving learning platforms

Blockchain can also make online learning itself more efficient. Colleges may develop user-friendly educational platforms and initiatives that link students and teachers, and schools can use such platforms to expand access to study materials or exchange them. Users acquire internal tokens to receive feedback from online backup teachers; they can also use other necessary services and download instructional resources.
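Returning to the first benefit listed above, here is a minimal sketch of how tamper-evident credential records work. It illustrates only the hashing principle; a real blockchain adds distribution, consensus and digital signatures on top, and the record fields below are invented.

```python
import hashlib, json

# Each record is hashed together with the previous record's hash, so
# altering any stored certificate breaks every hash that follows it.

def record_hash(record, prev_hash):
    payload = json.dumps(record, sort_keys=True).encode() + prev_hash
    return hashlib.sha256(payload).hexdigest().encode()

chain, prev = [], b"genesis"
for cert in [
    {"student": "A. Student", "degree": "BSc Physics", "year": 2021},
    {"student": "B. Learner", "degree": "MBA", "year": 2022},
]:
    prev = record_hash(cert, prev)
    chain.append((cert, prev))

# A verifier recomputes the chain; any edit to an earlier certificate
# changes all subsequent hashes and is detected.
prev = b"genesis"
for cert, stored in chain:
    prev = record_hash(cert, prev)
    assert prev == stored, "tampering detected"
print("chain verified")
```

This is why forging a single diploma on such a ledger is not a quiet edit: it invalidates every later record unless the forger can also get the rest of the network to agree.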
Last week we took the eagle's eye view of the principles behind SELinux. Today we'll dig a bit more deeply into SELinux policies, and then fire up Fedora 8 and see what SELinux looks like in practice. I recommend using the latest Fedora version as a SELinux training tool, because Fedora has the most mature implementation and userspace tools. Red Hat Enterprise Linux and CentOS, the leading Red Hat clone, have similar SELinux setups to Fedora. Gentoo also has a nice SELinux implementation.

I don't recommend starting from scratch. Start with a working setup, and then plan to spend considerable time learning your way around it, because it is a big, complex beast. It's not that SELinux itself is so complex; it's the scale of it. SELinux wants to touch every file and process on your system. Fedora, RHEL, and Gentoo come with prefab policies, and this is a good thing, because writing SELinux policies is a large undertaking. To write a good policy you need a thorough understanding of what your application does, and how it interacts with everything else on a system. Dan Walsh, the lead SELinux developer for Red Hat, cheerily claims that "customizing your system's protection by creating new policy modules is easier than ever!" Which is very much a relative statement, akin to "Thanks to modern high-tech gear, visiting the Moon is easier than ever!" But it's not impossible, and Mr. Walsh is encouraging and helpful, having written reams of SELinux documentation. For now we're going to make sure we understand SELinux fundamentals, and take a look at the nice Fedora tools for managing SELinux.

Policies: The SELinux Master Control Center

SELinux uses policies to enforce mandatory access controls (MAC), which, you'll recall from part 1, foil zero-day attacks and privilege escalation, so let's see what goes into making a policy. SELinux calls users, processes, and programs subjects. Objects are files, devices, sockets, ports, and sometimes other processes. Subjects can be thought of as processes, and objects are the targets of a process operation.

SELinux uses a kind of role-based access control (RBAC) combined with type enforcement. Type enforcement enforces policy rules based on the types of processes and objects, which it tracks in a giant table. Types and domains are the same thing; you'll see both terms a lot. Type enforcement means every subject on the system (that's right, all of them) has to have a type assigned to it. Types are stored in security contexts in the extended attributes (xattrs) of the files. This means they are stored in the inodes, which means that no matter how many weirdo soft or hard links are attached to your file, the security context is inescapable, and will not be fooled by silly evasions such as renaming the files or creating crafty softlinks.

Types are included in the security context. A security context has three elements, identity, role, and type identifiers, in the form identity:role:type. You can see these with the Z option to the ls command:

```
$ ls -alZ /bin/ping
-rwsr-xr-x root root system_u:object_r:ping_exec_t:s0 /bin/ping
```

What do these things mean? system_u is a system user. Files on disk do not have roles, so they are always object_r. ping_exec_t is the type for the ping command; you will also see documentation that calls this the domain. The security context is used by your SELinux policy to control who can do what. The identity controls which domains the process is allowed to enter. This is defined somewhere inside the vast directory, /etc/selinux, that contains your SELinux policy.
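As a quick cross-check of the claim that security contexts live in a file's extended attributes, here is a small Python sketch. It is Linux-only, the path is just an example, and it fails gracefully on systems without SELinux labels:

```python
import os

# The security context is the value of the "security.selinux" xattr,
# stored with the inode, which is why links and renames cannot shake it.
try:
    raw = os.getxattr("/bin/ping", "security.selinux")
    print(raw.rstrip(b"\x00").decode())  # e.g. system_u:object_r:ping_exec_t:s0
except OSError as err:
    print("no SELinux label available:", err)
```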
In the targeted SELinux policy, every subject and object runs in the unconfined_t domain, which is just like running under our old familiar Unix DAC (Discretionary Access Control) permissions, except for a select set of daemons that are restricted by SELinux policy and run in their own restricted domains. For example, httpd runs in the httpd_t domain, and is tightly restricted so that a successful intrusion will be confined to the HTTP process, and not gain access to the rest of the system. Nor will users or processes who have no business with httpd be allowed to interfere with its operation, or access data files they have no business looking at. The ps command will show you some examples of this in action:

```
$ ps aZ
LABEL                                         PID TTY  STAT TIME COMMAND
system_u:system_r:getty_t:s0                 2587 tty1 Ss+  0:00 /sbin/mingetty tty1
system_u:system_r:xdm_xserver_t:s0:c0.c1023  2664 tty7 Ss+  7:38 /usr/bin/X
```

What does the s0 mean? Well now, that opens a whole new can o' terminology. That field belongs to Multilevel Security (MLS); it sets a sensitivity value that ranges from s0-s15. When you use MLS you also need a capabilities field, which goes from c0-c255, so it would look something like s1:c2. MLS is super-strict and overkill for most of us, so instead Fedora uses Multi-Category Security (MCS). The MLS sensitivity field is required by the kernel and it always says s0, but you can ignore it. MCS allows you to further refine access controls with user-defined categories. For example, you could have an MCS category called "super-secret!_yes_really!". Files labeled with this will then be accessible only to processes with permission to enter this category. In the ps output above, you'll see an example of this with the X process. If you want to try your hand at these, read A Brief Introduction to Multi-Category Security (MCS), and Getting Started with Multi-Category Security (MCS).

While most files can be controlled by SELinux without any modifications, a few have had to be patched to become SELinux-aware, such as the Linux coreutils files; login programs like login, sshd, gdm, and cron; and the X Window System. You should also find these on systems that do not ship with SELinux, such as Ubuntu. If your system does not have SELinux, they will return empty fields where the SELinux labels should go, like this ps example:

```
$ ps aZ
LABEL  PID TTY  STAT TIME COMMAND
-     4248 tty4 Ss+  0:00 /sbin/getty 38400 tty4
-     4249 tty5 Ss+  0:00 /sbin/getty 38400 tty5
```

The nice SELinux devs have kindly made Z the universal "show me the security context" option. SELinux comes with its own set of userspace commands, which are bundled up in the policycoreutils package. You can run a number of SELinux commands without hurting anything, like seeing your own personal security context:

```
$ id -Z
system_u:system_r:unconfined_t:s0
```

You can check SELinux status with sestatus:

```
$ sestatus
SELinux status:           enabled
SELinux mount:            /selinux
Current mode:             permissive
Mode from config file:    permissive
Policy version:           21
Policy from config file:  targeted
```

avcstat displays AVC (Access Vector Cache) statistics. avcstat 5 runs it every five seconds; of course you can make this any interval you want.

Fedora's SELinux Tools

Fedora 7 and 8 have three good graphical SELinux tools: SELinux Management, SELinux Policy Generation Tool, and SELinux Troubleshooter. Start with SELinux Management; this lets you fine-tune the existing SELinux policy, or change to a different policy type entirely. It costs nothing but time and a spare PC to learn your way around this potent security tool.
I've seen a lot of comments on forums and mailing lists that say it's too complex to bother with. I don't agree with this; I think a security tool of this nature is overdue for Linux. Any Internet-facing server is a good candidate for SELinux, and especially the notoriously porous category of LAMP servers. What about AppArmor and GRSecurity? We'll soon be looking at these as well.

- Dan Walsh's LiveJournal contains reams of SELinux howtos, which is good because he is the lead Red Hat SELinux developer. Yes, it's all his fault!
- A step-by-step guide to building a new SELinux policy module
- SELinux FAQs
- SELinux Commands
- Targeted Policy Overview
Thanks to the continued growth of IoT, cloud computing has great potential to continue to drive technological advancements. Cloud computing, born in 2007, has aided technological revolutions through 14 years of development. You may have noticed that cloud computing has expanded its functions in recent years beyond simple storage services such as iCloud and Google Drive. These functions include IaaS, PaaS, and SaaS. So what are IaaS, PaaS, and SaaS, and how do they play an important role in cloud computing?

First of all, let us look at the definition of cloud computing. The 'cloud' refers to a shared pool of configurable computing resources. It plays a vital role in integrating computing resources and realizing automatic management through online platforms. This means that users of cloud computing can reduce labor costs and, at the same time, achieve efficient resource utilization.

Cloud computing means even more in commercial activities. Like all other commercial resources, computing resources have become purchasable, with flexible liquidity, through resource pooling. Their low prices also make them a top option for software developers and engineers. There are three layers of cloud computing: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). I will introduce them more specifically in the following context.

Layers of Cloud Computing

To illustrate the concept of the three layers of cloud computing, let us begin with an example introduced by Albert Barron, an executive software client-architect at IBM. Suppose you are a caterer who plans to start a pizza business and wants to make handmade pizzas from start to finish entirely on your own. But the complicated preparation work may make you feel stressed, so you have decided to outsource part of the work to reduce your workload. Now, you have been offered three plans:

1. The outsourcers provide you with resources including the kitchen, oven, gas, etc. You can use these infrastructures to make pizzas yourself.
2. Besides the infrastructure, the outsourcers also provide you with pizza crusts. All you have to do is sprinkle your ingredients on the crust and let the outsourcer bake it for you. In other words, once you have specified your needs, the platform will help you realize them.
3. The outsourcer has already prepared pizzas for you without your participation. You can package them and print your logo. All you have to do now is sell them.

If we map pizza production onto system processes, we can easily see the differences between IaaS, PaaS, and SaaS: the workload left to the user decreases as the service level rises.

IaaS > PaaS > SaaS

Simply put, IaaS is the bottom layer of cloud services and mainly provides essential resources; examples include Amazon EC2, Microsoft Azure, and Rackspace. Apart from being unable to change the infrastructure itself, users can install any operating system or other software on it at will. However, installation and use are relatively complicated, with high maintenance costs, because users must control the lower layers themselves to put the infrastructure to work.

PaaS provides a runtime, simplifying hardware and operating system details and scaling seamlessly. Developers only need to focus on their business logic instead of the lower-layer logic. Platforms including Google App Engine and AWS Elastic Beanstalk show this feature very well.
Generally speaking, with PaaS the provider keeps the cloud-built operating software updated for its users; users only need to install the software they need on the platform provided.

SaaS means leaving the development, management, and deployment process to the outsourcers, releasing users from worries about technological matters. All the resources provided are ready to be used at any time. The internet services that ordinary users encounter are almost all SaaS, such as Facebook, Twitter, and Instagram. Their advantage is that resource utilization can be highly optimized: because applications and the operating system have already been deployed in the cloud, users can log in directly without any other setup.

All in all, what IaaS, PaaS, and SaaS can do is make our work and life more convenient, and the charm of technological progress also lies here. In the world of cloud computing, what can be shared is both information and technology. Even without maintenance staff who specialize in cloud computing, the multiple-choice service platforms of cloud computing let you use its full functions easily. This advanced technology can help reduce the work of digital transformations.
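For a concrete sense of what the IaaS layer means to a developer, here is a minimal sketch using the boto3 library for Amazon EC2, one of the IaaS examples above. It assumes boto3 is installed and AWS credentials are configured; the AMI ID is a placeholder, not a real image.

```python
import boto3

# IaaS in practice: raw compute is requested through an API, and
# everything above the operating system remains your responsibility.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t2.micro",          # the "kitchen" you are renting
    MinCount=1,
    MaxCount=1,
)
print("launched:", instances[0].id)

# With PaaS you would instead push code to a platform's deploy command,
# and with SaaS there would be nothing to provision at all.
```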
Ethernet II Frame: In preparation for your CCNA exam, we want to make sure we cover the various concepts that you could see on your Cisco CCNA exam. So to assist you, below we will discuss the CCNA concept of Ethernet Technologies. Even though it may be a difficult and confusing concept at first, keep at it, as this is the first step in obtaining your Cisco certification!

- Preamble – Synchronization. It gives components in the network time to detect the presence of a signal and read the signal before the frame data arrives.
- Start of Frame (SOF) – Start of Frame sequence.
- Destination and Source Addresses – Physical or MAC addresses. The source address is always a unicast address; the destination address can be unicast, multicast, or broadcast.
- Length – Indicates the number of bytes of data that follow this field.
- Type – Specifies the upper layer protocol to receive the data.
- Data – User or application data. Ethernet II expects a minimum of 46 bytes of data. If the 802.3 frame does not have a minimum of 64 bytes, padding bytes are added to make 64.
- Frame Check Sequence (FCS) – A CRC value used to check for damaged frames. This value is recalculated at the destination network adapter. If the value is different from what was transmitted, the receiving network adapter assumes that an error occurred during transmission and discards the frame.

EIA/TIA Horizontal Cabling (using CAT5 cabling in an Ethernet network): 3 meters – 90 meters – 6 meters

Collision Domains – A collision domain is defined as a network segment that shares bandwidth with all other devices on the same network segment. When two hosts on the same network segment transmit at the same time, the resulting digital signals will fragment, or collide, hence the term collision domain. It's important to know that a collision domain is found only in an Ethernet half-duplex network.

Broadcast Domain – A broadcast domain is defined as all devices on a network segment that hear broadcasts sent on that segment.

All devices plugged into a hub are in the same collision domain and the same broadcast domain. All devices plugged into a switch are in separate collision domains but the same broadcast domain, although you can buy special hardware to break up broadcast domains in a switch, or use a switch capable of creating VLANs. VLANs break up broadcast domains. Hubs and repeaters extend collision and broadcast domains. Switches, bridges and routers break up collision domains. Routers (and switches using VLANs) break up broadcast domains.

I hope you found this article to be of use and that it helps you prepare for your Cisco CCNA certification. Achieving your CCNA certification is much more than just memorizing Cisco exam material; it is having the real-world knowledge to configure your Cisco equipment and to methodically troubleshoot Cisco issues. So I encourage you to continue in your studies for your CCNA exam certification.
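To tie the FCS field above to something you can run: Ethernet's frame check is a CRC-32, and Python's zlib computes a CRC-32 with the same IEEE 802.3 polynomial, so the calculation can be sketched in a few lines. The addresses and payload below are made-up example values, and this ignores on-the-wire details such as the byte ordering of the transmitted FCS.

```python
import zlib

# The FCS is computed over the frame from the destination address
# through the data field; the preamble and SOF are excluded.
frame = bytes.fromhex(
    "ffffffffffff"   # destination MAC (broadcast)
    "005056c00008"   # source MAC (example value)
    "0800"           # EtherType: IPv4
) + b"hello, world" + bytes(34)  # 12-byte payload padded to the 46-byte minimum

fcs = zlib.crc32(frame) & 0xFFFFFFFF
print(f"FCS: 0x{fcs:08x}")

# 60 bytes of frame plus the 4-byte FCS gives the 64-byte minimum frame.
# The receiver recomputes the CRC over the same bytes; a mismatch with
# the transmitted FCS means the frame was damaged and is discarded.
```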
Federal Communications Commission (FCC)

The Federal Communications Commission (FCC) is an independent agency of the United States government created by statute (47 U.S.C. § 151 and 47 U.S.C. § 154) to regulate interstate communications by radio, television, wire, satellite, and cable.

How is Bandwidth Involved with the FCC?

In order to operate as a successful VoIP provider, Bandwidth must comply with all FCC rules and regulations applicable to its service offerings, including obligations to file a wide variety of FCC-required reports (such as service outage reports, rural call completion statistics, CPNI certification, etc.).

What are the Benefits of Bandwidth's Involvement with the FCC?

Bandwidth believes that "with great power comes great responsibility," which is why we are driven to comply with all applicable rules and regulations established by the FCC. This is not only important to us as a communications software company, but also proves important to our customers. Bandwidth plays an important role by providing communications services, which is why we are focused on upholding the integrity of all of our services, whether an end user is sending a message or dialing 911. Bandwidth focuses on mastering communication services so your business can focus on succeeding in what matters most to you.
I would wager that many of you poring over this blog post have not stumbled upon PimEyes. I was in the same boat until I started exploring facial recognition technology online. So, to unravel it: PimEyes uses a facial recognition tool to search images. When you upload your snap to this site, you discover hundreds of images with matching facial features, and the pictures meet your eye in no time, in less than two seconds. This simple online experiment helps you come to grips with the power of facial recognition technology, a touchless disruptor.

Yes, there could be cons. Like any other disruptive technology, facial recognition can be abused. The big concern, of course, is the breach of privacy. Facial recognition technology can be misused to identify or misidentify people randomly. Some even view it as a creepy technology that can be misused for stalking. This explains why a handful of US laws curb the use of facial recognition systems. Then, this technology has its own limits. To illustrate, facial recognition systems have proven to be less accurate when police used them to track down suspects with darker skin.

But looking at the way this technology is evolving, the boons outweigh the banes. In 2019, the global market for automated facial recognition stood at $3.2 billion. This market is tipped to zoom over 6X to reach $12.92 billion in 2027. Moreover, the latest algorithms are making facial recognition systems more accurate.

Why Facial Recognition Technology, and How Does It Work?

Facial recognition is the process of identifying a person based on his or her face. It captures, analyses, and compares patterns based on a person's facial details. The snap thus captured is matched with an image already stashed in a digital database, and this match takes no more than a couple of seconds. (A short code illustration of this matching step appears at the end of this article.) Many aspects of facial geometry are factored in for identification, like the spacing between the eyes, the bridge of the nose, the contour of the lips and ears, and the gap between forehead and chin.

Today, facial recognition is considered the most natural of all biometric mapping tools. But why opt for facial recognition when we are already capturing iris scans and fingerprints for biometrics? Because in facial recognition there is no physical contact with the end user, and it is easy to deploy and use. So...

The Big Boys Are Already Using It

Facebook introduced its DeepFace program in 2014, which can determine whether two faces belong to the same person with an accuracy rate of 97.25 per cent. Google went one up on automated face analysis with the launch of FaceNet in 2015. On the popular dataset Labeled Faces in the Wild (LFW), FaceNet achieved a new record accuracy of 99.63 per cent. Amazon has also started a cloud-based face recognition service called Rekognition, designed for law enforcement agencies. The system can recognize 100 people in a single image and retrieve their faces from databases holding millions of entries.

Where's the Uptake of Facial Recognition Growing?

- Security & Law Enforcement: It is a promising technology for police and law enforcement agencies for detecting, preventing and combating crime and acts of terror. Facial recognition can be handy when checking identity documents, and face mapping can be employed for border checks and police checks. You can bet on facial recognition based CCTV systems for public security missions like finding missing children and disoriented adults, identifying exploited children, tracking criminals, and supporting and accelerating investigations.
- Healthcare: Face analysis powered by deep learning makes it possible to track a patient's use of medication more accurately, detect genetic diseases with a success rate of 96 per cent, and support pain management procedures. A highly relevant use of facial analysis technology could be in the ongoing Covid vaccination drive, wherein beneficiary biometrics and demographics could be managed better.
- Banking & Retail: The least expected yet among the most promising, facial recognition could be used by banks for completing KYC online. The banking sector has already tapped this technology, integrating it with ATMs and using it to check unauthorized entry to premises. In retail, facial recognition can be useful for detecting shoplifting and mapping the needs of visiting customers.
- Managing Visitors to Government Offices: With automated facial recognition systems, governments can oversee and streamline the entry and exit of visitors to key offices and avert entry to high-security and confidential zones. Available both as a portal and as a mobile app, the technology can help government authorities cut down drastically on the time taken to identify and authenticate visitors, and improve transparency and efficiency. For example, technology-powered facial analysis can verify a face in two to three seconds, which done manually could have devoured 30 minutes or even more. Such systems are capable of generating detailed reports on visitor analytics, capturing data like approved and rejected requests and the count and frequency of visitors, thus helping governments make prudent decisions. It also makes the government interface easy for registered visitors, as they can apply and check the status of their requests, as well as any grievances lodged, online.

Going ahead, facial recognition technology is slated to generate massive revenues. Its widespread adoption will be in surveillance and law enforcement, both functions anchored by governments. Driven by biometrics and embracing emerging technologies like Artificial Intelligence (AI) and Deep Learning, facial recognition promises to fast-track digital transformation efforts worldwide. In it, you have technology mapping your face to change the face of the world around you.

This article first published on CSM Blog: Know How Facial Recognition Technology Is a Quiet Disruptor.

About The Author

Jayajit Dash works as Research Analyst with CSM Technologies. His forte is public policy oversight and market analytics. Jayajit is a compulsive blogger, producing content that blends technology with policy narrative.
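As promised in the "How Does It Work" section, here is what the matching step looks like with the open-source face_recognition library. This is only a sketch: the file names are placeholders, dlib and face_recognition must be installed, and each image is assumed to contain exactly one face.

```python
import face_recognition

known_image = face_recognition.load_image_file("enrolled_person.jpg")
probe_image = face_recognition.load_image_file("camera_snapshot.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Distance in embedding space stands in for the "facial geometry"
# features described above; a tolerance threshold turns the distance
# into a match / no-match decision.
match = face_recognition.compare_faces(
    [known_encoding], probe_encoding, tolerance=0.6
)[0]
distance = face_recognition.face_distance([known_encoding], probe_encoding)[0]
print(f"match: {match}, distance: {distance:.3f}")
```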
What is a vulnerability assessment (and why it isn't enough)

It's obvious to say, but the world has changed a lot in the last two decades, largely thanks to the way the digital world has taken over so much of our day-to-day lives and the way we work. It is therefore important to realise that, when we come to speak about cybersecurity and cyber threats, vulnerability assessments are no longer enough to keep our organisations safe. Why? Because the more technology advances, the more vast and complex these threats have become. Whilst we need to continue to do vulnerability assessments - after all, they are important - we need to do more as well. In this article we are going to take a closer look at what a vulnerability assessment is, why assessments are important, the different types of vulnerability assessments, and why they are not enough.

What is a Vulnerability Assessment?

A vulnerability assessment is the process of identifying, classifying, and prioritising security vulnerabilities in IT infrastructure. Vulnerability assessments are designed to evaluate whether an IT system is exposed to known vulnerabilities, assign severity levels to each one, and recommend remediation or mitigation steps where required. A vulnerability assessment is a common security procedure and provides a detailed view of the security risks an organisation may face, enabling it to better protect its information technology and sensitive data from cyber threats. But a vulnerability assessment is only one component of an organisation's overall cybersecurity strategy; there are other things an organisation should and must do.

Why Vulnerability Assessment is Important

We understand that a vulnerability assessment is still needed in today's fast-paced and fast-moving cyber society, but why is it important if it is only part of your cybersecurity strategy? The short answer is that a vulnerability assessment helps to reveal the security weaknesses in your environment and provides direction on how to remediate or mitigate the issues before they can be exploited. Of course, there are other reasons why you should carry out a vulnerability assessment, and these include:

- It creates an inventory of all assets and risks: what is more or less at risk, and what needs addressing.
- It establishes a basis for risk/benefit evaluation: what you need to do as an organisation to assess the risks and how best to respond to them.

Types of Vulnerability Assessment

There are different types of vulnerability assessments, designed to discover different types of system or network vulnerabilities, and these include:

- Network-based assessment: identifies possible network security issues and detects vulnerable systems on wired and wireless networks (a toy illustration of this category appears at the end of this article).
- Host-based assessment: assesses vulnerabilities in servers, workstations, and other network hosts. Examining open ports and services offers visibility into the configuration settings and patch management of scanned systems.
- Wireless network assessment: assesses Wi-Fi networks and wireless network infrastructure. It can validate that your company's network is securely configured to prevent unauthorised access and can also identify rogue access points.
- Application assessment: identifies security vulnerabilities in web applications and their source code, using automated vulnerability scanning tools on the front end or static/dynamic analysis of the source code.
- Database assessment: assesses databases or big data systems for vulnerabilities and misconfiguration.

Why a Vulnerability Assessment is Not Enough

As described earlier in the article, vulnerability assessments are important and needed as part of your cybersecurity strategy; however, they are not the only assessment tools you should be using. Whilst they form part of the cybersecurity strategy, they have disadvantages which mean they are not enough on their own for many organisations. These include:

- A vulnerability report is just a starting point. Fixing and patching vulnerabilities is a manual process: resolving issues still requires action from your team and can take a significant amount of time, which means the most critical vulnerabilities may not be addressed before they are exploited.
- Resolving identified vulnerabilities can be highly technical, requiring specialist skills. It's easy to underestimate just what is required when it comes to fixing these issues; many highly technical fixes need specialist skills.
- Vulnerability scans will not detect all weaknesses. Automated tools won't catch issues that have not yet been catalogued, or emergent new threats; some of these may be critical and could lead to big problems if not caught.
- Security breaches are often caused by unpatched vulnerabilities. In a recent report, nearly 60% of security breaches involved unpatched vulnerabilities.

Whilst vulnerability management is an essential part of cybersecurity, it is not the only solution you should consider to help protect your organisation.

Keeping your business safe from cyber attacks has become a necessity - especially in today's fast-paced society - and performing a vulnerability assessment is a great starting point. These assessments help to identify, classify, and prioritise security vulnerabilities in IT infrastructure. However, they are not the only things you need to get ahead of your cybersecurity needs, as they can often miss new vulnerabilities and require specialist skills to patch up old ones. If you are looking to get a better understanding of where your organisation's cyber weaknesses lie, Bluefort's Evolve IT Services can not only help you to get a much better understanding of these threats but also provide you with the solutions to protect your organisation in the long term. Call 01252 917000, email [email protected], or get in touch with us via our contact form.
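And the toy illustration promised under "Network-based assessment": the essence of that category is probing hosts you are authorised to test for exposed services. Real assessments use dedicated scanners and vulnerability databases; this sketch only shows the underlying idea, and the address is a placeholder from the TEST-NET range.

```python
import socket

HOST = "192.0.2.10"   # placeholder address; only scan hosts you are authorised to test
PORTS = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

for port, name in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        # connect_ex returns 0 when the TCP handshake succeeds
        state = "open" if sock.connect_ex((HOST, port)) == 0 else "closed/filtered"
        print(f"{port:5d} ({name}): {state}")
```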
<urn:uuid:c953192e-180c-4823-86ef-b758c3193f98>
CC-MAIN-2022-40
https://www.bluefort.com/news/latest-blogs/what-is-vulnerability-assessment/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00042.warc.gz
en
0.943514
1,176
2.671875
3
Linux, being an open-source operating system, is a popular choice among users. There are a variety of Linux distributions available, each differentiated by its package manager. Let's first understand what the main role of a package manager is. In the simplest sense, a package manager is a tool that facilitates the installation, upgrade, configuration, removal, and management of software on a Linux operating system. It is therefore important to choose the best package manager for your Linux system to get the smoothest experience.
If you are a student who uses Linux, this article is especially helpful for you. We are going to talk about some of the best Linux package managers for newbies.
DPKG
DPKG is short for Debian Package and is a low-level package manager in common use. It often works in the background, carrying out commands issued by higher-level package managers such as APT. It is mostly used to install, manage, and remove Debian packages. One of the most common drawbacks of DPKG is that it cannot automatically download and install packages from repositories, which is a problem for users who want their software to be upgraded automatically.
APT
APT stands for Advanced Packaging Tool. Developed by the Debian project, it is an open-source package manager for Debian-based distributions such as Ubuntu and Linux Mint.
APT acts as a front end for DPKG: it hands basic tasks to DPKG in the backend but uses its own software for downloading and managing packages. It is considered a great package manager for beginners, since it offers a gentle introduction to package management tools. APT offers a variety of GUIs to choose from and gives you the flexibility to pick one according to your preference instead of forcing any particular GUI upon you. It is one of the most preferred package managers for beginners.
The most used APT command-line tool is apt-get. It is used for the installation, upgrade, and removal of software packages. The best thing about this tool is that the whole OS can be upgraded with it.
Aptitude Package Manager
Aptitude is another front-end package management tool, initially designed for Debian operating systems and their derivatives, and now also relevant for RHEL-based operating systems. It is quite similar to APT, and you can choose between the two by testing both and seeing which best suits your needs. Some features that distinguish Aptitude from APT are safe upgrades, which allow upgrades without affecting existing data, and package holding, which prevents specified packages from being updated automatically. Some call Aptitude the higher-level cousin of APT because of these features.
If you would rather not use a command-line manager, you might want to try the Synaptic Package Manager. It is a GUI package management tool and can be used in place of the apt-get command-line tool.
RPM (Red Hat Package Manager)
This is an open-source package manager developed by Red Hat and used on Red Hat-based systems such as CentOS, RHEL, and Fedora.
It allows users to install, update, query, uninstall, verify, and manage system software packages on Linux. However, it is unable to fetch or install packages directly from internet repositories.
YUM (Yellowdog Updater, Modified) and DNF (Dandified YUM) are popular command-line package managers for RPM-based systems. YUM works on top of RPM files to unlock many additional functions, while DNF is an advanced, modernized version of YUM.
Pacman Package Manager
If you are using a distribution such as Arch Linux, you should go for the Pacman package manager. Arch Linux is a rolling-release OS. Pacman combines a simple binary package format with an easy-to-use build system. It is quite a streamlined package manager: simple, yet with plenty of depth. One of Pacman's distinguishing features is that it keeps your system updated by synchronizing package lists with the master server, connecting to the internet to fetch packages from there. It uses tools such as makepkg and has different GUIs available. One drawback of Pacman is that it cannot install files from third-party repositories.
Zypper Package Manager
Zypper is a command-line package manager used for installing, removing, and updating packages. It also has utilities for managing repositories and resolving dependency issues. The biggest advantage of Zypper is that it is fast and light on resources, which also makes it suitable for servers and remote machines. It works on openSUSE Linux and uses the libzypp library.
Portage Package Manager
Portage is one of the most efficient package managers for Gentoo and handles the installation and management of packages. It is also used by Chrome OS, Calculate, Sabayon, and Funtoo Linux, among many others. Some of its distinguishing features include backward compatibility and automation.
ABS Package Builder
ABS stands for Arch Build System. As the name suggests, it is for Arch Linux-based systems and was developed to create installable software packages from source code. One distinguishing feature of the ABS package builder is that it can customize existing packages; it can also build or install a custom kernel. However, because it builds packages from source instead of using pre-compiled packages, it is a less user-friendly option.
Being a beginner at something as technical as Linux can be overwhelming, so it is wise to pick the package manager you find most convenient and simple. New alternatives to packages keep being introduced in the market, but nothing beats having a solid package manager, and they are not going to be out of the market for quite some time, so it is better to get yourself acquainted with these managers.
Bio: Adri is a computer application graduate and a content writer. Her niche is technical writing, and she finds great ways to explain technical content to a layperson with ease. She aspires to write her own book someday.
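As a practical footnote to the APT section above, here is a minimal sketch of the apt-get workflow driven from Python via subprocess. The package name is just an example, the commands assume a Debian/Ubuntu system, and you would normally need root privileges (e.g. via sudo) to run them.

import subprocess

PACKAGE = "htop"  # example package name

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises CalledProcessError if the command fails

run(["apt-get", "update"])                  # refresh package lists from the repositories
run(["apt-get", "install", "-y", PACKAGE])  # resolve dependencies, download, and install
run(["dpkg", "-s", PACKAGE])                # verify: dpkg reports the installed package's status

Note how this mirrors the front-end/back-end split described above: apt-get handles repositories and dependencies, while dpkg is the low-level tool that inspects the installed package.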
<urn:uuid:cb72440a-08cf-4d9d-bcf8-981512c6672c>
CC-MAIN-2022-40
https://www.cyberpratibha.com/blog/best-linux-package-managers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00042.warc.gz
en
0.943478
1,385
2.78125
3
The Deadly Trio
We will take a look at the three main dangerous members of this community, namely:
- Viruses
- Worms
- Trojan Horses
Worms & Their Mitigation
Worms are similar in nature to viruses but more advanced as far as propagation is concerned. A worm installs itself in the memory of a computer, but it has the capability to transfer itself to other hosts automatically, even without human intervention, making it much more serious than a virus. The only thing to note about worms is that they do not spread through any magical or mysterious phenomenon; they simply take advantage of automatic file sending and receiving features that have been enabled on computers in a network, either unintentionally or for some legitimate purpose.
Worm Attack Procedure
The anatomy of a worm attack can be broken down into three small stages with far-reaching effects:
- The first stage goes by the name of the enabling vulnerability and is the first step of the process, wherein the worm gets installed on a vulnerable system.
- After getting installed on a vulnerable system, the worm spreads its wings further by propagating to new targets. It does so by automatically replicating itself and attacking other systems.
- During the last, or payload, stage, the hacker tries to raise the level of his or her access. Initially the hackers can typically use the targeted system only as an ordinary user, but with the passage of time they aim to gain access rights equivalent to those of an administrator, and can cause maximum damage after that.
Once the above cycle is complete, the worm propagates through automatic transmission mechanisms, exploiting the vulnerabilities of the other systems attached to the initially targeted system, and the entire process starts all over again. In this manner the hacker may gain access to multiple systems and could even use the entire array of infected systems to launch botnet attacks.
The figure below shows such a propagation in a network that is supposedly safe and secure behind a firewall.
Figure 1: Worm Attack
How to Mitigate Worm Attacks
Due to the specific nature of worm attacks and their methods of self-propagation, mitigating them obviously requires significantly more intelligent effort and coordination between the various departments related to network management, such as the engineering staff and the security administrators. Controlling and responding to worm attacks is difficult, though not impossible. Proper and systematic planning is necessary for worm attack mitigation, and the process can normally be divided into the following steps.
Four Steps to Worm Mitigation
- Containment: This step involves compartmentalizing the network into infected and non-infected parts, which helps contain the spread of the worm attack.
- Inoculation: This step involves scanning and patching the vulnerable systems.
- Quarantine: In this step the infected machine is detected, disconnected, and removed. If removal is not possible, the infected machines are blocked.
- Treat: This is the step where cleaning up and patching is done. Some worms may require reinstalling the entire system for a thorough clean-up.
The response methodologies for incidents can be categorized into six classes, based on the response methodology adopted by the network service provider. The classes are enumerated below.
Classes of Response Methodology
- Preparation: Acquiring the resources needed to achieve successful mitigation.
- Identification: Identifying the worm.
- Classification: Classifying the worm into a specific category.
- Traceback: A sort of reverse-engineering process wherein the source of the worm is traced.
- Reaction: Reacting to the worm by isolating and repairing the targeted systems.
- Post mortem: Documenting and analyzing the entire process used, so that such attacks can be dealt with better in the future.
Viruses
A virus is a piece of deliberately fabricated code which carries out destructive or non-productive tasks on the computer system to which it gets attached. Similar to biological viruses, computer viruses can attach themselves to normal programs and modify their behavior in a destructive manner. The only "safe" factor in a virus is that, unlike the worms discussed above, it cannot automatically transmit itself over a network unless there is human intervention.
The activities carried out by a virus could be as simple as displaying an annoying or teasing message to the user, or as severe as deleting the entire file system of the computer. The latter is obviously a serious consequence, since valuable data could be lost, possibly resulting in a catastrophe if proper backups are not available.
Trojan Horses
The term Trojan horse is taken from Greek mythology and the tale of the Trojan War, in which soldiers hid inside the statue of a horse and thereby captured the city of Troy. As you can gather from this short description, in computer terminology the term Trojan horse refers to programs which appear attractive and genuine on the surface but have malicious code embedded inside them. This code could be a virus, a worm, or both.
Figure 2: The Legendary Trojan Horse
The Trojan horse can then be used by the attacker to carry out a variety of nefarious activities from a remote location, which could include tampering with the target computer's files, stealing passwords, viewing screenshots, collecting key-logging reports, and so forth. Some of the ways in which a Trojan horse program could get inside a computer are embedding in an otherwise genuine program, email attachments, and executable web content such as ActiveX controls.
One of the most notorious Trojan horse programs of recent times was the Love Bug, which originated in the Philippines and infected innumerable computer systems around the globe. This horse actually carried a worm in the form of a VBS program; it caused damage of nearly 6 billion US dollars, and even organizations of the likes of the CIA and the Pentagon had to shut down their systems temporarily to get rid of it.
Virus and Trojan Horse Attack Mitigation
Virus and Trojan horse attacks can be kept under control by taking proper precautions, such as using appropriate antivirus software. The following steps should ensure that attacks are kept at a minimum threat level, even if not totally eliminated:
- There is a wide variety of antivirus software available in the market. Some of it comes as stand-alone software, and some is embedded within a large array of applications known as an Internet Protection Suite. It is important to use tested and effective antivirus software in order to keep viruses and Trojans at bay.
- Installation of the appropriate software application is necessary, but it is certainly not sufficient to keep malicious code at bay. Unless these applications are continuously upgraded and kept in order, they will not be effective against the latest threats and attacks. It has been found that with the advancement of technology, the safe period available to security personnel between the launch of new viruses and threats has been shrinking constantly; it currently stands at perhaps a few minutes, compared to a few days or weeks in the past.
- Knowledge is power, goes the old but equally wise saying, and it applies to network and security administrators as well. They should strive hard to keep themselves updated on the latest threats, attack methods, and principles being deployed by hackers. With such knowledge, they will be better equipped to deal with these situations and to prevent a potentially catastrophic situation from emerging.
- Deploying appropriate intrusion detection and prevention systems is also effective in warding off such dangers. One example of this is the CSA, or Cisco Security Agent.
Intrusion Detection & Prevention
A brief mention was made in passing of intrusion detection and prevention systems in the previous section on warding off viruses and Trojans. I will deal with this area only briefly in this tutorial, since a detailed study of intrusion prevention and detection is carried out in a separate tutorial dealing specifically with the broader aspects of the issue from a higher perspective.
It will suffice here to define the basic constituents of IDS and IPS systems, which stand for Intrusion Detection System and Intrusion Prevention System respectively. The main idea behind these systems is to continuously monitor traffic flow along a network path, normally at the interface between the trusted and untrusted sections. An intrusion detection system is a passive system which only monitors activity in bypass mode, not obstructing traffic directly and only raising an alarm if any anomaly is noted. The IPS, on the other hand, is an active service which directly intercepts traffic and allows only permitted packets to pass through it, blocking everything else. These systems can be deployed using appropriate hardware, software, or a combination of both. Cisco also offers a variety of devices which play an important role in intrusion prevention and detection.
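To make the antivirus discussion above concrete, here is a toy Python sketch of the core mechanism behind signature-based scanning: hash each file and compare it against a set of known-bad signatures. The hash value below is a dummy placeholder, not a real malware signature, and real engines add heuristics, unpacking, and behavioral analysis on top of this basic idea.

import hashlib
from pathlib import Path

# Dummy placeholder signature (not a real malware hash).
KNOWN_BAD_SHA256 = {"0" * 64}

def sha256_of(path):
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream the file in 64 KiB chunks
            h.update(chunk)
    return h.hexdigest()

def scan(directory):
    """Return files whose hash matches a known-bad signature."""
    return [p for p in Path(directory).rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]

print(scan("./downloads"))

This also illustrates why signature databases must be updated constantly, as stressed above: a scanner can only flag what is already in its signature set.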
<urn:uuid:d0daba1f-0a6b-42d8-896d-062f6da2e0e0>
CC-MAIN-2022-40
https://www.certificationkits.com/cisco-certification/ccna-security-certification-topics/ccna-security-describe-security-threats/ccna-security-worm-virus-and-trojan-horse-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00242.warc.gz
en
0.954484
1,809
3.046875
3
Thank you for Subscribing to CIO Applications Weekly Brief
Emerging Technologies to Transform the World in Near Future
The advent of nanomedicine, a nanotechnology application, is already proving to be an important solution for the treatment of serious diseases. Since drugs in the human body are absorbed rapidly and eliminated as waste before treatment completes, nanomedicine may prolong the time a drug stays active in the body.
Fremont, CA: The last decade has seen remarkable technological breakthroughs. Without question, technology has permeated every aspect of our personal lives as well as of business. Technology has the ability to change everything: from the evolution of artificial intelligence, the internet of things (IoT), and 5G to big data, cloud computing, and analytics, it is revolutionizing the world's future. The world is already witnessing the rapid deployment of autonomous vehicles in trial phases, and forward-thinking businesses appear ready to seize any opportunity to deliver world-changing creativity. Here are some emerging technologies that will redefine the future:
Autonomous things (AuT), also known as the internet of autonomous things, are an impressive technological breakthrough. AuT is becoming an ever more exciting concept, driven by emerging technologies such as AI, big data, and the cloud, and it will continue to progress. Autonomous things have the potential to deliver cutting-edge performance, functioning more naturally within their environment and alongside people, because they use AI to carry out complex tasks that were previously done by humans. Self-driving cars, robots, and unmanned aerial vehicles (UAVs) and drones are all examples of autonomous things.
Recent developments in genetics, proteomics, molecular and cellular biology, materials science, and bioengineering have all contributed significantly to the advancement of nanomedicine. Currently, medical scientists are developing a range of medical devices and equipment that use nanotechnology to improve performance, protection, and personalization.
Rapid technical advancements are making space tourism a possibility. Today, a slew of companies, including Blue Origin, SpaceX, and Virgin Galactic, are racing to build sub-orbital tourist craft. Though human space exploration is still in its infancy, the possibilities it now offers the world will determine its maturity for future generations. And while humans have so far relied on rockets and shuttles as the mode of transport to outer space, several businesses are reimagining the space elevator.
<urn:uuid:342b2a86-3cb4-4ad2-bcca-12a3df13c7e5>
CC-MAIN-2022-40
https://www.cioapplications.com/news/emerging-technologies-to-transform-the-world-in-near-future-nid-7480.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00242.warc.gz
en
0.94332
562
2.96875
3
The great promise of white space technology has firmly moved from the regulatory and testing realms into the field. Two recent pieces of news show how flexible the technology is.
White space, the use of spectrum formerly in the hands of broadcasters for wireless services, is sometimes referred to as "Wi-Fi on steroids." While technically inaccurate, the term paints a pretty good picture of what is possible.
In addition to the travails of the normal development cycle, the evolution of white space has been tricky for a couple of extra reasons. For one thing, wireless mics, which can use the same frequencies as white space, needed to be protected. A second challenge is that different amounts of white space spectrum are available in different areas, and that availability shifts on an ongoing basis. Figuring out how to manage that was difficult, but the advances made to do so will be felt beyond white space itself.
This week, Ars Technica and other sites reported that Cal.net, an ISP in California, has deployed a white space service to provide broadband to more than 59,000 of its rural customers in the central and northeastern parts of the state. While the RuralConnect service isn't the first to use white space (the Ars Technica story says such services have been rolling out since January of last year), it's still news when a new service launches. Cal.net could provide speeds as fast as 16 megabits per second (Mbps), the story says.
Wireless techniques such as white space are understandably ideal for rural areas and nations with less infrastructure. Late last month, The Register reported on a trial Google is running in South Africa. The story reports that the trial will focus on 10 schools in Cape Town. The nation is earlier in the process than the United States and, consequently, the story focuses on the regulatory and technical issues in play.
Apparently, more is at stake in South Africa than impressing regulators. Caroline Gabriel of Rethink Wireless reported on April 4 that the trial will use version 1.0 of an emerging standard called Weightless. Weightless was written by companies including ARM, Google, and Neul, an English firm that has been active in white space development. It is aimed at harnessing white space platforms for machine-to-machine (M2M) communications. In other words, massive armies of sensors will be able to use Weightless to communicate over white space networks. Trident Systems offers more details on the connection between M2M and white space. The tentative standard was agreed at a plenary session of the Weightless Special Interest Group (SIG) and now will be voted on by the entire membership.
<urn:uuid:2cd09ec2-3db2-4459-902b-601d83ced33d>
CC-MAIN-2022-40
https://www.itbusinessedge.com/communications/from-broadband-to-m2m-white-space-has-great-potential/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00242.warc.gz
en
0.957412
545
2.65625
3
Password Spraying Attack
A password spraying attack is a type of brute force attack in which a malicious actor attempts the same password against many accounts before moving on to another password and repeating the process. This is effective because many users use simple, predictable passwords, such as "password123."
A common practice among many companies is to lock a user out after a number of failed login attempts (usually 3-5) within a short period of time. Because a password spraying attack spreads its attempts across accounts, with only one try per account in each round, a bad actor is able to avoid being detected and locked out, which is a common problem with regular brute force attacks.
"I was asked to change my password when my bank fell victim to a password spraying attack. It turns out some hacker managed to try millions of username and password combinations against the bank's users - and I was one of them."
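On the defensive side, spraying leaves a recognizable fingerprint in authentication logs: one source failing once against many different accounts in a short window. Here is a small, illustrative Python sketch of that detection logic; the log format, addresses, and thresholds are invented for the example.

from collections import defaultdict
from datetime import datetime, timedelta

# (timestamp, source_ip, username, success) - hypothetical parsed log entries
events = [
    (datetime(2022, 9, 1, 10, 0, 0), "203.0.113.7", "alice", False),
    (datetime(2022, 9, 1, 10, 0, 2), "203.0.113.7", "bob",   False),
    (datetime(2022, 9, 1, 10, 0, 4), "203.0.113.7", "carol", False),
    (datetime(2022, 9, 1, 10, 5, 0), "198.51.100.9", "alice", True),
]

WINDOW = timedelta(minutes=10)
THRESHOLD = 3  # distinct accounts failing from one source within the window

def spray_suspects(events):
    fails = defaultdict(list)  # source_ip -> list of (time, username) failures
    for ts, ip, user, ok in events:
        if not ok:
            fails[ip].append((ts, user))
    suspects = set()
    for ip, entries in fails.items():
        entries.sort()
        for start, _ in entries:
            users = {u for t, u in entries if start <= t <= start + WINDOW}
            if len(users) >= THRESHOLD:
                suspects.add(ip)
    return suspects

print(spray_suspects(events))  # {'203.0.113.7'}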
<urn:uuid:4498c6a9-32f4-4f5a-8991-ff35d52b0cce>
CC-MAIN-2022-40
https://www.hypr.com/security-encyclopedia/password-spraying-attack
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00242.warc.gz
en
0.969515
180
2.671875
3
Identity theft is one of the biggest security concerns in India and worldwide. Cybercriminals are becoming more and more sophisticated: they're using advanced machine learning and artificial intelligence to steal your identity. Identity theft could land you in serious social or financial trouble. For example, someone could hack your Facebook account and demand money from your friends, or misuse it to such an extent that it ruins your image. They may also collate information to hack your other accounts, including high-value ones.
According to Unisys' Global Security Index 2020, nearly four in 10 Indian adults have experienced some sort of identity theft in their lifetime. The good news is that there is considerable awareness in India: India actually rates second-highest on the scale, with an average score of 223. Indian consumers are now especially concerned about security under Covid-19 circumstances, because everybody is working from home.
Let's look at how identity theft can affect you and the ways to protect your identity. Though nothing can provide 100% security, the advice here can certainly enhance it.
How does identity theft happen?
- Dumped/stolen documents: Through bank statements, credit card offers, and other papers stolen or dumped in the garbage. These let attackers apply for accounts in your name.
- Phishing: An attacker disguised as a trusted entity dupes a victim into opening an email, instant message, or text message, which can lead to the installation of malware, the freezing of the system as part of a ransomware attack, or the theft of sensitive information. This can result in unauthorized purchases, stolen funds, or identity theft.
- Phone scams: Over the phone, by pretending to be an authorized entity (e.g. a bank) to convince individuals to give up their personal and financial information.
- Data dumps: Sophisticated hackers can access the customer data of retail stores, medical facilities, banks, and other organizations.
- Dictionary attack: If you use common dictionary words as passwords, you are highly vulnerable, as hackers can run a program which tries all dictionary words as possible passwords. See most commonly used passwords.
- Brute force attack: The attacker automates software to try as many combinations as possible, at perhaps around 350 billion guesses per second. Anything under 9-12 characters is vulnerable to being cracked.
- Website breaches: A website storing your login information can be hacked. Some hackers then try to log into other websites using these credentials.
- Credential recycling: If you've recycled your credentials (i.e., used the same username and password elsewhere), you're at great risk when one of those accounts gets hacked.
- Social media: If you share too much sensitive information, you're at greater risk, as hackers can misuse it to gather further information about you from elsewhere.
- Ad campaigns: Some people collect personal information by luring others with offers. Usually the information is sold on and can be misused.
How to protect yourself from identity theft
These four tips for being more secure in your online life will help keep you safer.
1. Enable Two-Factor Authentication (2FA)
Two-factor authentication verifies your identity by another factor, which is typically one of these three things:
- Something you know: e.g. an ATM PIN, or security questions set up during registration.
- Something you have: e.g. a mobile (which receives an authentication code via SMS or an authenticator app), or an ATM card (which you can verify using the CVV code on the back).
- Something you are: e.g.
a fingerprint or iris scan.
Examples:
- When you withdraw money using an ATM card (1st factor) and PIN (2nd factor).
- When you unlock your mobile (1st factor) using your PIN, fingerprint, or face unlock (2nd factor).
- When you perform an online transaction using your credentials (1st factor) and an OTP (2nd factor) received on your mobile or in an email.
- When you log in to your social media account using your credentials (1st factor) and one-time-use codes or an OTP (2nd factor) in a separate authenticator app, e.g. Google Authenticator or Authy.
Advantage: An extra layer of security.
Disadvantage: Inconvenience on first-time login on a new device; for subsequent logins you may choose 'remember me on this device'.
Curious to know which websites provide 2FA? Check this out: 2FA Providers.
Caution: SMS-based OTPs are unsafe, so it is better to opt for alternatives (one-time-use codes or app-based OTPs). For details, visit SMS based OTP are unsafe.
2. Don't reuse passwords
If you reuse your passwords across every single online account, you're at greater risk: hackers can easily break into all your accounts if they manage to hack one. Separate passwords are difficult to memorise, so consider the following options:
Use a password manager - dedicated software to store all your passwords and fill them in automatically when you revisit a site. Most browsers have their own, but those aren't as mature and secure as dedicated ones. Password managers are cloud-based or offline (more secure but less convenient) and let you generate random passwords of up to 40 characters, which are very difficult to crack. Since it's advisable to change your passwords frequently, password managers are handy: you can easily generate and store a new password whenever you need one. All you have to remember is a master password, which isn't stored by the password manager, while your store (vault) is encrypted. So make sure you choose a strong master password which you can remember, and also set up backup security questions if available. You may compare the options and choose the one which best suits your needs. Start with less valued accounts, and then consider storing high-value ones after getting comfortable with password managers.
Some antivirus software suites also offer password managers (vaults). These may not be as good as dedicated password managers, but they are a bit more economical and may just suffice. They also allow users to store other sensitive information, such as ATM card numbers, in a secure way. As noted above, password managers come in two types, offline and cloud-based; offline managers are more secure but a bit less convenient. However, before you start using a password manager, look at these suggestions.
Use a passphrase/sentence - If you don't want to use a password manager, create a passphrase (it should be random and known only to you) for critical accounts, and add modifiers to distinguish between them. E.g. if your passphrase is '2 be or not 2 be, that is the ?', you could use '2 be or not @sbi 2 be, that is the ?' as the passphrase for your SBI account. In case there's a limit on the number of characters, you could take the first or last one or two characters of every word in your passphrase to generate a password, e.g. 'Axis#2bon2b,tit?'. Choose a second passphrase for all non-critical accounts. The idea is to use unique random passwords which are easy to memorize. If that gets difficult, consider using a password manager so that you only have to remember one password or passphrase. Here are a few passphrase tips.
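If you like the passphrase approach, a computer can pick the words for you. Below is a rough Python sketch using the secrets module (designed for cryptographic randomness); the inline word list is a stand-in, and a real generator would draw from a large dictionary such as the EFF diceware list.

import secrets

# Stand-in word list; a real generator would use a large dictionary
# such as the EFF diceware list (~7,776 words).
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "pickle", "granite", "velvet", "tundra", "mosaic", "falcon"]

def passphrase(n_words=5, sep=" "):
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "granite falcon orbit velvet pickle"

The longer the word list and the more words you pick, the harder the passphrase is to brute-force, while staying far easier to memorize than a random character string.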
3. Use email aliases/disposable emails
This protects you from phishing attacks. If you use email aliases or a different email address to register for every online account, it becomes easier to spot phishing emails. E.g. if you used [email protected] to register on Facebook and you start receiving emails from SBI at this address/alias, it's a phishing attack. This can happen when Facebook either sells your email address or is involved in a data breach. Check whether your email address has ever been involved in one.
Consider the options below to stay safe from phishing attacks:
Use aliases - Every email provider offers a certain number of aliases for your main account. E.g. if your email account is [email protected], you can use aliases like [email protected] or [email protected] to register for an online account, and the emails will still be delivered to your main account. However, aliases provided by email providers aren't as effective as those from a dedicated aliasing provider like Anonaddy. These providers offer features like email forwarding and UUID aliases as well, which make your email IDs hard to guess.
If you are looking for a fresh start and would like to create a new email address and keep it free from spam, you can use a service such as Tutanota, Protonmail, or Posteo. These are end-to-end encrypted email providers, which ensures no one (not even the provider) can access your emails. Without end-to-end encryption, you're at greater risk, not just during over-the-wire transfer but while your email sits in your inbox, because the provider can snoop on it.
Use disposable emails - You can use temporary/disposable emails during the testing or trial of temporary accounts which you'll never come back to. Providers like Guerrillamail give you temporary emails to help protect your privacy in such cases, if not blocked by the service you want to sign up for.
Aliases/disposable emails help you:
- Avoid spam in your inbox
- Identify who has sold your data or been involved in a data breach
- Protect your identity
- Quickly update where emails are forwarded in case of a data breach
- Protect yourself from phishing attacks
In case you're interested in how many times your email address has already been involved in a data breach, visit haveibeenpwned.
4. Stay informed and be careful who you trust
One of the easiest but most effective ways of keeping your accounts secure is simply to keep up with the tech news. If you know about the latest threats and breaches, and how to deal with them, you won't fall prey to them. Also avoid sharing sensitive information with websites which don't support HTTPS and password hashing. If a website supports HTTPS, you will see a lock icon in the address bar.
Additional tips to keep your PC safer
- Install antivirus software if using a Windows OS; many suites are bundled with password managers, though these are not as good as dedicated ones.
- Keep your software updated; many a vulnerability comes through outdated software.
- Turn your PC off when not in use; don't give hackers a 24/7 window to gain access and install malware.
- Regularly clear your browser cache; saved cookies, saved searches, and web history could point to sensitive personal data.
- Turn off the 'save password' feature in browsers; it's best to leave password protection to the experts who make password managers.
- Be wary of phishing attacks and don't click any unverified link.
- Use your browser's incognito mode or a safe browser for financial transactions. Many antivirus suites come with safe browsers as well.
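Picking up the haveibeenpwned mention above: its Pwned Passwords service exposes a k-anonymity range API, so you can check a password against known breaches without ever sending the password (or even its full hash) over the network. Here is a minimal Python sketch of that check.

import hashlib
import urllib.request

def pwned_count(password):
    """Return how often a password appears in known breaches (0 if never seen)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]  # only the 5-char hash prefix is sent
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():           # each line: HASH_SUFFIX:COUNT
        sfx, _, count = line.partition(":")
        if sfx == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))  # a very large number; never use this password

Because only the first five characters of the SHA-1 hash leave your machine, the service learns nothing usable about the password itself.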
Useful browser extensions for safe browsing
- HTTPS Everywhere: This extension tries to open the most secure version of web pages.
- uBlock Origin: Ads can lead to shady websites with malware; ads are also annoying and slow down web pages.
- Bitdefender TrafficLight: This add-on/extension warns you about dangerous websites so you do not visit them.
- Password manager: Most online password managers have browser extensions; if you use a password manager, I highly suggest using its extension as well.
Additional tips to keep your mobile safer
- Install antivirus software if the OS isn't particularly secure.
- Keep your software updated.
- Don't give any app unnecessary access during installation, e.g. if Grofers asks for photos permission, which is absolutely not required for its operation.
- Never install software by clicking on a link provided in an SMS or instant message; always visit the app store to install it.
- Consciously check and configure app privacy settings.
- Enable remote location and device wiping.
- Lock your smartphone and tablet devices; use strong passcodes rather than a less secure 4-digit PIN if possible. While iris, face, and other biometric unlocks exist for convenience and are more secure than a short PIN, they can be bypassed.
- Hide or lock apps, especially sensitive ones.
- Disable Bluetooth when you're not using it; it leaves data vulnerable to interception and can be used to spread malicious files and viruses.
- Be overly cautious when sharing personal information.
- Watch out for impersonators; don't give out personal information over the phone or by email.
- Regularly clear your browser cache.
- Turn off the 'save password' feature in browsers.
Useful tips for online accounts
- Consider using virtual credit/debit cards for online transactions. Almost all major banks provide virtual cards, which get a unique 16-digit number on every request. The validity is a single transaction or up to 48 hours, whichever comes first.
- Limit social media sharing and never reveal your real identity; e.g. if you create an email ID with your real name and date of birth, you're already giving hackers two pieces of information they can use to find out the rest about you.
- Use 2FA: Google, Facebook, Twitter, LinkedIn, Amazon, GitHub, and most banks support it.
- Keep changing your password on a regular basis, even if it's not enforced.
- Always set up a recovery email, phone number, security questions, or codes to help you recover your account if it's hacked.
- In case your account is hacked, immediately change the password.
- Always turn on notifications, if available, for any suspicious activity in your account; e.g. Netflix sends you a notification for every new login. Also keep checking your online activity: many websites provide an activity log as part of your profile settings, and from there you can even log out of all devices except the current one.
- Avoid public Wi-Fi, or use a VPN, as a rogue actor might be snooping and may steal your sensitive data.
- Close unused accounts, and log out of active accounts as soon as your work is done.
- Remove unused social media login connections. If you chose to log in through Google, Facebook, or Twitter in the past, keep checking the 'Apps' section of the corresponding provider for connected apps and remove the unused ones.
- Specify trusted contacts on Facebook; if your account is hacked despite everything, you can still recover it by using the one-off codes sent to your trusted contacts.
- Use a secret email; don't create email addresses with any sort of personal identity in them.
Hackers find it easy to gather additional information about you from the details embedded in such an address. Identity theft may not seem too big an issue to bother about up front, but it can ruin your life if you're not aware of its consequences and the ways to prevent it. So be proactive to prevent it, and spread the awareness so that others can be protected as well. Let me know if this helped you secure your identity, and how we can secure it further. You may leave your comments here or mail me.
<urn:uuid:18d538c2-5e0c-43f0-9f94-ed30e60f4998>
CC-MAIN-2022-40
https://www.comakeit.com/blog/how-to-prevent-identity-theft/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00242.warc.gz
en
0.903989
3,236
3.109375
3
What Is Hyperautomation? | Buzzwords
Hyperautomation definition: Hyperautomation is the combined use of several technologies and solutions that can be leveraged to systematically identify and automate as many business operations processes as feasible.
What Is Hyperautomation?
Business automation is critical to the operations of modern organizations. Even small SMBs can benefit from a certain degree of automation, such as automated invoicing. For larger companies, there is a raft of applications for automation, whether it's payroll, customer service, onboarding, or data handling; the possibilities are many.
Technologies like robotic process automation (RPA) have been increasing in popularity in recent years, to the extent that the RPA market is currently growing at a rate of 31% each year and is expected to be worth $4 billion by 2025. So, what happens when a business gets a taste of automating individual tasks and wants to apply automation on a larger scale? That's where hyperautomation comes in.
Hyperautomation uses a variety of tools and solutions, like data mining, RPA, low-code platforms, document management, machine learning, and artificial intelligence, to create a working "loop" that companies can use to identify automatable tasks and build as efficient a business as possible.
Watch this Buzzwords video to get a brief overview of what hyperautomation is, why businesses use it, and its benefits. Find out more below:
The Buzzwords series features short videos that break down popular topics focused on business and technology. Each video takes a look at a top trend and why you may want to incorporate it into your organization. View the full series here.
More on automation
Amazon's "FBA" and Similar Business Automation
Learn about Amazon FBA automation and other forms of RPA with Impact. Our experts review how it works alone and as part of a Digital Innovation strategy. Watch the webinar here.
What Is RPA? Your Guide To Robotic Process Automation
More businesses are implementing robotic process automation solutions every year. But what is RPA, and how can it help you? Find out here.
Understanding Workflow Automation
Workflow automation provides businesses with a key competitive advantage. Learn the ins and outs of it by downloading this eBook.
<urn:uuid:d886407d-1593-447d-b615-5e5529b2f482>
CC-MAIN-2022-40
https://www.impactmybiz.com/videos/what-is-hyperautomation-buzzwords/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00242.warc.gz
en
0.909707
492
2.875
3
It's time to dig deeper into our work with numbers (including math) and strings.
What is Your Type?
First of all, it might be handy (especially as you're learning) to have Python report the type of a variable or a literal value. For example:
C:\Users\terry>py
Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> type(199.9)
<class 'float'>
>>> type("Howdy!")
<class 'str'>
>>> type(10043)
<class 'int'>
>>> a = "This is cool!"
>>> type(a)
<class 'str'>
>>>
More on Division
Division in Python comes in two variations:
/ carries out floating point division
// carries out integer (truncating) division
>>> 15 / 4
3.75
>>> 15 // 4
3
Many of the math precedence rules would make sense to us (at least those of us who took some math in school!). For example, multiplication wins over addition. But fortunately, Python supports the use of ( ) to indicate precedence. This keeps us from having to worry about default behaviors. Here is an example:
>>> 10 + 20 + (10 * 7)
100
What Base Are You On?
Of course, Python permits you to speak to it in more than Base 10. Use the following prefixes:
0b for binary
0o for octal
0x for hex
>>> 0b11001101
205
>>> 0x23e
574
Change Your Type!
Use the int() function to change other data types to an integer:
>>> int(19.0)
19
>>> int(True)
1
>>> int("2020")
2020
I hope you had fun in this lesson! More soon!
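Bonus: going the other direction. Python can also convert integers back into binary, octal, and hex strings, and int() accepts an explicit base for parsing. A quick sketch in the same interactive style:
>>> bin(205)
'0b11001101'
>>> oct(574)
'0o1076'
>>> hex(574)
'0x23e'
>>> int("23e", 16)
574
>>> int("11001101", 2)
205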
<urn:uuid:87bc76fb-3d12-4673-accd-48fd20fdb96d>
CC-MAIN-2022-40
https://www.ajsnetworking.com/learning-python-lesson-5-more-on-numbers-and-math/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00242.warc.gz
en
0.792477
433
3.828125
4
Traffic shaping (or packet shaping) is a technique of limiting the bandwidth that can be consumed by certain applications to ensure high performance for critical applications.
Traffic shaping enables organizations to increase network performance by controlling the amount of data that flows into and out of the network. Traffic is categorized, queued, and directed according to network policies. Essentially, traffic shaping regulates the network by slowing the transmission of packets classified as less important, so that priority applications are delivered without delay. By managing the bandwidth of the network, organizations can ensure the performance and quality of service of their key applications and business traffic.
Any network has a finite amount of bandwidth, which makes traffic shaping through bandwidth management a key tool for ensuring the performance of critical applications and the delivery of time-sensitive data. Traffic shaping is a powerful and flexible way to enforce quality of service and defend against bandwidth-abusing distributed denial-of-service (DDoS) attacks. It protects networks and applications from traffic spikes, regulates abusive users, and prevents network attacks from overwhelming network resources.
The first step in implementing an efficient traffic shaping system is categorizing the different kinds of traffic on the network. For example, an organization may want to prioritize traffic to and from a key web application to ensure that no matter how busy the network gets, this important traffic is forwarded normally. This means that other kinds of traffic may be deprioritized; when that happens, the packets are simply held in a buffer until they can be forwarded without exceeding the total desired and configured rate.
Once the categorization system is set up, the traffic shaping appliance (often an Application Delivery Controller) begins to manage the bandwidth into and out of the network. With new types of applications, traditional traffic shaping techniques may not be sufficient. BIG-IP Local Traffic Manager offers sophisticated and granular traffic shaping technology, enforced by customizable business policies called iRules.
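To make the buffering idea concrete, here is a compact Python sketch of a token bucket, the classic mechanism behind this kind of rate control. It is an illustrative model only, not F5's implementation: packets pass while tokens are available, so sustained throughput never exceeds the configured rate while short bursts are absorbed.

import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s      # sustained rate limit
        self.capacity = burst_bytes       # maximum burst size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                   # forward the packet now
        return False                      # shape: queue it until tokens accumulate

bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)  # roughly 1 Mbit/s
print(bucket.allow(1500))  # True while the bucket holds enough tokens

The difference between shaping and policing shows up in the False branch: a shaper queues the packet until tokens accumulate, while a policer simply drops it.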
<urn:uuid:7bc65bc3-0e3a-4f61-927b-e65bf715d542>
CC-MAIN-2022-40
https://www.f5.com/services/resources/glossary/traffic-shaping
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00242.warc.gz
en
0.930182
385
3.65625
4
As technology evolves, so too does cybercrime. The recent rise in remote work, and the broadening of the attack surface that accompanied it, have shown that cyber criminals are nothing if not resourceful and opportunistic. So, as bad actors add artificial intelligence (AI) and machine learning (ML) strategies to their tool kits, those who defend against cyber attacks must do the same.
AI-driven security technologies have the potential to anticipate attacks and counter them in real time. Given that the cyberattacks of the future are expected to occur in microseconds, the ability to react at machine speed is crucial. The role of humans in defending against attacks will shift, focusing instead on ensuring that enough intelligence is fed into security systems to make them successful.
The need for AI-driven security
Rich media services, increasingly intelligent endpoint devices, semi-intelligent IoT devices, and the emergence of 5G capabilities have combined to create new edge networks and fundamentally change how data is shared. This ongoing shift in how people work and live creates a host of new security concerns to address. AI-driven technology and ML are not merely useful in protecting against attacks; when prospective attackers are using that same technology, they become a necessity.
Bad actors are already using AI and ML to their advantage, building platforms to deliver malware at unprecedented speed and scale. And because humans alone cannot keep up with the increasingly complex techniques deployed by cyber criminals, those in the threat detection business must use AI, ML, and automation to maintain an edge over these malicious actors.
Proactive security using AI and ML
Staying ahead of cyber threats requires proactive strategies. As a general rule, it is much easier to have proper defence measures in place before something happens than to undo the damage after an attack. Organisations can transition to proactive security strategies by using AI/ML techniques and sandboxing to analyse information gathered from global threat intelligence networks. Training systems in all three ML learning modes (supervised, unsupervised, and reinforcement learning) further increases accuracy over time.
A successful security-driven networking approach is one that joins AI-driven security systems with modern threat intelligence and networking technologies to create a unified system. With this strategy, security becomes woven throughout the network in the form of segmentation, behavioural analytics, and zero-trust access. In addition, a distributed security system that replaces traditional sensors with learning nodes can both gather information and function as the first line of defence; in this way, it acts much like the human nervous system. Such a system is made possible by using stored knowledge supplemented with ML for threat detection and coarse-grained response.
The human element in modern cybersecurity
The use of AI and ML in cybersecurity solutions, along with automation, will also shift the role of cybersecurity professionals. Next-generation cybersecurity technologies enable integrated, enhanced user interfaces that leverage task automation. This makes it easier to onboard new junior staff and requires less oversight from senior-level staff. Moreover, these technologies can effectively compensate for the cybersecurity skills gap and leave the more meaningful, high-value work to the humans involved, thereby increasing staff retention.
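To ground the ML discussion above, here is a toy Python sketch of unsupervised anomaly detection on network-flow features using scikit-learn's IsolationForest. The feature vectors (bytes, packets, duration) are invented for the example; production systems use far richer telemetry and combine many models with human-curated threat intelligence.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy "normal" flows: [bytes sent, packet count, duration in seconds]
normal_flows = rng.normal(loc=[500, 40, 2.0], scale=[50, 5, 0.3], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

new_flows = np.array([
    [510, 41, 2.1],       # looks like ordinary traffic
    [90_000, 900, 0.2],   # huge burst in a short window - suspicious
])
print(model.predict(new_flows))  # [ 1 -1 ]: 1 = normal, -1 = flagged anomaly

This is the unsupervised mode mentioned above: the model never sees labelled attacks, it simply learns what normal looks like and flags departures from it, leaving humans to triage the flagged flows.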
AI-driven cybersecurity is the future
Modern networks are increasingly complex, requiring superhuman levels of awareness and response to keep them safe. As cyber criminals deploy increasingly sophisticated attacks powered by AI and ML, cybersecurity professionals must use those same technologies in response. The threat landscape will continue to evolve, which means AI-driven systems, trained and refined by humans with high-quality data, will become ever more essential as a means of protecting digital assets.
By Derek Manky, Chief, Security Insights & Global Threat Alliances, FortiGuard Labs
<urn:uuid:1ab06003-cdd5-4374-957a-816f24242fe7>
CC-MAIN-2022-40
https://internationalsecurityjournal.com/combating-cybercrime-with-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00242.warc.gz
en
0.932164
746
2.78125
3
Digital signage is a common feature of retail businesses across the world, be it giant digital billboards or smaller, poster-sized displays on store walls. Modern digital signage systems take advantage of cloud computing and the Internet of Things, enabling them to display high-quality visuals and animated content while also being able to change that content at a moment's notice thanks to real-time data exchange and control systems.
More recently, digital signage has begun to incorporate another emerging technology: viewer recognition systems. With recognition systems being implemented in retail digital signage, store managers and administrators are gaining access to huge amounts of data about their customers and their behavioural patterns that were, until now, inaccessible. Now that these systems are slowly but surely being deployed around the world, both the applications of this technology and the kinds of data it can gather will likely continue to develop, allowing the retail industry and others to experiment with this new technology and work out how it could best be put to use in their businesses and organisations.
In order to understand how recognition systems in digital signage could benefit the retail industry, we need to look at what recognition systems are and the roles they play in digital signage for retail.
What Are Recognition Systems?
Recognition systems are computer applications that can identify an individual, or their gender or age, from the frames of a digital picture or video source. To do this, recognition systems gather and process data from the image and then use it to identify the individual or estimate their age or gender. Recognition systems have a multitude of applications across several industries, including manufacturing, agriculture, and healthcare; however, their use in digital signage is, so far, mostly limited to the retail sector.
Several types of recognition system are being deployed today, either in combination or individually. These are:
Facial recognition systems are both the oldest and the most mature of the three main recognition technologies used in digital signage. One of the most common approaches is to focus on specific facial features in the image, then compare and contrast them with a database of facial imagery. The system can then be programmed to produce a list of potential matches for the face, from which a human operator makes the final selection. Facial recognition systems are currently being used for both targeted advertising and security applications within the retail industry.
Age recognition systems are among the more recent developments in recognition systems. Age recognition is increasingly seen as essential in digital signage for retail because of the insights it offers into the age ranges of potential customers. These systems analyse the features presented by an individual, processing data on the positioning of facial features such as the nose and chin, the gauntness of the eyes, and the levels of grey measured on the individual. Age recognition systems typically group results into age ranges for ease of use, for example Child (ages 1-15), Young (ages 16-25), Adult (ages 26-59), and Senior (ages 60+).
Gender recognition systems in digital signage for retail are also fairly new on the block, with retail outlets only recently beginning to implement the technology. A gender recognition system will typically take a digital image of an individual, smooth out imperfections in the image using a 2D Gaussian smoothing tool, then describe the face in question and build a template of it. Feature extraction then takes place, where the system notes any distinguishing features that could differentiate between the two genders. This data can then be sent to support vector machines (SVMs) for gender classification.
What Are the Benefits of Facial Recognition Digital Signage?
Now that we understand how facial, age, and gender recognition systems work, we can look at the benefits these systems can provide for digital signage within the retail sector. One of the biggest benefits of using recognition systems in digital signage for retail is the vast swathes of data these systems give retail businesses and organisations access to. Being able to identify key demographics in a retail chain's customer base is an ideal way for stores to analyse which age groups or genders are purchasing what, and where.
Identifying the interests of potential customers using a combination of facial, age, and gender recognition systems also allows retailers to generate better-targeted advertising. Targeted advertising has itself seen various transformations over the past few years; however, using recognition systems in retail digital signage gives the industry the ability to target specific groups of people with the adverts most likely to interest them, increasing the likelihood of converting that attention into a sale. Gender recognition systems, for example, allow signs to react to the gender of people passing them by showing adverts targeting either men or women. Combining age and gender recognition would allow retailers to target men or women within a specific age range. This brings the additional benefit of using data to understand the behaviour of customers of different ages and sexes, an insight which has only recently become quantifiable through advances in both technology and in marketing and advertising.
In an age where digital signage and recognition systems are both undergoing transformative periods, driven by increasingly capable artificial intelligence, the potential applications of these two technologies continue to grow. While their combined use hasn't quite reached peak levels just yet, it seems sensible to expect steady, continual growth in their implementation. For advertising to achieve optimal results, bespoke, targeted digital signage such as the systems described above needs to be practical and financially feasible for businesses and organisations to buy, implement, and manage, as the data it generates builds ever deeper profiles of their customer base, an essential commodity in the age of Big Data.
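To make the pipeline's first step concrete, here is a bare-bones Python sketch of face detection using OpenCV's bundled Haar cascade. The image path is a placeholder, and a real signage system would feed each detected face into downstream age and gender classifiers rather than just drawing boxes.

import cv2

# OpenCV ships with pre-trained Haar cascades; this loads the frontal-face one.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("shopper.jpg")                # placeholder frame from a camera feed
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # detection runs on greyscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                       # one bounding box per detected face
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
print("Detected", len(faces), "face(s)")

In a deployed sign, each cropped face region would be passed to the age- and gender-estimation stages described above, and the aggregated (ideally anonymised) counts would drive which advert the display shows next.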
<urn:uuid:ebd45196-15d5-4330-9868-6b645465ef7d>
CC-MAIN-2022-40
https://www.lanner-america.com/blog/recognition-systems-digital-signage-retail/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00242.warc.gz
en
0.948108
1,204
2.53125
3
Do you have some anxiety about Artificial Intelligence (AI) bias or related issues? You're not alone. Nearly all business leaders surveyed for Deloitte's third State of AI in the Enterprise report expressed concerns around the ethical risks of their AI initiatives.

There is certainly some cause for uneasiness. Nine out of ten respondents to a late 2020 Capgemini Research Institute survey were aware of at least one instance where an AI system had resulted in ethical issues for their businesses. Nearly two-thirds had experienced the issue of discriminatory bias with AI systems, six out of ten indicated their organizations had attracted legal scrutiny as a result of AI applications, and 22 percent said they had suffered customer backlash because of decisions reached by AI systems.

As Capgemini leaders pointed out in their recent blog post: “Enterprises exploring the potential of AI need to ensure they apply AI the right way and for the right purposes. They need to master Ethical AI.”

[ Check out our primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]

7 Artificial Intelligence ethics questions leaders often hear

While organizations aggressively pursue increased AI capabilities, they will look to IT and data science leaders to explain the risks and best practices around ethical and trusted AI. “In a future where AI is ubiquitous, adopters should be creative, become smarter AI consumers, and establish themselves as trustworthy guardians of customer data in order to remain relevant and stay ahead of the competition,” says Paul Silverglate, vice chair and U.S. technology sector leader for Deloitte.

Here, AI experts address some common questions about ethical AI. You may hear these from colleagues, customers, and others. Consider them in the context of your organization:

1. Isn't AI itself inherently ethical and unbiased?

It may seem that technology is neutral, but that is not exactly the case. AI is only as equitable as the humans that create it and the data that feeds it.

“Machine learning that supports automation and AI technologies is not created by neutral parties, but instead by humans with bias,” explains Siobhan Hanna, managing director of AI data solutions for digital customer experience services provider Telus International. “We might never be able to eliminate bias, but we can understand bias and limit the impact it has on AI-enabled technologies. This will be important as the cutting-edge, AI-supported technology of today can and will become outdated rapidly.”

2. What is ethical AI?

While AI or algorithmic bias is one concern that the ethical use of AI aims to mitigate, it is not the only one. Ethical AI considers the full impact of AI usage on all stakeholders, from customers and suppliers to employees and society as a whole. Ethical AI seeks to prevent or root out potentially “bad, biased, and unethical” uses of AI.

“Artificial intelligence has limitless potential to positively impact our lives, and while companies might have different approaches, the process of building AI solutions should always be people-centered,” says Telus International's Hanna.

“Responsible AI considers the technology's impact not only on users but on the broader world, ensuring that its usage is fair and responsible,” Hanna explains.
“This includes employing diverse AI teams to mitigate biases, ensure appropriate representation of all users, and publicly state privacy and security measures around data usage and personal information collection and storage.”

3. How big a concern is ethical AI?

It's top of mind from board rooms (where C-suite leaders are becoming aware of risks like biased AI) to break rooms (where employees worry about the impact of intelligent automation on jobs).

“As organizations become more invested in AI, it is imperative that they have a common framework, principles, and practices for the board, C-suite, enterprise and third-party ecosystem to proactively manage AI risks and build trust with both their business and customers,” says Irfan Saif, principal and AI co-leader, Deloitte & Touche.

4. How does diversity (or lack thereof) impact ethical AI?

A culturally diverse team is a powerful way to detect, and avoid baking in, the conscious and unconscious biases that can end up in AI, Hanna says. “By tapping into the strength of this diversity, your brand might be better positioned to think and behave differently about trust, safety, and ethics and then transfer that knowledge and experience into AI solutions,” says Hanna, who recommends a “human-in-the-loop” approach. This ensures that algorithms programmed by humans with inherent blind spots and biases are reviewed and corrected in the early phases of development or deployment.

5. What else are leading companies doing to address the ethical risks of AI?

At a tactical level, avoiding data-induced AI bias is critical. CompTIA advises ensuring balanced label representation in training data, as well as making sure the purpose and goals of the AI models are clear enough that proper test datasets can be created to test the models for biases.

Open source toolkits from organizations such as Aequitas can help measure bias in uploaded data sets. Themis-ml, an open source machine learning library, aims to reduce data bias using bias-mitigation algorithms. There are also tools available to identify flawed algorithms, like IBM's AI Fairness 360, an extensible, open source toolkit which combines a number of bias-mitigating algorithms to help teams detect problems in machine-learning models.

Some organizations are creating ethical guidelines for AI development as well as clear processes for informing users about the use of their data in AI applications. Nearly half (45 percent) of organizations had defined an ethical charter to provide guidelines on AI development in 2021, according to the Capgemini Research Institute survey – a dramatic increase from just 5 percent in 2019. However, the 59 percent of organizations that informed users last year about the ways in which AI decisions might affect them was a drop from the 73 percent that did so the year prior.
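To show the kind of check these toolkits support, here is a minimal sketch using AI Fairness 360's dataset and metric classes on a toy hiring table. The column names, the tiny hand-made data, and the choice of "sex" as the protected attribute are assumptions for illustration, not a recommended audit procedure.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy outcome data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the binary decision the model or process produced.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.9, 0.3, 0.6, 0.8],
    "hired": [1, 1, 0, 1, 1, 0, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["sex"]
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact below ~0.8 is a common red flag (the "four-fifths rule").
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```

Group fairness metrics like these are a starting point for a conversation, not a verdict; which metric matters depends on the decision being made and who bears the cost of errors.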
6. Will AI steal my job?

Myth-busting can also be a meaningful aspect of ethical AI. “There are a number of misunderstandings that business leaders should be aware of,” says Hanna of Telus International. “For instance, AI will not replace the human workforce, but support it. While the technology has proven to be helpful across a number of industries, including outperforming humans in diagnosing cancer or reviewing legal documents, a cataclysmic impact on human jobs is not in our future.”

Business leaders should focus their efforts not on automating as many employees as possible out of their roles, but rather on what new work those employees may be freed up to do. What business opportunities does that automation open up?

For more advice on managing job loss fears, read Adobe CIO Cynthia Stoddard's article: "How IT automation became a team eye-opener." As Stoddard puts it, "When you mention the word 'automation,' everyone's visceral reaction is to wonder what will happen to their job. But once our team saw the outcomes – that they could participate in the future of thought, experiment with new technologies, and focus on honing new skills – IT automation became a real eye-opener."

7. Is ethical AI IT's job?

Ethical AI is important to everyone, but IT leaders can play a powerful role.

“AI greatly impacts our lives and is being integrated into every single industry. Trust is everything in the digital era, and enterprise IT leaders should be educated on what constitutes trusted and ethical AI to lead their organizations to establish the correct guidelines and frameworks into their programs,” Hanna says. “While industry standards in this area are still maturing, there is widespread recognition that product architecture and development should be based on appropriate ethics.”
<urn:uuid:1183a15d-5907-446b-8593-165fc54e25ba>
CC-MAIN-2022-40
https://enterprisersproject.com/article/2021/9/what-is-ethical-artificial-intelligence-ai-7-questions-answered
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00442.warc.gz
en
0.951113
1,667
2.65625
3
The topic from the list I have selected today is: Are blogs and wikis records?

To start with I need to define what a 'record' is. There are a range of definitions out there, so I'm going with one from a pamphlet on records by ARMA, typically a good authority on Records Management:

A record is recorded information that supports the activity of the business or organization that created it. It can take the form of:
- paper documents such as a hand-written memo or a hardcopy
- electronic records such as databases or e-mail
- graphic images such as drawings or maps; these may be in photographic, electronic, or hard-copy formats

The ARMA pamphlet has an interesting take on records that I haven't often seen elsewhere: the importance of document preparation and content. Many sources that focus on records are more interested in their retention and eventual destruction, which is also important but does not ensure that you are retaining information that is actually 'record-worthy'. The ARMA pamphlet has an interesting section, "How do I write for the record?", which has some standard rules for writing business documents:

The creation of or writing for the record begins the life cycle for recorded information. The purpose for writing is to:
- communicate information for use immediately
- transfer or convey information for reference
- document or record an event, decision, or process

When writing, ask yourself these questions:
- Am I the right person to author this?
- Would I cringe if my mother read it?
- Would I be embarrassed if it were published in a newspaper or put on a bulletin board?
- Would I be comfortable if senior management read it?
- Do I have any hesitation signing my name to it?

I think many people would agree that these rules should also be applied to professional blogs and wikis. Maybe the author will apply them with some latitude, since he or she wants people to read something interesting and not just another dull business document.

Since a professional blog hosted within a corporate website will feel the need to follow these writing rules, this implies that the content being published is important enough to represent the views of the company and something that customers might act on (disclaimers accepted). This in itself puts blog posts in the realm of business records that should be appropriately written, retained and destroyed according to the records policies of the company.

Wikis need to be treated more stringently than blogs. Since the point of a wiki is typically to convey information in a form that appears reasonably authoritative, the apparent value of the information presented is further inflated. The fact that anyone (within the bounds of your organization) can potentially edit the information again makes the first half of the lifecycle uncontrolled and therefore uncomfortable for companies.

Companies are still adapting to how they deal with the seemingly uncontrolled authorship of content, the first stage in the records lifecycle. While they are deciding, standard records management policies can still be applied to published information, handling the second half of the records lifecycle and providing an essential historical record that enables the company to react and respond to questions down the line.

While it sounds like I'm proposing professional blogs and wikis should be highly controlled, I don't believe that their underlying value as collaborative and distributed authoring and publishing tools should be diluted.
I'm not proposing formalized review and approval processes. It is important, though, for the protection of everybody, that companies publish concise policies on the use and content of these collaborative tools. The more restrictive the policies, the less value the tools will likely have, since they will be relegated to being a new view onto an outdated information publishing policy.

From a control perspective, I am proposing that corporately hosted blogs and wikis should be treated as formal business records at the point of publishing. Every post or wiki entry should be captured and stored on publishing or subsequent editing, complete with details of the user performing the actions. This gives companies the audit history needed to respond to questions about information that was made publicly available on their site.

Corporate blogs and wikis are tools that are valuable for communicating with a company's customers and prospects. Given that value, companies must treat these tools as another source of records. Although the authorship policies around the first half of the record lifecycle are still being developed, the traditional second half of the lifecycle, including retention and disposition, should still be applied. This may require organizations to look at how they can integrate their current tools into a records management system, or select from the more corporately focused tools that are coming on to the market.
<urn:uuid:20d98e7d-5c44-45a4-9f77-de0f720bf24e>
CC-MAIN-2022-40
http://blog.consected.com/2006/09/are-blogs-and-wikis-records.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00442.warc.gz
en
0.955412
951
2.515625
3
What Does No-Code Mean?

No-code is an approach to designing and using applications that doesn't require any coding or knowledge of programming languages. This type of software is part of the self-service movement that empowers business users to create, manipulate, and employ data-driven applications to do their work better.

In reality, there is coding involved in any no-code automation tool. The best of them simply mask the required coding behind visual mechanisms that let users point-and-click, drag-and-drop, and create maps of the processes that are part of their applications. The underlying software provides the coding. Low-code solutions are those in which users need IT to customize no-code development with minimal amounts of coding.

What are the benefits of no-code software?

There are many benefits of no-code development and applications. Some of these directly pertain to the automation advantages of the no-code process, including increased productivity, greater efficiency, and fewer human errors. Additionally, no-code allows IT teams to concentrate on more meaningful tasks that simplify building software for the business.

Business users, however, are the biggest winners of no-code programming. They're able to use these data-driven resources in a self-service manner to improve their job performance, spend less time waiting on IT, and devote more time to achieving business objectives. Specific benefits include:

Empowering the business with self-service tools to be more efficient and effective decreases time to action and the cost of waiting on IT to create business applications with conventional programming approaches. No-code lets companies save time and money in both departments.

Organizations are much more agile when business users can devise their own programs with no-code and low-code tools. These equip them to react to emerging market conditions and business circumstances faster while better preparing for the future.

The self-service nature of these software solutions decreases the amount of training business users need to perform their jobs. This lowers costs and accelerates time to value for business users, who can work at their own pace without learning technical skills.

No-code programming ultimately supports better application building by giving the power to create apps to the end users who actually need them, know what's required of them, and can modify them as desired for iterative improvements.

No-code platforms have the distinct ability to integrate with a wide range of other platforms and systems. This makes it much easier to take complex steps, such as digitizing an organization's business processes, moving the business forward without much disruption. It also means that businesses are much more likely to be able to control application updates.

One of the challenges of development for business is getting processes approved. No-code platforms synergize IT and business teams. This makes collaborative development an easier road to walk, allowing the company to be more responsive. It also paves the way for citizen developers to quickly create enterprise-grade applications according to their needs.

Who is using no-code software?

No-code software is used by organizations in just about any industry you can find.
It’s mainly used in two ways: for front-end, customer facing applications and backend internal applications. No-code and low-code solutions are frequently used for web applications. Although they’re simple enough for business users, many developers use low-code approaches to build complex applications for cloud integrations and numerous other use cases. Codeless application building is widely deployed by a variety of organizations in financial services. It’s also leveraged by a number of horizontal business functions like accounting, human resources, sales and marketing, and customer support/service. No-code automation is key for solving common problems businesses face today, like employees transitioning back to physical offices, or even for implementing solutions in which they work remotely. Any company that’s undergoing some form of digital transformation is likely using no-code to make its workforce more efficient. Specific use cases include: No-code development is critical for assisting IT users in two ways. It either helps them accelerate the time they spend building complicated apps for the business or enables them to add low-code implementations to fine-tune business applications for specific use cases. For example, some of the most complex serverless computing deployments rely on low-code to reduce cloud costs. Several insurance companies use low-code software to transform manual processes into digital ones. One of the most time-honored use cases is ingesting customer information to deliver quotes. By building applications to automate this process, insurers can abandon manual, spreadsheet-based processes to give customers quotes much quicker. The classic example of no-code software creating self-service value for the business is using this technology to create dashboards for analytics. With intuitive drag-and-drop capabilities, users can assemble data from their sources for Business Intelligence and reporting to get a comprehensive look at results for informed decision-making. Low-code options are increasing in healthcare settings for building customer-facing applications. Some of these are for patients to access their own data and possibly share them with other providers. Regardless of the specific use case, low-code tools speed up the time required to make these apps. In finance, low-code development is routinely used to connect external, customer-facing systems with internal ones for things like mobile banking or online banking. No-code options are perfect for API configurations and integrations between these systems so consumers can manage their finances, access their accounts, transfer funds between them, and more. No-code software is perfect for the rapid transactional systems powering many e-commerce use cases for retailers. This approach simplifies the programming and integration of the various data types involved in this example. It’s also helpful for creating user-friendly customer interfaces. Frequently asked questions about no-code automation From an organizational standpoint, it depends on your needs. If you replace your IT department with no-code solutions, you’re limited to what those solutions were made for. In other words, if you have a large business with lots of moving parts, no-code should be an IT supplement, not an IT solution. How do I get started with no-code? One of the quickest, painless, and easiest ways to get started with no-code automation is to use Robotic Process Automation (RPA). 
The Automation Anywhere cloud-native RPA platform, Automation 360, enables users to accelerate almost any type of process for any industry or use case. It's based on employing dynamic software agents, also known as bots, to implement the actions required to automate the steps of a process, such as paying an invoice. The best part is that the no-code user interface allows users to train bots to do any task required without knowing a programming language.

An Automation Anywhere RPA solution lets users reap the benefits of both automation and no-code. This combination not only results in fewer errors, greater productivity, and heightened efficiency, but also democratizes these capabilities across the organization. With RPA, everyone from C-level executives to new hires can quickly learn the steps involved in automating a process and become a citizen developer.

No-code RPA software delivers these advantages in a couple of different ways, all of which are intuitive and swiftly learned. The first has already been discussed and involves point-and-click, drag-and-drop techniques for interfacing with RPA tools. Instead of writing lines of code to indicate which sources to pull data from to integrate with HR systems to fulfill employee vacation requests, for example, end users can simply use their mouse to demonstrate this action. The underlying system will internalize it and mimic it upon request.

The second way no-code RPA lets workers maximize the gains from automation is by using machine learning and AI to watch everything users do when executing a process. Techniques like computer vision can observe everything a user does on the screen to provide timely information for a customer service request, for example. It can see what takes place, from individual mouse clicks to which downstream systems are used to process the information. All users have to do is complete the process manually, as they've done numerous times before, and the RPA system will watch and learn how to do it. Users simply indicate when the process starts and stops, then do everything required to complete it. With this ease of use they can train bots to accomplish the same goals, but do so tirelessly and without human error. With this no-code RPA approach you get the best of both worlds, no-code and automation, to help you do your job better.
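To illustrate the kind of process a bot takes over, here is a hedged, generic sketch of rule-based invoice routing in plain Python. It is not Automation Anywhere's API or Automation 360 code; the field names, the approval limit, and the CSV output are all assumptions invented for the example.

```python
import csv
from pathlib import Path

APPROVAL_LIMIT = 10_000.0  # hypothetical business rule

def route_invoice(row: dict) -> str:
    """Apply the same checks a human reviewer would: known vendor,
    positive amount, under the auto-approval limit."""
    amount = float(row["amount"])
    if row["vendor"] and 0 < amount <= APPROVAL_LIMIT:
        return "queued_for_payment"
    return "needs_review"  # anything unusual goes back to a human

# Stand-in for invoices arriving from an upstream system.
invoices = [
    {"vendor": "Acme Ltd", "amount": "1250.00"},
    {"vendor": "",         "amount": "99.00"},     # missing vendor -> review
    {"vendor": "Globex",   "amount": "25000.00"},  # over limit -> review
]

with Path("routed_invoices.csv").open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["vendor", "amount", "status"])
    writer.writeheader()
    for row in invoices:
        writer.writerow({**row, "status": route_invoice(row)})
```

The point of an RPA platform is that a business user assembles this logic visually or by demonstration instead of writing it, but the decisions the bot makes reduce to rules like these.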
<urn:uuid:27bda350-5544-450b-b7b1-837035a6f70b>
CC-MAIN-2022-40
https://www.automationanywhere.com/rpa/no-code-automation
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00442.warc.gz
en
0.929268
1,862
2.8125
3
A botnet is created when multiple bots carry out malicious functions together. Botnets begin when bots are installed onto devices, generally undiscovered, using each infected device's computing power and connectivity to carry out a series of commands. As more devices are recruited, the botnet's capacity for the activities it was deployed for grows. These include credential stuffing, distributed denial-of-service (DDoS) attacks, click fraud, rewards scams, digital currency mining, and auction sniping.

Once discovered, bots are removed by uninstalling them. This is sometimes a difficult undertaking, as some bots are sticky and may turn a device into a "Bitcoin mining zombie" that defensively keeps working while limiting the user's control over other functions.

"My computer was inaccessible so I took it in for service. They told me a bot was installed and that my laptop is part of a botnet that's doing click fraud. They uninstalled it and suggested I be wary of clicking email attachments that might have malware that installs a bot."
<urn:uuid:a00a8f78-a80b-4b4e-9fd4-40e94b3717d3>
CC-MAIN-2022-40
https://www.hypr.com/security-encyclopedia/botnet
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00442.warc.gz
en
0.962849
218
3.46875
3
Business Email Compromise (BEC) is a type of cyber threat, or exploit, that is rapidly becoming one of the most widespread email attacks faced by organizations around the world. The premise of the attack is very simple: an attacker compromises or imitates a legitimate business account, and then uses this account to request fraudulent payments from customers or contacts. While simple in concept, these attacks are highly damaging and difficult to prevent.

The FBI has reported that BEC attacks are becoming more harmful. It found that between December 2016 and May 2018, there was a 136% rise in the number of successful reported BEC attacks around the world. It's estimated that between October 2013 and May 2018, Business Email Compromise alone cost businesses over $12 billion. Analysts expect these attacks to only become more common, with the associated financial costs continuing to rise.

Business email compromise targets businesses of all sizes, but it is especially prominent against Fortune 500 companies, educational institutions, and small and mid-sized businesses. These attacks rely heavily on social engineering to trick employees and executives into making fraudulent payments.

In this guide, we'll cover how business email compromise works, the different types of business email compromise threatening organizations, why business email compromise is so damaging, and how your organization can take steps to stop these attacks.

What is Business Email Compromise and how does it work?

Business Email Compromise is a damaging email attack that involves cyber criminals compromising email accounts to try and trick employees into making fraudulent payments.

Email continues to be the main way in which businesses communicate with their trusted contacts, partners and other businesses. It's very likely that you've used email at some point to send out an invoice or request a payment. However, email addresses can be easily impersonated, and email accounts can be easily compromised.

BEC attacks exploit the weakness of email to target top-level people within an organization. Often BEC starts with a phishing attack that allows cyber criminals to gain access to an important email account within an organization, for example that of someone in the finance department, or the company CFO or CEO. Once attackers have access to this account, they can then send out emails that appear to be legitimate, asking for wire payments to be made by others in the organization, or across its supply chain. These emails won't be flagged as malicious by any anti-virus or basic email filtering technologies, and most users probably won't expect their boss or a trusted contact to be compromised, making this a particularly harmful kind of attack.

Another technique cyber criminals can use is simply spoofing the domains of high-level business email accounts. The attacker registers a domain that looks almost identical to the legitimate one, differing by only a character or two, and sends from that address instead. This is known as Lookalike Domain Spoofing. The similarity of the email addresses may be enough to fool unsuspecting users into believing it's the real contact that has emailed them, which could convince them to make a payment.

This type of BEC attack is less sophisticated than full account compromise, but it is much more common. It's also much more likely to be stopped by email security technologies, as they can detect when a domain has been spoofed. However, it can still be very successful in convincing unsuspecting users.
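To make lookalike detection concrete, here is a minimal Python sketch that flags sender domains which nearly, but not exactly, match a trusted domain. The allow-list, threshold, and example addresses are all hypothetical; real gateways also handle homoglyphs, punycode, and many other tricks that simple string similarity misses.

```python
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"examplecorp.com"}  # hypothetical allow-list

def closest_trusted(domain: str) -> tuple[str, float]:
    """Return the most similar trusted domain and the similarity ratio."""
    best = max(TRUSTED_DOMAINS,
               key=lambda d: SequenceMatcher(None, domain, d).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def is_suspicious(sender: str, threshold: float = 0.85) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False                      # exact match: genuinely internal
    _, score = closest_trusted(domain)
    return score >= threshold             # near-match: likely lookalike spoof

print(is_suspicious("ceo@examp1ecorp.com"))       # True: 'l' swapped for '1'
print(is_suspicious("ceo@examplecorp.com"))       # False: exact trusted domain
print(is_suspicious("news@randomvendor.net"))     # False: not close to any trusted name
```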
Domain spoofing is also commonly used to impersonate brands, such as Microsoft or Apple. Attackers copy these brand domains to try to convince users to enter passwords, or make payments.

What are the Different Kinds of Business Email Compromise?

We've broadly covered two methods by which attackers can carry out Business Email Compromise attacks, but the FBI has identified five unique variants of BEC. Here's a brief rundown of what each involves:

- CEO Fraud: Attackers impersonate a CEO, or a high-level executive, and target employees with requests for payments.
- Account Compromise: An employee's email account is compromised, and attackers use their contacts to request payments to their own accounts.
- Bogus Invoice Schemes: Attackers impersonate the suppliers of foreign companies in order to request fraudulent fund transfers and payments.
- Data Theft: Employees in HR and admin departments are compromised so that attackers can gain access to sensitive company and customer information.
- Attorney Impersonation: Attackers impersonate lawyers or solicitors to find out about confidential business events. This is a sophisticated type of account compromise attack, and much less common.

Why Are Business Email Compromise Attacks Becoming More Common?

Security analysts agree that BEC attacks are becoming more common because they are low risk for attackers, can be relatively low cost to pull off, and are often successful. Rather than needing to spend time developing malware, or trying to gain access to systems, Business Email Compromise allows cyber criminals to very quickly get access to accounts and send out emails asking for payments. With just one compromised account, cyber criminals can send out hundreds of fraudulent emails, with a pretty good chance that at least one will be opened or replied to.

For high-profile targets, cyber criminals may not even need to collect information for account compromise attacks themselves. High-level employee email credentials are commonly bought and sold on the dark web. Research from LastLine tells us that CEO, CFO and executive account details fetch a high price, but attackers can make a profit of thousands by successfully mounting a business email compromise scam.

Why is Business Email Compromise So Dangerous?

Traditional approaches to email security rely on detecting threats. This could be a malicious domain that's known to send out spam emails. Or it could be an attachment that contains malware, or a URL that leads to a harmful website. Email security technologies can identify threats based on patterns or signatures and stop those emails from being delivered to your users.

BEC attacks don't involve any malware or harmful content being sent. These emails come from legitimate domains and will appear to most email security technologies to be completely innocuous. This means the email has a high chance of being delivered to your users' inboxes. Because they target the human factor within the organization to succeed, once in the email inbox BEC attacks have a good chance of tricking employees into believing they are real.

As we've covered, BEC attacks often target company executives, like CEOs or CFOs, or employees that work within company finances.
When an invoice arrives from an employee like this, people usually trust that it is legitimate, and may go ahead and make the payment without checking the legitimacy of the request. In addition, attackers are spending more time developing BEC attacks, investigating which individuals within an organization are likely to have the authority to ask for invoices to be paid.

Considering these factors, it's no surprise that Business Email Compromise is growing more common and becoming more harmful to organizations. There have been numerous examples of high-profile BEC attacks against organizations of all sizes. The US Treasury found that the number of business email compromise attacks reported nearly doubled from 2016 to 2018, with nearly 1,100 attacks reported every single month. The associated costs also continue to grow, now costing US companies an average of $301 million every single month, according to a Treasury Department analysis.

How Can You Stop Business Email Compromise Attacks?

As we've covered, stopping business email compromise attacks is a challenge for businesses. However, with a strong multi-layered security approach and increased user education, businesses can greatly reduce their risk of attack. Here are the steps businesses can take to stop business email compromise.

Secure Email Gateway

The first layer in a multi-layered approach should be a Secure Email Gateway. This acts as a firewall for your email communications, and stops spam, malware and viruses from being delivered to your users via email. A strong email gateway will detect a spoofed domain coming from an attacker and will in most cases block those types of business email compromise from being delivered.

Admins can also use a secure email gateway to check for keywords commonly used in business email compromise attacks, such as 'payment,' 'urgent,' 'sensitive' and 'secret.' While it may not be practical to ban these emails altogether, the gateway will detect if something appears suspicious and place it into a quarantine.
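The keyword check reduces to something like the following Python sketch. The keyword set and the quarantine decision are illustrative assumptions; a real gateway would weigh keywords alongside sender reputation, authentication results, and many other signals.

```python
BEC_KEYWORDS = {"payment", "urgent", "wire transfer", "sensitive", "secret"}

def should_quarantine(subject: str, body: str) -> bool:
    """Flag, don't block: these words appear in legitimate mail too,
    so a hit sends the message to quarantine for human review."""
    text = f"{subject} {body}".lower()
    return any(keyword in text for keyword in BEC_KEYWORDS)

print(should_quarantine("Re: Q3 figures", "See attached deck."))          # False
print(should_quarantine("URGENT", "Please process this wire transfer."))  # True
```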
Implement Post-Delivery Protection

The second step is implementing Post-Delivery Protection. Post-Delivery Protection tools use machine learning and artificial intelligence to monitor email networks for signs of malicious activity. They can pick up on the tell-tale signs of account compromise, such as multiple failed login attempts and unusual locations and times for sending email. If an advanced post-delivery protection service identifies these logins, it will mark any emails from this account as suspicious, or stop them being delivered altogether, helping to minimize the risk of business email compromise.

Many post-delivery protection solutions also allow users to report emails as suspicious, which will remove those emails from the inbox of anyone else who has received them. They will also place warning banners on emails from new or unusual contacts, helping to mitigate the risk of lookalike domain spoofing and protect users against business email compromise.

Implement Security Awareness Training and Phishing Simulation

Training users to be aware of what malicious emails and phishing attacks look like is an important step in increasing your organization's protection against business email compromise. These attacks target users who are unaware of security issues and trust that the emails they receive are genuine. Security awareness training can teach users what phishing emails look like, and instil crucial security behaviours, like never replying to suspicious emails and reporting them immediately to IT departments. It can also be useful for IT departments to implement policies such as never paying invoices or transferring data without authorization from multiple sources. Security Awareness Training helps to reinforce these policies.

There are multiple vendors who deliver Security Awareness Training as a service, offering interactive and bite-sized learning courses to help users detect and report phishing attacks. While not every user will need these courses, and not every user will learn from them, implementing security awareness will still be useful in improving your resilience against business email compromise.

In addition to awareness training, many vendors also offer phishing simulation. This involves admins creating simulated phishing emails which look genuine. These emails are then sent out to users to test how effectively they can spot phishing attacks. This can be really useful for IT teams to monitor how at risk they are from phishing attacks, and to teach users what phishing attacks look like and how vigilant they need to be in looking out for real email threats. Because of this, phishing simulation can be a useful way of tackling business email compromise.

Implement Identity Management and Account Security

A final step in building strong defences against account compromise is improving the security of the accounts themselves. Having a strong identity management solution in place can stop attackers from gaining access to accounts in the first place.

The most important aspect of protecting accounts is ensuring that each user has a unique password for all of their accounts which cannot be easily guessed. Business password management tools can help users to keep on top of their passwords, while allowing admins to ensure users are keeping passwords updated regularly.

In addition to strong passwords, each of your corporate accounts, but especially email, should enforce multi-factor authentication. Identity management solutions offer multi-factor authentication which can be enforced across all accounts. They will ensure that accounts can only be accessed when a user has something they know (like a password) and something they have, which can be a code from a text message, or even a fingerprint. Obviously, it's unlikely that even the most sophisticated hacker will be able to steal a fingerprint from a target, so this is a great step to take to thwart business email compromise attacks.

Some identity management platforms take this further by offering Adaptive Authentication. This analyses each login request individually, taking into account factors such as the device being used, the location, and IP reputation. This information can immediately enforce multi-factor authentication, and alert IT teams to suspicious attempts to log in. This greatly improves account security, without getting in the way of legitimate users logging into their accounts.
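For a sense of how the "something they have" factor works under the hood, here is a minimal time-based one-time password (TOTP) sketch using the pyotp library. The user name and issuer are placeholders; in production the secret would be generated at enrollment, stored server-side, and delivered to the user's authenticator app via a QR code.

```python
import pyotp

# Enrollment: generate a per-user secret and share it with the
# user's authenticator app (usually as a QR code of this URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice", issuer_name="ExampleCorp"))

# Login: the user types the 6-digit code their app displays.
code = totp.now()                 # what the authenticator shows right now
print(totp.verify(code))          # True within the current time window
print(totp.verify("000000"))      # almost certainly False
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a stolen password alone is no longer enough to compromise the account.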
Top Vendors to Stop Business Email Compromise

Proofpoint Essentials is one of the market-leading email security solutions. It's an Email Security Gateway product, meaning it sits in front of your inbox and blocks spam, malware and phishing attacks from reaching your inboxes. Proofpoint Essentials stops threats such as impostor email, phishing, spam, bulk email, and viruses. Proofpoint Essentials is very popular among MSPs, resellers and small and mid-sized businesses. It provides enterprise-grade threat protection, and is easily deployed and well integrated with Office 365.

Proofpoint also offers a range of additional features with the Essentials bundle, including Email Encryption, Email Archiving, Continuity, and full email logs and audits.
<urn:uuid:1fd0f8bb-3b08-4157-84a4-62f53e5238a8>
CC-MAIN-2022-40
https://expertinsights.com/insights/how-to-stop-business-email-compromise/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00442.warc.gz
en
0.932406
2,898
2.671875
3
Flash storage is a solid-state technology that uses non-volatile memory (NVM), meaning data is never lost when the power is turned off. It can be electronically erased, and it consumes less energy and space than other storage solutions. This technology is crucial for organizations in many sectors that require massive amounts of storage and persistence to power their operations.

See the flash storage hardware case studies below for a real-world look at the importance of storage innovation across industries:

See more: The Flash Storage Market

5 Flash Storage Hardware Case Studies

1. University of Pittsburgh Medical Center

Industry: Health care

Flash solutions: IBM FlashSystem and IBM Spectrum Storage

- 2,400% growth in storage under management with no increase in IT staff
- 50% cost savings for primary storage
- 5:1 data reduction
- 50% shorter time to search patient records

The University of Pittsburgh Medical Center (UPMC) teamed up with IBM to improve the utilization and performance of its data storage. This allowed it to improve patient care by making it faster to access records and easier to manage storage.

Kevin Muha, director of storage and data protection at UPMC, described the challenge faced by the medical center: "The volume of data that we store and make available today is far beyond even the wildest dreams of a few years ago, and it's still growing. For example, we've got more sophisticated clinical imaging systems generating increasingly large files and patient records getting progressively more detailed. Unless the tens of thousands of employees across UPMC can lay their hands on the data as and when they need it, innovations won't have the hoped-for impact on patients."

To address its needs, UPMC opted for a software-defined storage (SDS) infrastructure using IBM's Spectrum Storage Suite. It also utilized Spectrum Protect for data resilience, allowing it to back up the whole system overnight and minimize the impact on daily operations.

UPMC used IBM's Spectrum Scale to store its imaging archive because it supports synchronous replication in a stretched cluster across two data centers, minimizing the risk of data loss. It also allows multiple users to access data simultaneously without loss or inconsistency.

Finally, UPMC embarked upon a data reduction project to compress data and eliminate redundancy. It migrated 6,000 VMware virtual machines (VMs) running business-critical applications without loss and reduced its data volume fivefold.

2. School District of Palm Beach County

Flash solutions: NetApp AllFlash and NetApp hybrid controllers

- Consolidated 3-4 vendors into one cluster
- Migrated 1,000 virtual machines
- Reduced data center footprint

The School District of Palm Beach County has over 200 schools with almost 250,000 students, making it the 10th largest district in the country. With approximately 30,000 employees and shrinking budgets, the district had more data than it could handle.

By partnering with NetApp, the district was able to migrate 1,000 VMs to one NetApp controller at one site within a two-week timespan. It also reduced its data center footprint from 12 racks to one. This consolidation gave all of its applications better throughput and allowed it to condense data from 3-4 legacy vendors into one cluster. The district can now provide services more efficiently and improve the student experience. Most notably, it upgraded and secured its Student Information System: the records, grades, attendance, and more for all students.
"These students are the lifeblood of the county. They are the future," says Joe Zoda, senior system engineer for the district. "By partnering with NetApp, we were able to ensure that the data, the information, everything is secure. Basically, we were able to future-proof our technology as far as storage."

This transition made data storage, management, and analysis easier, helping to improve the classroom experience and reduce administrative burden.

3. BDO Unibank

Flash solution: Huawei OceanStor Dorado All-Flash Storage

- Achieved an active-passive system to protect business data
- Supported elastic service expansion
- Cut rollout time from two days to six hours
- Sped up data monetization in storage resource pools

The largest bank in the Philippines, BDO Unibank, is working with Huawei to provide digital financial solutions to the more than half of the country's population who are unbanked. In April 2021, the bank created a mobile wallet called BDO Pay, offering users contactless payment and digital banking services to improve the inclusiveness of financial solutions.

In order to innovate, the bank had to upgrade its legacy infrastructure, now powered by Huawei's OceanStor Dorado All-Flash Storage Solution for secure, reliable data sharing and cloud storage. The solution is built on the Huawei Data Management Engine (DME) and OceanStor Dorado Storage to safeguard financial data in response to increasing capacity requirements.

The legacy system was siloed and limited in its processing capacity, preventing BDO from using more than 40% of its capacity. As a replacement, Huawei's solid-state drive (SSD) technology and storage area network (SAN) with network-attached storage (NAS) increased throughput and unified file sharing and backup.

4. British Army

Flash solution: Pure Storage FlashBlade multi-cloud solution

- Connected remote staff with secure communications
- Provided cost-effective storage for the private cloud
- Modernized data protection

The British Army faced challenges with its digital transformation program, THEIA. It needed to connect teams from all over the world in an economical and secure way, but didn't have the virtual infrastructure required.

By partnering with Pure Storage, the Army was able to adopt a modern private cloud system with a smaller data footprint. It was also better able to secure sensitive data and communications so remote teams can work together effectively. Pure Storage's FlashBlade technology enabled an IT and business transformation with more efficient digital processes, cost savings, and security at scale.

5. Cerium Networks

Flash solution: Dell PowerStore scalable all-flash storage

- Upgraded storage systems to enable better competitive research
- Data-centric design made to support workloads with unified storage

IT solutions provider Cerium Networks partnered with Dell Technologies' EMC PowerStore to modernize its digital transformation efforts. Dell's PowerStore servers allow processing of massive workloads without a reduction in performance. They also provide integrated security for easier data management.

PowerStore servers offer modular infrastructure, so IT teams can easily tailor and manage systems without added costs or hassle. PowerStore offers 25% faster mixed-workload performance with a fully adaptable architecture for maximum operational efficiency.

See more: Best Data Storage Solutions and Software
<urn:uuid:f7a92d45-62f6-4a6d-a21d-ed025934ec6b>
CC-MAIN-2022-40
https://www.enterprisestorageforum.com/hardware/flash-storage-use-cases/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00442.warc.gz
en
0.925885
1,475
2.71875
3
Threat Assessment Teams (TATs) play a very important role in prevention efforts and in making organizations and communities safer. They are even more critical today due to the COVID-19 pandemic, which has created more stressors and even more at-risk individuals for TATs to address. TATs have existed for a very long time in schools, higher education institutions, organizations, and communities, but increasing violence, suicides, and other incidents raise concerns and questions about what is missing.

Threat Assessment Teams: A Brief History

TATs are often one of the main topics of discussion after horrific tragedies. For example, after the Columbine massacre in April 1999, federal entities like the NCAVC, FBI, Department of Justice, Department of Education, and others focused on TATs and issued guidelines for creating and implementing TATs to help prevent future shootings, violence, and other tragedies. After the shooting at Virginia Tech, the state of Virginia took it a step further and issued guidelines requiring schools to implement a TAT, as well as guidelines on who the members of the TAT should be.

Over the 20+ years since 1999, TAT guideline documents have been released year after year by the FBI, DOJ, DoED, Secret Service, National Threat Assessment Center (NTAC), American Psychological Association (APA), National Association of School Psychologists (NASP), other associations, state agencies, training institutes, universities, and others. However, after 20+ years of publishing and sharing TAT guidelines and recommendations, the number and frequency of shootings, acts of violence, and other tragedies are increasing in many communities (schools, houses of worship, organizations, etc.). WHY? WHAT is missing?

What's Missing? – Recipes, Ingredients, and Tools

Using an analogy, let me explain what is missing. If I handed you the best cookie recipe ever, could you make the cookies with only the recipe? NO. Just having the recipe is not enough. To make the cookies, you also need the right ingredients AND the right tools. The same goes for TAT guidelines and recommendations: they are essentially TAT recipes without the ingredients and tools. To actually achieve the desired prevention and safety results, TATs also need the right ingredients and the right tools.

For most TATs, the INGREDIENTS (pre-incident indicators, red flags, information, sources, investigations, resources, TAT members, experts, data, etc.) are almost always available in their organization and community, on social media, with family and friends, with law enforcement, and with numerous others.

So, what is missing? Lessons learned and research-based evidence overwhelmingly show that most failed prevention efforts occurred because community and organization TATs did NOT have the right TOOLS. What are the right tools? The right TOOLS include a central community-wide platform to funnel all information and data into one secure information-sharing platform, plus the right TOOLS to collect, share, and connect the INGREDIENTS to see the bigger picture involving the at-risk individual(s), so TATs and other resources are able to take the right actions at the right times to intervene, disrupt, and prevent.

The Funnel and Tools Threat Assessment Teams (TATs) are Missing

#1: Community-wide Funnel

Why is a community-wide funnel needed? Because research and evidence reveal over and over how the majority of pre-incident indicators (red flags, warning signs, concerning behaviors, etc.)
are not getting to the community-wide and organization-wide TATs because they are almost always scattered. With so many new and different incident reporting options, pre-incident indicators are scattered across federal agencies (FBI, DHS, USSS, etc.); across law enforcement ("see something, say something" programs at federal, state, and local levels); across law enforcement agencies and units (city agencies, county agencies, terrorism, homicide, gangs, drugs, domestic violence, homeland security, sex crimes, special weapons, traffic, human trafficking, investigations, detention, area commands, community policing, houses of worship, tourism, and others); and across county offices, businesses, universities, schools, mental health providers, hospitals, nonprofits, and numerous others.

The pre-incident indicators are also scattered across numerous incident-reporting silos such as hotlines (federal, state, local, and specialty), website options, text lines, emails, and app options that seemingly increase every day (even though download and sustained-use rates for apps are extremely low). The pre-incident indicators are also scattered across people (trusted adults), including family, friends, managers, supervisors, employee assistance, teachers, counselors, nurses, neighbors, social media contacts, and many others.

The strategy of creating more and more incident reporting options and systems has proven ineffective and dangerous without a secure community-wide funnel. Critically needed indicators end up scattered across numerous silos without the right tools to funnel all the pieces of the puzzle to the appropriate community-wide TAT and/or organization-wide TAT, who are then unable to see the bigger picture and unable to take the right actions at the right times.

#2: Secure, Automated, and Immediate Information Sharing

Why are secure, automated, and immediate information-sharing tools needed? Funneling everything to everyone is not an option due to FERPA, HIPAA, and other privacy, confidentiality, need-to-know, legal, liability, TAT specialty, and other requirements. The right tools must securely, automatically, and immediately route information according to incident type, location, team, and other appropriate criteria, so immediate actions can be taken by the community-wide and/or organization-wide TAT and other resources in order to intervene and prevent incidents before they occur, as well as get individuals the help they need before they escalate.
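To illustrate what criteria-based routing means in practice, here is a minimal Python sketch. The routing table, team names, and incident types are invented placeholders; a real platform would drive this from configurable policy, with auditing and access controls around every hand-off.

```python
from dataclasses import dataclass

# Hypothetical routing table: (incident_type, location) -> teams with a need to know.
ROUTES = {
    ("threat", "north_high_school"): ["school_tat", "district_tat"],
    ("self_harm", "north_high_school"): ["school_tat", "counseling"],
    ("threat", "city_park"): ["community_tat", "local_pd"],
}

@dataclass
class Report:
    incident_type: str
    location: str
    details: str

def route(report: Report) -> list[str]:
    """Send each report only to teams matching its type and location,
    rather than broadcasting it community-wide (FERPA/HIPAA constraints)."""
    return ROUTES.get((report.incident_type, report.location), ["community_tat"])

r = Report("threat", "north_high_school", "concerning social media post")
print(route(r))   # ['school_tat', 'district_tat']
```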
#3: Community-wide Investigations & Assessments

Why are community-wide investigation and assessment tools needed? Once the funneling and sharing of indicators has taken place, TATs need to immediately begin investigations into the individuals and the situation to determine risks, threats, and next actions. To assist with investigations, tools are needed so anyone can share additional information confidentially or anonymously using non-law-enforcement options. In most cases, assessments (risk, threat, behavior, mental, etc.) will be necessary and may require community or external resources and experts to assist, which will require secure, community-wide, and even nation-wide connectivity.

#4: Connecting Information with Internal and External Resources

Why are connecting-the-dots tools needed? As investigations and assessments are conducted on an ongoing basis and more information is received, tools are needed to continuously, securely, and immediately connect the information with the appropriate TAT members as well as the internal and external resources who need to be added to the TAT, investigation, assessment, and other related efforts.

#5: Intervention & Monitoring of At-Risk Individuals

Why are intervention and monitoring tools needed? As more and more at-risk individuals are identified in schools, organizations, and communities, TATs need tools to keep track of at-risk individuals, intervention actions, ongoing behaviors, de-escalations, escalations, and follow-ups, and to keep all appropriate resources updated on the latest status of each and every at-risk individual and situation. Too many at-risk individuals have "slipped through the cracks and gaps" and caused costly and even tragic results because community-wide and organization-wide TATs did not have the right tools for intervention and monitoring.

#6: Data Analytics & Preventive Intelligence

Why are data analytics and preventive intelligence tools needed? When you have tools to funnel all indicators, together with your TAT investigations and assessments, your TAT interventions, your prevention results, and your lessons learned, into one central, secure, community-wide platform, you can identify patterns, trends, and other analytics that provide preventive intelligence for improving overall prevention efforts.

When TATs – community-wide, organization-wide, and even nation-wide – are missing the right tools, they are unable to turn the recipes (guidelines) into the results (prevention) needed to consistently improve public, school, organization, and community safety. To see what you are missing, click here right now and help your TAT make your community safer.
<urn:uuid:143c4521-bb97-487a-a346-25c0f772fb07>
CC-MAIN-2022-40
https://www.awareity.com/2021/01/07/threat-assessment-teams-six-tools-most-community-and-organization-tats-are-missing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00442.warc.gz
en
0.934416
1,746
2.734375
3
When global aluminum producer Norsk Hydro was struck by an extensive cyberattack in the early hours of March 19, 2019, the situation escalated quickly. Within hours, the company network had to be taken down: production in most of its 170 plants was switched to manual operations and stopped in some of them. Although Norsk Hydro responded quickly and decisively, the financial impact of what corporate information security officer Torstein Gimnes Are called "a company crisis" ultimately amounted to US$71 million.

Norsk Hydro's laudable transparency affords us rare insights into the aftermath of such a security breach. But it is only one example of many such attacks in recent years, serving as a wake-up call to manufacturers everywhere: this can happen to any organization inadequately equipped to defend against cyberattacks.

Adversaries come in many forms: state-sponsored attackers, hacktivists, and those bent on corporate espionage. They can disrupt manufacturing operations or steal sensitive product design information or proprietary and differentiating production techniques. On a small scale, a successful cyberattack can affect the manufacturing organization's reputation and financial performance. On a larger scale, cyberattacks can have a negative impact on national security and the nation's gross domestic product (GDP).

Connected manufacturing environments increase security risks

The backbone of most manufacturing organizations is their operational technology (OT), which includes industrial control systems connected via programmable logic controllers (PLCs). These OT solutions focus on the safety of human operators and the integrity of the manufacturing equipment. Most legacy manufacturing equipment uses proprietary control-system network protocols and isn't connected to the internet, so the industry has not put much emphasis on cybersecurity.

Historically, this air-gapped network architecture, with separation between corporate business systems and the operational and control systems on the manufacturing shop floor, was sufficient. However, as manufacturers implement more automation to improve production throughput and quality and to reduce operating costs, they are increasingly connecting equipment with surrounding processes, business systems and remote operators. Traditional IT and OT silos that used to operate almost entirely independently have started to converge, with equipment becoming IP-enabled and connected with other enterprise network environments.

This makes it possible, for example, to consolidate and centrally operate and control manufacturing processes from remote operations centers, and removes the need for human operators to be physically located on the shop floor. And while that helps increase productivity, it also amplifies the risk that production equipment can be remotely accessed and controlled by external parties.

A striking example of the risks that convergence can introduce is the discovery of issues related to Ripple20 in common OT devices. Ripple20 is a set of 19 security vulnerabilities discovered and published by Israeli security research group JSOF in June 2020. These defects stem from a widely used software library from U.S.-based company Treck. The code, which is embedded in a wide range of automation and internet of things (IoT) devices across all industries, potentially allows a malicious attacker to remotely gain control of vulnerable devices and obtain or manipulate sensitive data on them.
Treck is a known and reputable company, and its shared library was used extensively in millions of products over two decades. Analysts and national advisory agencies have given the Ripple20 vulnerabilities high criticality ratings. While many automation vendors have informed customers of which products are affected, this is not true for all suppliers; in fact, the full exposure may never be fully established. Updating or replacing such products will, therefore, be a lengthy activity and leave manufacturing operations at elevated risk.

From a security perspective, IT/OT convergence and challenges like Ripple20 are driving organizations to develop a holistic and harmonized approach to security in order to deliver an optimized technology solution and reduce business risk. But even as manufacturers struggle with this development, they must contend with the fact that their enterprise's traditional boundaries are expanding to support the increasing globalization of manufacturing supply chains. The shift to demand-driven supply models requires manufacturers to expand their reach across a growing network of supply chain partners, business partners and consumers; all this increases risk.

Connected devices, sensors and smart things must be secured

To support these changes, manufacturers need a way to securely exchange data and connectivity with manufacturers, partners, consumers and connected devices. It is critical that all business-to-business connections be protected — not just with firewalls and intrusion prevention devices, but also by monitoring all traffic traversing into and out of the organization, so that anomalous network activities can be identified.

Covert activities and insider attacks can be spotted through the use of behavioral learning. By knowing which users typically access what systems at a given time, a security solution can detect unusual behavior. A cyberattack can be detected early, before it starts to have an impact on the production process. (A minimal sketch of this baselining idea appears at the end of this article.)

Of course, IoT adds another wrinkle to the manufacturing security scenario, with the addition of countless new components and network-connected devices that communicate internally or externally. These smart connected devices, sensors and controllers are changing the game in manufacturing plants. For example, real-time tracking solutions for manufacturing supply chains such as DXC OmniLocation™ can improve decision making by providing new visibility into inbound materials and track-and-trace solutions for outbound finished goods. When these IoT components and devices are connected to the network, they instantly fall under the purview of the cybersecurity management process.

Today, while a number of IoT standards have been put forth, no dominant standard exists. This complicates the IoT security challenge. It is important for manufacturers to address IoT security risks and mitigate these with an enterprise-wide solution. Manufacturers should use IoT gateways and edge devices to segregate and provide layers of protection between insecure devices, such as those affected by Ripple20, and the internet to help manage the overall lack of security that exists with IoT.

Protect intellectual property

Protecting information in a manufacturing organization is very different from protecting information in other industries.
Whereas the financial, retail and healthcare industries focus on securing personally identifiable information (PII) and credit card records, the manufacturing industry needs to reduce disruption in the supply chain and protect intellectual property. IT controls for protecting information systems are mature, ensuring confidentiality, integrity and availability. The same cannot be said for the OT domain: this is an area where manufacturing organizations need to adapt and evolve their capabilities to protect production processes from cyberattacks.

The introduction of advanced digital manufacturing applications will require next-generation solutions for monitoring both IT and OT, including sensors, networks and connectivity, and edge-oriented computing. Operating models for security management will need to be modernized and strengthened to ensure end-to-end integration, with security mandated as a prerequisite across both IT and OT domains.

Many manufacturing practices in use for years should now be deemed vulnerable and unsuitable for ensuring a secure modern environment. The same is true for many IoT and OT assets, such as those including Treck's exploitable shared libraries (which will require additional layers of protection until the equipment can be updated or replaced).

DXC Technology Security services protect some of the largest manufacturing companies in the world. Our Advisory and Managed Security Services have helped customers defend against modern-day attacks and improve and strengthen their OT security, so cyberattacks can be mitigated through preventive, detective and proactive response measures. Learn more about DXC security services.
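The behavioral-learning idea referenced above can be sketched in a few lines of Python. This is only an illustration of the baselining concept, not DXC's or any vendor's actual implementation, and all names and thresholds are invented for the example.

```python
from collections import defaultdict

class AccessBaseline:
    """Learn which (user, system, hour-of-day) combinations are normal,
    then flag access events that fall outside the learned baseline."""

    def __init__(self, min_observations=5):
        self.counts = defaultdict(int)   # (user, system, hour) -> observations
        self.min_observations = min_observations

    def observe(self, user, system, hour):
        # Training phase: record a legitimate access event from history.
        self.counts[(user, system, hour)] += 1

    def is_anomalous(self, user, system, hour):
        # Detection phase: an event is suspicious if this user has rarely
        # (or never) touched this system at this time of day.
        return self.counts[(user, system, hour)] < self.min_observations

# Build a baseline from (hypothetical) historical logs, then score new events.
baseline = AccessBaseline()
for user, system, hour in [("op1", "plc-line-3", 9)] * 20:
    baseline.observe(user, system, hour)

print(baseline.is_anomalous("op1", "plc-line-3", 9))   # False: routine access
print(baseline.is_anomalous("op1", "plc-line-3", 3))   # True: 3 a.m. access
```

A production system would, of course, model far richer features (source address, protocol, volume, sequence of actions) and use statistical or machine-learned scoring rather than a simple count threshold, but the baseline-then-deviate structure is the same.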
On the heels of a recent study warning about mobile malware emerging as the new frontier of cyber crime comes another report that discusses the evolution of both smart devices and the threats that target them. The "Smartphone Malware Report," by Panda Security and Spain's National Cyber-Security Advisory Council, follows the historical milestones of mobile devices, starting with IBM Simon, the first smartphone, designed in 1992, and discusses security issues, threat vectors and predictions for the future.

Boosting the security of cellphones is a major challenge for any security department, and the threat must be dealt with as soon as possible to help protect users' information and businesses, said Luis Corrons, technical director of PandaLabs.

"Even though cellphone malware is not a priority for cyber crooks yet, we are starting to see the first major attacks on these platforms," he said. "We predict that the next few months will see significant growth in cellphone attacks, especially on Google's Android operating system."

Detailing the evolution of mobile malware, the report discusses how Cabir, the first malicious code for smartphones, appeared in 2004. It was soon followed by Pbstealer, one of the first binary files that could steal confidential information from cellphones; Ikee.A, the first-ever iPhone worm, which changed the wallpaper to an image of Rick Astley; and the more recent malicious application Droid09, which infiltrated the Android Market.

The report also highlights future threat scenarios, including:
- Cellphones as a new method of payment
- Online banking applications for cellphones
- User tracking (using GPS technology)
- Advanced social-engineering attacks

Unlike the previous generations of cellphones, which were vulnerable to local Bluetooth hijacking, the report said, modern smartphones are susceptible to the same risks as PCs. "New attack vectors will increasingly be exploited by fraudsters as online banking services use these devices as second authentication factors, given the current convergence between PCs and cellphones," the report concluded.
Data Protection Day 2022: To Protect Privacy, Remember Security

Today's privacy and security conversations often happen in silos, but key privacy principles from decades ago remind us that they are intertwined, especially in the face of today's risks.

January 28, 2022, marks 15 years since the first Data Protection Day was proclaimed in Europe and 13 years since Data Privacy Day was first recognized by the United States. Since then, dozens more countries recognize the day, including Canada and Israel. However, decades before these modern commemorations, key principles were introduced that remain just as relevant now, when data breaches pose some of the greatest threats to privacy.

In the 1970s, the U.S. Federal Trade Commission (FTC) established the Fair Information Practice Principles (FIPPs), and in the 1980s, the Organization for Economic Cooperation and Development (OECD) published its Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. Both made clear that those holding personal data must implement appropriate security measures to protect such information against risks to loss, access, destruction, modification or disclosure. Decades later, these principles have been enshrined in laws around the globe.

In the face of today's cyber risks and modern data protection laws, remembering these fundamental concepts is important. Security and privacy compliance are as intertwined as ever in the European Union's General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), Japan's Act on Protection of Personal Information (APPI), and privacy laws around the globe.

Nonetheless, privacy and security dialogues are often siloed from one another. This means that it is not always obvious that many of the concerns, and even objectives, of a Chief Privacy Officer are actually shared by a Chief Information Security Officer. When these disciplines operate in a vacuum, prescriptive privacy and security requirements can impede the overarching goal of data protection. This can take the form of selecting antiquated on-premises technologies and designing geo-restricted terms in pursuit of perceived privacy requirements, or flawed perimeter security approaches. However, modern legal standards actually require protecting data with safeguards appropriate to the risk, which depends upon embracing the latest technologies and processes.

Take GDPR, for example. In complying with sometimes ambiguous interpretations of the European Court of Justice (CJEU) Schrems II cross-border data flow assessments, it is important not to lose sight of GDPR's oft-enforced Article 5 and Article 32 security requirements. In fact, in its "State of the Art" guidelines, the European Union Cybersecurity Agency (ENISA) calls out technologies generally dependent upon cloud-native designs, like extended detection and response (XDR), and threat hunting techniques traditionally dependent upon global data flows.

The reason for this is clear. As highlighted in CrowdStrike's 2021 Global Threat Report, the definition of risk is ever-evolving in the face of innovative adversaries, novel techniques and the work-from-anywhere transformation exacerbated by COVID-19, during which millions of workers retreated to hastily equipped home offices, creating a feeding frenzy for cyber predators spurred on by the windfall of easy access to sensitive data and networks.
The need to be thoughtful not only about "privacy by design" but also "security by design" is more challenging today, as data may be collected via UI and cached locally, transmitted to a database, and accessed by a cloud-based application. Consequently, the entire data lifecycle must be considered, from SecDevOps in building software, to following best practices on identity and Zero Trust when granting access to data, to ultimately securing the workload, whether it is on a traditional endpoint or in the cloud.

Consistent with Ann Cavoukian's Principle of Full Functionality – Positive Sum, Not Zero Sum – privacy and security should not be treated as tradeoffs where the path of least resistance is taken. Organizations responsible for data must protect the entire attack surface.

As we recognize Data Protection Day, it is important to reflect on what holistic data protection entails, and how critical cybersecurity is, not only to compliance but to protecting privacy. For policy makers and regulated organizations alike, it is critical to focus on the big-picture goal of incentivizing the adoption of the best way to protect data rather than arbitrary geo-restrictions not respected by cyber adversaries. The FIPPs and OECD guidelines mentioned above may have been developed in an era with simpler threat vectors, but implementing "appropriate" security remains critical to protecting privacy today.

It is with these challenges and big-picture objectives in mind that CrowdStrike seeks to provide customers with the contextual information needed to implement robust technologies in pursuit of data protection compliance.

Drew Bagley is VP and Counsel, Privacy and Cyber Policy at CrowdStrike.

- To learn more about data protection and best practices, read this blog: Data Protection Day: Harnessing the Power of Big Data Protection.
- For an overview of the GDPR and how it may affect your organization, download The General Data Protection Regulation (GDPR) and Cybersecurity.
- Read more about GDPR in this blog: GDPR at Three Years: Risk Takes On New Meaning.
- Keep up-to-date with cybersecurity policy developments at the CrowdStrike Public Policy Resource Center.
- Learn more about the powerful CrowdStrike Falcon platform by visiting the webpage.
- Get a full-featured free trial of CrowdStrike Falcon Prevent™ and see how true next-gen AV performs against today's most sophisticated threats.
The technology behind Bitcoin is generating fanfare and shifting the exchange-of-value paradigm. Financial services firms were the first to embrace blockchain and continue to drive the technology forward. According to IBM, 66% of banks around the world will use blockchain by 2021. However, the trend is spreading to other industries and application areas, prompting them to consider blockchain as a viable alternative to some existing technologies. So, how did blockchain start? Read more here (https://dynanetcorp.com/wp-content/uploads/Blockchain-Pervasive-Growing-1.pdf).
Think about how fragmented your digital identity has become. Every time you enter a password or PIN, wherever you are, you're leveraging some element of your digital identity. Every time you pay with a credit card or recite your Social Security number. Every time you digitally sign a contract. That holistic digital identity is tied to your physical likeness, finances, conversations, property, and credibility, making it an exceedingly valuable asset.

Unfortunately, with pieces of our digital identities being handed out to everyone from retailers to government agencies to employers, those identities are more vulnerable than ever. It's been well documented, over and over and over again, how many lives are rocked by identity theft every year (nearly every reputable source calculates the total in the double-digit millions of people in the U.S. alone). As our digital identities become more disparate and attractive to fraudsters, we need a way to protect our digital selves.

Enter blockchain. Any organization can deploy blockchain — a promising, relatively new technology and methodology — to build trust among users. In its purest form, blockchain lets companies instantly make, approve, and verify many types of transactions by leveraging a collaborative digital ledger and a predetermined network of individual contributors or keepers of the blockchain. Once transactions or other data are inside the secure blockchain ledger, cryptography takes over, and verification hurdles drastically decrease the chances of data being stolen.

There are two often-referenced categories of blockchain: private, which is permission-based, and public, which is anonymous. Each has its own strengths, but private, permission-based blockchain has an added layer of protection in that participants in a transaction are known and trackable.

Would we be willing to let blockchain serve as a clearinghouse or executor for our full digital identities? Think of how that could play out in a few different scenarios.

Private, aka "Firm Private": This type is already taking hold. Through blockchain, a specific financial institution can verify and facilitate a stock purchase in real time, but after its completion that transaction can also become a part of a digital identity, protected by blockchain. That way, the information doesn't have to sit in a separate, isolated account behind the bank's walls, but can instead be instantly verified, referenced, and acted upon with other digital identity elements. It also allows the bank to retain some level of authority and management.

Public, aka "Classic": As the Internet of Things expands, public blockchain can serve as the ledger in scenarios where only certain elements of a digital identity are necessary and a central authority isn't as integral. For instance, buying a burger at a drive-through. The combination of blockchain and a Bluetooth beacon could verify the car associated with a digital identity, verify the Visa Checkout app running on the car's console, communicate to the restaurant's payment system, and debit a bank account the proper amount. All of that can occur without a holistic digital identity being part of a known or closed network, sharing and accessing only the portions of the digital identity that are relevant to the sale.

Private Shared, aka "Industry Private": This is a hybrid type of blockchain that could be the happy medium for financial institutions or stock exchanges, as digital identities and transactions are managed by a "circle of trust."
Changes don't require mass approvals, nor does the private shared blockchain allow just anyone to read and amend, but it keeps power from being consolidated in a sole authority's hands. So in the stock purchase example, a few interconnected industry stakeholders would need to approve the transaction — perhaps a bank, the stock exchange, and the Federal Trade Commission — before it becomes a verified part of the blockchain and of an individual's digital identity.

Those scenarios may be theoretical, but there are already many real-world applications leveraging blockchain. The Leonardo da Vinci Engineering School in Paris uses blockchain to validate and secure diplomas. The Royal Bank of Canada is experimenting with blockchain to authenticate and secure cross-border remittances. Blockchain is even being used for smart contracts that manage solar energy ownership and exchange across smart grids. Whether it's used between private financial institutions or in the public IoT, blockchain is securing elements of digital identities and lives.

Blockchain players still need to take some security measures in order to store, unite, and effectively use entire digital identities within the construct. All solutions leveraging blockchain rely on the integrity of the information published in the ledger. Although it isn't possible to corrupt the ledger itself, fraudsters will focus on attacking individual users. It's crucial to implement strong two-factor authentication for all users who contribute to the blockchain. Data encryption is also key, as is device-level security such as Trusted Execution Environments or Secure Elements that protect against potential man-in-the-middle attacks. Once those security priorities are addressed, blockchain technology is poised to reach its full potential and serve as the guardian for our valuable digital identities.
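To make the ledger-integrity idea above concrete, here is a minimal, illustrative sketch of how a blockchain links records with hashes, so that tampering with any past entry invalidates everything after it. This is a toy model, not the design of any production blockchain, and the record contents are invented for the example.

```python
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents, including the previous block's hash, so
    # each block cryptographically commits to the entire history before it.
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False                     # broken link to history
        recomputed = block_hash({"data": block["data"],
                                 "prev_hash": block["prev_hash"]})
        if block["hash"] != recomputed:
            return False                     # contents were altered
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"identity": "alice", "event": "bank-account-verified"})
append_block(chain, {"identity": "alice", "event": "stock-purchase"})
print(verify(chain))                         # True
chain[0]["data"]["event"] = "forged"
print(verify(chain))                         # False: tampering detected
```

Real blockchains add consensus among the network's keepers and digital signatures on each transaction; the hash linkage shown here is the piece that makes the ledger itself tamper-evident.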
Veles visualizations are purely statistical representations of binary data. We take a sequence of bytes and visualize correlations between certain values. It doesn't matter if it's an executable file, a picture or a disk image – from the perspective of visualization any file is just a sequence of bytes. There are a few different visualization modes supported by Veles: digram, layered digram and trigram. Let's explain them one by one.

Digram

In digram visualization we look at all 2-byte sequences (digrams) and compare their relative frequency in the file. We treat each 2-byte sequence as a pair of coordinates that we draw on a 2D surface. Let's imagine a tiny example file made of the following bytes:

02 03 05 01

To create a digram visualization we iterate through the file and list all the 2-byte sequences we encounter: <2, 3>, <3, 5>, <5, 1>. We treat each pair as a 2D coordinate of a point we put in our visualization.

[Figure: digram plot of the example file, one point per digram]

Note that each byte (except the first and the last one in the file) is used twice, once as a coordinate on the x-axis and once on the y-axis. Of course real files are much larger and contain many digrams. In Veles the brightness of each point is determined by the relative frequency of each digram in the file. The most common ones will be white, while those encountered only a few times will be very dim, almost completely black.

[Figure: digram visualizations of several example files]

Based on the description above everything should be grayscale; however, there are different colors visible in the screenshots above. Veles uses them to add just a little bit of information to the visualization. If a digram is found predominantly at the beginning of the file the corresponding point will be more yellow. If it's more common at the end it will be blue. If it's evenly distributed or most common in the middle of the file the point will be white.

Layered digram

In this visualization we divide the file into 256 evenly sized parts. For each of these parts we calculate a digram visualization. We display all of those visualizations on top of each other to create a 3D cube showing how digram distribution changes through the file.

Trigram

This view is most easily explained by relating it to the digram view. In fact it's exactly the same thing, except this time we use 3-byte sequences (trigrams) to create a 3D cube. Each byte in the file is used 3 times: once as a coordinate on the x-axis, once on the y-axis and once on the z-axis. The meaning of different colors is also the same as in digrams – yellow for trigrams most common in the beginning of the file, blue for those found mostly at the end.

Trigram and layered digram views give the user the option to change the shape between cube, cylinder and sphere. All those views present exactly the same data. Each point is represented by 3 values (trigram, or digram + layer number). By switching between Cartesian, cylindrical and spherical coordinate systems we get different graphical representations of the data.

Sampling

Performance-wise it's not always feasible to visualize the whole file. For larger files the amount of data to process is just too much. However, the binary visualization is most useful when working with large files. In Veles we handle this problem by visualizing a sample of the data instead of the whole file. Sampling is done before any further processing. From the point of view of visualization the sampled data is the byte stream we're analysing. Increasing sample size improves accuracy of visualization and limits the chance of introducing artifacts, at the cost of performance.
Depending on available hardware we normally use a sample size between 1MB and 10MB. Note that Veles will not sample files smaller than the sample size set by the user. In that case it just shows the whole file. Additionally we provide an option to disable sampling in the UI. Selecting this option will lead to poor frame rates or even crash Veles when working on very large files!

Uniform random sampling

Let N = sample size. The default sampling algorithm randomly picks sqrt(N) contiguous byte sequences of size sqrt(N) from the file. This approach is a compromise between the need to take data from multiple different places in the file and the requirement that the sample represent actual byte sequences for n-gram visualizations.

Minimap

The panel on the left side of the visualization is the minimap. It shows a simple visualization of the whole file. You can slide the 2 bars to select a (continuous) part of the file. Only the selected part of the file is visualized in the main part of the window. You can add additional minimaps if you want to zoom in on a specific part of the file. The leftmost minimap always shows the whole file. Each consecutive minimap shows only the selected range of the previous one. Using 3 or 4 minimaps you can easily select just a specific few hundred bytes out of a very large file.

The minimap divides the file into parts of equal size and represents each part with a single texel. The file is represented left-to-right and top-to-bottom. In the default (green) mode the value of each texel is calculated based on the average value of bytes in the corresponding part of the file. The alternative mode (red) shows the Shannon entropy instead.

Check out our earlier post to see how we can identify different parts of a real file (libc.so from Ubuntu 14.04) and find out the CPU architecture of the machine: Binary data visualization.

Veles visualizations were inspired by Christopher Domas' talk from Derbycon 2012. Check out the video of his presentation: https://www.youtube.com/watch?v=4bM3Gut1hIk.
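To make the digram, sampling and entropy descriptions above concrete, here is a minimal Python sketch of the core logic. It is not Veles' actual implementation, just the counting and sampling ideas in their simplest form; the input path is illustrative.

```python
import math
import random
from collections import Counter

def sample_bytes(data, sample_size):
    """Uniform random sampling as described above: pick roughly sqrt(N)
    contiguous chunks of roughly sqrt(N) bytes each (N = sample size)."""
    if len(data) <= sample_size:
        return data                          # small files are shown whole
    chunk = math.isqrt(sample_size)
    pieces = []
    for _ in range(chunk):
        start = random.randrange(len(data) - chunk)
        pieces.append(data[start:start + chunk])
    # Joining chunks introduces a few false digrams at the seams, which is
    # one source of the sampling artifacts mentioned above.
    return b"".join(pieces)

def digram_counts(data):
    """Count every overlapping 2-byte sequence. Each digram (x, y) maps to
    one point in the 256x256 plot; its count determines the brightness."""
    return Counter(zip(data, data[1:]))

def shannon_entropy(data):
    """Entropy in bits per byte, as shown by the minimap's red mode."""
    counts = Counter(data)
    total = len(data)
    return -sum(n / total * math.log2(n / total) for n in counts.values())

# Illustrative usage; the path is just an example binary.
data = sample_bytes(open("/bin/ls", "rb").read(), 1_000_000)
counts = digram_counts(data)
peak = max(counts.values())
brightness = {xy: n / peak for xy, n in counts.items()}  # 1.0 = white
print(f"{len(counts)} distinct digrams, "
      f"{shannon_entropy(data):.2f} bits/byte entropy")
```

The trigram view follows the same pattern with `zip(data, data[1:], data[2:])`, and the position-based coloring just requires tracking where in the file each n-gram was seen.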
In 2019, it's a good time to be a game designer – and not just for the reasons you might think. Certainly the gaming industry is already massive, generating close to $135 billion in revenues last year, and the rise of eSports promises to take gaming to new levels of popularity and revenue. But gaming has become a disruptive force in itself as game design techniques filter into everyday communications apps, and emerging technologies like big data and AI will make gamification even more disruptive – and potentially unethical if we're not careful.

The Hong Kong campus of SCAD (Savannah College of Art and Design) offers interactive design and game development programs that teach students not only the skillsets necessary to design games, but also how to apply those gamification skillsets to all sorts of apps, from social media sites like Facebook (think of collecting "likes" as the same thing as scoring reward points) to work orientation videos and educational tools that turn things like learning calculus or how to repair an escalator into a game.

The common thread is engagement, says Professor Wan Chiu, a game designer who teaches interactive design and game development at SCAD Hong Kong, which in this case is a form of psychology. "Game designers are like psychologists – our job is to engage the user and make them participate in certain things that we design."

Design as you play

Chiu says that while the basic structure of how games are made hasn't drastically changed in the last 30 years, new technologies like big data analytics are providing powerful tools that alter the design process by allowing game designers to put the game online before it's even finished, and continue designing the game on the fly as users play it.

"As a game designer, a lot of times these days, we don't just design a game and wait for the player to react to it – we can actually make half of a game, put it on the market, charge half price for it, and see how it goes," Chiu says. "So every second when you're playing, we can receive data based on what you're doing – are you clicking the button we want you to click? If not, why? Is the experience actually engaging so that people will keep coming back? We don't have to wait for anybody to tell us – we can look at our analytics in real time. That's why big data is such a revolutionary force for games."

SCAD Hong Kong's Associate Dean of Academics Derek Black adds that game design is shifting to an iterative design approach where developers can make small changes very rapidly and test them. There are already examples of this outside of the gaming apps sector, he adds.

"In the US, there's a company called Intuit that creates software called TurboTax. And historically speaking, they would only roll out a new product on a yearly basis," Black explains. "Today, they're rolling out 100 different changes within the tax season, and they're able to test to see which ones are working, and make those changes based upon that big data. So what they've done is that, in essence, all of their designers are 'entrepreneurial designers' looking at what are the best features to add into that. So it's taking all that user-experience data being pulled together and starting to fine-tune those interactions and experiences."

Chiu points to recent gaming sensation Fortnite as a live example of successful iterative game design in action.
"Fortnite is such a big powerful force because of the way they're able to use the data and make iterative changes – even the placement of a button or the size of the font can impact whether the user is going to click that button or not, or the way they write the copywriting," he says. "Fortnite is so successful that during every second you're playing, you're totally engaged."

Chiu adds that engagement doesn't necessarily mean constant action – designers can come up with games that proceed at a much more relaxed pace. "For example, there's a very successful game called Monument Valley – it is not about fast action, it's about how you can let other users keep playing at a certain level at their own pace for a long, long time."

Even relatively simpler games like the New York Times crossword puzzle app can harness user data to tailor the experience by providing easier puzzles for players who struggle to complete puzzles beyond a certain level of difficulty, Chiu says.

Inevitably, artificial intelligence and machine learning are also being integrated into game design, particularly for things such as making non-player characters in RPGs like Final Fantasy more realistic, conversational and interactive.

One challenge with AI and ML as a design tool is ensuring that it can be trusted to run on its own without producing unintended results. Chiu says that's why it's important to train AI with the right data to ensure it produces accurate results and responses. He cites an example of a public hospital in California where a company is helping the oncology department train AI to look at slides and detect which ones show cancer.

"After about 10,000 slides, the machine knows how to look at a slide, with a very high accuracy rate," he says. In fact, he notes, even the AI training process can be gamified. "For example, to train AI to recognize cancer, you could get doctors worldwide to help in a way that makes it fun and rewarding to verify the right answer."

Design ethics: the game!

The combination of big data and gamification in the name of higher engagement does raise ethical issues for interactive designers, Chiu cautions, citing Facebook as an obvious example.

"With big data we can see who is playing in this country or that country, and in what region – that's why Facebook is such a powerful force, because Facebook can tell you not only where the person is, but also that this person is actually a 12 year old Caucasian female at home," he says. "In a few years it can probably tell you which room in the house they're in."

That's why ethics are a key component of SCAD game design and interactive design courses, Chiu says. "In my game design class I tell them, 'You guys are wielding an unlimited amount of power – you didn't know that! You have power over people's life and death'," he says, citing examples of people who stage dangerous stunts on social media sites like YouTube and Instagram just to get more clicks and likes – sometimes with tragic results. "I tell them about how people actually lose their lives because of this, so you have to be responsible for what you're doing."

At the end of the day, game designers are challenged with the delicate task of making games or other apps and services as engaging as possible without pushing users into full-blown tech addiction, which is already emerging as a very real social problem for many people.
Chiu doesn't dismiss the issue of tech addiction, but says that parents' concerns about kids spending too much time online are the modern version of earlier generations of parents worrying about the negative impact of television, movie violence, rock music and home video game consoles.

"I think this is a call for a big discussion on how people of all walks of life can incorporate this," he says. "That's a discussion we should have, but we cannot push it away – we cannot go back to before the internet age."

Meanwhile, SCAD's Derek Black notes that we're already seeing device makers respond with software tools that can help people restrict their own usage – for example, by making it easier to switch off instant notifications.

As for how to get fully engaged users to actually use those features – well, even that can be gamified, he says.

"We could gamify our existence with technology in the real world so that you get these rewards for not playing games, or unlock features," Black says. "We could gamify parental interaction within any digital or communication device – so you get rewarded for sending communications between your grandparents and grandchildren. That's what the software is capable of, so there's no reason why we can't build these positive elements into this gamification interaction response."
It won't clear the polluted air around Beijing, but the new IBM supercomputer purchased by the Beijing Meteorological Bureau (BMB) will go a long way toward helping forecast the weather and support air-quality control for next year's Olympic games.

The System p575 is an 80-node machine built on IBM's POWER5+ microprocessors, capable of delivering 9.8 teraflops (trillion floating-point operations per second). Based on the initial configuration, it will be among the top 10 supercomputers in China, according to the Top 500 list of the world's fastest supercomputers.

The IBM computer will provide 10 times the computational power of the BMB's current weather forecasting system and allow for hourly forecasts of weather and air quality in and around Beijing. The system is capable of sweeping an area of up to 44,000 square kilometers to provide hourly numerical weather forecasts for each square kilometer.

Herb Schultz, marketing manager for IBM Deep Computing, said doing localized weather prediction is actually harder than big-picture, regional forecasting because it requires more precise analysis and prediction.

"A supercomputer is needed for the size of memory required and precision and accuracy of the calculations and the scalability it brings," he told internetnews.com. "This system is capable of doing the kind of calculations you can't get on a commodity system. A basic Linux cluster would have a hard time. Even if it could do the calculations, it won't have the memory or redundancy needed."

The BMB system has the option of adding even more processing nodes to scale up. Each node can use up to 256GB of memory.
At Stanford University's Institute for Computational and Mathematical Engineering (ICME), running computational clusters and massively parallel programs for sponsored research demands equally high-performance storage. So when the existing network file system (NFS) servers couldn't live up to the task, the educational facility turned to storage provider Panasas Inc.

A product search that began early last year resulted in the installation of the Panasas ActiveScale File System, an integrated hardware/software product that combines a parallel file system with an object-based storage architecture.

"With the Panasas high-speed storage, we can add nodes, run more jobs and run them faster than we could with the NFS," says Steve Jones, technology operations manager at ICME. "Panasas solved our pain point, which was I/O."

ICME is part of the School of Engineering at Stanford University, which has nine departments, 231 faculty and more than 3,000 students. According to Jones, ICME runs sponsored research for government agencies such as the Department of Energy and DARPA.

Running Out of Steam

For years, ICME relied on NFS servers, a relatively inexpensive way to implement storage. According to Jones, ICME has a total of 1,200 processors, each capable of writing a stream of data to a storage node at some location. There are 600 servers. Over the years, the institute's storage requirements grew from two terabytes on one system to anywhere between one and four terabytes on a system. According to Jones, his group runs 12 computational clusters. A single cluster, for example, may have 200 processors and two terabytes of storage.

Despite running a Gigabit Ethernet network, it wasn't uncommon to expect jobs to run for anywhere from hours to weeks, says Jones. The Linux-based servers with RAID arrays reportedly had a 25Mbps limitation on the amount of data written to disk. "Everything runs over the network. We have a front-end node for the computational cluster, we have compute nodes and we have storage nodes," says Jones. Ultimately, he says, "The amount of data we'd write would overwhelm the appliance. The job would sit in the I/O, stop, and we would have to wait until it would write the data, taking increasingly longer lengths of time."

Jones' ICME team relies heavily on computational clusters based on Rocks open-source software for sponsored national research. A recent project for the team, for example, was to better understand the impact of turbulent air flow, or flutter, on turbine engines. The research objective aimed at improving the performance and reliability of jet engines, as well as improving the noise and air quality of communities near airports.

As the wait time for the NFS servers increased, Jones began to add more servers. This fix, however, led to more problems. According to Jones, the servers were difficult to manage and the multiple logical name spaces made the system difficult to use.

Jones and his team put together a list of criteria for a new storage solution: easy to grow; easy to implement; the ability to run the cluster distribution tool kit for the Rocks cluster; no single point of failure and no single I/O path (full redundancy); parallel I/O support for writing streams of data in parallel; a single point of support for the hardware and software; and a single name space. "Most importantly, we wanted vendors to provide a live demo of integration into a cluster," says Jones.
Some background research and conversations with other lab development centers enabled Jones to draw up a short list of storage solution providers. As luck would have it, Panasas was the first vendor Jones contacted. Other vendors included DataDirect, EMC, Ibrix and Network Appliance.

In early spring 2005, Jones contacted Panasas. "We explained our requirements and the vendor asked for a week to set up a demo at their facility," he says. At the first meeting with Panasas, Jones expected to see a PowerPoint presentation but not a demo. He was happily surprised. "The Panasas engineers asked me which I wanted to see first, the demo or the PowerPoint," he says. He chose the demo. In the lab, a Panasas system was integrated into a cluster, and Jones saw a 20-minute live demo. "I was impressed," he says.

He then explained to the vendor that a true test of the system would be to set up a demo in real time in production. A week later, a system was delivered to ICME, and Panasas engineers integrated it into a single parallel storage cluster that contained 172 processors and two terabytes of storage on two one-terabyte NFS servers. In two hours, the solution was ready to accept production jobs, according to Jones. "It was an unheard-of amount of time," he says, noting that a Fibre Channel SAN would require days or weeks to configure the hardware and network, build software and file systems and meet with the engineers to set it up.

Bonnie++, an open-source benchmarking tool, was the first application they ran for the demo. Similar test jobs were set up on the NFS servers and the new Panasas file system. "We wrote an 8GB file and multiple copies of it, which is a small job for us," says Jones. For test purposes, they wrote eight files. The NFS server wrote data at 17.8Mbps, and the read process from the eight nodes was 36Mbps. The same job on the Panasas system ran at 154Mbps for the write process and 190Mbps for the read process. The equipment was then scaled to 16 nodes on the benchmark. The NFS servers wrote at 20.59Mbps and read at 27.41Mbps. The Panasas system wrote at 187Mbps and read at 405Mbps.

Pleased with the huge increase in performance on the Panasas system, Jones proceeded to write live data. "Basically, the NFS servers had 85 to 90 percent CPU, memory and I/O utilization, while the Panasas system had three to four percent utilization," he says. "As we added capacity, the system got faster and we never had I/O wait time."

Jones did contact and meet with other vendors. "We got to see PowerPoint presentations, but no other vendor would meet our requirement of seeing a live demo that included real-time integration into a cluster," he says, noting that he was shown published performance benchmark statistics instead. Additionally, no other vendor was able to provide a single point of contact for hardware/software support other than Panasas.

Full Steam Ahead

ICME purchased two systems, each consisting of a single shelf with a metadata server on it. Each shelf holds 10 StorageBlades. The Panasas ActiveScale File System, a hardware/software product, consists of DirectorBlades and StorageBlades that plug into a rack-mountable shelf. According to the company, performance scales with capacity, so bandwidth increases as additional shelves are added. With additional projects on its plate, ICME plans a multi-shelf purchase in the near future.

The cost of the system?
"It's more expensive than cheap NFS, which has no reliability, and less expensive than other solutions, such as a Fibre Channel SAN," he says.

This article was first published on Enterprisestorageforum.com.
Working from home is the new normal, at least for those of us whose jobs mostly involve tapping on computer keys. But what about researchers who are synthesizing new chemical compounds or testing them on living tissue or on bacteria in petri dishes? What about those scientists rushing to develop drugs to fight the new coronavirus? Can they work from home?

Silicon Valley-based startup Strateos says its robotic laboratories allow scientists doing biological research and testing to do so right now. Within a few months, the company believes it will have remote robotic labs available for use by chemists synthesizing new compounds. And, the company says, those new chemical synthesis lines will connect with some of its existing robotic biology labs so a remote researcher can seamlessly transfer a new compound from development into testing.

The company's first robotic labs, up and running in Menlo Park, Calif., since 2012, were developed by one of Strateos' predecessor companies, Transcriptic. Last year Transcriptic merged with 3Scan, a company that produces digital 3D histological models from scans of tissue samples, to form Strateos. This facility has four robots that run experiments in large, pod-like laboratories for a number of remote clients, including DARPA and the California Pacific Medical Center Research Institute.

Strateos CEO Mark Fischer-Colbrie explains Strateos' process: "It starts with an intake kit," he says, in which the researchers match standard lab containers with a web-based labeling system. Then scientists use Strateos' graphical user interface to select various tests to run. These can include tests of the chemical properties of compounds; biochemical processes, including how compounds react to enzymes or where compounds bind to molecules; and how synthetic yeast organisms respond to stimuli. Soon the company will be adding the capability to do toxicology tests on living cells.

"Our approach is fully automated and programmable," Fischer-Colbrie says. "That means that scientists can pick a standard workflow, or decide how a workflow is run. All the pieces of equipment, which include acoustic liquid handlers, spectrophotometers, real-time quantitative polymerase chain reaction instruments, and flow cytometers, are accessible. The scientists can define every step of the experiment with various parameters, for example, how long the robot incubates a sample and whether it does it fast or slow."

To develop the system, Strateos' engineers had to "connect the dots, that is, connect the lab automation to the web," rather than dramatically push technology's envelope, Fischer-Colbrie explains, "bringing the concepts of web services and the sharing economy to the life sciences." Nobody had done it before, he says, simply because researchers in the life sciences had been using traditional laboratory techniques for so long that it didn't seem like there could be a real substitute for physically being in the lab.

Late last year, in a partnership with Eli Lilly, Strateos added four more biology lab modules in San Diego and by July plans to integrate these with eight chemistry robots that will, according to a press release, "physically and virtually integrate several areas of the drug discovery process—including design, synthesis, purification, analysis, sample management, and hypothesis testing—into a fully automated platform.
The lab includes more than 100 instruments and storage for over 5 million compounds, all within a closed-loop and automated drug discovery platform."

Some of the capacity will be used exclusively by Lilly scientists, but, Fischer-Colbrie says, Strateos capped that usage and will be selling lab capacity beyond the cap to others. It currently prices biological assays on a per-plate basis and will price chemical reactions per compound.

The company plans to add labs in additional cities as demand for the services increases, in much the same way that Amazon Web Services adds data centers in multiple locales. It has also started selling access to its software systems directly to companies looking to run their own, dedicated robotic biology labs.

Strateos, of course, had developed this technology long before the new coronavirus pushed people into remote work. Fischer-Colbrie says it has several advantages over traditional lab experiments in addition to enabling scientists to work from home. Experiments run via robots are easier to standardize, he says, and record more metadata than is customary or even possible during a manual experiment. This will likely make repeating research easier, allow geographically separated scientists to work together, and create a shorter path to bringing AI into the design and analysis of experiments. "Because we can easily repeat experiments and generate clean datasets, training data for AI systems is cleaner," he said.

And, he says, robotic labs open up the world of drug discovery to small companies and individuals who don't have funding for expensive equipment, expanding startup opportunities in the same way software companies boomed when they could turn to cloud services for computing capacity instead of building their own server farms.

Says Alok Gupta, Strateos senior vice president of engineering, "This allows scientists to focus on the concept, not on buying equipment, setting it up, calibrating it; they can just get online and start their work."

"It's frictionless science," says CEO Fischer-Colbrie, "giving scientists the ability to concentrate on their ideas and hypotheses."

Credit: spectrum.ieee.org
The proliferation of open office spaces has led to a noise problem. Newsrooms and other collaborative fields have long used open spaces to enhance communication among colleagues. Now other professions are switching to open spaces after decades of individual offices or high-walled cubicles. Employees used to their own private space, or salespeople whose jobs require them to be on the phone, are now out in the open, experiencing a significant reduction in privacy.

Open office spaces have led to problems with confidentiality and clarity. Even when people are not talking, ambient noise, including HVAC systems, pen tapping and more, can be distracting. Short of extending cubicle walls and creating separate offices again, there has to be some way to reduce ambient noise. Sound masking can help in all these instances.

Sound masking provides what is essentially a curtain of white noise, making it harder for unintended listeners to hear what someone is saying. A carefully positioned sound masking generator blocks out the unintentional sounds. By placing sound masking speakers correctly, you create a sort of acoustic wall that allows a person to hear their phone conversation instead of paper rustling, footsteps and other noise occurring in the room.

Sound masking does not mean that sound cannot be heard at all. People may still hear someone talking, and of course they will hear the white noise. However, the speech will not be clear enough to make out a complete conversation.

Sound masking is appropriate for hospitals and offices, be they corporate, sales, medical, legal or of another profession. The volume of the white noise and how loud people can talk will vary by space due to personal preferences and room acoustics. However, sound masking can be tuned to the appropriate setting and is easily controlled by internal staff. Sound masking offers a simple solution to the problem of noise interference and sound intrusion in open office environments. Your customers' comfort and workers' ability to concentrate are worth protecting.

Common applications:
- Healthcare: Sound masking in hospitals prevents conversations from being overheard and provides a better environment for patients' recovery. Physicians' offices, exam rooms and patients' rooms are the most common applications.
- Corporate: Improve the acoustic environment, enabling employees to feel more comfortable and less distracted, and increasing productivity.
- Call centers: Is your call center too noisy? For busy call centers, sound masking is one of the best solutions, controlling noise levels while increasing customer satisfaction.
- Legal: Use sound masking if you require a high level of speech privacy. Ideal for law offices, law enforcement agencies and courtrooms.
Businesses, government organisations and individuals are being subjected to increasingly sophisticated phishing scams. These scams – designed to trick victims into disclosing sensitive information such as bank account numbers, passwords and credit card details – use a range of techniques to achieve their malicious aims.

Email is a key channel – along with phone calls and text messages – for phishing scams. Many early phishing emails were easy to detect due to poor grammar and spelling and email sender addresses that bore no relation to the person or business the message claimed to be from. However, in recent years, scammers have appropriated logos, graphics and text from legitimate organisations – including major telecommunications companies, banks, electricity providers and government organisations – and used more authentic-looking email addresses.

Government security advisory service Stay Smart Online recently provided an example of a convincing email phishing scam. The email hijacks legitimate branding from Medicare and the MyGov website to solicit information such as login details, user security questions and answers, and bank account details.

The groups and individuals behind phishing scams are becoming increasingly adept at using social engineering techniques to extract information from users. These techniques may be used in attack types called 'spear phishing' or 'whaling'. The Australian Competition and Consumer Commission describes these attacks as targeting 'businesses using information specific to the business that has been obtained elsewhere'. Scammers typically misrepresent themselves as business executives to convince other people within the business to disclose sensitive or financial information.

So how can businesses and government organisations minimise the risk phishing presents to their operations and to their people, in both business and personal contexts? A key step is to educate people about the threat presented by phishing scams. Businesses should also implement processes for the handling and disclosure of sensitive or financial information that address the risks presented by spear phishing or whaling. Finally, they should deploy sophisticated filters as part of a comprehensive email security platform to minimise the risk of scam emails entering the business environment.

If you would like to learn more, please contact Roger at [email protected]
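As a small illustration of the kind of check such filters perform, the sketch below flags one classic phishing signal: a sender display name that invokes a trusted brand while the message comes from an unrelated domain. The brand-to-domain mapping and the sample message are invented for the example, and a real email security platform combines many more signals than this one heuristic.

```python
from email import message_from_string
from email.utils import parseaddr

# Illustrative brand -> legitimate-domain mapping; a real filter would use
# verified sender data (e.g. SPF/DKIM/DMARC results) plus many other signals.
KNOWN_BRANDS = {"mygov": "my.gov.au", "medicare": "medicare.gov.au"}

def suspicious_sender(raw_email):
    """Flag the pattern where the display name claims a trusted brand
    but the actual sending domain does not belong to that brand."""
    msg = message_from_string(raw_email)
    display, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower()
    for brand, legit_domain in KNOWN_BRANDS.items():
        if brand in display.lower() and not domain.endswith(legit_domain):
            return True
    return False

sample = ("From: myGov Support <accounts@secure-login-mail.example>\n"
          "Subject: Verify your account\n\n"
          "Your account has been suspended. Confirm your details here...")
print(suspicious_sender(sample))  # True: claims myGov, sent from elsewhere
```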
You've probably heard about the rapidly evolving field of quantum computing. These systems operate completely differently than classical computers, crunching qubits (quantum bits) rather than binary bits. Quantum processing units (QPUs) process multiple states simultaneously using qubits, and vendors are working hard on their QPUs to bring faster machines to market.

But innovation in the space is not solely driven by hardware. Software is starting to take center stage as a driving force for bringing quantum into the mainstream. It's a subtle difference in wording, but we can think of this as a shift from quantum 'computers' to quantum 'computing.'

One reason that software is now becoming so prominent relates to the fact that quantum computing is beginning to jump from the lab into business environments. It's early for sure, but the trend is real. For non-quantum experts in business, the difference between classical computing's serial data analysis and quantum's multi-dimensional computation is nothing short of night and day. Software has become a linchpin for quantum adoption.

Classical vs. quantum software

Before delving into business applications, it will help to shed light on the basic constructs of this new field – and how they vary from traditional computing. Classical and quantum software environments are fundamentally different. With classical computing, a programmer writes software using binary elements of ones and zeros (abstracted by app development software), which converts them into instructions that are processed sequentially. Classical systems solve optimization problems using binary search techniques and return only a single result.

In contrast, quantum programming approaches accelerate complex analysis by delivering a matrix of multiple elements presented in a format that is already pre-optimized for a qubit to resolve. This process better mirrors the natural multi-dimensional state of most problems. Quantum computers aren't designed to produce singular results. They solve optimization problems in a multi-dimensional space, processing multiple states, or potential situations, simultaneously. They identify all possible options that meet the criteria of constrained optimization, giving users a diversity of results to explore for their solution. That diversity of results offers more and better opportunities to find the best possible solution in different business situations.

Quantum software draws from mathematical pattern matching and optimization techniques adapted from areas like machine learning. For example, Quadratic Unconstrained Binary Optimization (QUBO) is used to create the quantum lattice for annealing machines, like D-Wave, while a QUBO is converted to a quantum circuit using the Quantum Approximate Optimization Algorithm (QAOA) for the more common gate-model machines (from IBM, Rigetti, Honeywell, IonQ and others).

Math, physics and quantum experts need to program complex circuits, algorithms and more to create the problem submission for the quantum computer. They also have to program low-level hardware configurations for each QPU, and again for all upgrades or expansions. In essence, quantum computing is a completely new paradigm. Not only does it require new types of hardware, it also demands new and highly technical quantum programming skill sets to create the software.
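To give a feel for what a QUBO formulation looks like, here is a minimal Python sketch that encodes a toy two-variable problem as a QUBO matrix and evaluates it by brute force on a classical machine. The same matrix is what an annealer would sample from, or what QAOA would translate into a circuit; the problem, costs and penalty weight are invented for the example.

```python
import itertools
import numpy as np

# Toy problem: pick exactly one of two routes, with costs 3 and 5.
# Cost term: 3*x0 + 5*x1. The "exactly one" constraint becomes a penalty
# P*(x0 + x1 - 1)^2, which for binary variables expands (dropping the
# constant) to P*(-x0 - x1 + 2*x0*x1). With penalty weight P = 10:
P = 10
Q = np.array([[3 - P, 2 * P],
              [0,     5 - P]])       # upper-triangular QUBO matrix

def energy(x, Q):
    # The QUBO objective: minimize x^T Q x over binary vectors x.
    x = np.asarray(x)
    return x @ Q @ x

# Brute-force all 2^n assignments; an annealer or QAOA run samples
# low-energy assignments from this same landscape at far larger n.
best = min(itertools.product([0, 1], repeat=2), key=lambda x: energy(x, Q))
print(best, energy(best, Q))         # (1, 0): take route 0, the cheaper one
```

At two variables this is trivial, but the number of assignments doubles with every added variable, which is exactly why large constrained-optimization problems strain classical machines and why the QUBO form is the handoff point to quantum hardware.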
Multi-dimensional presentation and optimization mean that quantum requires highly trained experts to define a problem and convert it to code for these systems. Even with the specialized knowledge and experience of quantum experts, the quantum SDKs (software development kits) can still be extremely complex. One quantum programmer recently noted that it took over eight months to begin to program a very simple problem using a popular quantum software development toolkit. Moreover, quantum software requires that each processing flow include low-level coding that is proprietary to each vendor's QPU requirements, not to mention unique to the specific quantity of QPUs in the system. Once users have spent dramatic time and money getting to a point where they can actually run applications, they're effectively locked into that specific hardware and vendor. When the system is upgraded or changed in any way, all that code has to be rewritten. As you would expect, given the dramatic differences in hardware architectures, quantum software requires a significant shift. Every circuit, gate, algorithm, action and process must be created using new quantum programming approaches.

The evolution of classical and quantum as the data grows

The role of technology in business can be better understood through a class of problems related to constrained optimization, which is about optimizing a function subject to a set of constraints. For example, when an airline creates a global flight plan, it tries to optimize capacity for the number of fliers, land the planes in ideal destinations to pick up new passengers, and complete the routes with the least distance and fuel. The airline needs to consider a myriad of variables related to weather patterns, airport traffic, maintenance downtime, crew airtime, where crews are located at any given moment, coordinating food service to match the flight schedules, and many other factors. Classical computers are solving these types of problems today. But they're doing so by using approximations and cutting corners to get solutions that aren't as exacting as they were when the data sets were smaller and the constraints less demanding. They're not the best solutions; rather, they're the best solutions we can deliver today. Today's data volumes are threatening to limit the performance and results that a classical application can achieve. As data grows, the volumes will potentially overload classical resources entirely. Serial processing in a binary space can handle large data volumes, but the growth of data is stretching these systems, forcing users to limit the size of the analytics they process. This means subject matter experts (SMEs) and programmers must compress, reduce and limit the data that is processed, resulting in potentially lower-quality solutions. Additionally, classical computers generally return one result, limiting the range of decision insights. The types of problems quantum can tackle range from route optimization for the proverbial traveling salesperson (long considered an intractable challenge) to supply chain management, logistics and even drug discovery.

Quantum software will drive adoption

The reality is that, in the future, both quantum and classical systems will coexist. That's because each is designed to solve a different type of problem.
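To see why, consider a toy version of the routing problem (a hypothetical four-airport instance with invented distances, not a real airline model). The exact brute-force search below returns a single best tour, and its cost grows factorially with the number of airports, which is why production systems fall back on approximations:

```python
import itertools

# Hypothetical symmetric distance matrix between four airports.
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def route_length(order):
    # Total distance of a closed tour starting and ending at airport 0.
    stops = (0,) + order + (0,)
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

# Enumerate every ordering of the remaining airports: (n - 1)! candidates.
best = min(itertools.permutations(range(1, 4)), key=route_length)
print(best, route_length(best))
```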
For now, classical computers are chipping away at tough problems, like the traveling salesman, with estimations and approximations. Emerging software solutions aim to bridge these two worlds by using quantum-ready techniques that produce better results for constrained optimization on classical computers and, eventually, for quantum systems. What will it take to bring the technology to wider adoption and ultimately a mass market? Perhaps its evolution will follow the path of other technologies that emerged from research labs, like high-performance computing and artificial intelligence. Open-source and cloud software, combined with commodity hardware, made it easier to access the power of these technologies and drove commercialization. Similarly, giving those who aren't experts the ability to tap the power of quantum will drive adoption, fund further development, and bring down costs. Major companies are combining software and services for easier access to quantum computers. Amazon is a pioneer; its Braket managed service helps researchers and developers get started with the technology. Braket provides a development environment to explore and build quantum algorithms, and to simulate and run them on different quantum hardware technologies. Innovations are also flourishing in areas like quantum SDKs and quantum operating systems intended to support quantum experts in accessing and running quantum applications. Another area that has emerged is quantum application accelerators. The goal is to remove the highly technical quantum-expertise dependency from quantum use cases. Many of us still don't know how our cars work, but we drive them every day. Subject matter experts don't need to understand all the inner workings. What they do need is an advanced computer that can help them solve real business problems. As different as it is from classical, certain quantum software techniques are already improving the diversity of results and performance for classical computers. Thus, an ideal solution would give an SME the power of both classical and quantum without the scientific rigor needed for quantum, and without the need to reprogram from classical to quantum or from one quantum machine to another. Quantum computing has vast potential. It's also a new, complex paradigm that organizations need to plan for thoughtfully and thoroughly as part of their near-term computational infrastructure, and software is a key to this transformation.
<urn:uuid:fd6d4623-4d38-49ab-bba8-c2b8a59166cb>
CC-MAIN-2022-40
https://www.itproportal.com/features/quantum-computing-software-takes-center-stage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00042.warc.gz
en
0.929829
1,805
3.078125
3
What is Google? Who today does not know Google? Google is first and foremost a company (Google Inc.), founded in 1998 by Larry Page and Sergey Brin, who set themselves the goal of organizing the world's information. Today, Google is best known for its search engine, but the company offers a suite of online tools (Calendar, Blogger, Docs, Gmail, Groups, Picasa, Reader, Sites, SketchUp, Talk, Translate, YouTube, Mobile, Maps, Pack, News, Alerts, Directory, Toolbar, Google Chrome, Desktop, Earth, iGoogle, Images, Books) which continues to expand, and also its own operating system (Chrome OS). To supply the basic Google index, robots called "bots" regularly crawl pages on the Internet looking for new links, allowing them to discover new pages that will be added to the Google index. In addition, Google also maintains an archive database, called the "cache". When a website is inaccessible, for example, or a page has been removed from a website, it is still possible to access it through the cache function in Google search results.
- The "Google bots" are programs hosted on Google servers which crawl Web pages looking for new content and new links. The passage of these "bots" and the index update that follows is known as the "Google dance".
- The Google index is built from the information reported by the "bots". Google is the search engine with the largest index in the world (billions of pages: see http://www.worldwidewebsize.com/). For each page indexed, Google groups similar content and calculates for each domain a PR (PageRank), a formula kept partly secret by its publisher. This PR is used to position pages in the results of a Google search.
- Cache: for each page crawled, Google keeps 101k of text data in its "cache" database (also HTML, DOC, PDF, PPT, ...).
- The Google API is a small external program offered by Google that allows developers to integrate remote query capabilities of the search engine. To use it, you must have a free license key (supplied by Google). Each key allows 1,000 search queries per day.
Google provides a high-level query language to filter search results. The following table presents some of the available keywords.
|site:domain|Displays a list of all indexed pages for the domain|port site:aldeid.com displays all pages of the aldeid.com portal that contain the keyword "port"|
|link:page|Displays the list of sites that link to the target page|link:aldeid.com displays the list of sites referring to aldeid.com|
|intitle:terms|Searches for pages whose title contains the specified terms|To check that the aldeid.com site does not expose indexed directory listings of its files, use the following syntax: intitle:"index of" site:aldeid.com|
|related:site|Provides a list of related links (per Google's algorithm) for the site given as a parameter|related:aldeid.com provides a list of sites similar to aldeid.com|
|cache:page|Searches for a page in Google's cache|cache:aldeid.com|
|filetype:extension|Filters search results by file extension|filetype:pdf site:aldeid.com displays the list of PDF documents present on aldeid.com|
|rphonebook:name and city or state|Queries the residential telephone directory (U.S.)|rphonebook:william saw NY|
|bphonebook:name and city or state|Queries the business telephone directory (U.S.)|bphonebook:bob robinson LA|
|phonebook:name and city or state|Queries both of the directories above|phonebook:william shakespeare|
|Literal match ("")|Matches all the quoted words, in the given order|site:aldeid.com "taking fingerprint" provides a list of pages that contain the exact phrase, not the two individual words (taking, fingerprint)|
|Exclusion operator (-)|Excludes results containing the term that follows| |
|Inclusion operator (+)|Requires results to contain the term that follows| |
|Synonyms (~)|Searches the term together with related words|powerpoint ~help will also search for tips, faq, tutorial, etc.|
|info:domain|Provides information about a domain|info:aldeid.com|
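These keywords can also be combined into more targeted queries. A few illustrative examples (the domains are only placeholders):

```
site:aldeid.com filetype:pdf port                        -> PDF documents on aldeid.com mentioning "port"
intitle:"index of" "parent directory" site:example.com   -> exposed directory listings on example.com
link:aldeid.com -site:aldeid.com                         -> external sites linking to aldeid.com
```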
<urn:uuid:45b7ef01-1c1c-4253-9d50-b9aae872d06a>
CC-MAIN-2022-40
https://www.aldeid.com/wiki/Google
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00042.warc.gz
en
0.81284
986
2.921875
3
A firewall is an essential component of a network security system: it isolates and protects the network from unwanted access and malicious intrusions, acting as a barrier between external and internal networks. Organizations require robust firewalls to prevent intruders, such as hackers, Trojan attackers, and viruses, from accessing and harming a network or data center. Additionally, firewalls monitor incoming network data packets to identify and remediate various threats, including DDoS attacks, network snooping, and password cracking attacks. There are two primary types of firewalls: software and hardware firewalls. A hardware firewall is a physical device designed to be a network barrier. Once deployed to a network, it enforces security policies and access controls and inspects all outbound and inbound traffic. A software firewall, on the other hand, is a computer program created to filter malicious network traffic, prevent unauthorized network access, and protect against threats and attacks. An open-source firewall is therefore a kind of software firewall. In contrast to commercial firewall solutions, an open-source firewall is maintained and updated by a community to meet the ever-changing cybersecurity landscape.

Are Open-Source Firewalls the Best?

An open-source firewall is distributed and developed under a general public license or other open-source licenses. One of the primary reasons they rank among the best firewalls is that anyone can access the source code for free. This enables a peer-review approach, which in theory lets many individuals identify and correct flaws in the software. As a result, an open-source firewall is often more secure and better featured than many available commercial firewall solutions. Open-source firewalls are also best suited to individuals with high technical expertise, including white hat and black hat hackers and professionals who advocate for open-source operating systems. The most notable advantage is cost: open-source programs are free, which makes them well suited to small businesses. The open-source licenses under which these firewalls are developed and distributed also mean that anyone is free to copy, modify, study, and use them without restriction. Despite the benefits, there are drawbacks to using an open-source firewall:
- Sparse documentation: Open-source firewall programs are free to develop, modify, and change to meet emerging security needs. While this is a good thing, developers may not be inclined, or may lack the time, to prepare and maintain help files for open-source firewall products. Coupled with the often unintuitive interfaces, new users may find it challenging and frustrating to learn how to configure and set up an open-source firewall correctly.
- Hard to use: Accurately configuring an open-source firewall requires a high level of expertise. Most of the available open-source firewalls are configured through obscure commands and command-line interfaces, in contrast to commercial products that come with easy-to-use interfaces. Learning the commands may pose a challenge to new and home users, especially if they are not well versed in the underlying operating system.
- Lack of real-time monitoring: A significant number of open-source firewalls lack extra features like real-time monitoring, alerting, and logging. Such features may appear insignificant for individual or home use but are crucial in a corporate or business environment.
The lack of such critical features may prevent administrators from tracking security events, providing the forensics data required to investigate a security incident, or justifying security decisions backed by documented information. Although the disadvantages described above may make open-source firewalls less appealing, multiple open-source firewall solutions have gained traction and become immensely popular in different business settings. It is therefore worth identifying some of the most popular open-source firewalls for 2022.

The Best Open-Source Firewalls for 2022

1. pfSense

Most experts regard pfSense as the best open-source firewall globally. pfSense is a free, open-source firewall built on a custom FreeBSD kernel that protects vital corporate networks against intrusions and attacks. Numerous organizations rely on pfSense to prevent unauthorized or malicious individuals from accessing sensitive information. Additionally, pfSense enables secure connectivity and access to cloud networks. The developers built the product around the concept of a stateful firewall, so it offers packet filtering and features mostly found in more expensive commercial firewalls. In addition, pfSense gives companies access to a wide and comprehensive network of security solutions suited to different kinds of threat landscapes and environments. The pfSense solution unlocks access to some of the most reliable platforms, engineered to provide robust performance, stability, security, and confidence. pfSense also delivers valuable support through comprehensive documentation. Some of pfSense's key features include:
- Real-time monitoring
- Dynamic DNS, with multiple DNS clients included
- Firewall capabilities such as port/IP filtering, scrubbing, and limiting network connections
- Built-in load balancing for distributing load to several backend servers
- Network address translation for port reflection and forwarding
- Failover to a secondary in the event the primary fails, ensuring high availability
- A virtual private network that supports OpenVPN and IPsec
- A history of resource utilization to enable reporting

2. IPFire

IPFire is a Linux-based open-source firewall built on top of Netfilter to provide advanced network security for corporate business networks. Specifically, IPFire delivers extensive protection from various internet attacks, including DDoS attacks. The IPFire solution is the work of a dedicated online community of thousands of developers. Besides its powerful capabilities, IPFire is lightweight, making it easy to deploy and implement. For example, IPFire provides an intrusion detection system that users can employ to analyze network traffic and accurately pinpoint potential anomalies or exploits. Notably, IPFire lets users configure the system to block attackers automatically once attacks are detected. Like most popular firewalls, IPFire provides a web interface through which users can set or modify configurations. IPFire also permits users to configure a network to meet different requirements, such as advanced logging and graphical reports.
IPFire's key features include:
- Stateful packet inspection
- An intrusion detection system
- A proxy server with caching and content-filtering capabilities
- A virtual private network with OpenVPN and IPsec
- Wake-on-LAN (WOL) capabilities
- Dynamic DNS
- A DHCP server

3. VyOS

VyOS is an open-source firewall and network solution built on a Linux distribution. As a result, it is one of the few open-source firewall products with a unified interface for managing all functions. The VyOS open-source network solution provides access to a free routing platform that matches most of the functions found in commercially available firewall products from leading vendors. Furthermore, VyOS runs on standard systems, making it suitable as a firewall platform or router platform for many kinds of cloud deployments. VyOS gives companies a comprehensive firewall system with access to industry-standard routing protocols and support for policy-based and multi-path routing. Users can also pair VyOS with specific VPN solutions to ensure secure remote access and communications. Moreover, the unified management interface provides access to multiple applications such as strongSwan, OpenVPN, DHCPD, and Quagga. VyOS stands out from most open-source firewalls since it can be installed on a cloud platform, a virtual machine, or physical hardware. VyOS key features include:
- Quality of Service (QoS) policies, such as traffic redirection, drop tail, and fair queuing
- sFlow and NetFlow
- IPv6 and IPv4 traffic firewall rulesets
- Dynamic and static routing
- Tunnel interfaces
- URL and web proxy filtering
- DHCPv6 and DHCP server and relay
- VXLAN, static L2TPv3, SIT, IPIP, GRE, PPPoE
- Network address translation

4. Untangle

Untangle is an advanced open-source firewall solution that provides a host of security functionalities to modern digital brands and delivers a secure, powerful environment for company networks. Untangle is flexible: users can install it on a server, a dedicated virtual appliance, a public cloud, or a virtual machine and use it to secure their networks, applications, and data. It can be downloaded in various formats to suit different deployment needs, for example as a VMware image, an ISO image, or a USB image. The company also offers the same software package as a standalone hardware appliance that users can connect to their networks as a hardware firewall. Untangle is designed to simplify network security and save users time. The firewall is built to strike a balance between protection and performance, and between productivity and policy. It is therefore ideal for companies looking for a cost-effective, powerful network security product that can address emerging security challenges, and it is applicable across diverse settings, including large distributed enterprises, schools, and small remote offices. Untangle comes with different software modules that can be enabled or disabled individually. The firewall's key features include:
- Intrusion prevention
- Virus blocker
- Firewall functions
- Spam blocker
- Web monitoring

5.
Smoothwall Express

The Smoothwall Express open-source firewall delivers seven layers of application control and can work as part of a larger deployment or as a standalone package. The Smoothwall firewall can also be combined with the Smoothwall filter to give organizations a complete package for securing their online activities. Alternatively, companies can use the firewall on its own to manage network bandwidth, filter dynamic threats in real time, and provide gateway anti-malware protection. Smoothwall is one of the more interesting security tools on the market. Thousands of developers continuously develop and update the GNU/Linux-based Smoothwall solution, and it is security-hardened to minimize the risk of exploitable vulnerabilities that could adversely impact users. Smoothwall is a Linux firewall that can be configured through a web-based graphical user interface, and it requires only a little knowledge of a Linux system to install, configure, and use to secure a network. Smoothwall Express supports external/internal network filtering, demilitarized zones, local area networks, a web proxy for acceleration, and more. The key features include:
- Simple-to-use QoS
- Outbound filtering
- Lists of malicious IP addresses to deny access
- Timed access
- Port forwarding
- Support for external connectivity through DHCP Ethernet, PPPoA, PPPoE, and static Ethernet
- Automatically updated Snort rules for the intrusion detection system

6. OPNsense

OPNsense is an open-source firewall project that is free, easy to use, and built to scale. OPNsense delivers a powerful firewall that supports live views of blocked and passed IPv4 and IPv6 traffic, and it provides best-in-class intrusion detection and virtual private network solutions. Moreover, OPNsense provides multi-WAN capabilities that include state synchronization, intrusion detection, and hardware failover. Installing OPNsense enables two-factor authentication throughout the secured system, for users as well as for services like a VPN gateway. Unlike many open-source projects, OPNsense provides multi-language support for different users and has an intuitive user interface designed for easy development and access. Most of the security solutions found in a commercial firewall are included in OPNsense, which is built to provide a rich set of security offerings with the advantages of verifiable, open sources. Some of the OPNsense features include:
- Hardware failover and high availability
- DNS forwarder and DNS server
- Inline intrusion prevention and detection
- Built-in monitoring and reporting tools
- VPN solutions
- Support for various plugins

7. Endian Firewall Community

The Endian Firewall Community (EFW) is an open-source firewall and UTM solution that provides a unique combination of security capabilities. It comes as a free version, but the developers do not provide additional support. Companies can use the Endian open-source firewall to establish email and web security through powerful built-in analytics. The download is a turnkey product that includes unified threat management and open-source anti-virus, and it provides powerful VPN services through which users can unlock extra support. Endian's key features include multi-WAN, QoS, intrusion prevention, and email security.
8. ClearOS

The ClearOS open-source firewall is based on CentOS and is designed to transform a standard PC into a dedicated gateway/internet server and firewall solution. ClearOS comes in three editions: ClearOS Community, ClearOS Business, and ClearOS Home. The Community edition is free, but users must purchase a subscription for the Business and Home editions. ClearOS is well suited to SMBs and startups. It comes as a complete network firewall solution whose functionality can be extended by installing various applications, among them a DNS server, DMZ support, a DHCP server, and a bandwidth manager. These applications enable various functions that can be configured through a web-based interface. The firewall's most notable features are:
- Bandwidth QoS manager
- Content and web proxy filtering
- Multiple security levels
- File-sharing usage management
- Intrusion detection and prevention systems
- Firewall, security, and networking functionality
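At their core, all of the products above perform the same kind of rule-based packet filtering: a packet is matched against an ordered rule list and the first match wins. The following is a rough conceptual sketch in Python of that decision logic, not the engine of any particular product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                     # "allow" or "deny"
    src_prefix: str                 # source address prefix; "" matches any
    dst_port: Optional[int] = None  # None matches any destination port

RULES = [
    Rule("deny", "203.0.113."),     # block a known-bad range
    Rule("allow", "10.0.0.", 443),  # LAN clients may reach HTTPS
    Rule("deny", ""),               # default deny: matches everything
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    # First matching rule wins, as in most firewall rule tables.
    for rule in RULES:
        if src_ip.startswith(rule.src_prefix) and rule.dst_port in (None, dst_port):
            return rule.action
    return "deny"

print(filter_packet("10.0.0.7", 443))    # allow
print(filter_packet("203.0.113.9", 80))  # deny
```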
<urn:uuid:0f18b48b-ddd0-44ce-96c6-2527152f534d>
CC-MAIN-2022-40
https://cyberexperts.com/best-open-source-firewall/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00243.warc.gz
en
0.900637
2,913
3.265625
3
Some Commonly Used Terms in Computers

Programme: It is a set of instructions given to the computer in a particular sequence for solving a given problem. In other words, it contains a set of actions to be performed by the computer on data to produce the required result. Programming is done in one of the computer languages.

Software: It is a collection of programmes written to bring the hardware of a computer system into operation. We cannot do anything useful with the computer hardware on its own; it has to be driven by certain utility programmes, called software, which are stored in the computer system. There are two types of software:
- Application Software: It refers to programmes or sets of programmes that perform a specific processing application, e.g., payroll and inventory control.
- System Software: It consists of sets of programmes that act as an interface between the user and the hardware, e.g., operating systems like Windows, Mac OS, and UNIX.

Hardware: It is the term given to the machinery itself and to the various individual pieces of electronic equipment.

Liveware: The users working on the system are termed 'liveware'.

Firmware: It is defined as software embedded into hardware, e.g., a ROM holding the basic input-output system (BIOS).

Compiler: A programme which translates a high-level language programme into machine language.

Interpreter: A programme that translates each instruction of a high-level language and executes it before passing on to the next instruction.

Assembler: A programme which converts an assembly language programme into a machine language programme. It is system software.

Multiprocessing: In this type of processing, the CPU has a number of processors which operate in parallel, thereby allowing simultaneous execution of several programmes.

Multiprogramming: This type of processing enables more than one programme to reside in central memory at the same time and share the available processor time and peripheral units.

Distributed Data Processing: It is also called decentralized processing. This approach involves a network of computers interconnected by communication lines, where each remote location has a small computer or minicomputer for input-output communication with a central computer and some local processing.

Bit: It is the basic unit of digital information. It can have only two values: one and zero.

Nibble: A combination of four bits.

Byte: A combination of eight bits.
- 1 Kilobyte = 1024 bytes
- 1 Megabyte = 1024 x 1024 bytes
- 1 Gigabyte = 1024 x 1024 x 1024 bytes

Word: A combination of two or more bytes.

Database: It is a general collection of data shared by a variety of users. In particular, it has the following features:
- Redundancy of data is eliminated.
- Data is independent of any programme.
- Data is usable by many users simultaneously.

Time Sharing: It is the concurrent use of a single computer system by many independent users. In time sharing, many terminals can be attached to a central computer, and the terminal users share time on the computer. The operating system allocates CPU time among the various users by giving each a time slice, each user operating independently without awareness of use by others.

Microprocessor: It is a single-chip device which is a complete processor in itself and is capable of performing arithmetic and logical operations.
Modem: An electronic device used to convert a computer's (digital) electronic signals to communication channel (analog) electronic signals and vice versa. It is used in distributed data processing where terminals are joined by a telecommunication link to the host computer.
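The storage-unit relationships listed above are easy to verify with a short calculation; the following Python sketch simply restates them:

```python
BITS_PER_BYTE = 8
KILOBYTE = 1024            # 2 ** 10 bytes
MEGABYTE = 1024 ** 2       # 2 ** 20 bytes
GIGABYTE = 1024 ** 3       # 2 ** 30 bytes

assert MEGABYTE == 1024 * KILOBYTE
assert GIGABYTE == 1024 * MEGABYTE

# A nibble is 4 bits, so every byte holds exactly two nibbles.
print(GIGABYTE, "bytes =", GIGABYTE * BITS_PER_BYTE, "bits")
```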
<urn:uuid:ac7877cb-7182-4277-ae2f-9deccd7a4c47>
CC-MAIN-2022-40
https://www.knowledgepublisher.com/article/649/some-commonly-used-terms-in-computers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00243.warc.gz
en
0.919622
769
4.125
4
Two-factor authentication is an extra layer of security, a form of "multi-factor authentication", that helps safeguard against unwanted access to any login or account. It requires two of the following three types of authentication factors.
- Something you know: A password is the most common security method, and something you know or keep in your head.
- Something you have: An email account or a cell phone is something you have or own. With email- or cell-phone-enabled 2FA, a correct password plus a unique numeric code sent to your device are both required before access to your account is granted.
- Something you are: Biometrics are the third authentication method. For example, many mobile apps like LifeSite Vault are beginning to take advantage of the existing fingerprint technology on your smartphone to verify your identity each time you access the app.

Why use two-factor authentication?

While a strong password can help keep your account secure, usernames and passwords have become increasingly susceptible to hacking and phishing attacks. With two-factor authentication enabled, thieves must gain access to your password (something you know) as well as your device (something you have) or fingerprint (something you are). Without your physical factors of authentication, remote intruders won't be able to gain unauthorized access. This extra step protects against dangers like remote hacking. You can set up two-factor authentication for your LifeSite account in your Settings. Make sure your digital security is as robust as possible by enabling 2FA on all your accounts!
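As an illustration of the "something you have" factor, time-based one-time codes like the ones generated on your device can be produced with a few lines of Python using the third-party pyotp library (a generic sketch, not LifeSite's actual implementation):

```python
import pyotp

# The shared secret is provisioned once, e.g. via a QR code,
# and stored on both the server and the user's device.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()               # 6-digit code, changes every 30 seconds
print(code, totp.verify(code))  # True while the code is still valid
```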
<urn:uuid:a5cdeb88-1173-4a68-8884-5f1687685262>
CC-MAIN-2022-40
https://support.lifesite.co/hc/en-us/articles/360000957794-Two-Factor-Authentication-2FA-
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00243.warc.gz
en
0.927242
314
3.171875
3
How the Internet of Things is going to change childhood is hard to imagine. The IoT (Internet of Things) describes the way objects can "talk" to each other via technology. Your smart watch talks to your phone. Your apps can talk to your new smart thermostat. Widespread fear about technology ruining childhood is in large part thanks to sensationalism. New technologies are often painted as either a soul-consuming monster or the flawless face of advanced society. Somehow, people often forget the role of parenting. Technology advances exponentially fast, but we won't have connected robot nurses for several years. The final factor in how a child will be affected by technology still comes back to the parents. As parents, it's important to understand what technology is and what it means for children as a life experience.

The Beautiful Positives of Connected Children

Yes, the Internet of Things does offer great opportunities for kids. With the rate at which technology is developing, products that help children learn abound. Kids can (and have learned to) code on ever-cheaper digital platforms like the Raspberry Pi. If you thought Lego helped creativity and problem-solving, picture those bricks hooked up to wires, screens, and batteries. Kids don't just create rocket ships; they can make apps that drive rocket ships. The internet is filled with resources for kids to learn through technology. Studies are even finding that technology greatly improves a child's ability to learn: it keeps them engaged and allows them to work on their own. Kids today can learn fast, faster than ever before. For those who need to keep track of health information, the IoT is a godsend. Health monitoring can be vital for children, especially those too young to understand what it means. The perfect example is Teddy the Guardian, the smartest stuffed animal around. It checks a baby's heart rate, temperature, and oxygen saturation just by receiving hugs, and it can alert mommy and daddy to any signs of trouble. Smart, practical tech exists. It doesn't have to replace parenting or interfere with a child's growth, and few parents could say "no" to something as logical as Teddy the Guardian.

The terrible, horrible truth about connectivity

The FiLIP watch takes helicopter parenting to a stalkerish low. It grants parents "the peace of mind they crave, while providing kids the freedom they need to be kids." It embodies the possibility that parents will go too far. FiLIP lets parents know where their children are at every point in the day. It is designed to give parents the ability to talk to their kid whenever they feel the need. Tech that acts like a leash on a child is not just unnecessary, but also kind of creepy. It could be as detrimental as hooking them up to a FitBit and counting their every step and calorie. It highlights a very strange disconnection: many parents who fear technology are the same ones who feel compelled to hook children up to a perpetual monitoring system. A study printed in the October 2014 issue of Computers in Human Behavior also shows that interacting with technology makes it harder to interpret emotion. By taking two groups of children and allowing one to consume large amounts of media while restricting the other, researchers found a very strong correlation: those who consumed less media could read facial and physical expressions more easily.
However, another study, from professor Doris Bergen of the Miami University Department of Educational Psychology, may accidentally shed some light on this entire conversation. Some see technology as a sign that parents and educators are becoming lazy. Convince your kid to brush their teeth with a game app. Consult your phone to see when you need to talk to your kid. Perhaps sensors could be used to track the eyes of a student to get a better idea of their ability. What if, as one major news outlet suggests, these numbers were used to track and grade engagement levels in school? These numbers would help grade students on effort and send the better performers to university. Don't worry: that model is far too expensive for any school in the near future to even consider. Technology in the classroom is not hyper-connected, and will not be for many, many years. The IoT has not brought doomsday to education and childhood, quite yet.

The Grey Matter

The brain of an infant grows fast. Within the first few years of life, a lot of the rules are set for things to come. While the infant brain has been extensively mapped, not much information actually exists on the relationship between babies and tech. Studies are not always conclusive; research has shown tech to be both a blessing and a curse. As Bergen explains, "If young children spend more time in technology-augmented play, this type of engagement may result in fewer interactions with parents, other caregivers, other children, and even with physical objects in the environment. Thus, brain developmental patterns and enactive cognition in such children may differ from that of children in past generations." One of the biggest fears parents (and bystanders) seem to have about children and the IoT is not the technology itself. Rather, they fear how adults will use it. Given how little hard evidence exists, and that new technology is popping up every day, the best option may be to step back. Many who argue that connected technology is good also follow the "interaction not isolation" mantra. The world doesn't need MIT studies to tell it that learning from experience and interaction can be better than learning through video and games. There is no "all-in" or "out" with technology and children. Adults of all ages often see new technologies as newfangled nonsense they would never have been allowed to play with as children. Actually, most of us did play with tech as children. What twenty- or thirty-year-old hasn't owned a Game Boy? Who hasn't heard of AOL chat rooms, or Napster? The Internet of Things is very different from the technology of yesteryear; yet, perhaps, today's children are not as different as they seem. One vital step forward has been the removal of the mouse. Removing the mouse and big, clunky keyboards has lifted the wall between user and technology. The "swipe" has been the focus of many psychology essays. A four-year-old can now interact with a computer in a highly direct manner. This could remove mental barriers between the self and technology itself, allowing kids to grow up and completely change our ideas about technology. Growing up in a connected world also means data. The topic of protecting children's data in a connected world is a paper all its own. Data generation and analysis, however, do mean greater effectiveness and opportunity. Playthings, education, health: these all benefit from the circular nature of data. The big looming question is the time and manner in which children interact with technology.
Research has shown that children only begin to understand the symbolic nature of a screen at about three years of age. That may be the only golden number when it comes to the Internet of Things and your child. Image credit: Marcus Kwan
<urn:uuid:a4f5cadd-8a9e-4911-99fe-dc667e821af8>
CC-MAIN-2022-40
https://dataconomy.com/2016/02/the-iot-for-kids-how-technology-affects-our-children/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00243.warc.gz
en
0.964851
1,491
2.640625
3
Sven Morgenroth on Paul's Security Weekly #720

Appearing on the Paul's Security Weekly cybersecurity podcast, Invicti security researcher Sven Morgenroth talked about the many challenges of secure authentication and authorization in web applications, complete with practical examples of real-life vulnerabilities. Watch the full interview below and read on for an overview of authentication and authorization security.

Authentication, authorization… What's the difference?

While it may seem surprising that such fundamental topics are still often misunderstood and confused, authentication and authorization not only sound and look similar but are also closely related. Let's start with clear working definitions:
- Authentication is about proving your identity. This could mean entering a username and password, logging in via an SSO flow, or providing a unique access key.
- Authorization is about checking whether you have the right permissions. Once you are authenticated, the application should verify that you are allowed to access a specific resource or operation, typically based on your role in the application.
Because they are usually performed together and also go wrong together, authentication and authorization combined are sometimes called simply "auth" (which is also easier to spell and faster to type). With modern enterprise web applications so dependent on correctly enforcing access control to protect sensitive data, auth-related vulnerabilities and attacks are at the forefront of web security.

When auth goes wrong – common types of vulnerabilities

Implementing effective and comprehensive access control is always challenging, and there are many factors at play that make it even trickier. All too often, security is an afterthought in the software development process, and enforcing access control is no exception. For easier and faster development, entire applications might be built without any access restrictions and then have a login form bolted on as a final step, which increases the risk of auth-related security gaps. Distributed software architectures take the auth challenge to a whole new level, with requests often passing through multiple services and interfaces. Ensuring proper access control across different contexts is extremely difficult, especially when it needs to span modules developed by entirely separate teams. When you add problems with passing auth information across APIs, data formats, origins, and physical systems, it's no surprise that auth-related vulnerabilities are so common. Here are just a few typical examples, many previously discussed in detail on this blog.

Insecure value comparisons

Imagine you have a login form where the username and password values are sent directly to a vulnerable PHP script that uses typical loose comparisons with the == operator. For reasons explained in detail in this post about PHP type juggling vulnerabilities, comparing any non-numeric string to 0 with this operator will (by default, in PHP versions before 8.0) give the result TRUE. So in the simplest case, just sending the boolean TRUE in a JSON login request may be enough to bypass authentication. Using strict comparisons with the === operator or (better still) a dedicated comparison function is usually enough to avoid this class of vulnerabilities, but in large software projects, even such seemingly trivial errors can sometimes make it into production. See our earlier post to learn more about a real-life PHP type juggling vulnerability in CMS Made Simple.
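The "dedicated comparison function" advice applies well beyond PHP. As an illustrative sketch in Python (not code from the vulnerable application), a secret such as a token should be compared with a constant-time helper rather than a bare equality operator:

```python
import hmac

def token_is_valid(supplied: str, expected: str) -> bool:
    # hmac.compare_digest runs in constant time and never coerces
    # types, avoiding both timing leaks and loose-comparison surprises.
    return hmac.compare_digest(supplied.encode(), expected.encode())

print(token_is_valid("abc123", "abc123"))  # True
print(token_is_valid("0", "abc123"))       # False, no type juggling here
```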
The pitfalls of path-based auth

Another bad practice that may lead to auth bypass attacks is implementing access control by checking for a specific path. For example, an application might check whether the user is authorized to access the admin panel simply by comparing the path requested by the user with the path of the admin panel. Without input sanitization, it may be possible to bypass auth by inserting directory traversal strings to request a path that looks different (so it isn't blocked) but eventually resolves to the same admin panel URL. This is exactly what happened with the Oracle WebLogic Server authentication bypass vulnerability that we covered in detail back in 2020. Reliance on path comparisons can lead to similar path/directory traversal vulnerabilities due to parsing inconsistencies across different servers. For example, a proxy server might be set up to restrict access to a specific path by returning an error code for unauthenticated users. If an attacker is able to slip in a path traversal string, the proxy might be fooled into allowing access to a path that looks different but actually resolves to the restricted resource.

Misplaced trust in auth sources

In complex architectures, deployments, and data flows, developers will often find themselves in situations where auth-related decisions are clearly someone else's problem. This is especially true when dealing with secondary contexts, for example when handling requests received via an API rather than directly from users. In applications assembled from hundreds of microservices, user auth is often done only by the front-end API, so the back-end services have no idea who issued what request. If the front-end code has a path traversal vulnerability that lets attackers manually specify an endpoint that would normally be inaccessible, the back-end will duly accept and execute the request, potentially revealing sensitive data. Deciding who and what your code should trust is a design decision that can have far-reaching consequences. One real-life vulnerability (long since fixed) comes from 2017, when the Uber website was found to be vulnerable to account hijacking via subdomain takeover. Security researcher Arne Swinnen noticed that one of Uber's subdomains was actually pointing to another domain, at that time unclaimed, which he was able to claim. Uber's auth flow included setting a session token that was valid for all Uber subdomains, including the one Arne had claimed. By redirecting users to that domain, he would have been able to read session tokens and take over user accounts. And all because of implicit trust in domain-level auth.

Missing function-level access control

Related to misplaced trust is missing function-level access control (and if the name looks familiar, that's because it used to be a separate OWASP Top 10 category). This class covers failures to check whether a user is authorized to call a specific function. We wrote about a real-life example of this a while back when analyzing a vulnerability in Maian Support Helpdesk. In this case, users could call certain API endpoints that were not visible in the user interface yet were still accessible, including some admin operations. Without systematic function-level access control, such omissions can easily evade testing and allow attackers to obtain unauthorized access.
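A common way to make such checks systematic is to attach the authorization test to the function itself rather than to the UI that exposes it. The following is a minimal, hypothetical Python sketch of that idea (the role names and functions are invented, not the helpdesk's actual code):

```python
from functools import wraps

def require_role(role):
    """Reject the call unless the authenticated user holds the given role."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise PermissionError(f"'{role}' role required")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_ticket(user, ticket_id):
    return f"ticket {ticket_id} deleted"

print(delete_ticket({"roles": ["admin"]}, 42))  # allowed
# delete_ticket({"roles": ["agent"]}, 42)       # raises PermissionError
```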
Deciding where and how to implement access control can be especially tricky when working with GraphQL APIs, where there are many ways to access the same data through different queries. Ideally, each query should only return data that the current user is authorized to access, but this is easy to get wrong, and any error at this stage could allow attackers to obtain sensitive information. This is especially true when dealing with secondary contexts, which is common when adding a GraphQL layer to an existing REST API.

At the level of HTTP requests, access control depends on securely generating the right access tokens and sending them to the right system at the right time. Whenever attackers are able to get their hands on a valid access token (such as a session cookie), you run the risk of session hijacking. The risk of client-side token theft is especially high with complex single sign-on flows, where misconfigurations can allow malicious actors to intercept access tokens via redirects or read them from server logs. Password reset tokens are another critical security mechanism that is prone to mismanagement and abuse. For example, an application might expose an API endpoint for generating password reset tokens. Any vulnerability that lets attackers call that API may also allow them to reset passwords for known user accounts, leading to account takeover. As discussed above, such vulnerabilities may be caused by confusion around authorization in secondary contexts or simply by assuming that because an endpoint is private, it cannot be called by unauthorized users.

Security best practices for implementing authentication and authorization

If you look at the OWASP Top 10 for 2021, you will see that broken access control is the #1 cause of web application security issues, with no fewer than 34 weaknesses grouped under this category. As shown above, implementing and enforcing authentication and authorization is never easy, and there are many ways it can all go wrong. Ensuring secure access is all about careful planning, secure implementation, and regular verification. Follow these best practices to minimize the risk of auth-related vulnerabilities:
- Include auth in all planning and design work – many vulnerabilities result from bolting on access control at the last minute.
- Keep access control logic at the application level – denying resource access at the server level is never bulletproof.
- Follow secure coding practices and check access control in code audits to avoid code-level auth vulnerabilities.
- Whenever possible, use specialized libraries with a proven security track record rather than rolling your own.
- Routinely check your access controls across the entire development process, up to and including production.
- Ensure you have control over any third-party infrastructure that could be abused to break auth flows.
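As a closing illustration of why application-level checks beat naive path matching, here is a minimal Python sketch (not WebLogic's actual fix) that resolves a requested path before comparing it against a restricted prefix, so traversal strings cannot disguise the real target:

```python
import posixpath

RESTRICTED = "/admin"

def is_admin_request(raw_path: str) -> bool:
    # Normalize first: "/public/../admin/panel" resolves to "/admin/panel",
    # so a traversal string can no longer hide the real destination.
    resolved = posixpath.normpath(raw_path)
    return resolved == RESTRICTED or resolved.startswith(RESTRICTED + "/")

print(is_admin_request("/public/../admin/panel"))  # True, still caught
print(is_admin_request("/public/index.html"))      # False
```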
<urn:uuid:bb26c157-1708-4b35-ab0f-448af6ae4e0d>
CC-MAIN-2022-40
https://www.invicti.com/blog/web-security/how-to-avoid-authentication-and-authorization-vulnerabilities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00243.warc.gz
en
0.926259
1,940
2.765625
3
A multitude of data breaches occurred in 2013 due to inadequate security protections. While this is distressing, there are lessons we can learn to prevent it from happening again in 2014.
- Back Up Your Data to Protect Yourself from Ransomware. Cybercriminals use malware exploits to encrypt your data or lock you out of your PC. They then demand payment in exchange for the decryption key to restore access. Cryptolocker was the biggest ransomware threat of 2013: its developers collected $30 million in ransom fees within a period of 100 days. Security researchers predict the ransomware trend will migrate to mobile devices in 2014, since many mobile-device users don't practice basic security measures, making them vulnerable to ransomware. When it comes to ransomware, data backup is the best way to avoid having to pay ransom fees. If you don't have a recent backup, you'll have to pay the ransom or lose your data. This includes the data on your mobile devices.
- Use Strong Passwords. Data breaches in 2013 impacted many users who didn't follow proper security measures, such as changing their passwords every few months. LivingSocial, Adobe, and Evernote all experienced major data breaches in which tens of millions of user accounts and passwords were compromised. Many of these breaches did damage because the stolen passwords were weak or stored in insecure ways. Make sure your passwords are secure by choosing a different password for each of your online accounts; if a cybercriminal gains access to one, at least the rest will be safe. Your passwords should consist of letters, numbers, and symbols to resist hacking attempts.
- Use Caution When Downloading Apps to Prevent Infections from Mobile Malware. Mobile malware will continue to grow at a rapid rate in 2014. According to FortiGuard Labs, 50,000 malicious Android samples were detected in January 2013. A threat called AndroRAT hides Trojan horses in various applications. The RAT (remote administration tool) allows the attacker to send SMS text messages from the infected mobile device, direct the device's browser to a URL, monitor SMS texts and phone calls, or perform a variety of other malicious actions to compromise personal information. The mobile malware threat isn't limited to personal data theft and adware; there's a huge concern regarding banking transactions performed on smartphones. Malware such as FAKEBANK and FAKETOKEN has been developed to steal financial data from mobile-device users. Avoid financial and personal data theft by using caution while installing apps, and only download applications that are approved by your subscriber or smartphone vendor.
- Ensure Data Is Properly Deleted or Encrypted Before Disposing of Computer Devices and Confidential Information. When recycling or selling old computers or hard drives, you must ensure all your data is erased; if the data isn't erased properly, it can easily be retrieved. In addition, data stored on media must be protected with encryption or a strong password to avoid theft. When disposing of sensitive paper files, shred them first with a crosscut shredder. Always practice adequate security measures to protect your confidential information.
To learn more about safeguarding your data, give us a call at (888) 330-8808 or send us an email at [email protected].
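As a practical footnote to the strong-password tip above, a random password of letters, numbers, and symbols can be generated in a few lines of Python using the standard secrets module (a generic sketch):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    # secrets draws from a cryptographically secure random source,
    # unlike the general-purpose random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```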
<urn:uuid:e2368136-c05b-4cfe-bc52-2d9d1fb8a065>
CC-MAIN-2022-40
https://integrisit.com/2013-will-go-down-in-history-as-the-year-of-data-breaches/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00243.warc.gz
en
0.89482
688
2.65625
3
Static routing tips

When your network goes beyond basic static routing, here are some tips to help you plan and manage your static routing.

Always configure a default route
The first thing configured on a router on your network should be the default route, and where possible the default routes should point to one gateway or very few gateways. This makes it easier to locate and correct problems in the network. By comparison, if one router uses a second router as its gateway, which uses a third for its gateway, and so on, one failure in that chain will appear as an outage for all the devices downstream. By using one or very few addresses as gateways, any outage on the network will either be very localized or network-wide; either is easy to troubleshoot.

Have an updated network plan
A network plan lists your different subnets, user groups, and servers. Essentially, it puts all your resources on the network and shows how the parts of your network are connected. Keeping your plan updated will help you troubleshoot problems more quickly when they arise. A network plan helps your static routing by eliminating potential bottlenecks and aiding troubleshooting of any routing problems that come up. You can also use it to plan for the future and act on any changes to your needs or resources more quickly.

Plan for expansion
No network remains the same size. At some point, every network grows. If you take future growth into account, there will be less disruption to your existing network when that growth happens. For example, allocating a block of addresses for servers can prevent having to re-assign IP addresses to multiple servers just because a new server was added. With static routing, if you group the parts of your network properly, you can use network masks to address each part of your network separately, as the sketch below illustrates. This will reduce the amount of administration required both to maintain the routing and to troubleshoot any problems.

Configure as much security as possible
Securing your network through static routing methods is a good low-level way to defend both your important information and your network bandwidth.
- Implementing NAT to obscure your IP addresses is an excellent first step.
- Implement black hole routing to hide which IP addresses are or are not in use on your local network.
- Configure and use access control lists (ACLs) to help ensure that only valid users are using the network.
All three features limit access to the people who should be using your network and obscure your network information from the outside world and potential hackers.
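The "group parts of your network with masks" advice from the expansion tip is easy to prototype with Python's standard ipaddress module. A hypothetical addressing plan (the prefixes are examples, not a recommendation for any specific network):

```python
import ipaddress

# Carve a /16 block into per-group /24 subnets, leaving spare
# subnets unassigned so future growth needs no renumbering.
block = ipaddress.ip_network("10.20.0.0/16")
subnets = list(block.subnets(new_prefix=24))

servers = subnets[0]   # 10.20.0.0/24 reserved for servers
users = subnets[1:5]   # 10.20.1.0/24 through 10.20.4.0/24

print(servers, users[0])
print(ipaddress.ip_address("10.20.0.25") in servers)  # True
```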
<urn:uuid:34d8b4f9-e609-4228-a2f6-a7eb82edd453>
CC-MAIN-2022-40
https://www.fortinetguru.com/2016/06/static-routing-tips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00243.warc.gz
en
0.909041
600
2.671875
3
Today's cars and trucks travel using an internal combustion engine that burns gasoline or other fossil fuels, which has caused some serious concerns. According to the United States Environmental Protection Agency, a conventional passenger car emits about 4.1 metric tons of carbon dioxide per year. The number may vary based on the vehicle's fuel, condition, and the number of miles driven per year. This has not only negatively impacted the environment but also raised serious concerns for human health, the economy, and global warming. Transport alone is responsible for around 30% of the EU's total CO2 emissions, of which more than 70% comes from road transport. This issue is enormous to handle and needs a strategic plan to control. Governing authorities have taken measures to control vehicle pollution by setting up standard rules and regulations across the globe. Despite decades of efforts to control pollution caused by automobiles, people are still living in areas with chronic smog problems, including metropolitan cities across the globe. As per our secondary sources, two thirds of deaths from air pollution in India were attributed to exhaust emissions from diesel vehicles. There were nearly 385,000 deaths in the country in 2015 due to air pollution, and on-road vehicles were responsible for approximately 50% of them. So, what could be the solution to control or completely eliminate the pollution caused by automobiles? There are two broad ways to minimize the CO2 emissions of vehicles: make conventional engines cleaner and more efficient, or replace them with zero-emission powertrains altogether. Let's see how electric vehicles, which are gaining much attention and commercial traction among users, will impact the conventional automotive sector. The concept of zero-emission vehicles goes back to the early 2000s. However, the concept of electric vehicles took a leap when Tesla came into the picture and the Lotus Elise-based, lithium-ion battery-powered Tesla Roadster was introduced in 2008. Now the electric vehicle ecosystem is huge and incorporates numerous stakeholders, including vehicle manufacturers, battery manufacturers, technology integrators, and others. We predict that there will be more than 22 million passenger electric cars by the end of 2025, globally. This growth is majorly attributed to the changing mindset of consumers towards lower CO2 emissions, as well as mandatory zero-emission vehicle programs and other supportive rules and regulations imposed by governments across the globe. Now, how will this growth in demand for electric vehicles affect the conventional automotive industry? Let's analyze it for each of the stakeholders present in the value chain: Automobile manufacturers are now diverting their focus towards electric vehicles. In April 2021, Toyota announced its BEV series, Toyota bZ. A concept version of the first model in the series, the Toyota bZ4X, is expected to be introduced in Shanghai. The company plans to launch 15 BEVs, including 7 from the bZ BEV series, by 2025, globally. The company is taking initiatives to set up a full line-up of electrified vehicles that reduce carbon emissions, based on the concept of introducing sustainable vehicles to the market. Dealers will have to embrace the new technology and learn to sell both conventional cars and electric vehicles. A diversified skill set will be required among sales representatives, who must be well aware of product descriptions, benefits, and usage.
Suppliers will need to re-invent their business models as per the emerging trends in the automotive industry. Failure to do so can compromise their survival in the industry. Incentives and subsidies have created a positive impact among consumers. The increasing number of charging stations network, technological advancements such as emergence of supercharging, battery swapping stations, etc., and superior features of electric vehicles are expected to boost the demand for electric vehicles among consumers. Governing authorities have welcomed the concept of electric vehicles with open arms as it significantly reduced the carbon footprint and other concerns caused by conventional automobiles. Initiatives such as subsidies or incentives, removal of tolls on expressways, and priority of parking spots for electric vehicles will promote the adoption of electric vehicles. Other stakeholders include market players who have taken a keen interest in the electric vehicle concept and have been actively participating in the industry through some breathtaking initiatives. For instance, Uber – one of the largest rideshare companies recently introduced its zero-emission incentive and Uber Green category which offers its riders an option to choose drivers with electric cars. The company is also planning to include only electric cars in its fleet by 2030. Envoy Technologies is a vendor who connects real estate property owners to electric vehicle fleets with which they can promote zero-emission mobility within their premises. The company offers a platform wherein users can raise requests to reserve electric vehicles on demand to travel from one place to another within the boundaries. The market for electric vehicles surely has a great future ahead, however it demands supportive factors such as awareness among consumers, regulations, promotional activities from vendors as well as governments, and so on.
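To make the per-car emissions figure at the top of this piece concrete, here is a minimal Python sketch of the arithmetic behind such estimates. The 8,887 g CO2 per gallon combustion factor is the EPA's published figure for gasoline; the annual mileage and fuel economy are assumptions you can vary, which is largely why published per-car figures differ (4.1 metric tons here, around 4.6 in other EPA material).

```python
# Back-of-the-envelope tailpipe CO2 for a gasoline car.
# 8,887 g CO2/gallon is the EPA's combustion factor for gasoline;
# the mileage and fuel-economy inputs below are assumptions.
GRAMS_CO2_PER_GALLON = 8_887

def annual_co2_tonnes(miles_per_year: float, mpg: float) -> float:
    """Estimated tailpipe CO2 in metric tons per year."""
    gallons_burned = miles_per_year / mpg
    return gallons_burned * GRAMS_CO2_PER_GALLON / 1_000_000  # g -> t

# Example inputs close to a "typical" passenger car profile.
print(f"{annual_co2_tonnes(11_500, 22.0):.1f} metric tons CO2/year")  # ~4.6
```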
<urn:uuid:d90421f8-a382-4f53-8c65-2ad89c00564f>
CC-MAIN-2022-40
https://www.alltheresearch.com/blog/electric-vehicles-are-expected-to-disrupt-the-conventional-automotive-ecosystem-in-coming-years
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00443.warc.gz
en
0.967654
992
3.59375
4
VPN and Cyber Security

In the modern world, more and more people think about online safety. Cybercrime is developing rapidly: malware attacks, suspicions that a global cyberwar is unfolding between states, thefts of Internet users' private data, numerous hacker attacks, and a huge number of online fraud cases, including phishing scams. Inevitably, you start thinking about ways to protect yourself online.

(Infographic source: https://security.harvard.edu/infographic-why-use-vpn)

The Internet is not only a wellspring of incredible resources but also a source of crime. However, there are many ways to protect yourself from malicious activity. A virtual private network, also known as a VPN, is one of the most popular. If you want to learn more about VPNs, have a look at a comparative review such as ExpressVPN vs NordVPN.

Safety Comes First

Concerns about software security grow with each passing year. Internet security experts are trying to convey the possible risks and threats to people, as well as the urgent need for a solid foundation of personal cybersecurity, which is indispensable in everyday life. Although Internet security is developing as quickly as computer performance, it remains an open question whether ordinary users are aware of the available protection methods and whether they apply that knowledge in practice.

Modern cybersecurity can be described as a set of methods and tools for protecting the security of Internet connections and the integrity of data. Now that hackers keep inventing new attack methods and artificial intelligence systems are developing rapidly, the risks and potential threats to any individual have multiplied.

If you create an encrypted connection and start using a virtual private network service, all your data will travel through that channel, which is not available to third parties. They can no longer easily monitor your actions or see your real IP address.

There are many reasons to use a VPN service:

1) Visit blocked websites. If you live in or travel to a country where your favorite websites and resources are banned by the government, you won't be able to visit them in the usual way. A VPN helps you avoid those restrictions.

2) Watch broadcasts. Content is not equally accessible everywhere. Using a VPN, you can watch broadcasts blocked in your country; if you travel somewhere these resources are unavailable, you can bypass the restriction the same way.

3) Avoid Internet service provider monitoring. With a VPN, your ISP can no longer track your activity, limit the range of resources you visit, charge extra for premium sites, or prohibit downloads from file hosting or torrents.

4) Security when using Wi-Fi. If you actively use wireless Internet, a VPN service is a must. Free Wi-Fi is available in almost any restaurant, shopping center, or cafe, and these networks are known for their vulnerability. Many are controlled by hackers, and even the owners can hunt for users' personal data: they can see the sites you visit, steal your passwords, and break into your electronic wallets. A VPN helps you avoid all of this.

5) Anonymity.
As soon as you open a website, your IP address becomes visible. But if you connect to the Internet through a paid or free VPN service, your real IP is hidden. Note that your activity can be tracked by more than your IP, so it can also be useful to add an anonymizer such as Tor.

Cybercriminals use very sophisticated tools to collect private data, and the worst part is that you probably won't know you have become a victim until strange transactions appear on your bank account or your card is blocked. A VPN provides additional protection against such attacks by minimizing third parties' ability to spy on you.

VPN Services for Each Country

Often, you start missing your favorite TV series and sports, entertainment, and educational shows as soon as you leave your country, for example, the UK. Or, on the contrary, you make a business trip to London and want to keep in touch with your friends, relatives, and employees. In either case, there are compelling reasons to use a VPN in the UK, or a service that changes your IP to a UK one.

Thanks to a UK VPN service, customers can use UK servers to communicate with the global Internet. Wherever you are, you get instant access to secure private communication channels from any computer or mobile device connecting to the UK VPN server. If you want to protect your personal or business data from intruders, a VPN service is a good way to do it.

It's a known fact that access to the British BBC iPlayer is blocked outside the UK. Using special VPN servers, you can bypass these blocks and watch streaming UK TV channels from anywhere in the world. A VPN gives the client full control over free access to the web from any country, and connections through VPN servers are convenient: connecting through a UK VPN server, you can be sure your web activity is carried out through encrypted VPN tunnels that preserve anonymity and connection confidentiality.

Since a huge amount of malware can help hackers find out your name, IP address, and financial information, it isn't difficult for them to build a personal profile of you, steal your private data, and use it for their own gain. A responsible attitude towards data protection, privacy, and the prevention of hacker attacks and other harmful effects on your devices is an indispensable duty of every user.

By Daniel O'Reilly
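One simple way to see the IP-masking effect described above for yourself is to ask a public "what is my IP" service for your address before and after connecting. A minimal Python sketch, assuming the free ipify endpoint (api.ipify.org) is reachable from your network:

```python
# Query a public "what's my IP" service (ipify) and print the result.
# Run once on your normal connection, then again with the VPN up.
import urllib.request

def public_ip() -> str:
    """Return the public IP address the outside world sees."""
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

print("Current public IP:", public_ip())
# If the tunnel is working, the two runs should print different
# addresses, and the second should belong to the VPN provider,
# not to your ISP.
```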
<urn:uuid:e1cf461b-e976-4c6b-adb0-cf013c64e0a7>
CC-MAIN-2022-40
https://cloudtweaks.com/2021/12/vpn-helps-cyber-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00443.warc.gz
en
0.938512
1,270
2.921875
3
IBM Mainframe Communications Concepts

The IBM Mainframe Communications Concepts course provides an overview of traditional SNA and TCP/IP communication protocols and the logical and physical components associated with them.

Audience: Help desk operators, junior and senior operators, network and system programmers, and other personnel requiring knowledge of IBM mainframe networking.

Prerequisites: Basic knowledge of z/OS and networking concepts.

After completing this course, the student will be able to:
- Identify and understand the protocols used by mainframe networks.
- Identify mainframe networking hardware.

Topics:
SNA Subarea Networks
Logical Units and Terminals
Physical Units and Controllers
Defining and activating resources
Sessions and the SSCP
SNI and CDRMs
LU 6.2 and APPC
DLU Requesters and Servers
Introducing TCP/IP on the Mainframe
The Mainframe OSA Adapter
Other TCP/IP Connectivity options
SNA through TCP/IP
Communications Controller for Linux (CCL)
<urn:uuid:46b06c73-5b93-4de1-afb1-53f88f7f156f>
CC-MAIN-2022-40
https://bmc.interskill.com/course-catalog/Comms24-IBM-Mainframe-Communications-Concepts.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00443.warc.gz
en
0.721043
269
2.921875
3
"Malware" is a shortened version of the words malicious software. It is defined as a generic term for any type of software or code specifically designed to exploit a computer or mobile device, or the data it contains, without consent.

Most malware is designed to bring the cybercriminal some financial gain. Whether they are after your financial account information, holding your computer files for ransom, or taking over your computer or mobile device to "rent" it out for malicious purposes to other criminals, all of these schemes involve some sort of payment to the cybercriminal. And because they are making money with malware, they continue their malicious ways.

There are a number of ways malware can get onto your computer or mobile device. You might open an attachment from someone you know whose files have already been infected. You might click a link in the body of an email or on a social networking site that automatically downloads a virus. You might even click an ad banner on a website and end up downloading a virus or malware (known as "malvertising"). Or you could get infected just by visiting a site, through what is called a drive-by download. Malware is also spread by sharing USB drives and other portable media.

And now that mobile phones and tablets are basically mini computers, cybercriminals are targeting mobile devices. They take advantage of the inherent nature of the device to spread the malware, so as a mobile user you need to be aware not only of the same tricks cybercriminals use on computers, but also of the ones specific to mobile devices. Currently, most mobile malware spreads through infected apps, so be aware of which sites you download apps from and what permissions an app requests on your device. Mobile malware can also spread via text messages (SMS): scammers send phishing messages by text (called SMiShing) to lure you into giving up personal or financial information, or to sign you up for premium text messages without your knowledge.

What does this mean for you? You need to be aware of these tricks and scams, as they could mean financial loss, reputation harm, and device damage to you and your friends. There are things you should do to protect yourself, including protecting all your devices with cross-device security software like McAfee All Access. You should also make sure to:

- Keep your operating system and applications updated, as updates often close security holes that have been exposed
- Avoid clicking on links in emails, social networking sites, and text messages, especially if they are from someone you don't know
- Be selective about which sites you visit and use a safe search plug-in (like McAfee SiteAdvisor, which is included with McAfee All Access) to protect you from malicious sites
- Be choosy about which apps you download and from which sites, and look at the permissions for what information they access on your mobile device
- Be smart and stay aware of cyber tricks, cons, and scams designed to fool you

Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
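One concrete habit that complements the checklist above: before running a downloaded installer, compare its SHA-256 hash against the checksum the publisher lists on its site, so a tampered or swapped file is caught before it executes. A minimal Python sketch; the file name and expected hash below are placeholders, not real values:

```python
# Verify a download against a publisher-supplied SHA-256 checksum.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large installers fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0123abcd..."  # hypothetical checksum copied from the vendor's site
actual = sha256_of("installer.exe")  # hypothetical downloaded file
print("OK to run" if actual == expected else "Hash mismatch - do not run")
```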
<urn:uuid:03e93ddf-1396-49d8-a402-e0409814344e>
CC-MAIN-2022-40
https://www.mcafee.com/blogs/consumer/malware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00443.warc.gz
en
0.942179
656
3.515625
4
How traditional telephony works

Traditional telephony uses copper pairs to route a call through a series of public exchanges (the PSTN) and then on to the receiving endpoint, using analogue signalling.

Why Openreach are withdrawing the network

This network has been used in the UK for well over 100 years with little change to the infrastructure. As such, the underlying hardware is degrading and maintenance is costly. Use of the PSTN has also been declining as mobile has become more popular, so the cost of upkeep is no longer justified. Furthermore, the increasing availability of broadband, which is carried over this network, means the PSTN has evolved into an almost completely digital network, leaving no requirement for traditional fixed-line telephony.

What is the future of telephony?

Openreach are now moving forward with a fibre-first philosophy, bringing superfast connectivity to more locations throughout the UK. These more reliable and robust networks will let users access broadband capable of supporting IP communications. VoIP will become the replacement for traditional telephony, particularly in businesses; however, this technology will not be provided by Openreach. Communications providers will have to administer and maintain the over-the-top voice service themselves, giving them the opportunity to bundle connectivity and voice services together and capitalise on convergence (a small sketch after the article list below illustrates the digital sampling that underpins VoIP).

Other Articles in This Series:
Beginner's Guide to Broadband
Beginner's Guide to Ethernet
Beginner's Guide to Hosted VoIP
Beginner's Guide to Mobile Device Management
Beginner's Guide to Web Security
Beginner's Guide to Email Security
Beginner's Guide to SIP Trunking
Beginner's Guide to SD WAN
Beginner's Guide to Mobile
Beginner's Guide to Network Mobile
Beginner's Guide to 5G
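As a concrete illustration of the analogue-to-digital shift described above: classic narrowband VoIP codecs such as G.711 sample speech 8,000 times per second and carry those samples in IP packets instead of sending an analogue signal down a copper pair. A minimal Python sketch of that sampling step (real deployments add companding, RTP packetisation and SIP signalling on top):

```python
# Generate one second of a 440 Hz test tone at the G.711 narrowband
# rate, the kind of sample stream a VoIP endpoint packetises and
# sends over IP in place of an analogue current on a copper pair.
import math

SAMPLE_RATE = 8000  # Hz, the classic narrowband telephony rate

samples = [
    math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
    for n in range(SAMPLE_RATE)  # one second of audio
]
print(f"{len(samples)} samples per second carry one voice channel")
```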
<urn:uuid:3f39378c-19b9-42f8-af38-e335eef5248a>
CC-MAIN-2022-40
https://digitalwholesalesolutions.com/2020/03/beginners-guide-to-telephony/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00443.warc.gz
en
0.922381
394
2.984375
3
The thickness of growth marks in primary (or “baby”) teeth may help identify children at risk for depression and other mental health disorders later in life, according to a groundbreaking investigation led by researchers at Massachusetts General Hospital (MGH) and published in JAMA Network Open. The results of this study could one day lead to the development of a much-needed tool for identifying children who have been exposed to early-life adversity, which is a risk factor for psychological problems, allowing them to be monitored and guided towards preventive treatments, if necessary. The origin of this study traces back several years, when senior author Erin C. Dunn, ScD, MPH, learned about work in the field of anthropology that could help solve a longstanding problem in her own research. Dunn is a social and psychiatric epidemiologist and an investigator in MGH’s Psychiatric and Neurodevelopmental Genetics Unit. She studies the effects of childhood adversity, which research suggests is responsible for up to one-third of all mental health disorders. Dunn is particularly interested in the timing of these adverse events and in uncovering whether there are sensitive periods during child development when exposure to adversity is particularly harmful. Yet Dunn notes that she and other scientists lack effective tools for measuring exposure to childhood adversity. Asking people (or their parents) about painful experiences in their early years is one method, but that’s vulnerable to poor recall or reluctance to share difficult memories. “That’s a hindrance for this field,” says Dunn. However, Dunn was intrigued to learn that anthropologists have long studied the teeth of people from past eras to learn about their lives. “Teeth create a permanent record of different kinds of life experiences,” she says. Exposure to sources of physical stress, such as poor nutrition or disease, can affect the formation of dental enamel and result in pronounced growth lines within teeth, called stress lines, which are similar to the rings in a tree that mark its age. Just as the thickness of tree growth rings can vary based on the climate surrounding the tree as it forms, tooth growth lines can also vary based on the environment and experiences a child has in utero and shortly thereafter, the time when teeth are forming. Thicker stress lines are thought to indicate more stressful life conditions. Dunn developed a hypothesis that the width of one variety in particular, called the neonatal line (NNL), might serve as an indicator of whether an infant’s mother experienced high levels of psychological stress during pregnancy (when teeth are already forming) and in the early period following birth. To test this hypothesis, Dunn and two co-lead authors – postdoctoral research fellow Rebecca V. Mountain, Ph.D., and data analyst Yiwen Zhu, MS, who were both in the Psychiatric and Neurodevelopmental Genetics Unit at the time of the study – led a team that analyzed 70 primary teeth collected from 70 children enrolled in the Avon Longitudinal Study of Parents and Children (ALSPAC) in the United Kingdom. In ALSPAC (which is also called Children of the 90s), parents donated primary teeth (specifically, the pointed teeth on each side of the front of the mouth known as canines) that naturally fell out of the mouths of children aged five to seven. The width of the NNL was measured using microscopes. 
Mothers completed questionnaires during and shortly after pregnancy that asked about four factors that are known to affect child development: stressful events in the prenatal period, maternal history of psychological problems, neighborhood quality (whether the poverty level was high or it was unsafe, for instance), and level of social support.

Several clear patterns emerged. Children whose mothers had lifetime histories of severe depression or other psychiatric problems, as well as mothers who experienced depression or anxiety at 32 weeks of pregnancy, were more likely than other kids to have thicker NNLs. Meanwhile, children of mothers who received significant social support shortly after pregnancy tended to have thinner NNLs. These trends remained intact after the researchers controlled for other factors that are known to influence NNL width, including iron supplementation during pregnancy, gestational age (the time between conception and birth) and maternal obesity.

No one is certain what causes the NNL to form, says Dunn, but it's possible that a mother experiencing anxiety or depression may produce more cortisol, the "stress hormone," which interferes with the cells that create enamel. Systemic inflammation is another candidate, says Dunn, who hopes to study how the NNL forms. And if the findings of this research can be replicated in a larger study, she believes that the NNL and other tooth growth marks could be used in the future to identify children who have been exposed to early life adversity. "Then we can connect those kids to interventions," says Dunn, "so we can prevent the onset of mental health disorders, and do that as early on in the lifespan as we possibly can."

Exposure to early-life adversity is one of the biggest risk factors for both mental and physical health problems across the lifespan. Early-life adversity encompasses experiences of threat or deprivation that deviate from a child's expectable physical and psychosocial environment and require some form of adaptation (1). These early-life adversities can thus be both physical and psychosocial in nature, spanning experiences of food deprivation resulting from poverty to witnessing or experiencing violence or having a parent with mental illness. These adversities are estimated to affect nearly half of all youths in the United States (2). Although not all children who experience early-life adversity will go on to have mental health problems (3), exposure to adversity has been associated with about a twofold increase in risk for depression, anxiety, or substance use disorders (4,5). In fact, researchers estimate that if the association between adversity and mental health risk was causal, approximately one third of all mental disorders could be attributable to childhood adversity (5–7).

Emerging evidence suggests that there may be certain stages in development, or sensitive periods, when the brain is highly plastic and thus when adversity may have even more enduring effects (8,9). Studies finding support for sensitive periods suggest that exposure to early adversity during prenatal life (10) and from birth to 5 years of age (11,12) may be especially important in shaping long-term risk for psychiatric disorders. These sensitive periods are often conceptualized as high-risk periods, or windows of vulnerability, when adverse life experiences, such as exposure to stressors, are most harmful in increasing disease risk.
However, sensitive periods can also be viewed as high-reward periods, or windows of opportunity, when enriching life experiences, including exposure to health-promoting interventions, are even more beneficial in preventing disease and promoting long-term health. Of note, relatively few studies on the time-dependent effects of adversity have been performed, and the evidence both for (11–13) and against (14–16) the existence of sensitive periods is mixed.

Given the well-established association between early-life adversity and a variety of psychiatric disorders, there is an urgent need to both 1) refine our understanding of whether and when in development these sensitive periods occur and 2) identify children who experience early-life adversity, particularly during possible developmental sensitive periods, to guide targeted prevention efforts. Yet the lack of tools to reliably and validly measure both the presence and timing of early-life adversity remains one of the biggest obstacles in the field. Current gold standard measures of childhood adversity rely on either retrospective or prospective self-reports, which are susceptible to major biases in recall or self-disclosure (17). In fact, a recent meta-analysis of 16 studies found that retrospective and prospective measures of childhood maltreatment, one of the most common types of childhood adversity, showed poor agreement, with more than half of individuals with prospective observations of maltreatment not reporting it retrospectively and, similarly, more than half of individuals with retrospective reports lacking concordant prospective measures (18). Moreover, asking a child to directly report his or her own adversity exposure may raise ethical and other concerns and pose a risk of harm to the child (19). Official reports, such as health and social services records, provide an alternative strategy, but these can also dramatically underestimate the prevalence of certain adversities (20,21). Although promising biomarkers of early-life adversity and subsequent risk for mental health problems, such as altered DNA methylation patterns (22–24) and changes in amygdala connectivity (25,26), are beginning to emerge through epigenetic and neuroimaging studies, respectively (27), these measures are currently too costly, time-consuming to implement, and/or lacking in reproducibility. Thus, there is a need for objective measures that are noninvasive, inexpensive, and able to provide more accurate information about the presence and timing of childhood adversity.

If such a measure existed, its public health implications would be profound. For the first time, clinicians would be able to confidently identify children, on a population-wide scale, who experienced childhood adversity during sensitive periods in development and are therefore at future risk for developing a psychiatric (or other) disorder. Such early, accurate risk identification could unlock the full potential of primary prevention programs, altering the course of children's development before psychopathology symptoms ever even onset. In this article, we propose that teeth could potentially serve as a promising and actionable new tool capable of achieving these goals.
To support this claim, we first summarize empirical work from dentistry, anthropology, and archaeology on human tooth development and show how these fields have collectively studied human and animal teeth for decades, using teeth as time capsules that preserve a permanent, time-resolved record of life experiences in the physical environment. This body of literature discusses teeth not as they relate to oral health but rather as fossil records in which the history of an individual's early environmental exposures is permanently imprinted. Importantly, many of the studies cited here were conducted in samples considered large by the standards of their disciplines. This includes those studies investigating human archaeological populations and nonhuman primate samples where there are a limited number of available specimens. Although these sample sizes are small in comparison with most psychiatric studies, we argue that insights from this collection of studies nevertheless provide initial suggestive evidence of the untapped opportunities for the field of mental health research and, potentially, clinical practice to prevent brain disease and promote brain health.

Building from this literature, we then integrate these insights with knowledge about the etiology of psychiatric disorders and the role of early-life adversity in shaping mental health risk to present a working conceptual model that links past psychosocial stress exposure to markers of tooth development and, ultimately, risk for neuropsychiatric disease. We end with a research agenda and discussion of future directions for rigorously testing this conceptual model and with a call to action for interdisciplinary research to meet the urgent need for new transdiagnostic biomarkers of adverse early-life experiences and psychiatric outcomes. Although the evidence to support this conceptual model is in its nascent stages, the time is right to begin empirically testing this model, given increasing investment in the formation of large birth cohort studies that have already collected teeth, the availability of techniques to characterize between-person variability in teeth-related features (28), and the growing recognition of the potential for biomarkers to guide prevention and intervention planning.

THE PROPERTIES OF TEETH AS RECORDS OF EARLY-LIFE EXPERIENCE

Human teeth possess at least five properties that make them promising potential biomarkers of exposure to early-life adversity and therefore helpful tools to guide prevention efforts in psychiatry.

Teeth Develop During Known Sensitive Periods in Development

Most humans have two sets of teeth: a set of 20 primary (deciduous, "baby," or "milk") teeth that are shed and replaced by 32 permanent teeth (29). Each tooth is made of enamel (the hard outermost layer of the tooth crown), dentin (the underlying layer extending into the tooth root), and pulp (the innermost core of the tooth containing blood vessels, nerve cells, and dentin-forming cells called odontoblasts) (Figure 1A). Primary teeth begin to mineralize at approximately the fourth fetal month, begin to erupt at approximately 6 months of age, and are completely formed by 2 to 3 years of age (30) (Supplemental Table S1). In contrast, the formation of permanent second molars extends from 3 years up until 14 to 16 years of age, while the permanent third molars, or wisdom teeth, complete their formation at around 18 to 25 years of age (31).
These time frames coincide with known sensitive periods for brain development (32,33) and programming of stress response circuitry (34,35).

Teeth Leave a Permanent Record of Their Incremental Formation, Much Like the Rings in a Tree

The process of tooth formation is well documented (Figure 1B). In the final stage of tooth formation, odontoblasts (dentin-producing cells) and ameloblasts (enamel-producing cells) secrete proteins that incrementally mineralize the dentin and enamel, producing growth marks that remain visible in the completed tooth crown. These growth marks act as permanent records of the formation process, much like the rings in a tree marking its age. Cross-striations record roughly daily growth (Figure 1C). Longer period growth lines (36), called striae of Retzius (37), correspond to roughly weekly growth in humans. These growth marks are preserved in teeth across mammal species (38–41). Exposure to adversity may affect this growth process, resulting in abnormal growth marks or stress lines, as discussed below. Because each tooth develops in a specific time window during ontogeny (Supplemental Table S1), these growth marks permanently record different phases of development. In other words, each tooth may tell its own story about human growth and development. Depending on whether the tooth root is present, the growth marks in a primary central incisor record daily and weekly development from prenatal life up to 2 years postnatally, whereas a permanent second molar records development up to 14 to 16 years (30,42,43). Thus, one remarkable consequence of this natural variation across teeth is that a continuous record of growth from prenatal life up to midadolescence can be pieced together between these different types of primary and permanent teeth. In cases where the tooth root is unavailable, as is the case for most shed primary teeth, this timeline is truncated (as noted in Supplemental Table S1).

Human Teeth Preserve Biological Memories of the Existence and Timing of Past Physical Stressors

Exposure to physical stressors during tooth formation, such as poor nutrition, disease, and ingested toxicants like heavy metals, can affect dentin and enamel cell function (44,45), resulting in alterations that are visible as structural defects or recorded as changes in chemical composition within the tooth crown (44,46,47). Among the most commonly studied developmental defects are enamel hypoplasias, which appear on the surface of erupted teeth as pits, grooves, or complete absence of enamel. The prevalence and predictors of enamel hypoplasias in both living human participants (48,49) and archaeological populations (50) are well described in archaeology, anthropology, and dentistry. Through visual inspection of tooth characteristics, whether using macro-level tools (e.g., hand lenses) or more micro-level tools (e.g., scanning electron microscopes, microcomputed tomography), this work has revealed that individuals exposed to famine (51), malnutrition (48,52), infectious diseases (52), and injuries (47) have significantly higher risk for enamel hypoplasias as compared with individuals without such physical stress exposures. Similarly, individuals exposed to poor diet, disease (53), and maternal hypertension (54) have also been shown to have teeth that are significantly smaller than those of their unexposed peers. Perhaps most uniquely, these physiological stressors have also been shown to produce accentuated growth marks known as stress lines (55,56) (Figure 1C).
These stress lines permanently record the specific day or week in development when the stressor occurred. One of the most studied stress lines in teeth is the neonatal line marking an individual's birth (57). Seminal work by Andra et al. (58) and Smith (59) revealed that by using the neonatal line as a kind of temporal benchmark, teeth can be used to capture the developmental timing of a variety of physical environmental exposures, including exposure to heavy metals (60,61), organic chemicals (62), injury and infections (63), and extreme wintertime cold (61).

Human Teeth May Also Preserve Biological Memories of the Existence and Timing of Past Psychosocial Stressors

To our knowledge, no studies have yet examined the extent to which psychosocial-based early-life adversities, such as changes in family or household structure (e.g., divorce, bereavement following family death) and experiences of deprivation or threat (e.g., physical or sexual abuse and neglect, other interpersonal and noninterpersonal traumas), are recorded in human teeth. However, at least three preliminary yet intriguing lines of evidence suggest that teeth may preserve biological memories of past psychosocial stressors, with the timing of these stressors recorded in stress lines.

As noted, the majority of research on stress lines in humans has focused on the neonatal line, which can be seen in the primary teeth of about 90% of children (64). Most commonly, anthropologists and forensic experts use the neonatal line to determine the causes and timing of infant death (65) because the neonatal line is absent in the case of stillbirth (66). A small number of researchers have used the neonatal line as a marker of different types of potential perinatal stress. From these studies, there is initial evidence showing an association between certain stressful perinatal factors (64,66–70), including preterm birth, winter birth, and a more complicated or longer duration of delivery, and a wider neonatal line (see Supplemental Table S2). Of note, models of prenatal stress that include high-risk pregnancies and maternal prenatal exposure to chronic social disadvantage have, in turn, identified an impact of these factors on adverse offspring brain development and risk for psychiatric disorders later in life (71,72). Determining whether these associations represent the effects of psychosocial stress experienced by the mother or physiological stress experienced by the infant will require more routine measurement of the neonatal line in cases where the conditions of delivery are well documented, as is the case for many current birth cohort studies.

As summarized in Supplemental Table S3, a second body of evidence comes from seven studies in nonhuman primates exploring the associations between potential psychosocial stressors and markers of disrupted tooth development. Like humans and other mammals, nonhuman primates have two sets of teeth that develop incrementally and leave behind time-resolved growth marks (38); nonhuman primates are also affected by the same types of social stressors known to affect humans such as disruptions in parent-child bonding (73). Thus, primate studies provide a strong animal model to complement human studies. As shown in Supplemental Table S3, three studies did not have animal life histories and thus made inferences about stress exposures using evidence such as local rainfall records and knowledge of typical weaning patterns (74–76).
Among the four studies in which animal life histories were known, all four documented the emergence of stress lines corresponding to the timing of psychosocial stress exposure such as separation from the mother (77), transfers to new enclosures (78), postsurgery hospital checkups (78), death of a sibling (79), and other disruptions in the caregiving environment (63,79). In one suggestive study of captive juvenile rhesus macaques, Austin et al. identified stress lines in enamel that corresponded to the timing of individuals' temporary separations from their mothers and the social group to undergo biobehavioral assessments (63). These biobehavioral assessments included measures of behavioral and physiological stress response to a novel environment (80) and coincided with stress lines that typically appeared within a day of the assessment. These stress lines also correlated with the timing of changes in chemical composition. Based on these primate findings, there is reason to hypothesize that the time resolution of social stressors may also be captured in human teeth. Empirical research in both humans and animals is needed to investigate this question further and, as we discuss later, to clarify which types of social experiences produce stress lines.

A third body of evidence suggesting that teeth may preserve biological memories of past psychosocial stressors comes from a very small collection of studies showing that psychosocial stressors may have time-resolved effects on human hair and nails, which are formed from the same ectodermal tissue as tooth enamel (81). Like enamel, hair and nails also grow incrementally and are affected by circadian cycles (36,82,83). The same physical stressors known to compromise ameloblast functioning, including injury, malnutrition, and physical illness, also disrupt hair and nail growth cycles. In hair, these stressors can trigger an abnormal shift of scalp follicles from the growing (anagen) stage into the dying (telogen) stage, resulting in acute temporary hair loss 2 to 4 months after the inciting event (84). In nails, these disruptions can manifest as linear grooves called Beau's lines. Given that nails grow at a known rate, the timing of exposure can be estimated by measuring the distance of the lines from the nail bed (85). Similar to the neonatal line, Beau's lines appear in the fingernails of 92% of infants at 4 weeks of age and then disappear with growth (86). Notably, acute temporary hair loss (telogen effluvium) has been empirically linked to acute psychological stressors such as car accidents and bereavement (87). The appearance of Beau's lines has also been anecdotally attributed to similar adverse psychosocial experiences (88).

Teeth Are Spontaneously Shed or Routinely Removed Across the First Two Decades of Life, Making Them Potentially Ideal Tools to Guide Primary Prevention Efforts in Psychiatry

A final useful property of human teeth is that healthy or nondecayed teeth are naturally shed or routinely extracted during the first 2 decades of life. As an alternative to discarding or storing those unused teeth, three possibly easy and inexpensive screening opportunities exist when teeth could instead be used to measure early-life exposure to both physical and psychosocial stressors and thus to identify children at highest risk for a psychiatric disorder.
To illustrate this point, we highlight these possibilities below and in Figure 1D in relation to major depressive disorder (MDD), one of the most common and burdensome psychiatric disorders that onsets at different stages of the early-life course (89). First, most primary teeth begin shedding at around 6 to 8 years of age (30). This time period precedes the onset of puberty, a known high-risk period for the onset of depression, particularly in girls (90). It is therefore reasonable to imagine the possibility that one day pediatricians or dentists could collect children's shed teeth from parents, send these teeth to specialized labs for analysis, and use the results as an additional MDD risk assessment tool. A second opportunity exists during early adolescence, when otherwise healthy primary and permanent teeth are surgically extracted for orthodontic reasons (91). Approximately 14% of U.S. children have at least one tooth extracted by 13 years, before the age of onset for most adolescent MDD cases (92,93). Moreover, approximately one third of preschool children experience a traumatic injury to one or more primary teeth, and approximately one quarter of school-age children experience a traumatic injury to the permanent teeth (94). Although the treatment of traumatic injury varies depending on the nature of the injury and clinician training, some of these cases result in the extraction of the injured tooth, providing yet another opportunity for assessment of brain health and risk for future brain health problems. A third opportunity exists during late adolescence and early adulthood, when about half of all insured individuals in the United States have their third molars, or wisdom teeth, removed (95). This period spanning 15 to 20 years of age coincides with the developmental stage when approximately 25% of MDD cases onset (96).

Of course, these prevention opportunities could also be realized for other psychiatric disorders as well. These include disorders that onset during the early teen years, including attention-deficit/hyperactivity disorder and oppositional defiant disorder (93), as well as disorders that onset during young adulthood, including schizophrenia, bipolar disorder, and substance use disorders (93).

THE TEETH CONCEPTUAL MODEL

Based on these prior findings and the previously described potential of teeth to serve as new biomarkers, we introduce the TEETH (Teeth Encoding Experiences and Transforming Health) conceptual model (Figure 2). This model proposes that early-life psychosocial stressors disrupt multiple developmental processes (97), potentially including those involved in tooth formation, and that these developmental disruptions leave behind measurable biological imprints that can then be leveraged to predict risk for later psychiatric disease. Through this model, we propose that primary and permanent teeth may serve as dual markers of both past psychosocial stress exposure and future mental health risk. Below, we describe and review the evidence supporting each of the three main tenets of our model in the hopes of translating the current literature on tooth formation into testable research hypotheses for the mental health field.
Tenet 1: Early-Life Adversity May Be Associated With Disrupted Processes Involved in Brain and Tooth Development

Psychosocial stress during an organism's early development is associated with disruptions in key biological processes, including programming of brain structure and function (98,99), the body's stress response circuitry (97,100), and the epigenome (22,23). As discussed, there is also preliminary support for the notion that psychosocial stressors can leave a detectable trace in the microstructure and chemical composition of primate teeth (63,77,78). We propose that these stressors may also affect human tooth formation (Figure 2).

Nascent evidence suggests parallels between biological processes involved in the development of teeth and the brain, the key organ giving rise to psychiatric disease and modulating stress responses. For instance, receptors for neuropeptides, including serotonin and melatonin, are expressed by ameloblasts and potentially modulate enamel formation (101,102). Other markers specific to glial cells (the most abundant central nervous system cell type) are also expressed in dental pulp (103). Like enamel, brain structures are also derived in ontogeny from ectodermal tissue (104), supporting observations that developmental defects in enamel are disproportionately common among people with Down syndrome, cerebral palsy, and other brain-related congenital conditions (105,106). Therefore, enamel formation not only appears to track ameloblast function but also may be susceptible to processes affecting early brain development (107,108). Together, these findings led Morishita and Arora to suggest that "it is possible that the timetable of key neurodevelopmental events is imprinted in an individual's teeth" (109).

As noted previously, the relationship between psychosocial stress and tooth development in humans is largely unexplored. However, one previous study did examine the association among features of primary teeth, socioeconomic status (an indicator of both material and social deprivation), and cortisol reactivity (a commonly used proxy for stress response system dysregulation) (110). This study found an interaction between socioeconomic status and cortisol reactivity, such that the children with the greatest enamel thickness tended to have both low socioeconomic status and low salivary cortisol reactivity. Thus, these initial findings suggest important interrelationships among socioeconomic disadvantage, biological sensitivity to stress, and tooth-based markers of development that require further elucidation.

Tenet 2: Developmental Disruptions During Tooth Formation May Produce Time-Resolved Biological Imprints That Can Be Objectively Captured

Both prenatal and postnatal disruptions in brain development following exposure to adversity are increasingly being identified through neuroimaging markers of structural changes (e.g., cortical thinning) (111,112) and functional changes (e.g., decreased amygdala connectivity) (25,26). Early-life adversity has also been associated with altered stress response functioning, which may manifest in the form of chronically low or high cortisol reactivity (35,113). Similarly, altered epigenetic processes following early-life psychosocial stress appear to become encoded in the epigenome, detectable at birth (24) and beyond (22,23).
We propose that psychosocial stress-induced disruptions in tooth formation may result in macro-level alterations, such as changes in tooth dimensions, as well as micro-level biological signatures, including changes in microstructure and chemical composition as visible in stress lines (Figure 2). Importantly, because teeth form during known developmental periods, all markers of tooth developmental disruptions can be considered time resolved, with the level of temporal specificity varying depending on the measure used and the tooth analyzed. For example, macro-level measures may reveal the existence of exposures within the 3- to 5-year window corresponding to that tooth type's mineralization. Examination of more micro-level measures, such as stress lines, could more precisely pinpoint the timing of exposures to within a 1-week margin of error (78).

Tenet 3: Disrupted Developmental Processes May Predict Mental Health Risk

Most research on biological markers of psychiatric risk has focused on the brain, indicators of stress reactivity, or epigenetic markers. Our model proposes that teeth may serve as an additional, albeit novel, biomarker linking early-life psychosocial stress exposure to mental health risk (Figure 2). Although this proposition has not yet been widely tested, recent work from at least eight studies on tooth-based markers of environmental toxins (e.g., pollutants, heavy metals) has provided some evidence that these physical exposures can be captured in teeth and used to predict risk for mental disorders such as schizophrenia and psychotic disorders (114,115), autism spectrum disorder (60,116–118), and both internalizing and externalizing symptoms (119,120) (Table 1). Whether tooth-based markers of psychosocial stress can function as indicators of psychiatric risk will be a rich area of future inquiry.

Reference: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7822497/

More information: Rebecca V. Mountain et al, Association of Maternal Stress and Social Support During Pregnancy With Growth Marks in Children's Primary Tooth Enamel, JAMA Network Open (2021). DOI: 10.1001/jamanetworkopen.2021.29129
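To make concrete the covariate-adjusted analysis described in the news summary near the top of this piece (NNL width examined against maternal psychiatric history while controlling for iron supplementation, gestational age, and maternal obesity), here is a hedged Python sketch of that kind of model. All data are simulated and the variable names, sample size, and coefficients are illustrative only; this is not the authors' actual analysis code.

```python
# A sketch of a covariate-adjusted linear model of the form used in
# studies like the one above. Everything below is simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 70  # echoes the ALSPAC subsample size, purely for flavour
df = pd.DataFrame({
    "maternal_psych_history": rng.integers(0, 2, n),  # 0/1 indicator
    "gestational_age_weeks": rng.normal(39, 1.5, n),
    "iron_supplement": rng.integers(0, 2, n),
    "maternal_obesity": rng.integers(0, 2, n),
})
# Simulated outcome: wider NNL with psychiatric history, plus noise.
df["nnl_width_um"] = (
    12 + 2.0 * df["maternal_psych_history"] + rng.normal(0, 1.5, n)
)

model = smf.ols(
    "nnl_width_um ~ maternal_psych_history + gestational_age_weeks"
    " + iron_supplement + maternal_obesity",
    data=df,
).fit()
print(model.summary().tables[1])  # adjusted coefficient estimates
```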
<urn:uuid:75c1f617-42db-478a-9f96-f8add17f04ba>
CC-MAIN-2022-40
https://debuglies.com/2021/11/10/the-thickness-of-growth-marks-in-primary-teeth-may-help-identify-children-at-risk-for-mental-health-disorders/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00443.warc.gz
en
0.939032
6,419
3.6875
4
As we know, a fiber optic multiplexer is a device that carries two or more light signals over a single optical fiber. It was introduced as an effective way to expand a fiber's transmission capacity using different multiplexing techniques and light source technologies. With a multiplexer, media or data signals can be carried further and more securely, with less electromagnetic and radio frequency interference.

Wavelength division multiplexing (WDM) exploits the total available pass band of an optical fiber by assigning individual information streams to separate wavelengths, or portions of the electromagnetic spectrum. Frequency and wavelength can be regarded as two views of the same concept; "frequency" is simply the term typically used to describe radio carriers. Frequency division multiplexing likewise assigns each signal a distinctive frequency.

A digital video multiplexer is a typical fiber optic multiplexing device for transmitting video and data signals over fiber. Digital video technology has emerged as the ultimate facilitator of surveillance needs and plays an important role in security, enabling flexible, real-time, highly manageable and tunable solutions. By combining uncompressed digital video technology with WDM, a digital video multiplexer can extend video and data services up to 100 km over one single fiber simultaneously and deliver real-time, high-definition video at the receiver side. These devices are commonly used in security applications to control and monitor video camera signals in airports, train stations and public hotspots.

FiberStore fiber optic digital video multiplexers adopt advanced digital video and coarse wavelength division multiplexing (CWDM) technology; they can transmit from 1 channel of video, audio and data up to 64 channels over a range of optical distances. These video multiplexers come in single-mode and multimode fiber types, as multichannel rack-mount or standalone units. An insert-card version is also available, which fits our 16-slot, 19-inch 2U or 4U rack-mountable card cage for 10-bit encoded, uncompressed video transmission. Typical fiber ports are FC, SC or ST, and data interfaces are RS232, RS422 or RS485, all of which can be customized to users' actual needs. In the following text, let's overview FiberStore's four types of digital video multiplexers: video multiplexers, video & audio multiplexers, video & data multiplexers, and video & audio & data multiplexers.

1-64 Channel Video Multiplexers

Video multiplexers are built on coarse wavelength division multiplexing (CWDM); they encode multi-channel video signals and convert them to optical signals for transmission over optical fiber. A video multiplexer handles several video signals simultaneously and can also provide simultaneous display and playback features. We provide video multiplexers with different channel counts, such as 1, 2, 4, 8, 16, 24, 32 and 64 channels. They are available as color and black-and-white video multiplexers, digital video multiplexers and processors. With a video multiplexer, users can record the combined signals on a VCR or wherever else they want. Our video multiplexers help users build a cost-effective network system with strong process control and quality assurance.

The digital multiplexer has two VCR IN connections and two VCR OUT connections: one pair of VCR IN/OUT 4-pin DIN connectors (Y/C) and one pair of BNC connectors (composite video).
The VCR IN connectors are used to connect the multiplexer to a VCR that will play back recorded images: connect the Video OUT connector on the VCR to one of the VCR IN connectors on the multiplexer. The VCR OUT connectors are used to connect the digital multiplexer to a VCR that will record video: connect the Video IN connector on the VCR to one of the VCR OUT connectors on the multiplexer.

Video Data Multiplexers

Video data multiplexers are based on digital video technology and provide fiber optic transmission of video plus return or bidirectional data signals in demanding environments. They deliver highly reliable data transmission and expandable data capacity over fiber optic cables up to a few tens of kilometers. A video data multiplexer simultaneously transmits multi-channel 8-bit digitally encoded broadcast-quality video over one multimode or single-mode optical fiber. The module is directly compatible with NTSC, PAL or SECAM CCTV camera systems and supports RS-485, RS-422 and RS-232 data protocols. These muxes are typically used with cameras that have PTZ capability. The plug-and-play design ensures adjustment-free installation and operation, and LED indicators provide instant monitoring of system status. We supply video & data multiplexers in 1, 2, 4, 8, 16, 24 and 32 channel versions. A typical installation places the transmitter unit at the camera end of the link, connected via a single fiber optic cable to a receiver unit at the monitoring end. These video & data multiplexers are suitable for centralized management in 1U/2U/4U racks, and we can also supply the rack chassis for you.

Video & Audio Multiplexers

A video and audio multiplexer combines digital video with digital audio to form an embedded signal. It has optical remote monitoring capabilities so that operation can be controlled remotely. The audio video multiplexer can simultaneously transmit 1-64 channels of 8-bit digitally encoded broadcast-quality video and unidirectional or bidirectional audio over multimode or single-mode optical fiber. These multiplexers are used in applications where the cameras have P/T/Z capabilities. We supply video & audio multiplexers in 1, 2, 4 and 16 channel versions; they are ideal for security monitoring and control, highways, electronic policing, automation, intelligent residential districts and so on.

Video & Audio & Data Multiplexers

Video & audio & data multiplexers transmit 1-64 channels of 8-bit digitally encoded broadcast-quality video, return or bidirectional data, and unidirectional or bidirectional audio over one multimode or single-mode optical fiber. These multiplexers are used in applications where the cameras have P/T/Z capabilities. With a plug-and-play design, they can convert, integrate, groom and multiplex multiple video/audio/data streams effortlessly. They are ideal for a wide range of multiplexing and remultiplexing applications, including broadcast/studio, CCTV audio and professional AV.

FiberStore is a leading global supplier of telecommunications solutions for electric utility, pipeline, transportation and industrial applications. This powerful family of optical multiplexers permits consolidation of all telecommunications requirements into a single, integrated network.
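Since CWDM underpins all four product families above, it may help to see the wavelength plan it works with. ITU-T G.694.2 defines an 18-wavelength grid from 1271 nm to 1611 nm in 20 nm steps; below is a short Python sketch of that grid with a purely illustrative channel-to-wavelength mapping (which wavelengths a given FiberStore unit actually uses is product-specific):

```python
# The ITU-T G.694.2 CWDM grid: 18 nominal centre wavelengths,
# 1271 nm to 1611 nm, spaced 20 nm apart.
CWDM_GRID_NM = list(range(1271, 1612, 20))

print(len(CWDM_GRID_NM), "wavelengths:",
      CWDM_GRID_NM[:3], "...", CWDM_GRID_NM[-1])

# Illustrative only: a 16-channel video mux could place one encoded
# video stream on each of 16 wavelengths and combine them onto a
# single fibre.
for ch, wl in enumerate(CWDM_GRID_NM[:16], start=1):
    print(f"video channel {ch:2d} -> {wl} nm")
```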
<urn:uuid:9a7afa1a-f407-45dd-9941-cb00a6c919a8>
CC-MAIN-2022-40
https://www.cables-solutions.com/category/optical-solutions/network-media-conversion/video-multiplexer
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00443.warc.gz
en
0.880346
1,423
3.015625
3
A common argument of creationists is that mutations only cause loss of information, never progression, never advancement of a species, never additions to the code. Firstly, genes are only "information" in the sense that we treat them as information; we apply the label ourselves, so we have to forgive how wrong this statement is. Genes are information the same way a stream is information: we assign the name to the stream, we put it on maps, we give it information. Genes are simply used in biological processes as a way to build proteins.

Evolution is small mutations selected for or against by natural selection. A vast majority of these mutations are neutral, providing no immediate benefit, though they may build up through successive generations to provide a benefit at a later stage (see the E. coli populations in Lenski's long-term experiment that evolved the ability to digest citrate). Some of these mutations will be on the negative scale (from bad to terrible), some so bad the fetus may miscarry (around 50% of human pregnancies end in miscarriage); others may confer an advantage in specific environments, e.g. colour blindness can assist in spotting camouflaged predators. And some may be slightly advantageous: taller, faster, smarter, etc.

Let's think about knowledge. Sometimes knowledge comes about due to incorrect information; this knowledge may persist for a time, may never be tested, and thus a group will hold it true even though it may not be, e.g. Iraq and its supposed WMDs, thalidomide, everyone else's religious beliefs but your own. Incorrect knowledge can also rest on assumption and be developed over time until it is discarded, e.g. the aether theory of space. Sometimes correct knowledge comes about completely by chance, increasing the knowledge of humanity via a fluke, such as the supposed discovery of penicillin, vulcanized tyre rubber, etc. And sometimes correct, and thus valuable, knowledge develops over time based on many generations of work that came before; in Newton's famous words, "If I have seen further it is by standing on the shoulders of giants."

Regardless of how the knowledge is found or whether it is any good, it adds to the vast sum of human knowledge. Obviously here I am comparing knowledge to DNA and life: it can take a long path of small iterative changes and flukes, but it can still add to the totality of diversity; it can, to use the creationists' phrase, "add information," in that new knowledge is discovered by fluke or by iterative process. Saying knowledge can't form this way because you don't know where it came from is the same argument creationists use. But knowledge and evolution do exist even if we are not sure how they started; we have some ideas, though we may never know. Like life, knowledge likely started in a very different form, under very different conditions than we have today. It likely wasn't humans that started knowledge: there is evidence that Neanderthals had tool use (even modern chimps and bonobos, and some birds, show tool use), painted stories (there are elephants and birds that have shown this talent) and the use of fire (which lots of animals use to their benefit). There may even have been a rudimentary language that allowed the passage of ideas and thus a kind of natural selection on them, with ideas important enough being iterated and reiterated through a young creature's upbringing. Knowledge shows direct parallels with life; other people have noticed this before with the introduction of memetic theory and the very word "meme."
But I think this analogy goes the other way too, and points to the origin of knowledge as a parallel to the origin of life. Some may claim that knowledge requires a mind; yes it does, even a rudimentary avian one. Life, it would seem, does not: life only requires a drive to replicate, something even the simplest single-celled bacterium or proto-cellular virus strives to do.

Some interesting things this brings up. Knowledge can develop independently in parallel, e.g. the idea of evolution driven by natural selection was developed a few times independently, akin to the eye and the many times it has evolved independently, or the awesomeness that are slaters (woodlice for the non-Australians). There is a case of very well defined convergent evolution, where two different animals evolved to look the same and live in the same environment: one the woodlouse, which is a crustacean, and one the pill millipede, which is related to millipedes. An idea can also develop once only, like gunpowder (China) or the idea of zero and negative numbers (India), and then these ideas get spread around and appropriated, akin to the eye again or bipedalism, although evolution can't really take another creature's advantage and make it its own. An idea can also develop once (at least as a further development) and then be lost forever, e.g. Greek fire, or the Aztecs and their masonry, akin to the billions of creatures that have gone extinct in the history of the earth.

So even if we grant DNA as information, information can change by accident, information can have an origin if we think about it, and information can be independent of an advanced mind. Information is different from DNA: DNA is just chemistry, and demanding that it needs a mind to contain it is the same as demanding that a twisting river needs a mind to explain its twists and turns. The river obeys understood rules of fluid dynamics and flow; similarly, DNA obeys rules of biological chemistry. If you claim these rules had to be given by a rule giver, then you are simply shifting the burden and playing a God of the gaps; eventually the gap will be closed and you will have to retreat or simply deny the discovery.

Beyond all this babble, we have some interesting proof that a single mutation can make a big change, even with a "loss of information", in humans of all species: http://www.motherjones.com/politics/2014/01/bill-nye-creationism-evolution
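As a toy illustration of the argument above that copying errors plus selection can "add information", here is a minimal Python sketch in the spirit of Dawkins' weasel program: it counts matches to a fixed target string as a stand-in for fitness. It is rhetorical shorthand, not a model of real genetics; real selection has no fixed target.

```python
# Cumulative selection toy: random copying errors plus "keep the
# best copy" accumulate matches to a target far faster than chance.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s: str, rate: float = 0.05) -> str:
    """Copy a string with a small per-character error rate."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in s
    )

def fitness(s: str) -> int:
    """Number of positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    generation += 1
    offspring = [mutate(parent) for _ in range(100)]
    # Selection: keep the fittest of parent and offspring.
    parent = max(offspring + [parent], key=fitness)
print(f"Reached the target in {generation} generations")
```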
<urn:uuid:da5c27b9-398f-4d74-907d-c4c12c27efd3>
CC-MAIN-2022-40
https://atheism.morganstorey.com/2015/01/mutations-as-loss-of-information.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00443.warc.gz
en
0.952856
1,234
2.71875
3