Following the COVID-19 pandemic outbreak, most enterprises around the world are witnessing the consequences of global supply chain disruptions. From cars to white goods, many products are less readily available, and consumers must wait significant amounts of time before receiving the products they order. The situation is particularly challenging in sectors like the high-tech industry, which is facing a major shortage of chips. This shortage is affecting the production of high-tech products worldwide, from computers and laptops to modern vehicles. Experts predict that the disruption could last up to two more years, which means it will take considerable time before production returns to normal.

This major supply chain disruption is largely due to the COVID-19 pandemic and its consequences in areas like employment and the availability of equipment. In short, COVID-19 has disrupted the operation of conventional supply chains while challenging the production capacity of many manufacturers. At the same time, the industry is witnessing shortages of manufacturing workers, equipment, and warehouse space, which makes the effects of the original disruptions much worse. As soon as lockdowns were lifted, manufacturers were faced with booming demand that could not be fulfilled, given pre-existing disruptions, inadequate numbers of employees, and limited warehouse space leading to inventory shortages. Despite the gradual lifting of COVID-19 measures, manufacturers are having a hard time recovering the operations of their supply chains.

In this context, supply chain stakeholders are looking into novel and effective approaches for coping with large-scale disruptions. Such approaches are important not only for coping with the implications of COVID-19, but also for preparing for future disruptions. This time, supply chain disruptions were caused by a large-scale healthcare crisis; nevertheless, many other factors can cause significant disruptions to global supply chains, and one cannot rule out future disruptions. Rather than addressing these disruptions after they occur (i.e., in a reactive fashion), manufacturers and other supply chain stakeholders had better invest in their preparedness to cope with them.

In this direction, supply chain actors can benefit from best practices and lessons learned during the COVID-19 pandemic. During this period, many manufacturing enterprises and other supply chain actors developed effective, agile, and fast responses to supply chain disruptions, employing a range of measures. Overall, the COVID-19 pandemic highlighted the importance of novel, agile and responsive methods for supply chain management, supply chain logistics, and logistics management. It also indicated novel ways to integrate supply chains with business continuity practices. Building on these best practices, organizations can develop an effective response to future supply chain disruptions.
The amount of data being collected and stored globally is inconceivable, and it grows as we speak. The end result is “big data.” As the term implies, big data denotes large sets of structured or unstructured data. However, the large volume isn’t what makes big data so popular – it’s the potential and what businesses do with it that matters. To put this in perspective, businesses can use this data to recognize patterns or trends that, if capitalized on, can be beneficial. On that note, big data made some pretty big waves in 2016 and seems set to do the same in 2017. Here are some recent trends that show how.

1. Artificial Intelligence (AI)
Artificially intelligent mechanisms use deep learning to detect patterns in large sets of data (big data) – a fact that many businesses are using to their advantage. The marriage of big data and AI has had a phenomenal effect on industries, with commerce and the automobile industry being two of the most notable examples. For e-commerce specifically, technologies like AI-driven digital assistants in popular messaging applications now allow online marketers to interact with customers without being physically present. In the automobile industry, Tesla used artificial intelligence and big data to power its autopilot feature.

2. Variety of Big Data
It is not only the sheer volume of information that big data is famous for – it is the variety of information that comes with it that is so valuable. Variety, in this case, refers to usable data. Big data investments are largely dependent on this variety, and the trend continues this year too. Unlike structured data that can fit into relational databases (such as financial data that can be sorted into tables by region or product type), big data is unstructured. Examples include text-based conversations on social media, photos, video recordings, live videos, sensor data and more. Big data technology has now made it possible to harness different types of data in a structured format and use it. To illustrate the variety of big data in real-life applications, consider Google and Facebook, both of which rely on the variety of big data to improve their services.

3. Business Intelligence (BI) Grows Popular
As might be obvious by now, big data is growing at an unprecedented rate. Businesses are seeing the value in the volume and variety of the data they receive and harnessing it for business intelligence tools and systems. Statistics show that the year 2017 is vital for business intelligence thanks to cloud-based data and smarter integration solutions. With advanced analytics, business owners believe that a deeper and more insightful perspective can be gained from this data – perspective that can be used to their advantage. For example, this information can be used to detect patterns that can improve key business processes.

4. The Internet of Things (IoT)
Businesses are seeking to benefit from all types of data from the sources available to them. One of these sources, the Internet of Things, made a great impact last year by showcasing the potential of big data. IoT is a system of physical objects that are connected to the internet and capable of transferring data over networks without requiring human intervention. The transfer of data is possible through sensor technology. Since last year, many industries have been cashing in on the benefits of IoT-enabled networks, and the trend seems set to continue this year as well.
A case in point is retail. Major applications might include smart stores, where data gained from sensors on sales racks can be used to analyze customer behavior in-store. A good example of this is the fashion retailer Zara, which used RFID technology for faster inventory management.

5. Data Storage in the Cloud
Many businesses are realizing the benefits of cloud storage and are looking for ways to take the immense amount of data they own out of data centers and into the cloud. This applies to big data analytics as well. The premise is to reduce storage costs and gain greater flexibility in accessing and exiting solutions. Most importantly, cloud storage spares businesses from hiring specialists to process this data.

The year 2017 has just begun, and from the trends mentioned above, it is obvious that big data is here to stay and is well on its way to becoming an inseparable part of web-based services.
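As a rough illustration of the kind of pattern detection described in the AI trend above, the sketch below clusters customer transaction records to surface segments. It is a minimal example on synthetic data with hypothetical column meanings, not a reference to any particular business dataset or product.

```python
# Minimal sketch: clustering transaction data to surface patterns (hypothetical columns).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Synthetic stand-in for real customer data: [annual_spend, visits_per_month, avg_basket_size]
customers = np.vstack([
    rng.normal([200, 2, 15], [50, 1, 5], size=(100, 3)),      # occasional shoppers
    rng.normal([1500, 10, 60], [300, 2, 15], size=(100, 3)),  # frequent, high-value shoppers
])

# Scale features so no single column dominates the distance metric
scaled = StandardScaler().fit_transform(customers)

# Group customers into segments; the "pattern" is the cluster structure
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaled)
for label in np.unique(kmeans.labels_):
    segment = customers[kmeans.labels_ == label]
    print(f"Segment {label}: {len(segment)} customers, "
          f"mean annual spend ~ {segment[:, 0].mean():.0f}")
```

In practice the same pattern applies whatever the source: scale the raw features, fit a clustering or classification model, and inspect which groups or trends emerge.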
NASA’s Hubble Space Telescope helped an international team of astronomers find an unusual object in the asteroid belt: two asteroids orbiting each other that have comet-like features. These include a bright halo of material, called a coma, and a long tail of dust. Hubble was used to image the asteroid, designated 300163 (2006 VW139), in September 2016, just before the asteroid made its closest approach to the Sun. Hubble’s crisp images revealed that it was actually not one but two asteroids of almost the same mass and size, orbiting each other at a distance of 60 miles.

Asteroid 300163 (2006 VW139) was discovered by Spacewatch in November 2006, and possible cometary activity was seen in November 2011 by Pan-STARRS. Both Spacewatch and Pan-STARRS are asteroid survey projects of NASA’s Near Earth Object Observations Program. After the Pan-STARRS observations, the object was given a comet designation of 288P. This makes it the first known binary asteroid that is also classified as a main-belt comet.

The more recent Hubble observations revealed ongoing activity in the binary system. “We detected strong indications for the sublimation of water ice due to the increased solar heating, similar to how the tail of a comet is created,” explained team leader Jessica Agarwal of the Max Planck Institute for Solar System Research, Germany.

The binary asteroid’s combined features of wide separation, near-equal component size, high-eccentricity orbit, and comet-like activity also make it unique among the few known binary asteroids that have a wide separation. Understanding its origin and evolution may provide new insights into the early days of the solar system. Main-belt comets may help to answer how water came to a bone-dry Earth billions of years ago.

The team estimates that 2006 VW139/288P has existed as a binary system for only about 5,000 years. The most probable formation scenario is a breakup due to fast rotation. The two fragments may then have moved further apart through the effects of ice sublimation, which gives a tiny push to an asteroid in one direction as water molecules are ejected in the other direction.

The fact that 2006 VW139/288P is so different from all other known binary asteroids raises questions about how common such systems are in the asteroid belt. “We need more theoretical and observational work, as well as more objects similar to this object, to find an answer to this question,” concluded Agarwal.

The Hubble Space Telescope is a project of international cooperation between NASA and ESA (the European Space Agency). NASA’s Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C.
Ransomware is not a new cyber attack strategy, but it is a growing concern to the financial sector. The old way was simply to hold systems or data for ransom; if you paid the money, access would be granted. However, times have changed. In 2020, the newest extortion tactic is for threat actors to publicly name victims and publish their data online. This has profound implications for financial institutions, whose businesses depend on the trust of their customers.

The main purpose of ransomware is to prevent users from accessing their files or systems, usually to hold them hostage in exchange for money. Ransomware takes a variety of forms: it can encrypt your files, folders or servers, lock you out of your computer or phone, or change the hard drive to interrupt the bootup process. It can be opportunistic - delivered by spam or phishing - or targeted, exploiting unpatched vulnerabilities in a company’s systems. It is easier than ever to execute ransomware attacks; with Ransomware-as-a-Service (RaaS), less technical cyber criminals can simply buy ransomware kits on the dark web.

In December 2019, a ransomware group pioneered a new attack pattern: not just encrypting and ransoming a victim’s data, but exfiltrating it in order to also extort the victim over the data being leaked publicly, thereby preventing victim companies from keeping an attack under wraps. This tactic has become known as double-tap and has since been adopted by at least a dozen ransomware groups. Most of these groups have set up dedicated leak websites to disclose the victims’ data should the extortion demand not be paid. Another recent development is cooperation between operators to use the same data leak platform to share intelligence and help drive successful extortions.

Additionally, the threat actors have added a third stage to the monetization of compromised data by auctioning it to the highest bidder. This suggests that these groups are taking time to analyze the data for its potential value and will publish the victim data if the auction is not successful. It may become apparent that there is no market for this type of data, or it could drive up the price of compromised data on the digital underground.

While financial services account for only four percent of breaches, third parties such as IT vendors, energy suppliers, telecommunications providers, and transport companies are also susceptible to attacks. With the move to working from home due to COVID-19, cloud providers and other third parties critical to remote operations could become major targets. Even if these are not direct attacks on financial institutions, it has already been shown that financial services can be vulnerable to them. For example, the 2019 ransomware attack on British currency exchange bureau Travelex disrupted operations at multiple client banks.

Of course, not all attacks result in major outages, but given our highly interdependent global financial system, it could only be a matter of time until a ransomware attack disrupts the functioning of a large enough institution, or multiple institutions, to cause a crisis of customer confidence that impacts the larger economy. Therefore, the potential business impact of ransomware is now much higher than the cost of the ransom. In addition to compliance and regulatory considerations such as mandatory data breach reporting, public disclosure and GDPR fines, the brand damage could be material and long-lasting.
One of the most important tools that institutions have at their disposal to protect themselves is becoming part of an intelligence sharing community. Since criminal groups often attempt the same attack on many financial institutions in multiple countries, when one member of the financial services community shares information about an attack, vulnerability or threat, others can quickly put up defenses against it, lowering the attacker’s returns by forcing them to start over with new infrastructure. It makes ransomware, as well as other kinds of cyber attacks, less cost-effective for the criminals and less attractive as a result.

Intelligence sharing makes cybersecurity cheaper as well. Seeing the techniques that threat actors are using on other institutions enables firms to address vulnerabilities, construct pre-emptive defenses and even block potential attacks before they are attempted. And prevention is much cheaper than picking up the pieces after an attack, both in terms of cost and reputation. Once mainly considered a cost of compliance, strong cybersecurity is increasingly a competitive differentiator in the market.

Ransomware strategies continue to evolve and get even more sophisticated. It is impossible for every institution to anticipate and defend against every attack. Now more than ever, collaboration is one of the best ways for financial services institutions to continue to adapt and thrive in the ever-changing realities of the post-pandemic world.

Teresa Walsh leads FS-ISAC’s Global Intelligence Office (GIO) to protect the financial sector against cyber threats by delivering actionable strategic, operational, and tactical intelligence products. Based in the United Kingdom, she oversees FS-ISAC’s global member sharing operations and a team of regional intelligence officers and analysts who monitor emerging threats. Under Teresa’s leadership, FS-ISAC’s GIO provides an invaluable niche for financial institutions' understanding of how the threat landscape impacts the sector. Previously, Teresa served as the Europe, Middle East and Africa lead for fraud intelligence and external relationships at JPMorgan. Prior to that, she served as a cyber intelligence analyst for Citigroup in the US and Europe. Teresa began her career as a civilian intelligence analyst with the US Naval Criminal Investigative Service (NCIS) and holds a master’s in political science with a focus on international relations from the University of Missouri-Columbia.
In a three-part blog series, we review the emerging digital government movement around the world. In this first part, we look at the trends of governments going digital. The second part will address the benefits of digital government models. The third and final part will conclude by presenting a three-level model of maturity for digital government. This blog series is based on the white paper “Better governance, one API at a time.”

Digital government refers to the use of the internet, mobile and flexible IT architecture to improve and advance government operations and enhance service delivery. It encompasses a greater use of data to inform decision-making, and the use of more automated systems to reduce cost, increase efficiency, and support more sustainable use of resources. It includes digital transformation efforts that combine these new data and services in imaginative ways that allow partners and external stakeholders to create products and services that contribute to creating a just, dynamic, sustainable, and joyful society.

Following the trend for digital transformation throughout industry, digital transformation activities are already occurring in most governments, and clear trends are now observable. For example:

In Australia and Singapore, after the birth of a baby, a birth certificate is registered and the young family is automatically informed by their national, state and local government bodies of early childhood services, healthcare, and new family social services payments available to them.

In Estonia, new businesses can register and be up and running within 48 hours, rather than in one month. They receive their business registration and a link to be able to automatically submit their quarterly tax reports directly from their accounting software.

In Barcelona and other cities across Europe, city governments use APIs to integrate their neighborhood plans with online consultation platforms. Citizens can add feedback that is shared regularly with district planners and incorporated into neighborhood action plans.

All of these opportunities are made possible by a wave of digital government transformation. Governments are using Application Programming Interfaces (APIs) as a way to open up and connect various digital systems. This is helping governments to create new and more dynamic models for citizen interactions and business partnerships. These new methodologies and approaches are still in their infancy. Governments are learning fast, however, and sharing their best practices. But it can still be complex for governments to get started with this massive reorientation.

What are the main trends in this space of digital government? Globally, there are three ways governments are managing digital transformation. Governments may give digital responsibilities to the central government decision body, set up a new cross-cutting organization focused on digital government, or incorporate digital government work in existing department action plans. A recent OECD survey shows there is an equal split amongst these three approaches across a wide range of countries surveyed.

OECD defines two characteristics that are a necessary part of moving towards a data-enabled digital government model. They stress the importance of both opening data by default and creating ongoing mechanisms to engage with stakeholders around data needs.
According to recent OECD surveys, over the past three years the majority of governments globally have moved towards digital government systems that uphold these two principles. OECD’s findings mirror other studies, such as the Open Data Barometer and the Open Data Index, both of which indicate that open data has rapidly gained traction in more than 70% of UN countries.

Today, most governments provide open data portals. Within the European Union, 25 out of 27 Member States have national open data portals. Over 200 portals have been launched by different levels of government across South, East and Southeast Asia. The United States data.gov portal publishes more than 230,000 datasets.

In Europe, the Digital Economy and Society Index (measured every two years) calculated that 64% of citizens have used online services to engage and connect with governments, up from around 50% just five years earlier. In the U.S., a survey by the Center for Digital Government found that 57% of citizens paid taxes online, 39% obtained or renewed a driver’s license or vehicle registration, and 13% paid a fine or a fee. For businesses across Europe, the pace of digital services is even faster, with the majority of European countries scoring highly for the availability of online services for businesses.

Globally, the majority of governments have begun moving towards a greater focus on opening data publicly, and on providing digital services through online and mobile channels. There are also early signs of the government partnerships that are emerging to create new digital products and services for citizens and businesses. New approaches to transparency, community feedback, and business engagement are being built with APIs. New products and services are being created that reimagine citizen and business engagement in a digital government era.

To extend these promising trends, a new wave of transformation is required that cultivates API-first approaches. This will enable digital government models to scale and generate even more benefits. Download the white paper, “Better governance, one API at a time.”
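To make the open data portal trend above concrete, the sketch below queries a national portal for datasets matching a keyword. It assumes the portal exposes the standard CKAN action API (catalog.data.gov does at the time of writing); the search term and printed fields are illustrative only, not part of any government specification.

```python
# Minimal sketch: searching an open data portal via the standard CKAN action API.
# Assumes the portal (here, catalog.data.gov) exposes CKAN's package_search endpoint.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://catalog.data.gov/api/3/action/package_search"

def search_datasets(query: str, rows: int = 5) -> list[str]:
    """Return dataset titles matching `query` from the portal's catalog."""
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    with urllib.request.urlopen(f"{BASE_URL}?{params}") as response:
        payload = json.load(response)
    # CKAN wraps results as {"success": ..., "result": {"count": ..., "results": [...]}}
    results = payload.get("result", {}).get("results", [])
    return [dataset.get("title", "(untitled)") for dataset in results]

if __name__ == "__main__":
    for title in search_datasets("air quality"):
        print(title)
```

The same pattern applies to most national portals built on CKAN, which is one reason API-first approaches make it easy for external stakeholders to build on government data.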
Identity theft: what is it?

If you have received unauthorized charges on your credit card statement, or if a dental clinic is charging you for services you didn’t receive, you were probably a victim of identity theft. Identity theft is a crime that happens when someone uses your personal information without permission to get involved in scams and, often, to make money. Identity fraud is a synonym for identity theft.

What personal information is important to fraudsters?

The list of personal information that may be useful to criminals is extensive, but the most important items include: full name, date and place of birth, Social Security number, address, email address, driver’s license, credit and debit card numbers, bank account number, phone number, and passport number.

What can a criminal do with your personal information? How does it happen?

Criminals have several ways to steal your personal information. The most common techniques are:

- Phishing and spear phishing: fraudsters send you fake emails pretending to be someone you know, sometimes with malicious links and attachments, so that you provide sensitive data, open a backdoor for malware, or even make a money transfer to them.
- Insecure Wi-Fi: we all know that some Wi-Fi networks, especially public ones, are unsafe. Through them, cybercriminals can gain access to your machine, hijacking valuable information and even infecting your device with malware.
- Data breach: that’s not your fault, but companies that hold your personal information, such as Facebook or your medical clinic, can leak it on the web, creating trouble for you.
- Direct theft: we use the term direct theft to cover actions such as stealing your wallet, installing fake credit and debit card machines, hijacking your mail, running scams over the phone, and others.
What is Attenuation?

In a nutshell, attenuation is the loss of transmission signal strength, measured in decibels (dB). The greater the attenuation, the more distorted and unintelligible the transmission (e.g. a phone call or email you’re trying to send) becomes. To combat the distortion, networks send multiple repeat signals to ensure at least one successfully reaches its destination. The main side effect of this is a reduction in the total speed available, due to those extra signals being sent.

Think of it Like This: when you are chatting with a friend on a busy street, you can hear them clearly. Now, if you tried to talk to that friend from across the street, the street traffic and background noise (attenuation) would make that conversation inaudible.

What Causes It?

- Noise. Extra noise on networks, like radio frequencies, electrical currents, and wire leakage, may interfere with the signal and cause attenuation. The more noise you have, the more attenuation you experience.
- Physical surroundings. Physical surroundings like temperature, wall barriers, and improper wire installation may distort the transmission.
- Travel distance. The further transmissions have to travel from their current location (e.g. your home or workplace) to a Central Office (C/O; the location of your connection provider), the more noise they experience along the way.

Attenuation Rates in Fibre vs. Copper

Attenuation may occur to any type of signal, whether it be copper, fibre, satellite, LTE, or even that overly-catchy pop song on the radio. When it comes to fibre and copper connections, however, fibre far outshines the alternatives. Fibre signals travel on high-frequency wavelengths of light insulated by glass tubes. Since light is resistant to sources of noise like electricity and radio frequencies, fibre connections have a very low attenuation rate. (Fun fact: sharks love the taste of fibre cables. Luckily Google loves reinforcing those cables even more.) Since copper signals are made up of electrical frequencies that are susceptible to noise, they are much more affected by physical surroundings than fibre. Anything from temperature to improper installation (this stuff ain’t your average DIY) may affect the copper line and increase the attenuation rate. As a rule of thumb, the lower the attenuation dB of your connection, the better.

Attenuation, then, is a loss of signal strength measured in dB that reduces a connection’s maximum available speed due to the need for multiple repeat transmissions. Ultimately the attenuation you experience – and how it impacts your business – depends on the distance between you and your C/O. The further the distance, the lower your available connection speed will be. If you truly notice a difference in your copper connection due to attenuation, you may want to consider other alternatives.
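As a rough numeric illustration of how distance-driven attenuation erodes a signal, the sketch below converts a per-kilometre loss figure into total dB loss and the fraction of transmitted power that survives. The per-kilometre figures are ballpark assumptions for illustration (roughly 0.35 dB/km for single-mode fibre, and on the order of 15 dB/km for a copper pair at DSL frequencies), not vendor specifications.

```python
# Minimal sketch: how attenuation (in dB) accumulates with distance, and what remains of the signal.
# The per-km loss figures below are ballpark assumptions for illustration only.

def total_attenuation_db(loss_db_per_km: float, distance_km: float) -> float:
    """Total loss in dB over a run of the given length."""
    return loss_db_per_km * distance_km

def remaining_power_fraction(attenuation_db: float) -> float:
    """Fraction of transmitted power that survives the loss (P_out / P_in)."""
    return 10 ** (-attenuation_db / 10)

links = {
    "single-mode fibre (~0.35 dB/km, assumed)": 0.35,
    "copper pair at DSL frequencies (~15 dB/km, assumed)": 15.0,
}

distance_km = 3.0  # e.g. distance from your premises to the Central Office
for name, loss_per_km in links.items():
    loss = total_attenuation_db(loss_per_km, distance_km)
    fraction = remaining_power_fraction(loss)
    print(f"{name}: {loss:.1f} dB over {distance_km} km "
          f"-> {fraction:.6f} of the original power remains")
```

The dB scale is logarithmic, so every extra 10 dB of loss means only a tenth of the power gets through, which is why distance to the C/O matters so much for copper.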
Indoor Air Quality (IAQ) monitoring is often associated with human health, well-being, or comfort. In data centers, however, IAQ pertains to the “health” of critical information technology (IT) and datacom equipment. Air quality monitoring is important in data centers to keep the possibility of corrosion in check.

Outdoor air pollution has been a reality we have had to deal with since the advent of the industrial age. In recent years, data centers in urban locations have been experiencing higher equipment failure rates due to the effects of gaseous pollutants, higher temperatures, and fluctuating humidity inside the data center. Over the years, many companies have learned the importance of air filtration systems in avoiding the disastrous effects of poor indoor air quality on their data centers. An air quality monitoring system provides real-time updates to ensure that the air filtration devices are working at optimal efficiency. Data centers need clean air for their servers and computing equipment to operate; impurities and particulates in server rooms can cause corrosion and damage.

What is Data Center Corrosion?

Data center corrosion is a form of chemical contamination brought about by the vapor of harmful and corrosive gases. These come from pollutants that make their way into air-conditioned server rooms, control rooms, switchgear rooms, process control rooms, and signal switching rooms in data centers. The gases can degrade the service life of mission-essential gear and bring about a phenomenon called micro-electronic corrosion, which can lead to equipment failure and increased downtime of the facility. A data center with maintained standards of indoor air quality usually has filtration devices to catch fine particulates, as well as top-of-the-line temperature and humidity control systems.

Electronic equipment is a complicated assembly of metallic and non-metallic parts, and these components are vulnerable to corrosion. Left unfixed, corrosion can damage several kinds of electrical components:

- Data storage: collected data can be damaged at the reaction site. Also, as reaction residue builds up, mechanical malfunctions can happen on data tracks that have not yet been corroded.
- Connectors: located on the circuit boards, these are manufactured from copper with a gold or copper-plated substrate. Corrosion can cause interference at the connection points, and the presence of corrosives may impede efficient data transfer.
- Disk drives: the configuration of disk drives is such that they are especially vulnerable to electronic corrosion, even in an indoor atmosphere.

Molecular gaseous pollutants pose a danger to both IT and sensitive electronic equipment. Gases such as sulfur dioxide and hydrogen sulfide are known to be the most notorious for corroding electronic equipment. These two gases alone do not pose that much of a threat to silver or copper, but the combination of both with ozone can spell total disaster for electrical gear. The rate of corrosion of copper depends on humidity; with silver, it is less so. These corrosive gases in a data center environment pose a threat not only to electrical equipment but also to the health and well-being of employees.

How Does Indoor Air Quality Impact Data Centers?

- Dangerous chemicals such as sulphur and nitrogen oxides, along with other pollutants found in the atmosphere, can cause corrosion. This then causes failure of the electronic and computing components of data processing gear. These destructive gases are the result of the burning of fossil fuels.
- Board-level corrosion manifests itself in several ways, including contaminated solder joints, corrosive shorting across conductive pathways, and whisker growth. This sort of corrosion is brought about by particulate matter and can lead to electrical shorts at contact points. The overall effect is a reduced operating life for data center gear.
- There is also a risk to the human element in a data center, in the form of Volatile Organic Compounds (VOCs) such as fluorine, usually found in the power and communication cables that line data centers. These chemicals can harm the human respiratory system as well as the eyes, which is another reason why the indoor air quality of a data center may not be healthy.

How to Quantify if Your Data Center has Acceptable Indoor Air Quality Levels

G1 - Severity Level Mild: a data center atmosphere where all preventive measures to filter out indoor air pollution have been implemented. There is vigilant and constant monitoring for pollutants and harmful gases, and corrosion has been eliminated as a factor in equipment failure.

G2 - Severity Level Moderate: a data center atmosphere where there is a minimal amount of indoor air pollution and corrosion, still within measurable and manageable parameters. Yet these corrosives and pollutants may now begin to show a small effect on equipment dependability.

G3 - Severity Level Harsh: a data center atmosphere where there is a high risk that a corrosive attack will happen. These harsh levels require unrelenting vigilance in environmental monitoring and prompt the use of specially designed and packaged equipment.

GX - Severity Level Severe: a data center atmosphere in which only specially designed and packaged equipment is expected to survive. Specifications for equipment at this level are a matter of negotiation between user and supplier.

Monitoring IAQ Levels in Data Centers

The gas sensor detects various metal oxide (MOx) gases, displaying the value as a VOC Index. Examples of these gases are:

- Acetone (e.g. paints and glues)
- Toluene (e.g. furniture)
- Ethanol (e.g. perfume, cleaning fluids)
- Hydrogen sulfide (e.g. decaying food)
- Benzene (e.g. cigarette smoke)

The VOC Index is a logarithmic scale that is relative to the typical indoor gas composition over the past 24 hours. With a range of 0 to 500, the typical value for a normal environment is 100. Values greater than 100 indicate worsening air quality, with a higher concentration of metal oxide gases than over the past 24 hours; values lower than 100 indicate improving air quality.

Particulates are detected in five different size classes: PM0.5, PM1.0, PM2.5, PM4 and PM10. The sensor is able to measure the mass concentration of particles in the PM1.0 to PM10 range and the particle number concentration in the PM0.5 to PM10 range. The typical particle size is also measured, based on the average size of the current sample. An air particle sensor is utilized during indoor air quality (IAQ) assessments of clean rooms and workplaces. The specific type of particle is not identified, but the sensor measures the quantity or mass of airborne particles. These particles could come from sources such as exhaust smoke and airborne dust.

Airborne pollutants can be a health hazard, resulting in sneezing, headaches, asthma, and so on. In addition, during many agricultural and industrial processes, airborne dust can be a serious hazard, forming combustible dust clouds.

Data centers store the invaluable equipment and data of the business, so it is important to always be on the lookout for all the factors that affect their efficiency, especially factors that can cause disasters like corrosion. If a data center’s air quality needs to be tested and monitored, don’t hesitate. Monitoring systems might be a little expensive, but that is nothing compared to the cost of repairing or replacing damaged equipment. In the end, the efficiency of the data center is what matters most, because prevention is always better than cure. Do not let poor indoor air quality add uncertainty to the business.
“Spoof” is an interesting word, as it used to have more benign connotations, referring to humorous pranks or imitations, with the term commonly being used for genre spoofs (such as many of the works of Mel Brooks) in movies and television. However, that usage has largely fallen by the wayside in recent years (replaced, generally, by “parody”), in part because the more sinister uses of the term have become more common. Today, when you say the word spoof it will be understood almost universally to refer to some kind of scam or hoax – and that is primarily because the prevalence of caller ID has resulted in a meteoric rise of phone or caller ID spoofing: someone calling and pretending to be someone else.

What Exactly is “Spoofing”?

For such a silly sounding word, spoofing has become a surprisingly serious headache for people and agencies around the world. Caller ID spoofing is, simply put, any time someone calls using some sort of falsified number. Usually this number is dolled up somehow to look like a number you would trust. Most typically, a basic spoofing tactic is to change the caller ID number the scammer is calling from so it appears to be from your area code. To a lot of people, this instantly makes them relax their guard, at least slightly.

Some types of spoofing go even deeper and mimic actual phone numbers. This could be the customer support number of your bank or credit card issuer, a government agency like the IRS or the city hall of your town, or even a private citizen like one of your friends or family members. Some of these things are more easily recognizable than others. The main takeaway: don’t let your guard down just because a call comes from a familiar number. You can’t relax until you’ve somehow verified it’s legitimate.

How to Recognize a Spoofed Call

While the number often looks legitimate, once the call begins it can often be very easy to tell that a phone call isn’t quite right somehow. Here are a few common variants:

If you pick up the phone and hear a computerized voice immediately start talking, it is some sort of robo-call. Whether this call comes from a legitimate business (unlikely) or not, my best advice to you? Hang up the phone. Even if it is some sort of legitimate call, if it’s not important enough for them to get a real person on the line for it, it’s probably not important enough to listen to.

Keep an ear out for a variant of the robo-call: calls that are completely silent. These are usually lines that are waiting to hear a human voice on the other end, which tells them the line is live and they can begin playing the computerized playback. Of course, if the call isn’t from a legitimate business you have even less reason to talk to them, and even a simple “hello” is enough for them to have something on you. You’ll start receiving more and more of these scam calls once they have verified that your number is active and that someone will pick up the phone when they call. These are typically the ones that will just mimic your area code and call it a day, relying on your natural habit of picking up the phone and immediately greeting the caller to help them.

The Bill Collector

These people call claiming to be from a legitimate business and have the phone number to back it up. If you look at one of your bills or on the company’s website, the numbers will usually match. These are probably the most dangerous type of spoofed call, but are thankfully less common than the more “generic” calls you’ll receive from scammers.
This is because, generally speaking, you’ll only start receiving these calls if your information has already been leaked to someone. They know you have an account with whatever business they’re impersonating, and are using that to target you specifically. These calls are, as a result, not usually scattershot calls and are going to sound a lot more convincing. They’ll get someone who sounds professional, reading from a script that is probably derived from a real customer service script modified for their own means. While the most common form of this will be some kind of bill collection call (because in their logic, why not swing for the fences?), this could be any legitimate-sounding business call.

Much of the time, these calls will try to solicit all sorts of personal information from you, in the guise of verifying your identity. The scariest part is, calls that are legitimately from your bank or whoever you have an account with will often do the exact same. All of this sounds very reasonable. That is why you should never, under any circumstances, trust an incoming call. That is the true danger of caller ID spoofing: it throws everything you might be willing to trust into question.

These are probably the least common calls. They involve impersonating a specific person: a friend or family member of the victim. Like the business-related calls above, these are usually the result of someone already having some of the victim’s information. In this case there is also usually an extra layer: they are aware, or at least suspect, that the person they’re calling might be in some way impaired. Either in terms of mental or physical ability, they are banking on the person being called being compromised in some way, whether through a condition like Alzheimer’s that impairs decision-making or through partial or total deafness. These types of conditions make it hard for the person being called to identify the caller, especially if they rely on captions and the like to make phone calls. Removing all tonal cues that the person is an impostor makes things very difficult. If you are in such a position, you’ll need to be extra vigilant for scammers like this. Try not to engage with anyone you haven’t called yourself. Likewise, if you know someone whose judgment is compromised, try to keep an eye out for them and make sure they aren’t taken in by such scams.

What to Do if You Receive a Spoofed Call

The best thing you can do? Just hang up. Don’t ever try to bait a caller, or engage with them in any way. Doing so puts you on their list, and you’ll start receiving more and more calls of this nature. Better yet, unless you recognize an exact phone number, one of the best things you can do is simply not pick up. Obviously, this is not a foolproof plan, as sometimes you’ll be awaiting a call from an unfamiliar number, and sometimes, as mentioned, you’ll receive a call that appears to be from a legitimate, real phone number, but it’s a good practice to get into.

If you do find yourself in the position of being obligated to pick up the phone for a call from an unfamiliar number (e.g., you are waiting for a business call and either don’t know or can’t remember the exact phone number), take a moment to wait before saying anything. Crush that desire to reflexively say “Hello?” on picking up the phone. If the person on the other end is real, they’ll engage first.
This is, of course, not a way to ensure you’re not talking to a scammer, but it does at least filter out many robo-calls (most of which only start their recording once you have indicated your presence). Some robo-calls start their playback no matter what, but it’s usually easy to recognize an automated message, let it play, and then hang up without responding.

Once on the line with a real person, keep in mind that you can refuse any requests for information they throw at you. While this might be mildly annoying for any real customer service representative you get on the phone with (telling them you’re not willing to talk), it’s better than being taken in by a very convincing scammer and giving up vital information like your password, security questions, account numbers, or even Social Security number. Always remember that you can tell someone you’re not comfortable giving them that information on an unsolicited call, then hang up and call back using a number you know is legitimate. Alternatively, you can often just deal with whatever business you need to via the company’s website, if it’s legitimate.

Bill collection calls are a notable example of this. It does not take much effort to tell someone who is asking to verify your identity for a bill payment, “Hey, I’ll handle it online,” then politely give them your farewells and hang up. They have no ability to compel you to stay on the line, and even if the call is legitimate (and you know it’s legitimate), there are plenty of reasons you may be unwilling or unable to deal with that bill at that exact moment anyway.

Ultimately, the best thing to keep in mind when you suspect you’ve been called from a spoofed number (which is something you should sadly always assume; it’s safer that way) is that no phone call from a business you have an account with is really that important. They’ll try to make it sound vital and urgent, and sometimes the information they’re trying to get across (there’s a problem with your account, a bill is past due, etc.) is actually important, but nothing ever needs to be dealt with right that exact second. Take your time, verify all the information presented, and handle it at your own pace. This alone will keep you safe from most types of scams.
Commonly referred to as the fourth industrial revolution, Industry 4.0 describes the current trend of automation and data exchange in manufacturing technologies, which includes cyber-physical systems, the Internet of Things (IoT), cloud computing and cognitive computing. Industry 4.0 is paving the way for the “smart factory of the future,” where cyber-physical systems monitor physical processes, create virtual copies of the physical world and make decentralized decisions. Over IoT, systems communicate and cooperate with each other and with manufacturing personnel in real time, both internally and across organized services offered and used by those in the value chain.

Early adopters are already succeeding with new Industry 4.0 initiatives. For example, a large multinational industrial conglomerate leveraged these capabilities to understand production quality in real time and predict product quality, reducing the need for end-of-the-line testing and increasing yield. And a major automotive OEM (original equipment manufacturer) analyzed data from multiple pieces of production equipment and used it to predict quality exceptions 20 to 30 minutes before they occurred, so that remedial actions could be taken in advance and the quality defects removed.

Most manufacturers are eager to harness the power of Industry 4.0 to drive innovation, efficiency, improved customer service and cost savings, yet first they must embrace the core enabling technologies, namely IoT devices and sensors, automation, artificial intelligence (AI)/machine learning and analytics, to make that happen. IoT is essential to Industry 4.0 because it provides bi-directional communication, ensuring not only that status can be received but also that information can be sent to change machine performance, even on a unit-by-unit basis when needed. Analytics provides the foundation for understanding machine condition, but it is also used to address more advanced needs, such as determining machine performance as it relates to maintenance needs and production quality requirements in real time. Also essential to Industry 4.0 is predictive analytics, which is used to determine future maintenance needs from a long-term perspective while also driving short-term quality prediction to improve production yield. Lastly, machine learning and AI must be used to ensure predictive algorithms remain accurate by identifying operating changes and applying them to the algorithms.

Most manufacturers are behind the curve when it comes to adopting these core enabling technologies. The early adopters that have implemented them have realized early success and are in the process of scaling these technologies widely across their factories. These manufacturers have learned some early and important lessons about how best to evaluate these technologies and determine which approaches suit their particular organization.

Best Practices for Evaluating and Implementing IoT, Automation, Machine Learning/AI, and Analytics

Evaluating these core technologies is a relatively simple process. Successful early adopters in the manufacturing space have followed these steps:

- Understand Device Management Capabilities - Understand all of your device management capabilities immediately, as security and connectivity rely on this important area. Without it, your IoT implementation is doomed.
- Embrace Multiple Deployment Options - First, understand the capabilities of multi-tenancy and make sure you have deployment options for the cloud, on-premise and the edge, in order to support the multiple deployment options your factory or manufacturing floor will likely need.
- Get Ready for More Advanced Analytics - Closely examine your analytics capabilities and ensure that your widgets can be easily used for simple analysis while your IoT platform is capable of handling much more advanced algorithms.
- Connectivity Is Still Key - Understand that you need connectivity to enterprise systems to enrich data with context, but also to share performance data throughout the manufacturing enterprise.

Culture Change Requirements

Some manufacturers have wondered whether a culture change is involved when new technologies must be implemented and embraced. Culture change needs vary from organization to organization, but the predominant change needed is the ability to focus on the possible, not just on the blueprint. Specifically, organizations must have more than an understanding of the ROI they wish to achieve or the desired outcomes of the implementations. Manufacturing teams must share a complete vision of what they hope to achieve, along with a step-by-step execution plan of how they will reach that goal, with the caveat that new learnings and critical needs will arise on this transformational journey. This mindset is new for the typically risk-averse manufacturing industry, as it is more akin to the startup world, and it can be tough to maintain. However, it is important for teams to remain flexible in their thinking when implementing ground-breaking technologies like IoT, predictive analytics and AI.

Other Best Practices

With IoT, traditional best practices are still relevant, but they need to be modified to follow a scaled-back, iterative approach. Manufacturers should start by analyzing the collection of problems they wish to solve and define a business value for each. But rather than follow the traditional stage-gate process of creating a complete multi-year IRR calculation, scale it back to a shorter, iterative ROI. This ROI is then tested through a short experimental implementation of the technology over a period of less than 30 days. This requires implementing a minimum viable product with basic, non-industrial-grade sensors that are not meant to last years but are instead used to validate the ROI before proceeding to a larger and longer technology rollout. Later on, scalability will be a key focus during the longer rollout phase.

Thriving in the brave new world of Industry 4.0 depends on manufacturers’ ability to embrace IoT, automation, AI, machine learning and analytics and to scale those new technologies across the manufacturing enterprise. By following these best practices and a more entrepreneurial and experimental approach, manufacturers will create their own factory of the future in no time.

Sean Riley is Global Industry Director of Manufacturing and Transportation at Software AG.
Bridgeworks features in this article from Government Public Sector Journal for its offer to the UK’s Cabinet Office, which has been approved, to make PORTrockIT products available free of charge for a year to “any Health Organisation or Medical Research Establishment engaged in this COVID-19 work.”

MAY 1, 2020

The Coronavirus COVID-19 pandemic is spreading rapidly throughout the world and scientists are racing to find a vaccine. Big data analytics, data visualisation and artificial intelligence are also being recruited to track the spread of the virus, with the aim of helping governments and public authorities to make better decisions about how to prevent and reduce the infection rate.

The virulent nature of COVID-19 means that time is of the essence. For example, ZDNET reports that the Center for Systems Science and Engineering at Johns Hopkins University “is running an online dashboard that tracks the spread of the deadly coronavirus, as it makes its way across the globe…The data is visualised through a real-time graphic information system (GIS) powered by Esri.” For the livestreamed dashboard, data is collated from the World Health Organisation (WHO), as well as from centres for disease control in the US, China and Europe, for the purpose of showing all confirmed and suspected cases of coronavirus COVID-19. It also records the number of recovered patients and deaths.

Improving contact tracing

Big data can also be used to make contact tracing more efficient. In Indonesia, for example, there is no time to wait for a vaccine that may be at least 18 months away. So, in the meantime, the country’s government and health authorities need to be able to “record the applied medication, treatments and the patients’ responses to find out statistically which treatment is the most effective”, writes Alexander Senaputra, technical adviser for PT Geoservices, in the Jakarta Post on 17th March 2020. He adds: “This approach is similar to efforts being made to find a cure for cancer in countries with advanced medical systems. This is where all patients’ data — especially those who recover — are taken and processed by algorithm to find something in common that gives doctors a lead about the best medication.”

Data analysis can be deployed to track the average time it takes patients to recover while they are receiving treatment. The data can also be used to predict how many more beds are required following a spike in COVID-19 infections, which leads to an increased number of patients needing hospital treatment.

China: Big data and AI

In China, the epicentre of the origins of the global pandemic, artificial intelligence and big data have been deployed in its cities. Shawn Yuan, writing for Aljazeera on 1st March 2020, says thermal scanners were introduced to spot people showing symptoms such as a high temperature. These temperature checks can be used to inform passengers, transport and health authorities so that preventative action can be taken to reduce the spread of the virus. The belief is that the development of technology can enable the authorities to fight the disease in a way that was not possible during the SARS outbreak in 2003. However, much depends on the quality of the data that’s collated and how it’s defined. Data also needs to be collated from a wide variety of sources, shared and backed up.

The obstacle: latency

However, latency and packet loss can make the synchronisation of databases inefficient, and slow networks can reduce the accuracy of ‘real-time’ data.
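To see why latency and packet loss matter so much when moving large datasets between organisations, the sketch below applies the widely used Mathis approximation for single-stream TCP throughput, roughly MSS / (RTT × √p), which caps transfer speed regardless of how fast the underlying link is. The round-trip times and loss rates are illustrative assumptions, not measurements of any particular network or of PORTrockIT itself.

```python
# Minimal sketch: the Mathis et al. approximation for single-stream TCP throughput,
# throughput <= (MSS / RTT) * (1 / sqrt(p)), showing how latency (RTT) and packet
# loss (p) cap transfer speed regardless of link capacity. Values are illustrative.
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Approximate achievable throughput of one TCP stream, in megabits per second."""
    rtt_s = rtt_ms / 1000.0
    throughput_bytes_per_s = (mss_bytes / rtt_s) * (1.0 / math.sqrt(loss_rate))
    return throughput_bytes_per_s * 8 / 1_000_000

if __name__ == "__main__":
    mss = 1460  # typical TCP maximum segment size in bytes
    scenarios = [
        ("same city", 5.0, 0.0001),
        ("cross-continent", 80.0, 0.0001),
        ("intercontinental, lossy", 200.0, 0.001),
    ]
    for name, rtt_ms, loss in scenarios:
        print(f"{name}: ~{tcp_throughput_mbps(mss, rtt_ms, loss):.1f} Mbit/s per stream")
```

The numbers fall off sharply as distance and loss grow, which is why WAN data acceleration approaches lean on techniques such as parallel streams to keep long-distance transfers full.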
With accurate real-time data, everyone will be able to get back to normal as soon as possible, and to speed up the march towards finding a vaccine against the coronavirus. However, a lack of real-time data modelling could lead to the wrong decisions being made – including over when to reduce lockdown measures in order to kickstart the economy in each country across the world. Some countries have begun, at the time of writing of 20th April 2020, to take these tentative steps. The results are being closely watched. David Trossell, CEO and CTO of Bridgeworks, reveals the types of data that are crucial to decision-making: “Key data to help form this decision has to include at least: number of infections, number of deaths, number of survivors, number of tests, outcome of tests, drug trials, locational data – on a global basis!” In other words, governments, scientists and health authorities across the world should ideally share this data to beat back the virus. This kind of data is so invaluable it’s important to protect it against ransomware and get the data together in a timely manner to ensure accurate data analysis. Trossell adds: “Intelligence is at the heart of decision-making and that is driven by data. Big Data. Not so long ago, we were all talking about Big Data in the 4 key pillars, Velocity, Veracity, Volume and Verity each is key, but if we want to look at this on a global basis the velocity is going to play a key part.” The preservation of data is essential, and this isn’t just about real-time data. To enable the right decisions to be made, there is also a need to be able to analyse historic data. All the data the scientists, government, health authorities and other interested parties generate now could be useful in the future – helping to prevent a future pandemic. “Unfortunately, we’ve already seen cyber-attacks on medical organisations just when they are distracted elsewhere”, says Trossell before advising: “Offsite air-gapped back-ups are critical; the more the merrier in my mind, and with the right technology this is now highly possible.” “The problem with off-site air-gapped data storage, and also where data has to be transferred across any distance between organisations, is one that many see as impossible to implement in an efficient way is due to latency and packet loss.” However, one way to achieve this is by deploying WAN data acceleration solution such as PORTrockIT. There are also concerns about data accuracy. “As we‘ve seen in the UK, daily data information figures can be skewed because of the lack of velocity in the data: sometimes these cover a period of number of days, sometimes the time between events and central reporting can be over 5 days, which makes the decision of how and when to lift the lockdown problematic”, he comments. Trossell adds: “So, if we’re going to combine big data analysis with AI, we’ve got to meet the 4 pillars of Big Data, especially the velocity pillar; and to crack the latency problem we need a different approach to transporting data not only efficiently but securely.” The issue of data accuracy is exacerbated by different governments and authorities using different data models, making a bit like comparing apples and pears. Trossell explains: “This has always been the problem with any data and digital in particular. 
A common reporting format would be extremely useful for the electronic gathering of data – perhaps it is something the WHO should look into for the next emergency – as we all know there will be others.” Trossell concludes that the pandemic is likely to change the way people work, with more people continuing to use technology to work from home. Yet, he claims we are very social beings and that lack of contact with others is causing many mental health concerns. UK Cabinet Office Meanwhile, joining the fight against COVID-19, Bridgeworks has written to the UK’s Cabinet Office to make its PORTrockIT products available free of charge for a year to “any Health Organisation or Medical Research Establishment engaged in this COVID-19 work.” In the letter, the company says: “PORTrockIT massively accelerates the transfer of vast quantities of data over a very long distance, in a manner that is unique and which overcomes the problems of latency, packet loss and congestion on the line, in a way that no other organisation in the world has come anywhere near matching.” The solution uses machine learning, artificial intelligence and parallelisation to mitigate wide area network (WAN) latency and packet loss. While this can’t change a scenario where poor quality data leads to poor decisions, it can make real-time big data analysis more accurate and enable voluminous amounts of data to be shared, backed up and transferred across the globe – making it quicker and easier to conduct research and collaboration against COVID-19.
<urn:uuid:3eec98ac-1196-46b8-914a-3f1f7082dc94>
CC-MAIN-2022-40
https://www.4bridgeworks.com/covid-19-how-a-i-and-wan-acceleration-can-help-to-find-a-cure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00065.warc.gz
en
0.95264
1,810
2.84375
3
In the first of a series of blog posts on his research, he said "content escaping," while not a sophisticated obfuscation technique, is effective at hiding - or obfuscating - the malicious content of a message. It is also far more commonly used on malicious websites than in phishing or scam email messages. It's the technique's growing use in email that caught Katz's attention. "There is a movement from using solely emails as a way to propagate phishing scams into social networks and messaging and social messaging platforms to deliver a lot of those scams," he says. "When you try to distribute attacks through social media, then you are actually using the power of that platform to do a very rapid kind of distribution that is dependent on the trustworthiness of the people that are distributing them." Because the techniques are proving so successful, Katz says that they're not limited to a single criminal organization or geographic area: they're being used worldwide by a wide variety of threat actors. And because they can come from so many sources, and hide in so many ways, Katz says that basic user education may still be one of the most powerful tools to use against them. It starts, he says, with reminding users that an email message that seems too good to be true probably is. And if the URL seems unusual, or appears in an unusual location in a message or on a Web page, that should be a red flag. "Stop at that point, think twice and try to figure out if you need to give any personal information." If it's suspicious enough to make you think, he says, then it's almost certainly suspicious enough to make you stop.
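As a rough, hypothetical illustration of the "content escaping" technique Katz describes, the Python snippet below hides the same URL behind hex and HTML-entity escape sequences: a naive keyword scan never sees the plaintext, yet the rendering client decodes it trivially. The URL and the decoding calls are illustrative only and are not drawn from Katz's research.

```python
# Minimal illustration of "content escaping": the same URL, hidden behind
# hex and HTML-entity escape sequences so a naive keyword filter never sees it.
import html

escaped_js = r"\x68\x74\x74\x70\x73\x3a\x2f\x2f\x65\x76\x69\x6c\x2e\x65\x78\x61\x6d\x70\x6c\x65"
escaped_html = "&#104;&#116;&#116;&#112;&#115;&#58;&#47;&#47;evil&#46;example"

# A plain substring scan for the malicious URL misses both forms...
print("https://evil.example" in escaped_js)     # False
print("https://evil.example" in escaped_html)   # False

# ...but the browser or mail client decodes them before rendering.
decoded_js = escaped_js.encode().decode("unicode_escape")
decoded_html = html.unescape(escaped_html)
print(decoded_js, decoded_html)                 # both yield the hidden URL
```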
<urn:uuid:17369a1b-884f-4902-9ffd-8f74f1b2f5f3>
CC-MAIN-2022-40
https://www.darkreading.com/threat-intelligence/javascript-obfuscation-moves-to-phishing-emails
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00065.warc.gz
en
0.953798
722
2.609375
3
The Password Reset Feature – Changing your Password or Someone Else’s? Most user-based web applications offer a password change or reset feature. When working correctly, it is an essential part of maintaining user accounts and keeping customers happy. However, a small misconfiguration within this key feature can result in the compromise of any user account. A correctly configured password reset feature will verify if a user is authorized to change their password before executing the reset. Typically, this verification occurs through validating the user’s session cookie or authorization token. Why Does it Matter? Some applications instead rely on the username sent through a POST parameter to verify a user’s legitimacy. For example, the screenshots below show weak password reset requests: The above requests rely on a ‘temp-forgot-password-token’ to verify that the user is authorized to perform a reset. However, when this token is sent empty the request is still successful, allowing an attacker to reset any user’s password simply by changing the username parameter. Additional methods to manipulate password reset features through an email parameter can be seen below: Password reset features are also commonly implemented through directly emailing users a password reset link. This link typically contains a token that is correlated to a user account. If an attacker can identify that the token uses weak entropy, is a hash of the user email, or is used repeatedly, it can likely be compromised and used to take over user accounts. How Can You Fix It? Password reset features are a common part of web applications. These features are typically unique to the application and require manual testing to determine if the business logic can be manipulated by an experienced attacker. As a developer, it is best to ensure that users are properly authorized to reset an account password through their session token. Additionally, any tokens used in the reset process should be long and randomly generated.
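The following Python sketch illustrates the two recommendations above: authorize a password change from the caller's own session rather than a client-supplied username, and issue long, random, single-use reset tokens that are stored hashed and checked for expiry. The function names and in-memory store are hypothetical and intended only to show the shape of a safer design.

```python
# Illustrative sketch only: authorize via the authenticated session, and use
# long, random, single-use, expiring reset tokens stored only as hashes.
# Names and the in-memory store are hypothetical, not from the article.
import hashlib
import secrets
import time

RESET_TOKEN_TTL = 15 * 60                            # reset links expire after 15 minutes
_reset_tokens: dict[str, tuple[str, float]] = {}     # token hash -> (user_id, expiry)

def _hash_token(token: str) -> str:
    return hashlib.sha256(token.encode()).hexdigest()

def issue_reset_token(user_id: str) -> str:
    token = secrets.token_urlsafe(32)    # ~256 bits of entropy, unrelated to the user's email
    _reset_tokens[_hash_token(token)] = (user_id, time.time() + RESET_TOKEN_TTL)
    return token                         # emailed to the user; only the hash is kept server-side

def redeem_reset_token(token: str) -> str | None:
    record = _reset_tokens.pop(_hash_token(token), None)   # pop() makes the token single-use
    if record is None:
        return None
    user_id, expires_at = record
    return user_id if time.time() <= expires_at else None

def change_password(session_user_id: str, requested_user_id: str, new_password: str) -> bool:
    # Reject the weak pattern shown above: never trust a username or an empty
    # token from the request body; the authenticated session is authoritative.
    if session_user_id != requested_user_id:
        return False
    # ... salt, hash and store new_password for session_user_id ...
    return True
```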
<urn:uuid:e73ec54e-ec9e-4ee6-96ce-f61bc221b2b1>
CC-MAIN-2022-40
https://echeloncyber.com/intelligence/entry/hackers-perspective-web-app-vulnerabilities-password-reset-feature
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00065.warc.gz
en
0.884802
387
2.703125
3
The Metaverse and its Role in Decarbonization | SPONSORED How digital twins and metaverse further greenhouse gas reduction My last two pieces focused on two subjects that are receiving more and more attention every day. The Metaverse, which for many is synonymous with Facebook changing its name to Meta, and decarbonization, a term that is vitally important in the combat for climate change. Digital twin and metaverse technologies ally At UrsaLeo, we’ve spent a lot of time discussing these two terms – digital twin and metaverse – and how they coincide with the unique digital platform we’ve built. For some, making the connection between the Metaverse, decarbonization, and digital twins may seem like a stretch. However, there are important synergies between these terms and what they mean that will help in the fight for a greener and more energy efficient planet. We’ve all heard quite a bit about the Metaverse lately and, like the early days of the Internet, not everyone has the same vision for what it means and how it will redefine the way we work, play, and communicate online. Digital Twin – an essential element of Metaverse Our team defines the metaverse as a simulated digital environment that harnesses technologies including digital twins, mixed reality, internet of things, and other platforms to mimic the real world. Think of the sensors you’ve installed that send information about your home or office into the cloud and then to an app on your phone (e.g., temperature, energy usage, etc.). The Metaverse works in a similar way but, in this case, the outside physical world is represented in a digital format. Decarbonization refers to the reduction of carbon and greenhouse gas emissions produced by the burning of fossil fuels. This involves monitoring energy usage in an effort to decrease CO2 output per unit of electricity generated. Reducing the amount of carbon dioxide occurring as a result of power generation in an industrial setting is essential to meet global temperature standards set by The Paris Agreement and governments around the world. Governments Jump on the Decarbonization Bandwagon Most governments have aggressive goals to reduce carbon emissions and buildings are a huge source due to their high energy usage. Just looking at California, the Building Decarbonization Coalition lays out a plan for the state to cut building emissions by 20 percent in the next six years and 40 percent by 2030 — and to adopt zero-emission building codes for residential and commercial buildings by 2025 and 2027, respectively. Residential buildings produce roughly two-thirds of the state’s building emissions with commercial buildings producing around one-third. Additionally, government buildings are often targeted first as a way to showcase successes when they move to enact laws or put pressure on other companies. Across the pond, the UK government launched its Public Sector Decarbonisation Scheme in 2020, to fund low carbon heating projects in public sector estates. The scheme aims to put the public sector at the forefront of decarbonising buildings, showcasing projects that pave the way to help the UK meet its net zero target. Digital Twins Align With the Metaverse Digital twins are an inherent part of the Metaverse because they digitally represent the physical world and allow users to interact with these digital replicas from anywhere and in a number of productive and efficient ways. They are especially effective in helping to control energy usage and carbon emissions. 
Using a digital replica of your site or building allows you to measure what is happening realtime (e.g., temperature, humidity, energy consumption, etc.). When you can measure and continually gather data as it changes, you’re better equipped to understand and act on the information given the environmental surroundings. Once a user is tuned into the surroundings, systems can then be controlled based on building thresholds, regulatory requirements, or other defined parameters. What are the applications of Digital Twin in Metaverse? For example, lights and temperature can be configured to automatically reduce usage depending on the time of day or when the sensor detects no one is in the room. HVAC systems can be made much more efficient by retrofitting outdated equipment and requiring the installation of newer technology that operates more efficiently and gives direct feedback to users on consumption – which translates to costs. These systems can then be turned into a digital twin and monitored to ensure that they are operating efficiently and adjusted from a dashboard when necessary. One of the benefits of digital twins is that they can also be connected to solar panels and heat pumps, plumbing systems, and exterior lighting to help optimize energy usage. The options are limitless and the opportunities being presented by governments all over the world are making it more attractive and affordable to rethink how technology can help reduce greenhouse gasses. UrsaLeo is currently working with the UK government on a decarbonisation and energy optimization project in partnership with the Active Building Centre. UrsaLeo’s Gemini Platform Creates a Seamless Connection With the Metaverse Harnessing the company’s Gemini Platform, a 3D digital twin is being used to decarbonize and optimize energy management for a show-home. UrsaLeo’s Gemini delivers a holistic view of assets and facilities, so users can monitor and manage everything from anywhere, anytime. Built with the Unity Gaming engine, the 3D digital twin showcases a scaled layout of the facility and offers a virtual walkthrough that can be manipulated and viewed from multiple planes. The Gemini product also collects real-time information from numerous environmental sensors and provides a complete picture of energy usage versus generation (through solar panels) within the house. The model is being used to educate consumers, contractors, and developers on the value of 3D Digital Twin technology and management and how it can be used in the fight against climate change and ensure future decarbonization. Let’s start a conversation. Share your thoughts on LinkedIn on the Industrial Metaverse and other ways you see digital twins helping to meet carbon reduction requirements set by local and national governments globally.
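To make the threshold-based control described above a little more concrete, here is a toy Python sketch of the kind of rule a digital twin might apply to a single zone. The field names and thresholds are invented for illustration and are not UrsaLeo's Gemini API.

```python
# Toy sketch of the rule described above: a digital twin receives live zone
# readings and decides whether lighting and HVAC can be dialled back.
from dataclasses import dataclass

@dataclass
class ZoneReading:
    occupied: bool
    temperature_c: float
    hour_of_day: int          # 0-23, local time

def hvac_setpoint(reading: ZoneReading,
                  comfort_c: float = 21.0,
                  setback_c: float = 17.0,
                  work_hours: range = range(8, 19)) -> float:
    """Return the heating setpoint: comfort when the zone is in use, setback otherwise."""
    in_use = reading.occupied and reading.hour_of_day in work_hours
    return comfort_c if in_use else setback_c

def lights_on(reading: ZoneReading) -> bool:
    return reading.occupied

# Example: an empty office at 22:00 drops to the setback temperature, lights off.
evening = ZoneReading(occupied=False, temperature_c=20.5, hour_of_day=22)
print(hvac_setpoint(evening), lights_on(evening))   # 17.0 False
```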
<urn:uuid:3511d8ce-c098-484a-bd36-d5a1bfb7a6a1>
CC-MAIN-2022-40
https://www.iiot-world.com/industrial-iot/connected-industry/the-metaverse-and-its-role-in-decarbonization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00065.warc.gz
en
0.924908
1,226
3.03125
3
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Fighting fake news has become a growing problem in the past few years, and one that begs for a solution involving artificial intelligence. Verifying the near-infinite amount of content being generated on news websites, video streaming services, blogs, social media, etc. is virtually impossible There has been a push to use machine learning in the moderation of online content, but those efforts have only had modest success in finding spam and removing adult content, and to a much lesser extent detecting hate speech. Fighting fake news is a much more complicated challenge. Fact-checking websites such as Snopes, FactCheck.org, and PolitiFact do a decent job of impartially verifying rumors, news, and remarks made by politicians. But they have limited reach. It would be unreasonable to expect current artificial intelligence technologies to fully automate the fight against fake news. But there’s hope that the use of deep learning can help automate some of the steps of the fake news detection pipeline and augment the capabilities of human fact-checkers. In a paper presented at the 2019 NeurIPS AI conference, researchers at DarwinAI and Canada’s University of Waterloo presented an AI system that uses advanced language models to automate stance detection, an important first step toward identifying disinformation. The automated fake-news detection pipeline Before creating an AI system that can fight fake news, we must first understand the requirements of verifying the veracity of a claim. In their paper, the AI researchers break down the process into the following steps: - Retrieving documents that are relevant to the claim - Detecting the stance or position of those documents with respect to the claim - Calculating a reputation score for the document, based on its source and language quality - Verify the claim based on the information obtained from the relevant documents Instead of going for an end-to-end AI-powered fake-news detector that takes a piece of news as input and outputs “fake” or “real”, the researchers focused on the second step of the pipeline. They created an AI algorithm that determines whether a certain document agrees, disagrees, or takes no stance on a specific claim. Using transformers to detect stance This is not the first effort to use AI for stance detection. Previous research has used various AI algorithms and components, including recurrent neural networks (RNN), long short-term memory (LSTM) models, and multi-layer perceptrons, all relevant and useful artificial neural network (ANN) architectures. The efforts have also leveraged other research done in the field, such as work on “word embeddings,” numerical vector representations of relationships between words that make them understandable for neural networks. However, while those techniques have been efficient for some tasks such as machine translation, they have had limited success on stance detection. “Previous approaches to stance detection were typically earmarked by hand-designed features or word embeddings, both of which had limited expressiveness to represent the complexities of language,” says Alex Wong, co-founder and chief scientist at DarwinAI. The new technique uses a transformer, a type of deep learning algorithm that has become popular in the past couple of years. Transformers are used in state-of-the-art language models such as GPT-2 and Meena. 
Though transformers still suffer from the fundamental flaws, they are much better than their predecessors in handling large corpora of text. Transformers use special techniques to find the relevant bits of information in a sequence of bytes instead. This enables them to become much more memory-efficient than other deep learning algorithms in handling large sequences. Transformers are also an unsupervised machine learning algorithm, which means they don’t require the time- and labor-intensive data-labeling work that goes into most contemporary AI work. “The beauty of bidirectional transformer language models is that they allow very large text corpuses to be used to obtain a rich, deep understanding of language,” Wong says. “This understanding can then be leveraged to facilitate better decision-making when it comes to the problem of stance detection.” Transformers come in different flavors. The University of Waterloo researchers used a variation of BERT (RoBERTa), also known as deep bidirectional transformer. RoBERTa, developed by Facebook in 2019, is an open-source language model. Transformers still require very large compute resources in the training phase (our back-of-the-envelope calculation of Meena’s training costs amounted to approx. $1.5 million). Not everyone has this kind of money to spare. The advantage of using ready models like RoBERTa is that researchers can perform transfer learning, which means they only need to fine-tune the AI for their specific problem domain. This saves them a lot of time and money in the training phase. “A significant advantage of deep bidirectional transformer language models is that we can harness pre-trained models, which have already been trained on very large datasets using significant computing resources, and then fine-tune them for specific tasks such as stance-detection,” Wong says. Using transfer learning, the University of Waterloo researchers were able to fine-tune RoBERTa for stance-detection with a single Nvidia GeForce GTX 1080 Ti card (approx. $700). The stance dataset For stance detection, the researchers used the dataset used in the Fake News Challenge (FNC-1), a competition launched in 2017 to test and expand the capabilities of AI in detecting online disinformation. The dataset consists of 50,000 articles as training data and a 25,000-article test set. The AI takes as input the headline and text of an article, and outputs the stance of the text relative to the headline. The body of the article may agree or disagree with the claim made in the headline, may discuss it without taking a stance, may be unrelated to the topic. The RoBERTa-based stance-detection model presented by the University of Waterloo researchers scored better than the AI models that won the original FNC competition as well as other algorithms that have been developed since. To be clear, developing AI benchmarks and evaluation methods that are representative of the messiness and unpredictability of the real world is very difficult, especially when it comes to natural language processing. The organizers of FNC-1 have gone to great lengths to make the benchmark dataset reflective of real-world scenarios. They have derived their data from the Emergent Project, a real-time rumor tracker created by the Tow Center for Digital Journalism at Columbia University. But while the FNC-1 dataset has proven to be a reliable benchmark for stance detection, there is also criticism that it is not distributed enough to represent all classes of outcomes. 
“The challenges of fake news are continuously evolving,” Wong says. “Like cybersecurity, there is a tit-for-tat between those spreading misinformation and researchers combatting the problem.” The limits of AI-based stance detection One of the very positive aspects of the work done by the researchers at the University of Waterloo is that they have acknowledged the limits of their deep learning model (a practice that I wish some large AI research labs would adopt as well). For one thing, the researchers stress that this AI system will be one of the many pieces that should come together to deal with fake news. Other tools still need to be developed for gathering documents, verifying their reputation, and making a final decision about the claim in question. Those are active areas of research. The researchers also stress the need to integrate AI tools into human-controlled procedures. “Provided these elements can be developed, the first intended end-users of an automated fact-checking system should be journalists and fact-checkers. Validation of the system through the lens of experts of the fact-checking process is something that the system’s performance on benchmark datasets cannot provide,” the researchers observe in their paper. The researchers explicitly warn about the consequences of blindly trusting machine learning algorithms to make decisions about truth. “A potential unintended negative outcome of this work is for people to take the outputs of an automated fact-checking system as the definitive truth, without using their own judgment, or for malicious actors to selectively promote claims that may be misclassified by the model but adhere to their own agenda,” the researchers write. This is one of many projects that show the benefits of combining artificial intelligence and human expertise. “In general, we combine the experience and creativity of human beings with the speed and meticulousness afforded by AI. To this end, AI efforts to combat fake news are simply tools that fact-checkers and journalists should use before they decide if a given article is fraudulent,” Wong says. “What an AI system can do is provide some statistical assurance about the claims in a given news piece. That is, given a headline, it can surface that, for example, 5,000 ‘other’ articles disagree with the claim whereas only 50 support it. Such a distinction would serve as a warning to the individual to doubt the veracity of what they are reading.” One of the central efforts of DarwinAI, Wong’s company, is to tackle AI’s explainability problem. Deep learning algorithms develop very complex representations of their training data, and it’s often very difficult to understand the factors behind their output. Explainable AI aims to bring transparency to deep learning decision-making. “In the case of misinformation, our goal is to provide journalists with an understanding of the critical factors that led to a piece of news being classified as fake,” Wong says. The team’s next step is to tackle reputation-assessment to validate the truthfulness of an article through its source and linguistic characteristics.
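For readers who want a feel for what RoBERTa-based stance detection looks like in code, the hedged sketch below uses the Hugging Face transformers library and the four FNC-1 stance classes. The checkpoint path is a placeholder: the paper's fine-tuned weights are not assumed to be publicly released, so in practice you would fine-tune roberta-base on FNC-1 headline/body pairs before running inference like this.

```python
# Rough shape of RoBERTa-based stance classification with Hugging Face
# transformers. The checkpoint name is a placeholder; fine-tune roberta-base
# on FNC-1 (headline, body) pairs yourself before using it this way.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

FNC_LABELS = ["agree", "disagree", "discuss", "unrelated"]   # FNC-1 stance classes
CHECKPOINT = "path/to/roberta-finetuned-on-fnc1"             # placeholder path

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=len(FNC_LABELS))

def stance(headline: str, body: str) -> str:
    # Headline and article body are encoded as a sentence pair, truncated to the model limit.
    inputs = tokenizer(headline, body, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return FNC_LABELS[int(logits.argmax(dim=-1))]

print(stance("New drug cures the common cold",
             "Researchers found no evidence that the compound has any effect on cold symptoms."))
```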
<urn:uuid:637f0860-c92a-43d8-afa4-fa1310853dd3>
CC-MAIN-2022-40
https://bdtechtalks.com/2020/02/24/deep-learning-fake-news-stance-detection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00065.warc.gz
en
0.948092
2,056
2.59375
3
Information classification is a very important process that every organization should utilise, regardless of its size. You may think: why does information classification matter? Failing to classify information can lead to many organizational difficulties. Unclassified information is improperly organised, which means there is no way to ensure that information is actually being safeguarded as it needs to be. The result is information that is sometimes insecure and at other times too secure. Being as secure as it needs to be is always the aim, but being too secure can hinder day-to-day processes. For those reasons, information classification has become one of the most important priorities for all organizations. Information classification is the process of sorting information into different categories. Various computing devices navigate through folders such as documents, music and pictures. In the context of business, financial documents shouldn’t be mixed up with sales and marketing campaigns; instead, they should be kept separate in dedicated folders where the appropriate team can find them easily. There are different kinds of classification mechanisms available in the industry. Most often, information gets classified based on its sensitivity level and characteristics (e.g. type of information, contents, etc.). Information is most commonly classified into a small number of sensitivity levels. Importance of Information Classification 1. Consistency And Improved Understanding Everyone is aware of the level of sensitivity of the information, the level of risk, and the consequences if it is leaked.
<urn:uuid:7f4dfb18-f33e-4f7d-b192-77d566c628fb>
CC-MAIN-2022-40
https://www.consultantsfactory.com/article/classification-of-information
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00065.warc.gz
en
0.920064
287
3.078125
3
Phishing is a serious problem for companies of all sizes and all sectors. But by using phishing simulation exercises, an organisation can take control of this insidious threat. Cybercriminals love to phish employees to steal credentials and data, and infect companies with ransomware. The Anti-Phishing Working Group’s (APWG) Phishing Activity Trends Report shows that phishing hit an all-time high in Q2 of 2022. Additionally, these attacks showed a 7% increase in credential theft against enterprise employees. The result is often catastrophic when a password or other data is stolen: a phishing attack resulted in the U.S. Department of Defense handing over $23.5 million (£19.3 million) to a cybercriminal; the Open University in London experienced over one million phishing attacks over nine months, causing massive disruption, the list goes on. Why Are Phishing Simulation Programs Important? Phishing is arguably one of the most successful tools in the cybercriminal arsenal. A RiskIQ report into losses due to cybercrime found that $17,700 (£14,500) per minute was lost because of phishing attacks. Phishing is a clever method that tricks employees and other users into doing things that benefit the cyber-attacker. Cybercriminals widely use phishing, with 83% of organisations targeted by phishing attacks in 2021. As time passes, the hackers behind phishing emails become wise to the automated software systems that prevent phishing, such as anti-spam/email gateway platforms. As a result, the hackers change how the phishing emails operate and the content of those emails so that they can evade email gateways. For example, a report analysing 55.5 million emails sent to Microsoft Office 365 found that 25% of phishing emails containing malicious attachments were allowed through the email gateway built into Office 365. The result is that phishing emails are hard to prevent, and the phishing email ends up in an unsuspecting employee’s inbox, ready to trick them into handing over login credentials or installing malware. However, this unsuspecting employee can become a cyber-savvy, knowledgeable, security aware employee using regular simulated phishing exercises. What Happens In A Phishing Simulation Attack? Simulated phishing attacks are designed to look exactly like an actual phishing attempt. A simulated phishing platform is used to generate simulated phishing emails as part of a dedicated security awareness training campaign. Employees and any other user group needing Security Awareness Training should receive these simulated phishing emails. The simulated phishing email platform will interact with the user to help train them on the dangers of phishing. However, the platform should also record and audit what happens when that user receives the simulated phishing email. For example, does the user open the email, do they click on a link or download an attachment, and so on? These events are logged, and reports generated that can be used to assess how successful Security Awareness Training is and which areas need to be improved. Key Features of The Best Phishing Simulation Software Advanced simulated phishing software must have several important features: Mimics Real Phishing Emails The system must create realistic phishing emails that reflect current phishing campaigns seen in real-life. Provide a Wide Choice of Templates The phishing simulation platform should come with a large set of templates that can be used to design a realistic-looking phishing email. 
The templates should be configurable to match well-known brands and create ‘lookalike’ domain names and URLs. Can Be Tailored to Reflect Roles Fraudsters are known to target specific organisational roles, such as HR and accounts payable. Executives are also a targeted group and should be involved in simulated phishing exercises as specific cyber attacks such as Business Email Compromise may affect the C-Level. Therefore, simulated phishing messages should be tailored to groups of employees. People learn best when they are engaged and have an interactive learning experience. A platform that delivers point-of-need learning allows employees to learn from their mistakes. For example, employees will receive a warning notice if they click on a malicious link. A point of need interactive experience helps to explain what has happened and the dangers associated with a phishing email. Some advanced systems will take this further and educate the employee on avoidance strategies to help prevent future phishing attempts. Provides Language Options Many companies employ English as a second language staff or offices in non-English speaking countries. Therefore, simulated phishing email templates must be able to offer other language support. Audit and Reporting The metrics of a simulated phishing exercise are essential as they offer an insight into how well Security Awareness Training is progressing. In addition, metrics detail how many employees are vulnerable to phishing attacks. Some advanced systems will provide a granular breakdown of phishing metrics to analyse specific departments and user groups. Reports generated from these metrics demonstrate the effectiveness of a phishing simulation program and identify weak areas in staff’s understanding of what phishing entails. How Effective Are Phishing Simulations? According to a Cisco survey, phishing emails are difficult to spot, with 86% of companies having at least one employee click a malicious link. And it only takes one employee to click a link and enter login credentials to a spoof website to open the doors to your network. Phishing simulations offer a way to minimise the risk of that one disastrous click. How Frequently Should You Send a Phishing Simulation? A USENIX study into the longevity of Security Awareness Training found that employees could still spot phishing emails four months after the initial training. Still, after six months, the employees lost the ability to spot malicious emails. The report also highlights that videos and interactive training produce the longest lasting results, this level of training lasting a further six months. Therefore, the report recommends that training should be performed every six months. In addition, regular phishing simulations are a good idea because the security landscape also tends to change frequently.
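The granular reporting described above usually boils down to simple aggregations over simulation event logs. The sketch below, with a hypothetical event schema, computes the per-department click and credential-entry rates that a phishing simulation platform would typically chart.

```python
# Sketch of the granular reporting described above: per-department click and
# credential-entry rates from simulated-phishing event logs.
# The event schema is hypothetical; real platforms export something similar.
from collections import defaultdict

events = [
    {"user": "a.lee",  "dept": "Finance", "opened": True,  "clicked": True,  "submitted": False},
    {"user": "b.khan", "dept": "Finance", "opened": True,  "clicked": False, "submitted": False},
    {"user": "c.diaz", "dept": "HR",      "opened": False, "clicked": False, "submitted": False},
    {"user": "d.wong", "dept": "HR",      "opened": True,  "clicked": True,  "submitted": True},
]

totals = defaultdict(lambda: {"sent": 0, "clicked": 0, "submitted": 0})
for e in events:
    t = totals[e["dept"]]
    t["sent"] += 1
    t["clicked"] += e["clicked"]
    t["submitted"] += e["submitted"]

for dept, t in totals.items():
    print(f'{dept}: {t["clicked"]/t["sent"]:.0%} clicked, {t["submitted"]/t["sent"]:.0%} entered credentials')
```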
<urn:uuid:64c45233-648a-4d3c-89c6-953f6a72eed9>
CC-MAIN-2022-40
https://www.metacompliance.com/blog/phishing-and-ransomware/what-is-a-phishing-simulation
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00065.warc.gz
en
0.931335
1,249
2.828125
3
These are boom times for cyberthreats, cyberattacks and cybercrime. From identity theft to retail hacks, these attacks are dominating the news cycle. On average, fraud victims will spend $776 of their own money and lose 20 hours trying to fix the mess that identity thieves made. Here are the seven ongoing threats that showcase today's challenges in keeping your data protected. Retail Data Hacks Retail hacks are a serious danger because they can impact virtually anyone. 2014 saw a rise in cyberattacks against large companies like Target, with hackers stealing 40 million customer credit and debit card numbers. Cybercriminals steal and sell this personal information on the black market, which can easily lead to identity theft. While much of the responsibility falls on the retailer, such as keeping their payment methods up to date and secure, keeping a close eye on your bank account and credit card statement is a good way to stay safe during retail attacks. Mobile Security & Smartphone Vulnerability Threats Cybercriminals can easily exploit vulnerabilities in your mobile phone to obtain private data. These vulnerabilities sometimes come from the apps you use or within your smartphone itself. Mobile phones are also vulnerable to malware, which can log keystrokes and capture screenshots. Protect yourself by researching the apps you download, being careful with what emails you open, and which pictures you decide to upload. Phishing Attacks & Social Engineering When cybercriminals trick people into revealing sensitive information such as passwords and social security numbers, it's called phishing. One of the most common ways phishing happens is when a person receives an email, purportedly from a bank or government organization, and is lured to an authentic-looking site. Once there, the person is asked to enter their password, social security number, and financial data. Cybercriminals take this information and use it for their own purposes. Phishing is part of a larger problem called social engineering, which is essentially manipulating emotions in order to gain access to sensitive data. Don't fall for these tricks. Be skeptical of every email you receive, especially those requesting you reenter private information. Remember, real banks and government organizations never ask you to verify any potentially sensitive info. One of the fastest-growing online crimes is identity theft. Many of the threats covered earlier in this article, such as phishing emails and data breaches, can lead to identity theft. However, your identity is also at risk through everyday materials such as your resume, home address, social media photos and videos, financial data, and so forth. Identity thieves will steal your personal information and open credit cards and loan accounts in your name. While some of this is out of the average person's hands, there is still plenty you can do to keep your identity safe. Healthcare Data Hacks Early in 2015, Anthem experienced a massive data breach that impacted 78.8 million people. In July 2015, hackers broke into the UCLA Health System's computer network, potentially gaining access to the personal information of 4.5 million patients. Healthcare records contain important and sensitive information and are prime targets for cybercriminals; their theft can easily lead to identity theft. Oftentimes this information is used for health insurance fraud, such as buying and selling fraudulent prescriptions. Always monitor the news for any reports of healthcare data breaches.
Targeting of Children by Sexual Predators Users looking to exploit children lurk in dark corners of the internet to trade illegal, lewd photos of children. This is done over email, peer-to-peer programs, or, increasingly, through the dark web, an area of the internet that is inaccessible with standard search engines. While these are disturbing trends, it is best to leave these sites to law enforcement officials and for the average person to avoid them entirely. Another online danger aimed at children is when sexual predators try to lure them into meeting offline, as well as sending or asking for lewd, pornographic images. Make sure your children are well aware of the dangers of talking to strangers online and know never to share personal information with people they've never met. Attacks on Banks In the 21st century, bank robbing has gone digital. A famous example is when a criminal gang stole up to one billion dollars in about two years from a variety of financial institutions across the world. Cybercriminals targeted bank employees and officials with malware called 'Carbanak', delivered through emails. Once they had infected the targeted computers, the cybercriminals were able to mimic the employees' behavior and transfer money to themselves, direct ATMs to dispense money at certain times, and use e-payment systems to funnel money out. Some experts, like Ben Lawsky, say that a major attack on the banking system could be the equivalent of a "cyber 9/11". Always research a bank's security history before choosing them, don't click on any strange links from emails, shred financial documents, and consistently monitor your account for any irregularities. In a world of ever-evolving cyberthreats, what can you do to protect yourself? Security awareness is the first line of defense. There are powerful security tools available to help, but remember that you also need to use common sense to protect your computer, your information and yourself. - Use strong passwords for your accounts that include numbers, lower case and capitalized letters, and are not easy to guess, e.g. password, 12345, etc. (a simple automated check along these lines is sketched below) - Don't open suspicious emails requesting that you reenter sensitive data - Destroy sensitive documents - Use a VPN to secure your Internet connection if you need to use public Wi-Fi - Keep your antivirus software up to date.
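The strong-password tip in the list above can be partially automated. The following minimal Python check is an illustration, not a complete policy (production systems should also screen candidates against known-breached password lists); it enforces length plus mixed character classes and rejects trivially guessable strings.

```python
# Minimal check for the password tip above: length plus mixed character classes,
# and a rejection list for trivially guessable strings. A production policy
# would also screen candidates against known-breached password lists.
import re

COMMON = {"password", "12345", "123456", "qwerty", "letmein"}

def is_strong(candidate: str) -> bool:
    if candidate.lower() in COMMON or len(candidate) < 12:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, candidate)) for c in classes) >= 3

print(is_strong("12345"))                  # False
print(is_strong("Mauve-Teapot-Orbit-17"))  # True
```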
<urn:uuid:788b64b0-b0fc-4612-87bc-8df2fdaeec00>
CC-MAIN-2022-40
https://www.kaspersky.com/resource-center/threats/top-7-cyberthreats
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00065.warc.gz
en
0.94403
1,149
2.625
3
While being untraceable may seem ideal, anonymity online can create a liability for businesses and consumers in everyday online transactions. Without proper processes and requirements, financial institutions can inadvertently facilitate undetected fraud and enable money laundering and other criminal activities. In response, governments worldwide have instituted regulations called KYC, or Know Your Customer, to protect consumers and businesses from the risks of fraud and other criminal activities. In this article, we'll discuss the basics of what KYC is, how the laws are applied, why KYC is important, some of the core requirements of Know Your Customer, and what technology exists to help affected businesses stay compliant. What is KYC? Know Your Customer (or Know Your Client) is a set of regulations financial institutions must follow to verify the identity of their customers. KYC affects businesses with account creation or a customer login process online. These regulations require banks, credit unions, and other financial institutions to verify the identity of customers at the time of opening accounts. Then, they need to retain this identity information so that, should it be legally necessary, financial institutions can trace transactions back to their point of origin. KYC measures exist to prevent criminal activities such as money laundering, the financing of terror organizations, and fraudulent trading. As a consumer, you can think of KYC as a business' requirement to perform a "due diligence" check on each new and existing customer to verify their identity thoroughly. What are the KYC laws and regulations? Know Your Customer laws and requirements differ by country; we'll use the US version as our example. In the United States, the legislation goes back decades to the Bank Secrecy Act of 1970, which put some of the first money laundering laws in place. More recently, the 2001 Patriot Act aimed to curb the financing of terrorist plots, including a section that amended the Bank Secrecy Act. The act now requires financial institutions to keep accurate records of the individuals they do business with and to take careful identity verification measures, known as the CIP (Customer Identification Program) and CDD (Customer Due Diligence). Regulations increased as technology in financial services advanced over the last two decades. As well as adhering to rigorous provisions for ID verification within America, US financial organizations must ensure that overseas KYC provisions are followed before handling foreign clients. The IRS, for instance, has a list of 73 countries and territories with their own KYC rules and guidelines. These approved countries can receive information from the IRS in the case of an investigation through a qualified intermediary (QI) agreement. What is KYC Intended to Prevent? Know Your Customer laws and requirements exist to prevent illegal activity before it happens. In particular, properly implemented KYC verification can prevent identity theft, financial fraud, and money laundering. Let's dive into these three use cases below.
TD details what a few signs of attempted identity theft could look like including online activity with personal information you do not recognize and notice of a credit report inquiry you did not authorize. Businesses must adopt additional authentication methods upon new and unknown logins to better prevent identity theft with KYC processes. These extended authentication methods include 2FA (two-factor authentication) or MFA (multi-factor authentication), forced logouts, or CAPTCHAs. In addition, additional verification methods exist that do not disrupt or add to a login experience, such as device identification, which we discuss further below. Once valid payment details of a consumer are stolen, and in the hands of online fraudsters, this unlocks a world of opportunity for financial fraud to occur. For example, the recent 2022 IBM Global Financial Fraud Impact Report found that fraudulent card transactions and digital payments amounted to an average of $265 per year for each US citizen, with 39% of Americans being the victim of some form of a financial security breach. Financial institutions must verify customers at signup, login, and transaction to prevent financial fraud. However, fraud can occur at each step: - Account Creation Signup: A fraudster can use stolen identities to create accounts on behalf of a user without their knowledge. - Account Login: If a fraudster has valid login credentials for a user, they can log in and obtain even more information about a user and even take actions to take over that account entirely. - Checkout/Transactions: A fraudster can also make purchases using a compromised account or a stolen credit card, causing cause further damage. Preventing this is similar to methods of identity theft prevention. A few additional techniques for financial fraud can include: - Instituting usage rules, such as failed login and transaction attempt limits. - Not allowing saved payment information. - Regular credential rotation. - Enforcing password requirements. Money laundering is a result of financial fraud with stolen identities. For example, criminals set up dummy accounts to disguise the origins of money obtained through drug and people trafficking, smuggling, racketeering, and other activities. As a recent US Treasury Report puts it, “criminals and professional money launderers continue to use a wide variety of methods and techniques, including traditional ones, to place, move, and attempt to conceal illicit proceeds.” Again, verifying the identity of account holders every step of the way is essential to preventing acts like money laundering. How does KYC Legislation relate to Anti-Money Laundering (AML) Laws? Know Your Customer is part of a successful AML (anti-money laundering) compliance strategy for banks and financial institutions. Whereas KYC is responsible for verifying a customer is who they say they are, AML processes track past just identification verification and include the complete cycle of monitoring transactions for money laundering. What KYC Provisions are Companies Expected to Make? Organizations must adhere to specific data security and identification procedures to counter these significant threats, which affect the lives of millions and amount to billions of dollars of stolen money annually. These procedures include: Customer Identification Processes (CIP) require individuals to present a driver’s license, passport, or other acceptable photo ID. 
Corporate ID requirements are certified articles of incorporation, partnership agreements, trust instruments, and business licenses. Further Financial Documentation, which includes additional materials for individuals and companies, may be required, including credit agency references, financial statements, and other forms of secondary assurance. Due Diligence is when companies are required to conduct risk assessments on their customers, analyzing transactions to look for any suspicious patterns of behavior which may require monitoring. - Organizations may categorize their clients as requiring simplified or enhanced due diligence checks based on an assessment of risk factors. Continuous monitoring by companies is required to catch risk-related activities on customer accounts at any time. Automated processes are used to monitor transactions and flag unusual activity. Where such patterns are of concern, KYC regulations require the company to submit a Suspicious Activity Report (SAR) to law enforcement agencies, including the Financial Crimes Enforcement Network (FinCEN). What KYC ID Requirements are Mandatory? At the highest level, KYC processes require businesses to verify consumers at account creation with at least two forms of verified identification: - Proof of government-issued ID with photograph (usually driver’s license or passport) - Proof of address (usually bank statements or bills) However, some of these individuals may have neither a passport nor a driver’s license and may substitute other documentary evidence. There is no KYC-specific list of approved ID documentation, but the full list of approved documents for photo ID from the US State Department includes: - US Passport book or card - Valid Driver’s License with Photo - Certificate of Naturalization - Certificate of Citizenship - Government employee ID - US military or military-dependent ID - Current (valid) foreign passport - Trusted Traveller IDs (including valid Global Entry, FAST, SENTRI, and NEXUS cards) - Enhanced Tribal Cards and Native American tribal photo IDs - Learner driver’s permit with photo - Non-driver ID with photo - Temporary driver’s license with photo Officially accepted documents are updated and may change as new forms of ID are issued and approved and others retired. Therefore, we recommend using the above list as ONLY a reference of document types, not a source of truth, and checking with appropriate governmental departments. In addition, each organization is permitted to draw up its list of approved documentation so long as it remains assured of its ability to identify each customer correctly. What Solutions Exist to Make KYC Easier to Implement? Fortunately, ID verification, account monitoring, flagging, fraud detection, and automated report generation technologies make KYC provisions less time-consuming and prone to errors. Risks can be scored and prioritized without hiring analysts’ teams to manually scan vast volumes of data. Such innovations have helped mitigate the increasing cost of KYC implementation, which Thomson Reuters estimated can cost major financial institutions up to $500 million annually to implement correctly. For example, adding a device identification solution helps accurately identify users even with repeated visits. Fingerprint Pro is one of those solutions, and with a 99.5% accuracy rate, it can detect repeat visits of potential bad actors and prevent fraudulent login attempts or transactions from happening in the first place. 
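To illustrate the general idea behind device identification, the sketch below hashes a handful of relatively stable client attributes into a single identifier so that repeat visits can be recognised and risk-scored. This is a deliberately simplified illustration of the concept, not Fingerprint Pro's actual algorithm or attribute set.

```python
# Illustration of the general idea behind device identification: hash a set of
# relatively stable client attributes into one identifier so repeat visits can
# be recognised and risk-scored. Not any vendor's actual algorithm.
import hashlib
import json

def device_id(attributes: dict) -> str:
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

seen_devices: dict[str, set[str]] = {}     # device_id -> account names seen from it

def risk_flags(account: str, attributes: dict, max_accounts_per_device: int = 3) -> list[str]:
    did = device_id(attributes)
    accounts = seen_devices.setdefault(did, set())
    accounts.add(account)
    flags = []
    if len(accounts) > max_accounts_per_device:
        flags.append("many accounts from one device")   # classic signup-fraud signal
    return flags

visitor = {"user_agent": "Mozilla/5.0 ...", "timezone": "UTC-5", "screen": "1920x1080", "languages": ["en-US"]}
print(device_id(visitor), risk_flags("new_user_42", visitor))
```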
Know Your Customer, or KYC exists to protect businesses that lend and store money for their customers. Banks and financial institutions alike have a requirement to not only protect their investments but also to verify and protect their customers’ assets. With KYC laws and regulations in place, this happens on a regulated level and is not an optional security measure.
<urn:uuid:6c6fdc85-e0c6-4a26-ae2c-973983900041>
CC-MAIN-2022-40
https://fingerprint.com/blog/kyc-know-your-customer-financial-fraud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00265.warc.gz
en
0.92034
2,078
2.890625
3
Bridging is a method of path selection that contrasts with routing. In a bridged network, no correspondence is required between addresses and paths. Put another way, addresses don't imply anything about where hosts are physically attached to the network. Any address can appear at any location. In contrast, routing requires more thoughtful address assignment, corresponding to physical placement. Bridging relies heavily on broadcasting. Since a packet may contain no information other than the destination address, and that implies nothing about the path that should be used, the only option may be to send the packet everywhere! This is one of bridging's most severe limitations, since this is a very inefficient method of data delivery, and can trigger broadcast storms. In networks with low speed links, this can introduce crippling overhead. IP, designed as a wide-area networking protocol, is rarely bridged because of the large networks it typically interconnects. The broadcast overhead of bridging would be prohibitive on such networks. However, the link layer protocols IP functions over, particularly Ethernet and Token Ring, are often bridged. Due to the pseudo-random fashion in which Ethernet and Token Ring addresses are assigned, bridging is usually the only option for switching among multiple networks at this level. Bridging is most commonly used to separate high-traffic areas on a LAN. It is not very useful for disperse traffic patterns. Expect it to work best on networks with multiple servers, each with a distinct clientele that seldom communicate with any servers but their “home”. Two types of bridging exists, corresponding to the distinction outlined earlier. Transparent bridging is used in Ethernet environments and relies on switching nodes. Token Ring networks use source-route bridging (SRB), in which end systems actively participate by finding paths to destinations, then including this path in data packets. Transparent bridging, the type used in Ethernet and documented in IEEE 802.1, is based on the concept of a spanning tree. This is a tree of Ethernet links and bridges, spanning the entire bridged network. The tree originates at a root bridge, which is determined by election, based either on Ethernet addresses or engineer-defined preference. The tree expands outward from there. Any bridge interfaces that would cause loops to form are shut down. If several interfaces could be deactivated, the one farthest from the root is chosen. This process continues until the entire network has been transversed, and every bridge interface is either assigned a role in the tree, or deactivated. Since the topology is now loop-free, we can broadcast across the entire network without too much worry, and any Ethernet broadcasts are flooded in this manner. All other packets are flood throughout the network, like broadcasts, until more definite information is determined about their destination. Each bridge finds such information by monitoring source addresses of packets, and matching them with the interfaces each was received on. This tells each bridge which of its interfaces leads to the source host. The bridge recalls this when it needs to bridge a packet sent to that address. Over time, the bridges build complete tables for forwarding packets along the tree without extraneous transmissions. There are several disadvantages to transparent bridging. First, the spanning tree protocol must be fairly conservative about activating new links, or loops can develop. 
Also, all the forwarding tables must be cleared every time the spanning tree reconfigures, which triggers a broadcast storm as the tables are reconstructed. This limits the usefulness of transparent bridging in environments with fluid topologies. Redundant links can sit unused, unless careful attention is given to root bridge selection. In such a network (with loops), some bridges will always sit idle anyway. Finally, like all bridging schemes, the unnecessary broadcasting can affect overall performance. Its use is not recommended in conjunction with low-speed serial links. On the pro side, transparent bridging gives the engineer a powerful tool to effectively isolate high-traffic areas such as local workgroups. It does this without any host reconfiguration or interaction, and without changes to packet format. It has no addressing requirements, and can provide a “quick fix” to certain network performance problems. As usual, careful analysis is needed by the network engineer, with particular attention given to bridge placement. Again, note that for IP purposes the entire spanning tree is regarded as a single link. All bridging decisions are based on the 48-bit Ethernet address. Source-route bridging (SRB) Source-route bridging (SRB) is popular in Token Ring environments, and is documented in IEEE 802.5. Unlike transparent bridging, SRB puts most of the smarts in the hosts and uses fairly simple bridges. SRB bridges recognize a routing information field (RIF) in packet headers, essentially a list of bridges a packet should transverse to reach its destination. Each bridge/interface pair is represented by a Route Designator (RD), the two-byte number used in the RIF. An All Rings Broadcast (ARB) is forwarded through every path in the network. Bridges add their RDs to the end of an ARB's RIF field, and use this information to prevent loops (by never crossing the same RD twice). When the ARB arrives at the destination (and several copies may arrive), the RIF contains an RD path through the bridges, from source to destination. Flipping the RIF's Direction Bit (D) turns the RIF into a path from destination to source. See RFC 1042 for the format of the RIF field and a discussion of SRB's use to transport IP packets. Source-route bridging has its problems. It is even more broadcast-intensive than transparent bridging, since each host must broadcast to find paths, as opposed to each bridge having to broadcast. It requires support in host software for managing RIF fields. To take advantage of a redundant network, a host must remember multiple RIF paths for each remote host it communicates with, and have some method of retiring paths that appear to be failing. Since few SRB host implementations do this, SRB networks are notorious for requiring workstation reboots after a bridge failure. On the other hand, if you want to bridge a Token Ring network, SRB is just about your only choice. Like transparent bridging, it does allow the savvy engineer to quickly improve network performance in situations where high-traffic areas can be segmented behind bridges.
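A toy model helps make the transparent-bridge learning behaviour described above concrete: learn which port each source MAC address arrives on, forward to that port when the destination is known, and flood otherwise. Spanning-tree election, table ageing, and SRB are omitted; the class below is illustrative only.

```python
# Toy model of a learning (transparent) bridge: remember which port each source
# MAC was seen on, forward to that port when known, and flood otherwise.
# Spanning-tree election and table ageing are deliberately omitted.
class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                      # MAC address -> port it was learned on

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port        # learn (or refresh) the source location
        out = self.mac_table.get(dst_mac)
        if out is None or dst_mac == "ff:ff:ff:ff:ff:ff":
            return self.ports - {in_port}        # unknown or broadcast: flood everywhere else
        return {out} if out != in_port else set()

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle_frame("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", in_port=1))   # unknown dst -> {2, 3}
print(bridge.handle_frame("bb:bb:bb:bb:bb:02", "aa:aa:aa:aa:aa:01", in_port=2))   # already learned -> {1}
```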
- Protect the password in transit from the threat of sniffers or intermediation attacks – Use HTTPS during the entire authentication process. HSTS is better. HSTS plus DNSSEC is best.
- Protect the password in storage to impede the threat of brute force guessing – Never store the plaintext version of the password. Store the salted hash, preferably with PBKDF2. Where possible, hash the password in the browser to further limit the plaintext version's exposure and minimize developers' temptation or expectation to work with plaintext. Hashing affects the amount of effort an attacker must expend to obtain the original plaintext password, but it offers little protection for weak passwords. Weak passwords like lettheright1in are going to be guessed quickly.
- Protect the password storage from the threat of theft – Balance the attention to hashing passwords with attention to preventing them from being stolen in the first place. This includes (what should be) obvious steps like fixing SQL injection as well as avoiding surprises from areas like logging (such as the login page requests and failed logins), auditing (where password "strength" is checked on the server), and ancillary storage like backups or QA environments.

Implementing PBKDF2 for password protection requires two choices: an HMAC function and a number of iterations. For example, WPA2 uses SHA-1 for the HMAC and 4,096 iterations. A review of Apple's OS X FileVault 2 (used for full disk encryption) reveals that it relies in part on at least 41,000 iterations of SHA-256. RFC 3962 provides some insight, via example, into how to select an iteration count. It's a trade-off between inducing overhead on the authentication system (authentication still needs to be low-latency from the user's perspective, and too much time exposes it to easy DoS) and increasing an attacker's work effort.

Finally, sites may choose to avoid password management altogether by adopting strategies like OAuth or OpenID. Taking this route doesn't magically make password-related security problems disappear. Rather than specifically protecting passwords, a site must protect authentication and authorization tokens; it's still necessary to enforce HTTPS and follow secure programming principles. However, the dangers of direct compromise of a user's password are greatly reduced.

The state of password security is a sad subject. Like D minor, which is the saddest of all keys.
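To make the storage advice concrete, here is a minimal sketch using Python's standard library. The salt size and iteration count are illustrative assumptions, not recommendations from the article; as discussed above, the iteration count should be tuned against your own latency and DoS budget.

```python
import hashlib
import hmac
import os

# Illustrative parameters only -- tune the iteration count to your own
# latency/DoS trade-off, as the article discusses.
ITERATIONS = 100_000
HASH_NAME = "sha256"

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key); store both, never the plaintext."""
    salt = os.urandom(16)                      # unique salt per password
    dk = hashlib.pbkdf2_hmac(HASH_NAME, password.encode(), salt, ITERATIONS)
    return salt, dk

def verify_password(password: str, salt: bytes, stored_dk: bytes) -> bool:
    dk = hashlib.pbkdf2_hmac(HASH_NAME, password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(dk, stored_dk)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("letmein", salt, stored))                       # False
```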
Cyber security has recently reached yet another level of public awareness, as the world learned that an army of bots hosted on internet-connected cameras was able to cause outages to well-known internet services such as Twitter, Amazon, Spotify and Netflix. The global Distributed Denial of Service (DDoS) attack on DYN, a large DNS infrastructure company, which caused the downtime, may not have shocked internet security professionals, but it gave yet another demonstration of the fragility of the Internet grid. Fortunately, it was not as damaging as it could have been.

The internet is a platform of innovation and inspiration. We can all invent, develop and release our work for free or for payment, as a product or as a service, without formal qualification or certification. Products and services are released, improved and updated constantly, often without physical touch between the manufacturer, reseller and consumer. This is very unusual in the engineering world and so far has worked fantastically well.

Security professionals realize this unprecedented freedom to innovate also comes with risks. Many internet-connected products are not designed with security in mind, and some of them contain very basic flaws that allow attacks such as the one on DYN to take place (in the attack on DYN, unprotected internet-connected cameras were accessed easily by hackers using hardcoded or default user credentials). Public awareness of these security oversights is rising as cyberattacks targeting well-known services become more common.

Securing the Grid

As our lives become so dependent on the internet, it's time we thought about ways to protect the grid without hindering continuous innovation.

The most widespread grids in the world, alongside the internet, are the electrical grid and the telephone grid. Both are designed for high resilience and require every device connected to them to be certified to meet various standards that ensure it will not pollute the grid. Manufacturers are not allowed to sell electrical appliances or telephone equipment without appropriate certification, and authorities in every country of the world enforce that certification.

Some people suggest that a possible conclusion could be to require certification of any and all equipment that is connected to the internet, ensuring that it will conform to basic security and other standards. This may end up being necessary and may develop over time, but it would also be a very complicated process, as it will take a very long time to agree on the standards and then implement them. It is also likely to slow down the pace of innovation that we enjoy today.

A more practical solution would be for the grid to protect itself. It would require trust and entails some risks, and yet since it involves far fewer parties, it could be done in a sensible and democratic way. Let's look at how this could be achieved.

Internet traffic control

The most important internet services we rely on are local to our country of residence (financial services, government services) and sometimes international (DNS, social networks, email services, search services, etc.). Attacking these services can be done locally to some degree, and internationally to a very large extent, as demonstrated in recent global DDoS attacks. The biggest challenge when dealing with Denial of Service attacks is how to separate malicious traffic from legitimate traffic coming from the same origin – sometimes even from the same IP address.
Many vendors today offer anomaly detection-based anti-Denial of Service solutions that try to solve this, and they can be effective, especially when the attack targets the computing resources of the victim rather than just trying to fill their internet link with traffic in order to disrupt legitimate traffic. But sometimes, if the link connecting the victim network to the Internet Service Provider (ISP), or moreover the link between the ISP and an upstream ISP, is saturated with attack traffic, then it is too late.

As such, businesses providing internet services, and especially ISPs, should continue to protect themselves to the best of their ability. But if they are unable to help themselves, they should be able to call for help from the companies that comprise the internet backbone, the Tier-1 and Tier-2 internet service providers.

The backbone of the Internet comprises a mesh of networks owned by numerous companies. Six large providers are known today to be Tier-1 (Level 3 Communications, Telia Carrier, NTT, Cogent, GTT, and Tata Communications), as their capacity and wide geographical reach mean they do not have to purchase transit agreements with other providers. Connected to them are about thirty Tier-2 providers. Within each country there are numerous other providers that are connected to these Tier-2 providers. Internet Service Providers (and sometimes large Content Delivery Networks) interconnect with each other using Internet Exchange Points (IXPs). The aggregated capacity of these providers is the maximum capacity of the internet: no DDoS attack can exceed it.

Blocking attacks at source

As such, fewer than fifty Tier-1 and Tier-2 providers together have the technical capacity to stop most global DDoS attacks and, in many cases, also country-level attacks – at the source. To do this, accurate attack patterns need to be identified and agreed upon, but most importantly there is a need to define how this can be done in an effective and legitimate way, while maintaining data privacy. To achieve this, a scalable process with checks and balances could be implemented along these lines:

- Internet services are expected to have some means of internal or cloud scrubbing service to deal with DDoS until their line is saturated.
- If a victim (any internet service) determines that it cannot deal with an attack because its internet line is saturated, it should approach its upstream Tier-2 provider (directly or through its local ISP; large providers may be connected directly to Tier-2 providers or IXPs) and provide details about the attack.
- The Tier-2 provider should work with the victim to identify an attack pattern. This may not always be easy, but security professionals can achieve this.
- The Tier-2 provider should determine whether it is able to block the attack using its own resources.
- If the Tier-2 provider is not able to block it, it should issue a "Global Block Request" (GBR) – a set of flow identifiers (Source IP, Destination IP, Source Port, Destination Port, and Protocol) with possible ranges or wildcards and/or regular expressions that identify the attack pattern. The GBR also includes a blocking ratio that indicates the desired blocking level – 1:1 for blocking all cases or 1:n for just easing the attack.
- The GBR should be reviewed, approved and signed by at least three Tier-1 providers or five Tier-2 providers, who will validate and ensure that no significant legitimate traffic or traffic unrelated to the attack is blocked.
- Once approved, all Tier-1 and Tier-2 providers should honor the GBR for two hours. After the two-hour period, the GBR can be renewed one more time.
- If the attack continues, the GBR can be renewed again but needs to be reviewed, approved and signed by at least four Tier-1 providers or seven Tier-2 providers each time. In this case the GBR can be renewed again and again at six-hour intervals.

A network/service device would enforce the GBR either at the Tier-1/Tier-2 providers, upstream or downstream (or at the IXPs). The provider would also inform the ISPs downstream that a specific IP address is generating an attack so the IP owner could be informed. Check Point Software Technologies and some other vendors can provide the technology required for handling GBRs today.

Many attacks, such as the one on DYN, could be effectively mitigated using the above process. In the case of attacks within encrypted channels (e.g. SSL), it will not be possible to identify precise attack patterns using regular expressions within the encrypted traffic, but traffic from attacking IPs could be blocked or reduced using five-tuples that identify the communication pattern even without looking at the encrypted payload.

All too often, major policy changes only occur when a catastrophe has taken place; only then is there enough public demand, urgency and political will to make concessions and drive real change. Solving global Distributed Denial of Service attacks can be achieved before such a catastrophe strikes. As described here, mitigating many major DDoS attacks is achievable through practical collaboration of just a few global parties. More importantly, it can be an exercise in solving a simple problem by working together, rather than standing alone.
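The GBR described above is the author's proposal rather than an existing standard, so any concrete format is necessarily an assumption. The sketch below (Python, with invented field names and documentation-range IP addresses) shows roughly how an enforcement point might apply a five-tuple pattern with wildcards and a blocking ratio.

```python
import random

# Hypothetical representation of a Global Block Request (GBR): a five-tuple
# pattern (None acts as a wildcard) plus a blocking ratio, e.g. 1.0 = drop
# every matching flow, 0.5 = drop roughly half to ease the attack.
GBR = {
    "src_ip": "203.0.113.",     # prefix match, purely illustrative
    "dst_ip": "198.51.100.7",
    "src_port": None,
    "dst_port": 53,
    "protocol": "udp",
    "block_ratio": 1.0,
}

def matches(flow: dict, gbr: dict) -> bool:
    if gbr["src_ip"] and not flow["src_ip"].startswith(gbr["src_ip"]):
        return False
    if gbr["dst_ip"] and flow["dst_ip"] != gbr["dst_ip"]:
        return False
    for field in ("src_port", "dst_port", "protocol"):
        if gbr[field] is not None and flow[field] != gbr[field]:
            return False
    return True

def should_drop(flow: dict, gbr: dict) -> bool:
    return matches(flow, gbr) and random.random() < gbr["block_ratio"]

flow = {"src_ip": "203.0.113.42", "dst_ip": "198.51.100.7",
        "src_port": 40112, "dst_port": 53, "protocol": "udp"}
print(should_drop(flow, GBR))   # True -- matches the pattern, ratio is 1.0
```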
Russia and China Pledge to Fight Together for Cyber Security

The world's two eastern super-powers, Russia and China, have pledged not to attack one another in cyber space and to fight together for "international information security." More simply, this is seen as a "nonaggression pact in cyberspace," at a time when relations between the US and Russia are very seriously strained.

The cyber security deal was one of a total of 32 agreements signed by the two nations during a visit to Russia by Xi Jinping, President of the People's Republic of China, at the weekend. In a statement, Xi Jinping said he had held "substantial talks" with Russian President Vladimir Putin, and that they had agreed to continue to develop strategic cooperation and mutual foreign policy support.

In the agreement publicized on the Russian Government website, the two leaders agreed they would do everything in their power to ensure that the internal political and socio-economic "atmosphere" was not destabilized. They also agreed they would do all they could to ensure public order was not disturbed. The agreement between the two countries identified the primary threats to information security, and also determined the principles, main areas of interest, mechanisms and forms of cooperation between them. Under the agreement, both information technology (IT) and cyber threat data will be shared. The Russian Foreign Ministry described this as a "strategic partnership," and stated that the agreement would help to mutually promote a "beneficial cooperation" between China and Russia.

What the Pact Means to the West and US

In an interview transmitted on the US Public Broadcasting Service (PBS), Orville Schell, director of the Center on US-China Relations at the Asia Society in Berkeley, California, said the two eastern super-powers had clearly teamed up because both found themselves "at odds with the West." He said they had a number of common interests, including:

- Energy that Russia can offer China
- Weapons that Russia can supply to China
- A common 5,000-mile border
- The psychological symmetry of two "big empires" that have suffered at the hands of Japan and the West

According to Ian Wallace of the New America Foundation's Cybersecurity Initiative, the joint interest of Russia and China in information security is very different from that in the US and in Europe. He believes the primary interest of Russia and China focuses on "regime stability." The pact is probably also an inevitable result of the breakdown of cyber relations between the US and both Russia and China. In 2014 a Russian-US "cyber working group" collapsed after the Russian military offensive in Ukraine. A similar China-US working group also collapsed after five members of the Chinese military were indicted by the US Government for hacking. Different reasons, same result. China has also come under fire for trying to force US companies operating in that country to use encryption coding approved in Beijing, and to make them supply source code to the government for inspection.

The primary difference is that the US, Europe and other Western nations see the Internet as a free hub, while Russia, China and other like-minded nations believe digital data should be controlled at government level. Putin has gone so far as to claim that the Internet was launched as "a CIA project." Certainly the new cyber security deal between China and Russia is going to set a precedent for cyber security issues facing these two eastern nations.
By Penny Swift Penny has been a professional writer since 1984 – Penny has written more than 30 general trade books and eight college books. She has also written countless newspaper and magazine articles for: Skills on Site, Popular Mechanics (SA) and SA Conference, Exhibitions and Events Guide. Penny has a BA in Social Sciences and currently resides in Cape Town, South Africa.
The big idea

In my synthetic chemistry lab, we have worked out how to convert the red pigment in common bricks into a plastic that conducts electricity, and this process enabled us to turn bricks into electricity storage devices. These brick supercapacitors could be connected to solar panels to store rechargeable energy.

Supercapacitors store electric charge, in contrast to batteries, which store chemical energy. Brick's porous structure is ideal for storing energy because pores give brick more surface area than solid materials have, and the greater the surface area, the more electricity a supercapacitor material can hold.

Bricks are red because the clay they're made from contains iron oxide, better known as rust, which is also important in our process. We fill the pores in bricks with an acid vapor that dissolves the iron oxide and converts it to a reactive form of iron that makes our chemical syntheses possible. We then flow a different gas through the cavities to fill them with a sulfur-based material that reacts with iron. This chemical reaction leaves the pores coated with an electrically conductive plastic, PEDOT. The resulting film coats the brick surfaces with nanofibers that resemble the fine filaments produced by fungi. The nanofiber structure of our conducting polymer has low electrical resistance as well as high surface area, which makes it ideal for energy-related applications.

A few pieces of PEDOT-coated brick are able to power an LED, and based on our calculations approximately 60 regular-sized bricks would be able to power emergency lighting for 50 minutes, and they would take 13 minutes to recharge. One of the surprising results of our research is that the supercapacitor brick wall can be recharged 10,000 times, which is on par with more traditional PEDOT supercapacitors. We have published our results in the journal Nature Communications.

Why it matters

We have converted iron oxide, which is a ubiquitous waste product, into a reactive intermediate – a substance useful in chemical reactions. By controlling a chemical reaction that uses this intermediate, we have produced state-of-the-art semiconducting nanofiber coatings. Turning rust into a useful chemical source material is cost-effective and demonstrates how inert materials hold the potential to be transformative in chemical manufacturing. Our work shows how waste can be upcycled and reused for producing cutting-edge materials that extend the functional limitations of construction materials.

What other research is being done in this field?

Our work is the first to demonstrate energy storage in bricks; however, other researchers are chemically altering bricks for other uses. The red pigment in bricks has been used as a chemical catalyst, though this requires significant processing to ensure the purity of the separated iron oxide. Metal oxide nanoparticles have also been combined with both brick and concrete to remove atmospheric pollutants. Other groups have created bricks that incorporate carbon nanomaterials to form electrodes that can conduct electricity.

We need to increase the amount of energy our bricks can store by an order of magnitude. We are working on ways to convert the structure of the nanofibers into composites that contain other semiconductors in order to boost the amount of energy the nanofibers can store. We are scaling up the chemical synthesis so we can reduce cost and produce polymer-coated bricks rapidly.
We are also developing new chemical syntheses that promote self-assembly inside bricks to cause the nanofibers to form 3D patterns, which will increase surface area. Our goal is to develop bricks that are patterned and ready to be stacked without the need for wires. We intend to produce devices that can be assembled like Lego blocks. • Julio M. D’Arcy is Assistant Professor of Chemistry, Washington University in St Louis. This article was originally published on TheConversation
It is a well-known fact that China's internet landscape is the largest in the world. With over 700 million internet users (695 million of them mobile internet users), it comes as no surprise that the Chinese internet landscape is a key target market for businesses across Europe.

China's internet landscape is also uniquely its own. The Great Firewall of China, the region's internet filtering system, is recognised throughout the world. China's Cybersecurity Law also came into force on 1st June; it strengthens the regulatory environment and impacts what data can and cannot be held outside the region.

The Great Firewall blocks certain types of content, and for traffic coming from outside the Firewall it has a significant impact on web performance. As a result, many businesses in Europe that launch a website in the region will often find that their websites take more than 30 seconds to load! But what is less well known is that it isn't just the Great Firewall that causes internet performance issues. In fact, there are three factors that cause latency for European brands' websites.

What causes latency in China?

The first factor that impacts European brands' web performance is the sheer distance from Europe to China. It comes as no surprise that the 4,000-odd miles between Europe and China play a role in causing delays to web performance. Usually, distance is the factor that impacts performance the most: the number of exchanges between the origin server and the end user adds to website loading time.

The second factor that causes latency is the Great Firewall, China's unique internet filtering system. The Great Firewall even causes delays to websites delivered from Hong Kong into China. In fact, the Firewall's filtering process slows websites down by up to 40%.

The third factor is China's internet infrastructure itself. In China, there are only a few Internet Service Providers (ISPs), such as China Telecom, China Mobile and China Unicom, to name a few. These state-owned ISPs are solely responsible for handling the traffic of 700 million internet users (which is also likely to impact web performance in the region). And for traffic to pass between these ISPs, traffic peering is required – and this is where the problem lies.

Peering is the interconnection of networks between ISPs. Peering allows data exchange to take place (peering is essentially what allows internet users to connect to almost every public network). But ISPs pay one another to peer. In China, not only is it expensive to peer, but the interconnection points are also heavily congested. This means that data from one part of the country may not be able to make it to another part of the region.

The limited number of peering points also has a huge impact on reaching China's internet users. Obviously, not all internet users are in the major cities of Beijing and Shanghai. With over 600 cities in China, a huge proportion of potential customers reside in tier 2 and tier 3 markets, which again means that data might not make it to all parts of the country – and that, as a business, you can miss a huge proportion of your customer base.

Because China's internet infrastructure has limited peering points, a fragmented network topology and poor connectivity, simply setting up data centers in China's major cities is not enough to ensure high performance, as it doesn't help with delivery across the whole region.
For European businesses looking to successfully launch websites in China, in-country latencies have to be tackled in a more targeted fashion. That is why many businesses turn to content delivery networks.

How a CDN (content delivery network) can help you enter the China market

A content delivery network is able to overcome the delays caused by distance, the Great Firewall and the complex internet landscape. This is because CDN providers have points of presence (PoPs) and infrastructure throughout mainland China, so they are able to cache data inside the Great Firewall, where each PoP can help accelerate the speed at which content is delivered to the end user.

And how do they do this exactly? When an end user requests a web page, the data is only ever loaded from a nearby server. Data loads much faster because it travels a much shorter distance. Not only does this help overcome the issues with peering and congested interconnection points, it also helps reduce the latency caused by the Firewall.

With more European businesses looking to target China, picking the right CDN is crucial for success. You need to ensure that your CDN provider has expertise in the Chinese market, has a physical presence in the region, and is au fait with the rules, regulations and licensing (and has relationships with all of China's key networks).

Tackling China is a lot to take on – but if you follow the right path it is achievable. If you're interested in hearing more tips, you can read more about 'The most commonly asked questions around China CDNs' as well as finding out 'Can your CDN provider really tackle China?'
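As a purely illustrative sketch of why serving from a nearby PoP shortens the distance data travels, here is a toy nearest-PoP selection in Python. The city list, coordinates and selection rule are assumptions made for the example; real CDN request routing (DNS steering, anycast, Firewall handling, licensing) is far more involved.

```python
import math

# Invented PoP list: (city, latitude, longitude).
POPS = [("Beijing", 39.9, 116.4), ("Shanghai", 31.2, 121.5),
        ("Chengdu", 30.6, 104.1), ("Guangzhou", 23.1, 113.3)]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    r = 6371
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_pop(user_lat, user_lon):
    """Pick the PoP closest to the user; cached content is served from there."""
    return min(POPS, key=lambda p: distance_km(user_lat, user_lon, p[1], p[2]))

# A user in Xi'an (34.3N, 108.9E) would be served from the Chengdu PoP rather
# than an origin server in Europe, thousands of kilometres away.
print(nearest_pop(34.3, 108.9)[0])
```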
Cloud services have become essential for a wide range of businesses, providing improved flexibility over traditional computing architecture. A range of cloud models have emerged, meaning that those wishing to utilize the cloud have a wealth of options available. However, the cloud itself can be a confusing technology for the uninitiated, so in this article we will explain all of the important concepts related to both cloud services and providers.

Cloud services refers to a range of functions that are offered to businesses and organizations via internet-based servers. The architecture used by cloud services differs from traditional architecture, which has always been based on physical hardware. By utilizing cloud services, businesses gain access to easy and affordable applications and resources. Such services would otherwise be processor-intensive and expensive, and cloud access negates the need for internal infrastructure or hardware.

Cloud services are typically managed by external cloud providers, which offer all of the infrastructure required. For many cloud-based operations, this external hosting removes the need for organizations to host applications on their own servers (hosting on an organization's own servers is what is usually described as being "on-premises"). Due to their method of delivery, cloud services are scalable across networks, meaning that they can grow with the demands of a business.

A variety of different cloud services are available, and organizations must choose between various options, or sometimes even a mixture of them. Cloud services made available via the internet are referred to as public cloud services. This form of cloud service makes it easy to share resources, enabling organizations to deliver features and capabilities that would otherwise be impossible, or at least extremely expensive. Private cloud services refers to those provisions that are made available to corporate users and subscribers. This model utilizes the internal infrastructure of an organization, and private cloud services often suit organizations that work with particularly sensitive data. Finally, a hybrid cloud environment essentially utilizes a mixture of public and private clouds. Proprietary software is typically used within hybrid models to enable communication between these disparate cloud services. Using a hybrid model enables a company to mix and match, taking advantage of the particular qualities of these two types of cloud services.

Aside from the distinction between private, public and hybrid cloud services, several different types of services are available. The first type is software as a service (SaaS), which has become the most popular model overall. This category offers a wide variety of services, including file storage, backup, web-based email, and project management tools. Many notable examples of SaaS cloud services are already available, including NordLocker, Dropbox, Slack and Microsoft Office 365. Each of these enables users to access, share, store, and secure information via cloud provisions.

The second popular form of cloud services is described as infrastructure as a service (IaaS). It offers the infrastructure associated with SaaS tools without requiring companies to maintain the infrastructure itself. Infrastructure as a service provides the complete data center framework for companies, and this framework can be particularly useful for resource-intensive industries.
Providers that deliver IaaS solutions typically offer a range of high-quality facilities. Consequently, these cost quite a bit of money each year. It is therefore rare for a private user to require infrastructure as a service; overwhelmingly, this model is aimed at the corporate customer.

Finally, platform as a service (PaaS) is a web-based environment enabling developers to build applications within the cloud. PaaS offers a database-driven operating system and a programming language for cloud-based software. These features obviate the need to maintain and manage the underlying elements.

Most people utilize cloud computing on a regular basis even if they don't realize its presence in their lives. Today, online services are typically stored in the cloud. Email providers, movie and television services, music platforms, popular video games, and storage mediums are all popular examples of services stored in the cloud. The cloud has become so successful that much of the total enterprise workload is already stored on cloud services. In fact, we're only beginning to scratch the surface of what is possible with cloud computing.

Cloud providers already offer a myriad of cloud services, including tools for creating applications and cloud services of their own. Such services make it possible to build, deploy and scale applications for web, mobile, or API platforms. The flexibility of the cloud has proved popular with developers that are testing and building applications, and the cloud has played a major role in reducing application development costs via the scalability of cloud infrastructures. This reduction in costs has helped to attract developers, and many more cloud-based applications are therefore expected to emerge in the years to come.

One of the most obvious and popular usages of cloud services is the storage of data. Cloud providers offer vast amounts of storage capacity to both personal users and companies, and using the cloud reduces the need to utilize on-premise hardware architecture. Furthermore, this storage can be accessed from any location or device, adding convenience to the whole process.

Analyzing and unifying data across teams, divisions, and locations is a further critical value proposition. Cloud services can also collaborate with other technologies, such as machine learning and artificial intelligence. By utilizing AI, insights that enable more informed decisions can be created. This deployment of the cloud can also help reduce the strain on human resources within an organization. It's also possible to connect with high-definition video and audio via cloud services; this utilization of the technology has become increasingly popular.

Advantages of cloud computing over traditional models include the following:

Cloud computing makes it possible for companies to significantly cut the costs of their operation. Expenses are reduced by diminishing the need for hardware, databases, servers, and software licenses. The cloud can also be scaled over a period of time, enabling businesses to keep control over their costs in strategic fashion.

Cloud computing can be available around the clock, with commercial services having 99.99% uptime. Cloud servers and data centers are managed by external providers, so staff are not required to deal with systems internally.

As mentioned previously, cloud services are hugely scalable, with both users and resources being unlimited. Therefore, cloud systems can grow or retract with the needs of a company.
Cloud services update automatically, providing valuable maintainability to companies. Automatic updates are not only convenient; they can also help to reduce labor costs, and skilled employees are freed up to work in other areas rather than on the menial task of updating software.

Cloud service providers have a multitude of data centers, and this diverse approach ensures that they are faster and more reliable than other services. Many cloud providers have data centers located all over the world, meeting the needs of a wide variety of customers.

Another important aspect of cloud services is cloud platforms. These are online environments that enable users to develop code and run applications. Cloud platforms are a PaaS service, providing an easily navigable online experience for organizations seeking to develop software.

Cloud software has also become increasingly popular. Full web applications are available via this integration; however, development costs associated with cloud software can be high. Cloud software is provided via a cloud-native approach: development is achieved via an application architecture combining small, independent and loosely coupled micro-services. Several of these services can be packaged together to create the cloud app, which can then be optimized by organizations as required. Cloud software is available on a 24/7 basis and can be accessed from anywhere in the world across multiple platforms. This level of access offers huge convenience for diverse customer bases, and flexibility is a major component in the success of cloud software.

Another common utilization of cloud services is delivering professional services. These functions enable customers to deploy the various types of cloud platforms. This deployment is popular with organizations that wish to assist their clients in the adoption of cloud-based technology. This functionality has therefore become particularly commonly used by consulting firms, system integrators, and other companies offering an array of professional services.

In this context, cloud services are extremely diverse. Companies delivering this form of cloud platform may offer cloud-readiness assessments, application rationalization, migration, deployment, customization, private and public cloud integration, and hybrid clouds, along with ongoing management of these services. Companies that specialize in cloud deployment have become extremely valuable for a wide range of organizations, and a diverse range of industrial sectors have engaged with cloud services.

Cloud services are often considered almost synonymous with web services. However, there are differences between the two. A web service offers a platform for applications or computers to communicate with one another over the internet; therefore, web services are associated with machine-to-machine communication. Conversely, cloud services are used when individuals or customers are consuming a service. Typically, cloud services relate to office productivity tools and other app-based functions. However, cloud services and web services can be very closely entwined when delivered to individuals and organizations, and can therefore form something of a hybrid model. This approach enables providers to deliver flexible applications and infrastructure that meet the needs of customers.

The utilization and availability of cloud services continues to expand, and this expansion is expected to continue.
The utilization of cloud services has become commonplace, perhaps mainly due to the flexibility and convenience of delivery. The scalability of cloud services is clearly also a major advantage for many companies. Now that the cloud has demonstrated its ability to work hand-in-hand with the corporate environment, most companies are moving at least some of their operations to cloud services.

The cloud has offered constructive functionality in fields from application delivery through desktop virtualization, with an array of options in between. Cloud services have already transformed the way that businesses operate and the way that people work on a daily basis. And the superior storage options available in the cloud mean that most organizations and businesses are taking advantage of this innovation. The cloud has become an everyday part of commerce and, increasingly, of our everyday lives. The potential of cloud services is only just beginning to unfold.

NordLocker Cloud offers end-to-end encrypted cloud storage for individuals and businesses. It enables you to back up, store, and sync data privately and securely. You are also protected by the world's top cryptography algorithms and ciphers. Secure cloud storage helps you and your team work more efficiently without compromising security. NordLocker Cloud also provides an easier way to protect your critical data from internal incidents and cybercrime.

John believes that the best things in life are simple. He uses the same approach when he's writing about online security. John says that his #1 pet peeve is phishing scams. Ironically, his favorite non-work related activity is fishing.
SMiShing, or SMS phishing, is a form of social engineering used to compromise an individual based on trusted phone numbers. The attack presents an end user with a familiar dialogue that builds on a trusted relationship, with the goal of extracting information and ultimately some form of financial or information gain. The SMS message may appear to come from a person in your contacts list or from a company you have done business with. Threat actors may obtain your name from a previous breach, or from malware that has extracted the contacts or SMS list from the source victim's mobile device; that information is then used to target potential victims and spoof the relationship.

Here are a few things to consider if you think you are a victim of a SMiShing attack:

- Government entities like the IRS or HUD never use SMS text messages for communications. All official and legitimate communications always come through the United States Postal Service.
- Any SMS text message that asks you to reply to a form or asks for sensitive information is probably fake. Why would a trusted person or company ask you for your full name, address, or any other personally identifiable information, in bulk, through a text message? This is the setup for a scam.
- If the responses to your skepticism are met with any hostility, it is probably SMiShing. Commonly, threat actors will reply with "Why don't you trust me?" or "Your friends have had success with me, why would you pass this up?" Real companies and friends do not follow this patterned behavior.
- Real businesses that use SMS text messaging for actual business typically ask for replies in simple terms – like reply "Y" to confirm your doctor's appointment or "STOP" to terminate the text messages. SMiShing typically will use longer replies to conduct the attack, but be mindful – an attack may use the word "STOP" in the first message just to validate that someone is actually on the other side of the phone and willing to answer.
- If the SMS message has links that you do not recognize or solicits the installation of new applications, do not click on the link, especially on Android mobile devices. This is a way to potentially install malware or exploit a vulnerability and compromise the device.

SMiShing, like Vishing (voicemail phishing made famous via fake IRS scams), is yet another targeted attack focusing on social engineering and the flaws in the SMS texting system that allow source phone number spoofing. If you want to minimize the risk, outside of spoofed phone numbers, change the settings on your phone to block SMS text messages from users not in your contacts list. Otherwise, it is an education process to look for the threats and not respond.

Morey J. Haber, Chief Security Officer, BeyondTrust

Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud-based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as part of the eEye Digital Security acquisition, where he served as a Product Owner and Solutions Engineer since 2004.
Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.
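The checklist in the article above lends itself to simple automation. As a purely illustrative sketch (Python; the contact list, keyword list and example message are invented for the example and are not BeyondTrust guidance), an SMS filtering tool might triage incoming texts along these lines:

```python
import re

CONTACTS = {"+15551234567", "+15557654321"}          # hypothetical contact list
URL_RE = re.compile(r"https?://|\bwww\.", re.IGNORECASE)
SENSITIVE_RE = re.compile(
    r"\b(ssn|social security|password|full name|address|verify your account)\b",
    re.IGNORECASE)

def triage_sms(sender: str, body: str) -> list[str]:
    """Return a list of reasons this message looks like possible SMiShing."""
    reasons = []
    if sender not in CONTACTS:
        reasons.append("sender not in contacts")
    if URL_RE.search(body):
        reasons.append("contains a link")
    if SENSITIVE_RE.search(body):
        reasons.append("asks for sensitive information")
    return reasons

print(triage_sms("+15550000000",
                 "Your parcel is held. Verify your account at http://example.net/track"))
# ['sender not in contacts', 'contains a link', 'asks for sensitive information']
```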
A radiological dispersal device (RDD), or 'dirty bomb,' detonation in a local jurisdiction will have significant consequences for public safety, responder health and critical infrastructure operations. First responders and emergency managers must quickly assess the hazard, issue protective action recommendations, triage and treat the injured, and secure the scene in support of the individuals, families and businesses in the impacted community.

To address these critical needs, the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) National Urban Security Technology Laboratory (NUSTL), in partnership with the Federal Emergency Management Agency (FEMA) and the Department of Energy (DOE) National Nuclear Security Administration (NNSA), published guidance for first responders and emergency managers on how to plan for the first minutes of an RDD detonation response. It is for these efforts, of DHS S&T and the aforementioned agencies, that DHS S&T has been nominated to compete in the 2019 'ASTORS' Homeland Security Awards Program. DHS S&T has been a recognized winner in the Annual 'ASTORS' Homeland Security Awards Program for 'Government Excellence in Homeland Security' for two consecutive years.

The Radiological Dispersal Device Response Guidance: Planning for the First 100 Minutes is the result of years of scientific research and experimentation conducted by DOE laboratories – Brookhaven National Laboratory (BNL) and Sandia National Laboratories – coupled with S&T NUSTL's direct conversations with first responders about operationalizing and documenting the scientific recommendations.

(S&T's National Urban Security Technology Laboratory (NUSTL) collaborated with first responders, DOE, and FEMA to publish the "RDD Response Guidance: Planning for the First 100 Minutes," designed to encourage comprehensive radiological preparedness and assist first responders and local jurisdictions to both operationalize and implement best practices for RDD detonation response. See below for videos divided by individual tactics. Courtesy of DHS Science and Technology Directorate and YouTube. Posted on Apr 1, 2019.)

The Guidance includes five missions and ten tactics to address initial response efforts. It is intended to be engaging and easy to use, allowing communities to plug in their specific assets, agencies and response protocols. "The Guidance provides emergency planners and first responders across the nation with a playbook of best practices to start from in planning for a RDD detonation response," said Ben Stevenson, Program Manager at S&T NUSTL. Now that the Guidance is published, S&T's NUSTL is leading efforts to make it accessible to the responder communities who will need to incorporate it into their planning efforts and to state and federal partners that will support the response.

Animating the Guidance

To support responder understanding of the missions and tactics described in the RDD Response Guidance, S&T's NUSTL worked with DOE's Lawrence Livermore National Laboratory to animate the missions and tactics. Using a realistic RDD detonation scenario, the team developed short training clips that can be used to:

- Instruct response actions;
- Show appropriate personal protective equipment (PPE); and
- Provide realistic radiological readings that responders may see during a response.

These are available on S&T's website and also on the RadResponder platform.
Training the Nation

S&T's NUSTL is working with several organizations to disseminate the key missions and tactics of the Guidance, as well as the animations, to responders across the nation. On the immediate horizon are:

- The development of a train-the-planner course for emergency planners, training federal personnel located regionally across the country who support state and local activities;
- The publication of templated RDD detonation exercise materials that first responders, Weapons of Mass Destruction Civil Support Teams (WMD-CSTs) and other partners can use individually or collaboratively.

On the first effort, S&T's NUSTL, FEMA's National Training and Education Division, and the Counterterrorism Operations Support Center for Radiological Nuclear Training are designing a training specifically for emergency planners, who are generally responsible for writing and organizing emergency response plans for local communities. This train-the-planner course will be offered as a mobile course and yield the basis of an RDD response plan for a local community, and will be piloted in the coming year before it is finalized within the FEMA course catalog in 2021. "Partnering with other agencies to develop and deliver the train-the-planner course brought together expertise from across the nation to ensure the training course will meet the objectives," said James Dansby, Program Manager at FEMA.

S&T's NUSTL is also executing two-day train-the-trainer sessions in all eight of the DOE Radiological Assistance Program (RAP) regions. Working with BNL and DOE NNSA, the first day of this training allows representatives in each RAP region to receive training on the Guidance, and on the second day they help the S&T NUSTL project team to train regional representatives from their federal, state and local jurisdictions on the key response missions and tactics they learned about on the previous day. This effort will create a cadre who can support state and local understanding of the science behind the RDD Response Guidance for planning purposes. All sessions are scheduled to be complete by August 2019. Dan Blumenthal, Consequence Management Program Manager at DOE NNSA, said, "I wanted to make sure the RDD Response Guidance is being adopted at the state and local levels, and that all their questions are being answered. One way to do that is making sure RAP teams are up to speed."

Lastly, S&T's NUSTL is partnering with Idaho National Laboratory to develop standardized exercise templates for RDD detonation responses that can be used by the National Guard Bureau's 57 WMD-CSTs across the country in their required training and exercises, in conjunction with state and local partners. Providing standardized training and exercise procedures, rooted in sound scientific principles and practices from the Guidance, will support local radiological preparedness and encourage interagency coordination for radiological/nuclear response and recovery. These templated exercises will be available to state and local partners in 2020.

Publishing the RDD Response Guidance is a big step forward in ensuring that state and local first responders have a solid scientific basis of the hazard and an easy-to-adopt method of planning for the initial response. But publishing guidance documents is not enough, and S&T's NUSTL and its partners will continue working to ensure the recommendations are further integrated into training courses, exercise design documents and national response protocols.
True preparedness for radiological emergencies comes from good coordination and communication between agencies and protocols at the local, state and federal levels, and S&T's NUSTL will continue to execute research and development projects that, while focusing on supporting first responder radiological capabilities, benefit a comprehensive capability across agencies and levels of government.

Watch DHS S&T RDD Response Guidance Videos Now!

Agencies, states, cities, and responders are encouraged to share these animations, divided by individual tactics, with their teams, colleagues, and partners so they can use the videos as a reference to visualize and understand the guidance.

RDD Response Guidance: Planning for the First 100 Minutes Introduction (Videos are courtesy of DHS Science and Technology Directorate and YouTube. Posted on Apr 5, 2019.)

- Tactics 1 & 2: Initial Response
- Tactic 3: Give Report from the Scene
- Tactic 4: Issue Protective Actions to the Public
- Tactic 5: Notify Partners and Request
- Tactic 6: Initiate Life-Saving Rescue Operations
- Tactic 7: Secure and Manage the Scene
- Tactic 8: Map and Measure Overview
- Tactic 8: Map and Measure Phase 1 – Detonation Site and Transect
- Tactic 8: Map and Measure Phase 2 – Near-Field, 10-Point Monitoring, Outlying Areas
- Tactic 9: Commence Phased Evacuations
- Tactic 10: Monitor and Decontaminate

DHS Science and Technology Directorate (S&T) Honored in the 2018 'ASTORS' Homeland Security Awards Program

- 'Excellence in Homeland Security' – Android Team Awareness Kit (ATAK)
- 'Excellence in Homeland Security' – Enhanced Dynamic Geo-Social Environment (EDGE) Virtual Online Training for First Responders
- 'Excellence in Homeland Security' – Flood Apex Program Flood Sensors

*DHS S&T was also recognized in the 2017 'ASTORS' Awards Program.

(Learn About the Android Team Awareness Kit (ATAK), a GPS communications tool that runs on a mobile device. It improves situational awareness by allowing users to know where their mission partners are located, regardless of affiliation, improves communications through a variety of applications and was successfully used in 2017 Hurricane operations in Houston and Puerto Rico. Courtesy of the DHS Science and Technology Directorate and YouTube.)

The Annual 'ASTORS' Awards Program is specifically designed to honor distinguished government and vendor solutions that deliver enhanced value, benefit and intelligence to end users in a variety of government, homeland security and public safety vertical markets. The 2018 'ASTORS' Awards Program drew an overwhelming response from industry leaders, with a record-high number of corporate and government nominations received, as well as record-breaking 'ASTORS' Presentation Luncheon attendees, with top firms trying to register for the exclusive high-end luncheon and networking opportunity right up to the event kickoff on Wednesday afternoon at the ISC East registration!

Over 130 distinguished guests representing national, state and local governments and industry-leading corporate firms gathered from across North America, Europe and the Middle East to be honored among their peers in their respective fields, which included:

- The Department of Homeland Security
- The Federal Protective Service (FPS)
- Argonne National Laboratory
- The Department of Justice
- The Securities Exchange Commission
- The Office of Personnel Management
- U.S. Customs and Border Protection
- Viasat, Hanwha Techwin, Lenel, Konica Minolta Business Solutions, Verint, Canon U.S.A., BriefCam, Pivot3, Milestone Systems, Allied Universal, Ameristar Perimeter Security and More!

The Annual 'ASTORS' Awards is the preeminent U.S. Homeland Security Awards Program highlighting the most cutting-edge and forward-thinking security solutions coming onto the market today, to ensure our readers have the information they need to stay ahead of the competition, and keep our Nation safe – one facility, street, and city at a time. The 2019 'ASTORS' Homeland Security Awards Program is Proudly Sponsored by ATI Systems, Attivo Networks, Automatic Systems, and Desktop Alert. Nominations are being accepted for the 2019 'ASTORS' Homeland Security Awards at https://americansecuritytoday.com/ast-awards/.

|Access Control/Identification|Personal/Protective Equipment|Law Enforcement Counter Terrorism|
|Perimeter Barrier/Deterrent System|Interagency Interdiction Operation|Cloud Computing/Storage Solution|
|Facial/IRIS Recognition|Body Worn Video Product|Cyber Security|
|Video Surveillance/VMS|Mobile Technology|Anti-Malware|
|Audio Analytics|Disaster Preparedness|ID Management|
|Thermal/Infrared Camera|Mass Notification System|Fire & Safety|
|Metal/Weapon Detection|Rescue Operations|Critical Infrastructure|
|License Plate Recognition|Detection Products|And Many Others!|

Don't see a Direct Hit for your Product, Agency or Organization? Submit your category recommendation for consideration to Michael Madsen, AST Publisher, at: [email protected].

Government Excellence Awards in the 'ASTORS' Awards

In addition to DHS S&T, the following Government Agencies were recognized in the 2018 'ASTORS' Homeland Security Awards:

- Argonne National Laboratory, Modified Infrastructure Survey Tool (MIST)
- Defense Advanced Research Projects Agency (DARPA), Subterranean (SubT) Challenge

(Learn About the DARPA Subterranean Challenge, working with multidisciplinary teams from around the world to compete in the development of the autonomy, perception, networking, and mobility technologies necessary to map, explore and search underground networks in unpredictable conditions. Courtesy of DARPAtv and YouTube.)

- Department of Justice (DOJ) Bureau of Justice Assistance (BJA), Project Safe Neighborhoods Program
- Department of Justice (DOJ)
- Securities Exchange Commission (SEC)
- Office of Personnel Management, Federal Risk Mgmt Process Training (FedRMPTP)
- DHS Federal Protective Service, Modified Infrastructure Survey Tool (MIST)
- Federal Bureau of Investigation (FBI), Violent Crimes against Children (VCAC) program

(Learn About the FBI VCAC Program, and hear from Special Agent Danielle Messineo, who works in the Crimes Against Children division, trying to prevent future victims by giving presentations to schoolchildren. Courtesy of USA Network and YouTube.)
- International Association of Counterterrorism and Security Professionals (IACSP), IACSP Training and Technology
- Joint Interagency Task Force South (JIATF-S), United States Multi-Service, Multi-Agency Task Force
- Pentagon Force Protection Agency, Detection & Emergency Response to envelopes containing the deadly poison ricin at a Pentagon mail screening facility

(Learn About the Joint Interagency Task Force South on Key West, Florida, a multi-agency, international alliance whose mission is to cover 42 million square miles of territory primarily in Central and South America to stem the flow of illegal drugs and to disrupt and dismantle sophisticated narco-trafficking networks. Much of that work is carried out on the high seas. Courtesy of Doug Brumbaugh and YouTube.)

- U.S. Customs and Border Protection, CBP Entry/Exit Program
- US Immigration and Customs Enforcement's (ICE) Homeland Security Investigations (HSI), Human Rights Violators and War Crimes Unit (HRVWCU)

Also Recognized for 'Excellence in Homeland Security' in the 2018 'ASTORS' Awards Program:

- Edward Reinhold, Deputy Assistant Director, FBI (Ret)
- Joel McNelly, Charlotte-Mecklenburg Police Department
- Matt Quillen, Bristol Virginia Police Department
- Stanley I. White, Counterintelligence Advisor, the International Association for Counterterrorism & Security Professionals (IACSP)
- Thomas Homan, Acting Director, U.S. Immigration and Customs Enforcement (ICE) (Ret)
- Thomas O'Connor, President of the FBI Agents Association

2018 Champions Edition

See the 2018 'ASTORS' Champions Edition – 'Best Products of 2018: Year in Review' – for in-depth coverage of the outstanding products and services of firms receiving American Security Today's 2018 'ASTORS' Homeland Security Awards.

Enter Early to Maximize Media Coverage of your Products and Services at Kickoff, and Get the Recognition Your Organization Deserves! And be sure to Register Early for the 2019 'ASTORS' Awards Presentation Luncheon at ISC East 2019 to ensure your place at this limited-space event!

2018 'ASTORS' Homeland Security Awards Luncheon at ISC East
<urn:uuid:a0c4cd87-d88d-4d15-9d40-d3ca2fab40b6>
CC-MAIN-2022-40
https://americansecuritytoday.com/st-returns-with-nustl-responder-rad-preparedness-in-2019-astors/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00465.warc.gz
en
0.873509
3,318
2.53125
3
Mobile Critters: Part 2. Phishing. Phishing is undoubtedly the most ubiquitous attack of the last 10 years, if not more. It has been used successfully against companies large and small, sometimes leading to devastating results. Is opening emails from a smartphone safe? Email-based threats are nothing new and have been around for almost as long as email itself. Initially, just a nuisance, the harmless chain-letter emails of yesteryear morphed into so-called “419” scams, before eventually stepping up to the modern style of phishing we know today. This move towards criminal intent evolved in parallel to the Internet itself. In the early days, there was little eCommerce and few corporate resources hosted on the Internet. However, as the Internet was monetized and company portals and systems moved online, the value of access to those systems became clear. What is social engineering and how it works. And so bad actors started trying to get their hands on credentials for online platforms. Phishing is ultimately a form of Social Engineering – tricking the victim into handing over credentials for a website, platform, or corporate system. Depending on what access the attacker gains, further malicious activities can be carried out. In the corporate world, the attack commonly plays out like this: First, the attacker sends an email spoofing a commonly seen company or service. This could be sharing a file via DropBox or Office365, a parcel tracking service such as UPS, or maybe more devious, such as reflecting a current news event, such as Covid-19 or the Hurricane Katrina disaster. These emails often invoke a sense of urgency by suggesting that action must be taken quickly, or create intrigue, for example appearing as if someone has shared the entire company’s bonuses list with you. This encourages the target to click the embedded link or open the attached document to view the information. Fake website alert! For the second stage, the attacker has set up a website that looks like a familiar login page, such as Office365. However, entering your username and password into this page sends a copy of those credentials off to the attacker. Often this is where the attack ends, however, the use of Multifactor or Two-Factor Authentication is a strong protection against phishing, adding a randomly generated code to your login process. A basic phishing attack won’t capture this code, but a more sophisticated infrastructure may also ask you to enter the MFA code, all of which is captured and replayed into the real system. Finally, once an attacker has a username, password, and potentially a valid MFA code, they can access the system. This could be anything from a corporate email, HR, or CRM platform through to a personal gaming account, online banking, or shopping site. From there, money can be accessed, goods can be bought fraudulently and deeper attacks against a company can be staged. Kinds of phishing attacks you should be aware of. Several variations on Phishing work in a similar way, including Vishing (voice phishing), Smishing (SMS-based phishing), Whaling (targeted phishing against the company’s “big fish”), and Spear-Phishing (highly targeted phishing). While MFA provides good security against phishing and many modern email systems detect phishing quite easily, the best protection is to have well-trained staff who can spot a phishing attack before any credentials get typed out.
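As a tiny illustration of why the visible text of a link proves nothing, here is a minimal Python sketch (not from the original article) that compares a link's actual host against a short allowlist. The domains listed are illustrative assumptions only; real mail filters use far richer signals such as domain age, reputation, and homoglyph detection.

```python
from urllib.parse import urlparse

# Illustrative allowlist only -- the hosts your organization actually expects
# login links to come from would differ.
EXPECTED_LOGIN_HOSTS = {"login.microsoftonline.com", "www.dropbox.com", "www.ups.com"}

def looks_suspicious(url: str) -> bool:
    """Flag links whose real host is not on the expected list."""
    host = (urlparse(url).hostname or "").lower()
    return host not in EXPECTED_LOGIN_HOSTS

print(looks_suspicious("https://office365-login.example-files.com/share"))  # True
print(looks_suspicious("https://login.microsoftonline.com/"))               # False
```

The point of the sketch is the habit it encodes: judge a link by the host it actually resolves to, not by the brand name embedded in its path or display text.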
<urn:uuid:dde29ede-da9f-44af-b28f-146dff25298e>
CC-MAIN-2022-40
https://kaymera.com/phishing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00465.warc.gz
en
0.955081
716
2.71875
3
Ever been hit in the face with the harsh glare of your screens late at night? Most of us are familiar with the night mode available on most GPS apps and devices. When you’re driving along and day becomes night, the colors on the screen invert, allowing for a less blinding experience. Did you know that you can experience this sensation all of the time with most computer systems?

What is dark mode?

It shouldn’t be any surprise that extended periods of staring into a computer screen can result in eye strain and fatigue. Though some of the strain can be associated with text size and the level of irritating blue light, for some, the brightness of the average computer experience itself can exacerbate tired eyes. Dark mode, also known as “night mode” on a computer screen, is a viewing setting that inverts or lowers the colors of non-photographic sections of your display. The result is a reading and writing space that closely resembles that of a computer’s command terminal. Dark mode doesn't so much turn your computer screen black and white as much as it makes the colors more tolerable to eyes under low-light conditions.

What are the benefits of using dark mode?

Dark modes are great for darker workspaces. Some people prefer working on a computer in the dark or at night. Some don’t have the option of a well-lit space when trying to catch up on some emails while the family is sleeping. However, using a computer in the dark can result in high screen contrast that can be hard on the eyes. During times like these, computer screen dark mode is downright necessary. While there are plenty of "day walkers" using this setting, dark mode or night mode benefits night owls more than anyone.

Dark mode reduces blue light. Why would it be handy to know how to reduce blue light on a PC, Mac, or mobile device? While you may not know it, blue light (present in digital screens, sunlight, etc.) affects the circadian rhythms of your brain. These rhythms dictate the chemicals that allow you to get to sleep easier or when you should be more alert. However, few of us know how to turn off or reduce the blue light on computer screen modules manually. Dark modes or dark screen backgrounds decrease the amount of blue light you receive from your device and computer screens so that your eyes aren’t as strained and so you can more easily turn off the workday to get to sleep faster. Though there's no best color for eyes for computer screens, reducing blue light when possible can help preserve natural circadian rhythms.

Dark modes offer greater contrast for essential documents. As helpful as some of our editing systems can be when editing documents, certain corrective items can slip past us on white screens. Due to the greater contrast of these symbols against a dark background, they are much more apparent.

Dark modes decrease eye fatigue. You may be wondering, "is dark mode better for your eyes?" Well, yes...and no. According to computer scientist Silas S. Brown from Cambridge University, dark mode screens can limit eye fatigue, video glare, flicker issues, and photophobia. While this is true, the optimal reading conditions for dark mode are not the greatest conditions for reading in the first place. Low-light conditions, regardless of light or dark mode, can result in eye strain.

Dark modes use less energy. While the decrease in energy may not be as widely felt when using a computer in the dark that is always plugged in, dark modes may help laptops and other devices extend the lives of batteries.
More energy is necessary to show brighter colors, making dark backgrounds an area of possible savings. How To Switch Your Computer to Dark Mode Setting Dark Mode on Mac Computers If your MacOS is Mojave or later, a computer-wide dark mode setting comes standard. - Open System Preferences app in your dock. Another easy way to do this is to click the magnifying glass search icon in the upper right-hand corner. - Click General. - At the top, next to appearance, you’ll see an option for “Light” or “Dark.” Simply choose “Dark.” How To Switch To Dark Mode on a Windows 10 Computer Before you begin, you should realize that knowing how to invert colors on a computer is not the same as dark mode. If you would like to switch to dark mode, proceed to the following steps: - Access the Personalization menu in your system settings or right-click your desktop just as you would reset your desktop wallpaper background. - On the sidebar, click “Colors” - In “App Mode”, click on the “Dark” option to turn on Dark Mode. How to Switch Just Your Internet Browser Into Dark Mode You may not want dark mode across every part of your computer. If you only want dark mode for your Chrome browser, including Google Drive and Docs or web-based Dropbox, you need to install a Chrome Browser extension. A recommended one if called Dark Reader. Simply click here to access Dark Reader and install it for use if you are using a Chrome browser. How To Run Dark Mode On Your iPhone - Open your Settings App - Access “General” - Open “Accessibility” - Open “Display Accommodations” and select “Invert Colors” - Turn on “Smart Invert” How to Run Dark Mode on Your Android Device Unfortunately, there is no system-wide Dark Mode option at the time this piece was published. While certain Samsung phones may have one, we will wait for an Android-wide option before providing instructions. In the meantime, there are a few applications that allow you to apply Dark Mode to select applications. Dark Mode: Not For Everyone Dark mode certainly isn’t meant to be the best option for everyone — otherwise, it would be the standard. However, experimenting with Dark Mode on your devices just may yield a more pleasant computer use experience and a better all-around workday.
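If you script your own tools and want them to respect the system setting, you can also check it programmatically. The following is a small, best-effort Python sketch (not part of the original article) for macOS Mojave or later; it assumes the `defaults read -g AppleInterfaceStyle` command, which reports "Dark" when dark mode is enabled and fails when it is not.

```python
import subprocess

def macos_dark_mode_enabled() -> bool:
    """Best-effort check for dark mode on macOS (Mojave or later)."""
    try:
        result = subprocess.run(
            ["defaults", "read", "-g", "AppleInterfaceStyle"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip() == "Dark"
    except (subprocess.CalledProcessError, FileNotFoundError):
        # Light mode, or not running on macOS at all.
        return False

print("Dark mode is", "on" if macos_dark_mode_enabled() else "off (or not macOS)")
```

Other platforms expose the preference differently; web pages, for example, typically read the `prefers-color-scheme` media query instead.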
<urn:uuid:f98331ef-27d8-4f99-ac2f-32d2f2e3dc5a>
CC-MAIN-2022-40
https://www.jdyoung.com/resource-center/posts/view/197/the-dark-side-the-benefits-of-computer-screen-dark-mode-jd-young
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00465.warc.gz
en
0.899299
1,270
3
3
Although hackers and other cyber criminals are responsible for a large number of harmful security breaches, the healthcare industry faces other threats as well. According to a recent report by the U.S. Department of Health and Human Services, 40 percent of all large data breaches occurred due to lost or stolen devices. Statistics like these highlight the importance for healthcare organizations to leverage a layered security approach to protect against both outside threats like malware and compliance risks that come from misplaced devices.

Both the HHS report and an eWeek article mentioned the importance of encryption in securing healthcare information. According to eWeek, data should be encrypted both in transit and at rest to prevent compliance violations. Additionally, healthcare organizations should limit the amount of patient and health data that workers are allowed to store on laptops.

Employee education is another factor healthcare organizations should consider. As cyber criminals devise new threats and targeted attacks against organizations, it’s important for workers throughout the organization to be aware of potential threats such as new phishing scams. According to eWeek, a lack of policies on social networking usage and data storage can present compliance risks.

Advances in electronic health record (EHR) technology have allowed for a significant amount of collaboration among healthcare professionals. With sensitive data stored on laptops, smartphones and other devices, EHRs have also made it important to protect both the devices and the data stored on them.

“Data sharing is essential as doctors look to collaborate on patient care as part of accountable care organizations under the Patient Protection and Affordable Care Act (ACA), also known as Obamacare,” the news source said. “But as important as data sharing is, health care organizations are also under a mandate to prevent costly data breaches that plague the health care industry.”

Are you comfortable with electronic health records? Do you feel that hospitals take enough precautions to secure their patients’ data?
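To make the encryption-at-rest recommendation concrete, here is a minimal Python sketch (not from the report or the article) using the third-party `cryptography` package. The record contents and key handling are purely illustrative; encrypting a blob this way is only one layer, and in practice the key would live in a key-management system, never next to the data it protects.

```python
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# Illustrative only: key management is the hard part in real deployments.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"patient_id=12345;notes=follow-up in two weeks"
token = cipher.encrypt(record)      # ciphertext that is safer to store on a device
restored = cipher.decrypt(token)    # recovering the record requires the key

assert restored == record
```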
<urn:uuid:20b2bd5b-53d7-4c27-be59-363d13380e42>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/protecting-patient-data-in-healthcare
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00465.warc.gz
en
0.957497
379
2.703125
3
While the root certificate in itself is sufficient to implement SSL security, in practice, most CAs make use of intermediate certificates. This is because of the practicalities involved in attaining the essential qualifications required to issue root certificates.

In most general cases, a CA starts by issuing cross-certificates. These digital certificates are issued by one CA and link to a public key of a root certificate issued by another well-established CA. A beginner certifying authority just starting its operations may not have the necessary qualifications to issue a root certificate yet. Hence, it will use another well-established CA’s services and link its certificates to a valid root certificate, thus forming the chain of trust. A single trusted root certificate will be linked to multiple other intermediate certificates with cross-certificates, thus allowing users to get a valid trust chain for their SSL implementation. Once the CA gets the necessary validation and is deemed trustworthy to issue its own root certificates, it will replace the trust anchor with its own root certificates, and the corresponding roots will be added to the root store.

Thus intermediate certificates serve to bridge the gap between an intermediate CA and a trusted root certificate. They are used to let growing CA companies find their footing and help establish a consumer base. Upon proper validation, they will issue their own root certificates, completing the trust chain without another CA’s help.

Some more reasons why intermediate certificates are used are listed below:
- Intermediate certificates help control the number of root certificates in use and help mitigate security risks and fraud. As more and more users implement SSL sites, the number of root CAs will also increase. But having too many root CAs can lead to serious security implications, which intermediate certificates aim to resolve. They provide a means for the root CAs to delegate some of the certificate issuing responsibilities to intermediate CAs, providing intermediate certificates that will substitute for a root certificate.
- Intermediate certificates can be replicated in high numbers without compromising the security framework, which helps establish the chain of trust.
- They help with the scalable implementation of the SSL network.

Almost all trusted CAs use intermediate certificates, as doing so adds an additional layer of security and helps manage security incidents gracefully. In case of a security attack, only the intermediate certificate needs to be revoked, instead of revoking the root certificate and all of the certificates it has been used to sign. By revoking just the intermediate certificate in question, only the group of certificates in the same chain as that intermediate certificate will be affected, thus minimizing the cost and impact of the security incident.
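To make the chain of trust concrete, here is a minimal Python sketch (illustrative only, using the third-party `cryptography` package) that checks the basic structural property of a chain: each certificate's issuer should match the subject of the certificate above it. The file names are hypothetical placeholders, and real validation also verifies signatures, validity periods, extensions, and revocation status.

```python
# Requires the third-party package: pip install cryptography
from cryptography import x509

def load_cert(path: str) -> x509.Certificate:
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

# Hypothetical file names -- substitute the certificates from your own chain.
leaf = load_cert("leaf.pem")
intermediate = load_cert("intermediate.pem")
root = load_cert("root.pem")

# Structural check only: each certificate names the next one up as its issuer.
print(leaf.issuer == intermediate.subject)   # expected: True
print(intermediate.issuer == root.subject)   # expected: True
print(root.issuer == root.subject)           # root certificates are self-issued
```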
<urn:uuid:28570ded-b0d1-4aff-898b-652e2661afee>
CC-MAIN-2022-40
https://www.keyfactor.com/resources/difference-between-root-certificates-and-intermediate-certificates/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00465.warc.gz
en
0.935339
526
3.109375
3
Natural language understanding (NLU) is a technical concept within the larger topic of natural language processing. NLU is the process responsible for translating natural, human words into a format that a computer can interpret. Essentially, before a computer can process language data, it must understand the data.

Copyright by www.unite.ai

Techniques for NLU include the use of common syntax and grammatical rules to enable a computer to understand the meaning and context of natural human language. The ultimate goal of these techniques is that a computer will come to have an “intuitive” understanding of language, able to write and understand language just the way a human does, without constantly referring to the definitions of words.

There are numerous techniques that computer scientists and NLP experts use to enable computers to understand human language. Most of the techniques fall into the category of “syntactic analysis”. Syntactic analytic techniques include:
- word segmentation
- morphological segmentation
- sentence breaking
- part of speech tagging

These syntactic analytic techniques apply grammatical rules to groups of words and attempt to use these rules to derive meaning. In contrast, NLU operates by using “semantic analysis” techniques. Semantic analysis applies computer algorithms to text, attempting to understand the meaning of words in their natural context, instead of relying on rules-based approaches. The grammatical correctness or incorrectness of a phrase doesn’t necessarily correlate with its validity: there can be phrases that are grammatically correct yet meaningless, and phrases that are grammatically incorrect yet have meaning. In order to distinguish the most meaningful aspects of words, NLU applies a variety of techniques intended to pick up on the meaning of a group of words with less reliance on grammatical structure and rules.

NLU is an evolving and changing field, and it’s considered one of the hard problems of AI. Various techniques and tools are being developed to give machines an understanding of human language. Most NLU systems have certain core components in common. A lexicon for the language is required, as is some type of text parser and grammar rules to guide the creation of text representations. The system also requires a theory of semantics to enable comprehension of the representations. There are various semantic theories used to interpret language, like stochastic semantic analysis or naive semantics.

Common NLU techniques include:

Named Entity Recognition is the process of recognizing “named entities”, which are people and important places/things. Named Entity Recognition operates by distinguishing fundamental concepts and references in a body of text, identifying named entities and placing them in categories like locations, dates, organizations, people, works, etc. Supervised models based on grammar rules are typically used to carry out NER tasks.

Word-Sense Disambiguation is the process of determining the meaning, or sense, of a word based on the context that the word appears in. Word sense disambiguation often makes use of part-of-speech taggers in order to contextualize the target word. Supervised methods of word-sense disambiguation include the use of support vector machines and memory-based learning. However, most word sense disambiguation models are semi-supervised models that employ both labeled and unlabeled data.

[…] Read more: www.unite.ai
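As a concrete illustration of named entity recognition (not part of the original article), here is a minimal Python sketch using the open-source spaCy library and its small English model. The example sentence is made up, and the exact entities and labels returned depend on the model you load.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple opened a new office in Berlin in September, Reuters reported.")

for ent in doc.ents:
    # Typical output: Apple/ORG, Berlin/GPE, September/DATE, Reuters/ORG
    print(ent.text, ent.label_)
```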
<urn:uuid:4e6635f7-294e-4875-8fee-1a557a90b171>
CC-MAIN-2022-40
https://swisscognitive.ch/2020/12/08/natural-language-understanding/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00465.warc.gz
en
0.925109
702
3.546875
4
Teaching yourself Python machine learning can be a daunting task if you don’t know where to start. Fortunately, there are plenty of good introductory books and online courses that teach you the basics. It is the advanced books, however, that teach you the skills you need to decide which algorithm better solves a problem and which direction to take when tuning hyperparameters.

A while ago, I was introduced to Machine Learning Algorithms, Second Edition by Giuseppe Bonaccorso, a book that almost falls into the latter category. While the title sounds like another introductory book on machine learning algorithms, the content is anything but. Machine Learning Algorithms goes to places that beginner guides don’t take you, and if you have the math and programming skills, it can be a great guide to deepen your knowledge of machine learning with Python.

Oiling your machine learning engine

Machine Learning Algorithms kicks off with a quick tour of the fundamentals. I really liked the accessible definitions Bonaccorso uses to explain key concepts such as supervised, unsupervised, and semi-supervised learning and reinforcement learning. Bonaccorso also draws great analogies between machine learning and descriptive, predictive, and prescriptive analytics. The machine learning overview also contains some hidden gems, including an introduction to computational neuroscience and some very good precautions on the pitfalls of big data and machine learning. That said, the machine learning overview does not go into too much detail and would be hard to understand for novices. Given the audience of the book, it serves to refresh and solidify your understanding of machine learning, not to teach you the basics.

Next, Machine Learning Algorithms builds on that brief overview and goes into more advanced concepts, such as loss functions, data generation processes, independent and identically distributed variables, underfitting and overfitting, different classification strategies (one-vs-one and one-vs-all), and elements of information theory. Again, the definitions are smooth and very accessible for someone who has already had hands-on experience with machine learning algorithms and linear algebra.

Before going into the exploration of different algorithms, the book covers some more key concepts such as feature engineering and data preparation. Here, you’ll get to revisit some of the key classes and functions of scikit-learn, the main Python machine learning library. If you already have a solid knowledge of Python and numpy, you’ll find this part a pleasant review of one-hot encoding, train-test splitting, imputing, normalization, and more. There is some great material in the third chapter, including one of the best and most accessible definitions of principal component analysis (PCA) and feature dependence in machine learning algorithms. You’ll also get to see some of the more advanced techniques not covered in introductory books, such as non-negative matrix factorization (NNMF) and SparsePCA. Of course, without the background in Python machine learning, these additions will be of little use to you.

The real meat of the book starts in the fourth chapter, where you get to the machine learning algorithms. Here, I had mixed feelings.

A rich roster of machine learning algorithms

In general, Machine Learning Algorithms is nicely structured and stands up to the name. There are chapters on regression, classification, support vector machines (SVM), decision trees, and clustering.
The book follows up with a few chapters on recommendation systems and natural language processing applications, and finishes off with a very brief overview of deep learning and artificial neural networks.

The main chapters offer in-depth coverage of the principal machine learning algorithms in Python, including details not covered in introductory books. For instance, the regression chapter goes into an extensive coverage of outliers and methods to mitigate their effects. The classification chapter has a nice discussion on passive-aggressive classification and regression in online algorithms. The SVM chapter has a comprehensive (but complicated) discussion on semi-supervised vector machines. And the decision trees chapter provides a good coverage of the specific sensitivities of DTs such as class imbalance, and some practical tips on tweaking trees for maximum performance.

The clustering section really shines. It spans three full chapters, starting with fundamentals (k-nearest neighbors and k-means) and goes through more advanced clustering (DBSCAN, BIRCH, and bi-clustering) and visualization techniques (dendrograms). You’ll also get a full account of measuring the effectiveness of the results and determining whether your algorithm has latched onto the right number and distribution of clusters.

Across the book, there are thorough discussions of the mathematical formulas behind each machine learning algorithm. You need to come strapped with solid linear algebra and differential and integral calculus fundamentals to fully understand this (if you need to hone your machine learning math skills, I’ve offered some guidance in a previous post). The book also makes extensive use of functions from the numpy, scipy, and matplotlib libraries without explaining them, so you’ll need to know those too (you can find some good sources on those libraries here).

One of the most enjoyable things about Machine Learning Algorithms is the chapter summaries. After going through the nitty-gritty of the math and Python coding of each machine learning algorithm, Bonaccorso gives a brief review of where to apply each of the techniques presented in the book. There are also many references to relevant papers that provide more in-depth coverage of the topics discussed in the book. It’s refreshing to see some of the old but fundamental papers from the early 2000s being mentioned in the book. Those things tend to get buried under the hype surrounding state-of-the-art research.

Machine Learning Algorithms finishes off with a good wrap-up of the machine learning pipeline and some key tips on choosing between the different Python tools introduced across the book.

Not enough real-world examples

The one thing, in my opinion, that should set a book on Python machine learning apart from research papers and theoretical textbooks is the examples. A good book should be rich in use-case-oriented examples that take you through real-world applications and possibly build up through the book. Unfortunately, in this respect, Machine Learning Algorithms leaves a bit to be desired.

For one thing, the examples in the book are mostly generic, using data-generation functions in scikit-learn such as make_blobs, make_circles, and make_classification. Those are good functions to show certain aspects of Python machine learning, but not enough to give you an idea of how to use the techniques in real life, where you have to deal with noise, outliers, bad data, and features that need to be normalized and categorized.
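To show what "generic" means here, the following is my own small sketch (not code from the book) of the kind of synthetic-data example these functions produce: scikit-learn generates a clean classification dataset, splits it, and fits a model in a few lines.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, perfectly clean data -- the kind real projects never hand you.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

Real projects rarely look like this, which is exactly the point being made here.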
The code is in plain Python scripts as opposed to the preferred Jupyter Notebook format (which is not much of a big deal, to be fair). Also, while the book omits much of the sample code and focuses on the important parts for the sake of brevity, it made it hard to navigate the sample files at times. The book does cover some real-world examples, including one with airfoil data in the SVM chapter and another with the Reuters corpus in the NLP chapter. The recommendation systems chapter also includes a few decent use cases, but that’s about it. Without concrete examples, the book often reads like a disparate reference manual with code snippets, which makes it even more crucial to have solid experience with Python machine learning before picking this one up. Another thing that didn’t really appeal to me were the two chapters on deep learning. Machine Learning Algorithms provides a good overview of deep learning and discusses convolutional neural networks, recurrent neural networks, and other key architectures. But the problem is that introductory books on Python machine learning already cover these concepts and much more. So most of the people who make it this far through the book without putting it down won’t find anything new here (aside from the mention of KerasClassifier maybe). Midway through Python machine learning journey So, where does this book stand in the roadmap to learning machine learning with Python? It’s neither beginner level, nor super-advanced. I would suggest picking up Machine Learning Algorithms after you read an introductory-to-intermediate book like Python Machine Learning or Hands-on Machine Learning, or an online course like Udemy’s “Machine Learning A-Z.” Otherwise, you won’t be able to make the best of the rich content it has to offer. Once you finish this one, you might want to consider Bonaccorso’s Mastering Machine Learning Algorithms, Second Edition, which expands on many of the topics presented in this book and takes them into even greater depth.
<urn:uuid:7315870c-f34d-4bb1-a2a5-d914ad03b33e>
CC-MAIN-2022-40
https://bdtechtalks.com/2020/10/14/machine-learning-algorithms-2nd-edition-review/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00465.warc.gz
en
0.928535
1,830
2.53125
3
It’s a challenge to learn a new language, especially once we’re past 18 years old. But Duolingo, self-proclaimed as “the world’s best way to learn a language” and seconded by reviewers at the Wall Street Journal and the New York Times, is set to change that with an assist from artificial intelligence (AI). Duolingo launched in 2011, and through a powerful mix of personalized learning, immediate feedback and gamification/rewards, it has become one of the most downloaded educational apps today. Let’s take a look at how artificial intelligence helps the company deliver personalized language lessons to its 300 million users. Duolingo on a Mission to Offer Free Language Education Founded in Pittsburgh by Carnegie Mellon University computer scientist Luis von Ahn, who is renowned for creating CAPTCHA, Duolingo’s mission is to “make education free and accessible to everyone in the world.” Today the company’s more than 300 million users receive personalized training on more than 30 distinct languages ranging from Spanish to Navajo to Klingon through its cross-platform app. Learning a language is time-intensive—the U.S. State Department estimates it can take 600 to more than 1,100 class hours to learn a language. Duolingo breaks this effort down into manageable chunks that can be done anywhere at any time individualized for each user and infused with fun and a points-based reward system. Users can access the app for free, but those who don’t want ads can sign up for premium service through a monthly subscription. AI Powers Duolingo’s Mission Artificial intelligence is behind Duolingo’s mission to make language education accessible and free to everyone. It starts with an AI-driven placement test to determine the starting knowledge of each user for the language they want to study. For example, if someone signed up to learn French and they have four years of instruction in high school, they will likely be able to start Duolingo’s lessons further along than a user who had never been exposed to French before. To determine exactly where each user’s understanding of the language begins, the placement test adapts as it continues based on if the user answered the previous question correctly or not. In a mere five minutes, this test gives the app a good sense of where each user should start the course. This ability contributes to a positive user experience and reduces the number of those with some knowledge bowing out due to boredom at the beginning. Another core feature of the Duolingo app that uses artificial intelligence is known as spaced repetition. This concept delivers personalized language lessons over longer intervals for optimal learning rather than cramming lessons into a short period of time. Additionally, the “lag effect” is also important to Duolingo’s learning techniques. If the gap between practice sessions is lengthened, users improve more. All of this content delivery is controlled on the backend by artificial intelligence. As learning expands and language proficiency is gained, the user interacts with the content in different ways. AI algorithms using deep learning predict at any given moment the probability of a user being able to recall a word in a given context and then can figure out what that user needs to keep practising. The algorithms analyze user data to then personalize the learning experience. In fact, Duolingo generates a wealth of data that is critical when creating predictive models. 
The amount, type, and uniqueness of the data was one of the reasons the company’s AI and research lead Burr Settles decided to join the company in 2013 over bigger, more established companies. Duolingo’s half-life regression statistical model operates based on analyzing error patterns of millions of language learners to inform content delivery that’s relevant for each user and their learning needs. In an effort to ensure an engaging experience, Duolingo has AI-powered chatbots that help teach language through automated text-based conversations with users who are in the app. Not only do the bots help users improve their language skills by helping them practice conversing in a language, but they get smarter the more they are used. The company has also considered the possibility of expanding into virtual reality technologies to deliver a more immersive experience. As with any artificial intelligence, the more users engage with the AI, the better it gets. And the better the AI gets, the closer it is to mimicking human language instructors. As the artificial intelligence of Duolingo gains expertise, it’s helping millions of people worldwide acquire new language skills. AI is reshaping businesses of all shapes and sizes, across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.
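As a rough illustration of the half-life regression idea (a simplified sketch, not Duolingo's actual model or parameters), recall probability can be modeled as p = 2^(−Δ/h), where Δ is the time since a word was last practiced and h is its estimated half-life. The half-life and threshold in the Python sketch below are made-up numbers purely for demonstration.

```python
def recall_probability(days_since_practice: float, half_life_days: float) -> float:
    """p = 2 ** (-delta / h): chance the learner still recalls a word."""
    return 2 ** (-days_since_practice / half_life_days)

# Illustrative numbers only -- the real model estimates a half-life per word
# and per learner from features of their practice history.
for delta in (1, 3, 7, 14):
    p = recall_probability(delta, half_life_days=7.0)
    action = "schedule practice" if p < 0.7 else "still fresh"
    print(f"{delta:>2} days since practice: p(recall) ~ {p:.2f} -> {action}")
```

Scheduling practice when the predicted recall drops below a threshold is what produces the widening review intervals described above.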
<urn:uuid:71ab3a7e-e739-4869-bbb1-271cd90019b2>
CC-MAIN-2022-40
https://bernardmarr.com/the-amazing-ways-duolingo-is-using-artificial-intelligence-to-deliver-free-language-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00465.warc.gz
en
0.929108
985
3.03125
3
Confused about PCI DSS compliance? This article will explain PCI DSS and the importance of complying with this important information security standard. What is PCI DSS? PCI DSS stands for the Payment Card Industry (PCI) Data Security Standard (DSS). The PCI DSS is a proprietary information security standard that was established in 2004 by the major credit card brands. The standards apply to organizations that handle major branded credit cards, including Visa, MasterCard, American Express, Discover, and JCB. The PCI DSS does not cover private label cards, such as department store credit cards, that are not associated with a major card brand. The PCI DSS consists of common sense steps that coincide with widely accepted data security best practices. The goals of the PCI DSS standards are to help merchants securely process credit card transactions and prevent fraud. Who must be PCI DSS compliant? Is PCI DSS compliance required by law? While PCI DSS is not mandated by U.S. federal law, some states have laws that refer to PCI DSS explicitly or contain equivalent mandated standards. Additionally, the major credit card brands require that all organizations, worldwide, that accept or process their cards be compliant with PCI DSS. If your organization processes, stores, or transmits cardholder data, you are required to be compliant with PCI DSS. What does PCI DSS compliance entail? The PCI DSS outlines 12 requirements, each falling under one of six categories, or “goals.” The following is a brief overview of these goals and their corresponding requirements: Goal No. 1: Build & Maintain a Secure Network - Organizations must install and maintain a secure network to conduct transactions, including utilizing firewalls that are effective but do not result in undue inconvenience to cardholders or vendors. - Organizations must not use vendor-supplied defaults for system passwords and other security parameters, as these defaults are widely known by hackers. They should be changed before a system is installed on the network. Goal No. 2: Protect Cardholder Data - Cardholder data should not be stored – whether in electronic or paper form – unless absolutely necessary. Magnetic strip and chip data should never be stored. When it is necessary to store cardholder data, it must be stored securely. Primary account numbers (PAN) must be rendered unreadable. - Cardholder data that is transmitted across open, public networks must be encrypted. Goal No. 3: Maintain a Vulnerability Management Program - Anti-virus software must be used and regularly updated. - All systems and applications must be secure and free of bugs or vulnerabilities that could allow data breaches. Software and operating systems should be kept up-to-date; vendor-supplied patches should be installed right away. Goal No. 4: Implement Strong Access Control Measures - Cardholder data should be accessible by employees on a “need to know” basis; employees should have access to only those systems and data that they absolutely need to perform their job. - Every user should have a unique ID to access the system, and users should be authenticated using a strong password or passphrase, biometrics, or a token device or smart card. - Data must be protected physically as well as electronically. This involves measures such as restricting physical access to different parts of the building, maintaining a visitor log, physically securing media, mandating the use of document shredders, and putting locks on dumpsters. Goal No. 
5: Regularly Monitor and Test Networks - All access to network resources and cardholder data must be tracked, monitored, and regularly tested. Audit trails should be secured, and audit trail history should be retained for at least one year, with at least three months of history always available for analysis. - Security systems and processes should be regularly tested, especially after new software deployments or system changes. Goal No. 6: Maintain an Information Security Policy - The organization must have a comprehensive security policy that addresses all PCI DSS requirements. All personnel should be trained on the sensitivity of cardholder data and their specific responsibilities regarding data security. These responsibilities must be clearly defined and adhered to at all times. What happens if I’m not PCI DSS compliant, and a data breach occurs? Although there are no federal laws regarding PCI DSS, your business may be found in violation of your state’s laws regarding data privacy, some of which mirror PCI DSS standards or refer to them directly. Additionally, the credit card companies that mandate PCI DSS could impose fines on your organization amounting to tens or even hundreds of thousands of dollars; if you are unable to pay the fines, you will no longer be able to accept their cards. Despite the fact that the federal government does not mandate PCI DSS, federal law enforcement may still get involved to ensure that the credit card data stolen from your organization is not being used to finance terrorist activities. And, of course, your customers’ data will have been breached, which could result in massive, possibly irreparable damage to your organization’s reputation and/or civil lawsuits. What can I do to ensure that my organization is PCI DSS compliant? The PCI DSS focuses heavily on proactive steps that organizations can take to secure cardholder data and prevent breaches. Continuum GRC agrees with this approach; we feel that it is much better to be secure and prevent a breach than to have to react to one and face steep fines, legal ramifications, and damage to your organization’s good name. The specifics of PCI DSS compliance requirements are quite complex. Thankfully, the PCI DSS compliance experts at Continuum GRC are here to help. Continuum GRC’s modules were designed by leading PCI DSS Qualified Security Assessors (QSA) approved by the PCI Security Standards Council (SSC). We provide our clients with scalable, efficient solutions for meeting the rigorous demands of PCI DSS compliance. Continuum GRC offers full-service and in-house risk assessment and risk management subscriptions helping companies all around the world sustain a proactive cyber security program. Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization’s cyber security needs and find out how we can help you with PCI DSS compliance.
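As a small illustration of Goal No. 2's requirement that primary account numbers be rendered unreadable, here is a minimal Python sketch of one common approach — masking all but the last four digits. It is illustrative only; a real implementation would follow your assessor's guidance and might instead rely on tokenization, one-way hashing, or strong cryptography.

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number, keeping only the last four digits."""
    digits = [c for c in pan if c.isdigit()]
    cutoff = len(digits) - 4
    masked_digits = ["*" if i < cutoff else d for i, d in enumerate(digits)]

    # Re-insert the original separators (spaces, dashes) for readability.
    out, i = [], 0
    for c in pan:
        if c.isdigit():
            out.append(masked_digits[i])
            i += 1
        else:
            out.append(c)
    return "".join(out)

print(mask_pan("4111 1111 1111 1111"))   # **** **** **** 1111
```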
<urn:uuid:f5878af2-baee-4e6d-b80c-ef546dedacd7>
CC-MAIN-2022-40
https://continuumgrc.com/author/continuum-grc/page/64/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00665.warc.gz
en
0.925799
1,306
2.609375
3
What is Network Segmentation?

Network segmentation is a strategy used to segregate and isolate segments in the enterprise network to reduce the attack surface. However, given today’s data centers, with the explosion of users and the dynamics of applications and resources, security strategies involve moving the perimeter closer to the resource than it was in the ‘castle and moat’ strategy. Network segmentation is one of the core concepts in a Zero Trust security strategy, along with identities, based on the NIST SP 800-207 Zero Trust framework.

Traditional network segmentation, also known as macro-segmentation, is usually achieved using internal firewalls and VLANs. In microsegmentation, the perimeter and security controls are moved closer to the resource (e.g. a workload or a 3-tier application), creating secure zones. Network macro/microsegmentation is primarily executed to limit East-West traffic across the data center and prevent or slow down lateral movement by attackers.

Network macro/microsegmentation can be achieved with:
- Hardware firewalls (e.g. internal segmentation firewalls) – the traffic flow to the zones or segments being governed by firewall rules
- VLANs and access control lists (ACLs) – filter access to networks/subnets
- Software defined perimeter (SDP) – moves the perimeter closer to the host providing a virtual boundary; enables granular policy controls at the workload level

The Network Macrosegmentation Approach – Pros and Cons

Resource access policies defined through firewall rules, VLAN/ACLs and VPNs are static and focus only on ingress and egress traffic. These policies are rigid and cannot scale up or adapt to dynamic hybrid environments and dynamic secure access requirements that have moved beyond static perimeters.

| Pros | Cons |
|---|---|
| One of the oldest and most widely adopted methods of segmentation – predates Zero Trust | VLANs and firewalls create multiple chokepoints in the network – negatively affects network performance and business productivity (high friction) |
| Familiar hardware firewalls to control both East-West and North-South traffic | Thousands of firewall rules and VLAN/ACLs quickly become a management and security nightmare (complex, prone to human errors) |
| | Expensive to scale up with hardware investment and personnel costs |
| | Complex to achieve centralized visibility on-premises and clouds |
| | What works for on-premises doesn’t work on clouds (visibility & security gaps – large attack surface) |
| | Complex to achieve granular policies – no security context |
| | Policies are rigid – don’t adapt to dynamic environments or sudden shifts in business models (e.g. remote workforce, mergers & acquisitions, divestiture) |
| | Vendor lock-in becomes an overhead |

The Network Microsegmentation Approach – Pros and Cons

The perimeter is moved closer to the resource, and security controls are applied at the individual host.
| Pros | Cons |
|---|---|
| Platform and infrastructure independent | Will need agents on every endpoint, workload or hypervisor/virtual machine |
| Context-based security controls with granular policies | Though fine-grained policies are an advantage, the sheer number of policies that need to be created and managed across thousands of resources, user groups, zones (microsegments) and applications is overwhelming |
| Unified platform | 90% of the traffic is encrypted, requiring resource-intensive SSL/TLS decryption for full visibility, dramatically increasing processing requirements and therefore the cost to implement and operationalize this segmentation |
| | Need to be completely aware of the entire data center architecture – what’s changing, what’s new and where the gaps are – to start thinking of policies that don’t break business productivity (e.g. the sudden shift to remote work: what happens when employees return post-pandemic, how will the architecture/topology change, how will the policies be affected, and what are the ‘new’ gaps?) |
| | Minimal or no built-in threat detection and prevention – separate tools and integration are needed for threat intelligence, detection and prevention |

The network-centric segmentation approach, with either macrosegmentation or microsegmentation, clearly has its pros and cons. Network segmentation has a lot of moving parts: hardware firewalls, software-defined perimeters, additional controls and tools for multi-cloud infrastructure, and several resource access policies that need to be managed and updated to keep up with attacks and the evolving threat landscape.

Shifting Gears: From Network Segmentation to Identity Segmentation

Though network segmentation reduces the attack surface, this strategy does not protect against adversary techniques and tactics in the identity phases of the kill chain. The method of segmentation that provides the most risk reduction, at reduced cost and operational complexity, is identity segmentation. Protecting identities significantly reduces the risks of breaches from modern attacks, such as ransomware and supply chain threats, in which compromised credentials are a key factor. According to the Cost of a Data Breach 2021 Report by IBM and the Ponemon Institute, compromised or stolen user credentials were the most common root cause of breaches in 2021 and also took the longest time — an average of 250 days — to identify.

This is where CrowdStrike’s identity segmentation helps to significantly limit the attack surface by isolating and segmenting identities — providing immediate value, as the majority of breaches leverage user credentials.

Unfamiliar with identity segmentation? Learn what identity segmentation is and what it isn’t, and download the white paper below to understand how identity segmentation differs from network segmentation.
<urn:uuid:df073e45-f9ea-44c0-a70b-0cef37ff0fbc>
CC-MAIN-2022-40
https://www.crowdstrike.com/cybersecurity-101/network-segmentation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00665.warc.gz
en
0.89527
1,193
2.5625
3
If you were to write a web application entirely by yourself, it would be a rather daunting task. You would need to write the UI elements from lower-level APIs, set up and manage the database connections, manage the HTTP requests and replies, and so on. Then there is the application code itself and its business logic. Maintaining this application release after release would be another onerous task. Thankfully, this isn’t how web applications are written. A plethora of frameworks have evolved to provide pre-written code for common components like UIs, database management, and more. These frameworks provide ready code to manage common tasks, enabling the developer to concentrate on the application itself. A background on frameworks Frameworks are software that has been architected to ease, automate, and structure one or more aspects of application development. Just as there are varied subsystems and functions in a web application, there are frameworks that provide cookie-cutter implementations of those subsystems, which the application developer need not write. One of the most popular frameworks for Java is Spring, which provides a wide range of functions such as object management, database connectivity, messaging, persistence, network communications, and user interface elements. Spring also enforces a certain software architecture by implementing programming paradigms such as inversion of control and aspect-oriented programming. In an application, the Spring framework provides almost all the common (and reusable) code, while the application code fills in the blanks with the application’s own objects and implements its business logic. Many frameworks provide narrower, specialized facilities to simplify specific tasks. As a craftsman would likely use many tools from a toolbox for a given project, it’s common for a developer to use more than one framework in an application. For database connectivity and data binding, MyBatis is used to connect objects to SQL procedures or statements. An object-relational mapping (ORM) framework is another popular way of connecting database information to objects; Hibernate is a well-known example. In the domain of UIs and UI mapping to the underlying application, Struts, Apache Tiles, and Vaadin are some of the most widely used frameworks. Analyzing applications for security With such a broad range of frameworks available, it shouldn’t be a surprise that framework code makes up most of the code in an application. Therefore, security tooling used to scan the application must understand the framework code. Static analysis is a leading security mechanism to identify bugs and vulnerabilities in source code before deployment. A key benefit of static analysis is that it allows security to shift to the earliest phases of development, which reduces both the remediation cost and developer time spent resolving issues. In your search for the strongest static analysis tooling option to implement within your team or organization, consider the depth and breadth of scanning. Top tools available on the market bring nearly 400 checkers to bear on a codebase, detecting both bugs and security issues (sometimes the line between the two is blurry). Static analysis tools use a variety of techniques to analyze both framework and application code for vulnerabilities. 
For example, cross-module data flow tracking follows data as it originates from web requests and flows into the application’s functions, which helps identify sources and sinks for cross-site scripting (XSS) and injection scenarios. Look for a tool that detects issues with authorization, hardcoded passwords, certificate usage, insecure (non-SSL) communication, and issues relating to leakage of sensitive data. One big challenge in analyzing framework-based applications for vulnerabilities is the flow of control between the framework code and application code. In the traditional model, the application calls into libraries as needed. But modern frameworks use the inversion of control model, where the framework and its components become the main body of the application, transferring control into your application at specific points in specific contexts. Any control flow across subsystems can pass data, possibly tainted, from the web, into the framework, and on to the application—and tainted data coming in from untrusted sources are a potential source of injection attacks. Other issues related to framework code include buffer handling, certificate management, credential management, insecure connection settings, path manipulation scenarios, and so on. An application using a framework isn’t secure if its framework code has exploitable vulnerabilities or insecure settings. Therefore, scanning the application code alone isn’t enough. Static analysis must cover the combined stack of application code and the frameworks it uses. There are static analysis tools available that include a wide array of security-related checkers, many framework-specific. Checkers for Spring, Struts, Sequelize, and Socket.io are tuned to understand the frameworks’ behavior and interactions with applications. The result is high-accuracy findings and fewer false positives. Broad coverage in a static analysis tooling solution frees up developers to focus on the application features and functionalities that require their attention and expertise. Be sure to implement a solution that not only understands the programming languages you use but can also identify and understand the frameworks you build your applications on.
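To illustrate the source-to-sink idea in miniature (the examples above involve Java frameworks, so this Python sketch is purely illustrative), the first function below concatenates "tainted" request data straight into a SQL statement — the pattern a dataflow checker is built to flag — while the second uses a parameterized query, which keeps data separate from the statement.

```python
import sqlite3

def find_user_unsafe(conn, user_supplied_name):
    # Tainted input flows directly into a SQL sink via string concatenation:
    # this is the kind of finding an injection checker reports.
    query = "SELECT name FROM users WHERE name = '" + user_supplied_name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, user_supplied_name):
    # Parameterized query: the driver binds the value, so the same
    # source-to-sink flow is no longer exploitable.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (user_supplied_name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(find_user_safe(conn, "alice"))   # [('alice',)]
```

A framework-aware analyzer has to trace the same kind of flow even when the "source" is a controller entry point the framework calls into, rather than a function the application invokes itself.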
<urn:uuid:dc9daa30-7e1b-4923-a6df-d10f2a2bb7b8>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2019/07/22/framework-aware-sast/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00665.warc.gz
en
0.911024
1,130
2.765625
3
Online safety is hard enough for most adults. We reuse weak passwords, we click on suspicious links, and we love to share sensitive information that should be kept private and secure. (Just go back a few months to watch adults gleefully sharing photos of their vaccine cards.) The consequences of these failures are predictable and, for the most part, proportional—a hacked account, a visit to a scam website, maybe some suspicious texts asking for money. But for an often-ignored segment of the population, online safety is more about discerning lies from truth and defending against predatory behavior. These are the threats posed specifically to children with special needs, who, depending on their disabilities, can have trouble understanding emotional cues and self-regulating their emotions and their relationship with technology. This year, for National Cybersecurity Awareness Month, Malwarebytes Labs spoke with Alana Robinson, a special education technology and computer science teacher for K–8, to learn about the specific online risks posed to special needs children, how parents can help protect their children with every step, and how teachers can best educate special needs children through constant reinforcement, “gamification,” and tailored lessons built around their students’ interests. Importantly, Robinson said that special needs education for online safety is not about a handful of best practices or tips and tricks, but rather a holistic approach to equipping children with the broad set of skills they will need to safely navigate any variety of risks online. “Digital citizenship, information literacy, media literacy—these are all topics that need to be explicitly taught [to children with special needs],” Robinson said. “The different is, as adults, we think that you should know this; you should know that this doesn’t make sense.” Whether adults actually know those things, however, can be disputed. “I mean, as I said,” Robinson added, “it is also challenging for adults.” Our full conversation with Robinson, which took place on our podcast Lock and Code, with host David Ruiz, can be listened to in full below. The large risk of disinformation and misinformation The risks posed to children online are often similar and overlapping, no matter a child’s disability. Cyberbullying, encountering predatory behavior, interacting with strangers, and posting too much information on social media platforms are all legitimate concerns. But for children with behavioral challenges, processing challenges, and speech and language challenges in particular, Robinson warned about one enormous risk above all: The risk of not being able to discern fact from fiction online. “Misinformation and disinformation online [are] a great threat to our students,” Robinson said. “There were many times [my students] would come in and say ‘I saw this online’ and we would get into discussions because they were pretty adamant that what they saw is correct.” Those discussions have increased dramatically in frequency, Robinson said, as her students—and children all over the world—watch videos at an impossibly fast rate on platforms like YouTube, which, according to the company’s 2017 statistics, streams more than one billion hours of video a day. That video streaming firehose becomes a problem when those same platforms have to consistently play catch-up to stop the wildfire-like spread of disinformation and conspiracy theories online, as YouTube just did last week when it implemented new bans on vaccine misinformation. 
“I have students pushing back and telling me, no, we never landed on the moon, that’s fake,” Robinson said. “These are the things they’re consuming on these platforms.” To help her students understand how misinformation can spread so easily, Robinson said she shows them how it can be daylight outside her classroom, but at the same time, if she wanted, she could easily post a video online saying that it is instead nighttime outside her classroom. Robinson said she also encourages her students to ask if they’re seeing these claims made elsewhere, and she steers them to what are called “norm-based reputable sources”—trustworthy websites that can provide fact-checks while also removing her students from the progression of recommended online videos that are fed to them through algorithms that prioritize engagement above all else. “This is what we call building digital habits,” Robinson said, emphasizing the importance of digital literacy in today’s world. The promise of a “solution” to misinformation and disinformation online almost feels too good to be true, whether that solution equips special needs children with the tools necessary to investigate online sources or whether it helps adults without special needs defend against hateful content that is allegedly prioritized by one enormous technology company to boost its own profits. So, when Robinson was asked directly as to whether these teaching models work, she said yes, but that the models require constant reinforcement from many other people in a child’s life. Comparing digital literacy education to math education, Robinson said that every single year, students revisit the topics they learned the year before. She called this return to past topics “spiraling.” “Part of developing digital students into really successful, smart, discernible, digital adults is the ongoing, constant spiraling and teaching of these concepts,” Robinson said. “If you can collaborate with other content area educators in your building, you’re infusing these topics through subject areas.” Essentially, Robinson said, teaching online safety and cybersecurity to special needs children needs to be the responsibility of more than just a single technology teacher. It needs to be taken on by several subject matter educators and by parents at home. For parents who want to know how they can help out, Robinson suggested finding teaching moments in everyday, common mistakes. If a parent themselves falls for a phishing scam, Robinson said those same parents can take that as an opportunity to teach their children about spotting online scams. “It’s an ongoing work and it never stops,” Robinson said. Teach kids about what they like using To help special needs children understand and take interest in online safety education, Robinson said she always pays attention to what her students are using and what they’re interested in. This simple premise makes lessons both applicable and interesting to all students—not just those with special needs—and it provides a way for children to immediately understand what they’re learning, why they’re learning it, and how it can be applied. As an example, since so many of her students watch videos on TikTok, Robinson spoke to her students last year about the US government's reported plans to ban the enormously popular app. “The federal government was thinking of not allowing TikTok to be used here because it might’ve been a safety risk, and so we had that discussion, and I said ‘What happens if you couldn’t use TikTok anymore?’” Robinson said. 
Robinson said this tailored approach also gives teachers and parents an opportunity to help kids not just stay safe online, but also learn about the tools they use every day to view online content. The tools themselves, Robinson said, can greatly impact how a child with special needs feels on any given day—sad, happy, worried, scared, anything goes—and that children with special needs can often use guidance in self-regulating and understanding their own emotions. Robinson added that many of her lessons about online tools and platforms have a similar message: If a game or website or tool makes her students feels uncomfortable, they should tell an adult. It’s a rule that could likely help even adults when they find themselves gearing up to get into an online argument for little legitimate reason. Embrace the game Finally, Robinson said that many of her students enjoy using online games to learn about online safety, and she specifically mentioned Google’s Internet safety game called “Interland,” which parents can find here. Google’s Interland leads kids through several short “games” on online safety, with lessons centered around the topics of “Share with Care,” “It’s Cool to Be Kind,” and “Don’t Fall for Fake.” The browser-based games ask kids to go through a series of questions with real scenarios, and each correct answer earns them points while their digital character jumps from platform to platform. The website works with most browsers, but Malwarebytes Labs found that it ran most smoothly on Google Chrome and Safari. Interestingly, when it comes to lessons that Robinson’s special needs students excel at, she said they are excellent at creating strong passwords—and at calling people out for using weak ones. “I teach 100 students, 10 classes, [and] I used not a very strong password for every student in this one class … and I said ‘By the way, everyone has this [password],’ and they’re like, when I said everyone has this same password, they’re like ‘Oh no no! That’s not a strong password, oooh,’” Robinson said, laughing. “They literally let me have it.”
<urn:uuid:eb9172f4-fc85-4e3e-a628-733de0203dc6>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2021/10/what-special-needs-kids-need-to-stay-safe-online
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00665.warc.gz
en
0.958515
1,926
3.21875
3
‘Business Process Model and Notation (BPMN)’ is a method for business process mapping. That is, creating a visual representation of complex business practices or process flows. This is designed to give major stakeholders the clarity and perspective required to make informed decisions. The essence of BPMN 2.0 involves establishing the location of individual processes and departments, along with their relationships to each other. This style of diagramming is generally easier to understand than narrative text, allowing users to explain relationships that may otherwise require experts to interpret. The right BPMN tool could even do wonders for your organization’s planning efficiency.
Developed by the Business Process Management Initiative, BPMN 2.0 can be a highly valuable asset for a variety of stakeholders and business users, including external teams and third parties. BPMN diagrams provide these actors with the terminology to communicate their ideas and make decisions with a much greater sense of perspective. This, in turn, can ease collaboration and even boost morale, since everyone will be able to see the purpose and benefits of their own contributions.
How does BPMN work?
In essence, business process modeling techniques are a lot like flowcharts, just slightly more complex. They utilize graphical notation to outline process steps, the placement of departments, and so on in order to create blueprints for businesses. Observers can then pinpoint specific areas in order to understand how they relate to each other, as well as how they can be improved.
A BPMN model is made up of ‘pools’ and ‘swimlanes’. There will be multiple swimlanes within a single pool, with each representing different participants and responsibilities. Tasks and information can also flow between swimlanes, affecting them simultaneously. It goes without saying that different factors within a business are never entirely independent; by understanding the connections between them, you can get a clearer perspective of how the business works and how prospective changes may impact it. You can also understand how information and activities flow towards the endpoint of a process, such as a finished product or service.
A key part of this process is the terminology of BPMN. This can be applied to models of varying complexity, from simple hand-drawn diagrams to huge complex blueprints. Each will utilize the same shapes and terminology in order to make the finished blueprint accessible to all. BPMN 2.0 defines the following element types for business process diagrams (a minimal code sketch of how they fit together appears at the end of this article):
- Events: Triggers that start, modify, or complete processes, such as receiving a message or escalating an issue
- Activities: Tasks performed by people or systems. These can also include subprocesses, loops, compensations, multiple instances, and so on
- Gateways: Decision points that can adjust paths.
For example, a single decision could cause a process to go down Path A or Path B, leading to very different endpoints. Types include inclusive, parallel, and exclusive gateways.
- Sequence flow: The order of activities to be performed
- Message flow: Messages that flow across pools or organizational boundaries
- Associations: Artifacts or text attached to an event, activity, or gateway
- Pools and swimlanes: A pool is a major participant in a process. Different pools could represent separate departments or companies, though they will still be linked to the same process. A swimlane is an area within a single pool; swimlanes show activities and flows for certain roles and participants, and define who is accountable for which parts of the process
- Data object: Shows what kind of data is required for an activity
- Data group: Shows a logical grouping of activities but does not change a diagram’s flow
- Annotation: Further explanation attached to any part of a diagram (like comments in a document)
How can BPMN help my organization?
Now that you know how it works, you might be asking, ‘What is BPMN used for?’ BPMN offers a number of potential advantages to businesses of all sizes. The essence of its offering is clarity, as it helps users determine objectives and processes in a way that allows them to control their organizations with a greater sense of perspective. Once a business is familiar with the terminology of BPMN, it can also enjoy clearer communication between different stakeholders and departments. A crucial aspect of this is BPMN’s ability to link technical and non-technical audiences with a single, clear language. Sub-models can also be created to allow actors to view sections and pieces of information that are relevant to them. This can be done without having to make a business’s entire model transparent to all employees.
There are also a number of other potential benefits to consider:
- Align IT operations and software development with business strategies: As crucial as IT is to modern businesses, its complexities can often cause difficulties for decision-makers. Lacking the same knowledge as IT managers, they may struggle to see where IT fits in with wider business priorities. BPMN solves this problem by making IT and its position within a business easier to understand. BPMN software can also be employed to ease the process even further.
- Improve operational efficiency and cut down on wastage: With a greater understanding of a business’s processes and relationships, decision-makers will be in a much better position to boost efficiency wherever they can. With ongoing process improvement across the board, the business can enjoy greater productivity, more cost-effective utilization of resources, and a culture of continuous optimization.
- Enable quicker changes to the business: Businesses face new challenges all the time, such as changes to technology or new initiatives by competitors. The clarity offered by BPMN can make it easier to adapt to these challenges, allowing businesses to eliminate expensive guesswork. In short, a BPMN diagram not only shows where a business is failing but also helps users understand how to make improvements.
- Improve process communication and perspective: BPMN terminology and documentation can make team members certain of what they need to do, as well as how and why. This can also enable rapid knowledge transfer, making it easy to turn knowledge and experience into documented processes that everyone, including new employees, can learn from.
- Enjoy an adaptive process: The fluidity of BPMN means that it can constantly be adapted to include new technologies, frameworks, and processes.
Is BPMN the same as UML?
UML, or ‘Unified Modeling Language’, performs a similar role to BPMN. It is a standardized modeling language specifically designed for system and software developers. With a focus on objects, it is used to specify, visualize, construct, and document software artifacts. While their purposes are similar, UML is primarily for software elements. BPMN, on the other hand, is focused on processes and is far more applicable to business domains. For this reason, BPMN is generally the more widely used framework.
Why Study BPMN 2.0 with Good e-Learning?
The Good e-Learning BPMN Foundation & Practitioner course was created by e-learning specialists. It helps newcomers and practitioners get a practical understanding of the framework as quickly as possible. Key features of our BPMN 2.0 course:
- Rich graphics and video content
- Practice exams and quizzes
- Hours of course material
- Instant access for 6 months
- Certificate of completion
- Mobile app for offline learning
While there is no official exam to take, students will receive a certificate of completion upon finishing the course. The expertise on offer can also be highly lucrative, with some students reporting a 21% increase in their earning potential after studying BPMN 2.0. Good e-Learning also specializes in creating bespoke corporate training courses for businesses looking to upskill multiple employees at once. We have already partnered with hundreds of global blue chips to design courses that take their uniqueness into account, including their location, size, business goals, corporate culture, and, of course, budget. Want to find out more? Get in touch with Good e-Learning today to learn how we can set you on the path to your BPMN course!
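To make the element types described earlier concrete, here is a minimal, illustrative Python sketch. It is not the BPMN 2.0 XML interchange format or any particular tool; the process, node names, and decision values are invented for the example. It simply shows events and tasks connected by sequence flows, with an exclusive gateway choosing one outgoing path.

```python
# Illustrative only: a hand-rolled, minimal model of BPMN-style flow objects.
# Node names and the process itself are invented for this example.

PROCESS = {
    "start":        {"type": "startEvent",       "next": ["check_order"]},
    "check_order":  {"type": "task",             "next": ["order_ok?"]},
    "order_ok?":    {"type": "exclusiveGateway", "next": ["ship_goods", "reject_order"]},
    "ship_goods":   {"type": "task",             "next": ["end_shipped"]},
    "reject_order": {"type": "task",             "next": ["end_rejected"]},
    "end_shipped":  {"type": "endEvent",         "next": []},
    "end_rejected": {"type": "endEvent",         "next": []},
}

def walk(process, decisions):
    """Follow sequence flows from the start event, letting `decisions`
    pick a branch whenever an exclusive gateway is reached."""
    node_id = "start"
    path = []
    while True:
        node = process[node_id]
        path.append(node_id)
        if node["type"] == "endEvent":
            return path
        if node["type"] == "exclusiveGateway":
            node_id = decisions[node_id]      # exactly one outgoing flow is taken
        else:
            node_id = node["next"][0]         # tasks and events have a single flow here

print(walk(PROCESS, {"order_ok?": "ship_goods"}))
# ['start', 'check_order', 'order_ok?', 'ship_goods', 'end_shipped']
```

In a real BPMN tool the same structure would typically be serialized as BPMN 2.0 XML and drawn graphically, but the underlying idea is the same: flow objects connected by sequence flows, with gateways deciding which path is taken.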
<urn:uuid:40533053-13dd-4344-ab99-78c9301f0db6>
CC-MAIN-2022-40
https://www.goodelearning.com/courses/business-process/bpmn-training/what-is-bpmn
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00665.warc.gz
en
0.947183
1,830
2.71875
3
Environmental sustainability is increasingly an important competitive consideration for many of the most forward-looking enterprises. These companies are consistently striving for ways to optimize resources – and for many, that includes the resources deployed within their data centers. Therefore, more sustainable data centers are becoming more important. Almost all of the world’s internet traffic goes through data centers. The insatiable demand for internet connectivity is driving the consumption of power inside those data centers to unimagined levels. For instance, streaming Netflix for just one hour consumes enough electricity to drive a Tesla Model S roughly 20 miles. Sustainable Data Centers: Power Consumption to Reach 10% of Global Supply by 2030 According to the IEA, global data center electricity demand in 2019 reached almost 1% of total electricity and is forecasted to grow to consume almost 10% by 2030. Increasing power consumption and the CO2 emissions that result are a big concern for all of us. Power is a crucial consideration in the selection of the data center. The source of that power is growing in importance with the volatility of prices in the carbon energy market and the proliferation of corporate sustainability goals. In the US, renewable power generation is expected to outpace electricity demand growth by 2025, which should create opportunities to increase the mix of renewable power consumed in the data center. Increased internet traffic and the consequent growth in power consumption will continue to put economic and environmental pressure on forward-looking businesses. Squeezing every ounce of efficiency out of data center deployments only adds to that pressure. While servers get the most attention, network hardware can be a fairly sizable consumer of power and space – not to mention generating a fair amount of heat. These factors are leading many advanced digital businesses to consider alternatives in their networking hardware. As legacy fiber networks age, the hardware required to push bandwidth through those fibers becomes more complex. Older fiber networks with higher signal loss require more forgiving network hardware. These devices can be high consumers of power and generate a substantial amount of heat. The GSMA estimates the power consumed by network devices adds 20 to 40% to total operating expenses inside a data center. Single-Span Point-to-Point Links Can Leverage SFPs to Reduce Space/Power Needs Meanwhile, new fiber networks with low signal loss can be optimized with network hardware that consume far less space and power. Single-span point-to-point links can take advantage of small form-factor pluggables (SFPs) for distances inside of 10Km and in some cases up to 40Km. SFPs consume 10% of the power and take up a fraction of the space a typical WDM chassis would – not to mention, generating far less heat. Clients deploying this type of hardware are achieving 100Gbps and in some cases up to 400Gbps across those links. Critical to achieving that higher bandwidth is a ‘clean’ set of fibers (low signal loss) which is more likely to be found in new fiber networks. The Case to Replace Legacy Fiber Networks New, purpose-built fiber networks that minimize latency and signal loss are quickly becoming vital elements of the infrastructure for many corporations. Not only do they allow those businesses to push higher bandwidth, they also reduce the consumption of both space and power inside the data center. 
Those efficiencies contribute to more sustainable data centers and can add up to a material reduction in the carbon footprint of the digital enterprise. At Bandwidth IG, our network is brand-new and purpose-built to minimize latency and signal loss – and to maximize diversity. This makes our network ideal for deploying SFPs and thereby helping our customers reduce their carbon footprint. We don’t believe it is a leap of faith to say that older legacy networks will need to be decommissioned and replaced in order to support current and future sustainability imperatives. The need to support new technology growth, and what looks like permanently changed demand patterns, will continue to put pressure on older existing networks. When you add in the benefit of the substantially reduced power consumption required to run new fiber networks, the case to replace is a no-brainer.
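As a rough, back-of-the-envelope illustration of the power argument above, the sketch below applies the figure quoted in this piece (SFPs drawing roughly 10% of the power of a typical WDM chassis). The per-link wattage, link count, and PUE value are invented assumptions for the example, not measured or vendor-supplied data.

```python
# Back-of-the-envelope estimate only; the wattage, link count, and PUE below are
# invented assumptions for illustration, not measured or vendor figures.

WDM_CHASSIS_WATTS = 500        # assumed draw of a legacy WDM chassis per link
SFP_FRACTION      = 0.10       # article's claim: SFPs use ~10% of that power
LINKS             = 40         # assumed number of point-to-point links
HOURS_PER_YEAR    = 24 * 365
PUE               = 1.5        # assumed facility PUE (cooling/overhead multiplier)

legacy_kwh = WDM_CHASSIS_WATTS * LINKS * HOURS_PER_YEAR / 1000
sfp_kwh    = legacy_kwh * SFP_FRACTION
saved_kwh  = (legacy_kwh - sfp_kwh) * PUE   # include cooling overhead via PUE

print(f"Legacy network gear: {legacy_kwh:,.0f} kWh/year")
print(f"SFP-based gear:      {sfp_kwh:,.0f} kWh/year")
print(f"Estimated facility-level saving: {saved_kwh:,.0f} kWh/year")
```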
<urn:uuid:f38442e3-7faf-4a02-bc9d-dc875ab99718>
CC-MAIN-2022-40
https://bandwidthig.com/sustainable-data-centers-what-role-does-fiber-play/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00665.warc.gz
en
0.927448
836
2.90625
3
New Training: WAN Technologies
In this 8-video skill, CBT Nuggets trainer Jeremy Cioara teaches you about various wide area network (WAN) technologies, including service types, transmission mediums, and termination types. Watch this new networking training. Watch the full course: CompTIA Network+
This training includes: 51 minutes of training
You’ll learn these topics in this skill:
WAN Technologies: Understanding the LAN, MAN, and WAN Difference
WAN Technologies: WAN Transmission over Copper, Fiber, Wireless, and Satellite
WAN Technologies: WAN Types – ISDN/PRI, T1-T3, E1-E3, and OC
WAN Technologies: WAN Types – Cable, DSL, Dial-up
WAN Technologies: WAN Characteristics – Frame Relay, ATM, MPLS, and DMVPN
WAN Technologies: What is PPP and PPPoE?
WAN Technologies: What is SIP Trunking?
WAN Technologies: The Physical Reality of a WAN Connection
The Difference Between LAN, MAN, and WAN Is Private Ownership
A LAN is a group of devices that are interconnected to allow for communication. A defining characteristic of a LAN is that it’s privately owned. The network devices are generally owned by the same organization using the network. For that reason, LANs typically cover one narrow geographical area: a coffee shop, hospital, or college. A MAN covers a larger area than a LAN, but it remains a defined geographical region. Compared to a LAN, a MAN comes with all the drawbacks you’d expect from a larger network: more congestion and lower transmission speed. But the critical difference is that many of the components of a MAN are publicly owned — or at least not owned entirely by the networks operating on them. A WAN is different from a LAN or MAN in that it’s generally not defined geographically — it’s a huge interconnection of devices and networks from a number of different places. WANs are almost never privately owned and depend on public infrastructure and systems to maintain connection.
<urn:uuid:4ed62007-1cdb-4926-b121-1412a2d92cd0>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/new-skills/new-training-wan-technologies
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00665.warc.gz
en
0.903977
448
2.78125
3
Can We Trust Artificial Intelligence?
by Adrian Bowles, PhD
Arthur Clarke’s third law—Any sufficiently advanced technology is indistinguishable from magic (1973)—captures the essence of our current AI-trust dilemma. We are quite capable of constructing deep learning solutions that perform admirably, but whose performance cannot be sufficiently explained to satisfy some regulatory or consumer concerns. This explainability problem is one barrier to trust. Another barrier is the lack of rigor or standards for developing AI-powered solutions. There is nothing in the AI world comparable to the Software Engineering Institute’s Capability Maturity Models (measuring the quality of the development process) or the USDA’s Food Safety and Inspection Service processes. Trust is generally based on the fragile reputation of the supplier.
Can a Voluntary Reporting Approach Help?
IBM researchers recently proposed the development of a standard Supplier’s Declaration of Conformity (SDoC), or factsheet, for AI services to address this trust issue. The SDoC would be a document similar to a food nutrition label or an information sheet for an appliance. It would provide facts about important attributes of the service, including information about the processes used to design and test it. As the team currently envisions the SDoC, it would provide data about product-level—not component-level—functional testing. The sample checklist of process-oriented questions is aligned with IBM’s four pillars for trusted AI systems:
- Fairness: Training data and models should be free of bias.
- Robustness: AI systems should not be vulnerable to tampering, and training data must be protected from compromise.
- Explainability: AI systems should provide output that can be understood by their users and developers.
- Lineage: “AI systems should include details of their development, deployment, and maintenance so they can be audited throughout their lifecycle.”
The sample questions show how process-oriented the checklist is: Was the service checked for bias? If “yes,” describe the bias policies that were checked, the bias-checking methods, and the results. Was any bias mitigation performed on the dataset? If “yes,” describe the mitigation method.
IBM is proposing that the SDoC be voluntary, but with recent well-publicized examples of bias in facial recognition systems (e.g., falsely matching 28 members of Congress with mugshots from a public database), something like this is likely to become mandatory soon. Mathematically sound techniques to make programs provably correct have existed for decades, but they are cumbersome and generally impractical for large, complex enterprise applications. That doesn’t stop us from building these applications. Deep learning and other opaque modern AI techniques will continue to be developed and disseminated in products based on correlation data; for most applications, that’s a good thing. The pace of advancement in this area is much faster than the ability of most organizations—private and public—to objectively evaluate the processes and identify potential edge cases that may fail spectacularly. Efforts to develop practical quality and machine learning explainability solutions should continue, especially for mission- or life-critical systems. For most application buyers, however, insisting on SDoCs as part of the purchasing criteria looks like a good way to improve buyer confidence and help vendors earn their trust.
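To make the factsheet idea tangible, here is a minimal sketch of what an SDoC-style record could look like as structured data. The field names, service name, and answers are invented for illustration and do not reproduce IBM's actual factsheet schema.

```python
# Hypothetical, simplified SDoC/factsheet record; field names and values are
# invented for illustration and are not IBM's published schema.

sdoc = {
    "service": "example-credit-scoring-api",      # invented service name
    "fairness": {
        "checked_for_bias": True,
        "methods": ["disparate impact ratio on held-out test set"],
        "mitigation": "reweighting of under-represented groups",
    },
    "robustness": {
        "adversarial_testing": True,
        "training_data_access_controls": "role-based, audited",
    },
    "explainability": {
        "explanation_method": "per-decision feature attributions",
        "audience": ["end users", "model validators"],
    },
    "lineage": {
        "training_data_version": "2021-06 snapshot",
        "model_version": "1.4.2",
        "last_retrained": "2021-09-15",
    },
}

def missing_sections(record, required=("fairness", "robustness", "explainability", "lineage")):
    """Flag any of the four pillars a supplier left undocumented."""
    return [section for section in required if not record.get(section)]

print(missing_sections(sdoc))   # [] means all four pillars are at least documented
```

A buyer-side check like `missing_sections` is the sort of lightweight gate a procurement team could run before accepting a supplier's declaration.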
<urn:uuid:70341d1f-2d69-4dd9-b81f-e7a395d63407>
CC-MAIN-2022-40
https://aragonresearch.com/can-we-trust-artificial-intelligence/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00665.warc.gz
en
0.931065
675
2.640625
3
Researchers have identified groundwater in rural areas that is contaminated with artificial sweeteners originating from local septic systems. Researchers from the University of Waterloo tested groundwater in the Nottawasaga River Watershed and detected four artificial sweeteners in groundwater influenced by wastewater from septic systems in the local area. High concentrations of the artificial sweeteners remain unchanged in the water, and most water treatment systems cannot remove them from human wastewater effectively. Other substances associated with human wastewater include E. coli, viruses, pharmaceuticals, personal care products, and increased levels of nitrate and ammonium.
“Although the four artificial sweeteners measured are all approved for human consumption by Health Canada, the other septic contaminants also present in the water pose a health risk,” said John Spoelstra, first author on the study and an adjunct professor in earth and environmental sciences at Waterloo. “As for groundwater entering rivers and lakes, the effect of artificial sweeteners on most aquatic organisms is unknown.”
The study analyzed samples from 59 private wells; about 30 per cent of samples showed at least one of the four artificial sweeteners, indicating the presence of human wastewater. Between 3 and 13 per cent of wells contained at least 1 per cent septic effluent.
32 per cent of samples positive for sweeteners
The team also tested groundwater on the banks of the Nottawasaga River and found 32 per cent of samples positive for sweeteners. In rural areas, homes are not connected to a municipal sewer system. A septic system performs primary treatment by removing solids; further treatment occurs after the effluent is discharged to the septic drain field. Previous studies by the same group revealed the presence of artificial sweeteners in the Grand River, as well as in treated drinking water sourced from the river.
“We were not really surprised by the most recent results given what we’ve found in past studies,” said Spoelstra, also a Research Scientist with Environment and Climate Change Canada. “Septic systems are designed to discharge effluent to groundwater as part of the wastewater treatment process. Therefore, contamination of the shallow groundwater is a common problem when it comes to septic systems.”
<urn:uuid:12f8bf3c-4926-47a6-adae-c6277daa1f29>
CC-MAIN-2022-40
https://areflect.com/2017/11/07/groundwater-containing-artificial-sweeteners-indicate-contamination-from-septic-systems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00665.warc.gz
en
0.925203
451
3.4375
3
Smart energy for smart cities technology revenue is expected to reach $20.9 billion in 2024, according to a new report from Navigant Research.
Increasing pressure from climate change
The report, Smart Energy for Smart Cities, focuses on the smart grid and advanced energy technologies segments. Smart energy management is an important area for city leaders. They are under increasing pressure to take measures to deal with climate change, which means city leaders need to develop policies around energy efficiency and carbon reduction. Navigant Research says this has given rise to ambitious energy policies involving a range of innovations such as smart grid technology, demand management, alternative and renewable energy generation, and distributed energy resources.
Synergies and complementary technologies
The report points out that there are many synergies in the technology framework that makes up the backbone of a smart grid. Over the next ten years, cities, utilities, and third-party vendors are expected to increasingly seek out complementary technologies. This will be vital if they are to optimise the use of resources – both those of the city and the citizen – and to ensure that investment is used to best effect. The report says the smart energy for smart cities vision faces both economic and technical barriers which inhibit current development. The varying business models of cities’ utilities and private stakeholders mean that alignment between stakeholders can be difficult to achieve. Nevertheless, the report says that global smart energy for smart cities technology revenue is expected to grow from $7.3 billion in 2015 to $20.9 billion in 2024.
“Energy is the lifeblood of a city,” says Lauren Callaway, research analyst with Navigant Research. “Developing an integrated and sustainable energy strategy within the smart city framework is one of the most effective ways cities can contribute to their larger goals of addressing climate change, supporting citizen well-being, and fostering economic development.”
<urn:uuid:4b53af54-2b29-422b-93d0-d33a38704fa3>
CC-MAIN-2022-40
https://internetofbusiness.com/smart-energy-smart-cities-technology-revenue-reach-20-9-billion-2024/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00665.warc.gz
en
0.935828
381
2.5625
3
Power and cooling are the two most fundamental elements of data centers. An average-size data center consumes a large amount of power, and a large chunk of this power is used by cooling systems. Cooling systems keep the facility functioning, as they remove the waste heat produced by the IT equipment. Traditional data centers use a variety of cooling techniques; a combination of raised floors and CRAC / CRAH units is one example of a data center cooling setup. Meanwhile, modern data centers utilize innovative cooling technologies, from basic fans to complex heat transfer technologies that integrate AI, edge, and cloud computing to manage cooling systems. In this article, we look at the traditional data center cooling techniques of yesterday and the advanced cooling technologies of today. We will also discuss the use of edge and cloud technologies in data center cooling. Finally, we will look at the future of data center cooling.
Legacy Data Center Cooling
Photo Credit: Uptime Institute Journal
For many years, the raised floor technique has been deployed in data centers. Raised floors are used to deliver cold air to the servers. The CRAC / CRAH units flood the area under the raised floor with cold air; as the air is fed in, it pressurizes the plenum. Perforated floor tiles allow the cold air to escape, and these tiles are usually located either in cold aisles or in front of each rack. After moving through the servers, the waste heat returns to the CRAC / CRAH units to be cooled again. Typically, the CRAC units’ return temperature is used to manage the cooling system. The CRAC unit fans operate at a constant speed, and the unit also has a humidifier that produces steam. This has been a typical design in data centers for many years and is still widely employed today. Raised floors are still considered a basic element of data center infrastructure.
The legacy design depends on the principle of comfort cooling: bringing in a small amount of conditioned air and allowing it to mix with the larger amount of air in the space to achieve the right temperature. The legacy design is adequate for low-density IT equipment, since low densities can tolerate poor efficiency and uneven cooling. But the demand for high-density data centers is increasing, which means more efficient cooling systems are needed.
Datacenter Cooling Technology Today
Photo Credit: www.gea.com
Data center demand is increasing, whether for a small edge data center, an enterprise facility, or a hyperscale facility. All types of data centers depend on thermal management to operate, but their demands differ based on their sizes. Cloud technology is also utilized in today’s data center cooling: businesses are now switching to intelligent environmental sensor systems to monitor, control, and optimize their data centers. In 2014, a 3 MW data center was considered a large facility; now, 10 MW is the average facility size for hyperscale and enterprise data centers. Colocation and hyperscale customers are concerned with high-volume custom solutions that can address their needs, offering lower costs, fewer risks, and faster speed to market. Multi-story data centers are also becoming a trend; this reduces real estate expense and allows other cooling systems to be applied in other areas. Modern data centers can also tolerate slightly higher temperatures, as advised by ASHRAE: from 18°C (64.4°F) to 27°C (80.6°F). This means that operators can raise supply temperatures without worrying about damage.
Utilizing Edge And Cloud Technology
Edge and cloud technologies are now widely used in data centers, most visibly in the form of the Internet of Things (IoT) and 5G networks. Most businesses consider edge a vital element of their data centers, since these systems are responsible for increasing revenue as well as for data storage. Edge cooling systems can handle airflow, humidity, and temperature. You may opt for a remote monitoring system with an IoT mobile app and an online portal, which allows you to monitor and manage your facility anytime and anywhere. There are also edge systems that can be mounted on walls and ceilings, so you do not have to worry about space.
Eliminating Water For Cooling
Water consumption is also a big concern in the data center industry. Although water is an effective cooling medium, it is still a limited resource. Eliminating water from cooling systems remains controversial in the industry, and liquid cooling technologies may not be an option for some businesses: water treatment, cost, and availability can all be restricting factors. Other large data centers are switching to water-free cooling systems to eliminate water consumption and move toward their sustainability goals.
Utilizing Pumped Refrigerants
Photo Credit: www.foodengineeringmag.com
Pumped refrigerant systems are another trend in data center cooling. When optimized, they can increase energy savings by up to 50% and save up to 6.75 million gallons of water per year for a 1 MW data center, relative to a chilled water cooling system that uses vast amounts of water. It is a comparatively sustainable technology that still delivers efficiency. Pumped refrigerant is also cost-saving, as it does not need additional chillers or cooling towers. It is optimized based on ambient temperatures and IT load rather than fixed outdoor temperature setpoints, allowing data centers to capture 100% of potential economization hours.
Direct evaporative cooling systems are now employed in hyperscale facilities. This design uses cool outside air as the main cooling medium for most of the year, and a wet media pad to cool the incoming air during higher ambient conditions. Direct evaporative design is a sustainable option for cooling systems; it also costs less, since it uses relatively little water and less energy. Take note, however, that this system may not be applicable to all data centers, as some facilities need a much wider operating window for temperature and humidity.
Datacenter Cooling In The Future
With today’s technology, the world is only ever advancing. In data centers, applications like facial recognition and data analytics are evolving, and server processors are becoming more advanced. This provides an opportunity for better cooling systems. Hopefully, cooling systems in the future will become truly sustainable. Pumped refrigerant systems and free cooling systems are also likely to advance, lowering costs and bringing higher efficiency, and processors and servers will need innovation in their thermal management. Cooling systems will always be a mission-critical component in data centers. Choose the right cooling systems for your facility and take advantage of the modern techniques and technologies available today; this will help you improve efficiency as well as promote sustainability in your data center.
Keeping Your Cool In The Data Center
These systems are the primary means of keeping data center equipment cool in a variety of operations and environments. They all, however, necessitate constant supervision. Monitoring with DCIM in place will quickly identify hot spots, allowing systems to be slowed down, switched off, and replaced as needed. The unexpected emergence of a hot spot often signifies an impending equipment failure; being able to detect this before it occurs improves system uptime and availability.
Wireless Thermal Mapping of IT Cabinets
The wireless cabinet thermal map provides wireless thermal mapping of your IT cabinets. With three temperature sensors at the front and three at the rear, it monitors airflow intake and exhaust temperatures, as well as providing the temperature differential between the front and rear of the cabinet (ΔT). Wireless thermal maps work with all Wireless Tunnel™ Gateways. Thermal maps are integrated with AKCPro Server DCIM software in our cabinet rack map view. For more details on the cabinet thermal map sensors, view here.
AKCP Airflow Sensor
The AKCP Pro airflow sensor is designed for systems that generate heat in the course of their operation, where a steady flow of air is necessary to dissipate the heat generated. System reliability and safety could be jeopardized if this cooling airflow stops. The airflow sensor is placed in the path of the air stream, where the user can monitor the status of the flowing air. The airflow sensor is not a precision measuring instrument; it is meant to detect the presence or absence of airflow.
Data centers have become more complicated in recent years, and the influx of data has resulted in an influx of demand. A data center is a fragile structure that houses various IT systems and equipment, so it needs careful and efficient power and cooling systems. As IT systems generate heat, it is essential to remove this waste heat, and cooling technology is designed to maintain the correct climate for IT activities. Today, there are many cooling technologies that you may utilize in your facility, and you may combine different cooling designs and methods to achieve an efficient system suited to your facility. Take advantage of the advanced technologies we have today, especially if you have the budget to do so. Your data center is the heart of your organization; trust nothing but the best systems for your facility.
Naivas, Kenya's largest supermarket and online grocery delivery service, selected an AKCP Wireless LoRa™-based monitoring system for quality-control temperature and humidity monitoring of its cold storage environment. Installed in the Naivas beef butchery, cold storage, and dispatch areas, the L-DCIM provided centralized monitoring, graphing, and alerting. LBTH LoRa™ battery-powered dual temperature and humidity sensors were deployed in key areas, with easy installation and no communication cables or power required.
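As a simple illustration of how the intake and exhaust readings described in the thermal mapping section above can be turned into a ΔT check, here is a minimal Python sketch. The sensor values, thresholds, and cabinet name are invented assumptions; a real deployment would read these values from the monitoring hardware rather than from hard-coded numbers.

```python
# Illustrative only: computes the front/rear temperature differential (ΔT) for a
# cabinet and flags a possible hot spot. Readings and thresholds are invented.

front_sensors_c = [21.5, 22.0, 21.8]   # intake temperatures, top/middle/bottom
rear_sensors_c  = [38.9, 40.5, 43.2]   # exhaust temperatures, top/middle/bottom

ASHRAE_MAX_INLET_C = 27.0              # upper end of the recommended inlet range
DELTA_T_ALERT_C    = 18.0              # assumed alert threshold for ΔT

def check_cabinet(front, rear, name="cabinet-A1"):
    avg_front = sum(front) / len(front)
    avg_rear = sum(rear) / len(rear)
    delta_t = avg_rear - avg_front
    alerts = []
    if avg_front > ASHRAE_MAX_INLET_C:
        alerts.append(f"{name}: inlet {avg_front:.1f}°C exceeds recommended range")
    if delta_t > DELTA_T_ALERT_C:
        alerts.append(f"{name}: ΔT {delta_t:.1f}°C suggests a possible hot spot")
    return delta_t, alerts

delta_t, alerts = check_cabinet(front_sensors_c, rear_sensors_c)
print(f"ΔT = {delta_t:.1f}°C")
for alert in alerts:
    print("ALERT:", alert)
```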
<urn:uuid:a58ef5a5-0bab-41b4-b389-a77421f26f3f>
CC-MAIN-2022-40
https://www.akcp.com/articles/data-center-cooling-technology-now-and-in-the-future/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00665.warc.gz
en
0.934242
1,984
3.390625
3
The Metaverse is a popular term across the gaming and entertainment industry that refers to the theoretical 3D “virtual world” of the future. The term, which has roots in dystopian science fiction, can mean many different things to different people, and as such, can be difficult to explain to the layperson. Ultimately, however, the Metaverse will be defined by those who use it and create it. Today I’d like to dive deeper into the topic—what it is, and why we should care about it. While many companies out there would like to enable the Metaverse, today’s Internet is lacking some abilities that would make that possible. The Internet, as it currently exists, is inherently CPU-based—this will have to change if we hope to see anything resembling our vision of a Metaverse. Everything will need to be rendered in 3D, which will require more GPUs in more places than ever. Companies like AMD, Intel and NVIDIA will have to ramp up GPU production to the point where GPUs are nearly as ubiquitous in cloud computing as CPUs are today. That seems fairly difficult—the current global semiconductor shortage means the Metaverse might have to wait until 2023 or later. Another possible solution is distributed GPU computing through services like RNDR from OTOY. OTOY’s distributed rendering technology is currently used for film, TV, motion design and architectural visualization. It’s no wonder that NVIDIA’s CEO Jensen Huang is so excited about the Metaverse given the predicted surge in demand for GPUs. Many different opinions Many people understand the Metaverse to be anything that is both online, collaborative and immersive in presence. Others disqualify AR and 2D experiences from the category for being less immersive, engaging, or realistic. I personally believe it’s likely the Metaverse will eventually contain all forms of 3D immersive collaborative environments, whether they are open or not. Many people believe that the Metaverse must be open and cross-platform, allowing any user to interact with another without boundaries. Niantic’s CEO John Hanke recently penned a blog making the argument that the Metaverse is nothing less than a dystopian nightmare in the making. Hanke sees AR as a better solution. Then there’s Facebook’s Mark Zuckerberg, who recently created an entire Metaverse division within his company. He shared in a great recent interview with The Verge’s Casey Newton that he ultimately sees Facebook as “a Metaverse company.” As you can see, lots of different and opinions exist about what the Metaverse can and should be. Another interesting thing to keep an eye on is the rise in popularity of NFTs and 3D digital assets, which can be handmade in the digital world or scanned into 3D from the real world and used virtually anywhere, on any platform thanks to file formats like glTF and USD. NVIDIA’s Omniverse is a professional take on the Metaverse concept in which engineers and content creators collaborate with one another in VR, in real-time, working on the same 3D asset in a shared digital space. The game engine companies are in a heated battle to accumulate as much content creation capabilities as they can for the Metaverse. The most notable of these is Epic, who recently acquired Quixel and Sketchfab, and Unity, with its acquisition of RestAR and Pixyz. Ultimately, the Metaverse will very likely come in many different flavors. I have yet to see one unified vision of a 3D virtual realm that everyone agrees upon. I believe the Metaverse will instead become a series of open and closed ecosystems. 
All will exist in the cloud, in one form or another, and will be open to users of all kinds of devices, headset or not. While I have wholeheartedly enjoyed many of the Fortnite virtual concert experiences and numerous VR product launches I’ve attended, I wouldn’t necessarily consider any of them the Metaverse quite yet. It will likely be some years until we have the content and compute power to realize the Metaverse’s real potential. Today’s XR industry is relatively small compared to the smartphone and PC industries; it will take some time to catch up to those sectors in terms of experience and market size. Until then, we’ll have to keep speculating.
<urn:uuid:7464296b-1886-49fd-82da-a7383e44690c>
CC-MAIN-2022-40
https://moorinsightsstrategy.com/the-metaverse-debate-rages-on-whos-right-and-why-does-it-matter/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00665.warc.gz
en
0.950566
901
2.546875
3
More and more companies are moving to distributed clouds. This is a fundamentally new approach to cloud computing; analysts at Gartner have named the move to distributed clouds as one of the top ten technology trends of 2021. Let's see why distributed clouds are beneficial for business and what advantages they have.
What is a distributed cloud?
A distributed cloud is a type of cloud service that allows you to use centralized resources but, if necessary, start computing processes on local equipment. Cloud service providers can manage equipment in centralized and regional data centers, but this is not enough for a distributed cloud: it is also necessary to introduce so-called substations in strategically advantageous locations. In the case of edge computing, workloads are tied to the physical location of resources.
An example of a distributed cloud is a content delivery network (CDN), which is a geographically dispersed network infrastructure. It is designed for the optimized and fast delivery of content (most often video or audio) to users in different locations, significantly reducing download times. But distributed clouds are not only beneficial for creators and providers of media content; they can be applied in other areas of business, from shipping to sales. A distributed cloud can even be used with reference to specific geographical areas. For example, a file transfer service provider can use centralized cloud resources to format video and store content in multiple formats on geographically dispersed CDNs. In anticipation of increased demand for services in specific locations, it can place data in local storage in some residential regions, or even in 5G stations in densely populated areas, to ensure fast video downloads on mobile devices, which are being used more and more.
How is a distributed cloud different from a hybrid?
Distributed clouds are now entering the arena. While they are still a trend rather than commonplace, many confuse them with hybrid clouds. There is a fundamental difference between the two: both hybrid and distributed clouds can enhance business opportunities, but in the case of hybrid infrastructure, it is mostly about expanding the environment for computing. Distributed clouds enable edge computing and also expand the environment, but geographically.
What is the advantage of a distributed cloud?
A distributed cloud with associated edge computing is a natural trend. Business requirements have changed, and even hybrid cloud infrastructure no longer suits companies, especially when it comes to large corporations. This is primarily because distributed cloud services help avoid the gap between private and public clouds that often appears when using hybrid infrastructure. But a distributed cloud has other benefits as well:
- Reduced latency and improved performance. The closer the cloud resources are to a specific location, the faster the end-user will receive the results of computing processes (content delivery, data analysis, etc.).
- Expanded business presence. By introducing a distributed cloud into a company's work, you can increase the number and availability of computing zones.
- Reduced costs. Even though a hybrid cloud uses shared infrastructure, management is resource-intensive: the enterprise needs to control both environments, which requires hiring more specialized employees and, accordingly, spending more money. A distributed cloud can significantly reduce the financial burden.
- Reduced risk of network failure. Unlike a centralized cloud, distributing to different locations will help avoid large and lengthy problems. - Compliance with legal regulations. Different countries have different laws, and these businesses may not comply with local regulations. Edge Computing helps companies comply with country-specific laws. This is especially important in cases where specific data cannot be taken out of the state. - If you have to control and administer the private cloud yourself, the service provider will directly monitor the distributed cloud. This leads to a decrease in the cost of equipment administration and enables the business in the event of technical failures to concentrate on its tasks and not solve the problem with the help of its specialists. What is the future of distributed cloud? The transition to a distributed cloud is becoming one of the most important trends. But in the future, the technology will be actively developed, as analysts say. At least for now, cloud providers are busy installing and equipping substations that they will use for edge computing. According to experts' forecasts, by 2025, cloud services will dominate among other information and communication technologies, and at the same time, the popularity of distributed clouds will grow proportionally. Business has been moving to cloud services for a long time, and in 2021, clouds have become especially relevant. Specifically, businesses have begun, en masse, to order services from large suppliers. As a result, the profits of Google and other companies that provide cloud services have increased several times. This is just the beginning. The demand for clouds will only grow.
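The latency point made earlier, that serving users from the closest location shortens response times, can be illustrated with a small sketch. The region list, coordinates, and the crude latency model below are invented for the example and are not tied to any particular provider.

```python
# Illustrative only: picks the nearest edge location for a user and gives a
# crude round-trip-time estimate. Locations and the RTT model are invented.
import math

EDGE_LOCATIONS = {                      # hypothetical points of presence
    "eu-west":  (53.35, -6.26),         # Dublin
    "us-east":  (39.04, -77.49),        # Ashburn
    "ap-south": (19.08, 72.88),         # Mumbai
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_region(user_pos):
    """Return the closest edge region and a rough RTT estimate.

    The RTT model (about 1 ms of round trip per 100 km, plus 5 ms of fixed
    overhead) is a deliberately crude assumption, not a measured figure.
    """
    region, dist = min(
        ((name, haversine_km(user_pos, pos)) for name, pos in EDGE_LOCATIONS.items()),
        key=lambda item: item[1],
    )
    return region, dist, 5 + dist / 100

region, dist, rtt = nearest_region((48.85, 2.35))   # a user near Paris
print(f"Route to {region}: ~{dist:.0f} km, estimated RTT ~{rtt:.0f} ms")
```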
<urn:uuid:f88a2af8-c637-4026-a918-5b8570876428>
CC-MAIN-2022-40
https://www.networkcomputing.com/cloud-infrastructure/distributed-cloud-future-cloud-computing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00665.warc.gz
en
0.935028
953
2.8125
3
New Training: Time Management Strategies
In this 8-video skill, CBT Nuggets trainer Simona Millham covers a variety of tips and techniques you can use to manage your time better. Gain an understanding of time management techniques and how to use technology to be more effective with your time. Watch this new Professionalism training. Watch the full course: Soft Skills for Business Training
This training includes: 36 minutes of training
You’ll learn these topics in this skill:
Introducing Time Management Strategies
When’s Your Most Productive Time?
Planning Your Day
Dealing with Procrastination
Focus, Focus, Focus
The Pomodoro Technique
Being Smart with Technology
Reviewing Time Management Strategies
What is the Pomodoro Technique?
The Pomodoro Technique is a time management method in which you use a timer to segment tasks into intervals of 25 minutes, with each separated by a break. Each of these intervals is called a pomodoro, the Italian word for "tomato." They are called this because Francesco Cirillo, the creator of the technique, used a kitchen timer shaped like a tomato. Between individual pomodoros, you should take a break of 3-5 minutes; between sets of four pomodoros, you should take a longer break of 15-30 minutes. If you finish a task in less than 25 minutes, you should review what you have completed as well as the list of upcoming tasks. There are a number of variations of the technique, which vary the length of a pomodoro. You can find the Pomodoro Technique in many apps and websites that provide time management tools.
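The cycle described above maps directly onto a small script. The sketch below is an illustrative command-line timer, not any particular app; the interval lengths follow the classic 25/5/15-minute pattern but are easy to change.

```python
# Minimal illustrative Pomodoro timer: 25-minute work intervals, short breaks,
# and a long break after every set of four pomodoros. Durations are the classic
# defaults, not a requirement of the technique.
import time

WORK_MIN, SHORT_BREAK_MIN, LONG_BREAK_MIN = 25, 5, 15
POMODOROS_PER_SET = 4

def countdown(minutes, label):
    print(f"{label}: {minutes} minutes")
    time.sleep(minutes * 60)   # a real tool would update a display or notify here

def run(sets=2):
    for pomodoro in range(1, sets * POMODOROS_PER_SET + 1):
        countdown(WORK_MIN, f"Pomodoro #{pomodoro} - work")
        if pomodoro % POMODOROS_PER_SET == 0:
            countdown(LONG_BREAK_MIN, "Long break")
        else:
            countdown(SHORT_BREAK_MIN, "Short break")

if __name__ == "__main__":
    run()
```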
<urn:uuid:cf076b4f-3e78-489e-9efb-daa2842d1c16>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/new-skills/new-training-time-management-strategies
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00065.warc.gz
en
0.91296
368
2.546875
3
Cnet reported in early March of this year that a flaw had been discovered in the ROM (Read-Only Memory) of many of Intel's processor chips. The issue was announced in May of 2019 but updated in March. Many users have asked me how a processor could have errors in its functions. Processors have had errors or unexpected functions for years, but current systems have microprocessors that are thousands of times more complex than earlier ones; they more closely resemble the complex processors of the mainframe computers of decades ago, and are even more powerful than those were. Early large electronic computers had very simple processors with instructions that were created by wiring them directly together. Changes could be made by engineers by moving wires from one place to another or by adding tubes or transistors, depending on the era. This approach was similarly used in many single-chip microprocessors. These designs were inflexible: the operations of the single-chip processors could (generally) not be changed once they were created. The difficulty in fixing bugs led to the large computers being designed with an interesting approach: instead of executing stored programs directly, the processor looked in a special memory area for instructions on how to execute the stored programs. For instance, a simple instruction to add a and b might have micro instructions along the lines of "clear the adder, get a from memory, add it to the adder, get b from memory, add it to the adder". Presumably, the result would then be stored somewhere. Those microinstructions are called microcode and are stored in memory that the processor can access quickly. The operation of individual instructions (e.g. add or multiply) could be changed by changing the microcode for the instruction. (As an aside, the floppy disk was developed to load the microcode in IBM mainframes.) Some computers have changeable microcode and some do not. Modern microprocessors such as those from Intel and AMD do a lot more than execute the instructions of operating systems and programs. These processors have "engines" or "subsystems" that provide security or device management functions. These features make the processors more securable and easier to manage, particularly in enterprise environments. The issue is that as the systems get more complex, the possibility of errors grows. Errors in processing instructions are generally discovered before the chips are sent out to users. Errors in the other engines may not be discovered as quickly; they are generally discovered by security researchers who are specifically looking for vulnerabilities. In the case of the Intel vulnerabilities disclosed last year and updated this month, they were found by Positive Technologies in the Intel Converged Security and Management Engine and other subsystems. While difficult to exploit, they represent a potentially serious problem. Some aspects of the issue have been addressed with software changes, but some may not be able to be fixed because the software is in ROM. Intel has advised end-users of the processors (you, me, and the organizations with which we work) to take two important precautions to prevent bad guys from exploiting the bugs: keep the systems physically secure, and keep software updated. These are precautions we advise in e.g. Learning Tree Course 468, System and Network Security Introduction. These precautions are good for many reasons, and now they also help mitigate this threat.
These vulnerabilities are difficult to exploit and appear to require the ability to act during the boot process, so physical security is essential.
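To make the microcode idea discussed above concrete, here is a toy sketch that executes an ADD instruction as a stored sequence of micro-operations, roughly following the "clear the adder, fetch a, add, fetch b, add" outline given earlier. It is a deliberately simplified teaching model, not how any real Intel or IBM microarchitecture is implemented; the micro-op names and memory layout are invented.

```python
# Toy model of a microcoded machine: each architectural instruction is expanded
# into a stored sequence of micro-operations. Purely illustrative.

MEMORY = {"a": 7, "b": 5, "result": None}

# The "microcode store": one micro-op sequence per architectural instruction.
MICROCODE = {
    "ADD": [
        ("clear_adder",),
        ("load_to_adder", "a"),
        ("load_to_adder", "b"),
        ("store_adder", "result"),
    ],
}

def execute(instruction, memory):
    adder = 0
    for micro_op, *operands in MICROCODE[instruction]:
        if micro_op == "clear_adder":
            adder = 0
        elif micro_op == "load_to_adder":        # fetch an operand and accumulate it
            adder += memory[operands[0]]
        elif micro_op == "store_adder":          # write the accumulated value back
            memory[operands[0]] = adder
    return memory

print(execute("ADD", MEMORY)["result"])   # 7 + 5 -> 12
```

Fixing a bug in ADD would then mean replacing its entry in the microcode store rather than rewiring hardware, which is exactly the flexibility (and, when the microcode lives in ROM, the limitation) the article describes.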
<urn:uuid:7516231c-4db1-460d-9a18-7f635af7cfd2>
CC-MAIN-2022-40
https://www.learningtree.ca/blog/there-is-another-flaw-in-computer-processor-chips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00065.warc.gz
en
0.964442
705
3.890625
4
|Object COBOL Concepts| Object COBOL extends Micro Focus technology to enable object-oriented programming in COBOL, while retaining all the dialects and features previously available. This chapter explains: If you don't want to use OO you don't need to read this manual, and can use Micro Focus Object COBOL in the same way you used Micro Focus COBOL. To enable you to write OO programs in COBOL, Micro Focus has added the following components and features to the COBOL system: Object COBOL has a small amount of new syntax - one new verb (INVOKE), one new data type (OBJECT REFERENCE) and three new types of header paragraph Adds support for creation of objects, dynamic message binding, and inheritance to the COBOL run-time system A library of pre-defined objects which you can use as a foundation for building your own applications A tool for browsing and editing Object COBOL source code available on windows and OS/2. It provides you with a view of your application which is based on classes and methods. This manual is intended to help you get started programming with Object COBOL as quickly as possible. However, if you are completely new to OO you might want to do some background reading to familiarize yourself with the new concepts, and perhaps get some training in Object-Oriented Analysis and Design. There is a list of books in the section Reading List. The next two sections explain: The list below shows the basic steps to develop an OO application. >>To develop an OO application You need to see where inheritance hierarchies will help you to reuse code, and to see which objects will interact between each other and the messages they will send. Windows and OS/2: You can use any text editor for coding; we recommend you use the Micro Focus Class Browser. This gives you templates to help you construct classes and methods quickly, and also enables you to navigate methods and classes faster than a conventional editor. You can use any text editor to code Object COBOL; we recommend the Micro Focus Editor. Alternatively, you can develop and debug your application on Object COBOL or Workbench for Windows NT, and transfer the code to a UNIX platform for production. Apart from GUI programming, which is not supported on UNIX, Object COBOL code is portable between OS/2, Windows NT and UNIX. Windows and OS/2: On Windows and OS/2 platforms,the Class Browser provides functions to compile individual classes, a class and its subclasses, or all the programs in your application. Steps 1 and 2 are OO analysis, and fall outside of the scope of this book. Some information sources to help you get started are provided in the section Learning More About Object-Orientation. The next section explains how you can use the documentation to help you with steps 3 to 8. When you start developing OO applications with Object COBOL, you learn about three things: How OO is handled by Object COBOL. There is very little new syntax to learn. The development environment for OO will be familiar to any previous users of Micro Focus COBOL. Windows and OS/2: On Windows and OS/2 you have an additional tool; the Class Browser. This is a GUI tool which simplifies writing and reading Object COBOL source code. It is integrated with Animator V2 and the Object COBOL Compiler. The Class Library gives you a set of ready-made objects which you can use in your applications. The tutorials in this book give you an introduction to the main types of classes in the Class Library, so that you can get a feel for what is in there. 
You don't have to know the entire Class Library to begin programming. This manual is divided into parts for ease of use: Consists of this chapter and the chapter Object COBOL Concepts. Read the Object COBOL Concepts chapter to get a quick overview of the key points about OO and Object COBOL. The tutorials show you how to send messages, write classes, and start using the supplied class library. The chapter Object COBOL Tutorials briefly describes the subjects covered in the tutorials, and explains how you use them. The tutorials in Part II are for Object COBOL on Windows and OS/2, the tutorials in Part III are for Object COBOL on UNIX. Object COBOL syntax and behavior is the same on all environments, but the development environments are different. The Windows and OS/2 tutorials exploit the Class Browser and graphical environment, while the UNIX tutorials are driven from command line tools. Also, the Windows and OS/2 tutorials use the GUI classes which are not available on UNIX. Refer to the chapters in this section once you start programming. The first two chapters Using Objects in Programs and Class Programs deal with using and writing Object COBOL classes. If your interest is to be able to use Object COBOL classes supplied by others as building blocks, you only need to read the chapter Using Objects in Programs, but if you are going to write your own classes you need to read both. The chapter Extending a Class documents a Micro Focus feature for dynamically adding behavior to a class at run-time. The chapter Requirements-Based Vocabularies shows you how you can extend Object COBOL syntax and define your own verbs and functions for sending messages. The chapter Debugging Object COBOL Applications explains some of the extra facilities you will find useful in debugging any program which uses Object COBOL classes and objects. The chapter Compiling, Linking and Shipping OO Applications supplements the information supplied in the other manuals in your Object COBOL documentation set. It will help you get an application ready for delivery to end-users. Documents the main frameworks provided in the supplied Object COBOL Class Library. The Class Library is a big subject in its own right; the section Exploring the Class Library later in this chapter will help you to find your way around. Documents the use of objects through Object Request Brokers (ORBs) like SOM and OLE with Object COBOL. Contains the appendix Descriptions of OO Run-time Switches. This is reference information on switches you can set to alter the behavior of OO applications at run-time. The glossary contains a list of the new terms introduced by this book. The Class Library contains public classes and private classes. Private classes are not part of the public interface and are not fully documented - for an explanation of what Micro Focus means by public and private, see the sections Private Interface and Public Interface in the chapter Introduction to the Class Library. To help you get started with the Class Library, we have provided four sources of information, moving from introductory and "how-to" information, to detailed reference information: For each class, it gives you a description of its use, and a list of all the public class and instance methods which the class either implements or inherits, except those inherited from Base. All classes inherit the methods in Base, so these are only documented under the description for Base. Enter the following command line: cobrun hyhelp ooclib! 
Windows and OS/2: On Windows and OS/2 you can look through the Class Library sources using the Class Browser. The Class Browser is also very useful for browsing through your own in-house libraries of classes.

The documentation supplied with Object COBOL will help you to start programming in Object COBOL. However, if you are new to OO, learning about the principles of Object-Oriented Design and Analysis (OOD and OOA) will enable you to make the best use of this new technology. The reading list below suggests some texts you might want to start with. There are also several training organizations who run language-independent courses on OOD and OOA. Additionally, you can get a computer-based training package from Micro Focus Publishing. Micro Focus Training also runs Object COBOL programming courses.

This is a short list of texts dealing with Object-Oriented methodologies and technologies:

- Grady Booch: Object-Oriented Design. Benjamin/Cummings, 1994. ISBN 0-8053-0091-0.
- Ivar Jacobson: Object-Oriented Software Engineering. Addison-Wesley, 1992. ISBN 0-201-54435-0.
- James Rumbaugh: Object-Oriented Modeling and Design. Prentice Hall, 1991. ISBN 0-13-629841-9.
- Raymond Obin: Object Orientation - An Introduction for COBOL. Micro Focus Publishing, 1993. ISBN 1-56928-005-3. Aimed at existing COBOL programmers who want to learn about object orientation; also provides a sketch of the forthcoming ANSI extensions to the COBOL language.
- Sally Shlaer and Steve Mellor: Object-Oriented Systems Analysis: Modeling the World in Data. Prentice Hall, 1988. ISBN 0-13-629023-1.
- Sally Shlaer and Steve Mellor: Object Lifecycles: Modeling the World in States. Prentice Hall, 1992. ISBN 0-13-629940-7.
- David Taylor: Object-Oriented Information Systems: Planning and Implementation. John Wiley. ISBN 0-471-54364-0. Aimed at managers who need to make informed decisions for successful system installation and development.
- Rebecca Wirfs-Brock and B. Wilkerson: Designing Object-Oriented Software.

The following courses have been developed by Object Management Labs and are available through Micro Focus Publishing:

- For managers and executives needing an overview of object-oriented analysis and design.
- For programmers and technical managers involved in object-oriented projects.
- For programmers wanting to learn OO programming with COBOL.

Copyright © 1999 MERANT International Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
IoT in the Sky Updated · Jul 02, 2015 WHAT WE HAVE ON THIS PAGE One industry ripe for IoT-driven improvement is aviation. A McKinsey Global Institute report cites the potential impact of IoT in the sky as being anywhere from $900 million to $2.3 billion by 2025. It won’t be easy. The aviation industry uses a complex patchwork of systems, thanks to a large and varied group of stakeholders. The airlines own the aircraft and ground maintenance; communications systems are owned and operated by telecom and satellite providers; the engines themselves may be owned by the manufacturer and leased out to the airlines; and flight operations are controlled by the likes of the Federal Aviation Administration (FAA) and its European counterpart EuroControl. This multi-tenant set-up makes it tough to deliver the comprehensive services demanded by IoT technologies. A major piece of the puzzle is Automatic Dependent Surveillance-Broadcast (ADS-B), a system that uses GPS signals to determine the location of aircraft. There are many land-based ADS-B systems in North American, Europe and metropolitan areas around the world. But there is no coverage in remote areas such as over oceans, mountains, deserts or the poles. These gaps in coverage explain why a plane such as Malaysian Airlines Flight 17 can seemingly ;disappear from the face of the earth. A plan is in place, however, to erect scores of satellites around the globe which will be able to pinpoint airline locations everywhere. Establishing an IoT Infrastructure “Ground-based ADS-B stations give limited coverage, but soon our coverage will be 100 percent,” said Chip Downing, senior director of Aerospace Defense at Wind River, a subsidiary of Intel which has an operating system that is used by many airlines. This satellite network is being installed by Aireon, which is part of satellite communications provider Iridium. According to Iridium, airline fuel savings alone could amount to $8 billion by 2030. Savings will result from combining ADS-B data with engine IoT information, weather data and information from all other planes in the vicinity, then analyzing the data in near real time to make better decisions about altitudes, route planning, velocity, the influence of wind and maintenance schedules. “This means you don’t need a manned controller looking at a radar screen to keep flights separated,” said Downing. “As a result, planes can fly closer while maintaining safety.” Many ADS-B devices run on top of a Wind River operating system known as VxWorks. An edge management system known as Wind River Helix Device Cloud collects data from sensors or devices and puts it into the cloud for analysis. Speedier Data Analysis Older systems used to gather the data, process it, store it and then send it elsewhere for analysis, Downing explained. That is too slow for the needs of IoT. Now data is siphoned off at the point it is generated, he said. One feed goes on the normal route and the other goes directly to the point of analysis in the cloud. “To be accurate and responsive enough, you need to go directly to the cloud and not just via control systems,” Downing said. “That way, we are able to send the control system alerts or guidance in near-real time.” Such systems make it possible to analyze data mid-flight rather than waiting to download it all upon landing. 
This allows engineers to tune engines more precisely, alter flight paths to avoid bad weather, take advantage of better winds on a slightly different course and, perhaps most importantly, set up maintenance actions on the ground so that crews are ready well in advance of the plane’s arrival. Wind River and Intel are putting the pieces together for a platform that will underpin other systems. Engine supplier GE Aviation and airline OEM Boeing, for example, can plug their various applications on top of Wind River VxWorks. “Many vendors can add new capabilities and create a richer aviation environment using IoT data,” Downing said. Another company lining up its IoT platform for use in aviation is Pivotal with its Big Data Suite. Its approach, said Raghvender Arni, senior director, Platform Strategy at Pivotal, is to provide the analytics capabilities and platform, and let airline specialists build what they need on top of it. It works with the likes of GE Aviation and Honeywell to help them deal with up to 1 terabyte of data per flight. “That number is growing all the time as GE continues to add more sensors to its jet engines,” said Arni. “The company wants a finer grained view of what is happening inside its engine.” Transmitting, storing and digesting that amount of data with so many planes landing every minute of every day can be daunting. One current approach to dealing with bandwidth limitations is to perform some lightweight data processing and analysis mid-flight and upload the bulk of the data into a Hadoop data store once the plane lands. The tradeoff between the desire to send the data and the cost of doing so is throttling down an all-out IoT aviation data torrent. Drew Robb is a writer who has been writing about IT, engineering, and other topics. Originating from Scotland, he currently resides in Florida. Highly skilled in rapid prototyping innovative and reliable systems. He has been an editor and professional writer full-time for more than 20 years. He works as a freelancer at Enterprise Apps Today, CIO Insight and other IT publications. He is also an editor-in chief of an international engineering journal. He enjoys solving data problems and learning abstractions that will allow for better infrastructure.
Apologies in advance, this is a bit of a connective blog entry – this is a big topic, and it needs some scene setting, basic understanding and several weeks worth to get the most out of it. We live in a connected world now – my other half was showing me a washing machine with a WiFi connection and an associated iPhone App that would allow you remote control of and reporting about your intimate garments spin cycle ! I wonder if that is really necessary to be honest, as even if it has finished, knowing that while I’m in the office and the washing machine is at home is a complete waste of electrons. The network, and the connected nature of things is what allows us as penetration testers to attempt to compromise the security of a company without going anywhere near it. There are other aspects to full scale penetration testing as I’ve alluded to before – with social engineering and physical attack ( lock picking, not baseball bat ) parts of such a scope – but a majority of the work is computer and network based. To that end, a good understanding and working knowledge of networking is pretty much a job pre-requisite. So, rather than giving you a lesson myself, I’ll give you a quick and dirty set of online references – this won’t make you an expert by any stretch of the imagination, but hopefully it will get us through the rest of this section without too much head scratching.1 - The OSI Model - Internet Protocol (IP) - Transmission Control Protocol (TCP) - User Datagram Protocol (UDP) I would apologise for the laziness on my part, however I subscribe to Larry Wall’s school of thought that it is a virtue – if someone else has done it well enough already, why spend time re-inventing the wheel. The corollary of that is, if you find that there isn’t a good explanation of something in that set that you’d like to understand better – add a comment on the bottom of this post and we’ll bring it up to scratch ( perhaps both here and at Wikipedia 😉 ). So seing as you all now fully understand TCP/IP packet structure and know your URG from your SYN … ( It’s ok, I’m only joking. ) We are fortunate that in reality, we have some amazing tools available to us that include all of the low level things done for us already. I am going to profess a view though that, like forensics, you shouldn’t rely on the output of a tool that you don’t understand the inner working of and, that you couldn’t reproduce and/or verify the results of at a binary level. There are plenty of PenTest ( and Forensic ) companies out there who get cheap, unqualified labour to run automated tools and then publish the results as gospel – occasionally with disastrous results – please, please, please don’t add to them. To that end, I’m going to introduce a few tools this week, and next time we are going to build a small lab and run a few scans and look at the network traffic and the results. First off, our listening post, Wireshark (nee Ethereal). Wireshark is a network protocol analyser, given a promiscuous network port on a machine it will sit and listen to all traffic that it can see on its segment.2 I love Wireshark, and actually, as a general purpose network trouble shooting tool, it’s pretty hard to beat. It can colour, track and decode flows across a wide range of protocols and applications, and best of all – it’s free. Secondly, our port scanner, NMap. Whilst, as they say on the BBC, other products exist, frankly I don’t see any reason to use them. 
NMap has been around for nearly as long as I have with early editions out in 1997, it has grown since then to one of the most comprehensive ( if not the most comprehensive ) tool of it’s type. There are graphical front ends and countless enhancements, and it is cross platform with clients for pretty much anything you might want to run it on and it plugs into dozens of other PenTest tools ( MetaSploit and Nessus [ which we will get to later ] amongst them ). For now, I’m going to leave it there I’m afraid, I am trying to keep this in bite-size chunks and if I go into any more detail today I’m really going to over run. As a preview though, next time we are going to build a test lab using virtualisation, which we are going to continue to use for subsequent exercises, and we are going to run a range of port scans using NMap and see what we get back and what we can see in Wireshark while we do it. I’m also hoping to get some usage out of my new toy and see if we can’t get some demo video tutorials available to go with the text content. I intend to make downloadable VMs that you can easily use on a number of platforms, so hopefully this won’t be too painful an experience ! 1. Above all other material in this area I would recommend, without hesitation, TCP/IP Illustrated: The Protocols v. 1. This is a phenomenally detailed book, that actually isn’t that bad to read, and is an excellent reference moving forward. In fact, I now own two copies, as I’ve found out through writing this that it has been updated to cover IPv6 late last year – so I’ve put my money where my mouth is ! 2. Where a network is broken down into sections or segments with routers and switches ( rather than hubs ) – traffic is actively filtered by the networking devices, restricting the amount that can be seen by a sniffing device – worth remembering if you are wondering why you can’t see something and also if you are designing a secure network …
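As a small preview of the lab exercises promised above, here is a hedged sketch that drives an Nmap scan from Python and prints the raw output. It assumes Nmap is installed and on the PATH, and it should only ever be pointed at hosts you own or are explicitly authorized to test; the target address and port range are placeholders for your own lab VM.

```python
import subprocess

target = "192.168.56.101"   # placeholder: a VM in your own test lab
# -sS (SYN scan) requires root/administrator; swap for -sT if unprivileged.
result = subprocess.run(
    ["nmap", "-sS", "-p", "1-1024", target],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```

Running this while Wireshark is capturing on the same segment is a good way to see exactly what the scan puts on the wire.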
Cloud computing was one of the defining technologies of the 2010s. Companies increased their investments in cloud-based compute, storage and networking resources to run applications and store data with greater flexibility and scalability than ever before. But now with the growth of hybrid cloud architectures and the rapid evolution of artificial intelligence (AI) and machine learning (ML), the edge - and how it connects to the cloud and to the data center - has become a more central concern. What is the edge (and what is edge computing)? The edge is a hardware location at the periphery of a network, where data is collected; it exists outside of a data center. That data may be passed on to a data center or cloud without modification, or subjected to further processing before going there. Red Hat has made a distinction between “the edge” and “edge computing,” with the latter referring to the act of processing data before transmitting it anywhere else. The edge itself consists of devices that store data. Why is the edge important? Although cloud environments provide vast resources at economical cost, they aren’t ideal for the most performance-sensitive use cases. These include sending huge amounts of data needed for training an AI model, or running a workload that pushes a graphics processing unit (GPU) to its limit. In these instances, the sheer physical distance between the cloud and where the data is collected means that there will be noticeable latency. The volume of information being transmitted may also create bottlenecks with bandwidth. For example, AI algorithms must be fed a lot of labeled data before they can function properly. Collecting, storing and processing that data at the edge will allow for quicker, more economical AI training than trying to do the same in a faraway cloud. Or consider the case of a hybrid cloud. An edge storage solution could provide a critical piece in connecting the data center/private cloud to a public cloud, helping to manage the flows of large quantities of data between diverse environments. Overall, the right hardware and software on the edge help: - Make data more readily accessible and available for demanding use cases. - Provide top-notch performance that minimizes latency. - Support the growth of AI and hybrid cloud computing initiatives. IDC has predicted that up to 50% of all enterprise IT infrastructure will be deployed at the edge by 2023, compared to just 10% in 2019. Meanwhile, according to Grandview Research, the global AI market is expected to expand at a 40% compound annual growth rate from 2021 to 2028, underscoring the need for resilient edge solutions. Enter IBM Spectrum Fusion and Elastic Storage System (ESS) 3200. What are IBM Spectrum Fusion and ESS 3200? IBM, a pioneer in hybrid cloud computing, has a broad range of edge offerings, designed to support better data accessibility and availability, with the two below being among the most prominent: IBM Spectrum Fusion This is a container-native hybrid cloud storage platform, built for Red Hat OpenShift. It enables: - Straightforward global access to data, from any resource. - Resiliency and scalability for mission-critical workloads. - Faster business insights, e.g. from AI algorithms. - Consistency across hybrid cloud environments, thanks to containerization. - EAsier development of cloud-native applications. 
IBM Spectrum Fusion’s first incarnation is in a hyperconverged infrastructure (HCI) configuration known as IBM Spectrum Fusion HCI, according to Steve McDowell of Moor Insights and Strategy, writing for Forbes. This offering combines high-performance hardware and software to support GPU-accelerated applications, hybrid cloud integration and unified management of containers and virtual machines, among other capabilities. The ESS 3200 is a software-defined edge storage appliance built for AI and hybrid cloud modernization. Compatible with all ESS models, the ESS 3200 combines super-fast NVMe storage with containerized software and a compact 2U enclosure. This solution helps accelerate the time to value for AI and other projects, thanks to high throughput for, and smart management of, data stored on the edge. It may be deployed by itself or in combination with other ESS hardware. Join us for a Spectrum Fusion and ESS 3200 webinar InfoSystems is hosting a July 8, 2021 webinar that will dive into more detail on both of these solutions within the context of the edge. Register now or connect with the InfoSystems team for more information. For over 25 years, InfoSystems has provided reliable IT solutions to build and maintain strong and secure systems for both SMB and enterprise organizations. Headquartered in Chattanooga, TN, our trusted team of experts specialize in traditional infrastructure, IT optimization and cybersecurity services, as well as next gen solutions such as hybrid cloud and artificial intelligence, from partners such as IBM, Dell Technologies, Red Hat, VMware and Cisco.
The only thing more essential than online privacy protection, in general, are the laws, rules, and regulations designed to protect the most vulnerable among us: our children. Here’s what you need to know. Wow! Did you know that as of June 20, 2017, there were approximately 3.88 billion Internet users worldwide? To put this into perspective, that’s about 51.7% of the total population of planet Earth! However, most discussions regarding the ubiquitous use of digital resources fail to place proper emphasis on the people who are actually being affected the most: our children. Make no mistake, primary age children are more connected, more mobile and more social than their parents or grandparents ever were thanks to the wonders of the World Wide Web. This poses a number of interesting (and potentially dangerous) implications in terms of online privacy that are essential for us to explore. Children and the Internet: Facts and Figures In a world where 21% of children between kindergarten and second grade have cell phones, do we really think that we’re doing enough to protect their privacy on the Internet? Also, consider statistics like the following compiled by the experts at GuardChild.com: It’s true that steps have been taken to protect the privacy of children online. Most social networking sites require users to be at least 13 years old before they can get approved for an account. However, these “safeguards” are incredibly easy to get around with just a few quick mouse clicks. It’s clear that something more must be done, which is where regulations like COPPA come into play. What is COPPA? Short for “Children’s Online Privacy Protection Rule,” COPPA was created by the Federal Trade Commission in the United States in an effort to better protect children when they use the Internet. It imposes a number of requirements on both website and online services that are specifically directed at those under 13 years of age. It details what personal information can be collected, how that information must be stored and protected, and much more. The law itself was proposed in 1998, and first went into effect in April of 2000. As technology in general and the Internet have evolved rapidly since then, it should come as no surprise that the law has done the same. Some of the more important requirements of COPPA for compliance include, but are not limited to the following: As one might expect with a topic this important, COPPA violations can be incredibly severe. In 2006, for example, the website Xanga was fined $1 million for repeatedly allowing children under the age of 13 to sign up for and to use accounts without getting verifiable parental consent. Mrs. Fields Cookies, The Hershey Company, Kidswirl and Imbee are also examples of high profile sites that have been on the receiving end of COPPA violations in the past. Online privacy protection is one of the most important topics of the digital era. However, during all of the discussions about massive data breaches and business intelligence, we must not forget to acknowledge and look after those who need it the most: our children. Kids these days are being exposed to technology and the Internet at an increasing rate, and at a far earlier age than any other generation in human history. It’s up to all of us to work together to make sure they can enjoy the many advantages, with as few of the downsides as possible.
Information classification can be simply defined as the process of assigning an appropriate level of classification to an information asset to ensure it receives an adequate level of protection. Why is information classification relevant to ISO 27001? Information classification is a key part of any ISO 27001 project. In the standard, control objective A7.2 is titled ‘Information Classification’. The objective of this control is “to ensure that information receives an appropriate level of protection”. The way that organisations go about implementing this control is by developing a set of information classification guidelines that detail how information should be classified using labelling or marking and deciding how this information should be handled once it is classified. For example, an organisation may choose to have three or four levels of classification, such as Restricted, Confidential and Public. They will then provide examples for each of these in their classification guidelines and detail what measures should be in place before any information crosses the organisation’s boundaries. How can information classification be made simple? Some organisations choose simply to add classifications to Microsoft Word or other electronic documents manually, but this is prone to human error. Others have old-fashioned stamps to apply classifications to each physical document. And again, this is prone to human error. The simple answer is through an information classification software solution such as Boldon James Classifier. If you want to ensure your information is classified in the right way and that your classification guidelines are enforced, Classifier is the solution you need. For more on implementing ISO27001, visit the IT Governance website
Necessity and Standards of Electrical Wiring Color Codes Why Are Electrical Wiring Color Codes Important? Electrical wiring color codes matter a lot even if there are safety features such as the fuse, the double insulation design and the earth wire in plugs and appliances for electrical connection. For example, in a home electrical system, there are usually three wires/lines in a power cord or in the whole power link entering into buildings. One wire brings electricity to the appliance and one wire completes the circuit by taking electricity away from the appliance, these two wires are called “live” and “neutral/zero” respectively. The third wire is the earth wire designed for important safety considerations. Imagining that a power link has been cut into two parts for an insertion of an additional appliance to support more electrical devices between the two parts. What would you do if there is no distinction of the three wires? If there is any mistake resulting in the misconnection of these wires, for example, the live wire is wrongly regarded as neutral wire or earth wire, big safety problems may occur. People dealing with the wires may get electrocuted or the circuit won’t work due to the wrong connection. Therefore, keeping the consistency of the wire colors before and after adding the appliances is vital. Given such potential hazards, electrical wiring color code standards is a necessity to be made. The electrical wiring color code standards not only help in appliance addition, but also offer support when there is a need to replace the old wires with new ones. Electrical Wiring Color Code Standards It is important to abide by the electrical wiring color code standards for safety concerns. Electrical wiring color code standards are specified according to different power distribution branch circuits: AC (alternating current) and DC (direct current). AC is widely used in distribution and transmission electrical networks. In the low-voltage (the normal voltages of 240V and 415V delivered to most customers) distribution network, the transmission line generally adopts a three-phase four-wire system, of which three lines represent the three phases of L1, L2, and L3 respectively, and the other is a neutral line N. The neutral line in a three-phase four-wire system is different from the zero line or neutral line which enters the user's home in a single-phase transmission line. The zero line normally passes current to form a current loop in the single-phase line, while in a three-phase four-wire system, the three phases form a loop, and the neutral line is normally without current. In the 380V low-voltage distribution network, the N line is set to obtain the 220V line voltage from the 380V phase voltage, or for zero-sequence current detection in order to monitor the balance of three-phase power supply. Figure 1: Example of Electrical Wiring Systems Note: Before the power supply enters the home, the PEN (Protective Earth and Neutral) will be grounded and then divided into PE (earth wire) and N. This grounding system is called TN-C-S by IEC 60364 or GB 16895. DC distribution is limited to use in situations such as tramway and traction systems with a voltage of usually 600V, railway DC traction systems with a voltage of 1.5 kV between rail and overhead collector wire, lifts, printing presses and various machines where smooth speed control is desirable, electroplating or is simply used for battery charging. 
DC systems are usually of the 2-wire or 3-wire type. A DC supply has two terminals, one positive (+) and one negative (-); current flows from the positive terminal through the external circuit and returns to the negative terminal.

AC Power Circuit Wiring Color Code Standards

Different countries or regions have different AC power circuit wiring color code standards, of which the most important ones are illustrated below.

Most of Europe abides by IEC (International Electrotechnical Commission) wiring color codes for AC branch circuits. These are listed in the table below. In some installations, wiring to the old IEC colors may be replaced by or mixed with wiring to the new IEC colors, as shown in the following figure.

| Function | Label | New IEC Color | Old IEC Color |
| --- | --- | --- | --- |
| Line, single phase | L | brown | brown or black |
| Line, 3-phase | L1 | brown | brown or black |
| Line, 3-phase | L2 | black | brown or black |
| Line, 3-phase | L3 | grey | brown or black |

Figure 2: Mixed installation of old and new wire colours

Note: Although the United Kingdom (UK) left the European Union (EU) on January 31st, 2020, the UK follows the IEC AC wiring color codes as well.

Different from the IEC AC power circuit wiring color codes, the US National Electrical Code (NEC) only mandates white (or grey) for the neutral power conductor, and bare copper, green, or green with a yellow stripe for the protective ground.

| Function | Label | Common Color | Alternative Color |
| --- | --- | --- | --- |
| Protective ground | PG | bare, green, or green-yellow | green |
| Line, single phase | L | black or red (2nd hot) | |

Governed by the Canadian Electrical Code (CEC), the Canadian AC power circuit wiring color codes shown below are similar to the US standards.

| Function | Label | Color |
| --- | --- | --- |
| Protective ground | PG | green or green-yellow |
| Line, single phase | L | black or red (2nd hot) |

DC Power Circuit Wiring Color Code Standards

DC power installations, for example solar power systems and computer data centers, use color coding which follows the AC standards. The IEC color standard for DC power cables is listed in the table below.

| System | Function | Label | Color |
| --- | --- | --- | --- |
| 2-wire unearthed DC power system | Positive | L+ | brown |
| 2-wire earthed DC power system | Positive (of a negative earthed circuit) | L+ | brown |
| 2-wire earthed DC power system | Negative (of a negative earthed circuit) | M | blue |
| 2-wire earthed DC power system | Positive (of a positive earthed circuit) | M | blue |
| 2-wire earthed DC power system | Negative (of a positive earthed circuit) | L- | grey |
| 3-wire earthed DC power system | Positive | L+ | brown |

Independently of the IEC color standard, the US recommendations for DC power circuit wiring colors are as follows:

| System | Function | Label | Color |
| --- | --- | --- | --- |
| (all systems) | Protective ground | PG | bare, green, or green-yellow |
| 2-wire ungrounded DC power system | Positive | L+ | no recommendation (red) |
| 2-wire ungrounded DC power system | Negative | L- | no recommendation (black) |
| 2-wire grounded DC power system | Positive (of a negative grounded circuit) | L+ | red |
| 2-wire grounded DC power system | Negative (of a negative grounded circuit) | N | white |
| 2-wire grounded DC power system | Positive (of a positive grounded circuit) | N | white |
| 2-wire grounded DC power system | Negative (of a positive grounded circuit) | L- | black |
| 3-wire grounded DC power system | Positive | L+ | red |
| 3-wire grounded DC power system | Mid-wire (center tap) | N | white |

In addition to the main electrical wiring color code standards mentioned above, there are other standards as well, such as the International/North American conductor color coding. These standards have subtle differences from each other but mainly use green for the earth wire and white for the neutral wire.

How to Check and Maintain the Safety of Electrical Wiring Circuit?
Except from the electrical wiring color codes, it is also important to check and maintain the whole electrical wiring circuit for safety concerns. Check if there are abnormal smells or black burn marks around electrical devices. For example, check if there are black burn marks around wall power sockets or light switches. If there are, then the sockets or light switches may be worn out and may cause sparks when operated. Similarly, if there is any sign of smell like the smell of burning plastic or rubber, then the wire insulation may be breaking down causing short-circuiting or other problems. Under such circumstances, a double check of the wires behind the sockets and light switches is needed to make sure the wires are not deteriorating. If the wires are deteriorated, then one must replace the old ones as soon as possible. Maintain a sound electrical wiring circuit. Do not give much burden to the existing circuit. Normally, there are not enough power points in a room installed in the past to satisfy the growing electrical needs today. Sometimes, appliances like a server, air conditioner, microwave oven, television, satellite video box, wireless access point and other charging devices may be inserted into one wall power socket through a power strip. These appliances will draw current and cause additional strain on the ancient insulation, which is easy to cause a household electrical fire. If necessary, choose power cables supporting high-power electronic equipment and add additional wall power sockets to support the devices you need or rewiring the house with a more suitable distribution of the power points.
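For readers who want to consult the tables above programmatically (for example, when generating wiring documentation or checklists), a tiny lookup structure is sketched below. It encodes only the new IEC AC line colors from the first table, plus the standard IEC neutral and protective-earth colors added for completeness; treat it as illustrative and always defer to the applicable standard and local regulations.

```python
# New IEC AC conductor colors (illustrative subset of the tables above).
IEC_AC_COLORS = {
    "L":  "brown",          # line, single phase
    "L1": "brown",          # line 1, three-phase
    "L2": "black",          # line 2, three-phase
    "L3": "grey",           # line 3, three-phase
    "N":  "blue",           # neutral
    "PE": "green-yellow",   # protective earth
}

def color_for(label: str) -> str:
    """Return the expected conductor color for a label, or a warning string."""
    return IEC_AC_COLORS.get(label.upper(), "unknown - check the standard")

print(color_for("l2"))  # black
```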
What is SQL Injection? SQL injection is a code injection technique that might destroy a database. Code injection is the exploitation of a computer bug that results from processing invalid data. Attackers use injection to introduce or inject code into a vulnerable application or change the course of execution. SQL injection is one of the most common web hacking techniques where hackers place malicious codes in SQL statements via web page input. - SQL injection is a code injection technique that destroys databases - SQL injection occurs when attackers give SQL statements in web applications that require user input - The attack enables hackers to access, modify, or delete application data - You can detect SQL injection vulnerabilities using security scanners or a systematic manual set of tests How SQL Injection Occurs SQL injection attack occurs when web applications ask for user input, such as username/user id. Instead of providing a name/id, the user gives an SQL statement that the application unknowingly runs on the database. A web page or web application with an SQL injection vulnerability uses such user input directly in an SQL query. The attacker can create input content, often called a malicious payload. After sending the content, malicious SQL commands are executed in the database. The SQL statements interfere with the queries that an application makes to its database, allowing an attacker to illegally retrieve and view data. Your application becomes vulnerable to SQL injection attacks when user input is either incorrectly filtered for string literal escape characters embedded in SQL statements or user input is not strongly typed and unexpectedly executed. Impacts of SQL Injection Attacks SQL injection is mostly known as an attack vector for websites and web-based applications. However, the vulnerability can affect any SQL database, including MySQL, Oracle, and SQL server. SQL injection attacks represent two-thirds (65.1 percent) of all web app attacks, which is a sharp rise from 44 percent of web application layer attacks that SQL injection represented two years ago. When local file inclusion attacks are counted, nearly nine in ten attacks are related to input validation failures. SQL injection has also been among the top Open Web Application Security Project’s (OWASP) list of top 10 web vulnerabilities for several years. SQL injections generally allow an attacker to view data without authentication and authorization. The data may include personal information, passwords, credit card details, trade secrets, intellectual property, or any other data the application can access. In some cases, an SQL injection attack can modify or delete data, causing persistent changes to the application’s behavior or content. The attack can tamper with existing data, cause repudiation issues such as voiding transactions or changing balances, allow the complete disclosure of all data on the system, destroy data, or make it unavailable. Hackers can also escalate an SQL injection attack to compromise the underlying infrastructure, such as the server, resulting in a denial of service (DoS) attack. The attack can give database server administrative rights to the attacker. SQL injection can give access to the operating system using a database server. In this situation, a hacker uses SQL injection as the initial attack vector and then attacks the internal network behind a firewall. 
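To make the mechanics described under "How SQL Injection Occurs" concrete, here is a deliberately simplified, hypothetical sketch in Python. The table, column names, and the attacker's input are all invented; the point is only to show how pasting input directly into an SQL string lets that input change the meaning of the query.

```python
import sqlite3  # stand-in for any SQL database

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# Vulnerable pattern: user input is concatenated directly into the SQL string.
user_id = "1 OR 1=1"          # attacker-supplied "user id"
query = "SELECT id, name FROM users WHERE id = " + user_id
print(conn.execute(query).fetchall())   # returns every row, not just id 1
```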
An attacker can obtain a persistent backdoor into a system, leading to a long-term compromise that can remain unnoticed for an extended period. Like any other form of data breaches, successful SQL injection attacks lead to reputational damage, financial losses, and regulatory fines. SQL Injection Examples Some SQL injection attacks and techniques include: - Modify SQL query to retrieve hidden data or return additional results - Change the SQL query to interfere with the application’s logic - Modify SQL query to cause UNION attacks that retrieve data from different database tables - SQL injection attacks that examine databases to extract information about the database’s structure and version - Blind SQL injection where the query results are not returned in the application’s responses How to Detect SQL Injection You can find most SQL injection vulnerabilities quickly and reliably using security scanners, such as the Burp Suite’s web vulnerability scanners. You can also detect vulnerabilities manually using a systematic set of tests against every entry point in the application. Preventing SQL Injection SQL injection growth as an attack vector over the last few months should concern every website and web applications owners. While every application attack vector is stable or growing, none are growing as quickly as SQL injection. Follow these tips to prevent SQL injection: - Input validation and parameterized queries, including prepared statements – the application code should never use the input directly - Sanitize all input, including web form inputs such as login forms. Sanitizing input removes potential malicious code elements such as single quotes - Turn off the visibility of database errors on your production sites. Hackers use SQL injection and database errors to gain information about your database - Provide suitable security training to developers, DevOps, QA staff, and sysadmins - Do not trust any user input. User input in an SQL query introduces a risk of an SQL injection - Adopt the latest technologies, such as the latest version of the development environment and language that offers SQL injection protection
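The first prevention tips above (parameterized queries and treating all input strictly as data) can be illustrated by reworking the same toy example. The `?` placeholder style is specific to Python's built-in sqlite3 module; other drivers use `%s` or named parameters, so treat this as a sketch of the idea rather than a drop-in fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

def get_user(conn, user_id: str):
    # Parameterized query: the driver treats user_id strictly as a value,
    # never as SQL text, so "1 OR 1=1" cannot change the query's meaning.
    return conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchall()

print(get_user(conn, "1"))         # [(1, 'alice')]
print(get_user(conn, "1 OR 1=1"))  # []  -- the injection attempt fails
```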
Zero Trust is an increasingly common term that is heard in the security industry. It’s both a mindset for thinking about security as well as a well-architected solution that helps to minimize risk from a changing working environment as well as an increasingly hostile world. Zero trust is an active approach and model that integrates continuous, context-aware analysis and verification of trust, in an effort to help ensure that users and devices on a network are not doing anything malicious. The basic idea behind zero trust is the assumption that all devices and users are untrustworthy until proven otherwise. Even after a user or entity is proven to be trustworthy once, zero trust models do not by default trust the same user or device the next time they are seen by the system. Trust in the zero-trust model is never taken for granted, but is based on observation and regular authentication to help limit risks. The concept of zero trust is often associated with the Software Defined Perimeter (SDP), which is an effort that originally began development under the auspices of the Cloud Security Alliance (CSA). In the general SDP model, there is a controller which defines the policies by which agents can connect and get access to different resources. The gateway component helps to direct traffic to the right data center or cloud resources. Devices and services make use of an SDP agent which connects and requests access from the controller to resources. Along the way, device health checks, user profiling including behavioral data and multi-factor authentication mechanisms are engaged to validate security posture. The zero trust model says that at every stage of an agent or host connection, there should be a security boundary that validates that a request is authenticated and authorized to proceed. Rather than relying on an implicit trust after the correct username and password, or access token has been provided, with zero trust by definition everything is untrusted and needs to be checked prior to providing access. Zero trust is a great idea to help organizations reduce the attack surface and limit risks, but it is not without its complexity and implementation challenges. A key challenge with some SDP zero trust implementations is that they are based upon on-premises deployment approaches, with a need for device certificates and support for the 802.1x protocol for port-based Network Access Control (NAC). Enabling full support, end-to-end across multiple public cloud and on-premises deployments can often be a tedious and time-consuming task. Though it might seem like a misnomer, there is often a need for organizations to trust a zero trust solution since there tend to be data encryption termination requirements. Typically an organization will already have various security tools in place, including VPNs and firewalls. How a zero trust solution provider is able to navigate that minefield is often a key challenge. Whether a zero trust solution is deployed is often a function of how easy it is to actually get set up Zero trust models work as overlays on top of existing network and application topologies. As such, having an agile data plane that can manage a distributed network is a key consideration. The amount of effort it takes to install device certificates and binaries on an end-user system is often compounded by various challenges, including both time and resource demands. 
Using a solution that is agentless is a key consideration, as it can make all the difference between having a solution and having a solution that can actually be deployed rapidly in a production environment. Consider zero trust tools with a host-based security model. In the modern world, many applications are delivered over the web and taking a host-based approach aligns with that model. In a host-based model for zero trust, the system validated that a given end-user system is properly authorized to receive an access token for a specific resource. Understanding how encryption works in the zero trust model is also important. One option is to enforce encryption from end-to-end across a zero-trust deployment. The basic SDP method is well defined for deploying zero trust models on-premises. When it comes to the cloud, it can become more complex. Different cloud providers have different systems, adding potential complexity to any type of deployment. Compounding the complexity is the growing trend toward multi-cloud deployments. So in addition to the challenges of deployment on a single public cloud provider, there is the complexity of having a zero-trust model that is both deployable and enforceable across multiple public cloud providers. One of the ways to deploy zero trust across a multi-cloud deployment is by leveraging the open-source Kubernetes container orchestration platform. Kubernetes is supported on all the major public cloud providers, including Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). With Kubernetes, there is a control plane for managing distributed nodes of applications that run in docker containers. Using a docker container as a method to package and deploy an application to enable zero trust, is an approach that further reduces complexity. Rather than needing different application binaries for different systems, by using a cloud-native approach with a Kubernetes based system, it’s possible to abstract the underlying complexity of the multi-cloud world. The cloud is also not a uniform construct, in that all public cloud providers have multiple geographic regions and zones around the world. The purpose of the different deployments is to make sure that resources are available as close to the end-user as possible. When deploying a zero trust model to the cloud, be sure to choose a solution with multiple points of presence around the world to help make sure that there is as little network latency as possible. IT resources are always constrained and few if any organizations have the budget required to do all the things that are needed. Adding another layer of security with zero trust can sometimes be seen as yet another piece of complexity that will require additional time and demands from an IT department’s precious resources. Zero trust however has the potential when properly deployed to reduce demands on overtaxed IT staff. In a non zero-trust based network environment, the username and password are often the primary gatekeepers of access, alongside basic directory (Active Directory or otherwise) based identity and access management technology. Firewall and Intrusion Protection System (IPS) are also commonly deployed to help improve security. Yet what none of those systems actually do is continuously validate the state of a given access request. If and when something does go wrong, if a credential is lost or stolen, there is additional time and effort required by IT staff to locate the root cause and then remediate. 
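To make the host-based model described above more tangible, here is a small, purely illustrative sketch of the kind of per-request check a zero trust control point performs: nothing is trusted by default, and every request must present an unexpired token, a fresh authentication, and an acceptable device posture before access to a specific resource is granted. The token fields, posture attributes, and policy structure are invented for illustration; a real deployment would use a proper identity provider, signed tokens, and richer posture signals.

```python
import time
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    resource: str
    token_expiry: float      # epoch seconds
    mfa_passed: bool
    device_patched: bool

def authorize(req: AccessRequest, policy: dict) -> bool:
    """Deny by default; grant only when every check passes for this request."""
    checks = [
        req.token_expiry > time.time(),           # token still valid
        req.mfa_passed,                           # user recently re-authenticated
        req.device_patched,                       # device posture acceptable
        req.resource in policy.get(req.user, ()), # user entitled to this resource
    ]
    return all(checks)

policy = {"alice": {"payroll-app"}}
req = AccessRequest("alice", "payroll-app", time.time() + 300, True, True)
print(authorize(req, policy))   # True only while every condition holds
```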
In a properly configured and deployed zero-trust environment all access is validated. That means that instead of IT staff needing to figure out that a credential has been abused and a system has been breached, the zero-trust network always starts off with the assumption of zero access. Only through validation is the access granted. Zero trust means a reduced attack surface which typically translates to reduced risk. It also means fewer hours spent by IT wondering if an account has been breached and digging through logs to figure out what happened. With zero trust, access is simply never granted to a compromised machine and potential lateral movement of an adversary across a network is restricted. When considering how to implement a zero-trust solution keep these simple questions in mind.
The U.S. Department of Energy's Accelerated Strategic Computing Initiative White computer was publicly unveiled last month at Lawrence Livermore National Laboratory, in Livermore, Calif. ASCI White, built by IBM, has taken top honors among the world's fastest supercomputers. At 12.3 teraflops, the classified system beats the computing capacity of the world's No. 2 system, also from IBM, by a factor of about 3. ASCI White, used for weapons research, is a behemoth: 8,192 CPUs, 6 terabytes of RAM and 160 terabytes of disk space. However, the technology behind ASCI White isn't as mysterious or specialized as one might think. Thanks to clustering software and fast networking interconnects, huge systems such as ASCI White can be assembled from standard commercial components: in this case, 512 RS/6000 SP servers running AIX, each with 16 Power3 copper-based 375MHz CPUs. Data warehouses, Web servers and application servers all scale very well on clustered computer systems, meaning that ASCI White-type designs could end up being a lot more common in the future.
While deep learning models might not be able to simulate large-scale physical phenomena in the same way purpose-built supercomputers and their application stacks do, there is more research emerging that shows how traditional HPC simulations can be augmented, if not replaced in some parts, by neural networks. An upcoming meeting of the American Physical Society that will focus on fluid dynamics and turbulence will shed light on how and where this happening with a number of presentations focused on how neural nets fit into CFD and other physics-driven simulation areas. Researchers from Los Alamos National Lab compared three deep learning models, generative adversarial networks, LAT-NET, and LSTM against their own observations about homogeneous, isotropic, and stationary turbulence and found that deep learning, “which do not take into account any physics of turbulence explicitly, are impressively good overall when it comes to qualitative description of important features of turbulence.” Even still, they add that there are some shortcomings that can be addressed by making corrections to the deep learning frameworks through reinforcement of special features of turbulence that the models do not pick out on their own after training. Lead authors from LANL in the research above break out results from work with generative adversarial networks in a second presentation on turbulent flow. They will present results from a fully-trained GAN that is able to rapidly draw random samples from the full distribution of possible inflow states without needing to solve the Navier-Stokes equations, eliminating the costly process of spinning up inflow turbulence. “This suggests a new paradigm in physics informed machine learning where the turbulence physics can be encoded in either the discriminator or generator.” The LANL team will also propose additional applications such as feature identification and subgrid scale modeling. Connected to that research, another team from Brown University has proposed a new Navier-Stokes informed neural network that is trained to spot various aspects of fluid motion (velocity, pressure, etc.) as they occur in dye, smoke, or other settings. They design their algorithm to be agnostic to the geometry or initial boundary conditions. Their algorithm achieves “accurate predictions of the pressure and velocity fields in both 2D and 3D flows for several benchmark problems motivated by real-world applications. The findings demonstrate that this relatively simple methodology can be used in physical and biomedical problems to extract valuable quantitative information (e.g. lift and drag forces or wall shear stresses in arteries) for which direct measurement may not be possible.” A UCLA team used several deep learning models to see where neural networks might fit into aerodynamics applications, specifically how they might be trained to identify incident gusts and rapid changes of wind direction. The four algorithms they work with “achieved satisfactory results in their own tasks, showing the possibility of employing deep learning on a broader scale for gust detection in aerodynamics.” In a similar vein, work from researchers at New York University and TU IImenau in Germany have used deep learning to detect turbulent superstructures in specific convection settings. The team’s convolutional neural network was able to generate precise image segmentation from a relatively small training set of turbulent convection patterns. 
All of the research mentioned here will be presented at the 71st Annual Meeting of the APS Division of Fluid Dynamics in Atlanta in November.
Scientific research is difficult, and it can be traumatic at times. Developing a state-of-the-art solution requires running experiments and developing procedures, but the literature review is often the most time-consuming aspect of a study. A comprehensive literature review provides a solid foundation on which a researcher can build and expand ideas; a poor one can send the research on a detour. Even when only the most recent and up-to-date work is considered, the number of relevant publications a researcher must read runs into the thousands. For scholars, this is a continuing difficulty. Thanks to AI (particularly NLP), we now have a solution to this problem: the TLDR. A TLDR ("Too Long; Didn't Read") is a concise, automatically generated summary of a paper. In this context, a subtype of text summarization called extreme summarization, which shortens a whole document into a single sentence, can be exploited. Advanced NLP techniques use deep learning-based models such as transformers for automatic abstractive summarization. This document introduces the reader to extreme text summarization and to pre-trained models for abstractive summarization, so that the reader can apply this knowledge to build abstractive summarization modules.

What is Text Summarization?

Automatic text summarization is a technique which shortens lengthy texts and produces summaries that convey the desired content. It is a widespread challenge in natural language processing (NLP) and machine learning. Text summarization is the process of constructing a concise, cohesive, and fluent summary of a longer text document that highlights the text's important points. It presents a number of issues, including text identification, interpretation, and summary generation, as well as analysis of the resulting summary. Identifying important phrases in the document and exploiting them to uncover relevant information to include in the summary are critical tasks in extraction-based summarization.

Types of summarization

- Extractive summarization: The extractive technique entails extracting essential words and sentences from a source document and combining them to create a summary. The extracted text is not modified; selection is done according to a defined measure.
- Abstractive summarization: Abstractive text summarization creates a short, concise summary of a source text that captures its main points. The generated summary may include phrases and sentences not found in the original text. It mainly uses deep learning-based techniques, and the generated sentences may or may not be accurate.

Scientific TLDR (SciTLDR) Dataset

To understand extreme summarization, it is helpful to first understand a dataset that can be used for it. Researchers at the Allen Institute for AI needed a dataset suited to extreme summarization, so they created SciTLDR, a dataset containing 5,411 one-sentence summaries of 3,229 scientific papers. The researchers collected more than one short summary for each manuscript to verify the quality of TLDR generation: one summary was written by the paper's author, and the other was written by computer science students based on peer-review comments.
Further, to generate a one-sentence summary, the TLDR model only looks at the abstract, introduction, and conclusion of a publication.
Pretrained models for abstractive summarization
A deep learning model that has already been trained on a general task is referred to as a pre-trained model. It is not necessary to train such a model from scratch in order to use it for a downstream task; it can be fine-tuned on the downstream task instead. The current state-of-the-art pretrained models for abstractive extreme summarization are discussed in this section. It should be highlighted that such models are based on transformers.
T5 Transformer for Text Summarization
T5 is an encoder-decoder pre-trained model which takes text as input and produces text as output. It requires a task name to be prepended to the input to help the model identify the precise task to perform. T5 can be fine-tuned for various downstream tasks such as customized summarization, question answering, and machine translation, and it can be fine-tuned on SciTLDR for extreme abstractive summarization of long scientific articles.
BART Transformer for Text Summarization
BART is a pretrained denoising autoencoder for sequence-to-sequence models. It is trained by corrupting text with a random noise function and learning to reconstruct the original text. It employs a conventional encoder-decoder Transformer design, comparable to the original Transformer model for neural machine translation, but differs from BERT (which uses only the encoder) and GPT (which uses only the decoder).
GPT-2 Transformer for Text Summarization
GPT-2 is made up entirely of transformer-style stacked decoder blocks. In the standard transformer architecture, the decoder receives a word embedding concatenated with a context vector, both generated by the encoder. In GPT-2, the context vector is zero-initialized for the first word embedding. In the traditional transformer architecture, self-attention is also applied to the entire surrounding context, including all of the other words in the sentence; in GPT-2, the decoder is only allowed to take information from the sentence's earlier words, which is known as masked self-attention. Aside from this, GPT-2 is a near-exact duplicate of the basic transformer architecture.
XLM Transformer for Text Summarization
XLM uses a well-known pre-processing approach called Byte Pair Encoding (BPE) as well as a dual-language training mechanism built on BERT to learn relationships between words in different languages. When the pre-trained model is used to initialize a translation model, it outperforms other models on a cross-lingual classification challenge and significantly improves machine translation.
The pre-trained transformer models described above can be fine-tuned on SciTLDR for extreme abstractive summarization. The next section explains the concept of fine-tuning. Transfer learning and fine-tuning are closely intertwined: when we apply knowledge obtained from solving one problem to a new but related problem, we call this transfer learning, and fine-tuning is a technique for putting transfer learning into practice. It is a procedure that takes a model which has already been trained on a broader task and tunes it to perform a similar downstream task.
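The next section walks through the fine-tuning workflow in prose; the sketch below shows how the pieces can fit together in code using the Hugging Face Transformers and Datasets libraries. It is a minimal illustration rather than the authors' exact pipeline: the t5-small checkpoint, the "Abstract" dataset configuration, the column handling, and the hyperparameters are all assumptions that would need to be adapted in practice.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Assumption: the SciTLDR "Abstract" configuration, where the input is the abstract only.
dataset = load_dataset("allenai/scitldr", "Abstract")

checkpoint = "t5-small"  # assumption: any seq2seq checkpoint (e.g. a BART model) works similarly
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def preprocess(batch):
    # 'source' is a list of sentences per paper and 'target' a list of TLDRs; take the first TLDR.
    inputs = ["summarize: " + " ".join(sentences) for sentences in batch["source"]]
    targets = [tldrs[0] for tldrs in batch["target"]]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=targets, max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="tldr-finetuned",       # directory for checkpoints and the final model
    per_device_train_batch_size=4,
    num_train_epochs=3,
    evaluation_strategy="epoch",
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)

trainer.train()  # updates the pre-trained weights on the extreme-summarization task
```

After training, the fine-tuned weights can be saved with trainer.save_model() and loaded back with the same AutoModelForSeq2SeqLM.from_pretrained call to generate TLDRs for new abstracts.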
By fine-tuning a pre-trained summarization model on the SciTLDR dataset, the long summaries the model would otherwise generate can be compressed into a single sentence that clearly conveys the key information of a scientific document.
Different Fine-Tuning Techniques
- Train the complete architecture – We may train the complete pre-trained model on our dataset and pass the results to a SoftMax layer. In this case, the error propagates throughout the architecture, and the model's pre-trained weights are updated to reflect the new dataset.
- Train some layers while freezing others – Another option is to partially train a pre-trained model. We can keep the weights of the model's early layers frozen while retraining only the upper layers, and experiment with how many layers to freeze and how many to train.
- Freeze the complete architecture – We can even freeze all of the model's layers and attach our own neural network layers on top to train a new model. During training, only the weights of the attached layers are updated.
The Hugging Face Transformers package is a well-known Python library that provides pre-trained models for a range of Natural Language Processing (NLP) purposes, and it supports both PyTorch and TensorFlow 2. We can perform fine-tuning on the SciTLDR dataset using Hugging Face Transformers as follows:
- Hugging Face hosts not only models but also multiple datasets in a variety of languages. The Datasets library is used to download and cache a dataset, and the SciTLDR dataset is available for download. The training set, validation set, and test set are all contained in a dataset dictionary object, with several columns (source, source_labels, paper_id, target, title) and a variable number of rows.
- To pre-process the dataset, we use a tokenizer class from the Transformers library to turn the text into numbers that the model can understand. We can provide a single sentence or a list of sentences to the tokenizer.
- The Trainer class in the Transformers library can be used to fine-tune any of the pre-trained models on your dataset. We just have to provide the name of the model we want to fine-tune.
- We must create a TrainingArguments object that contains all of the hyperparameters the Trainer will use during training and evaluation. We must provide a directory in which the trained model, as well as the checkpoints created along the way, will be recorded. The remaining settings can be left at their defaults, which should be sufficient for basic fine-tuning.
- We can define a Trainer by providing it all of the objects we have built so far — the model, the training arguments, the training and validation datasets, and our tokenizer.
- After passing all of these parameters to the Trainer class, we simply call the Trainer's train() method to fine-tune the model on the dataset.
Conclusion and Future aspects
- TLDR shows that it is possible to generate single-sentence summaries of scientific papers. Scientific articles, however, vary in complexity, and not all papers can be summed up in a single sentence without losing their meaning. It is exciting to think about a TLDR model that can handle even the most difficult research articles.
- Abstractive extreme summarization with pretrained models gives state-of-the-art results for computer science publications. A tweaked version of the TLDR model may be required to create summaries of literature from other academic areas.
- TLDR generation can be extended to more pre-trained models, which can be fine-tuned for extreme summarization of scientific papers.
References
- Cachola, I., Lo, K., Cohan, A., & Weld, D. S. (2020). TLDR: Extreme Summarization of Scientific Documents. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 4766–4777). Association for Computational Linguistics.
- Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, & Donald Metzler. (2021). Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers.
- Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, & Luke Zettlemoyer (2019). BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. CoRR, abs/1910.13461.
- Guillaume Lample & Alexis Conneau (2019). Cross-lingual Language Model Pretraining. CoRR, abs/1901.07291.
- Jesse Vig. (2019). A Multiscale Visualization of Attention in the Transformer Model.
- Shuming Ma, Jian Yang, Haoyang Huang, Zewen Chi, Li Dong, Dongdong Zhang, Hany Hassan Awadalla, Alexandre Muzio, Akiko Eriguchi, Saksham Singhal, Xia Song, Arul Menezes, & Furu Wei. (2020). XLM-T: Scaling up Multilingual Machine Translation with Pretrained Cross-lingual Transformer Encoders.
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems (pp. 5998–6008).
- Genest, P.E., & Lapalme, G. (2011). Framework for abstractive summarization using text-to-text generation. In Proceedings of the workshop on monolingual text-to-text generation (pp. 64–73).
- Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., & others (2019). Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, & Peter J. Liu (2019). Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. CoRR, abs/1910.10683.
Cite this article: Mridul Sharma, Diksha Malhotra, Poonam Saini (2021) Extreme Abstractive Summarization of Scientific Documents, Insights2Techinfo, pp.1
Zero Trust Network Access (ZTNA) w/ Binu Panicker
The idea of Zero Trust security has become widely popular over the last few years. While many organizations have shifted priorities to adopt Zero Trust, ZTNA is the technology behind achieving a true Zero Trust model.
VPN-related organizational attacks and breaches are on the rise, prompting the adoption of newer strategies for securing an organization's networks and cloud systems. Consequently, two-thirds of organizations worldwide are adopting what is known as Zero Trust Network Access (ZTNA).
What is ZTNA and why is it important?
The Zero Trust Network Access security model is best described by the phrase "never trust, always verify". ZTNA assumes that there will always be an attacker both inside and outside an organization's network, so no client or user should be automatically trusted, even once they are past the DMZ. This clearly differentiates it from the "trust but verify" concept behind traditional perimeter security. Zero Trust networks require verification whenever a client or user requests access, whether or not the requester sits inside the organizational network, and ZTNA does not depend on a DMZ comprising VPNs, firewalls, edge servers, and other security devices to protect restricted resources.
There are many ways that ZTNA establishes more secure access to organizational applications. ZTNA services establish an environment that safeguards both your physical (on-premises) and logical (cloud-based) resources. Applications are non-discoverable (hidden), and access is checked by a trusted broker, which permits or denies access using three key steps:
- Verify users when they sign on to the system.
- Validate devices before they enter the network, ensuring that incoming devices are known, trusted, and up to date on patches and security.
- Limit access based on the Principle of Least Privilege (PoLP). The user or device is only given as much privilege as needed to access the requested resource, based on the user's roles.
ZTNA provides advanced flexibility and scalability, which enables organizations to access critical infrastructure without exposing services. Some core principles of ZTNA are as follows:
- Least-privilege access, which means only allowing access to the information each individual requires, as mentioned above. This limits the ability of any malicious file to jump from one system to another and reduces the chance of internal data exfiltration.
- Micro-segmentation divides a network into segments with different access. This strengthens security and keeps attackers from running rampant through the network even if one segment is compromised.
- Data usage controls limit what a user can do with data once they are granted access, using safeguards such as revoking permission to copy already-downloaded data to USB disks, email, or cloud apps.
- Continuous monitoring observes how users are interacting with data and systems. This verifies that people really are who they claim to be and enables risk management and security enforcement based on people's actions.
There are several ZTNA use cases, but four are common across organizations:
- ZTNA provides more secure cloud access.
Securing multi-cloud access is the most common starting point for organizations beginning their ZTNA journey.
- With more organizations adopting cloud, ZTNA can reduce third-party risk. Most outside users receive over-privileged access, which can become a threat. ZTNA fundamentally decreases third-party risk by guaranteeing that an outside user never gains access to the network and that only authorized users gain access to permitted applications.
- ZTNA can accelerate M&A integration. ZTNA reduces and simplifies the time and management needed to ensure a successful M&A and provides immediate value to the business.
- ZTNA is a more secure VPN alternative. For most organizations, VPNs are slow, have poor security, and are usually difficult to manage.
There are two significant ZTNA architectures: endpoint-initiated ZTNA and service-initiated ZTNA, the latter characterized by being cloud-based. When choosing a ZTNA provider and technology, here are some questions an organization should consider:
- Who has control of the access rules?
- Where are our organizational secrets (like passwords and private keys) kept?
- How is the risk of internal threats alleviated?
- Is the users' data exposed or sold?
- What is the scope of secure access? Does it include networks, users, etc.?
- What is the ZTNA provider's architecture? Are the servers located in the cloud or in a data center? Who can access them?
- What happens if the ZTNA provider is compromised? Is the organization still secure?
For organizations that are interested in Zero Trust Network Access functionality, it can be implemented in several ways:
- Gateway integration implements ZTNA functionality as part of a network gateway, since any traffic attempting to cross the network boundary defined by the gateway deployment is filtered based on the access control policies.
- Secure SD-WAN delivers advanced networking across the WAN and integrates a security stack into every SD-WAN appliance. ZTNA functionality can be incorporated into this security stack to provide centralized access management.
- Secure Access Service Edge (SASE) takes the functions of Secure SD-WAN and implements them as a virtual machine in the cloud. This allows an organization to boost both network efficiency and security, including ZTNA functionality.
You may ask, what are the benefits of ZTNA? First of all, there is less vulnerability. Once it is set up, ZTNA strengthens the security of your organization, particularly against in-network lateral-movement threats that could easily arise under a different security model. Secondly, there is a strong policy for user identification and access. Zero Trust requires solid administration of users inside the network so that their accounts are safer, making the whole network more secure. Using multi-factor authentication, or moving beyond passwords with biometrics, is an effective way to keep accounts protected. Once users are classified, they must be given access to information and accounts only as needed for their specific work. Third, there is smart segmentation of data. With ZTNA, you do not give users access to all the information. Segmenting information by type, sensitivity, and use gives a safer arrangement. As a result, critical or sensitive data is secured and potential attack surfaces are reduced, leading to increased data protection.
ZTNA also keeps data secure in both storage and transit through the use of automated backups and encrypted or hashed message transmission. Lastly, ZTNA is a good example of security orchestration, which ensures all your security components work together efficiently and effectively. In an ideal ZTNA deployment, no gaps are left uncovered, and the combined components complement one another rather than introducing inconsistencies.
There are still some challenges to using the Zero Trust model. With so many extra security classifications, ZTNA makes a security strategy more complicated. Considerations that come with such a comprehensive strategy include:
- Time and effort to set up — Reorganizing policies inside an existing network can be difficult because the network still needs to operate during the transition. It may be easier to build a new network from scratch and then switch over to it; if legacy systems are incompatible with the ZTNA structure, starting from scratch will be necessary.
- Increased management of varied users — Employee users need to be monitored more closely, with access granted only as necessary. Keep in mind that users go beyond employees: customers, clients, and third-party vendors may also use your company's website or access data. This means there is a wide variety of access points, and a good ZTNA implementation requires specific policies for each type of group.
- More devices to manage — The modern workplace includes several types of users, as well as various devices for each of them. These devices may have their own properties and communication protocols, which must accordingly be monitored and secured.
- More complicated application management — There has also been an increase in business applications. Applications are often cloud-based with users across different platforms, and they might be shared with outside users. In accordance with a ZTNA mindset, application use ought to be planned, monitored, and tailored specifically to users' needs.
- More careful data security — On modern networks there is typically more than one place where data can be stored, meaning there are more sites to secure. Data configuration should be done carefully, to the highest security standards.
In conclusion, ZTNA combines the principle of least privilege, software-defined perimeters, and advanced security tools and strategies to create a comprehensive security solution. While some effort is involved in implementing a ZTNA model, the result is increased security and reliability for your users.
About the author: Binu is an experienced cybersecurity expert, consultant, and blogger who writes about information security and data privacy. Connect with Binu: @letsaskbinu on Twitter.
For decades, land mobile radio systems have represented the gold standard for push-to-talk functionality, and the development of mission-critical communication services over broadband has been a key priority of the 3rd Generation Partnership Project (3GPP). Mission Critical Push-to-talk (MCPTT) is the global standard for mission-critical applications, with the specification effort led by Service and System Aspects Working Group Six (SA6) of 3GPP. To date, SA6 has completed specifications for mission critical push-to-talk (PTT) applications, a Common Services Core for all MC applications, and enhanced video and data communications. But deploying a mission-critical public safety system is about much more than being standards compliant. What is also required is a deep understanding of emergency responders' operations, and a demonstrated ability to convert that insight into a working solution. That is why we think there are three key criteria by which an MCPTT supplier should be judged.
1. Push-to-talk communications know-how
For emergency responders, push-to-talk serves as their lifeline in critical situations. Designing an effective PTT platform requires a deep understanding of how frontline personnel operate devices under stress and in harsh environments, and the capacity to turn that knowledge into reliable communication tools. Since we launched our first mobile radio for public safety in the 1930s, Motorola Solutions has been a pioneer and innovator in PTT communications. We introduced the first FM portable or "walkie-talkie" two-way radio in 1943, providing critical PTT communications for troops during World War II. In 1969, we took push-to-talk further than it had ever been before, providing reliable PTT communications for the first astronauts to land on the moon. Other firsts include the first digital encryption technology for PTT communications in 1977, and the world's first narrowband digital PTT communications system for public safety in 1991. In 1993, we pioneered the evolution of push-to-talk over cellular (PoC) with the introduction of Integrated Digital Enhanced Network (iDEN) technology. iDEN combined the benefits of efficient PTT communication with the unlimited range of cellular, providing users with push-to-talk, telephone, text, and data communications all in one unit.
2. Leadership in the development of broadband PTT industry standards and technology
Having direct, extensive experience in defining a standard paves the way to quick compliance with that standard. For more than 15 years, Motorola Solutions has distinguished itself as a leader in the development of industry standards for broadband push-to-talk communications. Kodiak, a Motorola Solutions Company, was a significant contributor to the OMA (Open Mobile Alliance) PoC specifications, and we served as the Rapporteur for OMA's Push-to-Communicate for Public Safety (PCPS) specification. We are also an active participant in the definition of the 3GPP Mission Critical PTT (MCPTT) Standard, serving as the Vice-Chair of the SA WG6 working group and Work Item Rapporteurs for several key technical specifications. In addition to our standards leadership, we are a leader in broadband PTT technology innovation. Running in a fully virtualised environment, our platform utilises Kernel-based Virtual Machines (KVMs) to enhance its scalability, making it possible for one platform to support premise-based, cloud-based, or network-wide deployment options to ensure the right fit for each customer's unique requirements.
We have also generated more than 20 patents on vital aspects of broadband PTT technology, and our Kodiak MCPTT platform has already demonstrated compliance with key areas of the 3GPP Standard.
3. Proven experience with large-scale field deployments
Our broadband PTT platform is a proven solution backed by over a decade of experience delivering fast, secure, reliable push-to-talk communication to customers in all industries.
- 500+ deployments in 38 countries, including the top 50 U.S. cities
- Integration with the network, customer care, and operations systems of leading wireless carriers globally
- Over 1 million end-users
As the title suggests, demonstrating our broadband PTT leadership is truly as easy as one, two, three. Our unique combination of push-to-talk knowledge, standards and technology leadership, and deployment experience results in a broadband PTT platform that can get the right information to the right people at the right time in the moments that matter.
Michael is Marketing Manager, Kodiak at Motorola Solutions
Follow #ThinkPublicSafety, @MotsolsEMEA on Twitter
Data Quality Management Improves Organizational Knowledge
It is commonplace to say that data is an "asset," but most organizations do not treat data as they treat their other assets. They are unaware of the quantity or condition of their data. Organizations create data for their immediate needs, without considering its potential for broader use. Moreover, they do not differentiate between data that is valuable and data that has little importance. Few provide guidance for employees about how to get value from data. Fewer still recognize the costs of low-quality data or the benefits of high-quality data. The failure to manage data as an asset prevents organizations from taking advantage of the knowledge and insight they could gain from their data and the ways that they can derive value from their data. In many organizations, lack of understanding of their own data is a huge blind spot, which is also a terrible waste of resources. Data quality management helps address this blind spot.
At its simplest, data quality management is the application of product quality management methodologies to data. It aims to improve the quality of data and to sustain the levels of data quality that an organization needs to deliver on its mission and serve its customers, regardless of how an organization defines its mission and customers. Data quality practitioners must help the organization answer and act on the findings from three fundamental questions:
• What do we mean by high-quality data?
• How do we detect low-quality or poor-quality data?
• What action do we take when data does not meet quality standards?
Figure: Three questions at the core of data quality management (from Laura Sebastian-Coleman, Meeting the Challenges of Data Quality Management, Elsevier 2022)
To answer these questions, data quality practitioners must help the organization:
• Define Expectations for Quality in the form of standards, rules, models, and requirements from data consumers. Defining standards is a means of clarifying expectations for quality so that these expectations can be met.
• Assess Data Quality to determine whether data meets requirements. Assessment can take place as part of data analysis for project work, data quality improvement projects, data quality monitoring, or through data use. Monitoring is not an end in itself; its main goal is to help data consumers by creating a feedback loop to data producers.
• Take Action so that data does meet expectations. Data assessment may clarify data quality requirements or identify new requirements.
o For data that is not critical to the organization's goals, taking action may mean deciding not to take action, and instead choosing to live with identified issues and their associated risks – but doing so intentionally, not accidentally.
Most organizations lack an understanding of their own data, which is a terrible waste of resources. Data quality management helps improve and sustain the data quality that an organization needs to deliver on its mission and serve its customers.
In my role at Prudential Financial, Inc. ("PFI"), a US-based global financial services and asset manager, I find these questions help me partner effectively with our business to become a more data-driven company. Asking these questions and helping the organization answer them is necessary to identify improvement opportunities.
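As a toy illustration of the three questions in practice, the short sketch below defines an expectation for a single field, assesses a handful of records against it, and decides whether action is needed. The field, the rule, and the threshold are invented purely for the example and do not reflect any particular organization's standards.

```python
# Expectation: every customer record must carry a valid two-letter country code.
VALID_COUNTRY_CODES = {"US", "GB", "SG", "DE"}

records = [
    {"customer_id": 1, "country": "US"},
    {"customer_id": 2, "country": ""},      # missing value
    {"customer_id": 3, "country": "Sing"},  # not a valid code
    {"customer_id": 4, "country": "GB"},
]

# Assess: measure how much of the data meets the expectation.
failures = [r for r in records if r["country"] not in VALID_COUNTRY_CODES]
pass_rate = 1 - len(failures) / len(records)
print(f"Pass rate: {pass_rate:.0%}, failing records: {[r['customer_id'] for r in failures]}")

# Act: feed the result back to data producers when quality falls below the standard.
QUALITY_THRESHOLD = 0.95
if pass_rate < QUALITY_THRESHOLD:
    print("Quality below standard - raise an issue with the producing system")
```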
As importantly, the activities necessary to answer the three questions (setting standards, assessing data, and taking action) will build out explicit knowledge about the organization's data. Perhaps the biggest benefit is increased awareness of the ways data binds the organization together. This awareness will help people in different parts of the organization work together more productively. Done well, data quality management provides a means of developing organizational self-knowledge through data knowledge.
Prudential Financial Inc. of the United States is not affiliated with Prudential PLC, headquartered in the United Kingdom, or with Prudential Assurance Company, a subsidiary of M&G PLC, also headquartered in the United Kingdom.
It’s incredible how quickly staff and pupils have adapted to the challenges presented by remote learning over the past 18 months and have continued to achieve amazing things. Whilst schools are predominantly now back to in-class learning, there may still be times when individual pupils or even groups of pupils will need to isolate at home due to Covid-19. During these times children will need to learn remotely and there are some key things to bear in mind when planning for what might become a relatively long-term way of working. What’s the difference between remote learning and online learning? The first thing which comes to mind is that there has been a language shift. Oftentimes ‘remote learning’ is referred to as ‘online learning’ and it is worth reminding ourselves that these are not the same thing! Online learning refers to learning which takes place solely online, using digital technologies. This includes Zoom & Teams sessions, online activities, watching instructional videos online, and online educational games. During the first lockdown, some schools were applauded for keeping pupils occupied for the full school day using Zoom, Teams and other online methods to teach in real-time. However, there is no real evidence to suggest that this is a more effective approach than many other successful remote learning approaches that teachers employed across the country. Furthermore, not all children have access to devices which allow for full-time online learning, so if this is the only type of learning offered, then some children are going to be greatly disadvantaged. Remote learning refers to any learning which takes place remotely (away from the classroom). This of course can include online learning activities, but also includes activities which can be completed offline in exercise books or learning journals. Ofsted gives a clear definition of remote learning as being “a broad term encompassing any learning that happens outside of the classroom, with the teacher not present in the same location as the pupils”. In other words, it doesn’t have to be face-to-face with the teacher on Zoom or Teams! Remote education is simply a way of delivering the curriculum. Keep it simple! This is possibly the most sensible guidance that Ofsted has ever given. Teachers do not need to overcomplicate resources with too many graphics and illustrations which do not add to the content. When using digital remote education, the platform should be simple to use (for both the pupils and their parents!) Just as we don’t need ‘all-singing, all-dancing’ lessons in the classroom, remote education often benefits from a straightforward and easy-to-use format. Live lessons aren’t always best Many think that live lessons are the ‘gold standard’ of remote education and this is not necessarily the case. Whilst live lessons do offer some advantages (teachers can engage with their pupils, answer questions, address misconceptions and keep more control over the learning environment), live lessons are not always more effective than asynchronous approaches. Using recorded lesson segments followed by a task or activity allows pupils to watch the video at a suitable time for them (ideal if they only have access to a device at certain times) and allows children to re-watch the video several times over if necessary (something they wouldn’t be able to do if the lesson was live and they missed something). 
It also allows the teacher more time for marking and providing feedback on completed work…something which would be nearly impossible if they were on Zoom all day! The medium matters (a bit) Quality of teaching is far more important than how lessons are actually delivered. However, there is some evidence that the medium does matter, especially when it comes to online learning. Pupils tend to spend longer accessing a remote lesson when they are using a laptop or PC than when they are using a mobile phone (with tablets falling somewhere in between). This means teachers will need to carefully consider whether pupils have access to the right kind of device when using digital remote education. If they don’t and the school can’t provide enough devices, then it may be better to consider non-digital approaches to remote learning. It is certainly more challenging to engage and motivate pupils remotely than when they are in the classroom. There are more distractions and as a teacher you are not physically present to manage the environment. There has been a lot of attention paid to ways in which online learning can be made more engaging. Setting a variety of task types and activities is key to helping children stay excited about their learning. Building in rewards and incentives is also a great way to make learning more ‘game-like’. Communication with parents also plays a key factor as they can help support their children with their home learning. Engagement increases when pupils (and their parents) feel part of the school community. Whole-school digital assemblies, sharing and celebrating remote learning achievements, newsletters (to both pupils and parents), videos of the teachers addressing their pupils, having a safe online community space are all ways to help children and their parents feel part of the community even when learning remotely.
A microservice is a small, single service offered by a company. It derives from the distributed computing architecture that connects many small services, rather than having one large service. The microservice can then be delivered through an application programming interface (API).
An API is a method of communication between a requester and a host, most often accessible through an IP address. The API can communicate multiple types of information to users, such as:
- Data you want to share
- A function you want to provide
In short, talk of a microservice has more to do with the software's architecture, and the API has to do with how to expose the microservice to a consumer.
How microservices work
Microservices extend from the idea that a company provides a large, single service; microservices come as individual functions. If Microsoft Word were to be split into microservices, perhaps there would be one offered as the blank sheet of paper, one as a spell checker, and one as a formatting tool.
Kubernetes has allowed computer software to adapt. While Kubernetes has its own advantages, it has also pushed software design away from a single monolith of services and towards a combination of many small services working together. That's because of the Kubernetes design, which can:
- Efficiently orchestrate the use of single containers on servers
- Increase system reliability and scalability
- Decrease associated management and resource costs
Examples of microservices
Microservices are very simple; simplicity is a primary goal. They can be thought of as roles in a company: one microservice serves a very particular role and has just one job to do. DZone put together an excellent graph of different microservices that Uber offers, communicating with one another through APIs and performing different tasks. Uber builds different services for each task:
- Passenger management
- Passenger web UI
- Driver management
- Driver web UI
- Trip management
Microservices can also be illustrated through graphs, where one microservice is a single node that communicates with another service via an API. The architecture can grow and grow as more services are tacked onto the system. As you can imagine, the graphs of large companies can be extensive, like a small city; the widely shared microservice graphs of Amazon and Netflix are famous examples.
Microservices rely on APIs
The API is a communication tool: it lets one service interact with another. An API by itself cannot do anything, like a cell phone that just sits there; it becomes useful when it is connected to services and microservices. The API is the way you can distribute the microservice to users. Instead of downloading software or popping in a disc, the API distributes your service. The API is necessary for the microservice architecture to function because the API is the communication tool between its services. Without an API, there would be a lot of disconnected microservices. Technically, the microservice would just build up to be a monolith again.
How APIs work
APIs are extremely versatile. You can:
- Create APIs on any containerized service
- Use many different languages: Java, Python, Go, to name a few
- Deploy APIs on any of the major cloud providers
APIs can increase both the usability and the exposure of your service. With distribution made a lot easier, you can offer smaller services. (After all, you don't have to build a whole Adobe Suite just to prove viability.)
Many APIs are RESTful and exposed through an HTTP endpoint. This means accessing information from an API is as easy as pinging a URL. GET, POST, PUT, and DELETE commands, in conjunction with the URL, work as expected, fetching data from or sending data to the API. Though REST APIs are the most common in modern web applications, they are not the only option.
As a product, the API endpoint is usually served alongside a developer portal that informs developers how to use it and assigns them an API key. If the goal of a microservice is to provide data on the registered vehicles in a given county, then the dev portal will explain:
- What the service does
- How the data is structured (i.e., a data schema)
- What is required for a developer to use the API
Microservices vs APIs
Most good microservices have some type of API. If you want your microservice to be used, then you're going to create an API. The API is to developers what having a social media account is to artists and creators: if you want people to use it, you provide an API so they can receive it.
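To make the idea concrete, here is a minimal sketch of a single microservice exposed through a REST API. It assumes the FastAPI framework is installed (pip install fastapi uvicorn), and the "trip management" service, its routes, and its data are invented for illustration rather than taken from any real system.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Trip Management Microservice")

class Trip(BaseModel):
    trip_id: int
    passenger: str
    driver: str
    status: str = "requested"

# In-memory store standing in for the service's own database.
trips: dict[int, Trip] = {}

@app.post("/trips")
def create_trip(trip: Trip) -> Trip:
    # POST sends data to the API: another service (e.g. passenger management) calls this.
    trips[trip.trip_id] = trip
    return trip

@app.get("/trips/{trip_id}")
def get_trip(trip_id: int) -> Trip:
    # GET fetches data from the API.
    if trip_id not in trips:
        raise HTTPException(status_code=404, detail="Trip not found")
    return trips[trip_id]
```

Run with a command such as uvicorn main:app, and any other microservice can reach it with plain HTTP calls (for example GET /trips/1), which is all the coupling the architecture needs.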
Those working in the business world or the IT field probably have a basic idea of what a Configuration Management Database (CMDB) is. Techopedia defines and explains what a CMDB is in detail here, but essentially a CMDB is collected data that concerns the IT infrastructure of a business or organization. A CMDB will store all the IT data, but how is that helpful?
CMDBs as a System Map
A CMDB can show an entire IT structure like a big map of the inside of a machine. All the small inner-working parts that make up the whole system can be viewed. This means that every piece of hardware that is connected to another, any software used within a company, the email and Internet services, all of it can be assessed. A CMDB will show what will be affected by a change or from where a problem originates. It can help with troubleshooting and monitor any changes made to a system.
CMDBs as a Record
A CMDB can keep records of what happens in an IT system. This can pinpoint when a change occurred and how it was done, so new employees can continue the practices set in place by their predecessors. This eliminates instances of specific IT workers being the only ones who understand the system and its setup.
CMDBs and Transparency
Since a CMDB is like a big map of how everything in an IT system functions and is connected, internal and external audits can easily be performed. A company's IT system is able to be viewed by authorized personnel to ensure legality and compliance with regulations. This is also useful internally because being able to see every process, application, and change within a system can highlight areas for improvement.
Once a CMDB is in place, a lot can be done automatically. Anything that is connected to the network will be found with a system search that a CMDB does automatically. This can provide many details concerning servers, applications, software, etc. Changes made to the system are automatically tracked, and any unauthorized access will trigger an alert. This is particularly helpful with large systems that would be painstaking to monitor and update manually.
Overall, there are many ways a CMDB can be helpful. At its heart, it is a tool for organizing information in an IT system. It seems a little daunting and does require some effort and upkeep, but a CMDB is an efficient way to keep track of IT infrastructure. When a CMDB is properly executed and well maintained, it will never be more of a hindrance than a help.
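As a rough illustration of the "system map" idea, the sketch below models configuration items and their dependencies as a tiny graph and walks it to find everything affected by a change. The items and relationships are invented for the example; a real CMDB holds far richer, automatically discovered records.

```python
from collections import deque

# Hypothetical configuration items mapped to the items that depend on them.
dependents = {
    "database-server": ["payroll-app", "reporting-app"],
    "payroll-app": ["employee-portal"],
    "reporting-app": [],
    "employee-portal": [],
}

def impacted_by(change_target: str) -> list[str]:
    """Breadth-first walk of the dependency graph to list every item affected by a change."""
    seen, queue, impacted = {change_target}, deque([change_target]), []
    while queue:
        current = queue.popleft()
        for ci in dependents.get(current, []):
            if ci not in seen:
                seen.add(ci)
                impacted.append(ci)
                queue.append(ci)
    return impacted

print(impacted_by("database-server"))  # ['payroll-app', 'reporting-app', 'employee-portal']
```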
SSL and TLS are two commonly used protocols for secure data transfer between a web server and a web browser (client machine). Both protocols provide authentication and encryption when transferring data between client and server. TLS is a more recent, improved, and more secure version of SSL, and it fixes some key security vulnerabilities found in the earlier SSL protocols.
How SSL/TLS work: When SSL/TLS certificates are provisioned on a web server, two keys are used – (1) a public key and (2) a private key. The keys are used to encrypt and decrypt data between server and client. When a request is initiated by a visitor or client PC through the browser, the browser will look for the server site's SSL/TLS certificate. Next, the browser will perform a secure "handshake" to validate the certificate and authenticate the web server. Once the client browser validates the authenticity of the certificate, an encrypted link between the client browser and the server is created for the transport of data.
SSL was developed by Netscape and went through updates across its three versions. Its first version (SSL v1) was never officially launched; the first approved version was SSL v2, launched in 1995. Below are the three releases of SSL –
- SSL v1
- SSL v2
- SSL v3
SSL 3.0 was prone to man-in-the-middle attacks. One such case was the "POODLE" vulnerability, which allowed attackers to decrypt the supposedly secure traffic. Attackers could manipulate the communication and eavesdrop on the secured traffic, and client-initiated traffic could be redirected for cybercrimes like financial fraud and malware infection.
TLS was introduced in view of the security risks associated with the SSL protocol. Below are the four versions TLS has gone through since its inception by the IETF (Internet Engineering Task Force) –
- TLS v1.0
- TLS v1.1
- TLS v1.2
- TLS v1.3
TLS 1.0 had some security weaknesses which could put financial transactions at risk, and hence had to be retired by 2018 by websites handling credit cards or services used by the US Government. TLS 1.3 has made significant improvements compared to its predecessors, and at present major players around the internet are pushing for its adoption.
Most present-day web browsers no longer support SSL 2.0 and SSL 3.0. Major browsers, including Google Chrome, have already stopped or plan to shortly stop supporting TLS 1.0 and TLS 1.1. Now that we understand how SSL and TLS work and some of their history, the essential difference between the two protocols is that TLS is the newer, more secure successor that modern browsers and servers should prefer.
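As a small practical illustration of the protocol versions discussed above, the sketch below uses Python's standard ssl module to open a client connection that refuses anything older than TLS 1.2 and then reports which version the handshake actually negotiated. The host name is only an example; any HTTPS-enabled server would do.

```python
import socket
import ssl

hostname = "example.com"  # illustrative host; replace with the server you want to test

# The default context verifies the server certificate against the system trust store.
context = ssl.create_default_context()
# Refuse legacy protocols: only TLS 1.2 and TLS 1.3 handshakes will succeed.
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
        print("Cipher suite:", tls_sock.cipher()[0])
```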
Data stewardship is the collection of practices that ensure an organization's data is accessible, usable, safe, and trusted. It includes overseeing every aspect of the data lifecycle: creating, preparing, using, storing, archiving, and deleting data, in accordance with an organization's established data governance principles for promoting data quality and integrity.
Data stewardship encompasses:
- Knowing what data an organization possesses
- Understanding where that data is located
- Ensuring that the data is accessible, usable, safe, and trusted
- Safeguarding the transparency and accuracy of data lineage
- Enforcing rules and regulations on how data can be used
- Helping the organization make the most of its data for competitive advantage
- Driving toward a data-driven culture
- Being an advocate for trusted data
Data stewardship comes under the umbrella of data governance. But whereas data governance establishes high-level policies for protecting data against loss, corruption, theft, or misuse, data stewardship focuses on making sure those policies are actually followed.
Data stewards are the people responsible for data stewardship. Some people are assigned "data steward" as a formal title; others assume the role in addition to their regular jobs. Either way, the role is indispensable, as data stewards are basically "data ambassadors" between the data team and the user community, with the ultimate goal of empowering users with trusted data.
Why is Data Stewardship Important?
Data is swiftly overtaking physical assets in terms of value to organizations. Keeping data safe, private, consistent, and of high quality is as important to enterprises today as maintaining factory machinery was in the industrial age.
Without trusted data, organizations end up with messy and unreliable heaps of information sitting in multiple databases, platforms, and even individual spreadsheets. When users don't trust the data, they aren't confident about leveraging it to make business decisions or to drive operations. In worst-case scenarios, data of substandard or inconsistent quality can steer organizations in the wrong strategic direction, with disastrous business results.
Data stewards prevent this from happening. By establishing consistent data definitions, maintaining business and technical rules, and monitoring and auditing the reliability of the data, they ensure high levels of data quality, integrity, availability, trustworthiness, and privacy protection.
Managing data lineage is an especially important part of data stewardship. Data lineage is the lifecycle of a piece of data: where it originates, what happens to it, what is done to it, and where it moves over time. With visibility into data lineage, including the accompanying business context, data stewards can trace any errors or problems when using data—say, for analytics—back to their root causes.
Because data stewardship is so important, data stewards occupy positions of trust. In fact, for data stewardship to succeed, both technical staff and business professionals must have the utmost confidence in their organization's data stewards. Such people form a bridge between data professionals and the community of people who use the data. Because of this, data stewards must have both a big-picture view of how the organization works as well as a strong grasp of the down-to-earth details of how data is created, managed, manipulated, stored, and—most importantly—how it's used.
It's also important to note that there are two sides to data stewardship. One is defensive: to guard against the regulatory and reputational risks that come with owning data. To that end, data stewards are champions for information governance within their organizations. They evangelize the reasons for protecting data, and deliver education, training, and mentorship to the workforce. At the same time, data stewards are the key drivers of the use of data for strategic advantage, and they promote improvements to the business processes that create and consume data. For this reason, data stewards must be experts in the business units they serve. They constantly work to inspire users to make the most out of the data—consistently, accurately, and safely—to make smarter business decisions each day. Over time, with strong data stewards in place, employees perform better in their jobs. They make fewer errors. They contact the right customers for upselling and cross-selling. They prioritize the right business initiatives. And they do all this while following data governance policies and processes.
Access to relevant data, and the ways and techniques to analyse it, makes a huge difference to the success and growth of organizations. Data collection and analysis are very important for government, public and private sector firms, and educational institutes. Usually, the data displayed on websites can only be viewed using a web browser, and at times the copy-paste option is disabled on sites. Collecting the data you need manually is a very tedious job that may take hours, days, or weeks. The technique to automate this is known as scraping. In this article, we will learn more about the web scraping technique and how it works, what it is used for, its features, functions, limitations, and so on.
What is Web Scraping?
Web scraping is also known as crawling, spidering, screen scraping, web data extraction, web harvesting, etc. In this technique, large amounts of data are automatically extracted from websites and stored in a file or a database. The scraped data is usually in tabular or spreadsheet format. Instead of manually copying data from websites, web scraping software is used to perform the same tasks with much less effort and, of course, much less time. Web scraping software automatically loads, crawls, and extracts data from multiple pages of websites; depending on the need, it can be custom built or targeted at a specific website.
Web scraping has two functional parts:
- one is the crawler and
- the other is the scraper.
The crawler is an algorithm which browses the web to locate the required data by following links across the internet. The scraper is a specific tool which is created to extract data from the website.
Uses of Web Scraping
There are many uses of web scraping in varied businesses:
- In e-commerce, web scraping is used to perform price comparisons and monitoring of competitors
- In marketing, web scraping is used for lead generation and to build phone and email contact lists for cold calling
- In real estate, web scraping is used to collect property details and contact details of owners and agents
- Web scraping is used to collect training data for machine learning modules
Ways to scrape data from websites?
Web scraping software falls into two types. The first type is installed locally on a system; WebHarvy, OutWit Hub, and Visual Web Ripper are some examples of system-based web scraping software. The second type runs in the cloud; examples of cloud-based web scraping software are import.io, Mozenda, ParseHub, and Octoparse. Web scraping software can also be custom built for a specific requirement, usually by a hired developer who will use a platform such as apify.com to scrape data from any website.
How is Web Scraping performed?
Web scraping is implemented via a small piece of code, known as a scraper bot, which fetches information from the website. The bot initiates a GET request to the website. When a response is received, the scraper parses the HTML document for a specific data pattern and then converts the data into the format the programmer designed into the bot, such as a spreadsheet or table. Scraper bots are used for a variety of tasks: price scraping compares prices across competing marketplaces, while contact scraping steals data such as email addresses and phone numbers from eCommerce sites and uses it for promotional emails, WhatsApp promotions, and similar brand marketing.
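To make the GET-and-parse flow concrete, here is a minimal scraper-bot sketch using the requests and BeautifulSoup libraries. The URL, the CSS selectors, and the output columns are purely illustrative assumptions; a real bot must also respect the target site's terms of service and robots.txt.

```python
import csv

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/products"  # hypothetical page containing a product table

# Step 1: the bot issues a GET request to the website.
response = requests.get(URL, timeout=10)
response.raise_for_status()

# Step 2: parse the returned HTML document for the pattern we care about.
soup = BeautifulSoup(response.text, "html.parser")
rows = []
for item in soup.select("table.products tr")[1:]:   # skip the header row
    cells = [cell.get_text(strip=True) for cell in item.find_all("td")]
    if len(cells) >= 2:
        rows.append({"product": cells[0], "price": cells[1]})

# Step 3: convert the data into the format the programmer designed (here, a CSV file).
with open("products.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["product", "price"])
    writer.writeheader()
    writer.writerows(rows)
```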
Protection from malicious Web scraping?
Another form of scraping is content scraping, where entire content is copied and pasted and can be misused by a fraudulent website. Web scraping is considered malicious and illegal when data is extracted without the consent of website owners. The most common use cases for such scraping are price and content theft. To counter the attempts made by scraper bots, several techniques are used, such as HTML fingerprinting, IP reputation, behaviour analysis, and progressive challenges such as CAPTCHA.
Reference data introduction
Why do we need reference data? Reference data helps classify and categorise information. It is a type of Master Data, and it typically changes very slowly (if at all). Examples include:
- ISO codes
- Country codes
- Product codes
Reference data doesn't change very often, hence it's important we get it right. To that end, one of the main things you can do with reference data is force standardisation by having people choose from a predetermined list of values. You could think of this for something like a country code or a country telephone code. Imagine people having to type their own country code in a free-form textbox. Some people might put 00 in front while others might put + instead, or simply leave it blank. Let's use Singapore as an example: a number entered as "( ) 97503525" is missing its country code entirely. If we instead provide a dropdown of reference data for people to select their country, the country code can be populated for them automatically. This standardised information can then be used by other business processes and other teams across the organisation. For example, it'll make sure we're not calling up our customers using an incorrect country code.
A healthcare use case
If we are prescribing drugs to patients, we have to be very careful. The wrong dosage might make someone sick. The wrong product might kill someone. And we must know what we have prescribed and to whom, so we can re-stock our supplies and invoice patients for the products they've received.
Take Ibuprofen as a simple example. Many people take this to fight pain and inflammation. Some people take the brand name "Advil"; others take a generic "Ibuprofen". Both are the same medicine, and both will have the same benefit for our patients. If we can standardise the reference data so the uninitiated know that they can prescribe either Advil or a generic Ibuprofen safely and with the same effects, it allows us to stock multiple brands whilst still being able to search for "ibuprofen" and be returned a list of all types of this medicine that we can use.
If we actually standardise – have reference data for each different drug and each different quantity of drug that is administered – it becomes easier for the organisation to know exactly what was prescribed and delivered to whom. As a result, they'll know how much to bill for each of these products. Whereas if we use different terminologies, there is the chance misunderstanding might occur. For example, if I'm looking at Advil being dispensed versus ibuprofen, I'm unable to match that up as easily as I would be if we had a common identifier, some reference data that allows us to know that both of these products are indeed the same thing.
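A tiny sketch of the country-code idea is shown below. The lookup table is invented for illustration: the application only accepts countries that appear in the reference data, and the dialing code is looked up rather than typed by the user.

```python
# Hypothetical reference data: a slowly-changing lookup table of ISO country codes
# and their international dialing codes.
COUNTRY_DIALING_CODES = {
    "SG": "+65",   # Singapore
    "GB": "+44",   # United Kingdom
    "US": "+1",    # United States
}

def format_phone_number(country_code: str, local_number: str) -> str:
    """Standardise a phone number by taking the dialing code from reference data."""
    if country_code not in COUNTRY_DIALING_CODES:
        raise ValueError(f"Unknown country code: {country_code!r}")
    return f"{COUNTRY_DIALING_CODES[country_code]} {local_number}"

print(format_phone_number("SG", "97503525"))  # -> '+65 97503525'
```

Because every number is standardised through the same lookup, downstream teams never see a bare or inconsistently formatted country code.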
<urn:uuid:5551ae05-0b20-47ed-b452-9825e9d7fb6d>
CC-MAIN-2022-40
https://academy.cognopia.com/topic/reference-data-introduction/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00466.warc.gz
en
0.914375
667
2.796875
3
What is Two-Factor or Multi-Factor Authentication? Multi-Factor, also known as Two-Factor Authentication (aka MFA and 2FA), is the gold standard for strong authentication. Banks mandate its use when logging into your financial accounts. We all should learn from banks and enable 2FA on all our "critical accounts". There is one critical account that we'll focus on in this article: LastPass, a password manager. Nearly all password managers have this capability, so if you aren't using LastPass, research and implement 2FA on your password manager today. You have too much riding on this technology not to. According to this Symantec info-graphic, "80% of data breaches could be eliminated by the use of two-factor authentication." What is Two-Factor Authentication? 2FA is the combination of any two of the following three identification factors:
- Something you know – a password, passphrase, or a geometric unlock shape used to unlock your Android phone;
- Something you have – your cell phone's ability to provide a random 6-digit code or to receive a code from a text message;
- Something you are – your physical characteristics such as a fingerprint, facial recognition, voice recognition, or even an iris scan.

If you use two of these three identification factors, you are using 2FA to authenticate yourself, and your critical accounts and the data they contain will be properly secured. If you would like more information on this, read this article on Two-Factor Authentication. Setting up 2FA on LastPass "Teams" To set up 2FA on your LastPass Teams account, watch this short video. You will learn how to enable different 2FA solutions and how to apply 2FA requirements to select users. - All critical accounts, especially your password manager, must have 2FA enabled on them. This helps to stop over 80% of online breaches according to Symantec.
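The rotating 6-digit code behind the "something you have" factor is usually a time-based one-time password (TOTP). As a minimal sketch of how such a code is computed (following RFC 6238 with SHA-1, the scheme most authenticator apps use), the Python below derives the current code from a shared secret; the secret shown is a made-up example, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6):
    """Compute the current time-based one-time password (RFC 6238, SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // interval)          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Hypothetical shared secret; in practice it comes from the QR code shown
# when you enrol your phone with a service such as a password manager.
print(totp("JBSWY3DPEHPK3PXP"))
```

Both the server and your phone hold the same secret and run the same calculation, which is why the code shown by your authenticator app matches what the service expects, and why an attacker with only your password still cannot log in.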
<urn:uuid:c87b52e3-e8de-4574-b0d2-bfb20f22a078>
CC-MAIN-2022-40
https://cyberhoot.com/howto/how-to-setup-two-factor-authentication-in-lastpass/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00466.warc.gz
en
0.904293
416
3
3
Facebook hopes to one day be permitted to draw young children into the Facebook fold. CEO Mark Zuckerberg recently suggested altering legal restrictions that prevent children younger than 13 from signing up for a Facebook account. Currently, the Children’s Online Privacy Protection Act (COPPA), a federal law passed in 1998, mandates that 13 is the minimum age at which a child may sign up for accounts on sites like Facebook. Zuckerberg argued that in our increasingly technology-driven society, education in online media is what will set students apart and boost the economy, according to a report in the International Business Times on Monday. He maintained that learning must start at a very young age. Learning From Facebook? It’s unclear whether Zuckerberg has plans to facilitate more learning opportunities on his social networking site or if he’s simply targeting a younger crowd of Facebook users. The Web can indeed be a powerful learning tool, according to Judy Harris Helm, president of Best Practices and an expert in early childhood education. “Students can interview experts from far away, visit webcams of sites, share their work and connect with other students of the world researching the same topic,” she told TechNewsWorld. However, there are already numerous sites, activities and teaching tools in place online. Those sites, monitored by teachers, don’t have the same risks as a mostly unrestricted page like a Facebook profile, where personal information is shared and spread in an instant. In this way, children can become victims of Internet bullying or scams. Educators also worry about the maturity levels of children while navigating the Web. “Using social networking sites requires focus and control. This is something that children are developing, but it takes a long time to mature. What is more important, when children are under stress, such as angry at a classmate or feeling excluded, their ability to exercise control diminishes, so they lash out and say things they later regret,” Helm added. Under the Legal Radar Aside from Zuckerberg’s comment, Facebook has not released any statements indicating it’s gearing up for a legal battle. It’s a battle they may not have to face, however. Currently, even though federal law prohibits it, children younger than 13 can join Facebook simply by entering a different, earlier birth year. “I don’t know that Facebook has the power to change the law. But it’s unfortunate that the effect of the law is that most e-commerce companies won’t serve people under 13, and as a result the first exposure kids have to the modern Internet is lying about their age to get on the site in the first place,” Mark A. Lemley, a professor at Stanford Law School with an expertise in technology law, told TechNewsWorld. Even if kids are lying, some parents tolerate that ruse. In fact, a recent study from Liberty Mutual’s Responsibility Project states that the number of parents who say they’d let their 10- to 12-year-old child open a social media account doubled in the past year. At the same time, parents who monitor their children’s accounts rose to 70 percent in 2011, up from 55 percent the year before. Facebook hopes parents and educators will continue to keep their eyes on children as young users increase. “As Mark noted, education is critical to ensuring that people of all ages use the Internet safely and responsibly. We agree with safety experts that communication between parents or guardians and kids about their use of the Internet is vital.
We believe that services such as Facebook have a role to play in encouraging this,” Andrew Noyes, manager of public policy communications at Facebook, told TechNewsWorld. Law experts speculate young users will stay under the legal radar, at least in the near future. Although possible to overturn, the protection aspects of COPPA are mostly unopposed. “Every parent knows that kids under 13 are either using Facebook, or nagging to be allowed to use Facebook, or both. I don’t think, however, that Congress is going to look very favorably on reducing online protections for children in an election year,” A. Michael Froomkin, professor at the University of Miami School of Law with an expertise in Internet law, told TechNewsWorld. Legislation to take away some of that security is unlikely to gain much public support. “We protect children from dangers during this time period of growth. We do not allow them to drive a car. We do not allow them to drink alcohol. We do this because they do not have the maturity to inhibit or control their behavior. Children do not need this,” said Helm.
<urn:uuid:690538fa-a954-4200-8d8a-16944afbacd6>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/facebook-wants-to-hook-em-young-72518.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00466.warc.gz
en
0.952727
978
2.640625
3
NASA earlier this month entered an agreement with Arx Pax to use its Magnetic Field Architecture technology in hardware that will let astronauts move tiny satellites without touching them. The Space Act Agreement marks a major milestone for Arx Pax, CEO Greg Henderson said. “It’s exciting to work hand in hand with NASA’s brilliant team of scientists and engineers. We’re thrilled about the potential impact we can make together.” Henderson and his wife, Jill Henderson, last year launched a successful Kickstarter campaign to fund development of a functional hoverboard based on the technology. Magnets in Space? NASA has been seeking to create a magnetic tether that can be used to couple and uncouple microsatellites called “CubeSats.” It’s interested in exploring whether this tech can be used in a space environment, said Luke Murchison, a project manager at NASA’s Langley Research Center. NASA in the near term will work to identify the constraints of the magnetic tether technology with regard to its applications in low-Earth orbit, he said. “In the long term, we are interested in developing technology to allow the autonomous assembly of small modular satellites,” Murchison told TechNewsWorld. “That would let us create entirely new satellite architectures.” CubeSats are a small form factor for satellites, said Alex Saunders, a student working at the CubeSat Lab at California Polytechnic State University. “They come in small 10-by-10-centimeter cubes, and they can be used for a variety of things,” he told TechNewsWorld. “A lot of them are used for tech demos in space.” A tech firm will reach out to CubeSat researchers to test prototypes of their products in space, said Saunders. They’ll put them on a CubeSat, “and we’ll get data back to them so that they can say they tested it in space.” Along with launching tech demos into space, CubeSats also are used to conduct scientific experiments in space. “One of the projects we’ve worked on here at Cal Poly is ExoCubes, where we put a small mass spectrometer in space and are reading ions and neutral data for certain particles at a certain level in our atmosphere,” noted Saunders. Coupling and uncoupling CubeSats using Magnetic Field Architecture would make a good tech demo and could serve as an important step to scaling up, he said. Building It Out Someday, larger satellites even could use magnetic tethering to dock or undock with space stations, Langley’s Murchison suggested. This is just the start of MFA’s use in space. NASA plans to iterate on the tech through its alliance with Arx Pax. “We are currently developing a number of prototypes over the next one to two years,” he added, “and will be exploring alternative designs with this technology.”
<urn:uuid:51e27486-3687-4ad6-846e-392f622fcde0>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/nasa-may-move-microsatellites-magnetically-82484.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00466.warc.gz
en
0.920065
640
3
3
By definition, cyber security is the practice of protecting computer networks and systems from unauthorised access or theft – this includes both hardware and software security measures. Cyber security and the threat from cyber attackers are often overlooked as an important business issue, so it’s worth taking a closer look at why cyber security is crucial to businesses in the UK today. First, let’s take a look at one controversial episode – the Sony PlayStation hack in 2011. The Sony PlayStation hack On April 26, 2011, the PlayStation Network and Qriocity service (now the Sony Entertainment Network) was taken offline by Sony. It was later revealed that the attack had been carried out by a group of hackers known as ‘Anonymous’ who had gained access to the network’s servers and had stolen customer data, including credit card details. This resulted in Sony being fined £250,000 by the UK’s Information Commissioner’s Office (ICO) for failing to protect its customers’ personal data. How did Sony fix the problem? After the attack, Sony implemented several security measures to help prevent a similar incident from happening again. This included adding extra layers of security to its network, increasing monitoring of suspicious activity, and improving staff training on cyber security. What can we learn from the Sony hack? The Sony hack was a wake-up call for businesses of all sizes. It showed that even large, well-known companies can be vulnerable to attack if they don’t have adequate security measures in place. It also highlighted the importance of having a robust cyber security strategy in place to defend against the growing threat of cybercrime. How does cyber security benefit society? Having seen what happened during the Sony PlayStation hack, we can see how cyber security benefits society in many ways. It helps to protect our critical infrastructure: Cyber security helps to protect personal information, financial data and important services from hackers. It helps to keep the internet safe and secure: By working to prevent cyber attacks, we can make sure that the internet remains a valuable platform that anyone can use. It creates a more secure and trustworthy online environment: This is especially important for businesses that rely on the internet for a variety of tasks including communication, marketing, and sales. It ensures businesses can continue to operate smoothly and efficiently: By ensuring that their networks and systems are protected from cyber attacks, businesses can focus on growing their operations and delivering valuable services for their customers. It boosts customer confidence: Through online accounts for shops and services, the general public are entrusting more information than ever to businesses. Businesses that ensure that data is protected will retain the loyalty and confidence of their customers. How does cyber security help to protect our critical infrastructure? Critical infrastructure refers to the systems and networks that are essential for the functioning of a society or economy. This includes things like power plants, water treatment facilities, financial institutions and transport systems. These systems are often targeted by cyber criminals because they can cause widespread disruption if they are successfully attacked. Cyber security helps to protect these critical systems by preventing or mitigating the damage that can be caused by such attacks. 
Some of the ways that cyber security helps to protect our critical infrastructure include: · Preventing unauthorised access: Preventing unauthorised access to our critical infrastructure reduces the likelihood of theft and vandalism. · Protecting against cyber attacks: Cyber attacks can cause serious damage to our critical infrastructure. By protecting against these attacks, we can reduce the risks of downtime and potential power outages. · Ensuring continuity of operations: In the event of a cyber attack, it is important to have continuity of operations. This means that our critical infrastructure will be able to continue to function, minimising the disruption of an attack. By protecting our critical infrastructure, we can ensure the continued functioning of essential services and prevent disruptions to society. Cyber security is therefore vital for the safety and security of the modern world – as was illustrated by the ‘WannaCry’ ransomware attack. The ‘WannaCry’ ransomware attack One of the best illustrations of cyber attackers targeting our infrastructure is the ‘WannaCry’ ransomware attack in 2017. In May, a ransomware attack known as ‘WannaCry’ hit hundreds of thousands of computers in more than 150 countries. The attack encrypted user files and demanded a ransom to decrypt them. Among the many organisations that were affected were hospitals that had to cancel appointments and operations. Massive disruption was also caused to businesses around the world. How did industries fix the WannaCry problem? After the WannaCry attack, there was a huge push for businesses to improve their cyber security. This included identifying and patching software vulnerabilities, improving staff training on cyber security and increasing investment in cyber security. The WannaCry attack showed that even the most well-protected systems can be vulnerable to attack. It also highlighted the importance of having a robust cyber security strategy in place, and the need for businesses to regularly update their software and systems to protect against new threats. How does cyber security help to protect financial data? It is vital that we keep our financial data safe and secure. More and more of our personal and business information is digital, stored on computers, servers and other devices. If this information is stolen, it could be used to commit fraud or identity theft. By implementing proper cyber security measures, we can help to prevent this from happening. Some of the ways in which cyber security can help to protect financial data include: · Encrypting data: This makes it much harder for unauthorised individuals to access and read sensitive information. · Implementing access controls: This limits who can view and edit financial data. · Creating backups: This ensures that there is a copy of the data in case it is lost or stolen. By protecting our financial data, we can make sure that it remains safe and secure. This is important for both individuals and businesses. How does robust cyber security benefit the modern world? There’s no doubt that cyber security is important for the safety and security of both individuals and businesses. By working to prevent cyberattacks, we can make sure that the internet remains a valuable tool for everyone. Perhaps the most obvious way that cyber security benefits everyone is to protect our critical infrastructure. If these systems were to be compromised by a cyber attack, it could have a devastating impact on our lives. 
Another key benefit of cyber security is the protection of our personal information. In today’s digital age, we share a lot of personal data online, whether it’s through social media, online shopping, or banking. If this data were to fall into the wrong hands, it could be used for malicious activities – as we saw in the Ashley Madison attack. What happened in the Ashley Madison hack in 2015? In July 2015, the dating website Ashley Madison was hacked, and more than 30 million user accounts were exposed. The hackers released information such as names, addresses, phone numbers and credit card details of Ashley Madison’s customers. This resulted in a lot of people being blackmailed. How did Ashley Madison fix the problem? After the attack, Ashley Madison implemented several security measures to help prevent a similar incident from happening again. This included increasing security around its customer data, improving staff training on cyber security and adding extra layers of security to its website. Cyber security isn’t just for businesses – it’s for all of us In the UK, cyber security is important for several reasons. Because the UK is a highly connected country, this brings lots of opportunities for criminals to exploit vulnerabilities to access sensitive information or systems. And while we appreciate why businesses and organisations must invest in cyber defences – it’s also important that we take precautions at home too. So, what can the public do to ensure they are protected online? The steps that everyone can take to ensure their safety online: · Use strong passwords for all online accounts · Never reuse passwords across different accounts · Avoid using personal information as passwords · Use a reputable security suite or antivirus program and keep it up to date · Be cautious when clicking on links or opening email attachments · Use caution when sharing personal information online · Keep software and operating systems up to date · Back up important files regularly. The growing impact of hackers on our everyday lives is regularly featured on TV and in newspapers as victims tell of their own misfortune. And they all stress how important the above steps are for protecting our personal information – especially bank accounts. Remember, cyber security is for everyone, not just big firms. What are the latest developments in cyber security? Since cyber-attacks are constantly evolving, the methods to defend businesses are evolving too. The latest developments in cyber security include the use of biometrics, artificial intelligence (AI), and behaviour analytics. These technologies are being used to create more secure systems that can better protect against data breaches and other threats. Additionally, companies are working on developing new ways to respond to cyber-attacks, such as using machine learning to automatically detect and block malicious activity. What should businesses do? There is no one-size-fits-all answer to this question, as the best approach for a business will vary depending on the specific industry and type of data they have. However, there are some general steps that all businesses can take to improve their cyber security posture. These include: · Implementing security controls to mitigate those risks · Training employees on cyber security best practices · Testing systems regularly to ensure they are functioning properly · Responding quickly and effectively to any incidents that do occur. By taking these steps, businesses can create a robust cyber security defence. 
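One of the protections discussed earlier in this article, encrypting data at rest, can be illustrated with a short sketch. The Python below uses the third-party cryptography package (Fernet); it assumes that package is installed and that, in a real deployment, the key would come from a key management service rather than being generated inline. The sample record is invented.

```python
from cryptography.fernet import Fernet

# Illustration only: a production system would fetch this key from a key
# management service and control who is allowed to use it.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"cardholder=J. Smith; pan=4111111111111111"  # made-up sample data
token = cipher.encrypt(record)    # ciphertext, safe to store or back up
restored = cipher.decrypt(token)  # only possible with the same key

assert restored == record
```

Encrypting data in this way complements, rather than replaces, the access controls and backups mentioned above: the backups themselves should be encrypted, and the decryption key should be restricted to the people and systems that genuinely need it.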
Why is cyber security so important? As we have seen from the examples above, cyber attacks can have a serious impact on both individuals and businesses. That’s why it’s so important to have strong cyber security measures in place to protect your data from being accessed or stolen by hackers. By investing in cyber security, you can help to keep yourself safe from the potentially devastating consequences of a cyber attack.
<urn:uuid:ef7d88c4-43e8-483b-8bf9-5798f6ad3bd5>
CC-MAIN-2022-40
https://aag-it.com/how-does-cyber-security-benefit-society/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00666.warc.gz
en
0.947974
2,228
3.25
3
The purpose of a programming validation model is to enable the development of validations without forcing the developer to write the same validation code several times. All checks and validations in the code should only be made once, and the code should be public and accessible from all parts of the application (i.e. different components). This means that ordinary interface mechanisms should be used as much as possible. Validations should always be done in the server, but the client also validates input in the window before data is sent to the server, for example data type and length, and whether the attribute is mandatory. It is preferable to reduce the number of validation errors in the client, since the server will only show the first validation error on each save and not the complete list of errors. There are four different types of validations:

| Validation Type | PL/SQL Method |
| --- | --- |
| Validation of attributes used inside a class | Check_Common___, Check_Insert___, Check_Update___ |
| Validation of fields for direct client response | Separate methods |
| Validation of foreign keys used from other classes | Exist |
| Validation before removing objects from a class | Check_Delete___ |

The following sections describe the different server validation types, how they work and how they are integrated in the Foundation1 development environment. Validation of attributes is done in the methods Check_Insert___ and Check_Update___. These two methods include only the validation that is specific to inserting or updating records. Validations that are common to both insert and update should be done in Check_Common___ to avoid duplicated code; this method is called from the other two. The client calls the method Modify__ with the option CHECK to run the validation part without inserting or updating any rows in the database. This option solves situations with error messages, without performing any rollback of transactions, when the client tries to save information. The example below shows how the base validation methods work. Note that these procedures are automatically generated and contain only validation generated from the model. Any entity-specific validation can be added by overriding the methods. It is also possible to change the validation behavior with different code generation properties. Another type of validation is to check whether a specific base data value exists within its corresponding logical unit (a typical foreign key reference) to ensure database consistency. The standard framework method is named Exist. The method is public and therefore accessible from all other logical units, and is implemented as a stored procedure that takes the foreign key as input parameters. The algorithm is as follows:
- If the attribute exists, the procedure Exist returns to the calling procedure, which then continues.
- If the attribute does not exist, the procedure Exist calls the system service method Record_Not_Exist in package Error_SYS, which will do the following:
  - Fetch a proper language-independent error message string from the text database.
  - Raise an Oracle error by calling the system procedure raise_application_error with the specific error string.
  - The error is raised through the call stack to a valid exception handler in the Oracle Server.

One typical issue in the development of business applications is to decide whether it is possible to remove an entity instance and how the entity will know about references from other entities.
For example, should it be possible to remove a customer if it has "unpaid customer invoices" and the customer status is "active"? Business rules like this can be implemented by overriding the standard method Check_Delete___. Complex code should be placed in a separate procedure for this purpose and called from Check_Delete___. Check_Delete___ has three purposes:
- Decide whether deletion is possible due to the attributes within this entity (like the status "active" in the example above).
- Decide whether deletion is possible due to the existence of foreign keys, and whether follow-on actions are needed.
- Decide whether deletion is possible due to the entity attributes of foreign keys (like the "unpaid invoices" in the example above).

There are three ways to handle the deletion of a record if it is referenced from other records:
- Restricted delete
- Cascade delete
- Custom delete

For descriptions about the different delete behaviors and how to use them, see Associations.
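The generated PL/SQL example referred to earlier is not reproduced in this excerpt. Purely as an illustration of the call pattern it describes (shared validation in Check_Common___ called from the insert and update checks, and Check_Delete___ guarding removal), here is a rough Python-style sketch; the class, attribute names and rules are hypothetical and do not come from the framework itself.

```python
class CustomerValidation:
    """Sketch of the validation call pattern described above (not real framework code)."""

    def check_common(self, attrs):
        # Validation shared by insert and update, written only once.
        if not attrs.get("name"):
            raise ValueError("Name is mandatory")

    def check_insert(self, attrs):
        self.check_common(attrs)
        # Insert-specific validation would go here.

    def check_update(self, attrs):
        self.check_common(attrs)
        # Update-specific validation would go here.

    def check_delete(self, customer):
        # Decide whether deletion is allowed, e.g. block removal of an
        # active customer that still has unpaid invoices.
        if customer.get("status") == "active" and customer.get("unpaid_invoices"):
            raise ValueError("Customer cannot be removed")
```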
<urn:uuid:6520b374-a8ff-414f-ae83-36162ade685c>
CC-MAIN-2022-40
https://docs.ifs.com/techdocs/21r2/050_development/027_base_server_dev/030_mechanisms/080_validations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00666.warc.gz
en
0.857621
925
3.390625
3
We all know business processes are challenging and complicated. Meeting sales targets, hiring new competent employees, creating business plans, monitoring KPIs (Key Performance Indicators), adjusting budgets and shaping business strategies are only a few of the numerous business processes that put employees under physical and emotional pressure. Companies are therefore seeking ways to make business tasks more pleasant and engaging for employees, as a means of reducing stress and enhancing productivity. In this context, how about thinking of employees like kids, the most stress-free and enthusiastic beings? Children enjoy engaging in playful activities and games, which helps them build confidence and push the limits of their imagination. At the same time, it also teaches them teamwork and facilitates mental development. Most important, games make kids happy. Since teamwork, confidence and happiness are essential for successful and productive employees, you can consider employing games for supporting some of the corporate business processes and for making your workplace happy and inspiring. According to Gartner, gamification is “the use of game mechanics and experience design to digitally engage and motivate people to achieve their goals”. In other words, gamification in the corporate environment refers to the engagement of employees in “serious games”, i.e. games whose scope goes beyond entertainment, to the support of business processes. Nowadays enterprises are increasingly employing serious games and gamification processes in order to boost employee engagement and productivity in business processes, but also in order to influence employees’ behavior at work. Serious games are currently employed for business processes associated with training, marketing, recruitment, sales and more. Here is an indicative (yet non-exhaustive) list of examples: The development of effective and successful serious games in the above categories is both art and science. Serious games should feature pleasant, motivating and user-friendly user interfaces. However, they should also be designed on the basis of scientifically sound theories, which explain the ways to boost learning and working performance. Adherence to such principles is a key to enabling employee learning and behavior shifts, based on mechanisms such as attainable challenges, rewards (badges, points) and public recognition. Here is an indicative list of principles that should drive game development: The above list is by no means exhaustive. There are numerous other scientific factors that should be taken into account for the design of a successful serious game, beyond the ever-important artistic, ergonomic and aesthetic aspects. Gamification is a productivity tool which can enable enterprises to achieve productivity benefits far beyond conventional training of the workforce. However, gamification projects have their own challenges, as the design of proper games is associated with a wide range of technical, economic, social and cultural factors. It is a process that should be tailored to the needs of the enterprise, as there is no “one-size-fits-all” solution, even for enterprises in the same sector.
<urn:uuid:ba3a346f-c58a-4179-b055-7f79087a309d>
CC-MAIN-2022-40
https://www.itexchangeweb.com/blog/serious-games-employee-training-tool-or-productivity-driver/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00666.warc.gz
en
0.94255
854
2.609375
3
Big data analytics refers to examining large amounts of data to uncover hidden patterns, correlations and other insights. With today’s technology, it’s possible to analyze your data and get answers from it almost immediately. Understanding the future trends of big data matters because big data analytics helps organizations harness their data and use it to identify new opportunities. That, in turn, leads to smarter business moves, more efficient operations, higher profits and happier customers. The Internet of Things (IoT) has added innumerable new sources of big data to the data management landscape and will be one of the major big data trends in the future. Electronic devices and sensors on machines all generate huge amounts of data for the IoT. Big data trends, spanning business intelligence (BI), cloud, data analytics, the Internet of Things (IoT) and machine learning, are here.
<urn:uuid:83ab54f4-bb4d-469b-b6b2-71fa345fd337>
CC-MAIN-2022-40
https://www2.inceptasolutions.com/big-data-trends-in-future/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00666.warc.gz
en
0.900803
176
2.78125
3
University of Washington and Microsoft researchers revealed on February 21st that they have taken a significant step forward in their quest to develop a DNA-based storage system for digital data. In a paper published in Nature Biotechnology, the members of the Molecular Information Systems Laboratory (MISL) describe the science behind their world record-setting achievement of 200 megabytes stored in synthetic DNA. They also present their system for random access—that is, the selective retrieval of individual data files encoded in more than 13 million DNA oligonucleotides. While this is not the first time researchers have achieved random access in DNA, the UW and Microsoft team have produced the first demonstration of random access at such a large scale. One of the big advantages to DNA as a digital storage medium is its ability to store vast quantities of information, with a raw limit of one exabyte—equivalent to one billion gigabytes—per cubic millimeter. The data must be converted from digital 0s and 1s to the molecules of DNA: adenine, thymine, cytosine, and guanine. To restore the data to its digital form, the DNA is sequenced and the files decoded back to 0s and 1s. This process becomes more daunting as the amount of data increases—without the ability to perform random access, the entire dataset would have to be sequenced and decoded in bulk in order to find and retrieve specific files. In addition, the DNA synthesis and sequencing processes are error-prone, which can result in data loss. MISL researchers addressed these problems by designing and validating an extensive library of primers for use in conjunction with polymerase chain reaction (PCR) to achieve random access. Before synthesizing the DNA containing data from a file, the researchers appended both ends of each DNA sequence with PCR primer targets from the primer library. They then used these primers later to select the desired strands through random access, and used a new algorithm designed to more efficiently decode and restore the data to its original, digital state. “Our work reduces the effort, both in sequencing capacity and in processing, to completely recover information stored in DNA,” explained Microsoft Senior Researcher Sergey Yekhanin, who was instrumental in creating the codec and algorithms used to achieve the team’s results. “For the latter, we have devised new algorithms that are more tolerant to errors in writing and reading DNA sequences to minimize the effort in recovering this information.” Using synthetic DNA supplied by Twist Bioscience, the MISL team encoded and successfully retrieved 35 distinct files ranging in size from 29 kilobytes to over 44 megabytes—amounting to a record-setting 200 megabytes of high-definition video, audio, images, and text. This represents a significant increase over the previous record of 22 megabytes set by researchers from Harvard Medical School and Technicolor Research & Innovation in Germany. “The intersection of biotech and computer architecture is incredibly promising and we are excited to detail our results to the community,” said Allen School professor Luis Ceze, who co-leads the MISL. “Since this paper was submitted for publication we have reached over 400 megabytes, and we are still growing and learning more about large-scale DNA data storage.” With this new milestone, MISL researchers have succeeded in demonstrating how DNA-based data storage—known to be significantly denser and more durable than existing digital storage technologies—can be practical, too. 
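As a toy illustration of the bit-to-nucleotide conversion described above, the Python sketch below packs two bits into each base and unpacks them again. It deliberately ignores everything that makes real DNA storage work at scale, such as error-correcting codes, avoidance of problematic sequences and the PCR primer targets used for random access.

```python
# Toy mapping: two bits per nucleotide (A, C, G, T).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}


def encode(data):
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))


def decode(strand):
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))


strand = encode(b"Hi")
assert decode(strand) == b"Hi"
print(strand)  # CAGACGGC
```

Real encodings are considerably more involved, which is part of why the error-tolerant decoding algorithms mentioned by the researchers matter so much.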
The UW and Microsoft team estimates its approach will scale to physically isolated pools of DNA containing several terabytes each. When dehydrated for storage, these pools of data would be several orders of magnitude denser than tape. And as the costs associated with DNA sequencing and synthesis continue to decline, the team foresees substantial activity devoted to the development of DNA-based data storage in future. “DNA data storage is an incredibly exciting area, and it is great to see our progress recognized by such a reputable publication as Nature Biotechnology,” said Microsoft Senior Researcher Karin Strauss, co-leader of the MISL and an affiliate professor at the Allen School. “We are enthusiastic to continue working at the intersection of biotechnology and IT.” It was this intersection that initially interested Allen School Ph.D. student Lee Organick, who performed many of the wet-lab experiments the team used to validate its approach. Having made the leap from undergraduate studies in molecular biology to computer science, she is enthusiastic about the potential impact of the MISL’s approach. “We’re at a time when a lot of groundbreaking research will be done at the intersection of fields,” said Organick. “When I heard about this project it seemed a bit outlandish, but it captured my imagination.” The makeup of the lab—which unites researchers from multiple disciplines and organizations—is another plus, in Organick’s view. “Having worked with such a creative and diverse team of people for several years now, they’ve shown me that projects like this one are achievable,” she said. “And it’s just as exciting as it was the first day.” This article was originally published here.
<urn:uuid:9cb518b0-ea68-4cea-bd6a-26e0a29b4f3c>
CC-MAIN-2022-40
https://dataspan.com/blog/researchers-achieve-random-access-large-scale-dna-data-storage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00666.warc.gz
en
0.946923
1,217
3.375
3
Reading comprehension and agility are two of the most important factors in our children’s lives. Most standardized tests have an entire section dedicated to this idea, yet one of the simplest solutions to raising test scores, classroom lighting, has been overlooked for years. So, is it true? Do brightness and color of light really affect reading performance? In the first post of our Classroom Lighting Matters series, we discussed a study by Liebel et al., which explored many related ideas, such as how pupil size is affected by spectrum and light level, and how reading is affected by pupil size. By connecting these two ideas, we can see that reading speed is affected by both spectrum and light level. In the second post, we discovered how exposure to natural daylight early in the morning helps students prepare for a day of activity and stress. In the third and final post of our Classroom Lighting Matters series, we’ll take a deeper look into Liebel’s study to show how spectrum and light levels affect reading speed. Conditions of the study mimicked classroom lighting The study was set in a simulated office environment. Not only did the lighting mimic a typical modern facility, but so did the size of the text. For this study, lighting was 200-500 lux horizontal, and print size was 6-12 point type. The participants were 18-49 years old, and they were divided into three groups with a roughly equal number of males and females in each group. Each person was tested at three light levels: twice under an 830 lamp and twice under an 865 lamp, for a total of 12 separate lighting combinations. Does blue spectrum classroom lighting improve reading speed? To measure the spectral color of light, we use a measure called correlated color temperature (CCT). The study showed that under a fixed spectrum, an increase in the lighting level decreases the size of the pupil. Similarly, under a fixed luminance, a higher CCT (blue light) also decreases the size of the pupil.
Fixed spectrum + Increased lighting levels = Smaller pupils
Fixed luminance + Higher CCT (blue light) = Smaller pupils
This means that spectrum and light level both affect reading speed and accuracy. So it’s an even trade-off: A higher CCT lamp can improve reading performance at a lower light level, while a lower CCT lamp provides equal reading performance at a moderate light level, or lighting typical of interior office spaces. Note: Because of this, it is possible to save lighting energy by simply replacing a current high-energy lighting source with one having more bluish spectral content, i.e., a higher CCT at a lower lighting level. The U.S. Department of Energy has more information on saving energy costs. We can conclude a definite correlation exists between pupil size and reading performance. While spectrum, lighting levels and lighting distribution are involved, research shows that spectrum plays a larger role. Several studies have been conducted on higher blue spectrum light in the classroom and student achievement. The process of introducing more natural light in schools has commonly been referred to as “daylighting.” The following are findings compiled from Healthyschools.org: - Students in Capistrano School District in Orange County, CA who had more daylighting in their classrooms progressed 20 percent faster on math tests and 26 percent faster on reading tests in one year than students with a lesser amount of daylighting.
- The Poudre School District in Fort Collins, CO found a 7 percent improvement in test scores in classrooms that used daylighting and a 14-18 percent improvement for students in classrooms with the largest window areas. - In Seattle, WA students in classrooms with the most access to daylight were found to test 9-15 percent higher than students in classrooms with the least window area. Erik Hinds is Vice President of Helping People at Make Great Light. For more information on how fluorescent light filters for classrooms can improve the learning environment, please visit the resource center.
<urn:uuid:d9d2dcda-92b3-4284-9157-06c2fdadf281>
CC-MAIN-2022-40
https://mytechdecisions.com/facility/how-does-classroom-lighting-affect-the-students-part-iii/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00666.warc.gz
en
0.944277
828
3.71875
4
In 2016, just under half of humanity owned a smartphone. That number has now risen to 80 %. Along with such a great expansion in mass connectivity comes great responsibility for mobile networks, the ether through which thousands of gigabytes of data navigate every day. Keeping these data safe and the networks secure has therefore become a key priority for mobile operators, especially as they continue grappling with new threats and vulnerabilities. According to a recent report1, 97 % of organizations in 2020 faced at least one mobile threat that used multiple attack vectors and at least 40 % of the world’s mobile devices are inherently vulnerable to cyberattacks. Over the past years, network security functions such as next-gen firewalls or security information and event management (SIEM) solutions have played a critical role in identifying and managing network threats for mobile networks. These functions are often paired with monitoring and deep packet inspection (DPI) capabilities, such as those provided by R&S®PACE 2, for timely detection of traffic and applications that are suspicious and anomalous. While application awareness greatly facilitates the implementation of various security policies for mobile networks, the use of load balancing, a mechanism by which traffic is allocated to a network subsystem, often leaves network security functions with huge visibility gaps, as a subscriber session is typically split into separate streams that are routed to different devices. How does this impact network security? Myriad mobile networks, myriad dangers Mobile networks are susceptible to attacks of various forms. Malware attacks, for example, easily take place when users unwittingly click on something that triggers a drive-by download which then installs malware such as viruses or spyware on a system. Spyware can garner information about a user’s internet usage, passwords and contacts and pass this to a third party. Viruses can actively harm a device and mine information. These attacks can be sourced from seemingly innocuous or legitimate links or applications. Mobile networks are also vulnerable to distributed-denial-of-service (DDoS) attacks in which a perpetrator tries to take down a website or web application by overwhelming it with traffic from multiple locations. Mobile devices and networks can get entangled in DDoS attacks in two ways: either by being used as part of the botnet that renders another web node unavailable or as part of the web service or process itself that is targeted by a botnet. Mobile networks can also be subject to fraud of various kinds. They can be abused by an imposter taking over a legitimate account, either by phishing, card fraud, call center fraud or through simply stealing devices. Compromised accounts can be used for various nefarious purposes. This can be the illegal usage of network capacity as in the case of unauthorized peer-to-peer applications and illegal tethering. Fraud can also be used to break into operators’ subscriber databases, leading to mass data or identity theft. These threats become even more pronounced with 5G. 5G’s service classes, namely enhanced mobile broadband (eMBB) and massive machine type communications (mMTC), will see the number of endpoints growing exponentially. 
These endpoints will increase the attack surface of the network, making it increasingly difficult for operators to secure these devices against tampering, hijacking and being manipulated as gateways for accessing valuable network resources and for launching attacks on the network. Complete visibility for intelligent load balancing For handling each of the threats discussed above, full visibility into a subscriber session is critical. Conventional load balancing, however, impairs this visibility by presenting only part of the malicious or anomalous traffic to the onward processing tool. Such tools gain full visibility only upon the completion of postprocessing reconciliations and aggregations. By then, however, attacks would have already penetrated the targeted applications and network resources and caused major damage. To address this, operators are moving to intelligent load balancing. The GTP subscriber resolution module by Rohde & Schwarz (R&S®GSRM) makes this possible. R&S®GSRM is an OEM software module that builds on the extensive and deep expertise of Rohde & Schwarz in mobile network intelligence. It uses the correlation of GTP control and user traffic to identify subscribers in real time, allowing all packets from a single subscriber session to be processed in the same sequence, delivering complete visibility into every session. R&S®GSRM can be embedded directly in a security tool or deployed in network packet brokers to deliver intelligent load balancing for various security subsystems in the mobile core. With subscriber awareness, a network packet broker can filter, aggregate and forward all packets from a single session to the same network security tool, increasing its visibility into sessions that are potentially malicious. Subscriber-aware threat management Subscriber awareness greatly improves the capacity and capabilities of security functions in the mobile core. Intrusion prevention systems, which monitor, report and block malicious activity in a network, can easily single out traffic that does not conform to normal patterns, for example, continuous sessions connecting to a sensitive application such as a banking site. Data overages can be discerned, indicating that a phone has been hijacked or a device stolen. Similarly, web filtering subsystems can also be improved, as attempts to access blocked or blacklisted applications can be identified instantaneously. Unusually large file transfers can be detected early before confidential and critical data is siphoned off enterprises and public agencies. The same would also apply to DDoS attacks, which become visible when a large number of successive requests is registered from the same subscriber. In all these use cases, network security solutions leverage complete visibility provided by R&S®GSRM which allows for packets from a single subscriber session to be processed together without any loss of information. R&S®GSRM provides subscriber awareness, helping to analyze attacks and network anomalies on a granular level so that risks inherent across different applications, users and geographies can be identified and used to design responsive policies for network security. This granular analysis enables security tools and operators to manage security incidents in the future. Data benchmarks based on historical usage patterns at the subscriber, subscriber group and application level are readily available through aggregation and filtering enabled by subscriber resolution that is provided by R&S®GSRM. 
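As a rough illustration of the idea of subscriber-aware load balancing (and not of the R&S®GSRM API itself), the Python sketch below steers traffic by hashing a subscriber identifier rather than per-packet flow tuples, so every packet of a session lands on the same security tool instance. The identifiers and instance names are hypothetical.

```python
import hashlib

# Hypothetical pool of downstream security-tool instances.
TOOL_INSTANCES = ["ips-0", "ips-1", "ips-2", "ips-3"]


def pick_instance(subscriber_id):
    """Send every packet of a subscriber's session to the same tool instance.

    subscriber_id stands in for whatever the GTP-C/GTP-U correlation yields
    (e.g. an IMSI or TEID). Hashing on it, instead of on per-packet 5-tuples,
    keeps the whole session visible to a single tool.
    """
    digest = hashlib.sha256(subscriber_id.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(TOOL_INSTANCES)
    return TOOL_INSTANCES[index]


print(pick_instance("imsi-001010123456789"))  # always maps to the same instance
```

With this kind of steering in place, the per-subscriber benchmarks and anomaly checks described above can be computed on one node, without post-processing reconciliation.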
In the context of 5G, the need for higher bandwidth and speeds across applications that are data-intensive and that connect to multiple end nodes simultaneously poses new challenges for operators. They have to ensure that a sudden shift in traffic patterns is not caused by the activities of threat actors. With session aggregation, such patterns become easy to decipher as the session information is complete and can be compared to past consumption patterns in real time. This helps to detect network abuses such as fraud or SIM cloning and the hijacking of end nodes aimed at stealing network resources and destabilizing the network. R&S®GSRM in combination with R&S®PACE 2 makes it possible for application and subscriber awareness to work hand in hand, delivering enhanced network intelligence to manage and improve mobile network security. R&S®GSRM uses a lightweight, fast-performing software module that can be deployed in the mobile core to restore complete visibility into user sessions for meaningful and effective management of subscriber traffic where traffic brokering is used. Network security vendors in particular benefit from subscriber-aware traffic filtering and aggregation enabled by an engine that is built specifically for this task, giving them the power to keep networks safe and going at all times. 1 Mobile Security Report 2021 by Check Point
<urn:uuid:002e6b08-f99d-470a-9cce-8851ddae638f>
CC-MAIN-2022-40
https://www.ipoque.com/blog/subscriber-awareness-for-mobile-network-security
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00666.warc.gz
en
0.933318
1,523
2.703125
3
Protect your site with the HTTPS protocol HTTPS (Hypertext Transfer Protocol Secure) is a protocol for communication over the Internet that protects the integrity and confidentiality of data exchanged between computers and sites. This protocol was created to secure the exchange of information between two web nodes and prevent phishing and hacker attacks by ensuring users safe and private use of a website, regardless of the content it hosts. Google labels as “unsafe” all web pages that collect sensitive data, such as login credentials, credit card data and bank transactions, without being served over HTTPS. To ensure that users’ browsers do not display a “not secure” warning for your web pages, you need to serve the entire website, or at least the pages that host form fields for passwords and credit card details, over HTTPS. In addition to sensitive payment data and login credentials, you must also protect personal data such as names, addresses, phone numbers, emails, and general preferences. For this reason, in addition to being fundamental for banks and e-commerce websites, adopting the HTTPS protocol is also recommended for any type of website, including blogs and amateur websites. HTTPS, therefore, protects the integrity and confidentiality of information exchanged between computers and websites through a Secure Sockets Layer (SSL) certificate. In particular, data sent via HTTPS is protected by Transport Layer Security (TLS), which provides three basic levels of protection:
- Encryption: The data exchanged is encrypted to protect it from eavesdropping. This means that while a user visits a website, no one can “listen in” on their conversations, track their activity across multiple pages or steal their information.
- Data integrity: Data cannot be changed or damaged during transfer, intentionally or unintentionally, without being detected.
- Authentication: Demonstrates that users communicate with the intended website, protects against man-in-the-middle attacks, and instills trust in users.

Advantages of the HTTPS protocol The main advantages of activating the HTTPS security protocol are:
- Authentication. Allows the browser to verify that you are browsing the correct site and that an attacker has not redirected you elsewhere.
- Protecting the integrity of a website. It helps maintain the integrity of communication between the website and the browsers of the users who visit it, preventing attackers from tampering with it and inserting unwanted advertisements or malware.
- Protecting users’ privacy and security. Because the communication is encrypted, third parties cannot read data about users and their behavior on the site.
- SEO. Google treats HTTPS as a ranking factor for websites in its search results, which penalizes sites still served over plain HTTP.

Best practices for HTTPS implementation To switch from HTTP to HTTPS, you must request a TLS/SSL certificate from your hosting provider. Thanks to the services offered by different platforms, you can often install one with a single click. The final step is to adapt the service to your website; for these steps it is better to be supported by your web agency, thus avoiding errors or slowdowns.
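For readers who want to verify the result after migrating, the short Python sketch below (standard library only) opens a TLS connection and reports the negotiated protocol and certificate expiry; the hostname is just an example. The default context verifies the certificate chain and hostname, so the call fails if the site cannot prove its identity.

```python
import socket
import ssl


def inspect_certificate(host, port=443):
    """Open a verified TLS connection and return the server certificate details."""
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return {"protocol": tls.version(), "certificate": tls.getpeercert()}


details = inspect_certificate("example.com")
print(details["protocol"], details["certificate"].get("notAfter"))
```

If the certificate has expired or does not match the hostname, the connection attempt raises an ssl.SSLError instead of returning, which is essentially the same check a browser performs before showing the padlock.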
<urn:uuid:d6d93eb0-9a8b-4201-ac73-478a44c995a5>
CC-MAIN-2022-40
https://arimas.com/2022/09/01/web-data-security-the-https-protocol/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00066.warc.gz
en
0.915628
666
3.609375
4
HIPAA standards help protect sensitive patient data known as protected health information (PHI). Hackers are constantly attacking healthcare organizations to steal data to sell on the black market. In addition, criminals may use healthcare data to commit fraud and identity theft. This can cause significant damage to patients. HIPAA and its purpose The Health Insurance Portability and Accountability Act was created mainly to streamline the flow of information in healthcare, instruct healthcare and health insurance companies on maintaining and protecting personally identifiable information, and address insurance coverage portability for U.S. citizens. What is PHI? Protected health information encompasses information that can be used to identify patients connected to a healthcare record. Examples include:
- Names or parts of names
- Phone numbers
- Fax numbers
- Email addresses
- Geographical identifiers
- Dates directly associated with a person
- Account numbers
- Medical record numbers
- Vehicle license plate numbers
- Web URLs
- IP addresses
- Whole-face or comparable photographic images
- Serial numbers and device identifiers
- License or certificate numbers
- Social security numbers
- Health insurance beneficiary numbers
- Other unique identifying characteristics

Who should comply with HIPAA? HIPAA concerns organizations referred to as "HIPAA Covered Entities" (C.E.s). They include healthcare providers, healthcare clearinghouses, health plans, and Medicare prescription drug discount card sponsors. In addition, any organization commonly referred to as a business associate (B.A.), providing third-party services to a C.E. and potentially coming into contact with PHI in the process, is required to follow HIPAA rules. They must ensure they have adequate safeguards to protect PHI even when they do not themselves create, receive, maintain, or transmit it. Before they can start working together, both the C.E. and B.A. must sign a business associate agreement to guarantee the integrity of PHI. How can an organization achieve HIPAA compliance? Threats to HIPAA compliance do not come from external sources only. For example, your employees, volunteers, and other internal players might cause harm knowingly or unknowingly. Apart from having HIPAA-related protocols in place, implementing an ongoing training course will help any organization keep HIPAA in mind. Topics should revolve around PHI security and the consequences of breaches. Each new member who joins your organization should be trained within a reasonable time. Exploring HIPAA-compliant hosting Look for a hosting provider with top-level secure storage. When looking for a provider, ensure they:
- Are compliant and have a security expert on the team
- Can support an auditor's risk assessment of the environment around ePHI
- Have a secure off-site backup plan
- Have experience with healthcare clients
- Provide private cloud solutions using HIPAA compliance software

HIPAA compliance for AWS-hosted SaaS For a covered entity to use services from Amazon Web Services, it will need to sign a Business Associate Addendum (BAA) that determines the extent of permissible PHI disclosure and the associated liability. As such, AWS has obligations to protect PHI. Conducting regular risk assessments Ensure you conduct regular risk assessments to identify possible breaches and apply corrective measures. This will help you identify where breaches emanate from, weaknesses in the system, and opportunities for improvement.
Please contact Eden Data for a free consultation with an experienced cybersecurity consultant.
<urn:uuid:ce0ef8bc-ce58-46a5-8d91-eb33d086085f>
CC-MAIN-2022-40
https://www.edendata.com/post/hipaa-compliance-for-startups
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00066.warc.gz
en
0.919987
696
2.703125
3
Researchers at Harvard University say they are one step closer to the creation of commercially viable methane-powered fuel cells for laptops. The research is being led by solid-oxide fuel cell expert Shriram Ramanathan at the Harvard School of Engineering and Applied Sciences. The academics have been able to demonstrate a functioning thin-film solid oxide fuel cell which does not contain platinum. The use of expensive materials like platinum has been one of the obstacles to the commercial viability of the technology for products such as laptops and mobile phones. The technology also needs to address high operating temperatures in order to improve reliability. "If you use porous metal electrodes, they tend to be inherently unstable over long periods of time. They start to agglomerate and create open circuits in the fuel cells," said Ramanathan. The group was also able to demonstrate a methane fuel cell operating with internal temperatures of less than 500 degrees Celsius. Operation at lower temperature places fewer demands on ceramic components while also reducing the time it takes to reach full current delivery capability. "Low temperature is a holy grail in this field. If you can realize high-performance solid-oxide fuel cells that operate in the 300 degrees C range, you can use them in transportation vehicles and portable electronics, and with different types of fuels," he said. The use of methane, an abundant and cheap natural gas, in the team's SOFC was also of note. Until recently, hydrogen has been the primary fuel for SOFCs. Pure hydrogen, however, requires a greater amount of processing. "It's expensive to make pure hydrogen," says Ramanathan, "and that severely limits the range of applications." As methane begins to take over as the fuel of choice, the advances in temperature, reliability, and affordability should continue to reinforce each other, the Harvard researchers said.
<urn:uuid:18ac73a8-ac18-44e8-a28c-5bb1a5a3e3d8>
CC-MAIN-2022-40
https://www.pcr-online.biz/2010/11/24/harvard-researchers-say-methane-powered-laptops-one-step-closer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00066.warc.gz
en
0.945957
379
3.46875
3
If you're at all interested in the wireless space, chances are you have been reading quite a lot about 5G over the last year or two. T-Mobile has already launched their nationwide 5G network, and many other carriers are close on their heels. Now that 5G is starting to actually arrive, an increasing number of people are starting to wonder: Is 5G safe? Let's take a closer look. 5G is the fifth generation of cellular network technology. The new network promises lightning-fast speeds, the likes of which consumers have never had available to them before. Since 5G is a cellular signal, entire service areas are blanketed in it. That constant signal is the root of many people's safety concerns about the service. Concerned consumers believe that because the 5G signal is even more powerful than current 4G LTE, radiation exposure will become a greater concern. The Federal Communications Commission (FCC) recently weighed in on these concerns. According to the FCC, consumers will not be exposed to more radiation due to 5G network exposure. The government entity expounded on that conclusion, noting that it had been studying the risks and concerns of the public for the last six years and found no convincing evidence of any danger. As a result, FCC regulators unanimously voted to allow the public rollout of 5G networks to continue with no new federally enforced safety limits. Exposure standards will stay the same for the radio-frequency energy cell phones and antennas emit and for consumer devices and equipment used on rooftops and cell towers. The FCC points to current medical standards when it says that there is no link between cellphone radio frequency and cancer. However, just because the medical community has not proven a connection, that does not mean one does not exist. 5gInsider points this out, writing that "The National Toxicology Program (NTP) inside the Health and Human Services (HHS) found a link between radio frequency radiation and cancer in male rats. Over the past few decades, the NTP has been conducting a study on rats by exposing them to levels of radiation higher than what we humans would experience when using a cellphone." 5G is already here, and more networks are going to pop up from more carriers as time goes on. As with all new technology, it's imperative for the medical community to study the effects of its use. The truth is we won't know for certain the effects of long-term exposure for some time. The jury is still out on long-term smartphone use. For now, there's no reason to panic, but as with all things, moderation is key.
<urn:uuid:2ea9d346-12b8-4936-b041-705a077cb9d8>
CC-MAIN-2022-40
https://deal.godish.com/is-5g-safe/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00066.warc.gz
en
0.965318
548
2.90625
3
The issue of Internet of Things device security is one of the main growing pains that inevitably accompany an industry expanding at a rapid pace. As Statista indicates, by 2025 there will be 75 billion connected devices that we will have to protect against virtual and physical attacks. What about Security on IoT Devices One of the main security indicators of IoT devices is the ability to personalize a device without additional effort and expense (connect it to the network and enter access keys). In addition, code integrity and the ability to withstand attacks are important. Problems arise both on the part of users and manufacturers. For example, not all device owners replace factory passwords with more complex ones. Manufacturers, in the pursuit of quick money and the desire to bring the device to market quickly, do not always properly test their products' code. More on the Internet of Things' weak spots was covered in our previous article. As we noted there, the main issue with the Internet of Things is that there are no uniform security standards, which is an inevitable consequence of the broad variety of devices on the market. Sometimes you have to save on something in order to be competitive in comparison with others. And, as you may guess, functionality often falls victim in this struggle, since you have to fit as much as possible into the limited memory of an IoT device and sacrifice something as the lesser evil. How to Secure IoT Devices Botnets are a common tool which hackers use in carrying out their attacks. One of the most vivid examples took place in 2016, when a DDoS attack launched by the Mirai botnet brought down many popular American websites. The malware cracked combinations of default logins and passwords and hacked thousands of IoT devices. What is more, the intruders published the source code on the Internet, which increased the risk of it being used by other hackers. These and similar attacks can bring physical harm to people (e.g., if hackers gain access to explosive or fire-hazardous equipment) or cause delays or downtime at production sites (which in turn leads to substantial losses). In addition, by hacking one smart device, malicious users can get access to the entire network, including entrance locks and bank accounts. Some of these solutions were covered in our previous article. Let's focus on certification this time. Can certification be the best way to secure IoT devices? The issue associated with the security of IoT devices can be partially resolved by certification. Passing tests and obtaining appropriate certificates will provide clients with some protection against attacks by cybercriminals. The main condition for such a procedure is that it is available to all manufacturers and does not turn into a pure formality. The Online Trust Alliance (OTA) came up with an approach to securing IoT devices. The company compiled a list of requirements for manufacturers and service providers, the IoT Trust Framework. This list should ensure the safety and viability of IoT products. Based on it, you can design certification programs for devices, as well as assess existing risks. A certain number of IoT certificates, which were developed by private companies, are already in use. For example, Verizon's ICSA Labs created its own program that inspects the security of IoT devices. UL's Cybersecurity Assurance Program (CAP) tests not only product safety but entire systems as well.
A CAP certificate confirms that software updates for a device will not reduce security or increase the risk of attacks. In 2018, the German software developer SAP introduced its IoT device certification program in the CIS. Within its framework, devices are tested and receive a respective quality mark, which confirms that the devices are safe and can be used in IoT projects. The European Commissioner for Digital Economy and Society, Thibaut Kleiner, believes that measures to protect the IoT from cyberattacks should be taken at the state level, in particular, by making certification mandatory for all Internet of Things devices. Such a procedure should apply not only to devices, but to networks and cloud storage as well. What can you do to secure your IoT devices? If you want to secure your Internet of Things infrastructure, you'll need a carefully crafted security strategy. According to this strategy, it is necessary to protect data in the cloud, to ensure data integrity during data transfer over a public network, and to thoroughly prepare devices for further use. Each successive level provides better security guarantees throughout the entire infrastructure. This carefully thought-out security strategy can be developed and implemented with the active participation of various parties involved in the design, development and deployment of IoT devices and infrastructure. Let's consider what each of them can do to ensure the success of your IoT project.
- Manufacturer or integrator of IoT devices: As a rule, these are manufacturers of deployable IoT devices, equipment integrators who assemble equipment originating from various manufacturers, or equipment suppliers who provide IoT deployment equipment manufactured or integrated by other vendors.
- IoT solution developer: An IoT solution is usually developed by a software developer with a good share of expertise in this field. This developer may be part of a team within the company or a system integrator specializing in this sphere. An IoT solution developer can design various IoT solution components from scratch, integrate various off-the-shelf or open-source components, and implement solution accelerators by introducing slight modifications to them.
- IoT solution deployment specialist: The developed IoT solution should be deployed in a runtime environment. At this stage, equipment is deployed, device interaction is established, and solutions are deployed on hardware devices or in the cloud.
- IoT solution operator: Once deployed, the IoT solution requires long-term monitoring, updates, and maintenance. These tasks can be performed by a team within the company, which includes information technology specialists, equipment operation and maintenance teams, as well as specialists in a specific area who monitor the work of the entire IoT infrastructure.
Below we provide a quick overview of developing, deploying, and operating a secure IoT infrastructure for each of these process participants. What measures should be taken to secure IoT devices?
Manufacturer or integrator of IoT devices
- Ensure compliance with the minimum requirements for equipment
- Protect the IoT equipment against unlawful modifications and alterations
- Create the solution on the basis of secure equipment
- Ensure that updates are safe and secure
IoT solution developer
- Choose a safe way to develop software
- Pay special attention when selecting open-source software
- Be cautious when integrating
IoT solution deployment specialist
- It's obvious, but deploy the solution securely
- Ensure the security of authentication keys
IoT solution operator
- Ensure timely updates of your system
- Take care to protect yourself against malicious activity
- Frequent audits are a must
- Ensure the physical protection of the IoT infrastructure
- Protect your cloud credentials
We will cover the above best practices in our upcoming article. Stay up-to-date!
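As a small illustration of one item from the manufacturer's list, "Ensure that updates are safe and secure", the Python sketch below verifies that a firmware image really carries the vendor's signature before it is installed. The key and file paths are hypothetical, and the example assumes a device that ships with the vendor's Ed25519 public key; it illustrates the idea rather than any specific vendor's update mechanism.

```python
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Hypothetical paths; a real device would keep the vendor key in secure storage.
VENDOR_PUBKEY_FILE = Path("/etc/device/vendor_ed25519.pub")   # 32 raw bytes
FIRMWARE_FILE = Path("/tmp/update.bin")
SIGNATURE_FILE = Path("/tmp/update.bin.sig")                   # 64 raw bytes

def update_is_authentic() -> bool:
    """Return True only if the firmware image is signed by the vendor key."""
    public_key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY_FILE.read_bytes())
    try:
        public_key.verify(SIGNATURE_FILE.read_bytes(), FIRMWARE_FILE.read_bytes())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    if update_is_authentic():
        print("Signature valid: safe to apply the update.")
    else:
        print("Signature check failed: refusing to install the update.")
```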
<urn:uuid:af9212a0-a2ec-4cd5-8c99-0e148c928c3d>
CC-MAIN-2022-40
https://resources.experfy.com/iot/security-of-iot-devices-ways-to-protect-yourself-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00066.warc.gz
en
0.941443
1,409
2.859375
3
In Silicon Valley, a community of circuit designers meets for a lively debate about the merits of two different designs developed by one of the participants. Huddling together over the circuit diagrams, they analyze possible faults, discuss issues of efficiency, propose alternatives, tease out each other's assumptions and make the case for their view. Their energy is palpable to both the regular participants and visitors. Although many factors, such as management support or an urgent problem, can inspire a community, nothing can substitute for this sense of aliveness. How do you design for aliveness? It is different from most organizational design, which traditionally focuses on creating structures, systems and roles that achieve relatively fixed organizational goals and fit well with other structural elements of the organization. The goal of designing for aliveness is to bring out the community's own internal direction, character and energy. What is the role of design for a "human institution" that is, by definition, natural, spontaneous and self-directed? How do you guide such an institution to realize itself, to become alive? From our experience we have derived seven principles. 1 Design for Evolution Because communities of practice are organic, designing them is more a matter of shepherding their evolution than creating them from scratch. As the community grows, new members bring new interests and may pull the focus of the community in different directions. Changes in the organization influence the relative importance of the community and place new demands on it. For example, an IT community that was only marginally important to an organization suddenly became critical as the company discovered the potential of a few e-business pilots. Community design often involves fewer elements at the beginning than does a traditional organization design. In one case, the coordinator and core members had many ideas of what the community could become. Rather than introduce those ideas to the community as a whole, they started with a very simple structure of regular weekly meetings. They did not capture meeting notes, put up a website or speculate with the group on "where this is going." Their first goal was to draw potential members to the community. Once people were engaged in the topic and had begun to build relationships, the core members began introducing other elements of community structure one at a time. Physical structures, such as roads and parks, can precipitate the development of a town. Similarly, social and organizational structures, such as a community coordinator or problem-solving meetings, can precipitate the evolution of a community. 2 Open a Dialogue Between Inside and Outside Effective community design is built on the collective experience of community members. Only an insider can appreciate the issues at the heart of the domain, the knowledge that is important to share, the challenges his field faces, and the latent potential in emerging ideas and techniques. Only an insider can know who the real players are and their relationships. Good community design requires an understanding of the community's potential to develop and steward knowledge, but it often takes an outside perspective to help members see the possibilities. It might mean bringing an "outsider" into a dialogue with the community leader and core members as they design the community in order for them to see new possibilities and effectively act as agents of change.
The well-connected leader of a new community on emerging technology was concerned about how to develop the community when many of the prima donnas of the industry were outside his company. When he saw how a similar community in another organization was structured to involve outside experts in multiple ways, he started rethinking the potential structure of his own community. He realized that the key issues in his community were less about technology and more about the business issues involved in developing the technology. This understanding of the business perspective of the other community gave him a sharper sense of the strategic potential of his own. 3 Invite Different Levels of Participation People participate in communities for different reasons. We commonly see three main levels of community participation. The first is a small core group of people who actively participate in discussions. As the community matures, this core group takes on much of the community’s leadership. But this group is usually rather small, only 10 percent to 15 percent of the whole community. At the next level outside this core is the active group. These members attend meetings regularly and participate occasionally in the community forums, but without the regularity or intensity of the core group. The active group is also quite small, another 15 percent to 20 percent of the community. A large portion of community members are peripheral and rarely participate. Instead, they keep to the sidelines, watching the interaction of the core and active members. In a traditional meeting or team, we would discourage such halfhearted involvement, but these peripheral activities are an essential dimension of communities of practice. Indeed, the people on the sidelines often are not as passive as they seem; they gain their own insights from the discussions and put them to good use. Finally, outside those three main levels are people surrounding the community who are not members but who have an interest in the community, including customers, suppliers and “intellectual neighbors.” The key to good community participation, and a healthy degree of movement between levels, is to design community activities that allow participants at all levels to feel like full members. Rather than force participation, they make opportunities for semiprivate interaction, whether through private discussion rooms on the community’s website, at a community event or in a one-on-one conversation. 4 Develop Public and Private Spaces Public community events serve a ritualistic as well as a substantive purpose. Through such events, people can tangibly experience being part of the community and see who else participates. However, communities are much more than their calendar of events. The heart of a community is the web of relationships among community members, and much occurs in one-on-one exchanges. Thus, a common mistake in community design is to focus too much on public events. Every phone call, e-mail exchange or problem-solving conversation strengthens the relationships within the community. When the individual relationships among community members are strong, the events are much richer. Because participants know each other well, they often come to community events with multiple agendas: completing a small group task, thanking someone for an idea, finding someone to help with a problem. In fact, good community events usually allow time for people to network informally. 
5 Focus on Value Value is key to community life because participation in most communities is voluntary. But the full value of a community is often not apparent when it is first formed. Moreover, the source of value often changes during the life of the community. Communities need to create events, activities and relationships that help their potential value emerge and enable them to discover new ways to harvest it, rather than attempting to determine their expected value in advance. Several months after one community started, it made discussing value part of its monthly teleconferences. Most community members were not able to identify any particular value when these discussions began, even though they all felt participation was useful. Soon, however, one community member was able to quantify the value his team gained by applying a new technique he learned from a fellow member. Another said the real value of the community was more personal and less quantifiable; he knew whom he could contact when he had a problem. 6 Combine Familiarity and Excitement Community members can offer advice on a project with no risk of getting entangled in it; they can listen to advice with no obligation to take it. Those are reasons scientists in a pharmaceutical company, driven by urgency to develop new products, see their community as a place to think and consider ideas too "soft" for the development teams. Vibrant communities also supply divergent thinking and activity. Conferences, fairs and workshops such as these bring the community together in a special way and thus facilitate a different kind of spontaneous contact between people. Routine activities provide the stability for relationship-building connections; exciting events provide a sense of common adventure. Vibrant communities of practice also have a rhythm. At the heart of a community is a web of enduring relationships among members, but the tempo of their interactions is greatly influenced by the rhythm of community events. A community of library scientists had an annual meeting and a website with a threaded discussion. Not surprisingly, six months after the conference there was very little activity on the Web. An engineering community, on the other hand, held a biweekly teleconference as well as several focused, face-to-face meetings during the year. In this community there is typically a flurry of activity on the website just before and after the teleconferences and meetings. The rhythm of the community is the strongest indicator of its aliveness. A combination of whole-community and small-group gatherings creates a balance between the thrill of exposure to many different ideas and the comfort of more intimate relationships. A mix of idea-sharing forums and tool-building projects fosters both casual connections and directed community action. There is no right beat for all communities, and the beat is likely to change as the community evolves. But finding the right rhythm at each stage is key to a community's development.
<urn:uuid:d867ea0e-451e-49a9-8126-655894d7c3d1>
CC-MAIN-2022-40
https://www.cio.com/article/270670/relationship-building-networking-build-information-sharing-communities-in-your-company.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00066.warc.gz
en
0.970364
1,878
2.921875
3
US Businesses Increase Encryption Efforts As a Result of the NSA's Spying Program The National Security Agency (NSA) is a U.S. intelligence agency that monitors telephone calling metadata and data communication activity by foreigners communicating with entities in the United States. When Edward Snowden, a former contractor for the NSA, leaked a variety of documents regarding the NSA's spying efforts, encryption became a hot topic for many small business owners. What is Encryption? Encryption is a method of ensuring sensitive data is protected from government agencies, business competitors, and cybercriminals. Ultimately, encryption prevents unauthorized access to data, whether you're transmitting data over the Internet, storing data on your laptop, or backing data up on a server. How Does Encryption Work? Encryption is the process of encoding information using an algorithm to make it unreadable to anyone without the key to decrypt it. If encryption is used properly, the information will be readable only to you and other individuals who receive the key from you. The encryption key is the most important factor in securing a comprehensive encryption plan for your sensitive data. Will Encryption Protect Data Stored in the Cloud? While encryption improves data security in the cloud, major telecommunications and cloud service providers like Apple, Google, Verizon Communications, and Microsoft have cooperated with the government to enable the NSA to access data without customers' permission. Most telecommunications and cloud service providers claim to only provide information when required by a court order; however, it's important to hold the encryption key yourself to ensure your cloud provider can't decrypt your data. In addition, you must evaluate potential cloud providers to ensure they're able to meet your data protection requirements. While many business owners believe the cloud provider is responsible for addressing their data protection requirements, this is far from the truth. Ultimately, you're responsible for ensuring your data and applications are secure in the cloud. To learn more about encryption, give us a call at (719) 355-2440 or send us an email at [email protected]. Colorado Computer Support can help you protect your sensitive data and communications using appropriate encryption methods.
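To make the "How Does Encryption Work?" idea above concrete, the following minimal Python sketch uses the widely available cryptography library's Fernet construction (symmetric, authenticated encryption). The sample data is made up; the point is simply that whoever holds the key controls who can read the data, which is why holding your own key matters in the cloud.

```python
from cryptography.fernet import Fernet

# Generate a key once and store it somewhere only you control
# (for example, a hardware security module or an offline key vault).
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"Quarterly payroll report - confidential"
token = cipher.encrypt(plaintext)          # safe to store or upload to the cloud
print(token[:32], b"...")

# Only a holder of the key can recover the original data.
recovered = cipher.decrypt(token)
assert recovered == plaintext
```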
<urn:uuid:dca6148b-cc77-4ac4-87ab-9639b340dab9>
CC-MAIN-2022-40
https://www.coloradosupport.com/did-edward-snowden-accelerate-business-it-encryption-in-corporate-america/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00066.warc.gz
en
0.883946
475
2.6875
3
I am sometimes surprised to see how even those working in the software industry tend to forget where the burden of security lies. Most incidents we experience today stem from defects in the code – actually bugs – committed by software engineers when designing, implementing, and integrating all those systems. But, on the other hand, this is not that surprising, given that software security is usually not included in standard educational programs. Investing Where it Matters Let's face reality: Constrained by resources, many software developers ignore security entirely until they face an incident. Another common issue is tackling information security threats by just focusing on the options thought to be the cheapest. This usually means just going through a checklist for finding some of the most common problems. But is this really the cheapest option? Just think about it: What is the cost of losing reputation when news spreads that your organization has been hacked? It's a simple economic calculation: the cost of designing software securely in the first place is less than the cost of the bad press after a vulnerability is announced plus the cost of writing and deploying the patch. Today the software development community realizes that security is interwoven throughout the product development lifecycle. But do not forget that while engineers have to be vigilant and – in theory – eliminate every single bug in the code to make a product secure, an intelligent attacker only needs to find a single remaining vulnerability in a rarely-used module to use it as a vehicle for committing cybercrime. During the 2000s the software industry started to realize that, in the long run, investing in their own employees would be the most effective way of implementing security. Training became the key initial phase in the Microsoft Security Development Lifecycle and is also a standard practice within the Building Security in Maturity Model followed by many entities. Companies started to reserve more and more from their security budget to train their employees, as training tackles the problem of security right at its source: the engineer. Accountability for Proper Security But just as we can't put a policeman on every corner, assigning a dedicated security expert to a development group is not enough (still better, though, than not doing anything at all). Usually, a single minor oversight by one of the engineers is the root cause of a complete system compromise. The overall average preparedness of all involved software architects, programmers, and testers is what actually counts. From a project management point of view, it is an easy formula: Your engineers work hard each day and produce vulnerable code that results in hundreds of security bugs annually. Your organization will need to use resources to test, detect, and correct such vulnerabilities. An easier option would be to send those programmers to secure coding training, and they'll start to write secure code from their next working day. Special prudence is needed, however, when teaching security practices to software engineers. The trainer should not only be an experienced software developer himself but must also have strong security expertise. The courses should be practical but still go into enough theoretical detail; the problems demonstrated should be supported by exercises that promote a hands-on experience. If not, developers will forget most of the issues the next day.
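A typical hands-on exercise in such a course contrasts a vulnerable and a safe version of the same operation. The Python sketch below shows the classic SQL injection case using the standard library's sqlite3 module; the table and inputs are made up, and parameterized queries are of course only one of many practices a full curriculum would cover.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query.
rows = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("vulnerable query returned:", rows)     # returns every user

# Safe: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # returns nothing
```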
Classes should be intensive so they don't pull people away from their everyday work for too long. Courses at SCADEMY Secure Coding Academy were developed based on a decade of expertise in product security and security research. In this sense, with our courses, we teach what we do. With a track record of thousands of attendees worldwide, the training programs in our portfolio are specifically designed to serve diverse development groups of large companies developing any kind of software. Secure coding courses by Secure Coding Academy are now available at ITpreneurs. Have a look and find out how you can help your learners meet their cybersecurity goals.
About the author
- Management and leadership
- International sales in diverse geo and corporate cultures
- SDLC – software security, product/application security, security testing
- Educating software professionals and security champions on secure coding practices
- Vulnerabilities and their mitigation – best practices
- Secure coding – C/C++, Java, C#, Python and many other languages and platforms
- R&D project planning and management
<urn:uuid:e12d642e-ec76-4760-89ef-8039a9df34ec>
CC-MAIN-2022-40
https://www.itpreneurs.com/blog/learning-secure-coding-the-most-effective-information-security-tactic/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00066.warc.gz
en
0.950851
861
2.53125
3
Researchers aim to better understand how the self-moving machines respond to real-world challenges. Army researchers recently began experimenting with unmanned, autonomous vehicles on land that was used to test munitions and weapons almost 100 years ago. About 200 acres across the Aberdeen Proving Ground in Middle River, Maryland, now make up a modern military study site, where Army Research Laboratory officials are working to prove and refine the performance of the software framework and algorithms created to explore systems’ intelligence—and enable machines’ abilities to carry out specific tasks. “The one-of-its-kind research campus was established to advance Army knowledge of autonomy and intelligent systems through basic and applied research of unmanned technologies that integrate artificial intelligence, autonomy, robotics and human teaming elements in complex environments,” Jeffrey Westrich, an ARL program manager said in a statement Thursday. Self-moving vehicles and robots hold potential to accomplish acts that could put troops’ lives at risk if done manually. According to ARL’s release, scientists “performed the first fully-autonomous tests onsite using an unmanned ground vehicle testbed platform, which serves as the standard baseline configuration for multiple programmatic efforts within the laboratory,” earlier this year. A video spotlighting the work revealed the investigations were designed explicitly to capture sensory data about elements like fallen branches, rocks or bumpy terrain that can pose real problems for such robots operating in the real world. Generally, Army researchers have turned to simulations to evaluate artificial intelligence-enabled systems and autonomy. But these ongoing and in-the-making tests in the historic, natural environment could help humans better grasp how the self-moving machines actually behave in real settings—and make the computer-based models stronger. “We can utilize those results as a comparative metric for improving simulation, and informing research and development through an iterative improvement approach,” Westrich said. “This location promises a substantial role in accelerating our understanding of robotics research, and pursuing significantly more complex experimentation in the future.”
<urn:uuid:6a48c71c-d674-4e2b-8e15-087a0340bcb0>
CC-MAIN-2022-40
https://www.nextgov.com/emerging-tech/2021/02/army-launches-autonomous-vehicle-tests-maryland/172163/?oref=ng-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00066.warc.gz
en
0.926056
418
3.171875
3
Thursday, February 24, 2011 Thunderbolt: Introducing a new way to hack Macs Imagine that you are at a conference. You innocently attach your DisplayPort to a projector to show your presentation on the big screen. Unknown to you, while giving your presentation, the projector is downloading the entire contents of your hard disk. The reason this works is the trusting nature of the protocol. Your laptop sends a command across the wire saying "please write the data in my memory location XYZ". What the device on the other end is then supposed to do is send the data with an address of XYZ. But it doesn't have to. It can instead send data to address ABC. In other words, it can upload malware into the computer's memory and run it. This technique rarely works on USB. That's because USB is designed in a "master-slave" configuration. Your computer can do this trick against anything attached to your USB port. Indeed, that's how some "jailbreaking" of devices like iPhones works. Your computer, the master, infects the phone with malware by writing to specific locations in the phone's memory. However, in some versions of USB (such as USB On-the-Go), the devices will negotiate who is to be master, and who is to be slave. We found a couple of notebooks six years ago that could be broken into with USB this way. I don't know if any newer computers can. But most other technologies are "peer-to-peer" rather than "master-slave". In those cases, either side can hack into the other. We did this at a pentest recently. A company gave employees laptops that were secured using all the latest technology, such as encrypted boot disks and disabled USB ports. Users weren't given admin privileges. But the Firewire ports were open. We connected a device to the Firewire port on a laptop, and broke in with administrator access. Once in, we grabbed the hashed administrator password (the one the owner of the laptop didn't know). We cracked it using L0phtcrack. That password was the same for all notebooks handed out by the company, so we now could log onto anybody's notebook. Worse -- that administrator account was also on their servers, so we could simply log into their domain controllers using that account and take control of the entire enterprise. Another real-world story comes from the HBGary e-mails. Apparently, HBGary sold devices to the government so that they could perform the same sort of trick. We did it with a laptop running Linux, but you can easily do this from a thumbdrive. The current Thunderbolt simply sends PCIe signals across the wire. That means, in theory, anything a PCIe card can do, a Thunderbolt device can do. A hostile device should be able to send any address it wants, to read and write any part of memory of the host machine. Intel has a solution for this. It's called "Intel Virtualization Technology for Directed I/O" or "VT-d". Using VT-d, a driver can configure the chipset to allow a device on the PCIe bus (or the Thunderbolt connection) to only write to specific areas of memory instead of the entire memory. The processors in the MacBooks support this feature, but Mac OS X does not use it (at least, it didn't the last time I checked). Note that MacBooks already have Firewire, ExpressCard, and SD/IO ports that are vulnerable to this kind of attack. Therefore, having yet another port with the same vulnerability isn't a huge increase in the risk. Update: iFixit's teardown shows that the MacBook uses the BD82HM65 southbridge, which does not support VT-d.
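Whether these DMA-style attacks work in practice often comes down to whether the operating system actually uses the IOMMU (VT-d). On a Linux test machine, one quick, non-authoritative way to check is to look at what the kernel exposes under /sys; the sketch below assumes a reasonably modern kernel and is only a first-pass check, not proof of a correctly configured DMA-protection policy.

```python
from pathlib import Path

def iommu_entries_present() -> bool:
    """Rough check: a populated /sys/class/iommu implies an active IOMMU (e.g. VT-d)."""
    sys_iommu = Path("/sys/class/iommu")
    return sys_iommu.is_dir() and any(sys_iommu.iterdir())

if __name__ == "__main__":
    if iommu_entries_present():
        print("IOMMU entries found; the kernel can restrict peripheral DMA.")
    else:
        print("No IOMMU entries; Firewire/Thunderbolt peers may get full memory access.")
```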
<urn:uuid:4121393f-d815-4593-8a66-51ba6bd61d31>
CC-MAIN-2022-40
https://blog.erratasec.com/2011/02/thunderbolt-introducing-new-way-to-hack.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00066.warc.gz
en
0.955974
770
2.546875
3
Silicon is the second most abundant element in the earth's crust (oxygen is the first). It occurs naturally in silicate (Si-O containing) rocks and sands. The elemental silicon used in semiconductor device manufacture is produced from high purity quartz and quartzite sands, which contain relatively few impurities. Electronic grade silicon, the name used for the grade of silicon employed in semiconductor device manufacture, is the product of a chain of processes beginning with the conversion of quartz or quartzite sand to "metallurgical grade silicon" (MG-Si), in an electric arc furnace (Figure 1) according to the chemical reaction: SiO2 + C → Si + CO2 Silicon prepared in this manner is called "metallurgical grade" since most of the world's production actually goes into steel-making. It is about 98% pure. MG-Si is not pure enough for direct use in electronics manufacturing. A small fraction (5% – 10%) of the worldwide production of MG-Si gets further purified for use in electronics manufacturing. The purification of MG-Si to semiconductor (electronic) grade silicon is a multi-step process, shown schematically in Figure 2. In this process, MG-Si is first ground in a ball-mill to produce very fine (75% < 40 µm) particles which are then fed to a Fluidized Bed Reactor (FBR). There the MG-Si reacts with anhydrous hydrogen chloride gas (HCl), at 575 K (approx. 300ºC) according to the reaction: Si + 3HCl → SiHCl3 + H2 The hydrochlorination reaction in the FBR makes a gaseous product that is about 90% trichlorosilane (SiHCl3). The remaining 10% of the gas produced in this step is mostly tetrachlorosilane, SiCl4, with some dichlorosilane, SiH2Cl2. This gas mixture is put through a series of fractional distillations that purify the trichlorosilane and collect and re-use the tetrachlorosilane and dichlorosilane by-products. This purification process produces extremely pure trichlorosilane with major impurities in the low parts per billion range. Purified, solid polycrystalline silicon is produced from high purity trichlorosilane using a method known as "The Siemens Process." In this process, the trichlorosilane is diluted with hydrogen and fed to a chemical vapor deposition reactor. There, the reaction conditions are adjusted so that polycrystalline silicon is deposited on electrically-heated silicon rods according to the reverse of the trichlorosilane formation reaction: SiHCl3 + H2 → Si + 3HCl By-products from the deposition reaction (H2, HCl, SiHCl3, SiCl4 and SiH2Cl2) are captured and recycled through the trichlorosilane production and purification process as shown in Figure 2. The chemistry of the production, purification and silicon deposition processes associated with semiconductor grade silicon is more complex than this simple description. There are also a number of alternative chemistries that can be, and are, used for polysilicon production. The silicon wafers so familiar to those of us in the semiconductor industry are actually thin slices of a large single crystal of silicon that was grown from melted electronic grade polycrystalline silicon. The process used in growing these single crystals is known as the Czochralski process after its inventor, Jan Czochralski. Figure 3 shows the basic sequence and components involved in the Czochralski process. The Czochralski process is carried out in an evacuable chamber, commonly referred to as a "crystal puller" that holds a large crucible, usually quartz, and an electric heating element (Figure 3(a)).
Semiconductor grade polysilicon is loaded (charged) into the crucible along with precise amounts of any dopants such as phosphorus or boron that may be needed to give the product wafers specified P or N characteristics. Evacuation removes any air from the chamber to avoid oxidation of the heated silicon during the growth process. The charged crucible is electrically heated to a temperature sufficient to melt the polysilicon (greater than 1421ºC). Once the silicon charge is fully melted, a small seed crystal, mounted on a rod, is lowered into the molten silicon. The seed crystal is typically about 5 mm in diameter and up to 300 mm long. It acts as a "starter" for the growth of the larger silicon crystal from the melt. The seed crystal is mounted on the rod with a known crystal facet vertically oriented in the melt (crystal facets are defined by "Miller Indices"). In the case of seed crystals, facets having Miller indices of <100>, <110> or <111> are typically chosen. The crystal growth from the melt will conform to this initial orientation, giving the final large single crystal a known crystal orientation. Following immersion in the melt, the seed crystal is slowly (a few cm/hour) pulled from the melt as the larger crystal grows. The pull speed determines the final diameter of the large crystal. Both the crystal and the crucible are rotated during a crystal pull to improve the homogeneity of the crystal and dopant distribution. The final large crystal is cylindrical in shape; it is called a "boule." Czochralski growth is the most economical method for the production of silicon crystal boules suitable for producing silicon wafers for general semiconductor device fabrication (known as CZ wafers). The method can form boules large enough to produce silicon wafers up to 450 mm in diameter. However, the method has certain limitations. Since the boule is grown in a quartz (SiO2) crucible, some oxygen contamination is always present in the silicon (typically 10¹⁸ atoms cm⁻³, or about 20 ppm). Graphite crucibles have been used to avoid this contamination, however they produce carbon impurities in the silicon, albeit at an order of magnitude lower in concentration. Both oxygen and carbon impurities lower the minority carrier diffusion length in the final silicon wafer. Dopant homogeneity in the axial and radial directions is also limited in Czochralski silicon, making it difficult to obtain wafers with resistivities greater than 100 ohm-cm. Higher purity silicon can be produced by a method known as Float Zone (FZ) refining. In this method, a polycrystalline silicon ingot is mounted vertically in the growth chamber, either under vacuum or inert atmosphere. The ingot is not in contact with any of the chamber components except for the ambient gas and a seed crystal of known orientation at its base (Figure 4). The ingot is heated using non-contact radio-frequency (RF) coils that establish a zone of melted material in the ingot, typically about 2 cm thick. In the FZ process, the rod moves vertically downward, allowing the molten zone to move up the length of the ingot, pushing impurities ahead of the melt and leaving behind highly purified single crystal silicon. FZ silicon wafers have resistivities as high as 10,000 ohm-cm. Once the silicon boule has been created, it is cut into manageable lengths and each length ground to the desired diameter. Orientation flats that indicate the silicon doping and orientation for wafers of less than 200 mm diameter are also ground into the boule at this stage.
For wafers with diameters less than 200 mm, the primary (largest) flat is oriented perpendicular to a specified crystal axis such as <111> or <100> (see Figure 5). Secondary (smaller) flats indicate whether a wafer is either p-type or n-type. 200 mm (8-inch) and 300 mm (12-inch) wafers use a single notch oriented to the specified crystal axis to indicate wafer orientation, with no indicator for doping type. Figure 3 shows the relationship between wafer type and the placement of flats on the wafer edge. After the boule has been ground to the desired diameter and the flats have been created, it is cut into thin slices using either a diamond encrusted blade or a steel wire. The edges of the silicon slices are usually rounded at this stage. Laser markings designating silicon type, resistivity, manufacturer, etc. are also added near the primary flat at this time. Both surfaces of the unfinished slice are ground and lapped to bring all of the slices to within a specified thickness and flatness tolerance. Grinding brings the slice into a rough thickness and flatness tolerance, after which the lapping process removes the last bit of unwanted material from the slice faces, leaving a smooth, flat, unpolished surface. Lapping typically achieves tolerances of less than 2.5 µm uniformity in wafer surface flatness. The final stage in silicon wafer manufacture involves chemically etching away any surface layers that may have accumulated crystal damage and contamination during sawing, grinding and lapping; followed by chemical mechanical polishing (CMP) to produce a highly reflective, scratch- and damage-free surface on one side of the wafer. The chemical etch is accomplished using an etchant solution of hydrofluoric acid (HF) mixed with nitric and acetic acids that can dissolve silicon. In CMP, silicon slices are mounted onto a carrier and placed in a CMP machine where they undergo combined chemical and mechanical polishing. Typically, CMP employs a hard polyurethane polishing pad combined with a slurry of finely dispersed alumina or silica abrasive particles in an alkaline solution. The finished product of the CMP process is the silicon wafer that we, as users, are familiar with. It has a highly reflective, scratch- and damage-free surface on one side on which semiconductor devices can be fabricated. Compound semiconductors are important materials in many military and other specialty electronics devices such as lasers, high-frequency electronic devices, LEDs, optical receivers, opto-electronic integrated circuits, etc. GaN has been commonly used in many different commercial LED applications since the 1990's. Table 1 provides a list of the elemental and binary (two element) compound semiconductors along with the nature of their band gap and its magnitude. In addition to the binary compound semiconductors, ternary (three element) compound semiconductors are also known and used in device fabrication. Ternary compound semiconductors include materials such as aluminum gallium arsenide, AlGaAs, indium gallium arsenide, InGaAs and indium aluminum arsenide, InAlAs. Quaternary (four element) compound semiconductors are also known and used in modern microelectronics. The unique light-emitting ability of compound semiconductors is due to the fact that they are direct band gap semiconductors. Table 1 denotes which semiconductors possess this property. The wavelength of the light emitted by devices built from direct band gap semiconductors depends on the band gap energy.
By skillfully engineering the band gap structure of composite devices built from different compound semiconductors with direct band gaps, engineers have been able to produce solid state light emitting devices that range from the lasers used in fiber optic communications to high efficiency LED light bulbs. A detailed discussion of the implications of direct versus indirect band gaps in semiconductor materials is beyond the scope of this work. Simple, binary compound semiconductors can be prepared in bulk, and single crystal wafers are produced by processes similar to those used in silicon wafer manufacturing. GaAs, InP and other compound semiconductor ingots can be grown using either the Czochralski or Bridgman-Stockbarger method with wafers prepared in a manner similar to silicon wafer production. Surface conditioning of compound semiconductor wafers (i.e., making them reflective and flat) is complicated by the fact that at least two elements are present and these elements can react with etchants and abrasives in different fashions. |Material System||Name||Formula||Energy Gap (eV)||Band Type (I = indirect; D = direct)| Table 1. The elemental semiconductors and the binary compound semiconductors.
<urn:uuid:cfb4608c-0969-44f7-be69-286c25fbf29a>
CC-MAIN-2022-40
https://www.mks.com/n/silicon-wafer-production
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00266.warc.gz
en
0.896126
3,094
3.90625
4
Predictive Analytics and Big Data As analytics becomes mainstream and more and more businesses harness its power, the maturity of businesses is also rising, as are the expectations. Analytics has been used to examine historical data to analyse key events and occurrences. Now the focus is on gleaning actionable intelligence for future events. And businesses are turning to predictive analytics to gain this insight. Investopedia.com defines predictive analytics as 'The use of statistics and modelling to determine future performance based on current and historical data'. Predictive analytics looks at patterns in data to determine if those patterns are likely to emerge again, which allows businesses and investors to adjust where they use their resources in order to take advantage of possible future events. Predictive analytics is being leveraged to examine past performance and forecast revenue generating patterns, understand customer behaviour and use the information to offer better products and services, fine-tune the ability to identify risks by catching suspicious trends, optimize processes and more. 1. Companies use predictive analytics in various industries including Retail, Manufacturing, Finance, and Healthcare among others, and across different functions like marketing and operations. 2. Retailers, for example, are using data from loyalty programs to analyse past buying behaviour and predict the promotions a customer is most likely to participate in, or the purchases they will make in the future. 3. Marketing functions can explore analytics for retaining or reactivating customers with the right incentives. On the other hand, a government initiative in an area as rarefied as veld and forest fire fighting is using advanced analytical measures to predict possible wildfires in South African grasslands. 4. Financial institutions are using analytics to identify probable high-risk customers and minimize default risks, as well as for cross-selling and upselling their products, customer segmentation, fraud detection, cash planning and more. Healthcare organizations are parsing patient history to enable more accurate diagnoses, studying responses to medication, reducing hospital readmissions, integrating bedside medical device data into algorithms which help detect deteriorating vital signs in critical patients in real-time, and more. The volume and variety of information – both structured and unstructured – is exploding. There isn't one common data source that gives companies insights about their customers; one has to piece them together from multiple data sources. Turning data into information and then into action is becoming more important than ever before. Predictive analytics can be used for determining events or outcomes before they happen, for simulation of a process to determine bottlenecks and risks, and in "what-if" scenarios to determine the "best" course of action. To develop a robust predictive model, enterprises need to focus on defining a clear set of business rules for each decision and then focus their analytics on driving the best decisions. Predictive analytics practices can help companies in three key areas:
• minimizing risk
• identifying fraud
• pursuing new revenue opportunities
An important challenge to greater adoption is that the skill needed to analyze data, create models and implement them successfully is in short supply.
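For readers unfamiliar with what "creating a model" actually involves, the Python sketch below trains a logistic regression on a small, entirely made-up churn dataset with scikit-learn. It is only an illustration of the mechanics; real deployments require far more data, feature engineering, and validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical customer features: [months_as_customer, purchases_last_quarter, complaints]
X = np.array([
    [36, 12, 0],
    [ 2,  1, 3],
    [24,  8, 1],
    [ 5,  0, 2],
    [48, 20, 0],
    [ 3,  2, 4],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = customer churned

model = LogisticRegression().fit(X, y)

# Score a new (hypothetical) customer to prioritize a retention offer.
new_customer = np.array([[6, 1, 2]])
print("churn probability:", model.predict_proba(new_customer)[0, 1])
```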
Organizations looking to get more value from their data assets through predictive analytics will be wise to invest in analytical training and mentoring programs, along with the expertise of partner firms. Effective outsourcing partners understand the nuances and have the expertise to ensure the pipelines are built for the flow of data, information and action. Predictive analytics, thus, delivers strategic value as well as tactical guidance. In some instances, analytics can also help automate decision making, thereby dramatically reducing the cost of operations. With actionable insights from all the data in their possession, data analysts are now able to get granular visibility into systems and processes. Understandably, the accuracy of the prediction will depend on the volume of data available for analysis. Businesses are exploring social media to obtain that extra volume of data for more accurate guidance. With the explosion of data via social media, access to higher intelligence is now possible. In parallel, technology has kept pace, making it easier to process large sets of data. Thus, big data and predictive analytics together are ready to add yet another dimension to business decision making. Armed with these kinds of insights, predictive analytics certainly holds great promise for organisations across verticals. Big data is the way forward, and predictive analytics will continue to be the science behind all the data.
<urn:uuid:4be7b07c-e72b-48e7-bf6c-4d28c3d8d424>
CC-MAIN-2022-40
https://agile.ciotechoutlook.com/cxoinsight/predictive-analytics-and-big-data-nid-1192-cid-55.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00266.warc.gz
en
0.933631
869
2.640625
3
A system activity is an activity completed electronically without human input. Examples include activities that automatically download data from a data source, or analyze input values to get a result. - (Example) Configure a Timer activity - (Example) Configure a Convert Date Format activity - (Example) Configure an And Activity - (Example) Configure the Regular Expression Activity - (Example) Create an Inline Function - (Example) Configure an Excel Read Activity - (Example) Configure an Excel Write Activity - (Example) Configure a Document Transfer activity - Examples - Step-by-step use case examples, information about what types of examples are provided in the AgilePoint NX Product Documentation, and other resources where you can find more examples. Video: Add Standard Task Activities About This Page This page is a navigational feature that can help you find the most important information about this topic from one location. It centralizes access to information about the concept that may be found in different parts of the documentation, provides any videos that may be available for this topic, and facilitates search using synonyms or related terms. Use the links on this page to find the information that is the most relevant to your needs. system activity, activities, shapes, AgilePart, system activities, automatic task, system task, automated task, automated activity
<urn:uuid:191d10bc-4fb3-4403-83c4-a4ca02e4c8f6>
CC-MAIN-2022-40
https://helpdesk.agilepoint.com/hc/en-us/articles/115002744274-system-activity
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00266.warc.gz
en
0.771902
289
2.53125
3
Machine learning technologies are everywhere. They're used by search engines, social media, and even in online banking. But one area where this technology is still emerging is medicine. Machine learning technologies could be very promising in medicine, and could be used for many applications, such as detecting signs of disease in cells, or discovering new drugs for rare diseases. But in order for a machine learning approach to be able to do such things, it needs to be both accurate and able to understand how cells work. Our team has developed an accurate machine learning approach that can predict cell growth in a way that researchers can easily understand. The machine learning technique makes its predictions by looking at how cells change and act under different conditions. This method could someday be used to diagnose cancer, or predict how certain drugs may interact with a patient. Interpreting machine learning predictions In essence, machine learning is a form of artificial intelligence (AI) in which data is used to teach computers to make decisions on their own, without a person needing to be there to do it for them. But one of the main weaknesses of machine learning techniques in biology and medicine is the fact that they don't incorporate biological knowledge – such as underlying cell biochemistry – in the learning process. In general, they also ignore this knowledge when making their predictions. This is because these systems treat biological information as data or numbers, so they don't consider the actual biological meaning of these numbers. Such systems are often referred to as "black box" systems. These are AI systems that are fed data and provide users with a clear decision or prediction based on the patterns found in that data. However, it's usually unclear how the AI made its decision because of how complex its analysis is. Black box predictions aren't a major issue in fields where high accuracy is the most important goal – such as in software used to detect spam emails. But it's a major disadvantage in biomedicine. Black box predictions can't be interpreted by researchers because of how complex they are, meaning they have little understanding of how the AI algorithm reaches its prediction. "White box" systems, on the other hand, could be slightly less accurate in their decisions or predictions, but the relationships they've inferred from the data are clearer to users. The benefit of white box systems is that users can understand what information the system used to make its prediction, and because it's understandable, users can also interrogate the decision itself and interpret it from a biological point of view. Machine learning predictions need to be interpretable and justifiable to be trustworthy and to work in biomedicine. In the case of detecting cancer, if the AI technique made a false-positive prediction, it could lead to unnecessary treatment – while false-negative predictions could lead to the disease being left untreated. Understanding the predictions made by machine learning algorithms will also help avoid false negatives when researching potential drugs and any side effects they might have. Predicting cell growth In order for AI methods to work in biomedicine, we first needed to design a machine learning approach that could predict cell growth, and understand what was driving this growth.
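To make the contrast with a black box concrete before describing that approach, here is a minimal sketch of what a "white box" model can look like in code: a small decision tree trained on made-up feature values standing in for gene expression and metabolic fluxes. Unlike a deep black-box model, the fitted tree can be printed and read as explicit rules. This is only an illustration of the white-box idea, not the method used in the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical inputs: two gene-expression levels and one metabolic flux per strain.
feature_names = ["geneA_expression", "geneB_expression", "glycolysis_flux"]
X = np.array([
    [0.8, 0.1, 1.2],
    [0.2, 0.9, 0.4],
    [0.9, 0.2, 1.1],
    [0.1, 0.8, 0.3],
    [0.7, 0.3, 0.9],
    [0.3, 0.7, 0.5],
])
y = np.array([1.4, 0.6, 1.3, 0.5, 1.1, 0.7])   # made-up growth rates

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)

# The learned rules are directly readable, which is the point of a white-box model.
print(export_text(tree, feature_names=feature_names))
```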
Understanding how cells grow, and how their growth changes in different conditions, is the first step towards designing an AI that can detect the presence of a disease or predict how well certain treatments may work. Our team evaluated 27 different machine learning approaches that looked at both gene expression profiles and mechanistic metabolic models. Gene expression profiles showed how the cell’s process of assembling proteins changed under a variety of conditions. Metabolic models showed how the underlying cell biochemistry works in each strain. We then built our own white box machine learning technique, which allows us to easily interpret how the AI made its decision, overcoming the shortfalls of previous machine learning techniques. We did this by teaching our AI to make decisions using data from both gene expression and metabolic models – something that hasn’t been done before. Using both models to build our machine learning approach improved predictive accuracy by up to 4% in some cases, compared with using gene expression data alone. It also has the advantage of revealing previously unknown interactions between gene expression and metabolic activity.

We then tested our approach on more than 1,000 different strains of Saccharomyces cerevisiae – a species of yeast common in baking, brewing, and wine making. Data on this type of yeast is widely available, making it easy to evaluate the effectiveness of our machine learning approach. The results from the yeast showed that with our white-box approach we can maintain, and in some cases improve, the predictive accuracy of AI techniques. Just as importantly, we can also offer an interpretation of these predictions, by explaining which biochemical reactions are active in the cell across various conditions. Our approach incorporates information on biological mechanisms, such as cell biochemistry, in the learning process. This overcomes the black-box limitations of conventional data-driven approaches and is a step towards interpretable machine learning models. The advantage is that machine learning models based on our approach will be more trustworthy. Our results show that combining data-driven and knowledge-driven models gives researchers more information about how cells grow and behave in certain conditions. While this will still need to be tested using human cells, it could have many promising applications in the future. For example, understanding how cancer cells are influenced by their genetic make-up and by environmental conditions is a major and pressing challenge in treating and preventing the disease.

• Dr Claudio Angione is a Reader in Computer Science at Teesside University. This article originally appeared on The Conversation.
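The article does not include the team’s code, but the core idea (combining gene-expression features with fluxes from a metabolic model, then fitting an interpretable learner) can be sketched as follows. This is a minimal, hypothetical illustration: the feature names, data shapes, synthetic data, and the choice of a shallow decision tree are assumptions made for clarity, not the published method.

# Illustrative sketch only: synthetic data and a generic interpretable model,
# not the study's actual pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n_strains = 1000                            # roughly the number of yeast strains tested
expr = rng.normal(size=(n_strains, 50))     # gene-expression features (assumed shape)
flux = rng.normal(size=(n_strains, 20))     # metabolic-model flux features (assumed shape)
growth = rng.normal(size=n_strains)         # measured growth rate (synthetic here)

# Combine both data sources into one feature matrix, keeping track of names
# so the fitted model can be read back in biological terms.
X = np.hstack([expr, flux])
feature_names = [f"expr_{i}" for i in range(expr.shape[1])] + \
                [f"flux_{i}" for i in range(flux.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, growth, random_state=0)

# A shallow decision tree is a classic "white box": each prediction is a path
# of explicit thresholds on named features.
model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_train, y_train)
print("R^2 on held-out strains:", model.score(X_test, y_test))
print(export_text(model, feature_names=feature_names))

A tree this shallow is far from state of the art, but every prediction it makes can be traced to explicit thresholds on named expression and flux features, which is the kind of interpretability the article describes.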
<urn:uuid:c0f1490c-7a69-486b-be83-21c1142666f5>
CC-MAIN-2022-40
https://news.networktigers.com/industry-news/could-ai-be-used-to-detect-cancer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00266.warc.gz
en
0.945258
1,162
3.5625
4
Alternating current (AC) power is ubiquitous in data centers, and it's hard to change the status quo. But a direct current (DC) power demonstration project conducted by the Lawrence Berkeley National Laboratory produced some interesting results: a 7% energy saving over top-notch AC technologies. SearchDataCenter.com recently reported on an experiment in using DC power in a data center at Syracuse University, furthering the practical research into this data center power option. While a full DC power infrastructure for data centers isn't quite ready, these investigations are putting the concept top of mind for data center professionals concerned about power consumption.

Will DC Take to the Data Center?
DC power could save a bundle, but tech managers are only just exploring it. More than a century after DC bowed to AC as the most efficient method of electrical distribution, DC is getting a second look. But this time around, the ambitions of DC supporters are more narrowly focused. They're touting DC over AC as a way to make the facilities that house massive, power-hungry computing, storage, and communications systems more energy efficient. In a replay of the original AC-DC fight, however, AC supporters counter that tried-and-true AC, especially when it's optimized for efficiency, remains superior. The battle is taking place against a backdrop of the surging "green" movement and an increased awareness of how much energy data centers use (and waste) because of how they're configured. Electrical engineering conferences, white papers, and demonstration projects that assess the comparable benefits of AC and DC for data centers have been proliferating. They've accelerated since 2006, when Congress directed the Environmental Protection Agency and the Department of Energy to oversee efforts to mitigate spiraling data center power consumption.

Has AC met its match?
DC has emerged as one possible fix, primarily because it would eliminate one of the biggest sources of energy loss and waste with AC: the multiple back-and-forth transformations and conditioning needed to step voltage down for use by IT equipment. By converting high-voltage AC to DC earlier, keeping it in DC form, and delivering it directly to rack-based servers, operators could reduce the energy lost in conversion, along with the resulting heat that must then be removed by cooling that itself consumes energy. In fact, some studies peg the potential energy savings as high as 30%.
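The efficiency argument reduces to multiplying per-stage conversion efficiencies along the power path. The numbers below are illustrative assumptions, not figures from the Berkeley or Syracuse projects; they simply show how removing one conversion stage can translate into the single-digit percentage savings reported.

# Back-of-the-envelope comparison of end-to-end power-path efficiency.
# Every stage efficiency below is an assumed, illustrative value.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    total = 1.0
    for eff in stages:
        total *= eff
    return total

# Conventional AC path: UPS rectifier -> UPS inverter -> server PSU (AC to DC).
ac_chain = [0.96, 0.96, 0.90]
# Facility-level DC path: one rectification stage -> DC-DC conversion at the rack.
dc_chain = [0.96, 0.95]

ac_eff = chain_efficiency(ac_chain)   # ~0.83
dc_eff = chain_efficiency(dc_chain)   # ~0.91

print(f"AC path efficiency: {ac_eff:.1%}")
print(f"DC path efficiency: {dc_eff:.1%}")
# For the same IT load, input power scales with 1/efficiency,
# so the fraction of input energy saved is 1 - ac_eff / dc_eff.
print(f"Input energy saved by the DC path: {1 - ac_eff / dc_eff:.1%}")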
<urn:uuid:b937a242-528e-480e-80f2-daf5c0d55ae6>
CC-MAIN-2022-40
https://www.dvlnet.com/blog/topic/pod
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00266.warc.gz
en
0.944935
497
2.984375
3
A self-signed certificate is a digital certificate that is not signed by a publicly trusted Certificate Authority (CA). Self-signed certificates include SSL/TLS certificates, code signing certificates, and S/MIME certificates. Self-signed certificates are created, issued, and signed by the organization responsible for the website or the signed software.
Advantages and Disadvantages of Self-signed Certificates
Advantages:
- Self-signed certificates are free.
- They are suitable for internal network websites and development/testing environments.
- Encryption and decryption of the data is done with the same ciphers used by paid SSL certificates.
Disadvantages:
- Browsers and operating systems do not trust self-signed certificates, since they are not signed by a publicly trusted CA. Browsers will not show the green lock symbol or other visual indicators of trust.
- Attackers can generate self-signed certificates, which can be used for man-in-the-middle (MITM) attacks, leaving users vulnerable to data theft and other forms of cyber-attack.
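As a concrete illustration of "created, issued, and signed by the organization itself", the sketch below generates a self-signed certificate with the pyca/cryptography Python library (assuming a reasonably recent version where the backend argument is optional). The hostname, key size, and one-year validity are placeholder assumptions.

# Minimal self-signed certificate with pyca/cryptography; hostname and validity
# period are placeholders. Suitable for internal or testing use only.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Subject and issuer are the same name: that is what makes it self-signed.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "internal.example.test")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())   # the certificate is signed with its own private key
)

with open("selfsigned.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("selfsigned-key.pem", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    ))

Because no publicly trusted CA is involved, browsers will still warn on this certificate exactly as described above; the sketch only shows the mechanics.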
<urn:uuid:902f19da-a7fa-4e35-8fff-f24e30e70ecc>
CC-MAIN-2022-40
https://www.encryptionconsulting.com/education-center/self-signed-certificates/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00266.warc.gz
en
0.938982
223
3.4375
3
That’s right, it’s now possible to create circuits on paper – it’s as easy as doodling and just as fun. Featuring conductive silver ink, Circuit Scribe comes as a rollerball pen, giving the user the ability to design and create fully functioning circuitry on a piece of paper. What makes it so special? Circuit Scribe’s ink is water-based and non-toxic, making it safe and easy to use. In fact, it looks just like a regular pen. With Circuit Scribe, teaching circuitry in schools, for one, can be done more efficiently and effectively, as the technology is designed to be accessible and inexpensive. Whether you’re building simple circuits or more complex ones, Circuit Scribe takes away all of the bulky paraphernalia that typically comes with making even the simplest of circuits, allowing the user to focus on designing circuit systems and understanding circuit behavior while leaving lots of room for creativity. Using Circuit Scribe also results in less waste – no more wires and breadboards – and it can be cheaper, since it lets you create as many prototypes as you want. And if you’re worried that quality will be compromised, don’t be. Circuit Scribe has been widely tested and has proven reliable at making high-quality circuits that work instantly. The inventors of Circuit Scribe, Electroninks, have also created a range of magnetic components that snap right into the circuits you draw. These allow you to connect circuits to each other and create more sophisticated systems. With Circuit Scribe, teaching electronics in a hands-on manner has just become a whole lot easier, not to mention more fun for kids and adults alike. Circuit Scribe can even be used to augment art projects (such as making a simple greeting card) or to create complex, full-fledged systems. The possibilities are indeed endless. You can learn more about it from the project’s campaign video.
<urn:uuid:8451b0d7-04e7-4f8d-be7a-fda0546ae5c2>
CC-MAIN-2022-40
https://davidpapp.com/2014/08/22/circuit-scribe-lets-draw-circuits-literally/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00466.warc.gz
en
0.940879
411
2.984375
3
To create a firewall rule, navigate to the Edge Gateway, right-click, and select “Edge Gateway Services”. Then select the Firewall tab. Click the “+” symbol to add a rule, and the “x” symbol to delete a rule. Select a rule and use the “↑” and “↓” icons to change the rule order. The firewall is enabled by default, and its default action is to deny all traffic. To allow traffic, both outbound and inbound rules are necessary. See below for examples of both inbound and outbound rules.

Once created, the rule appears in the rules list as “New Rule”. The user then needs to edit the Source, Destination, Service, and Action fields so that the rule performs its intended function. To edit the Source and Destination fields for a firewall rule, mouse over the field and select “IP” for an IPv4 definition, or the “+” icon to set the field to a network object. To add an IPv4 source or destination range, enter the appropriate range in the “Value” field in the popup as shown below, and select “Keep”. A network object can also be used as a Source or Destination. Select the appropriate type from the dropdown menu, then select the specific relevant object. Select the “→” icon to add the object to the rule. A summary of network object types is shown below. To edit the firewall rule’s Service, select the “+” icon to select a service (network protocol and port). To edit the Action (Accept or Deny), select the appropriate action from the dropdown menu.

The chart below describes each network object available by default in the Edge Gateway firewall rule editor. Note that manually defined objects are available from the Grouping Objects tab.

This firewall rule allows all outbound traffic from an internal subnet. The “vnic-internal” object refers to any private IP on the Edge Gateway’s internal interface. The “any” keyword matches any value for the relevant field. The action is set to “Allow”, which overrides the default “Deny” action.

This firewall rule allows traffic from any source to reach the external IP on a specific port. The port should match a port defined by a Port DNAT rule. See NAT Rule Management for how to set up a corresponding NAT rule. NAT rules and firewall rules work together to route traffic across the Edge Gateway; both are necessary for normal traffic. GreenCloud support is always available to assist with troubleshooting NAT rule interactions.

This firewall rule explicitly allows ICMP traffic to an internal server, which enables ping traffic. Please note that ICMP ping response is also disabled by default on GreenCloud VMs, so it may be necessary to verify that ICMP is enabled on the target server before a ping will succeed. This rule allows ping traffic to flow to the target internal IP from an external source.
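To make the rule semantics easier to picture, here is a small conceptual model (not the vendor’s API or configuration format) of the three example rules above, evaluated top-down with the default deny. The IP addresses, rule names, and object handling are simplified placeholders.

# Conceptual model of first-match firewall evaluation; not the Edge Gateway's
# actual API or configuration format. IPs and object names are placeholders.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    source: str        # "any", "vnic-internal", or an IP string (simplified)
    destination: str
    service: str       # e.g. "any", "tcp/443", "icmp"
    action: str        # "accept" or "deny"

# Modelled on the three example rules above, in evaluation order.
rules = [
    Rule("allow-outbound",    "vnic-internal", "any",          "any",     "accept"),
    Rule("allow-inbound-web", "any",           "203.0.113.10", "tcp/443", "accept"),
    Rule("allow-ping",        "any",           "10.0.0.20",    "icmp",    "accept"),
]

def matches(rule_value, packet_value):
    return rule_value == "any" or rule_value == packet_value

def evaluate(packet, rules, default_action="deny"):
    """Return the first matching rule's action, or the default deny."""
    for r in rules:
        if (matches(r.source, packet["source"])
                and matches(r.destination, packet["destination"])
                and matches(r.service, packet["service"])):
            return r.name, r.action
    return "default", default_action

print(evaluate({"source": "198.51.100.7", "destination": "203.0.113.10",
                "service": "tcp/443"}, rules))   # ('allow-inbound-web', 'accept')
print(evaluate({"source": "198.51.100.7", "destination": "10.0.0.20",
                "service": "tcp/22"}, rules))    # ('default', 'deny')

The same first-match logic is what makes rule order (the “↑” and “↓” controls) matter on the Edge Gateway, and why traffic with no matching Allow rule simply falls through to the default Deny.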
<urn:uuid:fc193285-05ad-4418-be4e-e11471857c21>
CC-MAIN-2022-40
https://greenclouddefense.com/knowledge-base/advanced-edge-firewall-rule-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00466.warc.gz
en
0.858711
660
2.546875
3