text
stringlengths
234
589k
id
stringlengths
47
47
dump
stringclasses
62 values
url
stringlengths
16
734
date
stringlengths
20
20
file_path
stringlengths
109
155
language
stringclasses
1 value
language_score
float64
0.65
1
token_count
int64
57
124k
score
float64
2.52
4.91
int_score
int64
3
5
Facebook recently announced it would give up on its facial recognition system. Facebook, or Meta, was using software to automatically identify people in images posted to its social network. Since facial recognition has become an increasingly toxic concept in many circles and Facebook was having enough to deal with as it is, it shut the “feature“ down. But that doesn’t mean that the technology no longer exists, or even that it isn’t used anymore. Let’s establish first what we consider facial recognition to be. By definition: A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces, typically employed to identify and/or authenticate users. In layman’s terms, facial recognition is technology to recognize a human face. How does facial recognition work? There are different systems and algorithms that can perform facial recognition, but at the basic level they all function the same—they use biometrics to map facial features from a photograph or video. The image is captured and reduced to a set of numbers that describes the face that needs to be identified. The software analyses the shape of the face by taking certain measurements that, all put together, provide a unique characteristic for the face. The shape of the face is reduced to a mathematical formula, and the numerical code of that formula is called a “faceprint.” Such a faceprint can be quickly compared to those stored in a database in order to identify the person. You can compare this to a person leafing through an enormous book of portraits to find a suspect. Only much faster because now it’s a computer comparing sets of numbers. How is facial recognition used? The most well-known example of facial recognition is the one that can be used to unlock your phone or similar. In those cases, your face is compared to the ones that are authorized to use the phone. Another convenient method of facial recognition can be found in some major airports around the world. An increasing number of travelers hold a biometric passport, which allows them to skip the long lines and walk through an automated ePassport control to reach their gate faster. This type of facial recognition not only reduces waiting times but also allows airports to improve security. A lot less consensual is the fact that in some countries mobile and/or CCTV facial recognition is used to identify any person, by immediately comparing an image against one or more face recognition databases. In total, there are well over 100 countries today that are either using or have approved the use of facial recognition technology for surveillance purposes. This has brought up a lot of questions about our privacy. What is bad about facial recognition? As we can see from the above, facial recognition is not always bad. And it can be used to improve our personal and public security. It becomes a privacy issue when the consensus from the person in the database is missing. People, especially in large cities, have become used to being monitored a lot of the time that they spend outside. But when facial recognition adds the extra layer of tracking, or the possibility to do so, it becomes worrying. China, for example, is already a place deeply wedded to multiple tracking/surveillance systems. According to estimates, there are well over 400 million CCTV cameras in the country, and they do not shy away from using facial recognition in public shaming to crack down on people that are jaywalking and other minor traffic offenders. It’s because of the privacy implications that some tech giants have backed away from the technology, or halted their development. Many groups like American Civil Liberties Union (ACLU) and Electronic Frontier Foundation (EFF) have made objections against facial recognition technology as it is considered a breach of privacy to use biometrics to track and identify individuals without their consent. Many feel that there is already more than enough technology out there that keeps track of our behavior, preferences, and movement. Can I use facial recognition to find someone? For an individual to identify another individual would require access to a large database or an enormous amount of luck. As we explained, the faceprints are compared with those in a database. And that database has to contain a pretty large subset of the population you are looking in. But there are other ways to identify an individual if he is nowhere to be found in the database. A picture can be compared to one that is openly posted on social media. Some organizations have built quite the databases just from harvesting pictures from social media. And you might be amazed about what a reverse image search could bring up. In essence, your chance of success finding a person based on a picture depends on how sophisticated your search algorithm is and how many pictures of your subject can be found on the Internet. The other way around, if you do not want to be found, make sure that you don’t post your pictures everywhere, and when you do, make sure they are not publicly accessible. And stay out of the databases. If you are interested in the subject of facial recognition, you may also want to listen to S1Ep6 of the Malwarebytes podcast Lock and Code where we talk with Chris Boyd about “Recognizing facial recognition’s flaws”
<urn:uuid:78cd1eb5-2efd-4c02-ba17-bb6eb8f85c34>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2021/11/what-is-facial-recognition
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00443.warc.gz
en
0.949585
1,080
3.28125
3
Identifying Relationships in a Logical Model An identifying relationship is a relationship between two entities in which an instance of a child entity is identified through its association with a parent entity, which means the child entity is dependent on the parent entity for its identity and cannot exist without it. In an identifying relationship, one instance of the parent entity is related to multiple instances of the child. In IDEF1X notation, an identifying relationship line is drawn as a solid line with a diamond or a filled circle at either end of the line. In IE notation, an identifying relationship line is drawn as a solid line with crows feet. You can create an identifying relationship when you add a relationship using the Relationships editor, on the diagram window, or using the Model Explorer. Note: Primary key attributes are automatically migrated from a parent entity to a child entity, so you do not need to enter any foreign keys in the child. Copyright © 2020 erwin, Inc. All rights reserved.
<urn:uuid:8549a8a5-cc8d-4c6b-bd43-664b6e03091b>
CC-MAIN-2022-40
https://bookshelf.erwin.com/bookshelf/public_html/2020R1/Content/User%20Guides/erwin%20Help/Identifying_Relationships_in_a_Logical_Model.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00443.warc.gz
en
0.911408
210
2.953125
3
The following is adapted from Fire Doesn’t Innovate. Two-factor authentication has become more important as the level of cyberattacks continues to rise. Every time a company you do business with (or that collects data about you) has its data stolen by cyberattackers, your online security is compromised and your accounts are vulnerable to attack. Two-factor authentication can help prevent access to your online accounts if someone else has your user ID and password. Authentication is the process of proving your identity and it happens one of three ways: - Something you know (your a PIN or password) - Something you have (your driver’s license or mobile phone) - Something you are (your photo or fingerprint) With all three of these factors, the idea is that it’s very hard for someone to impersonate you because they don’t know what you know, have what you have, and are not what you are. It is very hard to steal from you when you put all three of those factors into play. Two-factor authentication uses two of these factors to verify your identity and give access to your account. Using just one of these authentications is not enough to protect you and your company’s private data. If you want to know whether the websites you use offer two-factor authentication, and what options are available, here’s where you can check: https://twofactorauth.org. If you have an easy to guess password, but must use a one-time, random six digit code that you look up on your mobile phone in order to logon to your bank account, it will be difficult for cybercriminals to steal from you. If you’re lucky they’ll just move on to their next target. But… Not All Authentication is Created Equal Two-factor authentication is great for protecting your online accounts, but what you must realize is that not all forms of authentication is equally effective. Many banking institutions use a PIN sent via text message to authenticate your login. Many US banks use this authentication technique because it’s easy for customers to use. But this is not a truly secure two-factor authentication technique. First, if a cybercriminal steals your phone (or increasingly, your number), they can intercept your PIN. Second, the phone system that handles text messages has absolutely no security of any kind. It was never designed to do anything that required secrecy. Cybercriminals are adept at hacking into that system to intercept text messages. How Criminals Can Steal Your Phone Number Let’s run through a hypothetical situation to show how this might look in practice. Imagine a cybercriminal steals your mobile phone number and attempts to log in to your bank account. They call T-Mobile from a new phone and tell the representative, “I just bought a new iPhone, so I’d like to move my number from my old phone.” The T-Mobile representative will ask the caller to verify they are the authorized account holder (which, of course, they aren’t). Most often, they’ll ask for the last four digits of your Social Security Number, or a security question that is easy to find online. Once the cybercriminal “proves” that they’re you, the T-Mobile rep will release your phone number from the old phone and assign it to the new mobile phone, which deactivates service to your handset. Once that process is complete, the criminal now owns your phone number and can receive every text message sent to it. One by one, the criminal will take control of your online accounts by doing password resets along with the actual PIN texted to your phone number. To keep your mobile phone number from being stolen, set a random, six-digit account owner PIN with your mobile phone carrier and store it in your password manager. (You’ve paid for a high-quality one, right? Either 1Password or LastPass are great choices.) If you want to move away from text message PINs as a second factor of authentication, use an app like Google Authenticator or Microsoft Authenticator that mathematically generates one-time passwords. Most popular websites work with these apps. You’re More Secure with an iPhone Despite the text messaging system itself being insecure, the iPhone is a very secure phone. In fact, even law enforcement struggles to get data from an iPhone. Up until the iPhone came on the scene, it was very easy for law enforcement to get info from people’s phones. They even had dedicated workstations in police stations for downloading information from suspects’ and victims’ phones. The latest iPhone models do not allow them to do that. It will only give information if the owner authenticates it. Apple has a special chip that they designed specifically for security, and the chip is part of a subsystem called the Security Enclave. That is a place where you can store secrets on your iPhone that no one can access, like an uncrackable safe. An important tip: be sure to set at least a six-digit unlock code on your mobile phone. It’s just too easy to guess a four-digit PIN. Start Securing Your Accounts Today Two-factor authentication can be very secure. When it’s done well and in combination with a password manager, it’s your best option for protecting your sensitive data. Start with the institutions that carry your most sensitive data—such as banks and cloud services—and enable two-factor authentication if they make it available. For more advice on two-factor authentication, you can find Fire Doesn’t Innovate on Amazon. Kip Boyle is founder and CEO of Cyber Risk Opportunities, whose mission is to enable executives to become more proficient cyber risk managers. His customers have included the U.S. Federal Reserve Bank, Boeing, Visa, Intuit, Mitsubishi, DuPont, and many others. A cybersecurity expert since 1992, he was previously the director of wide area network security for the Air Force’s F-22 Raptor program and a senior consultant for Stanford Research Institute (SRI).
<urn:uuid:e008c993-2317-4ee6-b583-5aad5b66ddd3>
CC-MAIN-2022-40
https://www.cyberriskopportunities.com/keep-this-in-mind-with-two-factor-authentication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00443.warc.gz
en
0.936049
1,281
2.953125
3
A regular web application firewall (WAF) provides security by operating through an application or service, blocking service calls, inputs and outputs that do not meet the policy of a firewall, i.e. set of rules to a HTTP conversation. WAFs do not require modification of application source code. The rules to blocking an attack can be customized depending on the role in protecting websites that WAFs need to have. This is considered an evolving information security technology, more powerful than a standard network firewall, or a regular intrusion detection system. Image 1 – WAFs become integrated with the cloud Today, WAF products are deeply integrated with network technologies such as load balancing and — cloud. Cloud-based WAFs, thus, utilize all advantages of WAFs and share that threat detection information among all tenants of the service, which improves results and speeds up detection rates. The whole community learns from an attack to any website sharing a single cloud-based WAF service. Plus, cloud based WAF technology is: - easy to set-up - offered as pay-as-you-grow service - sharing back reports By using cloud-based WAFs, clients need not make any software or hardware changes and tunings to their system, and can successfully protect their websites from threats, by applying custom rules and deciding on the aggressiveness of the protection. This service is used and considered ideal by anyone from financial institutions to mid-sized businesses and trading platforms, to government bodies, e-commerce vendors, and so on. They all pick WAF as protection against top vulnerabilities such as: - identity theft - access to confidential/unauthorized data - falsified transactions - injection flaws (such as SQL injection) - broken authentication session - cross-site scripting (XSS flaws) - sensitive data exposure - forged requests to access functionality - forged HTTP requests to a vulnerable web application - vulnerable component exploit - unvalidated redirects and forwards With cloud space opening up and bringing full virtualization of OS, of storage, of software, platform, and infrastructure, more applications need to be developed for the cloud (while most are not) and remain secure on the cloud. With WAF in the cloud, traffic is being redirected to traffic scrubbing and protecting proxy farm of WAFs. Cloud-based WAF service providers will often include a full threat analysis, exception handling policies, as well as continuous monitoring of their service.
<urn:uuid:bf9a73a7-33a7-463c-a09b-d30ff854bd18>
CC-MAIN-2022-40
https://www.globaldots.com/resources/blog/what-is-cloud-based-web-application-firewall-waf/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00643.warc.gz
en
0.933187
524
2.53125
3
Humidity naturally surrounds every item on the planet. It is beneficial to humans; however, it can spell trouble for IT equipment. Uncontrolled humidity is a danger to electronic components, resulting in downtime and equipment failure. Water vapor in the air conducts away dangerous static electrical discharge, which protects IT equipment. Relying on precision cooling solutions to regulate the proper humidity level in a data center does not guarantee adequate humidity levels in the air intake of the IT equipment. It was previously believed that humidity control in data centers follows the particular need of the IT equipment, which is why data center humidity was regulated at a very narrow range but research has shown that IT equipment is able to bear a much greater humidity range compared to what was previously believed. Since then, the industry now recognizes that by allowing a greater range of relative humidity (RH) and given proper regulation, these methods can turn out to be more cost-effective since a significant amount of energy is saved while maintaining acceptable IT performance. In order to ensure ideal environmental monitoring and control in the data center, temperature and humidity must be closely monitored. Too high humidity can cause condensation, which might lead to water damage. Too low humidity can cause static discharge buildup, which endangers electronics. Humidity and the IT Environment Air is composed of various gasses, which are water vapor, carbon dioxide (0.3%), oxygen (21%), and nitrogen (78%). The water vapor in the air is known as humidity. It is vital for the air in the IT environment to contain the right amount of water vapor in order to maximize the functionalities of the equipment. Too little or too much water vapor in the air may lead to downtime in equipment and reduced productivity. The amount of water in the air is very little, but it is not fixed. The water that can be held in the air directly correlates to the increase and decrease of temperature. At a minimum, there is one humidity monitor that determines relative humidity (RH) in most data centers. It shows data in the form of relative humidity. “Relative” pertains to the humidity of the air close to what it could be given its temperature. When the air is hotter, it is capable of holding more water. It is expressed as a percentage between 0% to 100%. If the relative humidity is zero, then there is no water vapor in the air. If the relative humidity is 100%, the air is holding the maximum amount of water vapor it can. However, controlling relative humidity without taking into account the temperature at the same time would be useless. The same air measured may have a different relative humidity depending on its temperature. When the air is cooled, its relative humidity increases until the water starts condensing. This is also known as the dew point, and it is expressed in a temperature form. Dew point is the temperature when water vapor disappears in the air and manifests as liquid water on objects. Dew point and relative humidity are two related factors that are important in an IT environment. The dew point in the air at a specific temperature will rise as the relative humidity of the air increases. The air’s dew point is equal to its temperature when it reaches 100% relative humidity. In such a case, the air will be considered saturated. Humidity has positive effects in a data center when it is regulated at the proper level. However, possible problems might occur if it is too high or too low. When the proper humidity level is maintained, there is a reduced “charging” effect. The charging effect may lead to static electrical discharge by making the air a little more electrically conductive and the surfaces it passes thoroughly slightly damp. When the positive and negative charges are not balanced, static electricity is produced, and it will be unlikely to result in electrical discharges (10,000+ volt sparks) due to the slight increase in conductivity in the air. The flow of cool, low humidity throughout the data center is a potential source of static electricity. When this cool air moves through an ungrounded insulated surface, proper humidity levels must be maintained at all times. High relative humidity levels in an IT environment lessen the chance of static discharge; however, it can also increase the possibility of metal component corrosion. High humidity levels also increase the likelihood of the equipment getting harmed due to water damage. Due to this, many IT specifications indicate the acceptable humidity range regarding non-condensing humidity. Equipment often functions normally within a humidity range of an average of 20-80% RH, provided that the temperature of the equipment and everything that surrounds it remains above dew point temperature. Rapid changes in temperature and high relative humidity in environments will likely result in condensing humidity. Exceeding upper humidity levels in IT environments with high-speed printing will result in paper stock to possibly swell and increase thickness, which will then lead to jams and overall process downtime. What Causes Changes In An IT Environment Humidity? There are three factors that cause changes in the humidity of the environment: Humidifiers allow water vapor to be added, while dehumidifiers remove the water vapor. The IT environment humidity is maintained by the factors abovementioned. The gain or loss of the rate of humidity due to infiltration relies on the size of the open area and the difference in temperature and humidity between the spaces. A high humidity body of air and a low humidity body would immediately equalize to a humidity level between the high and low levels when placed next to each other. The humidity levels continuously try to balance between the spaces of computer rooms that are at a different humidity level compared to the outdoor or office space air that is around them. Floors, walls, and ceilings around the IT environment should be able to stop this equalization. There are many instances in which they do not. Water vapor can enter or escape through a minuscule crack or porous surface, which can affect the IT environment relative humidity. In some instances, the IT environment air cooling removes large amounts of water vapor that leads to low relative humidity levels. Such an event happens when warm data center air is drawn through a computer room’s air conditioner cooling coil. Most cooling coils need to be maintained constantly at a temperature of 6.1-8.8 °C. Since this is below the required due point for air in an IT environment, water droplets can form on the cooling coil as well. Extremely large volumes of air pass through the cooling coil at a high velocity. Liquid water or condensate forms on the cooling coil if the air remains in contact with the cooling coil with enough time to be cooled below its dew point. However, pumps located inside the cooling system condensate away from the IT environment and towards the building drainage system. In order to add the necessary water vapor back into the air stream leaving the cooling system, humidifiers are used. Humidifiers can usually be found in the air handlers and air conditioners inside the computer room. People inside need to be constantly supplied with fresh outdoor air. This fresh air inside data centers and computer rooms is referred to as “make-up air” and influences the relative humidity levels. The amount of fresh air needed inside is identified when the room is designed and computed by taking into account the room’s specific use, the number of possible occupants in the room, and the laws in effect at the time of its building. Depending on the amount of outside air drawn into the room and the geography of the building, changes in IT environment humidity due to ventilation issues occur. Humidity is reduced when ventilation air is supplied from cold, dry, and desert regions, while humidity is added when ventilation air is supplied from warmer climates. The air required for ventilation is usually a known quantity and is established during the cooling solution’s specification. Managing Humidity In An IT Environment In order to control relative humidity inside the data centers, a set of steps need to be performed so they can meet the recommended range and conserve energy in humidification and cooling. - Lessen the supply of fresh outdoor air inside the computer room through minimum pressurization and ventilation. - Determine the temperature of the air that flows inside the IT equipment. - Manage hot spot plates through moving floor gratings, using bleach, and removing the glass doors of the racks. - Increase the air handler setpoints. When possible, increase the set point of the return air, but keep in mind that the air supply to the equipment should fall within the specified limits suggested by ASHRAE (18 ° C / 64.4 F 27 ° C / 80.6 ° F ). In order to provide constant air temperature, it would be best to adjust the control strategy CRAC return air to supply air control. - Change the setpoints of the relative humidity in air handlers. It should be maintained above 5.5°C / 41.9°F dew point air (DP) for humidification. - The dew point must fall below 15°C / 59°F to de-humidify. - Should there be an essential source of external humidification brought by high humidity, and there is a need for de-active humidification, relative humidity should be controlled in relation to the return temperature to keep the dew point under 15°C / 59°F. - The temperature of the dew point and dry bulb of the air flowing entering the IT equipment should be monitored periodically.
<urn:uuid:514aad12-078a-469f-b496-2a0589446a91>
CC-MAIN-2022-40
https://www.akcp.com/blog/data-center-humidity-effects/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00643.warc.gz
en
0.924395
1,952
3.6875
4
The environmental impacts of data centers are something that is discussed quite often. The United States alone is projected to use about 73 billion kWh of power in 2020. And as “smart” technology continues to grow, so will the demand for data storage. Companies understand that data centers have an extremely large carbon footprint as currently structured, which is why some of the biggest companies are looking into using clean renewable energy options. Some of the biggest and most well-known businesses are turning to solar energy as a way to power their operations. As of 2018, in order, Apple, Amazon, Target, Walmart, Switch, and Google lead the way with the most solar capacity installed. Although Apple may be leading all businesses in this regard (powering both of their data centers and it’s main offices with nearly 400MW of capacity), Google has just announced a new data center using a 350MW solar plant and a 280MW battery storage all powered by clean renewable solar energy. Google is investing $600 million into its upcoming data center in Henderson, Nevada. Pegasus Group Holdings has also announced a $3 billion data center near Kingsman, Arizona. This will be the largest solar-powered data center in the world. A 717-acre solar field providing 340MW will power a network of modified cargo containers holding 500 – 1000 servers each. And Microsoft is also following suit by pledging to be 100% carbon-negative by 2030. This means the company is planning to remove more carbon from the environment than it emits. Microsoft states that all of its data centers and buildings will be 100% renewable by 2025. Google and Apple have both stated they have already reached the 100% mark. Microsoft is responsible for 16 million metric tons of emissions per year, which includes its data centers, making electronic components, and the energy consumption of those using Xbox gaming consoles at home. Greenhouse gases like carbon dioxide can cause climate change, and create air pollution contributing to respiratory diseases. It can also cause extreme weather, increased wildfires, and food supply disruptions. So a company as big as Microsoft pledging to not only reduce its carbon emissions but to be carbon-negative by 2030 is a big deal. If more companies were to strive for this outcome—the world could drastically change. Solar power is a suitable solution for data centers for many reasons. The cost of solar power is stable, while the cost of fossil fuels has many variables and is always fluctuating. Solar power is also self-contained and for the most part self-managed, which means it not susceptible to spikes or blackouts. It also provides a secured capacity. All businesses and data centers, to be specific, should be looking into going green to save the environment. Although some of the biggest companies like Google and Apple have stated they have reached the 100% renewable energy mark for their companies, other companies and businesses globally are still far behind. In 2019, a global survey was conducted asking data center professionals to predict what percentage of a data center’s power will come from either solar or wind by the year 2025. More than 800 data center professionals predicted a number of 13% by 2025. There are uncertainties regarding government assistance with renewable energy programs. The US and China are both ending subsidies for wind energy development. If government subsidies continue to be taken away for renewable energy programs, companies may feel less inclined to update current systems to solar power. While certain countries are taking away assistance, others are pushing towards reaching a higher percentage of renewable energy consumption. The European Union has recast a Renewable Energy Directive to reach 32 percent. Companies like Apple, Google, and Amazon, along with certain countries still pushing for renewable energy gives the industry optimism for a greener future. The data center industry’s renewable number may not be as low as 13% or as high as Google and Apple’s numbers, but there is still optimism of getting the entire industry as a global unit to an average of 20% renewable by 2025. There are many difficulties regarding harnessing renewable energy. One of the main difficulties is that sources of renewable energy also have their own variables. Collecting wind or solar power isn’t constant. And after you have successfully done so, you need to use that power rather quickly or store it in battery source. Artificial Intelligence is being used to improve and optimize renewable energy operations. Artificial Intelligence is being used can analyze the extensive amounts of meteorological data to make predictions of when to expect a charge. Using this knowledge can serve managers by helping them understand and calculate the extent of gathering, storing, and distributing renewable energy. AI can also help manage the distribution to storage and balancing the surges of power by analyzing the grids before and after they are absorbed. This will help reduce power congestion. AI systems can also be used to regulate and control how and where the power is allocated. Numerous energy companies are already using AI to improve their systems. Artificial Intelligence is already being used in current data center operations, but it will also be one of the things that help bring renewable energy to the masses. What was once known as “alternative” energy sources, are now known as “renewable” energy sources. People have come to realize that these sources shouldn’t just be looked at as an alternative but as a necessity. Using renewable energy reduces greenhouse gas emissions, reduces fossil fuel imports, and will help progress the local industry and create new jobs. As the world continues to move into this mindset, this just means that solar power and other renewable energy will be the future of data center operations. Some of the biggest companies including Apple, Microsoft, Google, and Amazon are already leading the charge.
<urn:uuid:c1cd7907-85ef-4d4e-8249-a4320cd2e1a4>
CC-MAIN-2022-40
https://www.colocationamerica.com/blog/renewable-energy-data-centers
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00643.warc.gz
en
0.957021
1,157
3.34375
3
Antivirus programs are often billed as an absolutely vital component of your online health, but the monthly prices on some of them can be quite daunting. If you’re looking to save money, it seems like a tempting option to forgo an antivirus entirely, but is that really a viable option? What Does an Antivirus Do? Think of an antivirus program as sort of a mix between a water filter and a metal detector, or maybe something like a security checkpoint for your computer. An antivirus typically has at least two basic functions. The first is that each program you download (and oftentimes, incoming emails with attachments) is scanned before the download completes. This download is checked against the database each antivirus program has of known malware, or programs that are appreciably similar to that malware. If the program trips the warning signs it is isolated and waits for your approval to delete it; this helps stop false flags by allowing you to override the antivirus and download it if you’re absolutely sure the program is safe. The second function is similar, but it scans everything already on your computer, as well as alerting and quarantining any malware already on your computer it finds. This helps make sure any existing threats aren’t overlooked the first time you use an antivirus. That’s the long and short of it really. Antiviruses are very simple in their functionality, but pretty effective as well, being good protection against most things out there. So Do You Actually Need One? I think “need” is a strong word. Antivirus programs are great to have, but they’re more of a safety net or supplement than a must have option, at least in my opinion. Good internet health and safety will help you more than even the best antivirus out there – always knowing the source of things you download, and so on. What the antivirus will do for you is help when you make a mistake, but they won’t prevent you from making those mistakes in the first place…and that can be dangerous in and of itself. Having an antivirus does not mean you’re immune to having your computer get infected with malware, and in my experience of the two alone, simply being more careful with your internet habits is more effective than an antivirus.Now, that said, having both impeccable internet health habits and an antivirus is going to be exponentially better than one alone, but I think you can safely save some money on an antivirus if you think you can handle keeping yourself safe on your own. And remember, putting all your faith in an antivirus is a foolish move, given how fallible they can be when their database of viruses and malware falls behind whatever the cutting edge is. - Protect Your Social Media Accounts From Hacker - Protect Yourself Against Identity Fraud - Protect Your Identity Online - Prevent Your Password From Being Stolen - Serious Consequences of Identity Theft Last Updated on
<urn:uuid:60729212-42b3-44c6-928d-c6192994eeca>
CC-MAIN-2022-40
https://www.homesecurityheroes.com/how-necessary-is-an-antivirus-program/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00643.warc.gz
en
0.948958
626
2.703125
3
The Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Cyber Security Centre (ACSC) released a joint Cybersecurity Advisory (CSA) on the top malware strains of 2021. The report found that some of the top leading pieces of malware in 2021 were remote access trojans (RATs), banking trojans, information stealers, and ransomware. CISA and ACSC also explain that these pieces of malware and ransomware are continually being updated by bad cyber actors to benefit themselves. “Most of the top malware strains have been in use for more than five years with their respective code bases evolving into multiple variations. The most prolific malware users are cyber criminals, who use malware to deliver ransomware or facilitate the theft of personal and financial information,” stated the report. The report also goes into detail warning that possible targets should go into looking to reinforce their data migrations, as well as securing Remote Desktop Protocol (RDP), and ultimately patching all systems for known exploitations or vulnerabilities. Other important actions that the report suggests are investing in training processional work staff in cyber hygiene, using stronger passwords, as well as maintaining offline backups of data. Ultimately, the report ends with detailing and listing the various types of pieces of malware and tools used by cybercriminals along with tips on how organizations and private citizens can go about securing their computer systems.
<urn:uuid:0d3cbd0e-33eb-4ac9-b8b6-19b5636acacb>
CC-MAIN-2022-40
https://origin.meritalk.com/articles/cisa-and-acsc-issue-cybersecurity-advisory-on-top-malware-of-2021/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00643.warc.gz
en
0.952071
283
2.640625
3
What is an Architecture Alternative? To put it simply, architecture alternatives are a method of problem solving. Use Architecture Alternatives whenever you are developing and selecting alternative ways to solve a problem. Multiple workable target architectures can meet the architecture vision, principles, and requirements. We should identify alternative candidate architectures, together with an awareness of various options and solutions between the alternatives. Architects can uncover underlying motives, guiding principles, and needs that could have an influence on the final overall architecture by presenting stakeholders with a range of choices and potential solutions. Most often, there is over one choice that might address the needs of all parties involved. The TOGAF Standard includes a method for researching potential solutions and talking about them with stakeholders. We frequently find alternatives for each domain. To make the examination of the many choices simpler, this is done. Of course, an overall study of the options for the entire architecture can incorporate the alternatives per domain. The method's initial step involves identifying the requirements that are appropriate for the problem. We use vision, principles, requirements, and other information. The method's second step specifies options in accordance with the criteria and develops comprehension of each. To generate the suggested alternative, the third component of this process will either choose one alternative or blend traits from many. We should provide just enough information to complete the following tasks to support your choice. Any stage at any level of architecture can use the approach. How to Use Architecture Alternatives The criteria are developed from several inputs to the architecture and are applied for the various possibilities. Think about how architecture principles, specifications, goals, and stakeholder concerns may have an impact. Each choice will have unique benefits or drawbacks that must be addressed and decided upon by stakeholders. To enable stakeholders to examine the options and comprehend any linkages, risks, and uncertainties, additional perspectives and opinions may be required. You cannot develop consistent and architecturally useful criteria without knowing your stakeholders and their concerns. Flexible alternatives, time and cost to implement the alternative, transitions, unstable areas, low business effect, limited risk, etc. are a few examples of criteria-based alternatives. The Practitioner's Guide to Developing Enterprise Architecture has a standard list of stakeholders and stakeholder concerns. Understanding Potential Alternatives After fully comprehending your requirements, it's time to think about the general vision and guiding principles of architecture. Show the overview criteria for each possibility for each choice. Define the criteria for the alternative using the vision, principles, and needs of architecture. The criterion may identify various architectural options at various abstraction levels and ADM stages. Explain the alternative's architecture after that. Make the appropriate architectural perspectives in order to fully comprehend the influence of the alternative. Fill up any gaps with further details. Don't go into too much detail. However, it is crucial to conduct an effective impact assessment, identify relationships between alternatives and the current environment, and clearly understand the ramifications of implementing the alternative. The gaps between the baseline and this alternative will then need to be estimated. Outline the gaps between the baseline and this alternative based on existing knowledge about the baseline situation. This gap analysis will be loosely defined if the baseline has not yet been established. Finally, fully comprehend the effects and tradeoffs of the option. Finding potential effects that the alternative could have on the architecture, transition, implementation, and overall value of the organization should be part of this. Selecting and Defining Chosen Alternatives Finally, it's time to choose or specify an alternative solution. To resolve conflicts between options, use trade-off analysis. Start by becoming familiar with each option's advantages and disadvantages. Based on how closely the options adhere to the established criteria, compare them. To define a different alternative in cooperation with stakeholders, choose the alternative that best fits your needs or mix elements from many options. You may then easily put the alternative together. Complete the alternative's description and make sure all the needed architectural points of view have been considered. Make sure the alternative is also sufficiently specified to aid in decision-making. To evaluate alternative decisions and funding, resolve consequences throughout the architecture landscape and perform a formal stakeholder evaluation. Architecture Tradeoffs to Consider Your problem will have different stakeholders and stakeholder concerns. Your problem will require trade-off across these criteria. For example, if you are looking at Public Cloud, you may need to explore several aspects just to speak about cost and feasibility. - Leveraging current investments - Leveraging current investments can be done by transferring an on-premises analytics system to a cloud platform or by using current vendor solutions. - Data migration - Both inside and across cloud platforms, as well as between on-premises and cloud platforms, data transfer is possible. It is crucial to assess and prepare for the frequency and amount of data migration, as well as the corresponding network bandwidth needs. - Purchasing premium products - Selecting the finest goods for the various parts of a cloud analytics solution could need comparing goods offered by several cloud service providers. What are Some Architecture Alternatives? When developing an architecture, they often give the enterprise architect a broad shortfall in an organization: ‘improve customer intimacy,’ ‘improve agility,’ ‘lower risk,’ etc. One can use a wide range of potential solutions as architecture alternatives. Each will do a better job on one or more criteria. We primarily used this method in Architecture to support Strategy and Architecture to support Solution Delivery. Solution Delivery is the detail of how you will make a change in an initiative that was decided. Alternatives can be used to look at different ways of solving a problem at a level of detail (Strategy vs Solution Delivery), inside a domain (business, application, data, technology), or for the entire architecture. Example of Architecture Alternatives with Complex Criteria - Customer Intimacy There are some key considerations to think about when looking at alternative architectures, especially in digital transformation. The Seven Levers of Digital Transformation identifies seven distinct levers you need to control in a Digital Transformation. - Lever 1 - Business Process Transformation - Lever 2 - Customer Engagement and Experience - Lever 3 - Product or Service Digitization - Lever 4 - IT and Delivery Transformation - Lever 5 - Organizational Culture - Lever 6 - Strategy - Lever 7 - Ecosystem and Business Model When you look at the Seven Levers of Digital Transformation, you can quickly see that there will be multiple ways of addressing something like Customer Intimacy. At the start, you need to understand 'customer intimacy.' A client's beliefs and requirements deeply is essential for building customer intimacy. You must know consumer perceptions in order to adjust your company plan. Building a corporate culture that values its customers requires close customer relationships. All company divisions use a sincere understanding of client issues. This enhances customer service, which raises client loyalty. Customers that are committed to your brand are less inclined to use your rivals. When architecting to support solution development, you might look at product feedback. Here, architecture alternatives would be online engagement, focus groups, industry testers, etc. The design may be made simpler by collocating all crucial elements (including the analytics platform and other supporting elements) on a single Cloud Service Provider's platform. The CSP's product offerings or those of other vendors whose products the CSP supports must be chosen for this strategy. Distributing the components over various CSP platforms enables using the benefits of product offerings made available through each distinct CSP, as opposed to placing all necessary components on a single CSP's platform. Due to the possibility of additional complexity, this technique requires a thorough review of the architecture and consideration of the costs associated with using several CSPs. As an alternative, sharing critical components between on-premises and (single or many) Cloud platforms enables using investments in pre-existing on-premises components, but does so at the expense of on-premises and cloud technologies working in concert. Superior architecture, and the driving criteria will constrain these choices. Addressing this same question for strategy may have you looking at very different alternatives. Strategy covers significant organizational challenges. It will do things like outline how a major initiative that touched brand, communications, gathering product feedback, sponsorship, etc. would address ‘customer intimacy.’ When Should Architecture Alternatives Be Used? We can use architecture alternatives whenever there is a need in the core architecture that isn’t being met – as long as the tradeoffs are worth it in terms of the final design. When Are Architecture Alternatives Used in the TOGAF ADM? We actively use Architecture Alternatives in TOGAF ADM Phase A and the TOGAF ADM Phase E. In Phase A, it is uncommon for there to be a single Architecture Vision. In fact, it is very common for the following architecture domain work to explore Architecture Vision alternatives until we winnow the weaker options out. In Phase E the Architecture Roadmap technique of Architecture Roadmap Type 4: Scenario & Multiple Candidates explicitly compares Architecture Alternatives. Final Thoughts on Architecture Alternatives Architecture Alternatives are a method of problem solving. Good enterprise architects, do not lock in on an answer and try to prove it. Instead, they use Architecture Alternatives to find the best way to solve a problem. You will always have multiple workable answers. These answers will be stronger and weaker on different criteria. Your job is to ensure that the stakeholders are informed so that they can make a good choice. An architecture review board managing the enterprise architecture governance process will look for evidence that alternatives have been considered. Remember, the work of developing architecture and performing trade-off is valuable. Stakeholders can understand their preferences, the benefits, risks and uncertainty, and the work of different architecture alternatives. In the end, you are looking for the best target architecture for your organization today, not some theoretical best. How was our guide to architecture alternatives? Tell us your thoughts in the comments below.
<urn:uuid:270e0849-2845-4f00-8092-d1b5092ac441>
CC-MAIN-2022-40
https://conexiam.com/everything-you-need-to-know-about-using-architecture-alternatives/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00643.warc.gz
en
0.916329
2,058
3.15625
3
(By Dag Adamson – President Destroy Drive – Onsite Data Sanitization and Destruction) During the entire lifecycle of servers and storage arrays, different preventative measures need to be employed to ensure security. In addition to secure end-of-life handling of hard drives, there are several other instances when data “wiping” / sanitization or physical destruction is needed to protect personal identifiable information, financial data, or valuable/proprietary information. These different events include: – Returning equipment to leasing companies or service providers – Data Center Move – Data Center Consolidation – Cloud Migration While some organizations choose to physically destroy or wipe hard drives themselves, many choose to outsource this activity to obtain: - Scalability of wiping/sanitizing large quantities of hard drives - Transfer the liability of data destruction to another party - Stronger compliance by leveraging the expertise of an AAA NAID audited and certified resource - Faster and more efficient data destruction - Lower cost - Value recovery of reusable hard drives - Reduce strain on internal resources to focus on mission critical activities SIMPLICITY and SCALABILITY The process of data destruction is relatively simple. Physical destruction can be as simple as: remove the hard drive from the computing or storage platform and strike it with a blunt instrument like a hammer or puncture it several times with an electric drill. For a single or handful of drives this may be physically simple, however, when quantities increase – safety risks increase, drill bits seemingly only last for a few drives that contain hardened materials and speed of destruction becomes an issue. A more efficient method would be to use a specialized piece of destruction equipment that “bends” the hard drive or shreds the hard drive in compliance with AAA NAID certification or the NIST 800-88 guideline. Some data destruction software is relatively easy to use; however, some have technical limitations with RAID drives found in servers and connectivity & compatibility with large storage arrays. Specialized equipment, software and trained systems engineers are frequently leveraged from service providers overcoming these obstacles. Optimally, hard drives can remain inside servers or large storage arrays containing 100’s and even 1,000’s of hard drives when advanced wiping/sanitization equipment and software is employed. Using specialized software and equipment in many cases is 10X faster than wiping individual drives. Data collection of serial numbers can be simplified, and accuracy can be improved by leveraging state-of- the-art wiping/sanitization technology that automatically collects the make, model, and serial number of the hard drive. Anyone that has manually attempted to scan the serial number of a hard drive can attest that a technician must decipher which of the half-dozen bar-codes found on the face of a hard drive is the serial number. LIABILITY TRANSFER / CERTIFIED PROCESSES There continues to be a growing list of domestic & international laws/regulations requiring closer attention to data privacy, information destruction and especially compliance. Many laws including the Health Insurance and Portability Act (HIPAA) and the General Data Protection Regulation (GDPR) require evidence of processes and controls to assure proper compliance and stewardship of personal data. By leveraging vendors that are regularly audited by an independent data destruction standard, like the AAA NAID Certification, organizations can mitigate data destruction process risk and transfer the liability to professional and certified experts. Highlights to AAA NAID Certified vendors includes: - Detailed Audit – Physical Destruction and Sanitization - Onsite and Plant Based Services - Detailed attention to Chain of Custody - Specific Requirements for Employee background checks and drug testing - Highest level of operational and physical security - Security / Threat Prevention – Conducted by independent 3rd party auditor LOWER COST / VALUE CREATION In addition to compliance, lower cost and value creation are important considerations in compliant data destruction. Rather than purchasing “bending” or shredding equipment that is only periodically used, organizations leverage service providers that have trained technicians that use the equipment regularly. As an alternative, many organizations seek to off-set or even eliminate data destruction costs by performing compliant data wiping or sanitization and then remarketing servers or storage arrays. In the case of lease returns or using colocation or hosting services, it’s an imperative to wipe hard drives prior to returning equipment at the end of the lease or at the end of the service contract. The good news: Even the most rigorous government data security requirements afford hard drive wiping / sanitization compliance. When performed with rigorous attention to process, and leveraging state-of-the-art technology, sanitization/wiping is a cost-effective method for compliant data destruction. As with all professional service delivery, information is necessary to determine an initial scope of a data destruction program. The following information will help your DataSpan account manager determine the best and most compliant solution for your data destruction needs: - Type of equipment (servers, arrays, loose drives) - Manufacturer of server or array - Model of server or array - Number of drives - Type of interface (SAS, SATA, SCSI, FC) Alex von Hassler’s long term focus is the continued testing, learning, and deployment of modern IT solutions. During his years as a DataSpan team member, his responsibilities grew from managing Salesforce CRM to improving system security, creating marketing initiatives, as well as providing continued support to the highly motivated and experienced team in an ever-changing industry. As DataSpan evolves to provide the best-fitting IT solutions to its customers, Alex von Hassler continues to hone his skills in the world of web-based ERP systems, security, and best customer engagement practices. Empowering such a dynamic team with the right tools provides him with enormous gratification.
<urn:uuid:b8b13795-67a2-4900-bbdc-213e250ade95>
CC-MAIN-2022-40
https://dataspan.com/blog/mitigate-privacy-risk-in-lease-returns-data-center-move-consolidation-or-cloud-migration/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00643.warc.gz
en
0.92105
1,225
2.546875
3
10G ETHERNET GLOSSARY 1. 10 Gigabit Ethernet Standard (or 10GE or 10GbE or 10 GigE) was first published in 2002 as IEEE standard 802.3ae-2002 and is the fastest of the Ethernet standards. It defines a version of Ethernet with a nominal data rate of 10 Gbit/s, 10 times faster than Gigabit Ethernet. 2. 10 Gigabit Ethernet over UTP (802.3an) (See “10GBase-T”) 3. 10GBase-CX4 was the first 10G copper standard published by 802.3 (as 802.3ak-2004). It uses the XAUI 4-lane PCS (Clause 48) and copper cabling similar to that used by InfiniBand technology. It is specified to work up to a distance of 15 m (49 ft). Each lane carries 3.125 Gbaud of signaling bandwidth. Most short reach copper implementations today use SFP+ Direct Attach rather than 10GBase-CX4. 4. 10GBASE-KX4 and 10GBASE-KR (see “Backplane Ethernet”) 5. 10GBase-LRM (Long Reach Multimode) also known as 802.3aq uses the IEEE 802.3 Clause 49 64B/66B Physical Coding Sublayer (PCS) and 1310 nm lasers. It delivers serialized data over multi-mode fiber at a line rate of 10.3125 Gbit/s. 10GBase-LRM is designed to achieve longer distances over FDDI grade optical cable (OM1) within the data center. There has not been strong adoption of 10GBase-LRM. 6. 10GBASE-SR (SFP+ SR “Short Range”) uses the IEEE 802.3 Clause 49 64B/66B Physical Coding Sublayer (PCS) and 850 nm lasers. It delivers serialized data over multi-mode fiber at a line rate of 10.3125 Gbit/s. This is the most common optical PHY used in the data center. Emulex “M” models support SR optics. 7. 10GBase-T (IEEE 802.3an-2006), is a standard released in 2006 to provide 10 gbit/s connections over unshielded or shielded twisted pair cables, over distances up to 100 meters (330 ft). 10GBASE-T cable infrastructure can also be used for 1000BASE-T allowing a gradual upgrade from 1000BASE-T using auto-negotiation to select which speed to use. 10GBASE-T has higher latency and consumes more power than other 10GbE physical layers. In 2008 10GBASE-T silicon is now available from several manufacturers with claimed power dissipation of 6W and a latency approaching 1 microsecond. 10GBase-T is expected to become common in rack servers starting in 2H11. 8. 10GSFP+Cu (SFP+ Direct Attach) is a copper interconnect using a passive twin-ax cable assembly that connects directly into an SFP+ housing. It has a range of 10 meters and like 10GBASE-CX4, is low power, low cost and low latency with the added advantage of having the small form factor of SFP+, and smaller, more flexible cabling. Emulex “X” models support SFP+ Direct Attach. 9. Backplane Ethernet , also known by its working group name IEEE 802.3ap, is used in backplane applications such as blade servers and routers/switches with upgradable line cards. 802.3ap implementations are required to operate in an environment comprising up to 1 meter (39 in) of copper-printed circuit board with two connectors. The standard provides for two different implementations at 10Gbit/s: 10GBASE-KX4 and 10GBASE-KR. 10Gbase-KX4 uses the same physical layer coding (defined in IEEE 802.3 Clause 48) as 10GBASE-CX4. 10GBASE-KR uses the same coding (defined in IEEE 802.3 Clause 49) as 10GBASE-LR/ER/SR. The 802.3ap standard also defines an optional layer for FEC, a backplane auto-negotiation protocol and link training, where the receiver can set a three-tap transmit equalizer. Blade servers from Cisco and HP use 10GBASE-KR; blade servers from IBM and Dell use 10GBASE-KX4. 10. Bridges (L2 Switches) involves segmentation of local area networks (LANs) at the Layer 2 level. A multiport bridge typically learns about the Media Access Control (MAC) addresses on each of its ports and transparently passes MAC frames destined to those ports. These bridges also ensure that frames destined for MAC addresses that lie on the same port as the originating station are not forwarded to the other ports. 11. Broadcast Packet means that the network delivers one copy of a packet to each destination. On bus technologies like Ethernet, broadcast delivery can be accomplished with a single packet transmission. On networks composed of switches with point-to-point connections, software must implement broadcasting by forwarding copies of the packet across individual connections until all switches have received a copy. 12. Checksum (CRC), or (cyclic redundancy check), is a non-secure hash function designed to detect accidental changes to raw computer data, and is commonly used in digital networks and storage devices. A CRC is a "digital signature" representing data. The most common CRC is CRC32, in which the "digital signature" is a 32-bit number. FCoE packets contain a Fibre Channel checksum and Ethernet CRC. TCP/IP packets contain a TCP Checksum and Ethernet CRC. iSCSI packets optionally contain an iSCSI digest (CRC), TCP Checksum and Ethernet CRC. Offload engines and stateless offloads remove the checksum offload overhead from the host CPU. 13. Common Internet File System (CIFS) is a remote file system access protocol that works over IP networks to enable groups of users to work together and share documents across LANs or WANs. CIFS is an open, cross-platform technology based on the native file-sharing protocols built into the Microsoft Windows operating systems, and is also supported on other platforms. 14. Congestion Control for TCP uses a number of mechanisms to achieve high performance and avoid “congestion collapse,” where network performance can fall by several orders of magnitude. These mechanisms control the rate of data entering the network, keeping the data flow below a rate that would trigger collapse. 15. Congestion Notification (IEEE 802.1Qau) provides end-to-end congestion management for protocols that are capable of transmission rates limiting to avoid frame loss. It is expected to benefit protocols such as TCP that do have native congestion management as it reacts to congestion in a timelier manner. 16. Congestion Management (CM) (P802.1Qau) (See “Congestion Notification”) 17. Data Center Bridging Capability eXchange Protocol (DCBX) is responsible for configuration of link parameters for DCB function. It includes a protocol to exchange (send and receive) DCB parameters between peers, set local “operational” parameters based received DCB parameters, and resolve conflicting parameters. 18. Direct Data Placement Protocol (DDP), is the main component of iWARP, which permits the actual zero-copy transmission. DDP itself does not perform the transmission; TCP does. 19. Energy Efficient Ethernet (EEE) is the IEEE 802.3 standard to define a mechanism to reduce power consumption during periods of low link utilization for the following PHYs: 100BASE-TX (Full Duplex), 1000BASE-T (Full Duplex), 10GBASE-T, 10GBASE-KR, 10GBASE-KX4. 20. Enhanced Ethernet (EE), also known as Converged Enhanced Ethernet (CEE), is a generic term used by many vendors including HP, IBM, Dell, Brocade, and others to describe enhanced Ethernet. Data Center Ethernet (DCE) was a term originally coined and trademarked by Cisco. DCE refers to enhanced Ethernet based on the Data Center Bridging (DCB) standards, and also includes a Layer 2 Multipathing implementation based on the IETF's Transparent Interconnection of Lots of Links (TRILL) proposal. These terms generally refer to the collection of Priority Flow Control, Enhanced Transmission Selection and Data Center Bridging Capabilities Exchange Protocols. 21. Enhanced Transmission Selection (ETS) is the P802.1Qaz standard that specifies enhancement of transmission selection to support allocation of bandwidth among traffic classes. When the offered load in a traffic class doesn't use its allocated bandwidth, enhanced transmission selection will allow other traffic classes to use the available bandwidth. The bandwidth allocation priorities will coexist with strict priorities. It will include managed objects to support bandwidth allocation. 22. Fibre Channel over Ethernet (FCoE) is the encapsulation of the Fibre Channel protocol into Ethernet as defined by the INCITS T11 standards organization. This allows Fibre Channel traffic to coexist with TCP/IP traffic using a common adapter and network infrastructure. 23. Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second, as defined by the IEEE 802.3-2008 standard. 24. Internet Protocol (IP) is a protocol used for communicating data across a packet-switched internetwork using the Internet Protocol Suite, also referred to as TCP/IP. IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of delivering distinguished protocol datagrams (packets) from the source host to the destination host solely based on their addresses. 25. IP Multicast is a technique for one-to-many communication over an IP infrastructure in a network. It scales to a larger receiver population by not requiring prior knowledge of who or how many receivers there are. Multicast uses network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. 26. iSCSI is the storage networking standard developed by the Internet Engineering Task Force (IETF) for linking data storage over an IP-based network. 27. Internet Storage Name Service (iSNS) protocol allows automated discovery, management and configuration of iSCSI devices on a TCP/IP network. 28. iSCSI Extensions for RDMA (iSER) protocol maps the iSCSI protocol over a network that provides RDMA services (like TCP with RDMA services (iWARP or InfiniBand). This permits data to be transferred directly into SCSI I/O buffers without intermediate data copies. 29. iWARP (The Internet Wide Area RDMA Protocol) is an Internet Engineering Task Force (IETF) update of the RDMA Consortium's RDMA over TCP standard. iWARP is a superset of the Virtual Interface Architecture that permits zero-copy transmission over legacy TCP. It may be thought of as the features of InfiniBand (IB) applied to Ethernet. 30. Jumbo Frames are Ethernet frames with more than 1,500 bytes of payload (MTU). Conventionally, jumbo frames can carry up to 9,000 bytes of payload, but variations exist and some care must be taken when using the term. Many, but not all, Gigabit Ethernet switches and Gigabit Ethernet network interface cards support jumbo frames, but all Fast Ethernet switches and Fast Ethernet network interface cards support only standard-sized frames. 31. Large Send Offload (LSO) is a technique for increasing outbound throughput of high-bandwidth network connections by reducing CPU overhead. It works by queuing up large buffers and letting the NIC split them into separate packets. The technique is also called TCP Segmentation Offload (TSO) when applied to TCP, or Generic Segmentation Offload (GSO). 32. Large Receive Offload (LRO) is a technique for increasing inbound throughput of high-bandwidth network connections by reducing CPU overhead. It works by aggregating multiple incoming packets from a single stream into a larger packet buffer before they are passed higher up the networking stack, thus reducing the number of packets that have to be processed. 33. Lossless Ethernet fabrics are enabled by using priority-based flow control (PFC) to pause traffic based on priority levels. This allows virtual lanes to be created within an Ethernet link, with each virtual lane assigned a priority level. During periods of heavy congestion, lower priority traffic can be paused, while allowing high-priority and latency-sensitive tasks such as data storage to continue. 34. Marker PDU Aligned Framing for TCP (MPA) is required to run Direct Data Placement Protocol (DDP) over TCP to guarantee boundaries of messages. 35. Microsoft Chimney (TCP Offload) architecture offloads the data-transfer portion of TCP protocol processing for one or more TCP connections to a network interface card (NIC). This architecture provides a direct connection, called a chimney, between applications and an offload-capable NIC. 36. Multi-Source Agreements (MSAs) 10GbE Optical modules are not specified in IEEE 802.3 but by multi-source agreements (MSAs). The relevant MSAs for 10GbE are XENPAK, X2, XPAK, XFP and SFP+. The latest, smallest and lowest power is SFP+ (SFF-8431). Emulex OneConnect™ stand-up PCI adapters use SFP+ modules. 37. NetQueue Offload is a performance technology that significantly improves performance in VMware ESX server deployments by queuing data to multiple receive queues, generally tied to VMs running under ESX. MSI-X is then used to signal to the specific queue being used. 38. Network File System (NFS) is a distributed file system that allows a system to share directories and files with other systems over a network. NFS is most commonly used with Linux and Unix systems. 39. Network Interface Controller (NIC) is a hardware device that handles an interface to a computer network and allows a network-capable device to access that network. The NIC has a ROM chip that contains a unique number, the multiple access control (MAC) Address that is permanent. The MAC address identifies the device uniquely on the LAN. The NIC exists on both the 'Physical Layer' (Layer 1) and the 'Data Link Layer' (Layer 2) of the OSI model. 40. Partial Offload (see “Microsoft Chimney”) 41. PCI Express (Peripheral Component Interconnect Express), abbreviated as PCIe or PCI-E, is a computer expansion card standard designed to replace the older PCI, PCI-X, and AGP standards. PCI Express is used as a motherboard-level interconnect (to link motherboard-mounted peripherals) and as an expansion card interface for add-in boards. 42. Priority-based Flow Control (PFC), IEEE 802.1Qbb, provides a link level flow control mechanism that can be controlled independently for each Class of Service (CoS), as defined by 802.1p. The goal of this mechanism is to ensure zero loss under congestion in DCB networks. 43. Remote Direct Memory Access Protocol (RDMA or RDMAP) is a direct memory access from the memory of one computer into that of another without involving either one's operating system. This permits high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters. 44. Receive-Side Scaling (RSS) is a technology that enables packet receive-processing to scale with the number of available computer processors, by dynamically load-balancing inbound network connections across multiple processors or cores. 45. Retransmission is the resending of packets which have been either damaged or lost. It is a term that refers to one of the basic mechanisms used by protocols operating over a packet switched computer network to provide reliable communication (such as that provided by a reliable byte stream, for example, TCP). 46. SACK processing stands for Selective Acknowledgement and is an advanced TCP feature. With SACK, the receiver explicitly lists which packets, messages, or segments in a stream are acknowledged (either negatively or positively). Positive selective acknowledgment is an option in TCP. 47. SFP+ Direct Attach (See “10GSFP+Cu”) 48. SFP+ SR “Short Reach” (See “10GBASE-SR”) 49. Single Root I/O Virtualization (SR-IOV) allows a PCIe device to appear to be multiple separate PCIe devices. The SR-IOV specification includes physical functions (PFs) and virtual functions (VFs). PFs are full-featured PCIe functions. VFs are lightweight functions that do not support configuration resources. With SR-IOV, virtual machines can share adapter ports using virtual functions to optimize performance. 50. Spanning Tree Protocol (STP) is defined in the IEEE Standard 802.1D and creates a spanning tree within a mesh network of connected layer-2 bridges (typically Ethernet switches), and disables those links that are not part of the tree, leaving a single active path between any two network nodes. 51. Stateful Offload describes the type of offload done by a TCP Offload Engine, where connection state is passed to the device. 52. Stateless Offloads do not require state to be stored in the offload engine. These are operations like checksum offload, LSO, and LRO. 53. TCB (Transport Control Block) is a data structure in a TCP connection that contains information about the connection state, its associated local process, and feedback parameters about the connection's transmission properties. 54. TCP Offload Engine or TOE, is a technology used in network interface cards (NICs) to offload processing of the entire TCP/IP stack to the network controller. It is primarily used with high-speed network interfaces, such as gigabit Ethernet and 10GbE, where processing overhead of the network stack becomes significant. The term TOE is often used to refer to the NIC itself, although it more accurately refers only to the integrated circuit included on the card which processes the TCP headers. TOEs are used to reduce the overhead associated with protocols like iSCSI. 55. SFP+ form factor (SFF-8431) is a specification for compact, hot-pluggable transceiver that interfaces a device to a fiber optic or copper networking cable. SFP transceivers are designed to support Gigabit Ethernet and Fibre Channel and have expanded to SFP+ to support data rates up to 10.0 Gbit/s (including data rates for 8 gigabit Fibre Channel, and 10GbE.) 56. User Datagram Protocol (UDP), sometimes called the Universal Datagram Protocol, is one of the core members of the Internet Protocol Suite, the set of network protocols used for the Internet. With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without requiring prior communications to set up special transmission channels or data paths. 57. Unicast transmission is the sending of information packets to a single network destination. The term "unicast" is formed in analogy to the word "broadcast" which means transmitting the same data to all destinations. 58. VLAN (virtual LAN) is a group of hosts with a common set of requirements that communicate as if they were attached to the Broadcast domain, regardless of their physical location. A VLAN has the same attributes as a physical LAN, but it allows for end stations to be grouped together even if they are not located on the same network switch.
<urn:uuid:2975c997-5ed8-45d5-b70f-abbf0b754bd5>
CC-MAIN-2022-40
https://www.10gtek.com/new-913
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00043.warc.gz
en
0.896553
4,200
2.859375
3
Decision Trees in Business Analysis PMI-PBA (Fast Food vs Fine Dining?) Decision Trees in Business Analysis PMI-PBA (Fast Food vs Fine Dining?) This post if from our PMI-PBA Business Analysis Certification course. Start training today! This is a simplified nautical explanation of Decision Trees to help represent some of the features that PMI and/or IIBA reference when they’re taking about decision trees. We’ll go a little further into this. The concept is that you’re going to make some decisions and based on the decisions, you’re going to have outcomes. A decision or a choice you might make is drawn by a square or a box and the outcome or the alternatives that arise as a result of that decision are typically shown as a zero, or circle, or a sphere depending on how they draw it. This is just an example of making decisions. What you’re doing in this particular rule model is you’re identifying when do you make choices and when based on the choices do you have outcomes. If you were to go and take a course in statistics. I took two of them during my MBA program, and one of those was an entire semester learning how to build and solve decision trees. They can be used for a lot of other mechanisms other than just business analysis rule planning. Be aware that the entire field or use of decision trees is much greater specifically than business analysis. I have an hour long webinar where I walk through how to build and use decision trees. Check it out. Principles of Decision Tree Design and Analysis in Project Management (PMP) For the PMI‑PBA exam, there are a couple of elements you need to understand though. I’m going to give overview of decision trees. You always build a decision tree, so you always build them from left to right. Once you have an entire decision tree built or you believe you built it the way that you solve it, or in the business analysis sense, the way you review it to make sure that it’s accurate is you solve from right to left. Once I get into this general overview for how you do it, you’re going to start with your choices. For example, let’s say you’re getting hungry. You’re either going to go to fast food or you’re going to go to fine dining. Once you make that decision, there are some outcomes that you may or may not be able to control. When I talk about outcomes, the food is good, the food is just OK or the food is bad. Now, here’s where I maybe disagree from what Business Analysts do. Because you want to make it easy, concise for people to follow rules, at least on the PMI‑PBA exam and to some degree on the Business Analyst Body of Knowledge (BABOK) and the exam for the Certified Business Analysis Professional (CBAP). They suggest that you should always build a diagram that has only two outcomes / two choices at a time. That gets really difficult to do. The reason I state that is if we’re talking about what compass direction do we go on? If you know a compass, a compass has 360 degrees and maybe half degrees and feet and seconds. You don’t have one option, you have lots of them. Trying to get this down to just two simple choices, go left, go right, maybe that works in a maze that’s got very square corners not so much in larger world type scale of issues. Trying to do that just with two is difficult because we have at least four major directions ‑‑ north, east, south and west. Be aware that typically on the test, they’re going to say any time you want to simply or make it concise, you want to try to get it down to two choices by your process. This is why we’re going from, create from left to right, solve from right to left. If we go fine dining, we have the same outcome. It’s good, it’s OK or it’s bad. Remember, as a Business Analyst our goal is always to look at things like impact. If it’s good, we have a positive impact, and that will be in both cases. If it’s bad, we have a negative impact. To do this from a risk perspective trying to solve this, I might say that, fast food if it’s good it’s only worth five dollars to me. Fast food if it’s bad might imply food poisoning. Food poisoning might cause me to lose $1,000 in terms of lost wages and medical bills to get through that food poisoning. That’s a negative, that’s why I did it in the red. My impact for fine dining might be to me that I love a great meal that’s worth $50 to me. If I get sick on that fine dining it’s also a loss of $1,000. I can start solving this, just the impact. If you’re a Project Manager you need to solve it by incorporating probability as well. I’m just not going to do that here to shorten it to give you the concept. If I were solving this from, again right to left. I need to know the probability of it. If most of the time I’m going to have a high impact 90 percent of the time and 10 percent of the time I’m going to have $1,000 loss. If it’s 90 percent that I’m going to be good up here. Let’s see, 90 percent times five dollars is $4.50 so I’m solving back to the left. Back here I have a $4.50 gain here and the impact means that 10 percent of the time I’m going to be losing $1,000. If that’s 10 percent and I come over here, that’s $100 loss. What I’ve got here is $4.50 effectively is going to be my gain but $100 is going to be my loss. When I put the two of these together, here’s my $100 loss solving from right to left. I have $100 loss, plus I have a $4.50 gain out of this just by probability. That doesn’t look so good. That fast food, by just rules from my decision suggest that I’m going to have ‑‑ if I calculate this quickly in my head ‑‑ a $95.50 overall loss. This is just trying to do some impact analysis. This is a decision tree, the rules we’re following. Let’s do the same thing for fine dining. Let’s just say that when we go to fine dining, particularly a great restaurant, my impact over here for this is going to be 99 percent of the time, we’re going to have this great outcome. 99 percent of the time, I’m just going to round it. That means we’re approx $49.90. That’s my gain. 49.90 is my gain on this 99 percent of the time and one percent of the time, I’m going to have a loss. One percent I’m moving the decimal point over a couple of places, so I’m going to have a $10 loss. That $10 loss is a negative here over a 10. My aggregate solution for fine dining is a $39.90 gain by, again, adding or aggregating from the right to the left. My wife loves this solution that suggests that we should always do fine dining rather than our decision to try to do fast food. This is a decision tree. You won’t have to solve them on the PMI‑PBA exam, at least not the version through June, July of 2018. You need to understand the concept that the squares are your choices or decisions, the O’s are your outcomes or choices. That you’re creating them from left to right and you’re always reading or solving them from right to left, and that decision trees are a form of a rule model. I look forward to seeing you in the classroom or online! Steve teaches PMI-PBA: Business Analysis Certification, PMP: Project Management Fundamentals and Professional Certification, Windows 10, and CompTIA classes in Phoenix, Arizona. Steve’s Video Certification Training Classes at Interface Technical Training: You May Also Like In this video, you will gain an understanding of Agile and Scrum Master Certification terminologies and concepts to help you make better decisions in your Project Management capabilities. Whether you’re a developer looking to obtain an Agile or Scrum Master Certification, or you’re a Project Manager/Product Owner who is attempting to get your product or … Continue reading Agile Methodology in Project Management This is part 1 of our 5-part Office 365 free training course. In this Office 365 training video, instructor Spike Xavier introduces some of the most popular services found in Microsoft Office 365 including the Admin Portal and Admin Center. For instructor-led Office 365 training classes, see our course schedule: Spike Xavier SharePoint Instructor – … Continue reading An Overview of Office 365 – Administration Portal and Admin Center In this Office 365 training video, instructor Spike Xavier demonstrates how to create users and manage passwords in Office 365. For instructor-led Office 365 training classes, see our course schedulle: Spike Xavier SharePoint Instructor – Interface Technical Training Phoenix, AZ 20347: Enabling and Managing Office 365
<urn:uuid:073738c0-c8e9-47c1-9489-214bf533560f>
CC-MAIN-2022-40
https://www.interfacett.com/blogs/decision-trees-in-business-analysis-pmi-pba-fast-food-vs-fine-dining/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00043.warc.gz
en
0.940402
2,060
2.546875
3
One of the most important parts of your network security when it comes to devices connecting through the internet is a virtual private network (VPN). There are a number of things that can cause risk when it comes to online activities. Two of these are connections to your network from remote employees and when users on your network connect to the outside world. Anytime a device is using a connection, it can open a portal that leaves data transmissions at risk of being compromised and that leaves a network vulnerable to a hacker. 28% of data breaches are caused by weak remote access security. A VPN is a way to secure those network connections by adding a layer of encryption to the traffic being transmitted and also controlling network access. But all VPNs are not the same. Two distinct versions of VPNs are those that are firewall-based, Office VPNs, and those that are Anonymous VPNs, typically provided as an application. We’ll break down each type below, so you’ll understand their uses and how they can be applied to your cybersecurity needs. How Does an Anonymous VPN Differ from an Office VPN? There are several factors that go into the choice between an office VPN and anonymous VPN. These include considerations of cost, reliability, and security. One of the main differences between the two types of virtual private networks is that an office VPN, which is firewall-based, is controlled locally, where an anonymous VPN is controlled by a third-party provider. Here is an overview of both types. Firewall-based Office VPN With the office VPN, you are getting a combination of a network firewall and virtual private network in one. Firewalls are designed to protect your internal network by monitoring all traffic coming in and going out. The VPN gains added security from the firewall processes, such as address translation, user authentication, alarms, and monitoring. Companies can have either a hardware or software firewall. A hardware-based firewall is a piece of equipment connected to their office network. Software firewalls use an installed application instead to provide firewall protection. The VPN part of the firewall can either be placed first, in front of the firewall. Or it can be done with the firewall as the outer ring of protection. An anonymous VPN is the one you’re looking at when you see products like NordVPN or ExpressVPN. They’re marketed to both consumers and business users. This type of VPN makes your IP address anonymous. Instead of a website that’s being visited seeing your router’s IP address, it sees the IP address of the VPN provider’s server that you’re using to connect. This type of VPN involves downloading software onto a device and turning it on. Once on, it secures connections that device is making. How Each VPN Protects Traffic An office VPN is like having a team of sentries stand guard over your network and ensure only approved traffic is coming in and out and that the traffic is secure. An anonymous VPN is device-based and is more like a shield that a soldier would carry around to protect them. As you connect online it secures that connection. You have much more control of security policies and user authentication when using an office VPN. For example, you have the ability to set up challenge questions or other modes of authentication before allowing a remote user to connect to the VPN. When you’re using an anonymous VPN through a 3rd party provider, your options are more limited. Most of these have a simple username/password login and the ability to add multi-factor authentication, but you don’t have the same policy controls. When it comes to cost, the anonymous VPN is the most economical. This is because it’s a monthly service that you’re paying for per user, similar to a cloud service subscription. Firewall-based VPNs are usually an outright purchase when you buy the firewall, although software-based ones can offer significant savings over hardware firewall/VPNs. Speed and Reliability Because office VPNs are locally connected to your network to secure remote traffic, they tend to be faster and more reliable than VPNs hosted on a provider’s server. With an anonymous VPN, you’re relying on the service provider’s server entirely. The speed and reliability can be hampered by things like how far away the server is and how many other users are accessing it. Ease of Use Once set up, an office VPN can be just as easy to use as an anonymous VPN. However, to take advantage of the flexibility for customization, there is generally more setup work involved than there is with an anonymous VPN, which can be used “out of the box” pretty quickly. Which VPN is Right for Your Company’s Needs? You don’t want to guess when it comes to deciding which VPN to use. Instead, contact C Solutions. We can do a full assessment of your network protection needs and recommend the best VPN to match those. Schedule a free network assessment today! Call 407-536-8381 or reach us online.
<urn:uuid:6f7c9b3b-0188-4c0c-a120-6986a298f985>
CC-MAIN-2022-40
https://csolutionsit.com/office-vpns-vs-anonymous/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00043.warc.gz
en
0.943004
1,069
2.703125
3
Is my browser making an effort to keep my system safe and my online behavior private? This is usually not the first question we ask ourselves when we choose our default browser. But maybe it should be. These days, threats to your privacy and security come at your from all angles, but browser-based attacks such as malvertising, drive-by downloads, adware, tracking, and rogue apps make going online and conducting a search a little more dangerous. Therefore, it's important take note of what browsers are doing to shore up their defenses—and what you can do to optimize them. When it comes to online privacy, it looks as if the silent majority of Internet users have shifted from the “I have nothing to hide” frame of mind to the “they already know everything anyway” group. And based on recent events, many social media users might right. Effectively, both groups feel as though it is not worth the trouble to jump through hoops to keep their data private. So should this even be a consideration? While privacy is ultimately a personal choice, we believe it is still a right. So we'll continue to offer our advise for those who are interested. But let's look at the security aspect first. This is something we can all agree on. Browser security measuresThere have been a few initiatives taken recently by the major browsers to enhance their safety. - Google has decided that Chrome extensions submitted to the Web Store will not be allowed if they contained “obfuscated” code. According to Google, developers should not have to hide their code. It makes it hard to decide whether they should allow the extension, and most obfuscated extensions turned out to be malicious. - Google is in the process of putting an end to “inline installation” of extensions. This means websites can no longer directly install Chrome extensions using the Chrome API, but have to send you to the Web Store. While this process will only be finished by the end of the year, distributors have already adapted their methods to deliver their extensions. - Mozilla (Firefox), Google (Chrome), Apple (Safari), and Microsoft (Edge and Internet Explorer) have announced to drop support for the TLS (Transport Layer Security) 1.0 and 1.1 encryption protocols in early 2020. This will force websites to start using the newer and more secure protocols. - WebRTC leaks and vulnerabilities were solved. Real-time communication features could expose your true IP address via STUN requests with Firefox, Chrome, Opera and Brave browsers, even when you were using a VPN. Remaining problemsDespite all the attempts to apply some pest-control on adware, malicious cryptominers, and other assorted browser hijackers, there will always be those that manage to slither through and infect users. And that doesn't even take into account the multitude of potentially unwanted programs (PUPs) that most parties don’t even seem to care about at all. However, readers of this blog will undoubtedly know the way to our Malwarebytes products page, where they can download a cure for an infected browser. Browser privacyThe upside of being able to use browser extensions is that there are many good ones out there that can help you establish a more private browsing experience. Ad-blockers, anti-tracking tools, and protective extensions add further protection. You can also tighten your privacy by using a Virtual Private Network (VPN) to anonymize your traffic. You have options here, since you can install a VPN to anonymize all your Internet traffic, or you can install a VPN extension that will do so for your browser only. Since a VPN slows down the Internet connection, the choice will be based on which other Internet connections you use and your personal preference. You could even decide to use one browser with a VPN extension and another without one. Personally, I use different browsers for different purposes. This is called compartmentalization and it allows you to visit trusted (and preferably bookmarked) websites with a quick browser and do your regular surfing with a fully protected and anonymized browser. Besides using a VPN, you can also look at some alternative browsers that are already optimized for privacy and security: - The TOR software protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world. - Freenet is a peer-to-peer platform for censorship-resistant communication and publishing that is available for Windows, macOs, and Linux. - Waterfox is a secure and private browser based on Firefox, that allows you to use Firefox extensions. It is available for Windows, macOS, Linux, and Android. - Pale Moon is another Mozilla fork, but it doesn’t work with all Firefox extensions. It is available for Windows and Linux. - Brave is a Chromium-based browser that blocks unwanted content by default and does not need much tinkering to keep you safe and private. Brave is available for Windows, macOs, Linux, iOS, and Android. Anonymous searchingWe have talked about (not so) private search extensions before, but I want to mention a search engine that does deliver on the promised private searches, and that was brought up in the comments to that blogpost (thanks Patrick). It is called DuckDuckGo, and you can perform searches directly from their site or you can install their app or extension. Test to see whether your browser is safe against fingerprintingBrowser fingerprinting is a method used by commercial websites to uniquely identify visitors based on the way you have configured your browser and some other metrics that they can fetch from your browser, such as timezone. If you feel you have already done your best to make your browser untrackable, pay this site a visit: https://panopticlick.eff.org/. It provides visitors with an option to do a test and analyze how well their browser and add-ons protect them against online tracking techniques. The site will also be able to see if your system is uniquely configured and therefor identifiable, even if you are using privacy-protective software. Don’t get hung up on the test result alone though, because the number of results you are compared with plays a big role in the outcome. For example, coming from a small country or language area may give you away when no one else from that area has taken the test. This doesn’t automatically mean advertisers will be able to track you as well. Do pay attention to the specified fingerprinting results. You can access those by clicking on the fingerprinting link in the Test column. Blocking advertisementsAs we have explained in the blogpost Everybody and their mother is blocking ads, so why aren’t you?, blocking advertisements provides a vital security layer that not only severs a potential vector for online malvertising attacks, but also blocks privacy-invading tracking plugins from collecting and harvesting your personal information. CookiesCookies are another topic that we have discussed earlier. Most cookies are not worth worrying about, but it is a good idea to be aware of them. How could you not be aware with every site asking your permission, right? In the blogpost Cookies: Should I worry about them?, we have explained how you can check and control the cookies that you want to allow. Level of concernSo, while many major browsers are doing their best to keep you secure and private, it depends on your own level of concern how far you want to take this journey. There are specialized browsers, extensions, search engines, and other tools to help you achieve any level of privacy. Most people will be satisfied by customizing their mainstream browser to fit their needs, while others wouldn’t think of going online unless they are using Tor behind a VPN. To each their own. As long as you are aware of the risks. And we hope this post will help you to achieve the level you are after. Stay safe, everyone!
<urn:uuid:cfe8bb10-8a44-4f1d-81b7-a29a722ac532>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2018/10/tighten-security-increase-privacy-browser
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00043.warc.gz
en
0.941554
1,626
2.59375
3
The most popular and comprehensive Open Source ECM platform Carbon is considered the most important element to living things because of the many bonds and compounds that it can form. Carbon’s chemistry is what forms a huge body of work known as organic chemistry. There’s a new chapter about carbon that is only recently being written. It is about a special form of carbon called graphene that is the thinnest material known, is incredibly strong, and which can be combined with other gases and metals to form very interesting compounds, often having superior properties. Sheets of the one-atom thick material have very strong and useful thermal and electrical properties. It is sometimes referred to as ‘black gold’ because of its current high cost to produce. Graphene was first isolated in 2004 and Andre Geim and Konstantin Novoselov received the Nobel prize in physics for the discovery in 2010. Graphene has intensely interested scientists since it was identified. Scientists discovered that graphene exhibits a new type of quantum Hall effect. They also observed that charge carriers in graphene could move in a way similar to massless high-energy particles traveling at relativistic speeds. Recently scientists at Argonne National Laboratory discovered unusual behavior when gold nanoparticles are placed on one-atom-thick graphene sheets. When a light was shined on the gold particles the scientists observed a plasma field form. The field was symmetric and for the gold particles near the edge of the graphene sheet, it was particularly strong. This is the first time a plasma effect like this has been observed. The Argonne experiment was made possible with a special kind of electron microscope that allowed scientists to observe materials over very short periods of time. Ilke Arslan, Director at Argonne, said that “having the ability to take measurements like this in such a short time window opens up the examination of a vast array of new phenomena in non-equilibrium states that we haven’t had the ability to probe before. We are excited to provide this capability to the international user community.”
<urn:uuid:e572b8f0-cc4f-42d3-a5a4-a365e2312a24>
CC-MAIN-2022-40
https://formtek.com/blog/graphene-chalk-up-another-unusual-behavior-for-this-remarkable-material/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00043.warc.gz
en
0.967717
422
3.484375
3
As a candidate and as President, John F. Kennedy pushed the government to remake the classroom. In 1960, he told the National Association of Educational Broadcasters that shoring up the “shameful weaknesses” of the nation’s classrooms was a question of “national survival.” Of all the modern presidents, Kennedy may have had the most prescient vision of what technology can do for society. Kennedy called the leading audiovisual technology of the time an instrument “with the potential to teach more things to more people in less time than anything yet devised,” saying it “seems a providential instrument to come to education’s aid.” To that end, he signed the Educational Television Act in 1962, providing funds for noncommercial television broadcasting. The Federal Communications Commission (FCC) then created the Instructional Television Fixed Service (ITFS) to beam educational TV into the classroom. As it turned out, TV transformed nearly everything but the classroom. Asking teachers to be TV producers was too much, and the idea didn’t get far off the ground. Much of the airwaves set aside for the ITFS lay fallow, and nearly half the states had no ITFS licensees. Time and technology don’t stand still, even as policy seems stuck in the past. New technologies came, and the FCC transformed ITFS into the Educational Broadband Service (EBS), recognizing that new technologies might allow Kennedy’s idea to flower. Once again, technology transformed the world, but the EBS remained stuck in the past. Few educators knew how to use it, and funding was in short supply. So, with the FCC’s blessing, educational entities leased their EBS licenses to commercial entities, including mobile wireless carriers. 2.5 GHz spectrum band can power 5G and the IoT EBS frequencies – also known as the 2.5 GHz band – lie in the middle of the frequency range. These mid-band frequencies sit in a spectrum “sweet spot.” They are low enough to cover large areas but have the bandwidth to carry high-capacity services. With its high functionality, mid-band spectrum can support the latest technologies, like autonomous vehicles and augmented reality. In short, it’s ideally situated for 5G and the Internet of Things (IoT). For these reasons, mid-band spectrum is where most of the rest of the world will deploy 5G. For the U.S., it’s a vital missing link that properly utilized could ensure American global dominance of the future of wireless communications. So where does this leave the EBS? Unfortunately, it’s in the same place as when it was the Instructional Television Fixed Service. The 2.5 GHz band is underutilized or unused in about half of the country, and the FCC estimates that more than 90 percent of the EBS licenses held by educational institutions are leased to other entities. FCC Commissioner suggests EBS spectrum auction would help solve Homework Gap problem It doesn’t have to stay this way. Enlightened policy makers like FCC Commissioner Jessica Rosenworcel have a better idea. Seven in 10 teachers assign homework requiring internet access, and the FCC says one-third of all households, mostly in rural areas, do not have broadband service. Commissioner Rosenworcel coined the phrase the “Homework Gap” to describe the place where those numbers overlap. The Senate Joint Economic Committee says the Homework Gap affects 12 million school-aged kids across the country. For students in households without broadband, getting homework done is extremely difficult. As a former rural legislator, I’ve seen the problem firsthand. Kids should not have to go to McDonald’s or sit in the parking lot of a business with Wi-Fi in order to complete their homework assignments. Commissioner Rosenworcel‘s idea is to help close the Homework Gap by putting frequencies in the 2.5 GHz band into an incentive auction. Incentive auctions first pay the entity that holds the license, with the money left over going to the government. In the recent broadcast incentive auction, broadcasters received $10 billion, and approximately $8 billion went to the U.S. Treasury. This idea has a lot of merit. The auction is voluntary. Educational institutions that have developed the EBS could keep those frequencies. Institutions wanting in? They could sell. Commissioner Rosenworcel goes a necessary step further, as her plan would designate the government’s share of the proceeds for education, helping schools find innovative ways to make progress toward closing the Homework Gap. While Commissioner Rosenworcel and I are Democrats, this idea isn’t partisan. Technology policy traditionally has been bipartisan, and Republicans like FCC Commissioners Brendan Carr and Michael O’Rielly have also expressed support for examining an auction for the EBS. If all goes well, we will have a policy that would be a credit to JFK’s original idea, giving educators a providential instrument that can help with funding to close the Homework Gap, while at the same time moving the U.S. to the 5G forefront.
<urn:uuid:3d307841-840e-4a51-97b2-bedbaa1953da>
CC-MAIN-2022-40
https://www.cio.com/article/219785/lessons-from-broadcast-spectrum-auction-can-help-close-the-homework-gap.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00043.warc.gz
en
0.951777
1,079
2.765625
3
Phishing attacks increased 19 percent in June over May, according to a report released by the Anti-Phishing Working Group. Of the 1,422 new unique attacks, 92 percent of them used forged, or “spoofed,” e-mail addresses. To somemembers of the working group, that fact reveals a crying need for sender authentication in all e-mail in order to limit both spam and phishing. “Classic phishing attacks are dependent on normal e-mail as anattack medium,” the group’s Peter Cassidy told TechNewsWorld. “If you can slow down the volume of spam, you can slow down the number of successful hits that phishing attacks make.” “Yahoo and Microsoft finally got together on Sender ID,” Cassidy said. “If that’s widely adopted it should cut down the number of raw spams allowed to traverse the Internet.” The Phish Story Phishing involves the mass distribution of “spoofed” e-mail messages with return addresses, links and branding which appear to originate from banks, insurance agencies, retailers or credit card companies. The bogus messages can trick their recipients into divulging personal authentication data such as account information, credit card or social security numbers and PINs. Because the e-mails look genuine, recipients respond to them and become victims of identity theft and other fraudulent activity. Phishing can also involve the planting of clandestine code on a computerfor filching information in real time through programs like key loggers. Sender authentication will be very successful in getting rid of the “script kiddies,” but it won’t discourage more sophisticated phishers from their avocation, the working group’s Cassidy maintained. “What will happen is that the professionals will start bearing down on stuff that needs a greater degree of sophistication,” he said. Signs of these more sophisticated phishing vehicles have already been discovered in the wild. These vehicles use encryption to evade detection by antivirus software. Once nested on a computer, they begin logging keystrokes based on discrete events, such as accessing an online bank account. Then the logsare sent to a phisher without the computer operator’s knowledge. “There has been a huge shift in phishing from last November to this summer in terms of how attacks are done,” Bill Franklin, president of 0Spam Network Corporation in Coral Gables, Florida, told TechNewsWorld. “There weren’t any of these sophisticated attacks last fall,” Franklin said. “It would take a good four to six months” for a phishing attack to target a security gap. “Whereas now,” he said, “if a security exploit is observed, in two weeks — guaranteed — there’s going to be a virus and phishing attack that take advantage of that.” According to the group’s report, the financial services sector remains the top target of phishers, garnering more than1,000 of the new unique attacks. Citibank alone amassed 492 attacks, a 32 percent jump from the previous month. Financial Sector Fights Back Because it has become a prime target of phishers, the industry has launched an initiative through the Financial Services Technology Consortium (FSTC) to define the full scope of the phishing problem and find new solutions to it. “That’s the first of program of its kind to attack phishing specifically,” Cassidy noted. “The FTSC project will be useful because it will be shared more broadly in the industry,” observed Jim Maloney, chief security executive at Corillian in Portland, Oregon, a provider of online banking solutions. “It will give us a better idea of the full scope of the problem and the full range of solutions that can applied to it,” he told TechNewsWorld.
<urn:uuid:17bcd54b-e358-4ff9-a705-a1ddb3ae6d38>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/phishers-dangle-more-hooks-in-june-35570.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00043.warc.gz
en
0.922322
838
2.5625
3
Note: It occurred to me that I’ve been calling the Map Server / Map resolver router by both ‘maprouter’ as well as ‘mapserver’ throughout this post. Any reference to either mapserver or maprouter refers to the central Map Server / Map Resolver router. I’ve corrected this error in future posts and called it just ‘mapserver’. Sorry! So now that we’ve talked a little bit about the basics in the intro post, let’s do a little LISP configuration to get our feet wet. One struggle I’ve had with learning LISP is that there really aren’t a ton of tutorials on it at the moment. I find that I learn best through example, so in the next few posts I’d like to walk through some very simple configurations that should help you get your feet wet, but not be terribly over complicated. That being said, let’s dig right in. So let’s start with a base topology. Something like this… I’m going to start completely from scratch here and apply this base configuration to each device… ip address 10.0.1.1 255.255.255.252 ip address 10.0.2.1 255.255.255.252 ip address 10.0.1.2 255.255.255.252 ip address 192.168.1.1 255.255.255.255 ip route 0.0.0.0 0.0.0.0 10.0.1.1 ip address 10.0.2.2 255.255.255.252 ip address 192.168.2.1 255.255.255.255 ip route 0.0.0.0 0.0.0.0 10.0.2.1 So at this point, we have some basic routing going on, but nothing fancy. We can ping between the border networks, but router 2 doesn’t know about router 3’s loop 0 interface and vice versa. In any other scenario, we’d configure some sort of IGP (or static routes) to get Loop 0 reachability between all of the routers. Since this is a LISP lab, let’s give each router a LISP role… So here’s what we’ve done. We’ve renamed Router1 to ‘maprouter’. Maprouter will serve as a LISP map server and map resolver. In addition, maprouter is providing IP connectivity between the two other routers. Router2 has been renamed ‘lisp1’ and is providing LISP xTR functionality for what I’ll refer to as ‘Site A’ throughout the rest of the example. Router3 has been renamed ‘lisp2’ and is providing LISP xTR functionality to what I’ll refer to as ‘Site B’ throughout the rest of this example. The green shaded area refers to what is considered the RLOC space and the orange shaded area refers to what is considered the EID space. If you need a refresher on the LISP terminology , refer to my first post here. So now that we’ve changed things up a little bit, let’s put some LISP configuration on each device. Let’s start with the mapserver router (formerly router 1). Configuring LISP is much like configuring any other dynamic routing protocol. Enter LISP configuration mode as follows… Enter configuration commands, one per line. End with CNTL/Z. Now that we are in the LISP configuration mode, we can configure this router for it’s designated LISP role… Since this router is acting as the map server, we need to define each of the LISP site’s that plan on connecting to this map server… mapserver(config-router-lisp)# site sitea mapserver(config-router-lisp-site)# description Site A mapserver(config-router-lisp-site)# authentication-key sitea mapserver(config-router-lisp-site)# eid-prefix 192.168.1.0/24 To build a site, we need to define the site name, the key it will use for authentication, and the EID prefix for which that site is authoritative for. We’ll complete the same configuration for the other side, or Site B… mapserver(config-router-lisp)# site siteb mapserver(config-router-lisp-site)# description Site B mapserver(config-router-lisp-site)# authentication-key siteb mapserver(config-router-lisp-site)# eid-prefix 192.168.2.0/24 That’s really all we need to do on the map server at this point. Let’s run a couple of commands though to verify things… As you can see, the mapserver router knows that sitea and siteb have been defined, but they are not yet ‘up’ at this point. Let’s move onto the site router configuration… The configuration for the site’s is pretty easy as well. Recall that since these router’s are performing LISP xTR functionality, they are performing both the LISP ITR and ETR roles. Let’s walk through the lisp1 router configuration… lisp1(config-router-lisp)# database-mapping 192.168.1.0/24 10.0.1.2 priority 1 weight 100 The database-mapping command is used to describe the EID space for which this xTR is authoritative for. Don’t worry about the priority and weight commands just yet, we’ll cover those in one of the coming posts. Next we have to define the router roles… lisp1(config-router-lisp)# ipv4 itr map-resolver 10.0.1.1 lisp1(config-router-lisp)# ipv4 itr When configuring the ITR role, we need to tell the router what to use as a map resolver. In our case, maprouter is performing both the map resolver and map server functionality so we specify it’s interface facing the lisp1 router. Recall that the ITR is a device that takes traffic from EID space, encapsulates it, and sends it out into RLOC space as a LISP packet. That being said, before it can do that it needs to ‘resolve’ the EID prefix it’s headed for. Hence, it needs to query the LISP map-resolver. Next, we configure the ETR function… lisp1(config-router-lisp)# ipv4 etr map-server 10.0.1.1 key sitea lisp1(config-router-lisp)# ipv4 etr When we configure the ETR function, we have to define the map-server. The map-server is the LISP device that keeps track of all of the LISP mappings. That being said, it would make sense that you’d want some type of authentication in place for this role. After all, you want to make sure that the right device is registering the right EID prefixes. Here we specify the map server as well as the key that we used to define the site on the mapserver router earlier. Before we move onto configuring the lisp2 router, let’s take a quick look at the mapserver router again… As you can see, the mapserver router and the lisp1 router have already registered with each other. Once we finished the configuration on lisp1 it immediately reached out to the mapserver router to register. Let’s configure a quick debug on the mapserver router so we can see lisp2 register after it’s configured… mapserver#debug lisp control-plane all All LISP control debugging is on This configuration is almost identical to lisp1 so we won’t talk about it in such great detail… lisp2(config-router-lisp)# database-mapping 192.168.2.0/24 10.0.1.2 priority 1 weight 100 lisp2(config-router-lisp)# ipv4 itr map-resolver 10.0.2.1 lisp2(config-router-lisp)# ipv4 itr lisp2(config-router-lisp)# ipv4 etr map-server 10.0.2.1 key siteb lisp2(config-router-lisp)# ipv4 etr Immediately after configuring lisp2, we see these messages on the mapserver router… *Nov 6 02:19:47.287: LISP: Processing received Map-Register message from 10.0.2.2 to 10.0.2.1 *Nov 6 02:19:47.287: LISP: Processing Map-Register no proxy, no map-notify, no merge, no security, no mobile-node, 1 record, nonce 0x9942D785-0xBA4E2890, key-id 1, auth-data-len 20 *Nov 6 02:19:47.287: LISP: Processing Map-Register mapping record for IID 0 192.168.2.0/24, ttl 1440, action none, authoritative, 1 locator 10.0.2.2 pri/wei=1/100 LpR *Nov 6 02:19:47.287: LISP-0: MS registration IID 0 prefix 192.168.2.0/24 10.0.2.2 site siteb, Created. *Nov 6 02:19:47.291: LISP-0: MS registration IID 0 prefix 192.168.2.0/24 10.0.2.2 site siteb, Adding locator 10.0.2.2. *Nov 6 02:19:47.291: LISP-0: MS EID IID 0 prefix 192.168.2.0/24 site siteb, Map-Notify, to registering ETRs due to changed registration. *Nov 6 02:19:47.291: LISP-0: MS EID IID 0 prefix 192.168.2.0/24 site siteb, ALT route update/create. *Nov 6 02:19:47.291: LISP-0: ALTroute IID 0 prefix 192.168.2.0/24 <-> created. *Nov 6 02:19:47.291: LISP-0: ALTroute IID 0 prefix 192.168.2.0/24 <-> add source MS-EID. *Nov 6 02:19:47.291: LISP-0: ALTroute IID 0 prefix 192.168.2.0/24 <MS-EID> RIBroute ignore create, no ALT RIB. There’s a lot of detail here and some pieces of LISP that we haven’t yet explored but it’s fairly apparent from this debug that the registration of the LISP site ‘siteb’ was successful. This can be confirmed by checking the lisp sites on the mapserver router again… Testing LISP between sites Now that we’ve configured LISP, it’s time to test it out. Let’s try pinging from site A to site B to see what we get… As you can see, it took the router a couple seconds, but eventually the pings went through. The delay was due to the LISP lookup that occurred between lisp1 and mapserver. If we take a look at both of the xTRs, we can see that they both have a valid map-cache entry for the the other router’s EID space… I don’t want us to get caught up in the specifics (AKA, I’m not going to walk through the router debugs or packet captures in this post) but I do want to talk a little bit about the forwarding decision on LISP enabled routers before we wrap this post up. As we saw, the LISP enabled router populates it’s local map-cache with responses it receives from map resolvers and other ETRs. However, those entries don’t appear when you look at the routing table the way most engineers do. For instance, let’s look at the forwarding table on router lisp1… As you’ve probably noticed, there isn’t a forwarding entry for the 192.168.2.0/24 network. However, if we take a look at CEF… We can see that CEF does actually know how to forward these packets and that forwarding action is to encapsulate the traffic. Now, if we were to clear the map-cache entry and check again… We see that CEF no longer has the specific prefix we are looking for, but it does know to check LSIP eligibility which I’m assuming means check the map-resolver before forwarding the packet natively. That being said, it appears that the router will check with it’s defined map-resolver when attempting to route a packet to a destination for which the router doesn’t have a specific prefix in the forwarding table or a valid map-cache entry. So we’ve covered quite a bit in this post. We built a very basic lab topology which we’ll continue to build upon as we scale the LISP infrastructure out in future posts. Additionally, we scratched the surface of how the actual LISP protocol works. In my next post, we’ll take a look at the LISP map-request and map-reply process in more depth and analyze the LISP process used to actually signal and build the connectivity we configured today. Feedback and comments are more than welcome!
<urn:uuid:10c6d68a-b171-4ec5-a665-e2eb6c9f00df>
CC-MAIN-2022-40
http://www.dasblinkenlichten.com/lisp-a-base-configuration-to-build-on/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00043.warc.gz
en
0.832146
3,352
2.71875
3
Packet Tracer STP Configuration In this post, instead of detaily talk about STP (Spanning Tree Protocol), we will focus on a basic Switching Loop topology and how STP mechanism helps to avoid this Switching Loop. You can DOWNLOAD the Packet Tracer example with .pkt format HERE. Switching Loop is an unwanted problem in a network. Then, what is Switching Loop? Switching Loop is the situation, in which there are two layer 2 path between two layer 2 endpoint(switch, brigde). Switches creates broadcast storms from every port and switch rebroadcast again and again. Because of teh fact that there is no TTL(time to live) mechanism on layer 2, this continues forever. To avoid this unwanted Switching Loops, there are some mechanisms. One of the most common name of this mechanisms is STP(Spanning Tree Protocol). Acording to this protocol, in the switching topology, a Root Bridge is selected. And then the connected port of the switches are classified. The port classification and their meaning are like below: – Root Port : The port to the Root Bridge – Designated Port : The other port thatis not Root Port – Non Designated (Blocked) Port : In a segment, other port than the Designated Port The selection process is done orderly. First Root Bridge is selected, secondly Root Ports on all the switches, then Designated Ports are selected, and lastly the remainning ports become Non-Designated Port, meaning Blocking Port. STP Example on Packet Tracer For STP example with PAcket Tracer, we will use the below switch topology. To understand more detailly let’s check the show screenshots. As we can see above, the addresses are for the Root and the Bridge part. So, Switch0 is selected as Root Bridge. The Root Bridge is selected according to the Bridge ID, The Bridge ID is the MAC address of the Switch. So, the lower one is selected as Root Bridge. This is Switch0. The two port of Switch0 are normally Designated Port. Because all the ports on Root Bridge is always choosen as Designated Port. Both of these ports are in Forwarding State, this means that they are ready to send the traffic. As a recall, as you know there are four states of an STP port. These are: – Blocking (20 seconds) – Listening (15 second) – Learning (15 second) You can also use the following commands to check the spanning-tree information. You can check the other Packet Tracer Examples below: Common Cisco Router Configuration Example on Packet Tracer Router DHCP Configuration Example on Packet Tracer VTP Configuration Example on Packet Tracer VLAN Configuration Example on Packet Tracer STP Configuration Example on Packet Tracer RSTP Configuration with Packet Tracer STP Portfast Configuration with Packet Tracer Inter VLAN Routing Configuration on Packet Tracer Switch Virtual Interface (SVI) Configuration with Packet Tracer BGP Configuration Example on Packet Tracer Port Security Configuration Example on Packet Tracer RIP Configuration Example on Packet Tracer CDP Configuration Example on Packet Tracer OSPF Area Types Example on Packet Tracer (Standard and Backbone Areas) OSPF External Routes Example on Packet Tracer OSPF Area Types Example on Packet Tracer (Stub, NSSA, Totally Stubby, Totally NSSA Areas)
<urn:uuid:322233e0-2ff0-44f3-addd-8b9297ae4185>
CC-MAIN-2022-40
https://ipcisco.com/stp-spanning-tree-protocol-example-on-packet-tracer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00043.warc.gz
en
0.859034
763
3.015625
3
Back in 2015 I explored how our five primary senses—sight, smell, taste, touch, and hearing—were being re-created using sensors. Our senses are how we navigate life: they give us a perspective of the environment around us, and help us interpret the world we live in. But we’re also limited by the sensory world. If a sense is diminished, there may be a way to approximate or enhance its effects (as we do with hearing aids) or rely on another sense in a compensatory fashion (as with braille and sign language). Today, gadgets (and IoT technologies) are being built that work in conjunction with, or completely replace, capabilities of the eyes, ears, nose, tongue, and hands. Sensory receptors can be replaced with micro-chipped devices that perform the same functions as these receptors, attached to or integrated with our bodies. The technology in 2015 was eye-opening (ha-ha), but I wanted to examine how much things have advanced over the past few years. Sight: Remember Google Glass? Before its demise, engineers were working on eyeglasses that connected to automobiles, and provided telemetry displays on the lens. Today you can get a device that beams such information onto the windshield, or displays it using technology built into the glass. We also have technology that lets you ‘see’ through walls. There are 285 million visually impaired people worldwide; and among those, there are 39 million who are totally blind. Sensor-based assistive devices for the blind used to be limited in their capabilities, and typically alerted the user to the presence of obstacles only. Now researchers have developed a wearable assistive device that enables a person to sense their environment and move around more safely. These devices—currently available as a sonar equipped wristband or a radar monitor—use frequency waves and give feedback either with vibrations or audio. There’s more, though bionic eyes are being developed, and blind patients are testing bionic implants that rely on a brain-computer interface. These devices could bring back some vision in patients with certain genetic eye disorders. A camera and an array of electrodes implanted around the eye and the retinal cells can transmit visual information along the optic nerve to the brain, producing patterns of light in a patient’s field of view. The results aren’t perfect, but this does give hope to those with limited or declining vision. Smell: From Smell-O-Vision and Smell-O-Rama back in the 1940s-50s to the little devices that connect to your mobile device to emit a scent, objects designed to create smells have been around for a while—as well as devices designed to “smell” a substance in the air, such as smoke, radon, and carbon-monoxide detectors. Researchers have already developed wearable sensors that can smell diabetes by detecting acetone in the breath, and have figured out how to use a sensor to identify the odor from melanoma. Also, Apple is looking to add sensors to the iPhone and Apple Watch to detect low blood sugar based on body odor. Current electronic noses can smell more effectively than human noses, using an array of gas sensors which selectively overlap, along with a pattern reorganization component. The smell or flavor is perceived as a global fingerprint and generates a signal pattern (a digital value) that’s used for characterizing smells. What would “stink” be to the Nth power? Hearing: According to U.K. firm Wifore Consulting, hearing technology alone will be a $40 billion market by 2020. In 2018, it was $5 billion. We have alerting devices, cochlear implants, and a wearable vest that helps deaf people hear through a series of vibrations. A suite of sensors picks up sounds and vibrates, allowing the wearer to feel rather than hear sounds. The vibrations occur at the exact frequency which the sound made. (Have you ever stood next to a thumping speaker at a concert and felt the sound? You don’t need to hear it to know the bass thump.) What about communicating with those who don’t know sign language? Prototype SignAloud gloves translate the gestures of American Sign Language into spoken English. There was some criticism of the device because of mistranslations, and because the device didn’t capture the nuances of sign language—such as the secondary signals of eyebrow movements, shifts in the signer's body, and motions of the mouth—that help convey meaning and intent. With another glove, users can record and name gestures that correspond with words or phrases, eliminating facial additions; another version can send translations directly to the wearer's smartphone, which can then enunciate the words or phrases. Touch: Back in 2013, researchers developed a flexible sensor able to detect temperature, pressure, and humidity simultaneously, providing a big leap toward imitating the sensing features of the human skin. Elsewhere, the University of Pittsburg Medical Center designed a robotic arm which allows the user to feel touch applied to the robotic fingers. And now we have an artificial nerve! Similar to sensory neurons embedded in our skin, a bendy Band-Aid looking device detects touch, processes the information, and sends it off to other nerves. Rather than zeroes and ones, this nerve uses the same language as a biological nerve and can directly communicate with the body—whether it be the leg of a cockroach or residual nerve endings from an amputated limb. Today’s prosthetics can read a user’s brain activity and move accordingly but imagine the reverse: circuits that transform voltage into electrical pulses. The outputs of this artificial nerve are electrical patterns that the body can understand—the “neural code.” Forget computers, it’s time to go neural! Taste: The Internet of Food is expanding. I've written about smart chopsticks that can detect oils containing unsanitary levels of contamination, a fork that monitors how many bites you take, and a smart cup that counts the amount and calories you drink. The field of chemosensory research focuses on identifying the key receptors expressed by taste cells, and understanding how those receptors send signals to the brain. For instance, researchers are working to develop a better understanding of how sweet and bitter substances attach to their targeted receptors. What we think of as taste often comes from the molecular composition of a food ingredient along with smell. IBM’s Hypertaste uses “electrochemical sensors comprised of pairs of electrodes, each responding to the presence of a combination of molecules by means of a voltage signal…The combined voltage signals of all pairs of electrodes represents the liquid’s fingerprint,” according to the IBM Research Blog. It also needs to be trained just like the human palate! Yet another taste focused system uses sensors and electrodes that can digitally transmit the basic color and sourness of lemonade to a tumbler of water, making it look and taste like the summertime favorite. No matter what, all these technologies require an application, services and some code to properly function. Science-fiction aside, who would have thought that an individual's perspectives would become embedded within software so quickly?
<urn:uuid:9c654b9b-fe33-4a12-be73-86e7872b10e9>
CC-MAIN-2022-40
https://www.f5.com.cn/company/blog/sensors-for-our-five-senses
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00043.warc.gz
en
0.937661
1,484
3.1875
3
If you have come to this page, you probably looked for information about SAP program. Actually, SAP program can mean a lot of things, so let us discuss what people could mean when referring to SAP program. In the most general case, it is possible that they refer to one or several components of the suit of SAP business applications. Our article What is SAP? can be a good starting point for information about SAP. All SAP business applications are essentially software products. For this reason, one can use the term ‘SAP program’ for any of them. What is SAP Program? SAP applications are characterized by high complexity. They are capable of performing a lot of different functions for supporting operations of companies. SAP consists of a large number of programs and sub-programs. SAP program is a sequence of instructions written in the special programming language called ABAP that control behavior of a computer for recording business transactions and performing various analytics functions. When SAP program is being executed, it delivers certain business function to users of an SAP system. For example, SAP has a program for creating and saving sales orders. Business users of the SAP system will capture and save in the database new orders from their clients with the help of this SAP program. Who Uses SAP Software? SAP software is expensive and is only purchased by corporate customers (companies) because it is not of any value for private individuals. As we already wrote, the main function of SAP software is to automate business processes of a given enterprise. Therefore, it makes a lot of sense that nobody wants to install it on their personal computers. Moreover, not every company needs SAP program. For many small and medium size companies the functionality offered by SAP is excessive and too sophisticated for their needs. There are many alternatives of SAP that are cheaper and simpler. These alternatives can be a good fit for small companies that do not have a large budget for IT systems. The true power of SAP software shines when this system is used for managing operations of global multinational companies (e.g., Coca-Cola or Apple). SAP is used in almost every global company because it offers exceptional scalability capabilities and can handle business operations across different continents. How to Work with SAP Program? Usually companies hire SAP consultants for implementing SAP program and training their employees how to work with SAP. Our article How Does SAP Work? provides an easy explanation of the basic principles behind SAP and its architecture. It is a good idea to read this article for general understanding of how SAP software operates. From this article you will learn that SAP has several tiers architecture and all the work with SAP is done through the graphical user interface (GUI). GUI presents various screens to the users where they either can enter data or read some information extracted from the database. It is not very difficult to work with SAP program as a user but some training is definitely required because many things are counterintuitive in SAP and could be confusing. Being an SAP consultant (the person who customizes SAP software and supports it) is more demanding for SAP skills and requires extensive SAP training and hands-on experience.
<urn:uuid:46928899-31e3-4c22-b76b-0277249aae3d>
CC-MAIN-2022-40
https://erproof.com/sap-program/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00043.warc.gz
en
0.955816
624
2.96875
3
In derivatives finance, a derivative contract derives its value solely from the change in the value of an underlying asset. The underlying asset can be a bond, stock, commodity, currency, or even interest rate, which is also referred to as the underlying asset. A derivative contract, unlike a traditional financial contract, is not required to be repaid in cash, but can be “hedged” by an agreement between two parties. Derivatives are used in finance to make transactions more complex by using different instruments. A derivative can be a combination of two or more financial assets (such as a bond, equity, or an interest), and any other financial product that can act as a substitute for one of these. Different derivatives can involve financial products such as interest rates, bond prices, the cost of buying or selling one of the underlying assets, and more. This is often done to help protect against the risks of different financial instruments, or for making the exchange rate of various products more stable. Derivatives are generally considered to be securities and can be purchased from banks, brokers, or other financial entities. Investors who want to hedge their portfolio will use financial instruments that can act as substitutes for one or more of their securities. For example, if a company needs money to finance a new project, one of its options is to take out a loan, which may be covered by the value of a portfolio of derivatives. However, taking out such a loan would also require the repayment of money, if the project doesn’t go as planned. A derivative is a strategy used to circumvent this problem by allowing a business to “hedge” its risk without actually having to hold any assets that could fall under the terms of the original contract. Different types of derivatives can be used to create various financial products, which are typically referred to as financial products. Some examples of financial products are interest-rate swaps, option-pricing curves, foreign currency contracts, equity swaps, financial futures contracts, commodity options, credit risk hedging, and bond hedging. If you are new to derivatives and what they entail, you can read about them and the various types of derivatives available on the web. or read a book on derivatives finance. Many businesses will have more than one derivative product. Some examples of derivatives are stock options, bonds, options, stocks, options on stocks, option securities, option warrants, and options on bonds, financial derivatives, commodity swaps, commodity-linked securities, treasury bills, swap agreements, swap options, and other options, and foreign exchange contracts. There are two types of derivatives, the first are “lock-ins,” which means a contract in which the underlying asset remains the same while the derivative is changing, and the second is called a “call.” The former refers to an asset or liability that has been locked-in or locked, so that the asset or liability cannot be used, and the owner of the derivative is not able to sell it until the price reaches a predetermined amount. and the value has been locked-in. Assets and liabilities that are locked-in include gold, precious metals, commodities (like stocks, options, and bonds), securities, and cash funds held by businesses, governments, and individuals. In addition, there are also other types of securities, such as mortgage debt, corporate debt securities, certificates of deposit, and financial derivatives like swap options. that are not locked-in. All types of assets and liabilities can be turned into financial products. Many of the financial products are either ‘locked-in ‘call,’ and others are either ‘out-of-the-money ‘overexpressed,’ meaning that the price will change when the value of the underlying asset is different from the current market price. Financial products can also be ‘over-the-counter’ over-bought.’ Most derivatives can either ‘over-deliver’ the current value of the underlying asset or make the price higher or lower. Because there are many different types of financial products, it is important to understand what these terms mean before investing in them. It is very common for many companies and individuals to use financial products. There are four main categories of derivative products. They are listed below and are discussed in greater detail in the following paragraphs:
<urn:uuid:15f6f15a-63fb-4f5b-80b6-e0a1e1fe2f36>
CC-MAIN-2022-40
https://globalislamicfinancemagazine.com/what-is-derivatives-finance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00243.warc.gz
en
0.963248
892
3.546875
4
Optimization Techniques In Data Science Data science is a relatively new field that focuses on analyzing large amounts of data via various methods to make the data more intelligible. To comprehend the field of data science, we will need a solid foundation in the following three areas of knowledge: statistics, linear algebra, and optimization. However, in today's article, we will understand Optimization and its various techniques in data science. Let's get started and before we jump directly to the techniques, let us understand what optimization is. Optimization: Brief Introduction The term "optimization" refers to a process or method utilized to determine the most effective solution. The requirement will determine whether its value is the minimum or the maximum for that category. For instance, if a company is required to find a means to make a maximum return on their products, then the condition will be maximum. On the other hand, if the company wants to discover a technique by which their business earns the least product cost on production, then the need will be minimal. Various Optimization Techniques in Data Science Let us now look into the various optimization techniques in data science. Here they are: 1. Gradient Descent The gradient descent approach is currently the method of choice for optimization. The objective of this tactic is to update the variables iteratively in a way that is opposed to how the gradients of the objective function move. This strategy directs the model to discover the target with each update, eventually converging to the optimal value for the objective function. 2. Stochastic Gradient Descent The stochastic gradient descent (SGD) algorithm was developed to solve the high computing cost involved in each iteration of the process when dealing with enormous amounts of data. The equation can be written as follows: Back-propagation refers to taking the values and iteratively modifying them depending on various parameters to lower the loss function. Instead of directly calculating the exact value of the gradient, this approach updates the gradient (theta) by randomly using one sample at each iteration. This is done in place of directly calculating the gradient's value. The stochastic gradient approximates the true gradient that does not consider any outside factors. This optimization approach cuts down on the amount of time needed to do an update when working with a large number of samples, and it also eliminates some redundant computational work. 3. The Technique of Adaptive Learning Rate Learning rate is one of the primary hyperparameters optimized during the process. The model's behavior depends on the learning rate, which determines whether it will pass over particular data sections. When the learning rate is high, there is a possibility that the model will overlook more nuanced parts of the data. If it is low, then it is preferable for applications that take place in the real world. The rate of learning has a significant impact on the Stochastic Gradient Descent (SGD). It can be difficult to determine what the optimal value of the learning rate should be. It was suggested that adaptive approaches could automatically do this tweaking. In Deep Neural Networks (DNNs), the adaptive forms of SGD have seen the widespread application. Methods such as AdaDelta, RMSProp, and Adam all use exponential averaging to offer accurate updates while keeping the process as straightforward as possible. - Weights with a steep gradient will have a slow learning rate and vice versa. This is known as the degrade formula. - RMSprop modifies the Adagrad method so that it slows the rate at which it monotonically decreases the amount learned. - Adam is virtually identical to RMSProp, except that he possesses momentum. - The Alternating Direction Method of Multipliers, sometimes known as ADMM, is an additional alternative to the Stochastic Gradient Descent method (SGD) The learning rate is not predetermined in either the gradient descent or AdaGrad approaches, which is the key distinction between the two. The computation for it makes use of all of the historical gradients that have been accumulated up to the most recent iteration. 4. Method of the Conjugate Gradient The conjugate gradient (CG) method is applied in the process of solving nonlinear optimization issues in addition to large-scale linear systems of equations. A slow convergence speed is characteristic of first-order approaches. On the other hand, the approaches of the second order require a lot of resources. An intermediate algorithm known as conjugate gradient optimization combines the benefits of first-order information with the high-order methods' ability to ensure rapid convergence. 5. Optimization Without the Use of Derivatives Certain optimization problems are almost always solvable by employing a gradient, notwithstanding the possibility that the derivative of the objective function does not exist or is difficult to calculate. This is due to the fundamental characteristics of the issue. Derivative-free optimization enters the picture at this point to help solve the problem. Instead of deriving solutions methodically, it uses a heuristic algorithm that picks approaches that have previously been successful. There are numerous examples of this: genetic algorithms, particle swarm optimization, and classical simulated annealing arithmetic. 6. Zeroth Order Optimisation Recent years have seen the development of derivative-free optimization's successor, zeroth order optimization, which aims to remedy the faults of its predecessor. Methods of optimization that don't use derivatives have trouble scaling up to huge problems and don't provide a way to analyze how quickly they converge on a solution. Advantages of the Zeroth Order include the following: - Ease of implementation that requires only a minor adjustment to the gradient-based methods that are most frequently employed - Approximations of derivatives that are computationally efficient in situations when the derivatives themselves are difficult to compute - convergent rates that are comparable to those of first-order algorithms. With this, we come to an end for today's article. To summarize what we learned, first, we understood optimization in brief and its essence. Then, we understood the different optimization techniques in data science, including Gradient Descent, Stochastic Gradient Descent, Method of Adaptive Learning, and the rest. If you are an enthusiast and want to learn everything related to data and make a career, data science is the best path you could choose. And when speaking of data science, we cannot leave Skillslash behind. It's more of a bridge that connects aspiring data scientists to a successful career path.
<urn:uuid:087c0509-c2bc-4349-8065-7cfb5dcb846f>
CC-MAIN-2022-40
https://www.hostreview.com/blog/220812-optimization-techniques-in-data-science
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00243.warc.gz
en
0.920117
1,329
3.15625
3
An OCCURS clause requires special handling, because the compiler must assign a unique name to each database column. The compiler accomplishes this by appending sequential index numbers to the items named in the OCCURS. For example, if the following were part of a file's description: 03 employee-table occurs 20 times. 05 employee-number pic 9(3) these column names would be created in the COBOL database table, that is, the table of COBOL data created when AcuXDBC performs its translation: employee_number_1 employee_number_2 . . . employee_number_10 employee_number_11 . . . employee_number_20 You can use the SUBTABLE directive to modify this behavior, resulting in the compiler storing just the base name along with the name of the subtable specified by the directive. Note that the hyphens in the COBOL code are translated to underscores in database field names, and the index number is preceded by an underscore.
<urn:uuid:ca00eda2-805a-41e2-8b48-b0496349f313>
CC-MAIN-2022-40
https://www.microfocus.com/documentation/extend-acucobol/1031/extend-Interoperability-Suite/BKXDXDPREPOCCURS.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00243.warc.gz
en
0.855239
213
3.03125
3
Researchers at the University of Illinois Chicago have developed a novel continuous-flow microfluidic device that may help scientists and pharmaceutical companies more effectively study drug compounds and their crystalline shapes and structures, which are key components for drug stability. The device consists of a series of wells in which a drug solution – made up of an active pharmaceutical ingredient, or API, dissolved in solvent, such as water – can be mixed with an anti-solvent in a highly controlled manner. When mixed together, the two solutions allow for the API crystals to form a nucleus and grow. With the device, the rates and ratios at which the drug solution is mixed with the anti-solvent can be altered in parallel by scientists, creating multiple conditions for crystal growth. As the crystals grow in different conditions, data on their growth rates, shapes and structures is gathered and imported into a data network. With the data, scientists can more quickly identify the best conditions for manufacturing the most stable crystalline form with a desirable crystal morphology — a crystal with a plate-like shape instead of a crystal with a rod-like shape — of an API and scale up the crystallization of stable forms. The UIC researchers led by Meenesh Singh, in collaboration with the Enabling Technologies Consortium, have validated the device using L-histidine, the active ingredient in medications that can potentially treat conditions like rheumatoid arthritis, allergic diseases and ulcers. The results are reported in Lab on a Chip, a journal of the Royal Society of Chemistry. “The pharmaceutical industry needs a robust screening system that can accurately determine API polymorphs and crystallization kinetics in a shorter time frame. But most parallel and combinatorial screening systems cannot control the synthesis conditions actively, thereby leading to inaccurate results,” said Singh, UIC assistant professor of chemical engineering at the College of Engineering. “In this paper, we show a blueprint of such a microfluidic device that has parallel-connected micromixers to trap and grow crystals under multiple conditions simultaneously.” In their study, the researchers found that the device was able to screen polymorphs, morphology and growth rates of L-histidine in eight different conditions. The conditions included variations in molar concentration, percentage of ethanol by volume and supersaturation – important variables that influence crystal growth rate. The overall screening time for L-histidine using the multi-well microfluidic device was about 30 minutes, which is at least eight times shorter than a sequential screening process. The researchers also compared the screening results with a conventional device. They found that the conventional device significantly overestimated the fraction of stable form and showed high uncertainty in measured growth rates. “The multi-well microfluidic device paves the way for next-generation microfluidic devices that are amenable to automation for high-throughput screening of crystalline materials,” Singh said. Better screening devices can improve API process development efficiency and enable timely and robust drug manufacturing, he said, which could ultimately lead to safer drugs that cost less money. The research is based on the work performed in the Materials and Systems Engineering Laboratory at UIC in collaboration with and supported by the Enabling Technology Consortium, which is composed of 13 pharmaceutical and biotechnology companies. An application for an international patent for the device has been filed.
<urn:uuid:b0e9fd16-75de-4d1b-8bc6-3e8d8e8eb8e9>
CC-MAIN-2022-40
https://www.ecmconnection.com/doc/uic-research-paves-way-for-next-generation-of-crystalline-material-screening-devices-0001
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00243.warc.gz
en
0.9431
682
2.953125
3
Researchers at the Arizona University claim to have created the world’s first white laser, which could be used for a wide variety of technology related things, including communications and monitors. White lasers were invented back in the 1960s, but its application was limited – technology was out of its reach. However, according to a paper published in Nature, all of that could change. “[Realising] such a device has been challenging because of intrinsic difficulties in achieving epitaxial growth of the mismatched materials required for different colour emission”, says Nature (opens in new tab). The paper had been submitted in October, but required extensive peer review before publication, Computing (opens in new tab)says. The researchers, Cun-Zheng Ning, professor in the School of Electrical, Computer and Energy Engineering, together with doctoral students Fan Fan, Sunay Turkdogan, Zhicheng Liu and David Shelhammer, claim to have created "a novel nanosheet - a thin layer of semiconductor that measures roughly one-fifth of the thickness of human hair, with a thickness that is roughly one-thousandth of the thickness of human hair - with three parallel segments, each supporting laser action in one of three elementary colours. "The device is capable of lasing in any visible colour, completely tuneable from red, green to blue, or any colour in between. When the total field is collected, a white colour emerges," claims the university. "Lasers are brighter, more energy efficient, and can potentially provide more accurate and vivid colours for displays like computer screens and televisions. Ning's group has already shown that their structures could cover as much as 70 per cent more colours than the current display industry standard.”
<urn:uuid:6675af81-481a-4fa9-963a-dfe161c7f81b>
CC-MAIN-2022-40
https://www.itproportal.com/2015/07/30/your-next-monitor-could-be-made-from-lasers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00243.warc.gz
en
0.963621
359
3.78125
4
Criminals will always find ways to gain access to PCs without the owner’s permission. They will then use the hacked computers to their favor. One recent case was a Trojan virus that infected multiple computers then used them to mine Bitcoins, without the PC owners knowing what is Bitcoin is a digital currency that can be traded in the digital market. Its value may vary from time to time so traders invest heavily on it, then selling them once the values go up. Bitcoins, being a digital currency, can be used to purchase practically anything online. Bitcoin mining is the method of processing the transactions that are happening in the Bitcoin network. This secures the information in blocks, hence the name of the technology Blockchain. Doing this requires the expense of certain resources such as hardware, energy, time, and Bitcoin miners are then paid for the “mining” that they do for the network and this is the part where hackers invade other people’s PCs to make their work faster and easier. Hackers simply create or utilize malicious programs and spread them through online conversations like Skype, emails, and even a simple website browsing. This unethical method of Bitcoin mining gives hackers the opportunity to earn money almost automatically. With the Bitcoin’s rising value in the digital market, it becomes even more tempting for crooks to illegally mine more Bitcoins. They will do their best to hack your system just like what they did to millions of computers around the world. How does it work? The Trojan virus that was discovered just recently spread through Skype then forces the infected computers to mine Bitcoins for the hacker. This method can earn the hacker more Bitcoins (and eventually real money) faster and easier. The drawback is that the infected computer receives too much usage that can damage it. Users will normally notice that their computers will show lags and the CPU usage will increase tremendously. Once the Trojan virus gets installed on somebody’s computer, the hacker gets control to your computer, and he can now do almost anything he wants. Kaspersky Lab says that Trojan is not only used for mining Bitcoins unethically but also for other immoral doings that is why it is highly advised to be careful in receiving internet messages and downloading programs. It is also crucial to have reliable antivirus software to protect you from cyber attacks. This is not the first case of a Trojan virus invading the Bitcoin mining sector. Malicious programs are not known to attack personal computers and perform Bitcoin mining only, but they also steal the Bitcoin itself. Two years ago, a Trojan virus called Badminer was spotted by Symantec. This Trojan virus gained access illegally to obtain information from the graphical processing units which will be used to produce Bitcoins. Be extra careful There are numerous solutions out there that can help you fight and prevent cyber attacks. All of them are designed to protect you from damages and information leaks. The first and cheapest solution you can take is to just be careful in browsing websites, receiving and opening emails or personal messages, and downloading files and programs. Doing this simply removes you from the chances of being attacked by hackers and your information being stolen. There will always be websites in the internet that will lure you to download their program without you knowing that it already contains a Trojan virus. Do not easily fall for offers that are too good to be true because they normally contain malicious programs. Next is getting a reliable and effective antivirus. There are many antivirus programs out there that you can get for free, offering basic protection. If you want to get additional protection and services, you can always get the premium version. You can also choose the package you want if your budget is limited. Another way to protect your PC from cyber attacks is using a VPN. VPN may not directly kill or quarantine a virus just like what an antivirus program can do but it will create a private network that cannot be accessed easily by non-authorized people. You would need to access multiple security levels before getting into the system. Give this a quick read to see how free VPNs may Criminals and intruders will always be around to invade your privacy and cause havoc for their own good. Being always on guard can keep your files, records, programs, and money safe from cyber attacks. In this era where currencies are now digital, criminals are also going cyber just to steal. Consider the solutions mentioned above to keep your system protected.
<urn:uuid:68a23535-ef3c-460b-8eae-7ca8feb54b9b>
CC-MAIN-2022-40
https://informationsecuritybuzz.com/articles/trojan-virus-can-turn-pc-bitcoin-miner-without-knowing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00243.warc.gz
en
0.929894
965
2.671875
3
Challenge: Take computing performance to a higher level. The supercomputing capabilities at NASA Ames needed to be upgraded to accommodate additional equipment, but the existing building didn’t have enough power. Upgrading the current space with additional power and cooling infrastructure would had been very expensive and the replacement of legacy equipment still remained useable. High performance computers also use massive amounts of energy to cool heat-generating hardware components, driving high energy costs. Solution: The right blueprint for transformation. GDIT developed an alternate approach for cooling the HPC system, using evaporative cooling to take advantage of the local climate of Northern California. The new supercomputing facility is modular, consisting of small, segmented units fitted with high performance computers, rather than a traditional data center. Refined in GDIT’s HPC Center of Excellence, this site-specific approach increased power capacity while lowering energy costs to cool the equipment. Results: Supercomputing that’s super fast to deploy and energy efficient. Through this collaboration, NASA Ames has gained the ability to fully cool its supercomputing hardware 38 percent more efficiently than the industry standard, resulting in significant cost savings. And virtually all of the energy consumed is used for computing, instead of a significant portion being diverted for cooling. The energy savings and space savings may allow the agency to install new equipment in the future that could be used for scientific breakthroughs. sq. ft. foundation, about the size of a football field modular data centers can be supported megawatts of HPC compute equipment can be supported lower energy usage than industry average for supercomputing facilities
<urn:uuid:977f0b70-e221-45bd-98e5-7551a445ac8d>
CC-MAIN-2022-40
https://www.gdit.com/perspectives/our-stories/supercomputing-facility-saves-energy-costs-for-nasa/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00243.warc.gz
en
0.92344
335
3.671875
4
What Is a Fiber Optic Splitter? In today's optical network topologies, the advent of fiber optic splitter contributes to helping users maximize the performance of optical network circuits. Fiber optic splitter, also referred to as optical splitter, or beam splitter, is an integrated waveguide optical power distribution device that can split an incident light beam into two or more light beams, and vice versa, containing multiple input and output ends. Optical splitter has played an important role in passive optical networks (like EPON, GPON, BPON, FTTX, FTTH, etc.) by allowing a single PON interface to be shared among many subscribers. How Does Fiber Optic Splitter Work? Generally speaking, when the light signal transmits in a single mode fiber, the light energy cannot be entirely concentrated in the fiber core. A small amount of energy will be spread through the cladding of the fiber. That is to say, if two fibers are close enough to each other, the transmitting light in an optical fiber can enter into another optical fiber. Therefore, the reallocation technique of optical signal can be achieved in multiple fibers, which is how fiber optic splitter comes into being. Specifically speaking, the passive optical splitter can split, or separate, an incident light beam into several light beams at a certain ratio. The 1x4 split configuration presented below is the basic structure: separating an incident light beam from a single input fiber cable into four light beams and transmitting them through four individual output fiber cables. For instance, if the input fiber optic cable carries 1000 Mbps bandwidth, each user at the end of output fiber cables can use the network with 250 Mbps bandwidth. The optical splitter with 2x64 split configurations is a little bit more complicated than the 1x4 split configurations. There are two input terminals and sixty-four output terminals in the optical splitter in 2x64 split configurations. Its function is to split two incident light beams from two individual input fiber cables into sixty-four light beams and transmit them through sixty-four light individual output fiber cables. With the rapid growth of FTTx worldwide, the requirement for larger split configurations in networks has increased to serve mass subscribers. Fiber Optic Splitter Types Classified by Package Style The optical splitter can be terminated with different forms of connectors, and the primary package could be box type or stainless tube type. Fiber optic splitter box is usually used with 2mm or 3mm outer diameter cable, while the other is normally used in combination with 0.9mm outer diameter cables. Besides, it has variously different split configurations, such as 1x2, 1x8, 2x32, 2x64, etc. Classified by Transmission Medium According to the different transmission mediums, there are single mode optical splitter and multimode optical splitter. The multimode optical splitter implies that the fiber is optimized for 850nm and 1310nm operation, whereas the single mode one means that the fiber is optimized for 1310nm and 1550nm operation. Besides, based on working wavelength differences, there are single window and dual window optical splitters—the former is to use one working wavelength, while the latter fiber optic splitter is with two working wavelengths. Classified by Manufacturing Technique FBT splitter is based on traditional technology to weld several fibers together from the side of the fiber, featuring lower costs. PLC splitter is based on planar lightwave circuit technology, which is available in a variety of split ratios, including 1:4, 1:8, 1:16, 1:32, 1:64, etc, and can be divided into several types such as bare PLC splitter, blockless PLC splitter, ABS splitter, LGX box splitter, fanout PLC splitter, mini plug-in type PLC splitter, etc. Check the following PLC Splitter vs FBT Splitter Comparison Chart: |Type||PLC Splitter||FBT Coupler Splitters| |Operating Wavelength||1260nm-1650nm (full wavelength)||850nm, 1310nm, 1490nm and 1550nm| |Splitter Ratios||Equal splitter ratios for all branches||Splitter ratios can be customized| |Performance||Good for all splits, high level of reliability and stability||Up to 1:8 (can be larger with higher failure rate)| |Input/Output||One or two inputs with an output maximum of 64 fibers||One or two inputs with an output maximum of 32 fibers| |Housing||Bare, Blockless, ABS module, LGX Box, Mini Plug-in Type, 1U Rack Mount||Bare, Blockless, ABS module| Fiber Optic Splitter Application in PON Networks Optical splitters, enabling the signal on the optical fiber to be distributed between two or more optical fibers with different separation configurations (1×N or M×N), have been widely used in PON networks. FTTH is one of the common application scenarios. A typical FTTH architecture is: Optical Line Terminal (OLT) located in the central office; Optical Network Unit (ONU) situated at the user end; Optical Distribution Network (ODN) settled between the previous two. An optical splitter is often used in the ODN to help multiple end-users share a PON interface. Point-to-multipoint FTTH network deployment can be further divided into the centralized (single-stage) or cascaded (multi-stage) splitter configurations in the distribution portion of the FTTH network. A centralized splitter configuration generally uses a combined split ratio of 1:64, with a 1:2 splitter in the central office, and a 1:32 in an outside plant (OSP) enclosure such as a cabinet. A cascaded or distributed splitter configuration normally has no splitters in the central office. The OLT port is connected/spliced directly to an outside plant fiber. The first level of splitting (1:4 or 1:8) is installed in a closure, not far from the central office; the second level of splitters (1:8 or 1:16) is situated at terminal boxes, close to the customer premises. Centralized Splitting vs Distributed Splitting in PON Based FTTH Networks will further illustrate these two splitting methods that adopt fiber optic splitters. How to Choose the Right Fiber Optic Splitter? In general, a superior fiber optic splitter needs to pass a series of rigorous tests. The performance indicators that will affect the fiber optic splitter are as follows: Insertion loss: Refers to the dB of each output relative to the input optical loss. Normally, the smaller the insertion loss value, the better the performance of the splitter. Return loss: Also known as reflection loss, refers to the power loss of an optical signal that is returned or reflected due to discontinuities in the fiber or transmission line. Normally, the larger the return loss, the better. Splitting ratio: Defined as the output power of the splitter output port in the system application, which is related to the wavelength of the transmitted light. Isolation: Indicates a light path optical splitter to other optical paths of the optical signal isolation. Besides, uniformity, directivity, and PDL polarization loss are also crucial parameters that affect the performance of the beam splitter. For the specific selections, FBT and PLC are the two main choices for the majority of users. The differences between FBT splitter vs PLC splitter normally lie in operating wavelength, splitting ratio, asymmetric attenuation per branch, failure rate, etc. Roughly speaking, the FBT splitter is regarded as a cost-effective solution. PLC splitter featuring good flexibility, high stability, low failure rate, and wider temperature ranges can be used in high-density applications. For the expenses, the costs of PLC splitters are generally higher than the FBT splitter owing to the complicated manufacturing technology. In specific configuration scenarios, split configurations below 1×4 are advised to use FBT splitter, while split configurations above 1×8 are recommended for PLC splitters. For a single or dual wavelength transmission, FBT splitter can definitely save money. For PON broadband transmission, PLC splitter is a better choice considering future expansion and monitoring needs. Fiber optic splitters enable a signal on an optical fiber to be distributed among two or more fibers. Since splitters contain no electronics nor require power, they are an integral component and widely used in most fiber-optic networks. Thus, choosing fiber optic splitters to help increase the efficient use of optical infrastructure is key to developing a network architecture that will last well into the future.
<urn:uuid:a3735a18-4680-4136-961d-7246835f029f>
CC-MAIN-2022-40
https://community.fs.com/blog/what-is-a-fiber-optic-splitter-2.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00243.warc.gz
en
0.895559
1,838
3.5625
4
What is Db2 for z/OS? Db2 for z/OS is a relational database management system that runs on the IBM® Z platform. A relational database is a database in which all of the data is logically contained in tables. These databases are organized according to the relational model. In a relational database, referential integrity ensures data integrity by enforcing rules with referential constraints, check constraints, and triggers. You can rely on constraints and triggers to ensure the integrity and validity of your data, rather than relying on individual applications to do that work. Db2 is software that you can use to manage relational databases. IBM offers a family of Db2 products that run on a range of operating systems, including Linux®, UNIX, Windows, IBM i, VSE, VM, and z/OS. z/OS is the main operating system for IBM's most robust hardware platform, IBM Z. Db2 for z/OS is the enterprise data server for IBM Z. It manages core business data across an enterprise and supports key business applications. Db2 for z/OS supports thousands of customers and millions of users. It remains continuously available, scalable, and highly secure. With Db2 for z/OS and other Db2 family products, you can define and manipulate your data by using structured query language (SQL). SQL is the standard language for accessing data in relational databases. This information might sometimes also refer to Db2 12 for z/OS as Version 12. Unless a different product, platform, or release is specified, assume that they refer to the same product. IBM rebranded DB2® to Db2, and Db2 for z/OS is the new name of the offering that was previously known as DB2 for z/OS. For more information, see Revised naming for IBM Db2 family products on IBM z/OS platform. As a result, you might sometimes still see references to the original names, such as DB2 for z/OS and DB2, in different IBM web pages and documents. If the PID, Entitlement Entity, version, modification, and release information match, assume that they refer to the same product.
<urn:uuid:103f197b-f52d-4e4a-8d73-2f95dc251d6f>
CC-MAIN-2022-40
https://www.ibm.com/docs/en/db2-for-zos/12?topic=getting-started-db2-zos
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00243.warc.gz
en
0.915678
462
2.734375
3
Leidos team helps NASA manage climate risks A NASA ground station on Ross Island, Antarctica, used for tracking and communicating climate data via satellite. Photo: Getty Images Why you should know: One of NASA’s top priorities, as described in the plan published in October, is to ensure continued access to space by managing climate-related risks to launch sites and equipment. Erik Tucker, the Leidos Project Manager acknowledged in the report, said agencies like NASA are already facing more frequent and extreme weather events, threatening mission activities and leading to increased maintenance and repair costs. Leidos impact: Tucker and his team helped NASA1 facilitate collaboration among stakeholders across the agency, including Earth system scientists, facilities managers, emergency management staff, natural resource managers and human capital specialists. The resulting plan provides a holistic approach to mission, operations, workforce and asset protection. Tucker said NASA’s direct experience studying climate change and risk-focused culture provides a unique and powerful perspective in its planning. From the source: “You can’t fix what you don’t measure,” Tucker explains. “NASA is one of the primary agencies that develops global climate data and research other organizations worldwide rely on to assess their own climate vulnerabilities. One way NASA is enhancing its own resilience is by expanding the use of agency-generated climate science to inform internal planning.” Starting in 2022, all federal agencies are required to implement and provide annual progress updates on their respective plans. “Finding adaptation solutions for the climate crisis is a growing priority for all of us,” Tucker said. “Make no mistake, climate is already impacting the entire federal government. There’s no other way forward but to adapt in the face of changing conditions.” For a full list of federal climate adaptation plans, visit sustainability.gov. Please contact the Leidos media relations team for more information. 1This work was conducted by Leidos under subcontractor support to Herndon Solutions Group on the NASA Environmental and Medical Contract (NEMCON).
<urn:uuid:2f7b6cf6-62b5-4fb0-9f44-5b121c499b72>
CC-MAIN-2022-40
https://www.leidos.com/insights/leidos-team-helps-nasa-manage-climate-risks
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00243.warc.gz
en
0.925773
431
2.953125
3
At the time of writing, the continuing global spread of COVID-19 is resulting in huge human and economic cost. The damage to lives and livelihoods is vast, and the race for a vaccine is well and truly on – according to the World Health Organization, as of late April 2020, there were five candidate vaccines in clinical evaluation and 71 in preclinical evaluations. While the vaccine hunt continues apace, organizations the world over are having to find ways to protect their employees, maintain operational continuity, address significant disruption to supply chains, and all while looking at longer-term strategies to avoid future adversities. The impact on life sciences, and particularly the pharmaceutical industry, has been particularly hard felt because it extends through the entire global supply chain ecosystem, and because of an overreliance on a select few suppliers (particularly those in low-cost territories). In fact, EFCG estimates that upward of 80% of chemicals used to make drugs sold in Europe now originate in China and India. But unforeseen events can provide valuable lessons for organizations and their supply chains. For example, the 2014 Ebola crisis and Hurricane Maria in Puerto Rico in 2017 prompted Johnson & Johnson to maintain key inventory at major distribution centers away from high-risk areas and to work with its suppliers to mitigate the impact of future crises. There is particular pressure on biopharma, which is charged with developing, producing and distributing COVID-19 therapies and, ultimately, the vaccine. Biopharma will be responsible for facilitating one of the biggest distribution programs in human history, in an attempt to inoculate every human on the planet. Supply chains will, of course, be vital in facilitating this. This will require an astounding effort in organization, collaboration, logistics, and technical coordination. But the disease has already impacted, and called into question, the very system (the life sciences supply chain) that will be vital in distributing an eventual vaccine. Life sciences organizations, in general, are facing a plethora of challenges – not only to discover and make treatments and vaccines available, but also to maintain supply chains for existing treatments and services. But quarantine and physical distancing measures within the workplace can impact the operation of factories and infrastructure. US census data shows that about 20% of the industry workforce in biopharma is engaged in activities that are vital in keeping sites up and running. These activities include material and inventory handling, production processes, and testing and maintenance. Business continuity plans must be in place to prevent operations from grinding to a complete halt while protecting the safety of employees. A recent report by The Capgemini Research Institute, “The great supply chain shock: COVID-19 response and recovery,” highlights two examples of quick thinking and frugal innovation by pharma giants Novartis and Pfizer, who have used micro-factories no bigger than a shipping container to produce drugs faster and more cheaply. COVID-19 has shown us that we are all very closely connected; our societies, our economies, each other. To fight the virus, our supply chains must become more connected too – they must branch out like a hyperconnected network. Supply chains can no longer be so linear and rigid, too dependent on certain links that are vulnerable to disruption. A more agile, nuanced, and plural branching out in terms of partners, technology, and touchpoints, fed by data, is needed – an intricate web. If you cut a snake in half, it stops moving– it perishes. But if you cut a tentacle off an octopus, it can continue to live, adjust, (it has seven others anyway), and can even grow back the missing tentacle stronger than it was before. To me, it’s clear that supply chains across all sectors will need more end-to-end visibility and resource prioritization to become more flexible. Data and AI fed by touchpoints enable a monitoring system to ensure effective execution of risk mitigation strategies. For leaders to act on facts and make “now-based” decisions and mitigate risk, end-to-end value chain visibility and insight–based scenario planning are essential. AI and data can also build simulation models to predict demand and supplier lead times. It can bring into precise focus your operations and value chain risk and help anticipate future disruption to any part of the supply network. The Capgemini Research Institute’s recent report, “The great supply chain shock: COVID-19 response and recovery,” gives a number of recommendations to organizations in transitioning out of the recovery phase to gain supply chain resilience and flexibility, including: - Reassess customer demand – Sales & Operations planning teams have to be able to build a more complete picture of future consumer demand by simulating recovery scenarios before finalizing production and logistics decisions. It’s important to assess to what degree customer buying preferences and preferred sales channels have shifted. - Improve forecasts – AI and analytics can help to develop new forecast models based on the latest customer sales and market data. - Align operations – Drive transformation through the digitization of supply chains, the mapping of supply networks, and rethinking supply chain strategy. Organizations with sophisticated digitalization in their supply chains are able to respond more quickly. To become more flexible and resilient, life sciences supply networks should be infused with Industry 4.0 capabilities, including intelligent analytics, AI, automation, and IoT. These technologies will ensure transparency across the global supply chain and enable more informed decisions fed by real-time data. This will be crucial in meeting higher demand for pharmaceuticals in the short term while, in the longer term, enabling the smooth and expedient rollout of the vaccine, whose distribution will have to be coordinated across multiple geographies simultaneously. The actions of life sciences companies over the coming tens of months will be much scrutinized. This will also be the case after the pandemic has passed. However, I believe that despite the hardship, the pandemic could represent to the life sciences supply chain – not only an urgent challenge, but an opportunity to innovate and improve. Just like an immune system responding to a vaccine, supply chains worldwide are being presented with a chance to learn, grow stronger, and become impervious to future disruption. Visit How to accelerate a healthy recovery in life sciences to learn how we help life sciences organizations respond to the current events.
<urn:uuid:76cb3f2b-8fbc-46c2-a0a8-4efee4464263>
CC-MAIN-2022-40
https://www.capgemini.com/au-en/2020/07/a-shot-in-the-arm-for-the-life-sciences-supply-chain/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00243.warc.gz
en
0.932695
1,297
2.625
3
Careless disposal of PII is subject to harsh legal penalties in many countries. Similarly, companies who do not have regimented processes for the retirement of technology and the resident data are also at risk of a loss of reputation, trust, and revenue. In the United States, there are several major laws that businesses need to remain aware of: Privacy Act of 1974 The Privacy Act of 1974 holds certain stipulations for the rights and restrictions on data when it is held by government agencies. It governs the collection, maintenance, use, and dissemination. Essentially, US federal workers must not wilfully disclose information to anyone not entitled to receive it. The Fair and Accurate Credit Transactions Act (FACTA) This law was passed in 2003 and its purpose is to enhance customer protections, mainly those that protect against identity theft. While it meant that the amount of PII required from customers increased, it also gave more protection to that PII when gathered. Penalties for violations of FACTA vary, but wilful violations could amount to penalties within the billions. Gramm-Leach-Bliley Act (GLBA) Also known as the Financial Modernization Act, this law was passed in 1999. It requires US companies to explain how they share and protect personal information and protects financial non-public personal information (NPI). Amongst other specifics, it means that businesses apply special protections to private data in accordance with an information security plan. Punishments for GLBA non-compliance, once proven, are severe. Individuals found in violation face fines of $10,000 for each violation discovered. Organizations face $100,000 for each violation. Health Insurance Portability and Accountability Act (HIPAA) HIPAA came into force in 1966 and covers information regarding health status, care, or payment, setting standards for covered parties and business associates. It only applies to protected health information (PHI). Any organization that houses this kind of data must protect it - during use or disposal. Jail terms are likely and restitution may also need to be paid to affected individuals. However, the penalties brought forth depend on whether the breach was carried out with intent or not and the degree of negligence involved. California Consumer Privacy Act (CCPA) At least 35 states implement their own laws regarding data protection and the CCPA is a well-known one. It has actually influenced other states to create similar laws, which have been implemented in areas such as Maryland, Rhode Island, and Massachusetts among others. Passed in early 2020, the CCPA actually incorporates the foundational principles of GDPR, mirroring its focus on data protection and privacy requirements. Penalties for violations of the CCPA vary, with fines of $2,500 for individual breaches and $7,500 for wilful individual breaches. Similarly, both the Federal Trade Commission (FTC) and the Health Insurance Portability and Accountability Act (HIPAA) also require the proper disposition of information.
<urn:uuid:3f60af73-212f-4f01-bf45-6cf18424f860>
CC-MAIN-2022-40
https://www.aptosolutions.com/what-is-itad
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00443.warc.gz
en
0.953165
609
2.609375
3
Guide: What Is Low-Code? Low-code platforms are development tools that provide software that can be used by programmers to create apps visually, as opposed to using more familiar coding platforms. These platforms open up a new avenue for businesses to remedy basic issues in the workplace and provide a better customer experience. Download the What is Low-Code? guide to learn more about: - Differences between low-code and no-code - Benefits of low-code applications - Necessary skills to successfully implement low-code
<urn:uuid:b820943f-ddfd-4bed-ac9a-2c543f0e6cbc>
CC-MAIN-2022-40
https://www.impactmybiz.com/ebook/guide-what-is-low-code/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00443.warc.gz
en
0.930659
108
2.734375
3
Hacking has become a threat far bigger than most people think. November 8, 2017 By Scott Lindley Many hackers are teenagers in basements just trying to get into any system that they can. It’s referred to as “opportunistic hacking.” And, when they get in, they like to change code that will create mayhem. That should not give you rest. Apple CEO Tim Cook warns, “The hacking community aren’t hackers anymore; they are sophisticated enterprises.” IPVM recently reported how a $30 copier easily spoofed a popular proximity card. The column stated that the copier “used to copy the cards works much the same way as normal card readers, with transceiver coil, power supply, IC chip, buzzer and even LEDs components shared by both. Given the principal operation of contactless card readers, the copier excites the coil and delivers power wirelessly to the card, which then momentarily stores energy and then uses it to broadcast card details back to the copier.” Interestingly, some security people don’t seem to secure their own security equipment. Users are learning that today’s IP-enabled contactless card readers and wireless cameras have become favourite targets of hackers. Unsecured, they provide irresistible backdoors. Thus, new specifications are needed for electronic access control projects. For instance, were you aware that by simply putting the default installer code in a disarmed state, it can be used to view the user codes including the master code or to change or create a new code? Therefore, if a potential unauthorized person gains access to a panel in the unarmed state, using the installer code gives that person access to all installed hardware and will even allow creation of a new user code or change of a current user code. This code then trumps the master code or other user codes. So, if the installer does not change the default code, the user might as well be giving a user code to everyone. Less than 30 seconds is all it takes to view the master, all other user codes, or even create a new one. Yes, you say, but what if the installer says that they don’t have the default installer code? Unfortunately, too often, these codes can be found online with a simple Google search. And, of course, once inside the system, the hacker can also get access to the rest of the computer system. Sometimes the problem is within the software itself. Oftentimes, the default code is embedded in the app to provide a mechanism to let the device still be managed even if the administrator’s custom pass code is lost. However, it is a poor developer practice to embed passwords into an app’s shipped code, especially unencrypted. Adding to the problem is that Wiegand, the industry standard over-the-air protocol commonly used to communicate credential data from a contactless access credential to an electronic access reader, is no longer inherently secure due to its original obscure and non-standard nature. For this reason, options are now available that can be added to the readers. The first is MAXSecure, which provides a higher-security handshake, or code, between the proximity or smart card, tag and reader to help ensure that readers will only accept information from specially coded credentials. The second is Valid ID, a relatively new anti-tamper feature available with contactless smartcard readers, cards and tags. Embedded, it can add an additional layer of authentication assurance to NXP’s MIFARE DESFire EV1 smartcard platform, operating independently, in addition to, and above the significant standard level of security that DESFire EV1 delivers. Valid ID lets a smartcard reader effectively help verify that the sensitive access control data programmed to a card or tag is indeed genuine and not counterfeit. Role of the access control provider First of all, when considering any security application, it is critical that the access control provider needs to realistically assess the threat of a hack to a facility. For example, if access control is being used merely as a convenience to the alternative of using physical keys, chances are the end user has a reduced risk of being hacked. However, if the end user is using their access system as an element to their overall security system because of a perceived or imminent threat due to the nature of what they do, produce or house at their facility, they may indeed be at higher risk and they should consider methods to mitigate the risk of a hack. Here are a few steps that may be considered in reducing the danger of hacking into a Wiegand-based system. • Install only readers that are fully potted. Potting is a hard epoxy seal that does not allow access to the reader’s internal electronics from the unsecured side of the building. An immediate upgrading is recommended for readers that fail to meet this standard. • Make certain the reader’s mounting screws are always hidden from normal view. Make use of security screws whenever possible. • Embed contactless readers inside the wall, not simply on the outside, effectively hiding them from view. Or, if that is not possible and physical tampering remains an issue, consider upgrading the site to readers that provide both ballistic and vandal resistance. • Make use of reader cable with a continuous overall foil shield tied to a solid earth ground in a single location. This helps block signals from being induced onto the individual conductors making up the cable as well as those signals that may be gained from the reader cable. • Deploy readers with a pig tail, not a connector. Use extended length pig tails to assure that connections are not made immediately behind the reader. • Run reader cabling through a metal conduit, securing it from the outside world. Make certain the metal conduit is tied to an earth ground. • Use the “card present” line commonly available on many of today’s access control readers. This signal line lets the access control panel know when the reader is transmitting data. • Provide credentials other than those formatted in the open, industry standard 26-bit Wiegand. Not only is the 26-bit Wiegand format available for open use but many of the codes have been duplicated multiple times. Alternatives can include ABA Track II, OSDP, RS485 and TCP/IP. • Offer the customer cards that can be printed and used as photo badges, which are much less likely to be shared. • Employ a custom format with controls in-place to govern duplication. • Offer a smart card solution that employs sophisticated cryptographic security techniques, such as AES 128-bit. • Make available non-traditional credentials with an anti-playback routine, such as transmitters instead of standard cards and tags. Long range transmitters offer the additional benefit of not requiring a reader be installed on the unsecure side of the door. Instead they can be installed in a secure location, such as the security closet, perhaps up to 200 feet away. • Offer a highly proprietary contactless smartcard technology such as Legic advant. • Provide two-factor readers including contactless and PIN technologies. Suggest users roll PINs on a regular basis. If required, offer a third factor, normally a biometric technology (face, fingerprint, voice, vein, hand, etc.). • Assure additional security system components are available. Such systems can also play a significant role in reducing the likelihood of an attack as well as mitigating the impact of a hack attack should it occur. • Intrusion: Should the access control system be hacked and grant entry to a wrong individual, have a burglar alarm system in place to detect and annunciate the intrusion. • Video: If the access control system is hacked, granting entry to an unauthorized individual, have a video system in place to detect, record and annunciate the intrusion. • Guards: If the system is hacked and intruders are let in, make sure that guards in the control room as well as those performing a regular tour receive an alert notifying them that someone has physically tampered with the access control system. We must always stay one step in front of the bad guys. With the proper tools, any of these assaults can be defended. Adding encryption into an access control system One aspect of securing a card’s information is to make the internal numbers unusable; they must be encrypted. To read them, the system needs access to a secret key or password that provides decryption. Modern encryption algorithms play a vital role in assuring data security.Today,13.56 MHz smart cards are used to provide increased security compared to 125 KHz proximity cards. One of the first terms you will discover in learning about smart cards is “MIFARE,” a technology from NXP Semiconductors. MIFARE enables 2-way communications between the card and the reader. The newest of the MIFARE standards, DESFire EV1, includes a cryptographic module on the card itself to add an additional layer of encryption to the card / reader transaction. This is amongst the highest standard of card security currently available. MIFARE DESFire EV1 protection is therefore ideal for sales to providers wanting to use secure multi-application smart cards in access management, public transportation schemes or closed-loop e-payment applications. Don’t let them hack the system you specify Protecting your customers’ organization(s) from hackers is imperative. The threats have grown to the point where they now include sophisticated government backed entities as well as teenaged mischief makers. With knowledge of what hackers seek and the remedies available to thwart them, anti-hacking specifications are now mandatory. For additional help, ask your manufacturer to provide you with their cybersecurity vulnerability checklist. Scott Lindley is the president of Farpointe Data. Print this page
<urn:uuid:6833be4a-6ace-4eb4-96c0-21c4f854acfe>
CC-MAIN-2022-40
https://www.sptnews.ca/prevent-access-control-hacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00443.warc.gz
en
0.92933
2,026
2.8125
3
According to data presented by tradingplatforms.com, Bitcoin (BTC) now devours 143 terawatt-hours (TWh) of power annually, eight times higher than Google and Facebook combined, and more than some medium-sized European countries, raising concerns among conservationists. Comparing Bitcoin’s electricity demands to those of individual tech firms highlights the stark contrast in their consumption volumes. For instance, Google, the world’s largest search engine, uses only 12 TWh, a twelfth of BTC’s energy use. Likewise, Facebook’s energy requirements pale significantly in comparison to BTC’s. The social media giant requires only 5 TWh of electricity for its functions, a mere 3.5 percent of what the cryptocurrency requires. BTC also trumps the energy needs of whole countries. For example, its energy demands beat that of Norway and Switzerland. Whereas the former’s energy demands top 124 TWh, the latter’s stand at 56 TWh, 19 and 86 TWh short of the currency’s demand respectively. The figures come into sharp perspective when compared with data from other sectors too. For example, global data centres consume 205 TWh yearly. This means that, comparatively, BTC alone consumes 70 per cent of this figure. In a blog post on tradingplatforms.com, Edith Reads said: “The statistics paint a grim picture. The amount of energy that BTC consumes to perform transactions uses more than what some whole countries do at the moment, and this figure is bound to rise because of the asset’s growing mining difficulty that demands more power to execute.” Bitcoin’s bad rap, Reads explained, stems mainly from its technology. It uses a proof of work (PoW) consensus mechanism for validating transactions. PoW is energy and hardware-intensive, releasing so much waste to the environment. Additionally, a significant proportion of its mining activity uses non-renewable energy resources. These resources, including coal, are affordable and therefore attractive to many miners, but leave a huge carbon footprint on the environment. But the Bitcoin sector hasn’t taken the criticism lying down, concludes Reads. Its proponents have been quick to point out that other mundane activities leave a bigger footprint than the coin. Such activities include the traditional financial system, and the amount of electricity households waste on idling electronic equipment. The sector is also attempting to pursue cleaner and greener mining. Part of that shift involves adopting renewable energy in validating transactions, including solar, wind and geothermal power. Others have called for a change to a less energy-intense consensus, like proof of stake.
<urn:uuid:39029321-a144-4076-83f2-54f231eb3eb6>
CC-MAIN-2022-40
https://digitalinfranetwork.com/news/bitcoin-consumes-eight-times-more-power-than-google-and-facebook-combined/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00443.warc.gz
en
0.920437
553
2.796875
3
The Great Barrier Reef, which is located off the coast of Queensland, Australia, has been a concern for conservationists for some time now, given the effects of climate change. The rising temperature of the Earth causes a phenomenon called coral bleaching, which is bad news for marine life. Therefore, not-for-profit organisations like Citizens of the Great Barrier Reef are doubling efforts to save one of the seven natural wonders of the world. They are doing this through initiatives like the Great Reef Census, an annual survey of the far reaches of the 2,300-kilometre coral reef system. However, researchers are having difficulties in gathering and processing data, despite the huge amount of human support pouring in worldwide. According to Professor Peter Mumby, a professorial research fellow at the University of Queensland, time is of the essence, especially that there were four bleaching events or heat waves in the past seven years. “For example, in 2016, about half of the corals in the Great Barrier Reef bleached, which is a symptom of stress. It doesn’t necessarily mean that when a coral bleaches, that it’s going to die. If the sea temperature returns to normal, then the corals can recover. But with really severe heat waves, you can see large areas die out,” the academic explained during a media briefing hosted by the Citizens of the Great Barrier Reef, the University of Queensland, and Dell Technologies. “From a conservation or management perspective, you can’t actually stop heat waves. There are people investigating technologies to help address that, but really, what you can do is try to improve the capacity of the reef to recover once there’s been a bleaching event like this,” he added. All hands on deck Realising that it’s already the 11th hour, Citizens of the Great Barrier Reef quickly embarked on collecting data from coral reefs, via the Great Reef Census. Scientists, divers, tourists, fishers, and skippers on a community research flotilla capture images, which are then analysed by citizen scientists globally to help locate key source reefs important for coral recovery. Going into the project, the initiative was beset with three major data challenges: - How to get people to collect the data. - How to get the collection uploaded. - How to handle the massive volume of data in a timely fashion. As such, the organisation partnered with Dell technologies to solve these obstacles. Dell designed and implemented a strategy for real-time data collection at sea, which uses a Dell edge solution deployed on multiple watercraft. These, in turn, automatically upload data directly to servers upon return to land, via a mobile network. During its first run in 2020, the flotilla managed to capture 13,000 images. The following year, the images grew to 42,000. Data from the census helped the Great Barrier Reef Marine Authority prioritise control sites. Having solved the first two data challenges, the nonprofit was now faced with the last one — how to analyse the data faster. “How do we create a system that could engage citizen scientists all over the world? In the analysis, we did try a new one, an analysis platform, but it was complex, time consuming, tricky to run, and a bit glitchy. So that kind of left us with, okay, this is something we really need to solve,” said Andy Ridley, CEO of Citizens of the Great Barrier Reef. To this end, Dell had to leverage deep learning technology. “We knew that the citizen scientists in the first phase were spending up to seven minutes per photo — uploading them, detecting them, selecting all the parameters of the photo, choosing what was refilled, and what wasn’t, and so forth. We knew that this was taking a certain amount of time, and the accuracy levels varied from a citizen scientist using it, to a researcher that was able to be a lot more accurate in how they were looking at reef images,” said Danny Elmarji, Vice President, Presales, APJ, Dell Technologies. Although more images and participation from volunteers is desired, Dell had to solve the bottlenecks in the equation. Deep learning technology Going into the nuts and bolts of deep learning, how exactly does it work to solve the data challenges encountered by the Great Barrier Reef conservationists? “We are using segmentation algorithms for analysing the images for (the) census,” revealed Aruna Kolluru, Chief Technologist, Emerging Technologies, Dell Technologies APJ. “In segmentation, every pixel in the image is analysed and classified to say which category it belongs (to). According to Kolluru, there are two different segmentation methods: instant segmentation, where every instance of a category can be identified differently; and semantic segmentation, which is what the project is using. This method identifies all the pixels with the same category as one segment. Meanwhile, the infrastructure platform that supports Dell’s deep learning solution has two different elements: one for training, and one for inferencing. “(The) cluster for training is leveraging Dell PowerEdge servers and GPU acceleration for training. We also have PowerScale (i.e., a software-defined Dell storage product), which eliminates the I/O bottlenecks, and provides both high-performance concurrency and scalability for both training and validation of the AI models. On this platform, we trained different deep learning models, unit segment, deep learning, deep lab, and others,” Kolluru shared. For inferencing, PowerScale is again utilised, with each server running with VMware to do multiple AI inferences at the same time. “With every pixel in an image, we started to classify the reef borders, and in under 10 seconds, we used a human to verify the accuracy of the label. What happens now is Dell Technologies is working between this human and machine partnership, to take what was close to 144 different categories of reef organisms, and really dividing them into subcategories. We were able to make a shortlist of 13 categories, and repeatedly refine them until there were only really five critical categories. That’s going from 144 to five, making it really easy for the humans to verify what was (part of the) reef or not,” Elmarji elaborated. Now in its third year, the Great Reef Census kicks off in October 2022. Through the partnership between Citizens of the Great Barrier Reef, University of Queensland, and Dell, the following had been achieved so far: - Improved citizen experience. - Massive improvement in quality of citizen scientist analysis. - Improving trust in the value of citizen science. - Highlights value of machine-human partnerships. - Acceleration of research. - Improved conservation effort. - Constantly improving process. - Scalable technology. In terms of scalability, Elmarji argued that deep learning has an advantage in dealing with data challenges. “The AI doesn’t get bored, whereas citizen scientists can get bored. Imagine uploading all those images and having to repeatedly go through, and stencil out what’s reef and what’s not reef. As humans, our attention spans can drift on occasion, but AI doesn’t. It allowed us to apply something that was purposely built almost for an AI engine to drive it. So, the more data we get, the more accurate the model gets. But this is a really good example of this kind of symbiotic relationship that we believe is going to coexist for some time between humans and machines,” he remarked. Meanwhile, Mumby emphasised the importance of extending the topic of reef conservation beyond the government and the scientific community. “There are so many dedicated people who are doing a lot, but this is just another way in which citizens can contribute towards an important pathway of building the reef’s resilience. We just can’t do it alone; we have to work in partnership,” he noted. As for future use cases, Ridley is banking on the evolution of the internet and smartphones in the next two to three years, as a catalyst for their conservation model to be applicable on a global scale. “We are trialling this new conservation operating system on the Great Barrier Reef but ultimately, we’re building an infrastructure that can be scaled to reefs and marine ecosystems around the world. Our ambition is to share our 21st century conservation model with reef communities around the world, opening up the possibility for reef conservation to scale to a level we have not seen before,” Ridley said.
<urn:uuid:2654010f-8d49-493a-a33f-2201bb2d02ae>
CC-MAIN-2022-40
https://www.frontier-enterprise.com/breathing-new-life-to-the-great-barrier-reef-via-deep-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00443.warc.gz
en
0.941919
1,805
3.328125
3
The telecom industry is increasing its emphasis on 5G and how it will transform both our personal lives and the way we work. As communication needs increase dramatically, so does our need for networks that can handle ever-growing speed and volume. In general, the 5G discussion so far has been mostly concentrated on the radio domain. There is a lot of information on what has to be done on the radio standards to deliver on the 5G promise. The fact is, however, that the changes are not limited to the radio domains. There is a fairly significant change that will need to happen at the core of the network to enable 5G: To be able to deliver on the promise of ultra-high data throughputs (beyond 10 Gbps) and ultra-low latencies (1ms) the core network for 5G will have to rely on an extremely efficient cloud infrastructure. This Cloud Native Core will leverage the best of today's cloud technologies and further advance them, making the cloud architecture more reliable and more distributed to deliver on the availability and latency requirements typical of the telco environment. This will trigger a new approach to the design and development process for both cloud software and infrastructure components. Network quality and standards As mentioned earlier, latency is one vital concern in 5G networks. In fact, there are different factors that affect the quality of network connections. We tend to think of bandwidth alone, but that merely refers to the amount of information that can be transmitted through the connection, like the size of a pipe that carries water. Just as important is the issue of latency or speed of transmission, which is how quickly the information is being transmitted, like water flowing through the pipe. The lower the latency, the better. The two factors will have to work together in establishing the optimal user experience in 5G. Tomorrow’s connections need to move massive amounts of data at lightning speed to support business and consumer demand. Most of us have experienced a pause of a few seconds when we press play on a streaming video or load a web page. And we would likely not notice a delay of a few milliseconds. But in a self-driving car in a 5G environment, even a fraction of a second can mean the difference between life and death if danger is detected. Historically, the wireless operators were largely concerned with promoting connectivity, as evidenced by coverage maps advertised by wireless carriers. The fundamental connectivity as a service then began to give way to speed as the central business case while 4G became the industry standard. 5G promises to continue improving capacity, of course, but operators will also be able to take additional steps towards enjoying additional business opportunities. The improved control over network resources will allow operators to deliver new business services. Concepts like network slicing, for example, will enable operators to create dedicated network resources for specific services. These services will then pave the way for advancements like remote surgery, which will have dedicated resources that will be untouched by spikes in traffic in other network areas. The path to 5G deployment countdown As previously mentioned, to achieve the new benchmark of efficiency and capability of 5G, the evolution will impact more than just the radio aspect and will include the cloud infrastructure as well. The traditional cloud architecture, as a heritage from the IT world, is known for being very centralized, with few huge data centers providing services to the world. These services, such as those provided by large social media companies, don't require the highest delivery speeds because of their nature. That reality enables a few locations to serve many users with little regard for distance, and a second of delay here or there has little lasting impact on the user experience. For 5G to succeed, however, the user and the data center must be much closer to each other. This distributed cloud network (also known as edge cloud) will provide the efficiency and low latency required for tasks ranging from trading stocks to virtual reality. The telco industry also has to further evolve in the way cloud software is developed. Up until now, telco cloud (or NFV -- network functions virtualization) software has been developed as a monolithic package, just now running on top of a virtualized infrastructure. So, not a lot has changed. While this approach served its purpose initially for telco cloud and NFV, it is time to take the next step and make our software cloud native. This will give us the ability to optimize resources utilization and scale the network to provide the capacity needed for 5G. By taking a cloud-native approach, the software itself will concentrate on its specific business logic, with all the information related to user and session data being put into a centralized database, known as the Shared Data Layer. This approach provides better performance, scalability and reliability of services because data can be moved from one virtual network function to another. When one data center goes down, the next can take over without the user experiencing service problems. Once this transformation is executed, service providers will then be able to build specific services for customers on the 5G network. With the latest evolution of 4G networks, like 4.5G Pro and 4.9G, certain applications are technically feasible, such as VR streaming. And to avoid a spike in the utilization of very demanding services that would impact the overall performance of the network, dedicated slices of the network can be created. These provide an additional level of control and ensure not only the availability of critical services, but also a good overall experience for the customers. 5G promises a transformational experience for businesses as well as consumers, and we are starting to see the standards and best practices take shape. By combining updated architecture with a new approach to software, and the ability to launch new services, telco companies will be able to traverse the path to successful 5G implementation within the next few years. — Sandro Tavares, Head, Cloud Core Marketing, Nokia Corp. (NYSE: NOK)
<urn:uuid:534bf85d-58fb-4098-9d6b-fd0a8401e69e>
CC-MAIN-2022-40
https://www.lightreading.com/mobile/5g/readying-the-cloud-for-5g/a/d-id/732428
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00443.warc.gz
en
0.952229
1,198
2.6875
3
In the past decade, artificial intelligence has escaped the confines of research labs and found its way into many of the things we do every day. From online shopping and content recommendation to healthcare and self-driving cars, we are interacting with AI algorithms, in many cases without even knowing it. But we’ve barely scratched the surface, many believe, and artificial intelligence has much more to offer. Ironically, one of the things that is preventing AI from realizing its full potential is the cloud, one of the main technologies that helped usher AI into the mainstream. “The reason we still don’t see AI everywhere is not that the algorithms or technology are not there. The main reason is cloud dependency,” says Ali Farhadi, co-founder and CXO at Xnor, a Seattle-based AI hardware startup. Edge AI, the collective name for hardware and software that enable the performance of AI tasks without a link to the cloud, has gained much steam in the past few years. And as Farhadi and many other experts believe, the edge (or the fog, as some like to call it) will unlock many new AI applications that weren’t possible before. The costs of cloud-based artificial intelligence AI owes its recent rise in popularity to advances in deep learning algorithms and neural networks. But one of the main limits of neural networks is their requirements for vast amount of compute resources, which are mostly available in public cloud platforms. “AI algorithms are very demanding on compute, memory and power. Those have been very limiting factor at scaling AI use cases,” Farhadi says. There are many ways AI’s dependency on the cloud becomes problematic. “Many use cases of AI will happen at the true edge. They need to run 24/7 and multiple times per second,” Farhadi says, adding that running those many AI inferences in the cloud would be too costly for many use cases and businesses. Connectivity to the cloud also imposes costs on the devices that will be running the applications at the edge. Every one of those AI-powered devices will need expensive communication hardware modules as well as a network infrastructure that can support its connection to the cloud. For instance, imagine wanting to deploy an array of smart sensors in a large industrial complex. The individual costs of those devices plus all the switches and networking devices that will connect them to the cloud are truly excessive. Even in smaller and well-connected settings such as smart homes, AI-powered devices impose costs that most people can’t bear. An interesting example is smart security cameras, devices that use AI algorithms to detect intruders or safety incidents. “These things cost $200 per device,” Farhadi says. “In a true smart home, I actually have to have 20-30 cameras. I would go bankrupt if I wanted to install these many devices in my home. And there will also be a constant running cost of AI computation in the cloud.” The limits of AI in the cloud Costs are not the only problem of cloud-based artificial intelligence. In many settings, a connection to the cloud is either non-present or unstable. This limits some of the very important use cases of AI such as agriculture, where computer vision algorithms and other AI techniques can help in precision farming. But farms are usually located in areas where getting a stable broadband internet connection is a real challenge. Another example is automated rescue drones, which need to work in environments where communications infrastructure is weak or has been damaged due to natural disasters. Again, without the AI cloud, the drones won’t be able to function properly. Latency is another challenge of the cloud. As AI algorithms find their way into the physical world, they need to perform many of their tasks in real time. An example is self-driving cars, which use deep learning algorithms to detect cars, objects and pedestrians. In some situations, they need to make split-second decisions to avoid fatal collisions. “I would never get into a car that is going to be driven by an algorithm that runs in the cloud,” Farhadi says, naming several ways that things can go wrong when a car’s AI algorithms are being processed in the cloud. Another issue that has turned into a pain point for the AI industry is the privacy concerns of sending your data to the cloud. Many people are not comfortable with purchasing devices and applications that are constantly streaming audio and video from their home to the cloud to be processed by an AI algorithm. “I don’t have any smart security cameras, because I just don’t want a picture of my daughter’s room to be uploaded somewhere in the cloud, even though Google and others will say this is very secure,” Farhadi says. “It’s just basically a major security concern.” AI’s power consumption problem Having a constant connection to the internet and streaming data to the cloud consumes a lot of electricity. This has become a pain point for many use cases of artificial intelligence, especially in fields like wearables and always-sensing devices that don’t have a mains power supply and are running on batteries. “If you have to replace a battery every three or six months, people won’t want to use it,” Farhadi says. This is especially true in settings like smart cities, where the numbers of devices can reach hundreds of millions. Power consumption also creates an environmental problem. “If the future we’re depicting is true, we’ll be surrounded with lots of devices that are going to make our lives much easier and simpler. That means we’re going to have billions of AI-powered devices. And if you want to do it with a cloud-based AI solution, the carbon footprint of doing that many inferences in the cloud would damage the planet significantly,” Farhadi explains. “When you think about these problems and scale, power has become one of the biggest issues that we have.” The power of edge AI In the past years, AI hardware has become a growing industry, giving rise to an array of startups that are creating specialized chips for performing AI tasks. Many of the efforts are focused on bringing AI closer to the edge and reducing dependencies on the cloud. Xnor is one of several companies developing edge AI devices, but what makes them different is their focus on costs and energy consumption. Two years, ago, the company created an object detection AI that could run on a Raspberry Pi Zero. The model running on the Raspberry Pi was a 16-layer object detection neural network. In other words, it was a very expensive algorithm running on a cheap and weak platform. They called it the five-dollar deep learning machine. More recently, Xnor took a step further and developed a smaller and more efficient device that replaces the $5 Pi Zero with a two-dollar field-programmable gate array (FPGA). Xnor’s new edge AI contraption runs a 30-layered convolutional network, a fairly complicated AI model, and can analyze at 32 framed per second. However, what makes the new device interesting is that it runs solely on solar power and needs no other power supply. According to Farhadi, with a coin-cell battery, the device could run round the clock for 32 years. “If you want to scale everywhere, you have to be able to run on commodity platforms,” Farhadi says. Low-cost, low-power AI devices that can run independent of the cloud can open the way for many new use cases. They also solve the privacy issue associated with many current AI solutions. Xnor’s smart camera only needs to send a few bits every time it detects an object. Since its data transfer requirements for edge AI devices are very low, they can run on communication platforms such as LoRa, which can send data at low rates as far as 20 miles. This again reduces the costs of networking equipment as well as power consumption levels on the device. “We’re going after enabling everything around us to become smarter than it is with literally no cost, on existing compute platforms, maybe no battery or a battery every decade or so,” Farhadi says. An area where low-power edge AI can help are smart wearables, where privacy and power consumption are both serious issues. Settings such as smart homes and smart cities, where scale is challenge, can also benefit from the cost efficiency of innovations such as Xnor’s low-cost deep learning device. Farhadi says that replacing the FPGA for an application-specific integrated circuit (ASIC) would drop the cost from two dollars to a few cents. Also, in environments where you either have to wire thousands of devices or change their batteries every so often, having self-sufficient AI devices that can run on their own power source for decades save a lot of manual maintenance effort. “We want to revolutionize the way people think about AI at the edge, and maybe change the definition of AI at the edge,” Farhadi says. Finding the balance between cloud and edge AI Does innovation at the edge mean that cloud AI will disappear? Probably not. “Cloud was extremely helpful in getting AI out of the labs and into the market. It served AI really well. One of the reasons we saw such a boost in AI was the cloud,” Farhadi says. Farhadi believes that some artificial intelligence applications will remain native to the cloud, including applications that need a lot of updates and applications that require a lot of data bandwidth to and from the device. Farhadi also acknowledges Edge AI doesn’t come without trade-offs. For one thing, the flexibility of and compute power of cloud can’t be replicated at the edge. “You can’t analyze video at hundreds of frames per second on a two-dollar device,” he says. But for most applications, the functionality offered by resource-constrained, edge AI devices is more than enough. “We’re not advocating that everything has to go to the edge. But there are a lot of AI applications that don’t really need to run in the cloud. That’s what we’re after,” Farhadi says. In the long run, artificial intelligence will be divided into cloud-native, edge-native and hybrid applications. The point, Farhadi believes, is to understand the opportunities and power that lies in edge AI and to choose solutions that best serve each use case. “Cloud is not the only solution. There’s a spectrum,” he says.
<urn:uuid:36614d74-0f2f-4fa8-b911-8f53450b1959>
CC-MAIN-2022-40
https://bdtechtalks.com/2019/03/06/artificial-intelligence-edge-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00443.warc.gz
en
0.960731
2,241
2.609375
3
From the heights of oceans tides each hour, a company’s daily sales revenue, or a country’s annual inflation rate, time series metrics play an extremely important part of our everyday lives. Read on to better grasp: - What time series metrics are - The different types of time series metrics - Examples and use cases What are Time Series Metrics? A time series is a sequence of sequential data points that occur over a particular interval of time. A “metric”, in this case, refers to the piece of data that is tracked at each increment of time. A time series metric has two main features: - Measurable: this means that you can assign a numeric value to it - Variable: this means the metric changes over time One important difference about time series data is that time is not just a metric (like the piece of data being collected), but rather it is the primary axis. This means that each numeric data point is paired with a timestamp and one or more labeled dimensions associated with the metric. It should be noted that the time intervals between each data point are most often gathered at regular intervals, although they can also be irregular. Time series also consists of 4 main components: - Level: this refers to the average, baseline value of a time series as if it were a straight line - Trends: this means that variations in the data move up or down in trends or reasonably predictable patterns. - Seasonality: this means that there are seasonal variations that repeat over a specific time period, for example, each day, week, month, or year. - Variability: also known as “noise” or “volatility” ”,this refers to random variations that don’t fall into any of the other three categories. The Different Types of Time Series Metrics Now that we have a high-level definition, let’s look at a few of the different types of time series metrics. Univariate Time Series Univariate time series refers to a type of time series that consists of a single observed value that is recorded in sequential order with equal or non-equal time intervals. When you are modeling a univariate time series, each metric represents changes in a single time-dependent variable. For example, the following image from Quantisti demonstrates a univariate time series of the Dow-Jones Industrial Index from August 1968 – Oct. 1992: Multivariate Time Series Multivariate time series refers to data that has more than one time-dependent variable. If there are just two time-dependent variables, this is referred to as a “bivariate time series”. In a multivariate time series, each metric has some dependency on the other variables. For example, in the following image from Machine Learning Mastery we can see a multivariate time series with 7 subplots of time series data that are used to forecast air pollution.. In this example, each of the variables has some dependency on each other and can all be used together to predict air pollution in the following time steps. Time Series Metrics for Time Series Databases One common application of time series metrics is for monitoring systems and time series databases (discussed further below). For example, Prometheus is an open-source time series database which offers four core metric types including: - Running counter: this is a cumulative metric that represents a single, increasing counter that preserves the given order. - Gauge: This is a metric for which each data point represents an average of all measurements within the data point time window. A gauge is typically used to measure things like average temperature in the last day, average response time in the last hour or your computer’s average memory usage in the last minute. - Histogram: this metric samples from the observations and counts them in configurable buckets, and also provides a sum of the observed values. - Summary: this metric also samples from the observations and provides a total count, a sum of all the observed values, and calculates configurable quantiles over a sliding time window. Examples and Use Cases of Time Series Metrics Now that we’ve reviewed the different types of time series metrics, let’s review several examples and use cases of them. Time Series Anomaly Detection & Forecasting One of the main applications of time series metrics is for time series analysis and forecasting: - Time series anomaly detection : this refers to data mining techniques that are used to detect outliers in a dataset . Anomaly detection has many real-world applications – from business monitoring, fraud detection, medical anomalies and cyber attack prevention. Examples of business monitoring use cases include revenue monitoring, cost monitoring and customer experience monitoring. - Time series forecasting: the use of models in order to predict future values based on previously observed data. For example, eCommerce companies often use time series forecasting to forecast demand so they can plan their inventory and budgetary requirements Five of the most important concepts for time series anomaly detection and forecasting include: - Seasonality: The presence of variations that occur at regular intervals is very common in real-world datasets and identifying these patterns help improve our anomaly detection and forecasting efforts. - Stationarity & Non Stationarity: A common assumption for time series techniques is that the data is stationary, which means that the statistical properties such as the mean, variance, and autocorrelation are constant over time.Conversely, non-stationary time series refers to data whose statistical properties change over time. - Trends: understanding if upward or downward trends are present over certain periods of time are an important part of forecasting and detecting anomalies in a data set. - Temporal Pattern Analysis: Temporal patterns refer to a segment of signals that recur throughout a time series and identifying these patterns plays an important role in time series analysis. - Event Impact Analysis: Often in time series data there are events that skew the data in a certain direction, so identifying these events and analyzing their impact is another valuable tool for anomaly detection and forecasting. Time Series Monitoring Since time series data tracks changes over time it is often used to monitor things like website traffic, fluctuating prices and IT systems. This time series data is often collected in short intervals (i.e., minutes), so as you can imagine the data accumulates very rapidly. As a result, it’s important to have a database that is optimized for time series data, which as we’ll discuss below, is why they’ve grown so much in popularity in recent years. Time Series Databases One of the most common applications of time series metrics are time series databases (TSDBs). As you can see from the image below, time series databases are in fact the fastest-growing category of database model in the past 24 months: One of the reasons for this trend is that time series databases are much more scalable than normal databases. While you can use a normal relational or NoSQL database, they simply do not perform as well as one that treats time as the primary axis. In particular, a few of the efficiencies of a time series database over normal databases include higher data ingest rates, improved data compression, and faster queries. Summary: Time Series Metrics Time series metrics play an incredibly important part of our daily lives. By treating time as the primary axis, we’ve seen that we can use these metrics for statistical analysis, forecasting, monitoring systems, and databases.
<urn:uuid:830a336e-eef0-4db2-ab10-911f7569ba06>
CC-MAIN-2022-40
https://www.anodot.com/learning-center/time-series-metrics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00443.warc.gz
en
0.930182
1,540
3.375
3
The purpose of the HIPAA Privacy Rules is to protect the confidentiality of patient healthcare and payment data to prevent abuse and fraud. Published by the Department of Health and Human Services as the “Standards for Privacy of Individually Identifiable Health Information”, the HIPAA Privacy Rules stipulate the permissible uses and disclosures of protected health information (“PHI”) gives individuals rights over their HIPAA PHI. All health plans (with the exception of some employer health plans) and healthcare clearinghouses are required to comply with the HIPAA Privacy Rules, as are most healthcare providers and any Business Associates with whom PHI is shared. Failure to comply with the HIPAA Privacy Rules can incur significant civil monetary penalties – even if no unauthorized disclosure of PHI has occurred – and criminal penalties if the violation is knowing, under false pretenses, or for personal gain. What Information is Covered by the HIPAA Privacy Rules? Before discussing how the HIPAA Privacy Rules safeguard Protected Health Information (PHI), it is important to understand what information is covered by the HIPAA Privacy Rules to ensure Covered Entities (health plans, healthcare clearinghouses, and qualifying healthcare providers) and Business Associates protect the right information and avoid protecting information unnecessarily. The Department of Health & Human Services (HHS) defines HIPAA PHI as individually identifiable health information that relates to: - An individual´s past, present, or future physical or mental health or condition, - The provision of health care to the individual, or - The past, present, or future payment for the provision of health care to the individual. The HHS does not define which elements of individually identifiable health information should be protected (i.e., name, address, date of birth, etc.), and many compliance experts rely on the safe harbor standard for the de-identification of PHI (§164.514) to determine the eighteen elements (or “identifiers”) that should be protected from impermissible uses and disclosures. A list of the eighteen identifiers is included in the FAQs section at the end of this article. It is important to note that the HIPAA Privacy Rules not only cover information relating to an individual, but also those that could identify a relative, employer, or household member when the identifiers are maintained in the same record set. Therefore, PHI might not only consist of the name, address, and date and birth of a patient, but also the telephone number of their employer or the license plate number of a partner´s car if the data is maintained in the same record set. It is also important to note that, although other health information might not be consider protected under HIPAA, this is only the case when it is isolated from any other protected health information. For example, a data set of vital signs by themselves does not constitute PHI. However, if the vital signs data set includes names, identifying numbers, or images that could reasonably identify an individual, the entire data set is considered PHI and must be protected. How the HIPAA Privacy Rules Safeguard PHI The HIPAA Privacy Rules specify the required and permissible circumstances when PHI can be disclosed to a third party without the authorization of the individual to whom the information relates. There are only two circumstances in which the disclosure of PHI is required – when requested by a patient or their personal representative, or when requested by a representative of HHS who is undertaking an audit, compliance investigation, or enforcement action. By contrast, there are many circumstances in which the use or disclosure of PHI is permitted – but not required – by the HIPAA Privacy Rules. These include for treatment, payment, or health care operations, when a disclosure is for public health or benefit activities (i.e., law enforcement, reports of neglect or abuse, health oversight activities, etc.), to comply with workers´ compensation laws, or when the disclosure is in response to a subpoena or other lawful process. All other uses and disclosures of PHI must be consented to by the patient – either through informal permission for uses such as inclusion in a hospital directory or formal written authorization if PHI is disclosed to (for example) an employer, a pharmaceutical company, or a marketing firm. If a patient or their legal representative is unable to provide their consent, a Covered Entity can use professional judgement to determine whether the disclosure is in the patient´s best interests. Additionally, the Privacy Rule´s Administrative Requirements (§164.530) require Covered Entities to “reasonably safeguard PHI from any intentional or unintentional use or disclosure that is in violation of [the HIPAA Privacy Rules]”. The Administrative Requirements do not offer any guidance on how Covered Entities should “reasonably safeguard” PHI other than implementing “appropriate administrative, technical, and physical safeguards to protect the privacy of PHI”. Individuals´ Rights over HIPAA PHI HIPAA gives individuals rights to request copies of PHI maintained by Covered Entities and (importantly) Business Associates, correct erroneous entries in their medical or insurance records, and restrict who has access to it beyond required disclosures to HHS representatives. Individuals also have the right to transfer their PHI to another provider and to request a “disclosure of accounting” – which should not only contain details of who PHI has been shared with, but why. The failure to comply with Individuals´ rights (often referred to as patients´ rights) is one of the leading causes of complaints to the HHS´ Office for Civil Rights; and although the complaints are simple to resolve, the Office for Civil Rights has started imposing civil monetary penalties on healthcare providers who do not comply with this particular area of the HIPAA Privacy Rules. In 2021, more than a dozen HIPAA settlements related to violations of individuals´ rights – the largest settlement being for $200,000. Compliance with the HIPAA Privacy Rules is Not HIPAA Compliance Compliance with the HIPAA Privacy Rules alone does not make a Covered Entity or Business Associate HIPAA compliant. Covered Entities and Business Associates also have to comply with the HIPAA Security and Breach Notification Rules in order to comply with HIPAA. The HIPAA Security Rules are a subset of the HIPAA Privacy Rules and have been developed to safeguard PHI when it is created, used, stored, or transmitted electronically. The Rules stipulate administrative, physical, and technical safeguards must be put in place to protect electronic PHI (ePHI) when in transit and at rest. The Breach Notification Rules require Covered Entities to report breaches of unsecured ePHI to the individual(s) whose PHI has been exposed (or potentially exposed) and to the HHS´ Office for Civil Rights. In certain circumstances, it is also necessary to inform the local media of the data breach. The failure to report a breach in a timely manner can attract sanctions for HIPAA violations. It is also important for Covered Entities to comply with the Rules relating to Business Associate Agreements. These contracts must – among other things – establish the permitted uses and disclosures of PHI by the Business Associate, require the Business Associate to implement appropriate safeguards to prevent unauthorized uses and disclosures of PHI, and require the Business Associate to report any uses or disclosures not provided for by the contract. Possible Enforcement Actions for Breaches of HIPAA It was mentioned above that the failure to comply with the HIPAA Privacy Rules can incur civil and criminal penalties. Although the HHS´ Office for Civil Rights prefers to resolve violations of HIPAA with technical assistance and corrective action plans, there are many examples of Covered Entities agreeing to multi-million dollar civil settlements and employees being sentenced to prison for knowingly taking PHI under false pretenses or for personal gain. When civil monetary penalties are imposed, the amount of the penalty depends on the level of culpability: - Tier 1: A violation that a Covered Entity or Business Associate was unaware of and could not have reasonably been avoided had an appropriate amount of care had been taken. - Tier 2: A violation that a Covered Entity or Business Associate should have been aware of but could not have avoided even with an appropriate reasonable amount of care. - Tier 3: A violation suffered as a direct result of “willful neglect”, where a Covered Entity or Business Associate has made an attempt made to correct the violation. - Tier 4: A violation of HIPAA law attributable to willful neglect, where no attempt has been made to correct the violation by a Covered Entity or Business Associate. Minimum and maximum penalties for each Tier were originally determined by the “General Penalty for Failure to Comply with Requirements and Standards” provision included in the original text of HIPAA. However, the amount of the penalty (up to $100 per violation to a maximum of $25,000 per year) were not a sufficient deterrent to non-compliant organizations, so the minimum and maximum penalties were increased significantly in the HITECH Act 2009 and have since been adjusted for inflation. The 2022 minimum and maximum penalties for breaches of the HIPAA Privacy Rules are: |Penalty Tier||Level of Culpability||Min. Penalty per Violation||Max. Penalty per Violation||Annual Penalty Limit| |Tier 1||Lack of Knowledge||$127||$63,973||$1,919,173| |Tier 2||Reasonable Cause||$1,280||$63,973||$1,919,173| |Tier 3||Willful Neglect||$12,794||$63,973||$1,919,173| |Tier 4||Willful Neglect not Corrected within 30 days||$63,973||$1,919,173||$1,919,173| In addition to the civil monetary penalties that can be imposed by HHS´ Office for Civil Rights, State Attorneys General also have the authority to pursue enforcement action for breaches of the HIPAA Privacy Rules when the breach impacts a resident of the state. With regards to criminal proceedings, these are pursued by the Department of Justice if there is evidence to suggest PHI was “knowingly” disclosed without authorization under false pretenses or for personal gain. Why Additional Training on the HIPAA Privacy Rules is Important The HIPAA Privacy Rule requires Covered Entities to “train all members of the workforce on the policies and procedures in respect to PHI […] as necessary and appropriate for the members of the workforce to carry out their functions with the Covered Entity”. No training requirements others than the implementation of a security and awareness training program exist for Business Associates. Often, complying with the minimum requirements to train members on the workforce on just the policies and procedures relevant to their functions is insufficient to prevent violations of the HIPAA Privacy Rules due to the many ways in which personnel (maintenance teams, environmental services teams, marketing teams, etc.) can encounter PHI outside their usual functions. Additional training on the HIPAA Privacy Rules can therefore prevent inadvertent and avoidable violations of HIPAA attributable to a lack of knowledge – especially for employees of Business Associates who may have to respond to requests by individuals for copies of their PHI or requests to amend erroneous information in their insurance or medical records. Because Covered Entities and Business Associates differ in what they do and how they do it, there is no “one-size-fits-all” training on the HIPAA Privacy Rules. However, on our HIPAA training requirements page, we have suggested several training modules that are appropriate for members of the workforce to better understand the HIPAA Privacy Rules and how to comply with them. HIPAA Privacy Rules FAQs Which healthcare providers are not required to comply with the HIPAA Privacy Rules? Although most healthcare providers do qualify as Covered Entities, those who do not transmit transactions electronically are usually exempt from complying with the HIPAA Privacy Rules, as are educational institutions that only provide healthcare services to students (as these records are protected by FERPA) and some employers that administer their own self-funded health plans. Why might penalties be imposed if no unauthorized disclosure of PHI has occurred? The HIPAA Privacy Rules not only safeguard the privacy of PHI, but they also give individuals rights over their HIPAA PHI. If a Covered Entity fails to comply with (for example) a request for a copy of an individual´s HIPAA PHI, the Covered Entity is in violation of the HIPAA Privacy Rules and could be fined is the violation is considered serious by HHS´ Office for Civil Rights. What are the eighteen identifiers that should be protected from impermissible uses and disclosures? As mentioned above, the eighteen identifiers generally regarded to require protection from impermissible uses and disclosures are taken from the “safe harbor” method of de-identification. This list is not sanctioned by HHS, and Covered Entities are advised to consider whether all individually identifiable health information should be protected from impermissible uses and disclosures. The eighteen “safe harbor” identifiers are: - All geographic subdivisions smaller than a State - All elements of dates (except year) for dates directly related to an individual. - Telephone numbers - Fax numbers - Electronic mail (email) addresses - Social security numbers - Medical record numbers - Health plan beneficiary numbers - Account numbers - Certificate/license numbers - Vehicle identifiers and serial numbers, including license plate numbers - Device identifiers and serial numbers - Web Universal Resource Locators (URLs) - Internet Protocol (IP) address numbers - Biometric identifiers, including finger and voice prints - Full face photographic images and any comparable images - Any other unique identifying number, characteristic, or code Why did a Covered Entity get fined $200,000 following a right of access complaint? In some cases, HHS´ Office for Civil Rights may impose significantly higher fines if the Covered Entity has a history of non-compliance. In this case, the Covered Entity – Arizona-based Banner Health – had previously experienced a cyber-attack exposing the unsecured ePHI of 3.7 million patients. An investigation into the breach identified non-compliance with multiple Security Rule standards. When have employees been sentenced to prison for violating the HIPAA Privacy Rules? The most recent occasion was in 2020, when a former medical clinic worker from Florida used her authorized access to healthcare systems to steal patients´ PHI and sell it to identity thieves for cash. The former employee was apprehended during an undercover operation, charged with aggravated identity theft and wire fraud, and sentenced to 48 months in federal prison.
<urn:uuid:4aeacb6d-c973-4c4f-b4d5-edf28e126dc3>
CC-MAIN-2022-40
https://www.hipaanswers.com/hipaa-privacy-rules/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00443.warc.gz
en
0.930696
3,001
2.625
3
A distributed denial of service (DDoS) attack is a malicious attempt to make an online service unavailable to users, usually by temporarily interrupting or suspending the services of its hosting server. Once thought of as prankish annoyance, DDoS attacks today are often a tool for cybercriminals to earn income. They’re regarded as one of the most powerful weapons on the internet because cybercriminals can launch them at will, impact any part of a website’s operations or resources, and lead to costly, time-consuming service interruptions. DDoS attacks are distinct from other denial of service (DoS) attacks, in that they use a single Internet-connected device (one network connection) to flood a target with malicious traffic. Attacks can be launched from any number of compromised devices. To nobody’s surprise, the number and complexity of DDoS attacks are increasing. Imperva Research Labs recently reported DDoS activity increased by 286% between Q4 2020 and Q1 2021. Security teams work hard to mitigate these attacks, but as they thwart them, the hackers adapt their strategies. Many organizations rely on their internet service provider (ISP) for DDoS mitigation because this service often comes as a relatively low-cost add-on to the ISP’s existing bandwidth offerings. Hackers understand this very well so they make ISPs top-priority targets for DDoS attacks. In May 2021, Belgian ISP BelNet suffered a large-scale DDoS attack that caused service disruptions for more than 200 organizations including government, healthcare, and academic institutions. The massive attack unfolded in consecutive waves, although it was not a sophisticated DDoS attack and seemed designed simply to inundate the network by sending thousands of IP addresses to create a surge in traffic flow. The result was a costly major disruption, but it could have been much worse. ISPs focus first and foremost on their principal technology services. DDoS attack protection is a feature they can say they offer, but they may only provide low-cost basic protections that are likely to be sufficient to stop only the most basic DDoS attacks. Choosing a security-oriented solution provider that specializes in DDoS protection enables you to mitigate risk in ways your ISP cannot. Here are five reasons why opting for a security-first vendor is smarter than depending on your ISP: - Your organization is not the ISP’s top priority. If an ISP detects large volumes of traffic going after their network, they may block all traffic – including to your site. At some level, the ISP actually helps attackers achieve their aim of shutting down websites. - Your ISP has limited bandwidth. For ISP’s under DDoS attack, the default response, as we mentioned, is to indiscriminately block traffic. A security-first vendor is capable of spreading the traffic over multiple ISPs and leveraging massive amounts of bandwidth using multiple data centers to absorb volumetric attacks. - ISPs do not protect against protocol attacks. As an organization, you are vulnerable to SYN floods, fragmented packet attacks, Ping of Death, Smurf DDoS, etc. that consume actual server resources, or those of intermediate communication equipment, such as firewalls and load balancers. ISPs don’t protect against these attacks. They also do not protect against advanced DDoS attacks such as burst attacks, dynamic IP attacks, or multi-vector attacks. - ISPs are not obligated to provide “best efforts” to stop an attack. The downtime that DDoS attacks cause is costly so the faster the response time, the better. ISPs offer no service level agreement (SLA) that articulates attack detection times, mitigation times, or quality of mitigation. The delays alone could cost a small fortune. - DDoS security is not the ISP’s core business. DDoS attacks have distinct characteristics, and developing ways to mitigate them and minimize their impact on customers requires the skills and expertise of a security-first vendor. A good vendor will stay up to date on new attack methods and trends and have tools at their disposal to respond quickly and effectively to attacks. Want more information about effective DDoS Protection? See what Imperva has to offer. Try Imperva for Free Protect your business for 30 days on Imperva.
<urn:uuid:0198f087-b0f2-4716-8c0a-8648f322169e>
CC-MAIN-2022-40
https://www.imperva.com/blog/5-reasons-why-depending-on-your-isp-for-ddos-protection-is-a-bad-idea/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00643.warc.gz
en
0.949862
866
2.625
3
The interpreted language Python is a lot of fun. It’s great for quick and dirty lash-ups, and has list comprehensions whilst being easier to use that Haskell. There are many great reasons why you would never deploy it in a production environment, but that’s not what this article is about. In the UK, the government decided that schoolchildren needed to learn to code; and Python was picked as the language of choice. Superficially it looks okay; a block structured BASIC and relatively easy to learn. However, the closer I look, the worse it gets. We would be far better off with Dartmouth BASIC. To fundamentally understand programming, you need to fundamental understand how computers work. The von Neumann architecture at the very least. Sure, you can teach CPU operation separately, but if it’s detached from your understanding of software it won’t make much sense. I could argue that students should learn machine code (or assembler), but these days it’s only necessary to understand the principle, and a high level language like BASIC isn’t that dissimilar. If you’re unfamiliar with BASIC, programs are made up of numbered lines, executed in order unless a GOTO is encountered. It also incorporates GOSUB/RETURN (equivalent to JSR/RTS), numeric and string variables, arrays, I/O and very little else. Just the basic building blocks (no pun intended). Because of this it’s very quick to learn – about a dozen keywords, and familiar infix expression evaluation, and straightforward IF..THEN comparisons. There are also a few mathematical and functions, but everything else must be implemented by hand. And these limitations are important. How is a student going to learn how to sort an array if a language has a built-in list processing library that does it all for you? But that’s the case for using BASIC. Python appears at first glance to be a modernised BASIC, although its block structured instead of having numbered lines. That’s a disadvantage for understanding how a program is stored in sequential memory locations, but then structured languages are easier But from there on, it gets worse. Data types are fundamental to computing. Everything is digitised and represented as an appropriate series of bits. You really need to understand this. However, for simplicity, everything in python is treated as an object, and as a result the underlying representation is completely hidden. Even the concept of a type is lost, variables are self-declaring and morph to whatever type is needed to store what’s assigned to them. Okay, you can do some cool stuff with objects. But you won’t learn about data representation if that’s all you’ve got, and this is about teaching, right? And worse, when you move on to a language for grown-ups, you’ll be in for a culture shock. A teaching language must have data types, preferably hard. The next fundamental concept is data arrays; adding an index to a base to select an element. Python doesn’t have arrays. It does have some great built in container classes (aka Collections): Lists, Tuples, Sets and Dictionaries. They’re very flexible, with a rich syntax, and can be used to solve most problems. Python even implements list comprehensions. But there’s no Having no arrays means you have to learn about the specific characteristics of all the collections, rather than simple indexing. It also means you won’t really learn simple indexing. Are we learning Python, or fundamental programming principles? Unlike BASIC, Python is block structured. Highly structured. This isn’t a bad thing; structuring makes programs a lot easier to read even if it’s less representative of the underlying architecture. That said, I’ve found that teaching an unstructured language is the best way to get students to appreciate structuring when it’s added later. Unfortunately, Python’s structuring syntax is horrible. It dispenses with BEGIN and END, relying on the level of indent. Python aficionados will tell you this forces programmers to indent blocks. As a teacher, I can force pupils to indent blocks many other ways. The down-side is that a space becomes significant, which is ridiculous when you can’t see whether it’s there or not. If you insert a blank line for readability, you’d better make sure it actually contains the right number of spaces to keep it in the right block. WHILE loops are support, as are FOR iteration, with BREAK and CONTINUE. But that’s about it. There’s no DO…WHILE, SWITCH or GOTO. You can always work around these omissions: You can also fake up a switch statement using IF…ELSEIF…ELSEIF…ELSE. Really? Apart from this being ugly and hard to read, students are going to find a full range of control statements in any other structured language they move In case you’re still simmering about citing GOTO; yes it is important. That’s what CPUs do. Occasionally you’ll need it, or at least see it. And therefore a teaching language must support it if you’re going to teach And finally, we come on to the big one: Object Orientation. Students will need to learn about this, eventually. And Python supports it, so you can follow on without changing language, right? Wrong! Initially I assumed Python supported classes similar to C++, but obviously didn’t go the whole way. Having very little need to teach advanced Python, I only recently discovered what a mistake this was. Yes, there is a Python “class”, with inheritance. Multiple inheritance, in fact. Unfortunately Python’s idea of a class is very superficial. The first complete confusion you’ll encounter involves class attributes. As variables are auto-creating, there is no way of listing attributes at the start of the class. You can in the constructor, but it’s messy. If you do declare any variables outside a method it silently turns them into global variables in the class’s namespace. If you want a data structure, using a class without methods can be done, but is messy. Secondly, it turns out that every member of a class is public. You can’t teach the very important concepts of data hiding; how to can change the way a class works but keep the interface the same by using accessors. There’s a convention, enforced in later versions, that means prefixing a class member with one or more underscores makes it protected or private, but it’s confusing. And sooner or later you discover it’s not even true, as many language limitations are overcome by using type introspection and this rides a coach and horses through any idea of private data. And talking of interfaces, what about pure virtual functions? Nope. Well there is a way of doing it using an external module. Several, in fact. They’re messy, involving an abstract base class. And, in my opinion, they’re pointless; which is leading to the root cause why Python is a bad teaching language. All Round Disaster Object oriented languages really need to be compiled, or at least parsed and checked. Python is interpreted, and in such a way as it can’t possibly be compiled or sanity checked before running. Take a look at the eval() function and you’ll see why. Everything is resolved at run-time, and if there’s a problem the program crashes out at that point. Run-time resolution is a lot of fun, but it contradicts object orientation principles. Things like pure virtual functions need to be checked at compile time, and generate an error if they’re not implemented in a derived class. That’s their whole point. Underneath, Python is using objects that are designed for dynamic use and abuse. Anything goes. Self-modifying code. Anything. Order and discipline are not required. So we’re teaching the next generation to program using a language with a wide and redundant syntax and grammar, incomplete in terms of structure, inadequate in terms of object orientation, has opaque data representation and typing; and is ultimately designed for anarchic development. Unfortunately most Computer Science teachers are not software engineers, and Python is relatively simple for them to get started with when teaching. The problem is that they never graduate, and when you end up dealing with important concepts such as object orientation, the kludges needed to approximate such constructs in Python are a major distraction from the underlying concepts you’re trying to teach.
<urn:uuid:92e70080-33e8-4dea-964e-25bc33c8e516>
CC-MAIN-2022-40
https://blog.frankleonhardt.com/category/life/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00643.warc.gz
en
0.913426
1,989
2.96875
3
Bias, consent, and intent - the ethical dilemmas around AI Did you know, if you are a Mac user, you might end up paying higher for hotel rooms? Or, that activity tracker on your Fitbit could be a compelling legal evidence? This is the dark side of the Artificial Intelligence era. As AI integrates into the very fabric of our lives, it’s giving rise to unique ethical dilemmas that warrant deep thought. An analogy used by ethics and security expert Patrick Lin comes to mind. Let’s assume you are speeding down a crowded expressway in your self-driving car, and suddenly a massive crate falls off the truck and lands ahead of you. If you don’t swerve, you may be killed. If you do swerve, you will hit a biker on either side. If you were driving yourself, your action will be driven by instincts and muscle memory. But how can you code that response in a self-driving car? How can a machine be trained to react instinctively like us? Certainly, simple decision-making principles do not apply here. What if the machine decides to hit a biker on either of its side to save the head-on collision? Oftentimes, decisions made by a machine could be at loggerheads with the moral standards of a society. Take for example Microsoft’s AI assistant Tay, that soon after its launch in 2016 started spewing out racist remarks, sending the readers and the company in a panicked frenzy. According to a 2018 Deloitte survey of 1,400 AI professionals in the US, 32% of respondents ranked ethical issues as one of the top three risks of AI. However, most organizations, countries, and governments are not yet cognizant of the approaches that can be adopted to deal with AI ethics. We are so enamored by AI and its potential to learn that we sometimes ignore guiding the algorithm to learn right. Bias is real IArtificial intelligence is just that – artificial. The decisions made by such an algorithm is typically heavily influenced by the way it is coded and the kind of data they are exposed to – either through training, or via user feedback. Exposure to bias from the creator or the users – whether deliberate or inadvertent – can quickly reflect in its functioning, damaging outcomes and impacting stakeholders of the society. For example, Amazon in 2015 was forced to scrap its AI recruiting tool after it showed bias against women candidates. Similarly, Twitter’s image-cropping algorithm that went viral last year after users discovered it was racist in favour of the white race. The root cause of such biases in AI can be traced back to the historical bias residual in the primary training data set. To isolate and rectify that bias and adjust the algorithm is a challenge, one that is further compounded by the speed of computing. The time is now ripe to examine and take responsibility for how we approach new-age technology. How are we designing for technology that keeps the greater good in mind? How do we discourage products and services that use data to “hook” people to them, even if it isn’t valuable to the user? It all starts with data They say if you are using something for free, you aren’t the consumer but the product, or rather your data, is. At least in today’s scenario. All of us leave behind data trails that we are not even aware of. This data has immense value to companies who pay millions of dollars to understand who you are, your preferences, your spending capacity, and what you are willing to pay for. They use this data to create personalized offers and recommendations to drive revenue and engagement. Personal data, in short, is a gold mine. Jeff Hammerbacher, Co-founder and Chief Scientist of Cloudera once pointed out the number of avenues brands have at their disposal to track and monitor everything we do, that we can’t even understand who is tracking us, what information they are getting, and how they are using it. Protecting personal data is becoming a battlefield all on its own, and the ownership and access of data is an ongoing conflict. According to Harvard Professor Dustin Tingley, the key questions to determine data ethics are: ‘Is this the right thing to do?’ and ‘Can we do better?’ A good framework rests on five principles: People own their data The subject of the data has the right to know their data will be collected, stored, and used Data privacy must be ensured What is the intent to use the data? How will it benefit you? Does it benefit the data subject, or is it only for your gain? Does it harm the data subject in any way? Are the outcomes causing harm to the data subject? An important aspect of upholding these principles is consent. Do you have express, informed, and recent consent from the data subject to use their information? While getting this consent is easier in industries like banking and healthcare, where there is an established communication mechanism with the consumers, it becomes fuzzy in cases like social media data. While there are regulations for companies to get consent from consumers, the complex legal agreements make the whole exercise redundant. Don’t we all just scroll through to the end and simply click “Agree,” signing away rights to our data? Does it make it right for companies to then say we signed up for it? Can we make it simpler for consumers to understand what they are sharing? Can we set up data exchanges to give customers the power to monetize their data and get rid of middlemen? Every organization these days, from Google to UNESCO (specifically to address gender bias in AI), the Govt. of Canada, BMW, among others are defining their own ethical frameworks for AI. Perhaps not surprisingly, most of them include obvious, commonly acknowledged, and broad-based parameters like environmental sustenance, human intelligence, equality, inclusivity, non-discrimination, etc. But are these high-level guiding principles able to provide developers and data scientists the necessary guard rails to write ethical, unbiased code in their models? Unfortunately, the answer is no. The answer probably lies in a sustained ongoing effort, which is focused on deployment and has metrics at its core to define frameworks for AI ethics. This effort needs to be measured and monitored by a cross-functional team of experts, drawn not just from technology, but also from risk, legal, data science, and an independent watchdog. However, as a matter of fact, there will never be one single metric that would be an indicator of AI fairness. What we need is a combination of many such metrics. For example, to ensure minimal bias in lending processes and algorithms, one must refer to the many case laws and judgments in US credit, housing, and employment laws. Some organizations monitor metrics like adverse impact ratio, marginal effect, or standardized mean difference to quantify discrimination in highly regulated fair-lending environments. Having said that, can we assume that metrics will be foolproof and ensure a minimal-bias AI system? Not really. There will always be some facets of algorithmic decision-making that will be difficult to quantify. However, this shouldn’t deter organizations from undertaking this sometimes resource-intensive, seemingly daunting exercise because the alternative is not an option. Waiting to see the ill effects of AI and taking corrective action as a rection to that effect will have catastrophic impact on a company’s client base, market reputation, and the society at large. Tell us how you are building and deploying ethical AI. We would love to know more about your journey. Write to me at [email protected].
<urn:uuid:79ff489e-989e-405b-bafd-f200d3cf8762>
CC-MAIN-2022-40
https://www.infosysbpm.com/blogs/business-transformation/bias-consent-and-intent-the-ethical-dilemmas-around-ai.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00643.warc.gz
en
0.947092
1,639
2.71875
3
We tend not to think of trains as being a security concern—instead, our concerns often turn towards airplanes and even road vehicles. Beyond urban transit systems with high traffic, such as New York City’s subway system, most people don’t think much about train security. The reality is, all trains are inherently vulnerable to security threats. This has created a need for reliable transportation security systems but to properly implement such systems, we must first understand why and how trains have become such a vulnerability. It’s easy to underestimate the sheer number of railways around the world. In the United States alone, there are over 100,000 miles of rail—freight trains are often the veins of a nation, connecting major cities, trade ports, and is an essential part of the infrastructure. Access to information about these trains, including their schedules, is sometimes available publicly. What’s more concerning these days are data breaches, in which a terrorist may gain knowledge about what sensitive materials are being shipped where and plan an attack around that information. The same things that make passenger trains effective for end-users also makes them vulnerable targets. For passenger trains to effectively benefit the public, they have to focus on being reliable. This means consistently having the ability to be filled to capacity and transport a large number of individuals at one time. It also means reliably showing up to the right locations at the right time. This easily accessible information makes passenger trains a viable target for attacks as well. An Answer: Transportation Security Systems While the major vulnerabilities that exist in the use of railways are inherent, there are certain measures that can lessen those risks. The use of transportation security systems can be incredibly useful but the nature of railway systems means that security systems can’t impede upon the efficiency of trains in any way. This is why proposals to install metal detectors at entry points for passenger trains has been met with disdain, as such a measure would have negative impacts on those heavy traffic trains. Intelligent optical systems, such as Gatekeeper’s Intelligent Train Undercarriage Scanner, are able to make freight and passenger trains alike more secure without impeding on the functionality of railways. Transportation Security Systems With Gatekeeper Gatekeeper Security’s suite of intelligent optical technologies provides security personnel with the tool to detect today’s threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to take action. From automatic under vehicle inspection systems, automatic license plate reader systems, to on the move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. Throughout 30 countries around the globe, Gatekeeper Security’s technology is trusted to help protect critical infrastructure. Follow us on Facebook, Google+, and LinkedIn for updates about our technology and company.
<urn:uuid:f20f2a63-734a-412c-913e-1e8cec444ae3>
CC-MAIN-2022-40
https://www.gatekeepersecurity.com/blog/trains-inherently-vulnerable-necessitate-transportation-security-systems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00643.warc.gz
en
0.952527
591
2.859375
3
Generative Adversarial Networks (GANs) can be defined as deep neural architectures that have two nets that are pit against each other. They were first introduced in a paper by Ian Goodfellow in 2014 together with other researchers from the University of Montreal. How Does GANS Work? To be able to understand GANs, you need to know how generative algorithms work. You can start with the neural network that is referred to as a generator that is able to generate new data. The other one that will contrast it is referred to as the discriminator that serves the purpose of evaluating authenticity. Moreover, the work of the generator is to create new images that will be passed on to the discriminator. Consequently, it will determine whether they are authentic or fake. The neural network has the ability to generate inaccurate information purposefully, as if telling a lie. The discriminators’ task is to identify whether information generated by the neural network is true or false. You can understand how GANs works by looking at it from the eyes of a police officer and a counterfeiter. The counterfeiter seeks to learn new ways to pass notes without being detected while the police officer comes up with new ways to detect counterfeit notes. - The generator will sample random numbers and return them as images. - The generated image will be fed into the discriminator along other images derived from the actual data set. - The discriminator will sample both the real and the fake image and return probabilities that will range between 0 and 1, with 1 being a prediction of authenticity and 0 representing fake. GANs can be compared to other neural networks like Variational Autoencoders (VAEs) and Autoencoders. The work of the Autoencoder is to program input data like vectors. They form a hidden representation of data that is raw and can be paired with a local decoder that allows it to reconstruct input data from a hidden representation. An example of this would be a restricted Boltzmann machine. The Variational Autoencoder acts as a generative algorithm that seeks to normalize hidden representation. However, GANs generate data in granular and fine detail while Variational Autoencoder will seem to produce images that are blurred. It’s important to be able to know what GANs are used for. They are utilized to produce photo-realistic images in order to visualize interior designs for items such as shoes, clothing, and bags or in some cases for computer games. Additionally, they can also be used to make better astronomical images. GANs can also be used to enhance images by using automated texture synthesis, thereby requiring a higher image quality than anticipated. Moreover, it can be able to mimic data distribution. Furthermore, GANs can be taught to create a different world that resembles our own. Moreover, it can also be used for images, speech, music, and prose. GANs can be looked at as being a robotic artist since the output is quite impressive. In conclusion, GANs is a promising approach that will make it easier to create algorithms and analyze data. Interested in technology solutions? Contact tekRESCUE, your local San Marcos IT company, for more information.
<urn:uuid:462cc5a4-59d4-4479-b749-1abc06d6983c>
CC-MAIN-2022-40
https://mytekrescue.com/generative-adversarial-networks-gans-changing-how-data-is-viewed/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00043.warc.gz
en
0.936196
672
3.6875
4
Storage virtualization is the technology of abstracting physical data storage resources to make them appear as if they were a centralized resource. Virtualization masks the complexities of managing resources in memory, networks, servers and storage. Storage virtualization runs on multiple storage devices, making them appear as if they were a single storage pool. Pooled storage devices can be from different vendors and networks. The storage virtualization engine identifies available storage capacity from multiple arrays and storage media, aggregates it, manages it and presents it to applications. The virtualization software works by intercepting storage system I/O requests to servers. Instead of the CPU processing the request and returning data to storage, the engine maps physical requests to the virtual storage pool and accesses requested data from its physical location. Once the computer process is complete, the virtualization engine sends the I/O from the CPU to its physical address, and updates its virtual mapping. The engine centralizes storage management into a browser-based console, which allows storage admins to effectively manage multi-vendor arrays as a single storage system. The Anatomy of Storage Virtualization Storage virtualization can occur in a variety of different scenarios. Data Level: Block or File Block-based storage virtualization is the most common type of storage virtualization. Block-based virtualization abstracts the storage system’s logical storage from its physical components. Physical components include memory blocks and storage media, while logical components include drive partitions. The storage virtualization engine discovers all available blocks on multiple arrays and individual media, regardless of the storage system’s physical location, logical partitions, or manufacturer. The engine leaves data in its physical location and maps the address to the virtual storage pool. This enables the engine to present multi-vendor storage system capacity to servers, as if the storage were a single array. File level virtualization works over NAS devices to pool and administrate separate NAS appliances. While managing a single NAS is not particularly difficult, managing multiple appliances is time-consuming and costly. NAS devices are physically and logically independent of each other, which requires individual management, optimization and provisioning. This increases complexity and requires that users know the physical pathname to access a file. One of the most time-consuming operations with multiple NAS appliances is migrating data between them. As organizations outgrow legacy NAS devices, they often buy a new and larger one. This often requires migrating data from older appliances that are near their capacity thresholds. This in turn requires significant downtime to configure the new appliance, migrate data from the legacy device, and test the migrated data before going live. But downtime affects users and projects, and extended downtime for a data migration can financially impact the organization. Virtualizing data at the file level masks the complexity of managing multiple NAS appliances, and enables administrators to pool storage resources instead of limiting them to specific applications or workgroups. Virtualizing NAS devices also makes downtime unnecessary during data migration. The virtualization engine maintains correct physical addresses and re-maps changed addresses to the virtual pool. A user can access a file from the old device and save to the new without ever knowing migration occurred. Intelligence: Host, Network, Array The virtualization engine may be located in different computing components. The three most common are host, network and array. Each serves a different storage virtualization use case. Primary use case: Virtualizing storage for VM environments and online applications. Some servers provide virtualization from the OS level. The OS virtualizes available storage to optimize capacity and automate tiered storage schedules. More common host-based storage virtualization pools storage in virtual environments and presents the pool to a guest operating system. One common implementation is a dynamically expandable VM that acts as the storage pool. Since VMs expect to see hard drives, the virtualization engine presents underlying storage to the VM as a hard drive. In fact, what the “hard drive” is the logical storage pool created from disk- and array-based storage assets. This virtualization approach is most common in cloud and hyper-converged storage. A single host or hyper-converged system pools available storage into virtualized drives, and present the drives to guest machines. Primary use case: SAN storage virtualization Network-based storage virtualization is the most common type for SAN owners, who use it to extend their investment by adding more storage. The storage virtualization intelligence runs from a server or switch, across Fibre Channel or iSCSI networks. The network-based device abstracts storage I/O running across the storage network, and can replicate data across all connected storage devices. It also simplifies SAN management with a single management interface for all pooled storage. Primary use case: Storage tiering. Storage-based virtualization in arrays is not new. Some RAID levels are essentially virtualized as they abstract storage from multiple physical disks into a single logical array. Today, array-based virtualization usually refers to a specialized storage controller that intercepts I/O requests from secondary storage controllers and automatically tiers data within connected storage systems. The appliance enables admins to assign media to different storage tiers, usually SSDs to high-performance tiers and HDDs into nearline or secondary tiers. Virtualization also allows admins to mix media in the same storage tier. This virtualization approach is more limited than host- or network-based virtualization, since virtualization only occurs over connected controllers. The secondary controllers need the same amount of bandwidth as the virtualization storage controllers, which can affect performance. However, if an enterprise has heavily invested in an advanced hybrid array, the array’s storage intelligence may outpace what storage virtualization can provide. In this case, array-based virtualization allows the enterprise to retain the array’s native capabilities and add virtualized tiering for better efficiency. Band: In-Band, Out-of-Band In-band storage virtualization occurs when the virtualization engine operates between the host and storage. Both I/O requests and data pass through the virtualization layer, which allows the engine to provide advanced functionality like data caching, replication and data migration. In-band takes up fewer host server resources, because it does not have to find and attach multiple storage devices. The server only sees the virtually pooled storage in its data path. However, the larger the pool grows, the higher the risk that it will impact data path throughput. Out-of-band storage virtualization splits the path into control (metadata) and data paths. Only the control path runs through the virtualization appliance, which intercepts I/O requests from the host, looks up and maps metadata on physical memory locations, and issues an updated I/O request to storage. Data does not pass through the device, which makes caching impossible. Out-of-band virtualization installs agents on individual servers to direct their storage I/O to the virtualization appliance. Although this adds somewhat to individual server loads, out-of-band virtualization does not bottleneck data like in-band can. Nevertheless, best practices are to avoid virtualization disruption by adding additional redundant out-of-band appliances. Benefits of Storage Virtualization - Enables dynamic storage utilization and virtual scalability of attached storage resources, both block and file. - Avoids downtime during data migration. Virtualization operates in the background to maintain data’s logical address to preserves access. - Centralizes a single dashboard to manage multi-vendor storage devices, which saves management overhead and money. - Protects existing investments by expanding available storage available to a host or SAN. - Can add storage intelligence like tiering, caching, replication and a centralized management interface in a multi-vendor environment.
<urn:uuid:b492e3fa-12e7-49a2-a146-9494f68c4f42>
CC-MAIN-2022-40
https://www.enterprisestorageforum.com/hardware/storage-virtualization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00043.warc.gz
en
0.908931
1,612
3.40625
3
In computer hardware, a port serves as an interface between the computer and other computers or peripheral devices. In computer terms, a port generally refers to the female part of connection. Computer ports have many uses, to connect a monitor, webcam, speakers, or other peripheral devices. On the physical layer, a computer port is a specialized outlet on a piece of equipment to which a plug or cable connects. Electronically, the several conductors where the port and cable contacts connect, provide a method to transfer signals between devices. Port connectors may be male or female, but female connectors are much more common. Bent pins are easier to replace on a cable than on a connector attached to a computer, so it was common to use female connectors for the fixed side of an interface. Computer ports in common use cover a wide variety of shapes such as round (PS/2, etc.), rectangular (FireWire, etc.), square (Telephone plug), trapezoidal (D-Sub — the old printer port was a DB-25), etc. There is some standardization to physical properties and function. For instance, most computers have a keyboard port (currently a Universal Serial Bus USB-like outlet referred to as USB Port), into which the keyboard is connected. Physically identical connectors may be used for widely different standards, especially on older personal computer systems, or systems not generally designed according to the current Microsoft Windows compatibility guides. For example, a female 9-pin D-subminiature connector on the original IBM PC could have been used for monochrome video, color analog video (in two incompatible standards), a joystick interface, or for a MIDI musical instrument digital control interface. The original IBM PC also had two identical 5 pin DIN connectors, one used for the keyboard, the second for a cassette recorder interface; the two were not interchangeable. The smaller mini-DIN connector has been variously used for the keyboard and two different kinds of mouse; older Macintosh family computers used the mini-DIN for a serial port or for a keyboard connector with different standards than the IBM-descended systems. Electrical signal transfer Electronically, hardware ports can almost always be divided into two groups based on the signal transfer: - Serial ports send and receive one bit at a time via a single wire pair (Ground and +/-). - Parallel ports send multiple bits at the same time over several sets of wires. After ports are connected, they typically require handshaking, where transfer type, transfer rate, and other necessary information is shared before data are sent. Hot-swappable ports can be connected while equipment is running. Almost all ports on personal computers are hot-swappable. Plug-and-play ports are designed so that the connected devices automatically start handshaking as soon as the hot-swapping is done. USB ports and FireWire ports are plug-and-play. Auto-detect or auto-detection ports are usually plug-and-play, but they offer another type of convenience. An auto-detect port may automatically determine what kind of device has been attached, but it also determines what purpose the port itself should have. For example, some sound cards allow plugging in several different types ofaudio speakers; then a dialogue box pops up on the computer screen asking whether the speaker is left, right, front, or rear for surround sound installations. The user’s response determines the purpose of the port, which is physically a 1/8″ tip-ring-sleeve mini jack. Some auto-detect ports can even switch between input and output based on context. As of 2006, manufacturers have nearly standardized colors associated with ports on personal computers, although there are no guarantees. The following is a short list: - Orange, purple, or grey: Keyboard PS/2 - Green: Mouse PS/2 - Blue or magenta: Parallel printer DB-25 - Amber: Serial DB-25 or DB-9 - Pastel pink: Microphone 1/8″ stereo (TRS) minijack - Pastel green: Speaker 1/8″ stereo (TRS) minijack FireWire ports used with video equipment (among other devices) can be either 4-pin or 6-pin. The two extra conductors in the 6-pin connection carry electrical power. This is why a self-powered device such as a camcorder often connects with a cable that is 4-pins on the camera side and 6-pins on the computer side, the two power conductors simply being ignored. This is also why laptop computers usually have only 4-pin FireWire ports, as they cannot provide enough power to meet requirements for devices needing the power provided by 6-pin connections. Optical (light) fiber, microwave, and other technologies (i.e., quantum) have different kinds of connections, as metal wires are not effective for signal transfers with these technologies. Optical connections are usually a polished glass or plastic interface, possibly with an oil that lessens refraction between the two interface surfaces. Microwaves are conducted through a pipe, which can be seen on a large scale by examining microwave towers with “funnels” on them leading to pipes. Hardware port trunking (HPT) is a technology that allows multiple hardware ports to be combined into a single group, effectively creating a single connection with a higher bandwidth, sometimes referred to as a double-barrel approach. This technology also provides a higher degree of fault tolerance because a failure on one port may just mean a slow-down rather than a dropout. By contrast, in software port trunking (SPT), two agents (websites, channels, etc.) are bonded into one with the same effectiveness; i.e., ISDN B1 (64K) plus B2 (64K) equals data throughput of 128K. Types of ports Serial ATA (SATA, abbreviated from Serial AT Attachment) is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives, optical drives, and solid-state drives. Serial ATA succeeded the older Parallel ATA (PATA) standard, offering several advantages over the older interface: reduced cable size and cost (seven conductors instead of 40 or 80), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. SATA host adapters and devices communicate via a high-speed serial cable over two pairs of conductors. In contrast, parallel ATA (the redesignation for the legacy ATA specifications) used a 16-bit wide data bus with many additional support and control signals, all operating at much lower frequency. To ensure backward compatibility with legacy ATA software and applications, SATA uses the same basic ATA and ATAPI command sets as legacy ATA devices. SATA has replaced parallel ATA in consumer desktop and laptop computers, and has largely replaced PATA in new embedded applications. SATA’s market share in the desktop PC market was 99% in 2008. PATA remains widely used in industrial and embedded applications that use CompactFlash (CF) storage, which is designed around the legacy PATA standard, even though the new CFast standard is based on SATA. Serial ATA industry compatibility specifications originate from the Serial ATA International Organization (SATA-IO). The SATA-IO group collaboratively creates, reviews, ratifies, and publishes the interoperability specifications, the test cases and plugfests. As with many other industry compatibility standards, the SATA content ownership is transferred to other industry bodies: primarily the INCITS T13 subcommittee ATA, the INCITS T10subcommittee (SCSI), a subgroup of T10 responsible for Serial Attached SCSI (SAS). The remainder of this article will try to use the terminology and specifications of SATA-IO. Standardized in 2004, eSATA (e standing for external) provides a variant of SATA meant for external connectivity. It uses a more robust connector, longer shielded cables, and stricter (but backward-compatible) electrical standards. The protocol and logical signaling (link/transport layers and above) are identical to internal SATA. The differences are: - Minimum transmit amplitude increased: Range is 500–600 mV instead of 400–600 mV. - Minimum receive amplitude decreased: Range is 240–600 mV instead of 325–600 mV. - Maximum cable length increased to 2 metres (6.6 ft) (USB and FireWire allow longer distances.) - The eSATA cable and connector is similar to the SATA 1.0a cable and connector, with these exceptions: - The eSATA connector is mechanically different to prevent unshielded internal cables from being used externally. The eSATA connector discards the “L”-shaped key and changes the position and size of the guides. - The eSATA insertion depth is deeper: 6.6 mm instead of 5 mm. The contact positions are also changed. - The eSATA cable has an extra shield to reduce EMI to FCC and CE requirements. Internal cables do not need the extra shield to satisfy EMI requirements because they are inside a shielded case. - The eSATA connector uses metal springs for shield contact and mechanical retention. - The eSATA connector has a design-life of 5,000 matings; the ordinary SATA connector is only specified for 50. Aimed at the consumer market, eSATA enters an external storage market served also by the USB and FireWire interfaces. The SATA interface has certain advantages. Most external hard-disk-drive cases with FireWire or USB interfaces use either PATA or SATA drives and “bridges” to translate between the drives’ interfaces and the enclosures’ external ports; this bridging incurs some inefficiency. Some single disks can transfer 157 MB/s during real use,about four times the maximum transfer rate of USB 2.0 or FireWire 400 (IEEE 1394a) and almost twice as fast as the maximum transfer rate of FireWire 800. The S3200FireWire 1394b spec reaches ~400 MB/s (3.2 Gbit/s), and USB 3.0 has a nominal speed of 5 Gbit/s. Some low-level drive features, such as S.M.A.R.T., may not operate through some USB or FireWire or USB+FireWire bridges; eSATA does not suffer from these issues provided that the controller manufacturer (and its drivers) presents eSATA drives as ATA devices, rather than as SCSI devices, as has been common with Silicon Image, JMicron, and NVIDIA nForce drivers for Windows Vista. In those cases SATA drives do not have low-level features accessible. The eSATA version of SATA 6G operates at 6.0 Gbit/s (the term SATA III is being avoided by the SATA-IO organization to prevent confusion with SATA II 3.0 Gbit/s, which was colloquially referred to as “SATA 3G” [bps] or “SATA 300” [MB/s] since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both “SATA 1.5G” [b/s] or “SATA 150” [MB/s]). Therefore, eSATA connections operate with negligible differences between them. Once an interface can transfer data as fast as a drive can handle them, increasing the interface speed does not improve data transfer. Most newer computers, including netbooks/laptops, have external SATA (eSATA) connectors, in addition to USB 2.0 and sometimes USB 3.0 ports, though relatively few have built-in FireWire ports. There are some disadvantages, however, to the eSATA interface: - Devices built before the eSATA interface became popular lack external SATA connectors. - For small form-factor devices (such as external 2.5-inch (64 mm) disks), a PC-hosted USB or FireWire link can usually supply sufficient power to operate the device. However, eSATA connectors cannot supply power, and require a power supply for the external device. The related eSATAp (but mechanically incompatible, sometimes called eSATA/USB) connector adds power to an external SATA connection, so that an additional power supply is not needed. Desktop computers without a built-in eSATA interface can install an eSATA host bus adapter (HBA); if the motherboard supports SATA, an externally available eSATA connector can be added. Notebook computers can be upgraded with Cardbus or ExpressCard versions of an eSATA HBA. With passive adapters, the maximum cable length is reduced to 1 metre (3.3 ft) due to the absence of compliant eSATA signal-levels. A Video Graphics Array (VGA) connector is a three-row 15-pin DE-15 connector. The 15-pin VGA connector is found on many video cards, computer monitors, and high definition television sets. On laptop computers or other small devices, a mini-VGA port is sometimes used in place of the full-sized VGA connector. DE-15 has been conventionally referred to ambiguously as D-sub 15, incorrectly as DB-15 and often as HD-15 (High Density, to distinguish it from the older and less flexible DE-9 connector used on old VGA cards, which has the same E shell size but only two rows of pins). VGA connectors and cables carry analog component RGBHV (red, green, blue, horizontal sync, vertical sync) video signals, and VESA Display Data Channel (VESA DDC) data. In the original version of DE-15 pinout, one pin was keyed by plugging the female connector hole; this prevented non-VGA 15 pin cables from being plugged into a VGA socket. Four pins carried Monitor ID bits which were rarely used; VESA DDC redefined some of these pins and replaced the key pin with +5 V DC power supply. The VGA interface is not engineered to be hotpluggable (so that the user can connect or disconnect the output device while the host is running), although in practice this can be done and usually does not cause damage to the hardware or other problems. However, nothing in the design ensures that the ground pins make a connection first and break last, so hotplugging may introduce surges in signal lines which may or may not be adequately protected against. Also, depending on the hardware and software, detecting a monitor being connected might not work properly in all cases. DisplayPort is a digital display interface developed by the Video Electronics Standards Association (VESA). The interface is primarily used to connect a video source to a display device such as a computer monitor, though it can also be used to carry audio, USB, and other forms of data. VESA designed it to replace VGA, DVI, and FPD-Link. DisplayPort is backward compatible with VGA, DVI andHDMI through the use of passive and/or active adapters. Digital Visual Interface (DVI) is a video display interface developed by the Digital Display Working Group(DDWG). The digital interface is used to connect a video source, such as a display controller to a display device, such as a computer monitor. It was developed with the intention of creating an industry standard for the transfer of digital video content. The interface is designed to transmit uncompressed digital video and can be configured to support multiple modes such as DVI-D (digital only), DVI-A (analog only), or DVI-I (digital and analog). Featuring support for analog connections, the DVI specification is compatible with the VGA interface.This compatibility, along with other advantages, led to its widespread acceptance over competing digital display standards Plug and Display (P&D) and Digital Flat Panel (DFP). Although DVI is predominantly associated with computers, it is sometimes used in other consumer electronics such as television sets, video game consoles and DVD players. IEEE 1394 is an interface standard for a serial bus for high-speed communications and isochronous real-time data transfer. It was developed in the late 1980s and early 1990s by Apple, which called it FireWire. The 1394 interface is comparable to USB though USB has more market share. Apple first included FireWire in some of its 1999 Macintosh models, and most Apple Macintosh computers manufactured in the years 2000 – 2011 included FireWire ports. However, in 2011 Apple began replacing FireWire with the Thunderbolt interface and, as of 2014, FireWire has been replaced by Thunderbolt on new Macs. The 1394 interface is also known by the brand i.LINK (Sony), and Lynx (Texas Instruments). IEEE 1394 replaced parallel SCSI in many applications, because of lower implementation costs and a simplified, more adaptable cabling system. The 1394 standard also defines a backplane interface, though this is not as widely used. IEEE 1394 was the High-Definition Audio-Video Network Alliance (HANA) standard connection interface for A/V (audio/visual) component communication and control. (HANA was dissolved in September 2009 and the 1394 Trade Association assumed control of all HANA-generated intellectual property.) FireWire is also available in wireless, fiber optic, and coaxial versions using the isochronous protocols. The PS/2 connector is a 6-pin mini-DIN connector used for connecting some keyboards and mice to a PC compatible computer system. Its name comes from the IBM Personal System/2 series of personal computers, with which it was introduced in 1987. The PS/2 mouse connector generally replaced the older DE-9 RS-232 “serial mouse” connector, while the PS/2 keyboard connector replaced the larger 5-pin/180° DIN connector used in theIBM PC/AT design. The PS/2 designs on keyboard and mouse interfaces are electrically similar and employ the same communication protocol. However, a given system’s keyboard and mouse port may not be interchangeable since the two devices use a different set of commands. In computing, a serial port is a serial communication interface through which information transfers in or out one bit at a time (in contrast to a parallel port). Throughout most of the history of personal computers, data was transferred through serial ports to devices such as modems, terminals and various peripherals. While such interfaces as Ethernet, FireWire, and USB all send data as a serial stream, the term “serial port” usually identifies hardware more or less compliant to the RS-232 standard, intended to interface with a modem or with a similar communication device. Modern computers without serial ports may require serial-to-USB converters to allow compatibility with RS 232 serial devices. Serial ports are still used in applications such as industrial automation systems, scientific instruments, point of sale systems and some industrial and consumer products. Server computers may use a serial port as a control console for diagnostics. Network equipment (such as routers and switches) often use serial console for configuration. Serial ports are still used in these areas as they are simple, cheap and their console functions are highly standardized and widespread. A serial port requires very little supporting software from the host system. Small Computer System Interface (SCSI, skuz-ee) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, electrical and opticalinterfaces. SCSI is most commonly used for hard disk drives and tape drives, but it can connect a wide range of other devices, including scanners and CD drives, although not all controllers can handle all devices. The SCSI standard defines command sets for specific peripheral device types; the presence of “unknown” as one of these types means that in theory it can be used as an interface to almost any device, but the standard is highly pragmatic and addressed toward commercial requirements. **Source by wikipedia ** To Become Certified For CompTIA A+ Please Visit This Link ;
<urn:uuid:84f16d92-84a3-4999-9031-49e6248b6fb7>
CC-MAIN-2022-40
https://asmed.com/comptia-a-ports/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00043.warc.gz
en
0.923914
4,201
3.671875
4
The geographic center of the U.S. population has shifted west and south over the past 10 years to Texas County, Missouri. The Census Bureau and NOAA’s National Geodetic Survey are marking the occasion – and the spot. Over the past 220 years, the population center of the United States has shifted westward and southward as the country has grown. When it was first computed the center was in Kent County on Maryland’s Eastern Shore, about 23 miles east of Baltimore, said David Doyle, of the National Geodetic Survey. “From 1790 to 1930 there is almost a straight line going due west. Then it starts to shift, turning to the southwest.” As of 2010, the geographic center of the U.S. population shifted 23 miles, to 37 degrees 31 minutes north, 92 degrees 10 minutes west, in the northwest corner of Texas County, Missouri. On May 9, the Census Bureau and the National Geodetic Survey will mark this shift — figuratively and literally — by placing a survey marker that will become part of the NGS National Spatial Reference System. Fixing the flaws in North American maps The marker will be part of a network of 1.5 million marks making up the system used to map and chart geographic position and height in the United States. “We are trying to showcase the positioning technologies” available today, Doyle said. “The public today is vastly more spatially aware than they have ever been,” because of the growing availability of Global Positioning System data in consumer devices, from driving direction systems to smart phones. The job of computing the center of population has become more complicated over the years as the population has grown, said Paul Donlin, programmer with the Census Bureau’s Geography Division. “The formula is not complicated,” Donlin said, but the large amount of data needed to perform the calculation makes it complex. The data comes from 12 million census blocks of varying sizes, from a city block to a sizable portion of a rural county. The center of population for each block is calculated and this data is used to find the weighted center point for the entire continental U.S. population. “We need a computer to do that,” said Census geographer Ted Sickley. The job was not always done with a computer. The first such calculation was done by hand in 1880, when the center was placed about one mile south of the Ohio River, across from Cincinnati in Kentucky. Eighty years later, the calculation was done by machine, according to a 1970 Census report. “The population centers and the population counts for each of these areas were recorded on punched cards and then transferred to magnetic tape for processing through an electronic compute,” the report said. “The 'program' introduced into the computer controlled the mathematical processes which the computer executed.” Today the data is stored in an Oracle spatial database and the computation is done using software-as-a-service, Donlin said. Why go to the trouble of figuring this theoretical point? Each point is a visual expression of what people were doing at that time. “It gives us a way of characterizing the population of the U.S.,” Sickley said. “It becomes useful when you look at it over time.” From 1850 to 1860, there was a big jump westward from West Virginia to Ohio, reflecting the westward expansion of the country as new states were added and settled. In 1870, a slight shift to the north illustrated the migration to northern cities following the Civil War. From 1890 to 1940, movement slowed and the center remained in southern Indiana, reflecting the large European immigration into eastern cities. A southwesterly arc since then reflects the growth of the sunbelt states. The National Spatial Reference System consists of about 1.5 million passive markers installed by the NGS over the last 200 years, as well as about 1,700 Continuously Operating Reference Stations that provide streams of real-time GPS data. The system is used by surveyors to accurately measure and chart the United States. Since 1960, the NGS has marked the current population center with a commemorative survey marker. Until 1990, the mark was placed within a few centimeters of the actual point, Doyle said. “That’s nice, but they’re out in the middle of a forest somewhere,” he said. Since 1990, the markers have been placed in the nearest incorporated community. This year, it will be next to the Post Office in Plato, Mo., (estimated population 1,430 in 2000), about three miles from the actual location of the population center. NEXT STORY: Red Hat expands its cloud formation
<urn:uuid:7a8caad4-da8e-4b68-a124-b3e13b0ac062>
CC-MAIN-2022-40
https://gcn.com/cloud-infrastructure/2011/05/center-of-us-population-shifts-again-and-census-is-on-the-trail/282676/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00043.warc.gz
en
0.94907
991
3.75
4
This section lists the settings found in the Output behavior Properties tab, in the System task. In the Properties tab, you can configure the output signal pattern. - Output type - Choose the output type. - Sets the circuit’s state to open or closed. - Sets a pulse to be generated. - Sets a cyclic output to be generated. - The delay before the pulse or periodic output is generated. - The duration (in milliseconds) of the pulse. - Select this option if the periodic behavior should continue until it is told to stop by another output behavior. - Duty cycle - The ratio of the output signal pattern pulse width divided by the period. - The time for one complete cycle of the output signal pattern.
<urn:uuid:4efb291c-5b3f-4f26-9aa5-1dbb1eb3dc2b>
CC-MAIN-2022-40
https://techdocs.genetec.com/r/en-US/Security-Center-Administrator-Guide-5.9/Output-behavior-Properties-tab
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00043.warc.gz
en
0.711006
180
2.515625
3
In 2016, the infamous Mirai botnet briefly derailed the internet, shutting down Dyn servers responsible for internet traffic and much of the internet’s infrastructure. Despite the scope and consequences of the global attack, botnets remain on the periphery of general security awareness. Even so, Botnets swamped the 2018 Verizon Data Breach Report by order of magnitude: “attacks on web application authentication mechanisms driven by banking Trojan botnets happen—a lot. Had we included the almost 40,000 of them as part of the analysis, nothing else would come to light.” What are botnets? Botnets, a portmanteau of robot networks, describes a system interconnected computers that can be used to perform various tasks, typically for nefarious reasons. Users catch botnets like a cold. Much like the invisible spreading of germs, botnets infect computers and IoT devices—smartwatches, smart refrigerators, PlayStations, even Teslas—through malware Trojans designed to install the malicious code that allows the hacker to spawn and control their bot regime. This code is often shared through peer-to-peer services like Google Drive or SharePoint, but can take the form of adware, files on torrent sites like Putlocker, and pornography clips. From there, the bad actor enlists the device in a somewhat anonymous army designed to co-opt devices and deluge firewalls, networks, load balancers, DNS servers, and applications. Botnets have evolved to the point that the bot herder can use any bot to control the botnet, so they no longer has to risk using their personal device as a command and control server to coordinate the attacks, which can be readily traced by law enforcement. The dispersion and mass of the chaos botnets wreak on the internet is precisely what makes them so successful. Why do botnets matter? Disruptive botnets can shut down entire websites and, potentially, the internet itself. They’re difficult to secure against because they exploit browser security vulnerabilities, forgotten plugins, operating systems, outdated hardware/software, and unwitting users. Conceptually, they challenge widely held ideas about ownership of and responsibility over devices. If an unsuspecting user is compromised and their device contributes to a large-scale click fraud campaign, then are they, too, responsible for this and the other cyber crimes that bot herders execute? This network of infected devices can be thought of as the composite of other attack types, such as ransomware, malware, phishing, and keylogging, to name a few. As such, botnets have many uses. In simpler cases, botnets can hijack email to disseminate spam. They can be used to generate fake internet traffic on a third-party website for financial gain. On the darknet, botnet masters can sell or rent their zombie army to other hackers to control at their discretion—typically to spread ransomware, perform cyber attacks, or harvest personally identifiable information. They can also be used to mine cryptocurrency. Botnets can even skew election polls. They are most commonly known for DDoS attacks, though. Common botnet attacks and how to prevent them The profitability and multi-vectored nature of botnet attacks make them a hacker favorite. For even the savviest users, botnets are difficult to detect. A frequently unresponsive browser or spike in error reports may be a sign of botnet compromise, but not necessarily—especially for many IoT devices, where the user interface is less interactive.Therefore, a preemptive defense strategy against botnets is the most effective option for how to prevent attacks Hackers use email to spread malware like a plague. The unsolicited attachments or links that often come with these emails are basically invitations to join a botnet. Credential harvesting trojans, such as spoofed login pages for Google Drive, PayPal, or Netflix, are the driving force behind the continued success of this method. Once the account has been compromised, worms and drive-by downloads can also be spread from that account. Add multi-factor authentication to your email, then guard it even further with an authenticator app. This basic and automated protection shields you from the mass identity theft botnets are often designed for. Avanan's email security scans and sandboxes all incoming files to detect malware, effectively blocking the initial entry point for botnet hackers. Botnets can install keylogging programs onto your device. Still, recording every keystroke on thousands of devices would render that pursuit meaningless. That's why hackers filter their keylogging malware to detect words like "PayPal" or "login." From there, hackers can instantly capture credentials that open the floodgates to a wealth of valuable PII. It is exceedingly likely that a user would have the same credentials for PayPal as they would for their Bank of America app. Installing a password manager like LastPass that encrypts, stores, and generates passwords for you guarantees that keylogging scripts will not surface valuable credentials. As a result, hackers will never see your credentials because you never have to type passwords into your keyboard. Installing Advertisement Add-Ons Botnets capitalize on human error and angst. A host of plugins float around the internet, waiting for users to integrate them into their browser. Frequently, these plugins claim to be helpful products like malware scanning tools, offering to prevent the very kind of attacks they execute. They might also take the form of services unrelated to device security, such as the oddly bland advertisement shown below. Prevention for this breed of attack is simple: perform a Google search for the product being advertised in the ad banner to confirm its existence and validity. To add another layer of defense, consider switching to a browser like Duck Duck Go, which is specifically designed to cloak you from the prying eyes of advertisers. After downloading the extension, you will no longer see the relentlessly annoying ads that flood your favorite sites. Researchers at University of Twente in the Netherlands found that "the most profitable undertaking is click fraud, which generates well over $20 million a month of profit." Bot herders frequently create fake websites on which to host advertisements for third-parties for profit. For every click on an advertisement executed by a bot in a botnet, botmasters accrue a percentage of advertising fees since the advertiser rewards the publisher for engagement with their content. Securing your Wifi with a strong VPN creates an encryption tunnel that is almost impossible for hackers to penetrate. It’s no secret that choosing the best VPN service is difficult. When choosing a VPN, it is important to stop and think a little about the reasons why you need a VPN in the first place. One that unblocks streaming sites might not be sufficient for a tech-savvy user with exotic needs. The list from Cybernews, sums it all up nicely. How to identify if your device is part of a botnet Consult the following sites to quickly determine if your IP address is part of a botnet: How to remove yourself from a botnetIf you have found that your IP address has been used in a botnet attack, here are some methods to restore the privacy and integrity of your devices. - Reset your router. - Switch to alternative DNS providers, like OpenDNS. - Download anti-virus software. - Remove factory-installed passwords on IoT devices and routers and replace them with long, strong passwords that use a balance of numbers, letters, and special characters. - Upgrade your router, even if it is working fine. New routers have more robust security protocols. Despite the immense power that botmasters wield over their zombies and the internet itself, botnets rely on common hacking methods to gain footing. Most botnet attacks can be avoided using basic security practices. For general defense against botnets, invest in a firewalling router and don't click on pop-up ads, suspicious email attachments, or unsolicited software downloads. Organizations that are serious about security should use more advanced cloud security tools, such as Avanan's cloud security platform, to prevent botnet attacks.
<urn:uuid:42243ad4-9fc3-473b-95c4-0595764bde0a>
CC-MAIN-2022-40
https://www.avanan.com/blog/what-are-botnets-and-how-to-defend-against-them
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00043.warc.gz
en
0.931086
1,687
3.25
3
The Java "sandbox" is the term used to refer to the protected area used to run Java applets. Basically, the applet is prevented from leaving the "sandbox". The idea is to prevent a malicious applet from doing whatever it wants to your local PC. HTH, Charles > -----Original Message----- > From: Lim Hock-Chai [mailto:Lim.Hock-Chai@xxxxxxxx] > Sent: Friday, September 10, 2004 2:25 PM > To: Java Programming on and around the iSeries / AS400 > Cc: java400-l-bounces@xxxxxxxxxxxx > Subject: RE: MVC and applet > > > Thanks for the input. I'll take all the problems you > mentioned into consideration. > > I would like to first mention that I'm new in java and web > programming. What I'm stating below might not be any good :(. > > The problem I've with browser base application is that the UI > is not as good, compare to PC type application. The > objective is to modernize our legacy green screen > application. This application is primary use by internal > user and most of them probably using the same application all > the time. A non user friendly UI will not provide any > benefit for the user. If all I want is to allow application > to run in the browser, I could just use tool like > Webfacing/HAT to covert those applications. > > The reason I mentioned applet is because it allow me to use > Swing/AWT ect, which provide a better UI (I think anyway). > However, I've heard some of the horror story about applet. > That is the reason I post this message. To see if somebody > have done that and what are their thought. > > Question: > What is a JVM sandbox? > > > > > -----Original Message----- > From: java400-l-bounces@xxxxxxxxxxxx > [mailto:java400-l-bounces@xxxxxxxxxxxx]On Behalf Of > Niall.Smart@xxxxxxxxxxxxxxx > Sent: Friday, September 10, 2004 12:26 PM > To: Java Programming on and around the iSeries / AS400 > Cc: Java Programming on and around the iSeries / AS400; > java400-l-bounces@xxxxxxxxxxxx > Subject: RE: MVC and applet > > > Lim > > Applets can be a pain. You have all hassle of keeping client > JVM's up to > date, operating within the JVM sandbox, programming Swing/AWT > etc, etc. I > would prefer to build a browser based web app using J2EE if I had the > choice. > > Niall > > > -- > This is the Java Programming on and around the iSeries / > AS400 (JAVA400-L) mailing list > To post a message email: JAVA400-L@xxxxxxxxxxxx > To subscribe, unsubscribe, or change list options, > visit: http://lists.midrange.com/mailman/listinfo/java400-l > or email: JAVA400-L-request@xxxxxxxxxxxx > Before posting, please take a moment to review the archives > at http://archive.midrange.com/java400-l. > As an Amazon Associate we earn from qualifying purchases. Operating expenses for this site are earned using the Amazon Associate program and Google Adsense.
<urn:uuid:ba35dc41-dde7-4ac5-97d4-cc8986138fc9>
CC-MAIN-2022-40
https://archive.midrange.com/java400-l/200409/msg00037.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00043.warc.gz
en
0.890192
796
2.609375
3
Phishing scams are notorious for being one of the most effective ways for “threat actors” (or hackers) to steal valuable information. The motivation behind this type of attack is that it’s quick, easy, and can net a significant amount of money through very little effort. Threat actors have historically used an evolving variety of phishing tactics and these tactics are expected to continue to evolve and become more sophisticated in the future. This is why it’s more important than ever to recognize these attacks and know how to avoid them. In this article, we’ll take a look at some of the most popular social engineering scams today, as well as anti-phishing tips your business can use to stay protected. Types of Phishing Scams It’s a common misconception that phishing attempts to a business email will always relate to business, and threat actors know this. For example, if a CFO receives an email regarding cutting a check for something she doesn’t normally do, she will likely recognize it as a phishing attempt (or at least ask her colleagues). However, if she is a wildlife advocate, and receives an email at her business email about an upcoming local event regarding anti-poaching legislation, she is likely more apt to click on a button. This tactic involves posing as an affiliate of a legitimate company, belief system, political party, etc. of the person being targeted. For example, someone who has an interest in protecting wildlife may begin receiving messages from a fictitious conservation group that has been targeted by poachers. Some of these messages may include links that, when clicked, will install malware onto the computer. Business Email Compromise A type of scam where a spoofed email (an email which has been manipulated to seem as if it came from a trusted source) is sent to an individual within the company asking for their help with a purchase order or invoice payment. The request then asks them to send payment or bank account information. This attack is most commonly aimed at financial or accounting divisions of an organization, so it’s especially important to train individuals in those divisions about this type of phishing attempt and have procedures in place that help to prevent successful phishing attempts. Social engineering attacks via calls or voicemails. The scammer uses spear phishing tactics to call employees or leave voicemails asking for money or information. Social engineering attacks sent via text. SMS offers a faster and easier way for cyber attackers to get people to click on malicious links without having to worry about people reporting scam emails. New Phishing Tactics from 2021 In recent years, phishing has shifted away from the traditional email scams in favor of more high-tech, low-effort attacks. Some of these new tactics include: Crypto Payment Scams: This is a new spin on scams asking people to pay with cryptocurrency. An impersonator (acting as law enforcement, a utility company, etc.) calls and asks you to send money. They direct you to go to a store that has a cryptocurrency ATM and buy cryptocurrency. Once you’ve bought it, they send you a QR code with their address embedded in it, asking you to scan the code to transfer the cryptocurrency. Once you scan it, they’ve stolen your money. Health/Vaccine-Related Scams: Many scammers are mimicking healthcare organizations, such as the CDC or NHS, to send emails discussing COVID-19 vaccines, testing, and other health-related information. They will include a malicious link under the guise that it will allow you to order a test or schedule a vaccine appointment. IT Support Scams: With the vast number of remote employees in today’s business landscape, it’s no surprise this scam has gained success. Scammers will impersonate an IT department or support personnel from a company such as Microsoft 365, requesting someone to fill out a form with their personal information in order to correct a software issue or update their account. Often, the scammer will include a malicious link in the email, which—when clicked on—will give them access to your computer. This scam may also come in the form of a fake invoice. For example, someone impersonating a Cisco support team member may send a link for you to click and pay for a service you never received. Often, finance departments may not be aware of all the bills incurred by other departments in the company, so they can be more susceptible to falling for this kind of phishing attempt. Online Ordering Scams: The pandemic has caused a huge spike in online ordering, which scammers are using to exploit online shoppers. Phishers send “shipped” or “missed delivery” notifications via text or email with malicious links attached. These attacks often seem legitimate because most people are, in fact, expecting a delivery. They may also send requests to update payments, check account information, or view falsified order confirmations. CTA: Top 10 Red Flags of a Phishing Email Security Measures to Protect Your Business The best way to protect yourself against these types of cyber attacks is to stay informed and be proactive in your prevention techniques. Here are some tips for protecting your business against phishing attacks: Basic Phishing Security Tips - Implement MFA on all your accounts and devices and require the same for your employees. Multi-factor authentication is nearly 100% successful, as long as you never give out codes or other verification information to anyone. - Never respond to any suspicious messages or requests for personal information. Sometimes scammers will use pressure tactics, such as threats of bringing down systems or changing account settings if you don’t comply. - If the text or email is requesting verification of personal information, log into your account directly to determine whether information needs to be verified. Most legitimate companies will never ask you to send information via text or email. Many companies, including Amazon, even post information on their websites on exactly how to discern whether an email or call is really from them. - Do not click on any links or attachments when you aren’t 100% sure who sent them. Advanced Phishing Security Tips - Monitor employee access and device usage, which includes monitoring cell phone text messages. This will give you or your IT company insight into what kinds of social engineering attacks are being used within your company. - Implement email spam filtering, which can filter out suspicious URLs or attachments before they reach your inbox. Keep in mind, although Microsoft and Google provide some level of protection, it’s always wise to opt for an additional layer of spam filtering. For example, at AIS, we use Proofpoint to protect our clients’ inboxes. - Keep all business data encrypted in a secure storage location (whether onsite, or in the cloud). If a threat actor does penetrate your systems, your data won’t be readable. Security Awareness Training Another one of our phishing tips to keep your business safe is to offer Security Awareness Training to train your staff to spot and avoid social engineering attacks. An IT provider can set your team up with advanced training tools to help you create a culture of cybersecurity within your company. Security Awareness Training platforms engage employees in interactive training programs that can teach them how to recognize and address phishing attacks. - Gamification with points and rewards to encourage positive participation - Simulated phishing attacks based on current threats - Quiz sessions to test comprehension Security Training platforms help you monitor how employees are performing and get notifications for concerns you need to be aware of. - Tracking and reporting for employees - Account takeover monitoring that allows you to identify compromised accounts - Instant notifications when company credentials are found on the dark web These platforms also allow you to measure the effectiveness of training programs for your team and make improvements. - Regular progress reviews of your cybersecurity training - An overview of key metrics, such as employee participation, quiz scores, and phishing test results How AIS Can Help You At AIS, we offer comprehensive security solutions, including email spam filtering and Security Awareness Training, for businesses—so you can feel more confident about your security posture. We understand how important cybersecurity is to business owners, which is why our team of experts are dedicated to helping companies improve their online security. Contact us for a free consultation today!
<urn:uuid:fe40701a-6862-4e9c-8572-72a8f3d12a89>
CC-MAIN-2022-40
https://aisllp.com/cyber-security/tip-guide-is-your-business-prepared-for-phishing-scams/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00244.warc.gz
en
0.939914
1,739
2.546875
3
This article is an introduction to different default gateway solutions. Those technologies are enabling devices on IPv4 local subnets to have more than one Default gateway configured or at least some configuration that make them work half the way of ideal redundant solution. Idea behind this article is to be an introduction to a set of articles that will explain different redundancy solutions based on IPv6 technology. Some of those technologies, will be used in future and some of them already existing and suggested to be used from day one on IPv6 implementation. Default gateway is the next hop address of the device that leads the packets out of the local LAN segment. If there are packets destined to an IP address that is not from local subnet PC will forward those packets usually to router device that will have the information where to forward those packets in order to get them transferred towards the destination. IP hosts have different ways of deciding which default router or default gateway they will use. Some of the methods are DHCP, BOOTP, ICMP Router Discovery Protocol (IRDP), manual configuration, or sometimes by routing protocol. Though is not normally usual that hosts are running routing protocols, it can be done. Most frequent method is DHCP because is automatic and there is one DHCP server on almost every user LAN segment. The other usual solution is manual configuration that is basically typing the IP address of the default gateway into device. Result with manual configuration is of course in the host knowing a single IP address of its default gateway. Redundant Default Gateway solutions The fact that there can be only one default gateway IP address configured on almost every device in the network it’s sometimes a limitation. It basically makes network hosts completely reliant on only one router when communicating with all nodes that are not on the local subnet. There is no redundancy and that’s the issue. But I have two routers that can be Default Gateway for the subnet?!? You have a possibility to configure DHCP server to give to the host two different default gateway IP address. It can be done by defining two pools if IP address from one subnet. Let’s say that you have 172.16.20.0/24 and there are R1 with 172.16.20.1 and R2 with 172.16.20.128 routers on your LAN segment edge. You can split the scope to two subnets 172.16.20.1-172.16.20.127 -> 172.16.20.0/25 and other one 172.16.20.128-172.16.20.254 -> 172.16.20.128/25 and then give to first one the router option of R1 and to other scope the router option of R2. It would mean that on your /24 subnet some devices will receive R1 IP address as their Default Gateway and some others with get R2 IP address for their Default Gateway. If one router goes down at least half of devices will still be able to reach outside networks across R2. This is not really a redundant solution, but is something close to that. The real solution VRRP, HSRP, GLBP i.e. Virtual Router Redundancy Protocol, Hot Standby Router Protocol and Gateway Load Balancing Protocol represent protocols that are making default gateway redundancy possible. Issues related to a host knowing a single IP address as its path to get outside the subnet. You configure one IP address on all devices on the subnet and then two routers/L3 switches in VRRP,HSRP or GLBP configuration will work together to act as a single device using different techniques. VRRP and HSRP will do Active-Passive configuration and GLBP will also have a possibility to work in Active-Active config. (More about those protocols in a separate article) IRDP – ICMP Router Discovery Protocol enables computers inside local LAN to find all routers that can be used for default gateway purposes. If devices running IRDP runs in router mode, router discovery packets are sent to the LAN. If devices running IRDP runs in host mode, router discovery packets are received.
<urn:uuid:031d2396-0d2b-46b6-b205-454437531745>
CC-MAIN-2022-40
https://howdoesinternetwork.com/2014/redundant-gateway-solutions
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00244.warc.gz
en
0.913014
846
3.234375
3
As schools continue to prepare for the adoption of the Common Core State Standards (CCSS), teachers will be looking for new tools to help guide their students and reinforce key CCSS skills. There are a number of ways educators can do that, but perhaps the most engaging solution is math-based games in the form of apps. With personal devices in hand, students can open themselves up to a variety of learning opportunities personalized to their specific needs. This roundup covers 11 math apps that all align with the CCSS. These apps reinforce a variety of topics for students of different ages. From geometry and trigonometry to algebra and basic counting, these apps cover it all. They also provide an element of fun that inspires students to want to learn.
<urn:uuid:797bc6aa-0469-4a31-a76c-9e48d910022b>
CC-MAIN-2022-40
https://mytechdecisions.com/compliance/11-common-core-math-apps/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00244.warc.gz
en
0.956288
150
3.3125
3
Viewing an article in the news about Pegasus spyware, I read that one of the functions of Pegasus was to steal passwords. I have often seen people store passwords in Excel, Notepad and in their browsers. In addition, I often see passwords being sent by E-mail, text messaging, WhatsApp and other social media communications platforms. According to Morning Brew: The [Pegasus] spyware unlocks root access to a device, meaning the user can see the target’s emails, call logs, social media, passwords, pictures, video, sound recordings, and browsing history—including apps with end-to-end encryption, like WhatsApp and Signal. When you save your passwords digitally, they are susceptible to being discovered by malware. Why is Storing or Saving Passwords in a Browser NOT Secure? If you save your passwords in your browser, a person may be able to sit at your desk, visit a Web site such as Gmail or your bank and access your account without having to know your password, because it’s saved. What’s worse is that the same person can view your saved passwords relatively easily and without having any advanced computer skills. Here’s how easy it is to view passwords saved in Firefox. Open the Menu, and select Preferences. Click Privacy & Security (from the left pane). Scroll to Logins & Passwords. Click Saved Logins. Select the site from the left pane. Click Show Passwords. It’s just as easy on other browsers! Anybody with access to your computer can easily view your saved passwords. If it’s this easy for someone with little or no IT skills, imagine a hacker with advanced computer skills and remote access to your device or a piece of malware programmed to do the same. If you don’t want your browser to ask you to save the password, you can disable it in the settings. In Firefox for example, go to settings -> security and uncheck the option ‘ask to save passwords’. What’s more, since people oftentimes use the same password (or variations of the same password) across multiple sites, the passwords gleaned from your browser can then be used to seed a brute force attack algorithm and get into other systems. What’s the Best Place to Store Passwords? The best place to store passwords is in your memory. Since this is unlikely given the vast amount of passwords we need to remember, the second best place is on a sheet of paper, stored in a combination safe or in a locked drawer. Passwords should be long, at least 14 characters and should not contain dates, names of your family members or pets, since this information can often be found on social media. Contrary to popular belief, passwords should not be unintelligible characters and numbers. They should be changed frequently, giving you the opportunity to memorize them and then scratching them from the list! Never use names or dictionary words followed by dates as passwords. Never use personal information than can be easily obtained from social media such as family member names, pet names, birth dates and hobbies. Cracking algorithms can quickly deduce these types of passwords. A good strategy for lengthy, easy to memorize password is to use phrases, separated by special characters. For example: I*LOVE*Breaking*Bad*yo! Although it seems lengthy and difficult to memorize, it’s not! I guarantee you will remember it after only a few logins. Additional Password Safety Tips - Set a screen saver inactivity period of 15 minutes or less to lock your computer in case you leave and forget to log off. - Use lengthy password phrases and store them under lock and key until you memorize them. - Don’t store your passwords on any electronic device and don’t save logins on your browser. - Check your passwords periodically against dark Web database dumps to see if any services you use have been hacked. - Don’t re-use passwords or use variations of the same password across multiple sites and services. - Avoid using password hints. If required, use incorrect information that cannot be obtained through public records or social media. - Secure as many services as you can with multi-factor authentication.
<urn:uuid:f4f22d68-f00c-4837-b5d8-906bf756fd36>
CC-MAIN-2022-40
https://www.falconitservices.com/never-let-your-browser-store-your-passwords/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00244.warc.gz
en
0.920559
893
2.65625
3
Over the last 12 months, we have seen a large number of cities in the USA become the victim of sophisticated ransomware attacks, including New York, Washington, Atlanta, Maryland and more. Baltimore, the most recent victim in this wave of ransomware attacks, experienced their second attack in the last 12 months. Thousands of computers in Baltimore’s city government were completely frozen on the 7th May 2019 after criminals got their hands on them. The attackers demanded 13 Bitcoin, which at the time equated to approximately $114,000, to unlock all of the computers. Authorities refused to pay this ransom. The disruption that ransomware can do to city governments and residents is drastic. Locals in Baltimore were unable to pay their bills, parking tickets and taxes. Many were unable to send or receive emails. The most annoying thing? It could easily have been avoided by simply patching Windows machines to fix a flaw in EternalBlue (a known hacking vulnerability) that was released two years prior to the breach. Why Are Governments at Risk? There is no one industry or vertical that is safe from ransomware attacks. If you are targeted, then it is likely the attacker will find a way in. Governments, however, appear to be slightly more at risk due to their importance within the local community. Government bodies have critical services that they must offer to their residents, and downtime can seriously affect their ability to do that. Another possible reason that governments appear to be targeted more often is because they lack the fundamental resources to secure their data and infrastructure. How Do Ransomware Attacks Start? The vast majority of ransomware attacks initiate through phishing emails that rely on the carelessness of users. Hackers know that if they send enough emails that look legitimate, they will eventually get a user to click on a malicious link or open a compromised attachment. It’s tempting to assume that most people know how to spot and ignore phishing emails, but research presented at Black Hat USA back in 2016 suggests the opposite. Despite a widely acknowledged increase in cybersecurity awareness, a staggering 45% of participants clicked on the malicious link. Despite this, only 20% of people admitted to clicking the link when questioned about it. What does this tell us? It tells us that trust is not a security strategy. We cannot trust our users to notice and ignore a phishing email. There must be something else we can do. How Governments Can Defend Against Ransomware The vast majority of ransomware attacks occur due to privilege misuse or abuse. One simple solution could be to remove local admin rights from endpoints likely to be targeted. By effectively managing permissions and using a Data Security Platform to get visibility into the changes being made to data and infrastructure, you can likely detect and prevent ransomware attacks before they cause real damage. Many Data Security Platforms enable you to perform threshold alerting, which can alert when a large number of changes occurs over a short period of time (a common occurrence with some forms of ransomware). Such solutions can then execute an automated script upon receipt of these alerts that can shut down a computer or server.
<urn:uuid:c44ef3cc-47c2-450f-be9a-6449a0cb8a75>
CC-MAIN-2022-40
https://www.lepide.com/blog/protecting-cities-from-ransomware-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00244.warc.gz
en
0.957535
628
2.828125
3
Leveraging machine learning in data management allows businesses to better utilize the data they track. The scale of modern data management would not be possible without the application of machine learning, making it critical that organizations understand the synergies between them. Companies are able to track and compile huge sums of data that then allow businesses to draw conclusions about the most important aspects of their products or services. Whether that’s learning exactly how many resources should be poured into which efforts or which teams are better working on different kinds of problems, good data management allows you to properly wring out every ounce of benefit from the data being tracked by your company. The limitations of data management quickly become apparent as an organization gathers more data. That’s where machine learning comes in. Keep reading to gain a better understanding of the various aspects of machine learning that are used in data management today. What Are the Limitations of Data Management? First, understanding data management itself is essential for discussing how machine learning can improve data management. Data management involves intaking, organizing, storing, and maintaining data. Every organization uses data management to some extent, but different companies and industries track different data depending on their goals and aspirations. As a business scales, you begin to see the disorganization of data at an increasing rate. The potential for error increases as more hands are involved in the data management process. Collection errors can occur and data becomes less organized. As a result, your data is less accessible as well, which compromises the intended purpose of collecting this data in the first place. To solve this problem, new tools, systems, and processes are implemented as data management efforts. You have to change your system when you grow, as well as when you introduce new metrics to track. Data management systems have a certain potential for growth, but they have to be actively adjusted to ensure proper data management. Good data management encompasses all of these situations, and great data management stays a step ahead of the process. Part of the difficulty in data management is the employee power required for processing large amounts of data, as well as inferring conclusions from this data. Processes can help, but only to a point. Data management can quickly overwhelm a full team without the proper tools, which is where machine learning in data management begins to become more important. Data management teams use artificial intelligence to allow employees to spend less time doing manual data management and more time drawing conclusions from what information the AI has been able to extract. Great data management is both knowing where to apply your efforts as well as knowing how to apply AI in order to manage data more efficiently. How Does Machine Learning Interact with Data? Machine learning applies AI without the need to actively program a system. Instead, algorithms continue to generate and build off of new data, which simulates “learning.” Early adaptations of machine learning processed data sequentially, but new approaches are making big strides in using semantic analysis to model the way the human brain processes information. This creates a much more “human-like” experience, and the advantages of learning machine learning are quickly increasing. Different methodologies can be used in machine learning. Supervised learning and unsupervised learning are two critical types of machine learning that involve pre-existing datasets to find patterns and new data. Supervised learning uses a labeled dataset that acts as a baseline or training dataset for the machine learning algorithm. As the testing dataset is correctly interpreted, the machine gets better at interpreting future inputs, which simulates learning. The more data that is mined by a supervised machine learning algorithm, the more accurate it can become. An example of this is text generators on your phone. Your phone will begin to pick up your typing patterns and infer what you’re intending to say based on how often you type on your phone. The more you text, the more these algorithms can compare what you actually typed to what it predicted you would type. Unsupervised learning utilizes unlabeled data, in contrast to supervised learning. Unsupervised learning is typically utilized to detect patterns, gain insights, or highlight what is different between different sets of unlabeled data. An example of this is the large “recommended” sections on shopping websites. Recommended items are generated by unsupervised algorithms that detect patterns of similarity between things you may be interested in purchasing. By doing this, you may be interested in a slightly different item, and these algorithms can keep a customer in a sales funnel if they properly suggest similar items. At its root, machine learning is used to analyze data, and the way this is done changes depending on your goal. How Does Machine Learning in Data Management Enhance Efficiency? Data architecture is the structure applied to data assets in the data management system. Raw data is not very useful by itself, so some kind of structure and interface must be applied to it. Data architecture takes the needs of the business and then applies a structure that allows the data to be easily accessed. This is done through data flow management and storage. Machine learning vastly improves the architecture options available. Because of the needs of machine learning, using multiple architectures (also called a hybrid architecture) allows machine learning to utilize many different data types. This opens up possibilities of data utilization that would otherwise be inaccessible. Data governance ensures the security and trustworthiness of accessible data. It ensures that data is usable and useful through security measures and verifying continuity between data. Without data governance, data inconsistencies will not get resolved and the consistency between datasets is diminished. Data is sometimes recorded inaccurately, and each department in a business might intake data differently for the same customer. If this is happening, data silos can begin to arise as each department stores its own data. This causes unnecessary complexity and discontinuity between these departments, ultimately obscuring the collected data. Machine learning simplifies data governance in many ways. One of the most significant is by sorting out the inconsistencies in similar entries. In huge datasets, it can be difficult to pick out errors in data, but artificial intelligence is good at quickly sorting out input errors or multiple entries. In general, machine learning in data management means much cleaner and more unified datasets. Good data management ensures that data is stored effectively. In many industries, data has to be stored in a way that is compliant with government regulations. Data must be stored under various security measures and with a specific organizational method. Additionally, the needs for data may change over time, so data storage must be continually evaluated to ensure that data is being stored in the best way possible that allows for the most efficient use. Machine learning becomes more useful as the amount of data needing to be stored increases. One of the ways that machine learning can help is through tagging data. Unsupervised learning is very good at finding similarities between different data. When patterns can be recognized, data can be categorized and tagged based on specific attributes. That makes data very easy to extract value from, since tags are theoretically limitless. Data security is the process by which data is protected from theft or corruption. Machine learning is improving various aspects of security, such as making it easier to identify malware and spyware threats. By using supervised learning, a training dataset of malware and spyware threats can be used to quickly detect similar threats. Security is also improved by machine learning in cloud storage. Since the cloud is shared by many users, machine learning can be used to quickly detect anomalies in user activity. Machine learning can alert you to users accessing private data, large file downloads, or unusual login attempts. Data analysis is ultimately the activity that is done with the data available. Analyzing data means inspecting it, applying statistical or logical techniques, manipulating, or modeling in an effort to extract conclusions that can then be used as business intelligence. The analysis is the final step of the process in making the data you’ve collected work for you. At its core, machine learning is designed to improve data analytics. Data paints a picture, and machine learning is used to reveal patterns in the data. Analytical tools, with machine learning integration, can now do the heavy lifting associated with data analytics. Traditional data analytics involved humans who would manually process and analyze data in a way that was directed by an initial hypothesis. This was not comprehensive, and the analyst could only analyze the data so much due to time constraints. With machine learning, data analytics is much more comprehensive. AI is able to show a much clearer picture of what exactly is going on in the data. That allows businesses to make better decisions informed by more accurate data and wider insights. What Is the Future of Machine Learning in Data Management? As machine learning advances, its capabilities in data management are sure to expand. Machine learning can automate data management through analytical model building in which the models are trained to identify patterns in the data. While more mathematical techniques may evolve, according to our AI Research Scientist, Gene Locklear, the biggest movement forward will be in the development of new ways to distribute the model deployment across multiple processors or servers. Currently, the biggest limitation in using machine learning for data management is the immensity of the datasets. This leads to a model training time so long that by the time the analysis is complete, the analysis may no longer be relevant. Data scientists combined with network architects are the new paradigm for machine learning and all other subsets of AI as well, Locklear argues. We can easily think of mathematical processes that we can use to glean information from large datasets. But the time it takes those models to “ingest” these large datasets is a critical bottleneck. Developing more advanced techniques for this is the future of machine learning in data management. Contact Sentient Digital to Use Machine Learning in Data Management Looking for data management? Sentient Digital offers technology solutions and services in cloud, cybersecurity, software development, systems engineering, and integration. With a team of multidisciplinary engineers and machine learning experts, we can help you make the most of machine learning in data management. Contact us today to discuss your needs in data management and IT solutions.
<urn:uuid:7c1c497b-fe10-444f-912c-45dfa39e778f>
CC-MAIN-2022-40
https://sdi.ai/blog/machine-learning-in-data-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00244.warc.gz
en
0.935397
2,075
2.875
3
Voice over Internet Protocol (VoIP) is essentially a telephone connection over the Internet. The data is sent digitally, using the Internet Protocol (IP) instead of analog telephone lines. This allows people to talk to one another long-distance and around the world without having to pay long distance or international phone charges. This can be done through various applications, such as Skype, WhatsApp, Google Hangouts, Zoom, and Facebook Messenger. SMBs tend to value simplicity, and that’s why many are making the switch from traditional phone systems to voice over internet protocol (VoIP). The traditional phone system has had a presence in the business world for decades, VoIP technology has exposed its age by wrapping most of the same features in a simplified, cloud-based package. When one option is faster to set up, easier to maintain, and more affordable and feature-packed than the alternative, any company would be remiss not to explore a potential upgrade. In other words, VoIP technology doesn’t just bring numerous advantages to the table, in many ways, it represents the next step for small business communication.
<urn:uuid:2c657ae8-54e1-4a88-8a5f-723ab9dccb25>
CC-MAIN-2022-40
https://cyberhoot.com/cybrary/voice-over-internet-protocol-voip/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00444.warc.gz
en
0.935262
227
2.578125
3
Phishing is an online scam where cyber criminals send messages that appear legitimate to get the recipient to click a link and enter confidential information. Once a phishing link is clicked, the criminals can steal personal information, gain access to a computer network, or download malware. Phishing is a serious cybersecurity issue; 65% of U.S. organizations experienced a successful phishing attack last year and only 49% of U.S. workers can answer the question, “What is phishing?” correctly. (ProofPoint, 2020) Phishing is going mobile – 87% of phishing attacks on mobile devices use social media apps, games, and messaging as the attack method of choice. (Wandera, 2020) As the awareness of phishing increases and its effectiveness decreases, hackers have developed increasingly sophisticated and personalized phishing attacks. This guide to phishing is meant to illustrate the many ways cyber criminals attempt to access your information so that you, and your business, can remain cyber safe. Spear Phishing is a targeted attempt to steal information from a specific person. Spear Phishing uses information specific to the target to appear legitimate, often gathered from social media or “About Us” sections of company websites. Real World Example: An email is sent to the parent of a youth soccer team’s player from a cyber criminal posing as the coach of the soccer team. The email is personalized and advises that the soccer game had been cancelled and the recipient should view the attached file for the updated schedule. In a Whaling phishing attempt, the unknowing target is typically a member of a business’s senior leadership team. Whaling emails used spoofed “From:” fields to trick other employees of the company into sending sensitive data. Real World Example: An email is sent to the HR department of a large technology company that appears to come from the company CEO asking for salary information, social security numbers, and home addresses of dozens of employees. The HR team, believing the email was legitimate, proceeded to unknowingly send the required confidential information to the cyber criminal. Phishing attempts that happen on the phone are known as Vishing attacks. The scam attempts to create a sense of urgency and panic, making the victim want to act quickly and without thinking. Vishing attacks use spoofed caller ID numbers to add to the believability of the scam. Real World Example: A call appears to come from a local bank. The caller says they have noticed fraudulent activity on the potential victim’s account and need to verify account information to prevent further fraud. The criminal will ask for account numbers and passwords to “verify” the account. Smishing uses SMS text messages to target victims. Real World Example: A text is sent from a parcel delivery company with a tracking number and link to “choose delivery preferences.” Clicking the link takes the user to a fake Amazon site which asks for a user name and password to claim a free gift card “reward” for taking a customer satisfaction survey. Zombie Phishing is when a hacker gains access to a legitimate email account, resurrects an old email conversation, and adds a phishing link. Real World Example: A months-old email thread between two company employees appears in the victim’s inbox, with a message like “Message truncated, click to view entire message.” The link takes the user to a fake company webmail portal and when the user logs in, the cyber criminal has gained network access. Evil Twin phishing uses Wi-Fi to accomplish its goals with a wireless access point that looks like a legitimate one. Once an unsuspecting user logs onto the Evil Twin Wi-Fi, the criminal can gather personal or business information without the user’s knowledge. Real World Example: A victim sets up his laptop in a coffee shop and logs into the “Starbuck5” Wi-Fi, not noticing that the business name was misspelled. Search Phishing uses legitimate keywords in search engines to offer unbelievable sales or discounts on popular products. This scam uses fake webpages as the phishing link. Real World Example: A search for a popular portable music player returns a link to an incredible sale on the product. When the link is clicked, the victim is taken to a fake web site that asks for a credit card or bank account to create an account. A different version of this scam creates a fake warning in your web browser saying your computer has been infected with malware, with a link to download software to “fix” it, or to download an updated version of your web browser. Social media offers cyber criminals a whole new way to exploit people with Angler Phishing, which uses social media posts with links to cloned websites that look legitimate, and malicious links in tweets and instant messaging. Real World Example: A bank customer tweets about the bank’s lackluster service. A fake bank customer service account DMs the customer and offers immediate assistance; all the user must do is click the enclosed link, which downloads malware, or asks for personal bank account information. While not a phishing attack per se, another way to hide phishing links is by using a link shortening tool, like Bitly or Ow.ly. Cyber criminals also buy domains that sound or look like popular websites, hoping you click the link, not noticing the misspelling or wrong URL. One of the best examples is hackers using the domain arnazon.com, which looks very much like amazon.com because when placed together, rn looks very much like m. The Bottom Line Why is the awareness of phishing tactics important? Phishing attacks account for more than 80% of reported cybersecurity incidents (Verizon, 2019) and attackers use phishing as an entry point for almost one-third of all cyber-attacks (IBM, 2019). Knowing the various ways cyber criminals attempt to gain access to your account logins and passwords, download malicious software to your computers and network devices, and ultimately separate you (or your business) from your hard-earned money, can help keep your cyber secure, and the online world a safer place. Phishing Tips to Keep You Safe Think, don’t click! Slow down and really examine a suspicious email or text. Some red flags to look for: - Bad Spelling. If there are obvious spelling mistakes or grammar errors, delete the message - Hover Over It! Even though a link may appear to be real, hover over it to reveal the link’s actual destination. - Greetings! If the salutation is “valued customer” or “Hello, friend!” and not your name, chances are good it is a phishing attempt. - Request for Information. Your bank already has your information, so there is no need for them to ask you for it. - Threats. “Your account has been suspended” or “Payment required” are red flags. - Attachments. Never open an attachment from someone you don’t know, or that you aren’t expecting. - Email Address. If the email address is from an email service and not a legitimate business email address, take no action. Fortress Security Risk Management is a global data protection company that helps organizations dramatically minimize their risk of disruption from unforeseen events like cyber-attacks. Our goal is to help every client achieve the highest degree of security and the least amount of risk their organization can afford, or what we call, SecurityCertaintySM. Feel free to use and distribute the accompanying infographic, “Let’s Go Phishing! A Guide to Phishing Attacks” to raise awareness of phishing with your co-workers, colleagues, friends, or family.
<urn:uuid:5c53eaed-2bb1-40e0-85eb-44cb1d2e182b>
CC-MAIN-2022-40
https://fortresssrm.com/a-guide-to-phishing-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00444.warc.gz
en
0.914607
1,656
3.421875
3
How a Power Outage Can Kill Your Hard Drive Your hard drive uses an array of complex, sensitive devices to store, read, and write data. Those devices—the hard disk platters and the read/write heads—are the most at risk when the hard drive loses power. Obviously, a sudden loss of power is not a situation with a 100% hard drive fatality rate. But when a power outage kills your hard drive, here’s what happens: When you normally power down your computer, it sends shutoff signals to your hard drive and tells it to suspend its operations. The read/write heads pull away from the platters before the spinning platters come to a stop, and the drive safely powers down. But when the hard drive unexpectedly loses power, it doesn’t receive these signals. The platters stop spinning (because the hard drive isn’t receiving any power to its spindle motor anymore). But the read/write heads don’t always finish pulling away from the platters before the spinning stops. The heads are normally held aloft by a cushion of air generated by the spinning of the platters. And the sudden disappearance of this cushion can cause the heads to crash onto the platter surfaces. This can result in damage to both the heads and platters. As a result, the hard drive can appear to be blank, or even start to click or beep when you power it on. In this data recovery case, our client suffered a blackout that killed their drive and gave their computer a startup error. But there’s also a slim chance that this can also happen if you do a hard reboot (i.e. holding down the power button until your computer turns off, or pulling the plug on it). Power Outage Data Recovery Case Study: Dealing with a Gray Screen of Death Drive Model: Western Digital WD3200AAJS-40VWA1 Drive Capacity: 320 GB Operating/File System: Mac (HFS+) Data Loss Situation: iMac had Gray Screen of Death after power outage; hard drive appeared blank when plugged into another computer Type of Data Recovered: Quark Xpress projects, Quicken/Quickbooks, photos and documents Binary Read: 99.6% Gillware Data Recovery Case Rating: 9 The client in this data recovery case study came to us after a power outage. In the aftermath of the blackout, their iMac wouldn’t start up properly, showing only a gray screen and a folder with a question mark. This was the “Gray Screen of Death”, Mac’s rough equivalent of Windows’ infamous Blue Screen of Death. The client tried to figure out whatever was preventing the computer from starting up properly and eventually discovered the culprit: their hard drive. The hard drive held mountains of valuable data to the client: Quark Xpress documents, Quickbooks documents, and a treasure trove of photos. And when the client plugged the hard drive into another Mac to read and copy off its data, the entire drive showed up as blank. The client sent their hard drive to Gillware Data Recovery, where our Mac data recovery experts could diagnose the drive and (hopefully) recover their data. Your Mac’s Gray Screen of Death isn’t necessarily caused by hard drive failure, like this customer’s (and a few others) was. Sometimes you can fix a Gray Screen of Death simply by rebooting your Mac (and of course, sometimes you can’t). Diagnosing a Gray Screen of Death The Gray Screen of Death is a kernel panic, which occurs when your operating system encounters an error so perplexing that it has no idea what to do. This can be caused by a hardware or driver issue with an internal component of your computer or something you have plugged into your computer. It can happen if something prevents the O/S from accessing critical system files, such as when key system files become corrupted. To diagnose and fix a peripheral hardware issue: Try unplugging as many external devices as possible (printers, external hard drives, flash drives, USB hubs, etc.) except for your mouse and keyboard. This should resolve whatever driver issue is preventing your Mac from starting up properly. To diagnose and fix an issue with corrupted or missing system files, you can boot into a diagnostic mode and run Disk Utility by holding Cmd+R, or booting from your O/S installation disk. After verifying your hard disk, you can run “Repair Disk” to address any logical problems with your hard drive if necessary, and run “Repair Disk Permissions” to fix corrupted or misplaced system files. If nothing you try works, the problem may be due to a faulty motherboard in your Mac, or a hard drive failure. In the case of a failing motherboard, you can still remove your hard drive and copy data off of it onto another Mac, but you will need to rely on Apple computer repair experts to replace the motherboard and get your computer back up and running. If your hard drive has failed, you won’t be able to pull any data off of it—you’ll have to leave that to the professionals. Read on to find out how our engineers salvaged this client’s data after their hard drive died. iMac Hard Drive Data Recovery Our hard drive data recovery engineers found that this Western Digital hard drive had in fact suffered damage from the blackout. The read/write heads had crashed and become mangled, scratching the surfaces of the platters. After using one out of our vast collection of donor hard drives to replace the read/write heads, our engineers started salvaging data from the failed hard drive. In a hard drive, data lives on both surfaces of the spinning disk platters. In cases of platter damage, often one of the surfaces of one platter will suffer more damage than the others. Platter scratches cause irreversible data loss, as the physical sectors holding data are scraped off of the surfaces of the platters. Platter scratches also frequently complicate data recovery work. In this case, our engineers could pull data smoothly from all but one of the platter surfaces. The one surface that had suffered the most damage gave our engineers a much bumpier ride. While our engineers managed to successfully read 99.6% of the sectors on the platters, the majority of the remaining sectors that were too damaged to read lived on this particular platter surface. Our engineers managed to fully recover 86.8% of the user-created files from the client’s failed hard drive, as well as partially recovering numerous others. Many of the partially-recovered files still worked, albeit with obvious gaps and holes where sectors of data were missing. Our engineers focused their efforts on recovering the client’s most important files, and were able to fully and successfully recover the vast majority of the client’s critical documents, projects, and photos. As a result, we ranked this power outage data recovery case a 9 on our ten-point scale.
<urn:uuid:7e1936b1-da7e-4632-979c-1a2c8f77ce87>
CC-MAIN-2022-40
https://www.gillware.com/hard-drive-data-recovery/power-outage-data-recovery-gray-screen/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00444.warc.gz
en
0.932004
1,464
2.75
3
Cybercriminals use a wide variety of attack vectors to infiltrate corporate networks. From that point, they may spend weeks or months conducting research, identifying vulnerabilities, and exfiltrating sensitive data to their own servers for data theft extortion. Data exfiltration 101 describes the types of attacks that lead to data exfiltration and why 83% of all attacks rely on it as the primary vector. There are many kinds of attack vectors. They include everything from malicious email attachments to insider threats and sophisticated technical exploits. Cybersecurity professionals and IT leaders must constantly allocate resources to detect and prevent attacks on these vectors. Knowing which ones cybercriminals are currently focusing on helps security leaders make efficient use of those resources. This information is obviously important for detection since detection-based systems tend to narrowly target certain vectors. It’s also important for prevention-based cybersecurity because it informs IT leaders’ greater security strategy. If you don’t know where attacks are coming from, preventing them is a near-impossible challenge. Global Statistics: Today’s Most Targeted Sectors Cybercrime trends change based on the specific sectors and industries targeted. According to BlackFog’s 2021 Annual Ransomware report, the most frequently targeted sectors of 2021 were: - Technology – 89% increase year-over-year. - Healthcare – up 30% year-over-year. - Retail – up 100% year-over-year. - Government – up 24% year-over-year. Considering the economic and geopolitical upheaval taking place in Eastern Europe as a result of Russia’s invasion of Ukraine, it’s likely that many of these sectors will see themselves targeted even more in the near future. Government and military agencies in particular are likely to experience concentrated attack efforts made by state-supported cybercriminal organizations. Your own organization’s risk profile depends on whether it is an enterprise-level organization or a small to mid-size business. Cybercriminals modify their tactics, techniques, and procedures based on the size and preparedness of their victims. Top 5 Enterprise Attack Vectors Large enterprises can typically afford to implement a complex set of cybersecurity tools, with. 80% using between 3 and 19 different cybersecurity tools. Many of these tools are industry-leading security platforms operated by highly experienced security personnel. However, cybercriminals have learned to exploit vulnerabilities in highly complex enterprise security environments. They may focus their efforts on incompatibilities between different enterprise tools or compromise trusted accounts and try to hijack those tools for their own use. Some of the most common attack vectors today’s enterprises face include: Enterprises can improve their security posture by consolidating their security solutions and reducing the complexity of their tech stacks. Overly complex security environments contain many moving parts that highly motivated cybercriminals may successfully bypass. Small and Mid-Sized Businesses are Particularly Vulnerable Cybercriminals have learned to target smaller organizations instead of large, well-defended enterprises. They now target smaller businesses that are often unable to adequately defend themselves the way large enterprises can. More than 80% of smaller organizations have less than 10 cybersecurity tools deployed. One third of these have only one or two tools at their disposal. Over 40% of cyberattacks target small businesses. Attackers now use highly automated workflows to identify vulnerable organizations and launch attacks to probe their defenses. The three most common types of attacks on small businesses are: - Phishing and Social Engineering Attacks: 57% - Compromised and Stolen Endpoint Devices: 33% - Credential Theft Attacks: 30% Small and mid-sized businesses can effectively address security risks by hiring qualified managed security service providers who use best-in-class technology. These services often come at a vastly reduced rate compared to in-house expertise, giving smaller organizations access to enterprise-level technology at favorable cost. However, small businesses must pay close attention to their security partners and the technologies they use. Competent, reputable partners who use a balanced set of technologies (including both detection and prevention-based solutions) are worth the higher rates they often charge. Anti Data Exfiltration (ADX) Today’s cybercriminals can use a variety of methods to gain access to protected networks, and there are signs this trend will increase sharply in the near future. Enterprises and small businesses alike should look beyond detection-based solutions to ensure their most sensitive data is truly secure. All of the attack vectors listed above share a single factor in common. In order for the attack to succeed, data must travel from inside the protected organization to the outside. Attackers must somehow coordinate with software located inside the target’s network. Data exfiltration protection serves as a critical layer of protection against ransomware, data breaches and malware attacks. This prevents cybercriminals from accessing sensitive data and cuts off communication between compromised accounts and cybercriminal Command & Control centers. Small businesses, managed security service providers, and large enterprises alike should make this prevention-based technology a crucial part of their overall security posture. Stop cybercriminals from accessing protected data and protect your most sensitive assets from exploitation.
<urn:uuid:8d62b63b-c0cb-406a-bba7-ccba93112ce8>
CC-MAIN-2022-40
https://www.blackfog.com/data-exfiltration-101/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00444.warc.gz
en
0.944815
1,067
2.53125
3
Breakfast is the most important meal of the day and rightly so. Your breakfast must contain fruits and a big serving Healthy Breakfast foods of protein in every serving. This is a great way to assure that you have enough energy throughout the day. Why is breakfast important? The first rule, which has probably been said too many times, is to never skip your breakfast. Healthy Breakfast foods are important for us for various reasons. Eating a healthy meal in the morning provides the necessary fuel for your brain and body. It can also help to rein hunger, decrease the urge to snack throughout the day. For children, eating breakfast has been positively co-operated with school studies and activities as well as a decreased risk of obesity. What foods make a nutritious breakfast? Experts suggest trying to choose unprocessed Healthy Breakfast foods from each of the five food categories: fruits, vegetables, grains, protein foods, and dairy. Try to include proteins from foods like eggs, yogurts, nuts, and seeds, or legumes. Also try to include complex carbohydrates such as whole fruits and vegetables, and whole grains that provide fiber and will help you feel full longer. What should your breakfast never lack? With other meals, Healthy Breakfast foods are important to focus on your overall diet and not on only one meal in particular. Usually, your breakfast has to contain a good amount of vegetables and fruit to make sure that you are getting in enough vegetables and fruits throughout the day. What foods should you limit? As per your healthy breakfast food regimes, cereals or pastries which contain a lot of added sugars and little amount of nutritional value, as well as breakfast meats such as sausage and bacon which causes to increase the risk of some cancers when eaten regularly, and which are also high in saturated fats, which increases the risk of heart disease and body fat. What does a healthy breakfast food look like? Ideal and healthy food choices can create sometimes headaches and disappointment. But according to Healthy Breakfast foods regime, it doesn’t have to be. Eating well is now possible. The more effortless we could make it, the better chances arrive to actually stick to it. With National Nutrition Month this March, we will be breaking down what balanced healthy meals look like. We don’t have to go into “diet mode” rather, we can simply shift into “healthy mode.” The goal is not to feel like you are separated yourself from anything , instead of rewarding yourself—with new flavors, and hopefully renewed energy and improved mood and health. Some examples of a balanced, healthy breakfast? Here are some ideas for Healthy Breakfast foods— - OATMEAL + BLUEBERRIES + FLAX OR CHIA SEEDS - WHOLE WHEAT TOAST + PEANUT BUTTER + BANANA - WHOLE WHEAT TOAST + PESTO + SLICED AVOCADO + SLICED TOMATO + EGG - WHOLE WHEAT + PLAIN YOGURT + FRUIT + A BIT OF HONEY Do we have some tips on how to plan in advance? Many breakfast foods can be prepared in advance. Oatmeal can be combined with fruit & milk and put in the refrigerator the night before you will have – no cooking required for this preparation. Healthy Breakfast foods also means one can use blueberries and a banana, also any combination of fruit works fine. Another great option is a piece of fruit like banana or apple with some nut butter or Greek yogurt with granola; these can be made ready in minutes. What are the benefits of Healthy Breakfast foods? A healthy breakfast gives you a chance to start each day in a healthy and nutritious way. For adults who are habituated with a regular healthy breakfast are more likely to – - Control their blood sugar levels. - Control their weight. - Perform better at work. - Eat more vitamins and minerals. For children, Healthy Breakfast foods comes them more closer to – - Be able to concentrate in studies. - Be active in school. - Attach daily nutrient requirement - To maintain a good healthy body weight. What are the 3 Healthy Breakfast foods to - The Basics of Healthy Breakfasts, says to avoid- - Sugary drinks. - White bread. - Fried or grilled food. A healthy morning is one that is on the table with your loved ones enjoying a peaceful, hearty meal. This is good, not just for your body, but also for your mind. Waking up in a rush and pushing off to work with maybe just an apple in hand, might look funny while watching it on television – but is a recipe for disaster in real life. Always start your mornings peacefully and ease into the day with something to support your energy requirements. An old saying goes something like this: breakfast like a king, lunch like a queen and dinner like a beggar.
<urn:uuid:7e4eff7c-6850-460b-be41-4d459c59356e>
CC-MAIN-2022-40
https://beyondexclamation.com/healthy-breakfast-foods-the-basics-of-healthy-living/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00644.warc.gz
en
0.952408
1,044
2.984375
3
Power Over HDBaseT: Necessity and Applications HDBaseT Technology and Power Over HDBaseT Promoted by the HDBaseT Alliance, HDBaseT is a consumer electronic and connectivity standard for whole-home and commercial distribution of uncompressed ultra-high-definition multimedia content. For example, HDBaseT technology connects home entertainment systems and devices using a feature called 5Play™ that enables up to 8 Gbps (the equivalent of 10.2 Gbps of HDMI) of uncompressed video and audio, 100BaseT Ethernet, control signals and power to be transmitted through the same cable within home connectivity. It uses standard RJ-45 connectors to transmit data up to 100 meters. Figure 1: HDBaseT Technology for Home Devices The Power over HDBaseT (PoH) standard is based on the IEEE 802.3at standard (Power of Ethernet or PoE), with the appropriate modifications to enable safe delivery of up to 100 W over the four pairs of the Ethernet cable. In a typical PoH implementation, the PSE is installed and powered by a 50-57V DC power supply, and all PDs then receive power directly over the HDBaseT link across all four pairs of Cat5e (or higher) cables, receiving twice the power of earlier two-pair solutions in PoE. If multiple PDs need power, PoH allows devices to be daisy-chained together and all powered through higher-power extenders (95 W). Figure 2:Four-Pair Powering With Power over HDBaseT Why Do We Need PoH? There are three main reasons why we need the power over HDBaseT. No additional power source needed: With HDBaseT technology, a single LAN cable can provide power up to 100W over a maximum distance of 100 meters, requiring no additional power source. Solves power circuitry problems in providing thinner, lighter wall-mounted TVs: HDBaseT power capabilities also solve the problem of AC-to-DC and DC-to-DC power circuitry where thinner, lighter wall-mounted TVs are needed. HDBaseT replaces the AC-to-DC circuitry with a single, convenient cable or connector, making it possible to connect wall-hung TVs via an HDBaseT-enabled Cat5e (or higher) cable without other power source required. PD can draw more power: PoH is comparable to PoE for delivery of power 30 W or less when two pairs are used. For higher power delivery, PoH is unique to the HDBaseT standard to bring more power to the devices that require it. With PoH, a PD can identify cable length and resistance, and draw power up to 100 W, surpassing many types of PoE standards. PoH Applications and Prospects PoH technology offers a cost-effective and easy way to deliver power to digital signage at homes, in airports, hotels, hospitals, cafeterias or any other environment in need of a video display, eliminating the need for AC power. Unlike other HD distribution technologies currently available, HDBaseT is the only standard that delivers uncompressed Ultra-HD 4K video for up to 100 meters. HDBaseT technology is the industry’s most advanced, cost-effective, easy-to-use and all-in-one solution for whole-home converged distribution of HD multimedia content, with inherent support for the huge install base of uncompressed sources and real-time interactive devices. Listed below are two main common scenarios using power over HDBaseT. Used to Power a Single-Wire TV PoH can be used to power a single-wire TV. In this case, the TV receives video, power, controls and Ethernet through HDBaseT. Audio, video, controls and power are delivered through the same cable from an AV receiver, which can be placed up to 100 meters away. If you use a two-box TV (see the figure below) where most of the electronics and the intelligence exist, the display is simply an elegant flat screen which receives only video and power. Figure 3: PoH for Single-Wire TV Powering Used for a Two-Box Projector Setup PoH can be used in a two-box projector setup, where the projector powers both the transmission TV box and charges the laptop with enough power. With HDBaseT, neither of the box and the laptop needs a separate power cord. Figure 4: PoH for TV Box and Laptop Powering FAQs About Power Over HDBaseT Q1: What challenges will HDBaseT face? A1: One big challenge for HDBaseT is the heat rise in cable bundles, which exists in PoE as well. To overcome this issue, how to avoid heat rise in PoE cabling may become a reference. Another challenge, the unbalance of DC resistance, also happens to both PoE and HDBaseT. Instead of bit errors and retransmits, the unbalance of DC resistance in a PoH connection can distort the picture. If the difference of DC resistance between two conductors is greater than the maximum allowed 3% as specified by IEEE Std 802.3, it is likely that your PoH application won't be picture-perfect. Therefore, you’d better test the DC resistance. Q2: What is the difference between PoH and PoC? A2: The main difference between the two is standardization. PoH is based on the PoE+ standard IEEE 803.AT where it can provide up to 100 watts of power. While PoC (power over cable, sometimes referred to as power over coaxial) is not based on an international standard. Q3: Is PoH compatible with PoE? A3: Yes, the PoH standard is based on PoE, and as such, it is compatible. In that case, the maximum power transmitted will be according to PoE. Q4: Is HDBaseT compatible with coax cables? A4: No. HDBaseT transmits signal either through a LAN cable or fiber cable (in products that feature Spec 2.0 chipsets).
<urn:uuid:f6710044-6040-471c-9ca9-8f1dc056a8ce>
CC-MAIN-2022-40
https://community.fs.com/blog/power-over-hdbaset-necessity-and-applications.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00644.warc.gz
en
0.894447
1,268
2.5625
3
The Threat of Ransomware A Crisis Level Threat Ransomware has made itself known as one of the most dangerous and prevalent cyberthreats for the new decade. As society becomes more and more dependent on technology, hackers seek to disrupt daily operations by attacking fundamental operations. The statistics from the past year are startling. According to a yearly report published by Emsisoft Malware Lab, 2019 saw at least 996 ransomware attacks in the United States. The organizations impacted by these attacks included government institutions, healthcare providers, and educational establishments including colleges, universities, and even entire school districts. The estimated cost of this cyber onslaught is over $7.5 billion; the average being about $8.1 million per incident. However, the cost of these attacks was not only monetary. In the case of healthcare facilities like hospitals and care centers, operations were halted, forcing them to turn away emergency patients and reschedule surgeries. For local governments, critical systems like 911 and emergency response services were interrupted. The disruptions caused by ransomware attacks made it impossible to access important documents like health records and made day to day operations extremely difficult to carry on business as usual. Ransomware attacks not only endangered data, but people’s lives as well. With the rise of the COVID-19 pandemic, this particular risk is even more evident as already overwhelmed hospitals are increasingly vulnerable targets for ransomware strains like Ryuk. Ransomware on the Rise After the WannaCry ransomware attacks in May 2017, hackers have realized that ransomware pays big. Since then, a multitude of different ransomware strains have popped up and infected entities across the world. In late 2019 and 2020 alone, a few frontrunners have made themselves known and feared. Maze and Ryuk ransomwares have both made the news several times after hitting high profile companies and local governments, and then threatening to leak victim data. Other ransomware strains have piqued cybersecurity experts’ interests by only targeting certain regions of the world. As with most technology, ransomware is growing more sophisticated in its tactics. Some experts are thinking that artificial intelligence and machine learning will come into play, and cybercriminals will take advantage of these resources to make their malware all the more dangerous. Both sides are in an arms race in order to keep up with each other’s technological advancements. Hackers are using social engineering tactics as well, exploiting people using well disguised phishing attacks that are sometimes hard for even the best email-sceptic detectives to spot. All of these things mean more trouble in the way of victims attempts to recover data without paying a ransom for a decryption key. Targeted Attacks on Businesses, Healthcare, Government, and more… Ransomware operators aren't looking to hook small fish anymore; they are targeting bigger entities in hopes of scoring a larger payload. According to an article by ITPro Today, enterprise ransomware attacks increased by over 300% from 2018 to 2019, and we are likely to see that same trend in 2020. Businesses of all sizes are at risk, but small businesses are especially in danger because they may not have the advanced infrastructure to protect against such an attack. Hackers know this and make careful note of what potential targets may be low-hanging fruit with high reward. When it comes to healthcare, especially in the current state of the world, ransomware operators are targeting vulnerable facilities like overwhelmed hospitals who are likely to pay a ransom in order to save lives which might be lost if immediate measures aren’t taken to regain control over critical systems. A lot of thought and research is going into these ransomware attacks, making them even more devastating. Inflicting maximum damage also comes with an increased ransom cost. Ransoms can cost anywhere from a few thousand to a few million dollars depending on how much the victim’s data is worth to attackers, but the costs to recover that data and lost sales often appear to outweigh the ransom price. Ransomware operators take advantage of the fear and panic and make ransoms appear as if they are the best way out of the situation. Many businesses will choose to pay the ransom, if the price is right, to avoid the hassle of recovery. If ransomware is ever to be stopped, ransom payments must stop also. Ransomware only runs rampant when there is profit to be made. In late December 2019, the US Coast Guard was hit by Ryuk Ransomware, and critical information technology systems were shut down for over 30 hours. The suspected point of entry for the attack was thought to be a phishing email. EWA (Electronic Warfare Associates) was attacked by Ryuk in January as well. Evidence of the attack and encryption of web servers could be seen on company websites, which appeared as mostly gibberish, since the information had been encrypted. In February 2020, a Ransomware attack shut down a natural gas compression facility for two days, prompting the US Department of Homeland Security CISA (The Cybersecurity and Infrastructure Security Agency) to post an official alert. LaSalle County, Illinois was hit by a ransomware attack in late February that shut down 200 computers and 40 servers across several departments of government. These are only a fraction of the ransomware attacks that have occurred in the past few months. Planning for cyber-emergencies is just as important as planning for physical ones, and it’s more important than ever as ransomware operators try scary new tactics to get paid. Ransomware Attacks are Becoming Data Breaches As if being hit by an infrastructure crippling ransomware attack wasn’t enough, a new trend is raising the stakes for victims by blackmailing them into paying the ransom price or having their data leaked or sold. A strain of ransomware known as Maze ransomware started threatening to release victim data in late 2019, shortly thereafter creating a site completely dedicated to publishing leaked data. Maze is not afraid of high-profile targets either; they hit Southwire, one of America’s largest private companies (according to Forbes), and published over 14GB stolen files. This piqued the attention of the F.B.I., prompting them so send out an alert about Maze specifically targeting U.S. Companies. Other ransomware strains, such as Sodinokibi (REvil), Nemty, BitPyLock, DopplePaymer, and Nefilm have adopted this same strategy in attempts to dissuade victims from seeking other methods of data recovery. Before encrypting files and demanding a ransom, operators are stealing sensitive information. Most ransomware is executed around three days after a system is infected. As the malware remains dormant and undetected, operators use this downtime to steal administrator credentials and confidential data. Whether or not this data is released, it still means that private information is in the hands of bad actors. Due to the popularity of this new trend, ransomware attacks are leaning more towards being classified as data breaches. How InfusionPoints Can Help Secure Your Environment Ransomware attacks are somewhat preventable. InfusionPoints can help by infusing cybersecurity capabilities into every point of your business solution's life cycle. We identify, validate, and report weaknesses in your organization's security posture, which helps employees be prepared to avoid common methods of entry for ransomware, like phishing emails. With our VNSOC360° Continuous Monitoring services, InfusionPoints reduces the detection and response time to an adversary's attempt to compromise your infrastructure. VNSOC360° Managed Detection and Response reduces dwell time by providing timely detection which reduces the length of time the adversary is in your IT ecosystem and limits the impact of a breach. VNSOC360° takes control of the chaos and mitigates your risks. Stay Updated on the Latest Developments Keep your infrastructure secure by staying aware of major ransomware attacks with InfusionPoints’ monthly newsletter! Each month, we curate an informative newsletter summarizing significant ransomware attacks from all over the world. Stay up to date on all the new tactics used by ransomware operators and stay ahead of the game. Sign up today!
<urn:uuid:3e6bfc02-5eeb-49e6-9c4f-0b72b4a92563>
CC-MAIN-2022-40
https://infusionpoints.com/blogs/threat-ransomware
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00644.warc.gz
en
0.955578
1,633
2.828125
3
A server system is the heart of a company’s I.T. life. It often contains many, if not all company files, and acts as the central core that all company computers are connected to. A slowdown of such a system would therefore mean a slowdown of the company as a whole. Often, inefficiencies that begin to occur are caused by an aging operating system (OS), which inhibits the utilization efficiency of a server’s components. An OS is the fundamental program of a computer system. It acts as the interface between the components of a computer and its user, and acts as the ground on which other programs operate. However, like any other program, it requires constant updates from its developer to keep up with new components or software. For instance, Windows OS receives periodic updates from Microsoft which improve its stability, efficiency, and security. When a developer cuts further support to an end-of-life operating system, it not only becomes a cause of major slowdown due to inefficient use of hardware, but it also becomes a security liability, as hackers would be able to take advantage of vulnerabilities that are no longer being addressed by its programmers. As such, it is imperative to keep the operating systems on company systems updated, to ensure efficient operation, as well as data security.
<urn:uuid:7023a576-f2bd-4416-afaa-9763f3e68e90>
CC-MAIN-2022-40
https://www.genx.ca/server-os-page-1
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00644.warc.gz
en
0.971463
262
3.109375
3
Choosing the ISP (Internet Service Provider) plan that is right for you can be confusing if you are unsure about what each plan offers and the types of Internet connections that are available in your local area. Here is some information that will help you discover what types of connections are available as well as the type of ISP plan that will suit your individual needs. Types of ISP Connections There are different types of ISP connections which are available and if you know what they are you can find out which ones are available in your local area. - Dial-Up: A dial-up connection is an ISP connection that uses your telephone line to connect to the Internet. Dial-up requires you to have a telephone and a modem installed in your PC. Although dial-up is available in just about every area, it is also the connection with the slowest speed. - DSL: DSL stands for Digital Subscriber Line and uses your telephone line to establish a broadband Internet connection. Unlike dial-up, broadband is a much faster connection and does not tie up your telephone line to connect to the Internet. With broadband you can talk on the telephone and surf the Internet at the same time. - Cable: To acquire a cable connection, you must have cable service available in your area. A cable Internet connection is a permanent connection to the Internet and runs at a much faster speed than a modem connection. Cable is convenient because you do not have to log on every time you want to check your email or access the Internet. - Satellite: A satellite Internet connection is achieved through the use of a satellite dish. This is also a much faster connection but is more expensive than some of the other types of connections. Satellite is helpful if you live in a rural area that has limited choices when it comes to Internet connectivity. All you need is a clear view of the south so you can establish a connection with the satellite dish. Choosing the Right ISP Plan To make sure you choose the right ISP plan, there are some things you should consider before reviewing the plans that Internet Service Providers offer. - Your Purpose for Using the Internet: Decide on the purposes for which you will be using the Internet and if you will be using it for business or personal use. - How Often You Use the Internet: Determine how often you will be using the Internet. If it is quite often, you will want a faster connection to save time with daily tasks. - Types of Applications: Decide on the types of applications you will be using. For example, if you are using a lot of multimedia, a dial–up connection will be too slow to run these types of applications. - Tech Support: Determine whether you will need tech support around the clock or just during normal business hours. - Download Requirements: Decide on how much downloading you will need to do such as emails, movies, and games. If you plan on downloading a lot, then you will need a fast Internet connection.
<urn:uuid:ec970120-2186-40b3-a981-041d3f97e697>
CC-MAIN-2022-40
https://internet-access-guide.com/how-to-choose-the-right-isp-plan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00644.warc.gz
en
0.936563
608
2.609375
3
It is widely believed that by the end of 2018, approximately 23 billion devices will communicate with a myriad of networks, thanks to the advent of the IoT. The rising popularity of the IoT has also spurred the demand for cutting-edge, customized optoelectronic sensor solutions. Different types of optical sensors such as through-beam, retro-reflective and diffuse reflection sensors have been extensively used in various applications over the past few years. Unlike yesteryear cables, the absence of electricity in its strong resistance to drastic temperature changes has made optical sensor communication highly preferred globally. Contrary to common belief, optical sensors aren’t merely objects exploited for research and development purposes; they are embedded in devices most humans use on a routine basis. Ambient light sensors embedded in most mobile phones these days are perfect examples of an optical sensor application, which most people, today, can relate with. In a significant way, optical sensors have revolutionized the biomedical industry. Optical heart-rate monitors, for instance, have been extensively used by medics across the globe. In a typical optical heart-rate monitoring sensor, light reflected by the skin is captured by optical sensors. The density of blood and its ability to absorb light helps the sensors accurately determine fluctuations in heart rate. Optical sensor-based liquid level indicators are quite popular too and help in limiting liquids to the desired levels in oil refineries. These liquid level indicators are amalgamations of LEDs, light transistors and prisms and they trigger an alert based on the amount of light reflected by the sensors. Another application of optical sensing which has gained immense popularity over the years is demand control kitchen ventilation (DCKV) systems. These ventilation systems trigger an alert if penetrated by smoke, triggering fans to operate at high speed, thus preventing suffocation by a great deal. The embracement of the BYOD culture worldwide and the advent of technology have definitely made optical sensors popular. Industry veterans opine that custom optical sensors are futuristic and will be a pivotal part of many applications in the days to come. In the current technological climate, it seems like custom optical sensors are here to stay.
<urn:uuid:4dec9448-0a87-40b3-9b35-ebf3e4df31d0>
CC-MAIN-2022-40
https://www.enterprisetechnologyreview.com/news/the-benefits-of-using-custom-optical-sensors-nwid-186.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00644.warc.gz
en
0.938855
428
2.984375
3
Nanotechnology means controlling matter on a tiny scale, at the atomic and molecular stage. FREMONT, CA: Nanotechnology is set looking at the arena on any such tiny scale that one can see the atoms that makeup everything around us (such as ourselves); however, it is possible to manipulate and pass those atoms round to create new matters. Think of nanotechnology, then, as being a little like production, only on a tiny scale. Everyday merchandise that uses nanotechnology Nanotechnology might appear like something out of the future, but many everyday products are already made using nanotechnology in reality. Nanoparticles had been added to sunscreens for years to make them more useful. Two particular types of nanoparticles usually added to sunscreen, are titanium dioxide and zinc oxide. These tiny particles are not only effective at blocking off UV radiation but additionally, they also feel lighter on the skin; that is why contemporary sunscreens are nowhere close to as thick and gloopy as the sunscreens everyone has been slathered in as kids. When used in textiles, silica nanoparticles can help create a fabric that repels water and different liquids. Silica may be brought to fabric by incorporating it into the cloth’s weave or sprayed onto the fabric’s surface to create a waterproof or stainproof coating. So in case one has ever observed how liquid forms little beads on waterproof clothing – beads that sincerely roll off the material in place of being absorbed – that’s way to nanotechnology. Similarly, clothing can be made water-resistant and stainproof through nanotechnology, so it can also upholster furniture. Even better, nanotechnology is likewise assisting in making furnishings less flammable; by coating the foam used in upholstered furniture with carbon nanofibers, manufacturers can lessen flammability with the aid of up to 35 percentage. Nanotechnology can also be utilized to optimize adhesives. Interestingly, most glues lose their stickiness at high temperatures; however, an effective “nano-glue” now not most effective withstands high temperatures – it receives more potent as the surrounding temperature will increase.
<urn:uuid:409acc10-f225-4521-bec3-888a2fcc7c3e>
CC-MAIN-2022-40
https://www.enterprisetechnologyreview.com/news/top-4-examples-of-nanotechnology-in-action-nwid-991.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00644.warc.gz
en
0.942561
471
3.5
4
A good criminal needs to know what makes people tick. There is a great deal of psychology involved in criminal activities - especially when it comes to establishing contact with potential victims. Many cyberattacks are successful because cyber criminals misuse human interaction online. For example, cybercriminals send fake invoices to retrieve passwords or fake text messages from a parcel service to cheat victims out of money. Humans are therefore usually the weak link. Cybercriminals know this like no other and use psychological tactics to trick victims; we also call this social engineering. But why do people click on dangerous links en masse, when we know the risks? And why are we so quick to give our confidential information to cybercriminals? According to behavioural researcher Robert Cialdini, there are six universal principles of influence that determine human behaviour. Social engineers use these principles of influence to manipulate their potential victims and induce certain behaviour. The six principles of influence are: reciprocity, consensus, consistency, sympathy, authority and scarcity. In my book ‘Cyberdanger’ (2019, Springer, English updated version from the original Flemish edition ‘Cybergevaar’ published in 2013 and also available in German ‘Cybergefahr’) I always described that any cybersecurity issue or problem is a direct result of a combination of technological and human factors. Most malware and cyberattacks would not stand a chance without naivety, curiosity or other human weaknesses such as the six principles laid out in this article. In my book it’s described as Willems’ (Second) Law : CSP = TF x MF Where CSP stands for a cybersecurity problem, TF for the technological factor (malware, vulnerability, exploit, etc ) and MF for the human factor (human behaviour). Besides these psychological tactics, cybercrime also has a psychological impact on victims. Most people think that the impact of an online crime is smaller, but recent research by the NSCR shows this not to be the case. Digital crimes appear to have a similar impact on victims as traditional forms of crime. Generally, people find it difficult to understand that someone can become a victim of cybercrime. This while online crimes take place on a large scale and anyone can become a victim. Due to the lack of understanding, victims of online crime are more likely to experience victim blaming. Victims receive reproachful comments from friends, family or colleagues as well as random strangers on the internet who are known for getting on a high horse, while in reality cybercrime can happen to anyone. It is therefore important to raise the level of knowledge about online crime so that victims can count on support and recognition. An e-learning training course, such as the G DATA Security Awareness Training, is ideal for this. In addition we shouldn’t always blame the victims too hard. A user is indeed a weak potential link in your network environment. But instead of pointing fingers and bemoaning the situation, one might as well turn this perceived liability into an asset and offer training and education to the users. Governments in the EU have created some anti-phishing commercials and this can and should be applauded. As a lot of cyberattacks are successful due to human error, it is important to pay more attention to the psychological aspects. For example, little is known about the impact of cybercrime on victims. In addition, more scientific research needs to be done on how to prevent people from clicking on links and taking the correct decisions which ultimately may reduce the number of victims of online crime. Maybe the ultimate solution could be to use Artificial Intelligence (AI) to make the right decisions for us and to make software much more secure by design from the beginning. This is already happening but it still will take a lot of time to arrive in a safer world. Image credit: Pixabay
<urn:uuid:3299f7d7-6399-4edf-9864-a1a636bd279b>
CC-MAIN-2022-40
https://www.gdatasoftware.com/blog/2022/07/cybercrime-psychology
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00644.warc.gz
en
0.941633
783
3.25
3
What’s Next: Liquid Printing 3D printing, although becoming increasingly popular, still has some limitations. Unlike injection molding or casting, it has only found its home in small-scale production and generally suffers from a relatively slow print speed. Researchers at MIT are experimenting with a new 3D process called rapid liquid printing. Designed to tackle large-scale production, it doesn’t rely on specialized or prototype materials. How it works While 3D printing relies on layer-by-layer creation, rapid liquid printing “draws” an object by injecting its printing material (rubber, foam, or plastic) into a thick, transparent liquid gel suspension. Instead of building up layers of material, the material is excreted from the printhead and into the gel. Because the gel material suspends the printed object, it allows for more printing accuracy and less material waste. The resulting printed object will have better structural integrity than if it was laid down as a series of layers – essential if it needs to bear weight or other stresses. Some printing materials are designed to harden by a chemical reaction when coming into contact with the gel, so there is no need for light or changes in temperature. The printed object can be removed without additional curing. The printhead is attached to a robotic arm, allowing it to approach the object from multiple angles. Although there are currently only a few relatively small versions of liquid printers, there are no limits to scale. With a large enough tank, the process can create objects of any size. The suspension gel is basic and is similar to hair gel or hand sanitizer. It has two essential functions. The first is that it can suspend objects, so they are not adversely affected by gravity. The second is that the gel self-heals after the nozzle passes through, allowing the print heads to continuously move and print within the gel and not create tunnels or cavities which would fill up with excess printed material that must be dealt with later. The gel can be mixed to a specified density to accommodate the weight and bulk of the printed material. How it might be used Customization is the key. Liquid printing could also open up new avenues for production in the furniture, automotive, and aerospace industries, where fast, large-scale manufacturing is essential. Rapid liquid printing is still in the research phase, but MIT researchers are planning for larger-scale liquid printers using other printing materials.
<urn:uuid:6714b385-34c3-468e-94f8-bb14f27db8c8>
CC-MAIN-2022-40
https://www.capitalmds.com/whats-next-liquid-printing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00644.warc.gz
en
0.947421
499
3.46875
3
ODBC (Open DataBase Connectivity) is a standard to access databases originally developed by Microsoft. ODBC provides an API to make the code independent of database systems and operating systems. Denodo provides an ODBC interface, but it requires the installation of the ODBC driver. Like any other ODBC driver, you have to install it on the machine where the client application is running. In this section you will learn how to access to the Denodo server using an ODBC client. This information is also valid for any other ODBC connection. For the example, we will use MS Excel but feel free to use any other ODBC client. The first thing that we have to do when connecting using ODBC is to install the Denodo ODBC driver. Denodo Platform 8.0 includes an ODBC driver named DenodoODBC and it is located under the Extract the folders in this directory and run the programs inside to install the drivers. Once this is complete, restart your Virtual DataPort Server from the Denodo Control Panel. Once you've installed the ODBC driver you will need to add a new user data source: Control Panel > Administrative Tools > Data Sources (ODBC). Add User DSNor Add System DSN. The difference is that "User DSN" can only be used by the current user and "System DSN" can be used by all the users of the system. In the configuration dialog fill in the following information: Now, you have to configure some of the Advanced properties by clicking on the Datasource button (a pop-up will open). Select the same options shown in the screenshots below and write "SET QUERYTIMEOUT TO 3600000" in the Connect Settings Box on page 2: Finally, click the Ok button and then click Save button to finish. Now, you have your environment ready to connect to Denodo using ODBC (remember than the previous steps are only valid to connect to the "tutorial" virtual database, so if you want to connect to another database you will have to create a new DSN). For an example of an ODBC client application you can use the well-known Microsoft Excel. You will only have to select this DSN as a data provider to import the customer data into the spreadsheet. Please, follow these steps and see the results: Data > From Other Sources > From Microsoft Query And voilà! The results from Denodo are populated into the MS Excel spreadsheet!
<urn:uuid:b949f292-a028-485b-b7a9-9c9cc762e093>
CC-MAIN-2022-40
https://community.denodo.com/tutorials/browse/basics/4connect2odbcclient
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00644.warc.gz
en
0.838335
558
2.578125
3
March is Women’s History Month, which celebrates women’s contributions to history, culture, and society. It has been observed annually since 1987. This month CertaSite is recognizing and honoring the trailblazing women in our industry and around the world whose dedication, determination, and resiliency inspire the women of today and tomorrow. Women’s History Month was established after a weeklong celebration in 1978 organized by the school district in Sonoma, California. Presentations, parades, and essay contests were held to recognize and celebrate achievements of women. Years later, more communities and schools started participating across the country and in 1980, President Carter issued the first presidential proclamation declaring the week of March 8 as National Women’s History Week. In 1981, Congress passed a resolution establishing a national weeklong celebration. It was not until 1987 that the celebration was extended to a month. International Women's DayIn celebration of Women’s History Month, International Women’s Day takes place globally on March 8. This is a global celebration and recognition of the economic, political, and social achievements of women, first occurring in 1911. Celebrations around the world include demonstrations and educational initiatives. Since 1975, the United Nations has sponsored International Women’s Day. Women's History Month Theme As seen with many recognitions throughout the year, Women’s History Month also has a theme. This year’s theme is “Women Providing Health, Promoting Hope.” The theme is a “tribute to the ceaseless work of caregivers and frontline workers during this ongoing pandemic and also recognition of the thousands of ways that women of all cultures have provided both healing and hope throughout history.” This is incredibly fitting for the fire protection and life safety industry in which we recognize women on the fire and emergency frontlines for everything they are doing to advance the industry. Women in Fire Protection and Life Safety When it comes to the fire protection and life safety industry, women are making positive strides. According to a recent NFPA (National Fire Protection Association) report, only 93,700 out of the 1,115,000 firefighters in the United States are female, equating to 8 percent. In comparison, 13 percent of police officers or detectives are female and 12 percent of paramedics and EMTs are women. In 1982, Women in Fire formed with approximately 200 members. Women in Fire speaks specifically for women in their specific fire industry profession. Members range from women firefighters and officers to inspectors, dispatchers, and fire service educators. Members are from diverse backgrounds and are committed to their profession. Women in Fire has helped establish standards for firefighter qualifications through the NFPA and they often participate in advisory groups providing input to the National Fire Academy. They provide education, support, and advocacy for women in fire service. Women in Fire is their voice to the fire service industry. The fire service industry is becoming more diverse, frequently recruiting females to fill an array of fire service opportunities. These women hold many different roles, including firefighter, officer, chief, paramedic, arson investigator, training instructor, and more. Even though they have different roles, they are one community. One community of fire service women looking to make a difference in the industry for future females and young girls with aspirations of being heroes in the fire protection and life safety industry. Enjoy these historical facts about women in fire service: 1815: the first known female firefighter is Molly Williams from New York City. She becomes a member of the Oceanus Engine Company #11. 1859: San Francisco heiress Lillie Hitchcock Coit becomes an honorary member of the Knickerbocker Engine Company #5 as a teenager. 1910: women’s volunteer fire companies emerged in Silver Spring, Maryland, and Los Angeles. 1920s: Emma Vernell, age 50, becomes the first woman officially recognized as a firefighter by the state of New Jersey. 1930s: Nancy Allen becomes the fire female fire chief in the world. She serves for the Cedar Hill Volunteer Fire Department in Rhode Island. During WWII, women across the country volunteers for fire services to fill the roles left by men who were called by the military. Two Illinois fire departments were staffed entirely by women. 1960s: all-women fire companies evolved in California and Texas. 1973: the first of two women begin paid roles as firefighters. The first one, Sandra Forcier, in North Carolina, and the second, Judith Livers, in Virginia. Both later retired at the rank of battalion chief. 1982: Women in the Fire Service, Inc. is founded. 2002: Rosemary Roberts Cloud becomes the first female African American fire chief in the United States.
<urn:uuid:7c1d3d49-e83c-46d5-8f03-1b7d078b0e74>
CC-MAIN-2022-40
https://www.certasitepro.com/news/womens-history-month-2022
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00044.warc.gz
en
0.961086
996
3.3125
3
Cybersecurity is an important part of daily business operations to ensure your data is safe and protected. But for one month of the year, an extra special emphasis is put on cybersecurity awareness to help keep individuals and businesses vigilant about their data security. National Cybersecurity Awareness Month (NCSAM) is celebrated throughout the month of October every year. It was first launched in 2004 by the National Cyber Security Alliance and the U.S. Department of Homeland Security. Every 39 seconds a hacker is attacking a computer in the U.S. This nearly constant barrage of cyberattacks has put IT security at the top of most organizations’ priority lists because data breaches, ransomware attacks, and virus infections can be deadly for any company. Many small businesses don’t survive a major data breach, due to all the associated costs including: - Data loss - Emergency tech costs - Lost business during downtime - Loss of customer trust - Productivity losses - Fines for non-compliance with data privacy rules This Month’s Cybersecurity Theme The theme of this year’s NCSAM is OWN IT. SECURE IT. PROTECT IT. and it revolves around three specific areas of cybersecurity. The goal is to help encourage personal accountability for digital privacy as well as proactive behavior in adopting best practices and understanding common cyber threats. OWN IT is about understanding your digital profile and how information can be compromised. This includes both your own information and also the data that you collect from your clients in the course of business. SECURE IT refers to best practices that you can adopt to protect your data and prevent your corporate network from being breached. PROTECT IT relates to your ongoing “Cyber Hygiene” and staying vigilant about protecting your data, devices, and network from a breach. Take Time to Improve Your Team’s Cybersecurity Awareness Adopting good cybersecurity practices can keep your business from becoming hacked and positively impact your bottom line. Just one best practice of forming an incident response team can reduce costs if a data breach occurs by $360,000. Here are several cybersecurity best practices for your office to adopt that fall under each of the three areas of the NCSAM theme. OWN IT – Reducing Your Digital Footprint - Improve Privacy Settings: Whether you’re on your own social media account or posting for your corporate one, the privacy settings you use are important. Be sure you’re only allowing those you want to give access to see your posts. - Ask Before You Share: It can be great to share a photo of your latest client project and how happy they were with it, but be sure you ask before you share, not only to honor their privacy but to also keep a good working relationship. - Understand Security of Smart Devices: Voice activated speakers, smart whiteboards and other IoT gadgets are futurizing offices around the world, but you need to understand how to protect them before you connect them. SECURE IT – Protecting Your Data - Use Multi-Factor Authentication: Compromised passwords account for about 80% of hacking-related data breaches. Multi-factor authentication, which requires a PIN to be entered along with your password, goes a long way towards credential security. - Provide Phishing Training & Apps: Phishing is the number one way that malicious codes get into devices and networks. Train your team on how to spot and avoid clicking on phishing emails and back them up with anti-phishing software. - Mobile Device Security: Smartphones and tablets are taking over more of the office workload every year. Make sure you have a way to log and secure their data access and to wipe them clean remotely should they be lost or stolen. PROTECT IT – Remain Cyber-Vigilant - Apply Patches & Updates in a Timely Fashion: Updates contain vital security patches for found and exploited software vulnerabilities. It’s best not to just leave these to users to do, but instead use a Managed IT Plan that includes managing your patches and updates. - Review Cyber and Physical Security: Cybersecurity and physical security go hand-in-hand when it comes to protecting your business. It’s a good idea to do annual assessments of each to see if any policies need to be updated. - Adopt Good Data Backup Practices: It’s easy for a data backup to stall and fail if you’re not monitoring it, and that data loss can be devastating. Ensure you adopt good backup practices, such as keeping three copies of all your data, ensuring one copy is offsite, and checking backups regularly. Contact C Solutions for a Full IT Security Assessment Data security is not something you want to leave to chance. Just one data breach can cripple a company for years, so the best practice is prevention and ensuring your IT security is up to date and in good shape. Contact us today to schedule an IT security assessment and make sure your business is protected. Call us at 407-536-8381 or reach out online.
<urn:uuid:e0cf7807-ce0f-4191-b606-8462ce38b3c8>
CC-MAIN-2022-40
https://csolutionsit.com/national-cybersecurity-awareness-month/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00044.warc.gz
en
0.926902
1,039
2.578125
3
Parents often put their own relationship on the back burner to concentrate on their children, but a new study shows that when spouses love each other, children stay in school longer and marry later in life. Research about how the affection between parents shapes their children’s long-term life outcomes is rare because the data demands are high. This study uses unique data from families in Nepal to provide new evidence. The study, co-authored by researchers at the University of Michigan and McGill University in Quebec, was published in the journal Demography. “In this study, we saw that parents’ emotional connection to each other affects child rearing so much that it shapes their children’s future,” said co-author and U-M Institute for Social Research researcher William Axinn. “The fact that we found these kinds of things in Nepal moves us step closer to evidence that these things are universal.” The study uses data from the Chitwan Valley Family Study in Nepal. The survey launched in 1995, and collected information from 151 neighborhoods in the Western Chitwan Valley. Married couples were interviewed simultaneously but separately, and were asked to assess the level of affection they had for their partner. The spouses answered “How much do you love your (husband/wife)? Very much, some, a little, or not at all?” The researchers then followed the children of these parents for 12 years to document their education and marital behaviors. The researchers found that the children of parents who reported they loved each other either “some” or “very much” stayed in school longer and married later. “Family isn’t just another institution. It’s not like a school or employer. It is this place where we also have emotions and feelings,” said lead author Sarah Brauner-Otto, director of the Centre on Population Dynamics at McGill University. “Demonstrating and providing evidence that love, this emotional component of family, also has this long impact on children’s lives is really important for understanding the depth of family influence on children.” Nepal provides an important backdrop to study how familial relationships shape children’s lives, according to Axinn. Historically, in Nepal, parents arranged their children’s marriage, and divorce was rare. Since the 1970s, that has been changing, with more couples marrying for love, and divorce still rare, but becoming more common. Education has also become more widespread since the 1970s. In Nepal, children begin attending school at age 5, and complete secondary school after grade 10, when they can take an exam to earn their “School-Leaving Certificate.” Fewer than 3% of ever-married women aged 15-49 had earned an SLC in 1996, while nearly a quarter of women earned an SLC in 2016. Thirty-one percent of men earned SLCs in 2011. By 2016, 36.8% of men had. The researchers found that the children of parents who reported they loved each other either “some” or “very much” stayed in school longer and married later. The researchers say that their next important question will be to identify why parental love impacts children in this way. The researchers speculate that when parents love each other, they tend to invest more in their children, leading to children remaining in education longer. The children’s home environments may also be happier when parents report loving each other, so the children may be less likely to escape into their own marriages. Children may also view their parents as role models, and take longer to seek similar marriages. These findings still stood after researchers considered other factors that shape a married couple’s relationship and their children’s transition to adulthood. These include caste-ethnicity; access to schools; whether the parents had an arranged marriage; the childbearing of the parents; and whether the parents had experience living outside their own families, possibly being influenced by Western ideas of education and courtship. “The result that these measures of love have independent consequences is also important,” Axinn said. “Love is not irrelevant; variations in parental love do have a consequence.” A large number of studies have focused on subjective well-being1(see Diener 2000; Helliwell 2003; Kahneman et al. 1999), and many have found that married people have higher subjective well-being (Mikucka 2016; Stutzer and Frey 2006; Waite and Gallagher 2000). Yet the increase in cohabitation – not just as a prelude to marriage but also as an alternative partnership type and an accepted setting for parenthood (Perelli-Harris et al. 2012) – raises questions as to whether only marriage has beneficial effects. Given that cohabitation shares many of the same characteristics as marriage – for example, intimacy, emotional and social support, and joint residence – cohabitors may have similar well-being to those who are married (Soons et al. 2009; Zimmerman and Easterlin 2006). One of the key issues when analyzing the relationship between partnership status and subjective well-being (SWB) is selection. Cross-sectional studies have often recognized the inability to disentangle selection and causality (e.g., Lee and Ono 2012; Soons and Kalmijn 2009). Longitudinal studies tend to use fixed-effects (FE) models, which focus on within-individual transitions in partnership status, a process usually occurring in younger adulthood and covering a relatively short period of the life course (Kalmijn 2017; Musick and Bumpass 2012; Soons et al. 2009). These studies did not examine the long-term effects of partnership in midlife, after the majority of individuals have made decisions about marriage and childbearing. Midlife2 – after women’s prime reproductive period, and when being married may influence one’s identity and well-being – is an understudied part of the life course (Lachman 2015). At this point in life, the initial boost in happiness may have declined (Soons et al. 2009; Zimmermann and Easterlin 2006), and raising children may confound SWB (Balbo and Arpino 2016; Margolis and Myrskylä 2015). FE models also do not directly consider individuals who do not experience a change in partnership status: that is, those who cohabit and never marry. Thus, we cannot tell whether certain groups – for example, those living in a coresidential partnership who have a low propensity to marry – would benefit if they married rather than cohabited; or alternatively, whether the benefits to marriage may be more pronounced for those who have a higher propensity to marry. Given the interest in marriage promotion policies, targeting low-income individuals in countries such as the United Kingdom, it is important to examine whether those unlikely to marry would be happier if they did marry. To address these selection processes, we use propensity score–weighted regression analysis to pinpoint how early life and/or current conditions influence conditions in midlife. This method allows us to address baseline bias and differential treatment bias (Morgan and Winship 2014), which occur when the link between marriage and well-being varies across subgroups. Cross-sectional research has found a “happiness gap” between cohabitation and marriage in most countries, but the size of the gap appears to vary and may be linked to the acceptance and prevalence of cohabitation in a society (Soons and Kalmijn 2009) or gender context and religious norms (Lee and Ono 2012). However, it is unclear whether the long-term effects of selection in different countries operate similarly, especially because union duration, childbearing experience, and meanings of cohabitation differ across countries (Hiekel et al. 2014; Perelli-Harris et al. 2014). In addition, the heterogeneity of treatment effects may vary, indicating that marriage has different benefits for different groups, depending on the country. Here we compare the association between partnership type and SWB in the United Kingdom, Australia, Germany, and Norway, which have experienced substantial increases in cohabitation over the past few decades but have different family policies (Perelli-Harris and Sánchez Gassen 2012) and cultural orientations toward marriage (Perelli-Harris et al. 2014). Each of these contexts leads us to predict a certain relationship between partnership status and SWB. This study addresses a number of gaps in the literature. First, we provide new insights into selection due to childhood background and current characteristics for the association between union type and SWB. Second, we examine whether marriage may be especially advantageous for partnered individuals who have a lower or higher propensity to marry. Third, we analyze the extent to which the relationship between partnership type and SWB varies by country and gender. More broadly, analyzing how cohabitation differs from marriage for individuals’ well-being will contribute to our understanding of the meaning and consequences of cohabitation as well as the extent to which these meanings differ across contexts. A large body of research has investigated the beneficial aspects of marriage for well-being (for reviews, see Nelson-Coffey 2018; Waite and Gallagher 2000). These studies posited that married partners benefit from sexual and emotional intimacy, companionship, and daily interaction (Kamp Dush and Amato 2005; Umberson et al. 2010). Spouses help each other cope with stress by providing social and emotional support. Recognition from a spouse may provide symbolic meaning in life (Umberson et al. 2010). Additionally, sharing a household can lead to economies of scale, and married spouses could profit from a larger friendship and kin network (Ross and Mirowsky 2013; Umberson and Montez 2010). All these mechanisms could enhance SWB. Nonetheless, given dramatic social change over the past decades, the benefits to marriage may be declining (Liu and Umberson 2008). A recent study comparing 87 countries found that the life satisfaction advantage of married men compared with unmarried men has waned over the last three decades, suggesting that marriage has become less advantageous (Mikucka 2016). This decline may be partially due to the increase in cohabitation, especially in high-income countries. Cohabitation may be taking on much of the form and function of marriage (Cherlin 2004), especially as cohabiting unions become longer and involve children. Similar to married couples, cohabiting couples share a household and may benefit from similar intimacy, support, care, and family networks. Normative expectations to marry have become weaker, and the tolerance for nonmarital arrangements has increased in most countries (Treas et al. 2014). A large body of research, however, has found that cohabitors often differ from married couples. Across countries, cohabitors have lower second birth rates (Perelli-Harris 2014), are less likely to pool incomes (Gray and Evans 2008; Lyngstad et al. 2010), have lower relationship quality (Wiik et al. 2012), and are more likely to dissolve their relationships (Galezewska et al. 2017), even if they have children (Musick and Michelmore 2018). Qualitative research from Europe and Australia has suggested that many still think of cohabitation as a less-committed type of union than marriage and instead oriented toward freedom and independence (Perelli-Harris et al. 2014). Marriage may thus still be desired by most people but more as a cultural ideal or status symbol. Recently, most scholars have used FE models to examine whether cohabitation is similar to marriage in increasing SWB. This approach allows the testing of set-point theory, which posits that individuals have a baseline level of happiness that cannot be permanently modified by life events, such as union formation. This theory has been tested in a range of settings, and the findings support a positive effect of marriage and cohabitation on SWB (Musick and Bumpass 2012; Soons et al. 2009; Zimmermann and Easterlin 2006), with cohabitation having a weaker effect (Kalmijin 2017). Some studies, however, have questioned set-point theory and found that different model specifications can result in long-term improvements for marriage (Anusic et al. 2014). Overall, however, most studies indicated that, on average, marriage provides a boost to well-being, with cohabitation providing a weaker boost, and individuals return to original happiness levels in the long term. Prior studies examining the effects of SWB on partnership status have found that individuals receive a minor boost in happiness after moving in with a partner and a larger boost after marriage, although these effects generally wear off as married partners return to their set-point happiness (Kalmijn 2017; Musick and Bumpass 2012; Soons et al. 2009). What these studies cannot show is whether marriage is beneficial to those who are unlikely to marry, and the extent to which any marriage benefits are due to characteristics that select people into marriage. Prior studies have also not specifically focused on the effects of marriage relative to cohabitation in midlife, after most people have married, and the initial honeymoon period of marriage is over. Our study produced some surprising findings that indicate not only differences by country and gender but also differences by the propensity to marry. First, contrary to prior studies (e.g., Ono and Lee 2012; Soons and Kalmijn 2009), our results indicate that relative to cohabitation, marriage does not automatically provide a boost to SWB in all countries. On average, cohabiting men in the United Kingdom and Norway and women in Germany have levels of SWB that are similar to those of married men in midlife, even without controls. These findings suggest that in some countries, cohabitation may provide benefits similar to those of marriage, such as shared intimacy, pooled resources, and emotional support (Musick and Bumpass 2012; Perelli-Harris and Styrc 2018). Second, our results show that, on average, marriage does differ from cohabitation for Australian men and Norwegian women. Without any controls for selection, married individuals in these countries have higher levels of SWB than those in cohabiting partnerships. In Australia, these differences disappear after we include controls, indicating that cohabitation is selective of disadvantage, in accordance with prior studies (Evans 2015; Heard 2011). For Norwegian women, however, our entire battery of controls cannot eliminate average differences between cohabitation and marriage. This finding is quite surprising because some have argued that cohabitation and marriage in Norway, and Scandinavian countries in general, are indistinguishable (Heuveline and Timberlake 2004). Yet here, we see persistent differences between the two partnership types. Our models, however, take into account important sociodemographic status and childhood background variables but may be missing key psychological characteristics or other attributes that are associated with marriage. We cannot rule out the possibility that people with certain personality traits or preferences are more likely to marry. Thus, we are reluctant to interpret our results as having a causal effect. Nonetheless, the findings suggest that marriage is more important in Norway than often assumed. Focus group research found that marriage is associated with romance and love, even if it occurs sometime after childbearing (Lappegård and Noack 2015), or as a capstone later in life (Holland 2017). Although cohabitation may seem to be identical to marriage superficially, marriage may be indicative of a closer partnership on a deeper level. Marriage for women may be symbolic of a more committed loving relationship, and if marriage does not happen by midlife, the lack of marriage might have detrimental effects on SWB. The elimination of marriage effects when we include relationship satisfaction suggests this may be the case. On the one hand, the results may indicate that relationship quality is more important than type of partnership, and cohabiting and married women have similar SWB. On the other hand, relationship satisfaction may instead be a proxy for marriage given that happier couples are generally more likely to marry (Wiik et al. 2009). Then again, the marital contract may in fact improve relationship quality for women. Thus, although we urge caution in interpreting this result one way or the other, it seems to be plausible that marriage, on average, has positive effects for women in Norway. Third, our results demonstrate the heterogeneity of treatment effects for German men and British and Australian women. Partnered individuals who have a lower propensity to marry based on childhood selection mechanisms (ATC) have lower SWB if they cohabit rather than marry. Those who have a high propensity to marry (ATT), on the other hand, would not receive any benefits from marriage. For German men and Australian women, controlling for selection mechanisms and partnership experiences eliminates differences between cohabitation and marriage, again demonstrating that selection is more consequential for SWB than partnership status. For British women, however, we find the persistent effect of marriage on SWB for those who are less likely to marry, despite the large number of controls. These results suggest that marriage may provide some benefits for disadvantaged women, as was also found in a study on mental well-being (Perelli-Harris and Styrc 2018). Our findings here, however, indicate that women from disadvantaged backgrounds would be better off if they did marry, but not because they or their partners have low education, poor employment conditions, or low income, which would make them unhappy. Instead, marriage seems to be associated with happiness for other reasons. The findings could be due to unobserved selection mechanisms related to personality, appearance, or other psychological factors that make them less-attractive marriage partners. On the other hand, they may be unhappy because they do not want to marry partners who do not live up to their expectations, or they may disagree about whether to get married, which could have a greater effect on women than men. The mediating effect of relationship quality suggests this may be the case; those who have higher-quality relationships marry and have higher levels of SWB. Qualitative research provides a deeper explanation for this finding: focus group participants from all educational levels in Britain generally agreed that marriage signaled a more committed relationship than cohabitation, but low-educated women stated that although they would like to marry, cohabitation was more common among their peers. For these women, marriage was a low priority compared with other responsibilities such as housing and children, but they nonetheless aspired to have a wedding (Berrington et al. 2015), and perhaps they were unhappier because they could not achieve their goals. Our study is not without limitations. First, as mentioned, life satisfaction may be endogenous to partnership decisions: happier people may be more likely to marry than cohabit (Luhmann et al. 2013). Although we focus on controlling for selection mechanisms in childhood, before individuals enter into partnerships, our data do not include a direct measure of happiness in childhood or around the time of entrance into partnership. Propensity score–weighted regression is also unable to control for unobserved factors not available in our surveys. Marriage may be selective of other individual characteristics such as personality, emotional control, or attractiveness. This is particularly important for Norwegian and British women, where differences between cohabitation and marriage persist until relationship satisfaction is included in the models. Second, despite our concerted effort to harmonize the variables across our surveys, differences in survey design and variable construction may limit comparability across surveys. Finally, our definition of midlife (ages 38–50) is relatively narrow. But because cohabitation has increased only within the past few decades, the sample size for cohabiting individuals in the older cohorts is too small. Therefore, future research must continue to evaluate these relationships as cohabitation increases throughout midlife. Despite these limitations, this study provides evidence that the relationship between partnership and SWB is not straightforward and is context-specific. The context, especially the policy context, does not always operate in predictable ways. For example, given that the German government privileges the marital breadwinner model, we would have predicted that married women in Germany would be happier than cohabiting women; however, we find no differences between marriage and cohabitation. This finding suggests that despite policies to encourage marriage, cohabitation in Germany may not be stigmatized; indeed, Treas et al. (2014) found that Germans have a more positive view of living in cohabitation without marriage intentions than those in Great Britain and Australia. Thus, the policy climate may not shape the association between partnership status and SWB as much as changing values and the specific meaning of cohabitation. This also holds true for Norwegian women, who (as discussed earlier) appear happier if they are married, despite a gender-friendly policy regime that stipulates few legal distinctions between cohabitation and marriage (Lappegård and Noack 2015). Regardless of an increasing number of children born in cohabitation and a lack of stigmatization toward cohabitors, Norwegian women still seem to value marriage and the symbolic implication of the wedding. Research on individual countries needs to recognize that context may be shaping the relationship between factors. Finally, this study not only demonstrates the role of selection in accounting for the association between marriage and SWB but also illustrates how selection is heterogeneous and differs according to the propensity to marry. Such findings can have important policy implications given that those with a low propensity to marry—namely, the disadvantaged—are likely to be targeted by pro-marriage policymakers. At first glance, our findings seem to suggest that in some countries, those least likely to marry would benefit from marriage-promotion policies: if they were to marry, they would be happier. However, for German men and Australian women, the effect of marriage disappears after the inclusion of more controls. For disadvantaged women in the United Kingdom, marriage may matter, especially if it provides legal protection and a sense of security (Barlow 2014; Berrington et al. 2015). The effect in the United Kingdom disappears after a control for relationship satisfaction is included, implying that policymakers could focus on improving relationship quality, possibly through relationship support organizations that provide counseling, although it is also important to recognize that these women may be unhappy because they are unable to find a suitable partner. On the whole, however, our study indicates that especially after selection and relationship satisfaction are taken into account, differences between marriage and cohabitation disappear in all countries. Marriage does not cause higher SWB; instead, cohabitation is a symptom of economic and emotional strain. Thus, our findings imply that in order to increase SWB, policymakers should aim to reduce disadvantages—both in childhood and adulthood—instead of creating incentives to marry. University of Michigan
<urn:uuid:222be167-e052-4f48-9fd5-cd95e500662b>
CC-MAIN-2022-40
https://debuglies.com/2020/02/13/when-spouses-love-each-other-children-stay-in-school-longer-and-marry-later-in-life/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00044.warc.gz
en
0.959123
4,704
2.984375
3
Behavioural analysis uses machine learning, artificial intelligence, big data, and analytics to recognize malicious behavior by examining differences in everyday activities. Behavioural analysis is an extremely important tool when it comes to fending off cyber-attacks. We all are aware that cyber-attacks have evolved at a rapid rate over the years and the rate has further been accelerated due to the pandemic as most of the workforce and companies have adopted the online platform as a new norm for executing their day-to-day activities. One thing is common for all malicious activities- they behave differently as compared to normal behaviour and hence leave different signatures which would normally allow companies to identify and terminate them. However, sophisticated cyber-attacks become harder to identify due to the new tactics and techniques cyber attackers use. But now with the help of large volumes of unfiltered endpoint data, security personnel can now use behavioural-based tools, algorithms, and machine learning to discover what the normal behaviour of everyday users is and help distinguish it from the bad actors. Behavioural analysis help recognise trends, patterns and events that are different from everyday norms. To put it better into perspective, consider this scenario: how do we find a needle in a haystack? It’s simple, you bring a magnet. Behavioural analysis is the “magnet” which can be used to find the threats and malware i.e., “needle” in a “haystack” of genuine traffic. By using this tool security teams can attain visibility and recognise unexpected behavioural tactics of attackers in the early stages and save millions of dollars perhaps which could have been the cost of the cyber-attacks. Behavioural analysis can also help reveal root elements and present insights for future identification and foresight of similar attacks. One must note that most behavioural analysis systems come with a pre-decided standard set of policies and some systems can be toggled and customized at the discretion of the user. How behavioural analysis is changing the WAF environment? As established before, threats are continuously evolving and so our countermeasures should evolve as well. The most advanced perimeter threats for data loss or exfiltration occur at the application layer. A few points from the current scenarios of threats: - DDoS attacks may or may not be volumetric in nature. - Attacks are getting more and more automated in nature. DDoS attacks have become fully automated and all execution at over 1Tbps speed. Automation has become even harder to detect as it is specifically designed to masquerade as genuine traffic and evade. Usage of CAPTCHA is considered a way to combat these however they have been rendered less effective over time. - Malware is used to exploit weaknesses in browsers and the users operating those browsers. Malware has multiple methods of delivery such as infected ads, links, attachments. All this information helps one understand why behavioural analysis has become the need of the hour. Basically, most of these attacks may bypass traditional WAF detection mechanisms as they are specifically designed and traditional WAFs are “outgunned” as they say. This is further worsened by almost unlimited supplies of compromised devices or websites. In order to combat all these malicious activities, WAF vendors like F5 and Prophaze are now offering top of line Behavioural analysis as a part of their WAF features. To top it all off, behavioural analysis is complemented by the cloud and usage of its extreme computational powers, scalability and efficiency of management. The cloud provides a way that combines big data with powerful analytics to help beat even the most sophisticated attacks. Vendors also offer cloud-based WAF coupled with behavioural analysis which makes streaming analytics possible. This has further paved the way for monitoring and comparing all activities to any unfiltered historical endpoint data. Behavioural analysis is a must for any company that has critical data or important online assets to protect. Behavioural analysis will definitely augment the current defence system the company has in place for cybersecurity and will enable IT teams to handle sophisticated attacks thrown their way. Some behaviour-security products are sophisticated enough to apply machine learning algorithms to data streams so that security analysts don’t need to identify what comprises normal behaviour. Other products include behavioural biometrics features that are capable of mapping specific behaviour, such as typing patterns, to specific user behaviour. Most products have sophisticated correlation engines to minimize the number of alerts and false positives. One more point I would like to add is that signature-based tools help identify and fend off known threats whereas behavioural analysis help mitigate zero-day attacks as well which mean attacks that haven’t been registered yet. In conclusion behavioural based analysis is a tool that your company most probably will not go wrong in employing for cybersecurity measures. In fact, there are malwares such as fileless malware which can only be identified by behavioural technology
<urn:uuid:5463f136-3dc7-4637-ae6e-660ce223d47d>
CC-MAIN-2022-40
https://gbhackers.com/behavioural-analysis/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00044.warc.gz
en
0.955321
1,004
2.65625
3
When dealing with a large enterprise-level Active Directory structure, one of the more important concepts is replication. Replication is the process of sharing Active Directory updates between domain controllers. Many challenges are involved in replicating database changes across a large enterprise. To make this process easier, Windows 2000 uses an organizational structure called a site. In this article, I’ll discuss the ways that sites are used within Windows 2000. What’s a Site? If you’re familiar with Exchange 5.5, then you’re probably already familiar with the idea of sites. The main difference between Exchange and Windows sites is that whereas an Exchange site consists of a group of mail servers, a Windows 2000 site is made up of a group of domain controllers. Unlike Windows NT version 4.0, Windows 2000 uses what’s known as a multimaster domain model. This means that rather than making all administrative changes directly to a primary domain controller and replicating them out, administrative changes can be made to any domain controller. These changes are then replicated to each domain controller. The Site Model The multimaster domain model can be a bit chaotic. Imagine a large network with dozens of domain controllers that are constantly trying to replicate changes to each other, and you’ll understand how quickly the network could be flooded with replication traffic. To help reduce this constant bombardment, Microsoft implemented the site model. The site model groups domain controllers that are members of the same domain and that are connected by high-speed, low-cost links. Dividing the domain controllers in this way eases the strain caused by replication. For example, suppose that your domain consists of three domain controllers. Now, imagine that an Ethernet segment connects two of those domain controllers to each other, and the third connects via a dedicated ISDN line. Needless to say, Ethernet offers a speed that’s more than sufficient to sustain replication. Therefore, you’d probably want to form a site that contains the two domain controllers connected by the Ethernet segment. Doing so would allow the two domain controllers to replicate freely between each other as needed. And it makes sense because you usually don’t have to worry about bogging down an Ethernet segment with replication traffic. If your network is too congested with traffic already, you can install a second network card into each server and form a dedicated segment between the servers that’s used solely as a backbone for replication traffic. Once you’ve established your initial site, you’ll probably want to create a second site to contain the server on the other end of the dedicated ISDN line. The reason for doing so is that ISDN is a slow, and potentially expensive, medium, and you don’t want to risk congesting your ISDN link with constant replication traffic. You can solve this problem with the two-site model. Servers within each site will replicate Active Directory changes with each other freely, but servers in different sites will only replicate directory information at scheduled times. You can set the replication schedule to replicate across the slow link at a time when network traffic will be minimal. In the future, as you add servers to the network, you can place them in the site to which they have the most efficient link. If a new server is connected to the rest of the domain only by slow links, you can always create another site. So far, the model I’ve given you for creating sites has applied to multi-facility networks. For example, you might use this site model when most of a company’s employees reside in an office building, but the network also needs to be linked to a warehouse across town. However, it sometimes makes sense to use multiple sites in one physical location. A good rule of thumb to follow is that each subnet that contains at least one domain controller should be its own site. In fact, you can actually associate individual subnets with sites within the Active Directory. Remember that if you do decide to use a separate site for each subnet, you should plan carefully how often those sites will replicate with each other. For example, suppose that some of the users in subnet A frequently use some of the network resources found in subnet B. If replication doesn’t occur frequently enough, users in subnet A might not be able to see changes made to their resources in B until several hours after the change has occurred. A guideline to follow is connection speed. Basically, if the sites are on different subnets, but those subnets are connected by low-cost, high-speed links, then there’s little reason not to replicate the sites more often than you would if they were separated by a slow wide-area connection. Creating a Site Creating a site within Windows 2000 is a relatively simple process. First, click Start and select Programs|Administrative Tools|Active Directory Sites And Services. When you do, you’ll see the Active Directory Sites and Services snap-in for Microsoft Management Console. In the column to the left, right-click on the Sites folder and select New Site from the context menu. At this point, you’ll see the New Object Site dialog box. Enter the name of the site you want to create in the Name field. You should also select the site link object that you want to use for the site from the bottom portion of the dialog box. Usually, if you’re establishing your first site, the only available link name will be DEFAULTSITELINK. The default site link is automatically set up to use the IP protocol. When you install the first domain controller in a site, Windows 2000 will automatically create a site with the name Default-First- Site-Name. If you’re planning to use multiple sites in your enterprise, you should definitely change this name to something more fitting to your organizational naming scheme. Even if you don’t currently plan to create other sites, it isn’t a bad idea to give the default site a custom name just to make your Active Directory structure a little easier to follow. Besides, you never know when you may have to create a second site. If you do decide to rename the default site (or any other site, for that matter), go back into the Active Directory Sites and Services snap-in. In the column on the left, navigate to Active Directory Sites and Services|Sites. When you select the Sites folder, the column on the right will display all the existing sites. Right- click on the site you want to rename and select Rename from the context menu. So far, I’ve shown you how to create sites; but a critical piece of the puzzle is still missing. Unless you link the sites, replication will never occur between them. Remember that as far as Windows 2000 is concerned, each site is a separate entity unless you tell it otherwise. The task of linking sites is accomplished by a mechanism known as a site link.. A site link is bound to a protocol that both sites joined by the link can use to communicate. The site link itself also contains the replication schedule and various security mechanisms. When you create your first site, Windows 2000 automatically creates one site link. This is the DEFAULTSITELINK that you saw earlier when you created the site. If you had selected this option when creating a site, this site link would be used to join the new site to any existing sites that were also set to use the link. You can access the DEFAULTSITELINK by going into the Active Directory Sites and Services snap-in and navigating to Active Directory Sites and Services|Sites|Inter-Site Transports|IP. When you select the IP folder, the DEFAULTSITELINK will appear in the column on the right. Right-click on the DEFAULTSITELINK and select Properties from the context menu. When you do, you’ll see the DEFAULTSITELINK’s Properties sheet. The General tab displays which sites are linked by the site link. The tab also displays the link’s cost and replication schedule. You can use the Change Schedule button to replicate the connected sites more or less often. The default replication schedule is set to replicate the connected sites every 180 minutes. By looking through the various options found on the properties sheet, you can easily establish basic inter-site replication. However, this is just the tip of the proverbial replication iceberg. I cover site links and replication in more detail in part 2 of this series ( Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it’s impossible for him to respond to every message, although he does read them all.
<urn:uuid:14e9a75a-3bea-443e-a109-95474f500cd7>
CC-MAIN-2022-40
https://www.enterprisenetworkingplanet.com/os/using-sites-in-windows-2000/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00044.warc.gz
en
0.924258
1,853
2.984375
3
A data center is traditionally and most commonly defined by its maximum IT load or the total incoming power supply capacity. Unfortunately, this approach immediately focuses attention on the data center as a large power consumer without communicating either the purpose of the facility or the value it brings. It’s an incomplete and misleading description which invites criticism. Describing a data center on the basis of its maximum power consumption is comparable to promoting a motor vehicle purely on the size of its engine. There are probably people that would find this approach attractive, but most of us would really like to know what type of vehicle is being described: lorry, bus, van, car, motorbike. And what kind of performance it provides: fuel efficiency, max speed, 0-60 time, number of seats for passengers or maximum load. That way we can understand the capability of the vehicle and whether it’s suitable to meet its expected requirement or not. Who would be interested in buying a new vehicle if all they knew was that it had a 150bhp engine? Does it have five doors? Does it have three wheels? Clearly, we do need to be aware of the power consumed by a data center. Global data center electricity demand in 2018 was an estimated 198 TWh, or almost one percent of global final demand for electricity (International Energy Agency, 2019), illustrating the impact of the data center industry on global energy use. However, if we continue to focus the public’s attention on how much power our data centers require without communicating the value they bring, we are inviting criticism. What metrics are available to describe data centers? There are a range of physical metrics used to describe data center facilities such as site area, internal floor area or data hall floor area. But again, simply defining a data center facility by its area can lead to the public thinking of such sites as colossal, vacuous wastes of space, let alone energy the energy they consume, once again with no concept of the purpose or benefit of the facility. Other variants of energy-based metrics such as air throughput and water turnover can also be used. The challenge here is that used in isolation, such metrics promote the energy-hungry nature of a facility, once again without providing any clue as to the value it brings. This obvious potential for bad press leads us to believe that alternative metrics should be considered to help communicate the scale of a facility in a way more comprehendible to those who fall outside of this growing industry. How do we go about change? One option would be to use IT focused metrics such as: - Data throughput - Number of servers - Speed of connection to major population areas - Data storage capacity - Total processing power - Connectivity (number of operators) The downside here is that the metrics are somewhat technical and abstract, and don’t really communicate the real-world value the facility brings to the person in the street. If we want to communicate the real benefit of a facility, we need to select a metric that relates to the primary purpose of the facility in a meaningful way. The Table helps to illustrate this approach: Using this approach would help the data center industry to communicate the primary purpose of specific facilities as well as the capacity of the data center to fulfil its objective, in doing so it removes the focus from the energy being consumed in doing so. For facilities that perform multiple functions, a combined description could be used e.g., “Facility y will store up to 3 million hours of HD video, 2 million apps and host up to 500,000 simultaneous players for online games.” As a further extension to this idea, we could also communicate the efficiency of a facility in delivering its primary purpose by using a composite metric to quantify the ‘Benefits’ per unit of ‘Drawbacks.’ With this metric, the ratio should be obvious, the greater the value the better! Data Center Useful Value = Benefits/Drawbacks Sounds simple, doesn’t it? But, just deciding on the ‘Drawbacks’ in the ratio also brings a range of options. This could be based on the power consumption of the facility, or could possibly be extended to CO2 emissions (could this be day-to-day or embodied carbon?), or perhaps the ratio could be made relative to the floor area of the building or site? Can we change the way we communicate data center value? The advantage of an approach such as this is that we get a real world benefit based efficiency metric rather than using Power Usage Effectiveness (PUE) which communicates how efficiently the facility delivers power to the IT equipment, with no means of communicating whether or not the IT equipment is efficient or performing any valuable work. All the above seem plausible within specific sectors. However, it may not facilitate an industry-wide data center comparison and could not be directly applied to mixed-use or colocation facilities. At first glance, it doesn’t seem that one single metric to effectively measure a data center’s worth will achieve industry recognition or acceptance any time soon. In the meantime, I would speculate that what is more likely to happen is the development of a ‘toolkit’ of metrics for use across various applications, to help promote a data center's worth and allow comparison within a sector. Such a toolkit could introduce new-found healthy competition within the industry and may ultimately keep us motivated in moving the industry towards further enhancements in performance whilst reducing overall resource consumption. I’d be interested in other ideas for evaluating data centers in a way that focusses less on the energy we use and more on the value and efficiency we bring to the connected world. It’s definitely a discussion we need to have. Arup is a member of the EUDCA.
<urn:uuid:ff2ac328-17ac-41b1-a3a7-8532c506a627>
CC-MAIN-2022-40
https://www.datacenterdynamics.com/en/opinions/what-metrics-should-we-use-describing-data-centers-and-their-efficiency/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00044.warc.gz
en
0.947014
1,193
2.921875
3
Free Expression Online Studies of Internet filtering, network interference, and other technologies and practices that impact freedom of expression online. Featured in Free Expression Online This report demonstrates the technical underpinnings of how WeChat image censorship operates and suggests possible evasion strategies. The UN Special Rapporteur on violence against women, Ms. Dubravka Šimonović, recently released her draft report to the Human Rights Council on online violence against women and girls from a human rights perspective. The Special Rapporteur’s report includes many key insights from the Citizen Lab’s formal response on this issue last fall and echoes many of our sixteen key recommendations. This report describes our investigation into the global proliferation of Internet filtering systems manufactured by the Canadian company, Netsweeper Inc. This section details the research questions that informed our study. We also outline in detail the methods that we adopted to identify Netsweeper installations worldwide, and those that we employed to reduce the findings to countries of interest. We also present high-level technical findings and observations. In this section, we spotlight several countries where we have evidence of public ISPs blocking websites using Netsweeper’s products. Each country has significant human rights, public policy, insecurity, or corruption challenges, and/or a history of using Internet censorship to prevent access to content that is protected under international human rights frameworks.
<urn:uuid:11799d75-d8df-4bb9-b9aa-a416d6c90a97>
CC-MAIN-2022-40
https://citizenlab.ca/category/research/free-expression-online/page/6/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00044.warc.gz
en
0.892854
284
2.671875
3
Every once in a while, it is a good idea to take a high level overview of where information technology (IT) has come from and where IT is going. That helps us put our efforts into context and lets us know where we should be devoting our efforts. Information technology is now in its third transformation. The first transformation was the digitization of business. The second is the continuing digitization of human experience. The third stage is the digitization of machines. Each transformation is ongoing, builds upon the others, and may overlap. Thus, some technologies that formed a foundation earlier are still active. For example, the mainframe is still alive and well even in the time of mobile computing. We have to understand that while they act simultaneously in some sense, we also have to understand how they fit together. Even though specific technologies provide a frame of reference, these transformations are from a broad perspective and not dependent upon any one technology. Please also note that there is not a smooth transition to each transformation, but that elements of a later transformation may be present while the key transformation of an earlier era is still more prominent. The Three Digital Transformations Let’s take a closer look at all three from a digital perspective: - Transformation I: Business Workflow and business automation — the beneficiary of this transformation is the business itself; it represents the digitization of traditional business processes most familiarly associated with online transactions processing systems (OLTP); from a software perspective, including operating systems, application-driven software intelligence using third-generation and object-oriented programming languages, database management systems; and from a hardware perspective, mainframes, minicomputers, and servers along with hard disks and magnetic tape; and from a network perspective, leased lines. - Transformation II: Human Experience– this has consisted of two stages. (i) Interpersonal communication and productivity — the beneficiaries of this first stage has been employees broadly and directly at all levels; it represents digitization meant to enhance worker productivity and includes office applications, such as e-mail, word processing, presentations and spreadsheets; the key software was office productivity tools; the key computing hardware was the personal computer; and the key networking component was the extension of the local-area network (LAN) within a company’s local environment and the extension of the wide-area network (WAN) to the Internet so that employees of one company could communicate with other people (including customers and business partners). (ii) Virtual digital world (cyberspace) — the chief beneficiary of this stage is the individual (consumer); yes, business has benefited tremendously, but the overall result has been the immersion of the individual into the digital world (notably cyberspace); the Internet is key to this transformation, as is the move to mobile devices that have added tablets and smartphones to the mix ,along with traditional laptops and their derivatives. - Transformation III: Machines — the chief target of this transformation is things (mostly machines); this transformation represents the Internet of things which contain digital intelligence (embedded processors and storage); some call this the “industrial Internet” but that is limiting, because not only machine-machine interactions can take place, but also machine-biological interactions (such as body sensors); this represents heavy use of data-driven software intelligence, which encompasses sensors and other instrumentation, machine learning, robotics, and content creation (such as online learning). Why is this taxonomy useful? Let’s look at it from a big data growth perspective as shown in the following slide. We can see how all three transformations play a role in big data and how our lives are getting more complicated as a result. Figure 1: Sources of Big Data Growth Source: The Storage Networking Industry Association Data Protection and Capacity Optimization Group 2013 What about the trends we consider so important? We will not ignore major trends as they affect our everyday life. Take big data. Big data is a code word for solutions that seek to make sense of and derive insights from the explosion of data. It consists of both what is happening in Transformation 3, such as the Internet of things, as well as taking advantage of Internet-created data from Facebook, Google searches, etc.,. Still, there is more that has to be done to have “things” use this data effectively. In the Internet of things, most data is generally created by machines autonomously rather than by people individually. Data can come from artifacts, the non-biological natural world, or the biological natural world. - Artifacts — the technological creation of the human mind, from everyday items, such as washing machines and refrigerators, to sophisticated high technology, such as smartphones or tablet computers. Your washing machine or refrigerator can communicate problems over the Internet. Your position over time can be collected using GPS on your smartphone and used for a number of purposes (such as helping you locate the nearest Italian restaurant). - The non-biological natural world — this includes the weather, astronomical observations, and mineral deposits as well as most of the areas where the hard sciences, such as physics and chemistry, play a big role. Weather data, searching for mineral deposits and the work of CERN all generate big data. - The biological natural world — this consists of the spectrum of living (or near living) things), such as plants and animals – and encompassing humans. For example, genomic research is a major contributor. Sensors of one type or another are responsible for capturing information, ranging from those which measure component wear and predict failure on a train locomotive or car (artifact), to data that collects data on hurricanes (non-biological natural world), and to medical sensors on a person to monitor heart rate, temperature, etc. (biological natural world). The Internet of things has a big impact on IT organizations and what they need to do with emerging technologies and platforms like cloud computing. When we think of a compute cloud (say, for simplicity’s sake, a private cloud) we tend to think of a transformation of a traditional data center. However, the traditional data center was designed for transformation 1 (such as OLTP systems). The data center was adapted to serve Transformation 2 (as businesses found accommodation with PCs). In addition, IT organizations may have been able to force-fit their own Web services into traditional data center environments. This is not perfect, but most of the data is still under the control of IT theoretically. That is changing. In dealing with BYOD where business and personal data commingle, IT finds that there is data “in the wild” that is outside the control of IT. One source of the “data in the wild” problem are business units that opt for shadow IT outside the confines of official IT, such as SaaS applications, without considering the consequences. These problems have been harder for IT to tackle. Add in the geographically dispersed and distributed collection of data that the Internet of things requires. In this case, the cloud has to be not merely a physical single data center but a world-wide virtual data center that encompasses everything, with the necessary scale and elasticity, including the ability to deliver IT as a service. Quite a challenge. The concept of IT causes a problem when we try to think about what is happening today with information, because we naturally think in terms of specific technology products and what is happening before our eyes rather than from a broader perspective. Other terms, such as “infosphere” (not the IBM product) have been coined to describe this complex situation, but they are not commonly recognized nor accepted. IT has been very adaptable to change over the years, and will continue to do so. Hopefully, having the Three Transformations of IT laid out will provide a context and reference point from which IT can put its thinking cap on and plan for the big picture rather than donning blinders and focusing only on one immediate problem at a time.
<urn:uuid:4bb6607b-2abc-4ff4-9088-d065a76b6b9d>
CC-MAIN-2022-40
https://mesabigroup.com/it-perspectives/the-three-transformations-of-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00044.warc.gz
en
0.941767
1,639
2.75
3
A fund flow refers to the inflow and outflow of funds or assets for a company and is often measured on a monthly or quarterly basis. A fund flow statement reveals the reasons for these changes or anomalies in the financial position of a company between two balance sheets. These statements portray the flow of funds - or the sources and applications of funds over a particular period. Be prepared for upcoming challenges and learn how to manage your changing payments business today with our guide. Fund flow statements are used to show movement and activity related to both long-term and short-term funds by revealing: - How the funds were generated (source of funds) - Where those funds have been used (application of funds) Why prepare a fund flow statement? A company's financial statements already include a profit and loss statement and a balance sheet. So why is a fund flow statement needed at all? - A profit and loss and balance sheet will show a company's financial position, but will not explain the reasons for fluctuations or variations in within the company's financial or cash position - A profit and loss and balance sheet will depict two sets of figures - the current and previous year - but will not explain why movement has happened. Cash flow statement vs funds flow statement In the area of financial management, there are 4 main financial statements from which to obtain financial data related to business operations. 1. Balance Sheet Accounts 2. Profit and loss Account 3. Cash Flow Statement 4. Fund Flow Statement A company's balance sheet and income statement measures one aspect of performance of the business over a period of time. A cash flow statement shows the cash flows and cash equivalents of the business during business operations in one time period. A cash flow statement helps companies manage cash inflows (cash receipts), cash outflows (cash disbursements), operating cash flow. It shows - Cash from operating activities - Cash from investing activities - Cash from financing activities Read more about monitoring and improving cash flow visibility here. A Funds flow statement reports changes in a business's net working capital from its operations in a single time period. The main components of working capital are: Net working capital is the total change in the business's working capital, calculated as total change in current assets minus total change in current liabilities. For example: If the inventory of a business increased from $700,000 to $750,000, then this increase of $50,000 is the increase in the working capital for the corresponding period and will be mentioned on the funds flow statement. But the same would not be reflected in the cash flow statement as it does not involve cash. Table showing differences between a funds flow statement and a cash flow statement Image source: AAFM The importance of fund flow statements A funds flow statement is an essential factor in revealing how funds are used. A fund flow statement shows financial analysts how to assess the fund flow of an organization in the near future. Usually, the preparation of these statements is followed by a funds flow statement analysis. It serves as a financial parameter that helps a company to control its finance and develop a better strategy for long term financial planning, and to utilize short term and long term funds. What is a funds flow statement analysis? Fund flow statement analysis is a comparison between various aspects of a Balance Sheet. While evaluating this statement, it is also vital to understand all the aspects. If the asset section of a balance sheet experiences growth, it implies that the company has purchased assets by spending funds. These assets might then result in the inflow of funds in the future. Here are some examples: - Fixed assets - Short-term loans - Long-term loans - Cash and cash equivalents - Present investments On the other hand, if the assets section shows a decline, it means that the company has sold some of its assets to maintain fund inflow. In a Funds Flow Statement, any increase in liabilities means the organization has funds inflow which needs to be paid. Some of the examples are: Conversely, a decline in liabilities implies that the current obligations have been satisfied. Image source: Unacademy Cash flow visibility is more important than you think. Read why here Interpreting a fund flow statement: To remain financially sound, part of every company's accounting process should be to frequently analyze its fund flow statement (and other financial statements) to make appropriate business decisions. Fund flow statements, along with profit and loss account statements and balance sheets portray the company’s present financial position when approaching banks, investors etc. for working capital funds and loans. Today, most businesses use advanced technology for accounting to draw up these complicated financial statements instantly. Image source: indiafreenotes When fund flow statements show an increase in working capital: This situation arises when the net working capital of a business has experienced an increase in current assets. This can be defined by increased receives or other assets, or a decrease in current liabilities. This indicates a company's liquidity position, showing that funds can now be used effectively by the company to meet changes in working capital requirements, pay its dividends or pay off some of its short term outstanding loans etc., from its long-term sources. Such a company is financially healthy and a good bet for its capital investors. When fund flow statements show a deficit or decrease in working capital: This situation comes about when the company has fewer long-term sources of funds, and an increase in the application of funds. In these circumstances, a company may need to raise a loan to meet its commitments. It's crucial for fund managers to have a deep understanding of the company’s funds flow statement, and investor input, as it reflects changes in sources of capital and fund utilization purposes. The excess or deficit in a company’s current liabilities and assets can only be effectively viewed and scrutinized in the funds flow financial statement rather than the income statement or balance sheet. Fixed assets and current assets A fund flow statement will also reveal information about a company's fixed and current assets. Noncurrent - or fixed assets are a company's long-term investments for which the full value will not be realized within the accounting year. Examples of noncurrent assets include investments in other companies, patents, property, plant and equipment. So, a company using its long term funds flow for fixed assets is generally regarded as the right utilization of funds and these details are revealed a by fund flow statement. Current assets are any assets that can be converted into cash within a period of one year. This counts products that are sold for cash as well as resources that are consumed, used, or exhausted through regular business activities that are expected to provide a cash value return within a single accounting period. What if a company uses its short-term funds to finance its long-term assets? This is not an ideal situation, and indicates that a company could find itself in a cash-crunch situation. Once an investment is made into long-term assets using short-term funds, the company will not be in a position to quickly convert those assets into liquid cash due to the nature of the investment. This could seriously affect its ability to repay short-term obligations. A funds flow statement helps explain the source of funds and its utilization or application, allowing the users of financial information to interpret and know the impact on the business. Image source: indiafreenotes What is free cash flow? Free cash flows (FCF) refer to how much cash a company generates after allowing for cash outflow to support operations and maintain its capital assets. Unlike earnings or net income, free cash flow is a measure of profitability that excludes the non-cash expenses of the income statement and includes spending on equipment and assets - as well as changes in working capital from the company balance sheets. What Is Net Present Value (NPV)? Net present value (NPV) refers to the difference between the present value of cash inflows and the present value of cash outflows over a period of time. NPV is used in capital budgeting and investment planning to analyze the profitability of a projected investment or project. Ten key benefits of a fund flow statement 1. Shows financial position. A funds flow statement helps indicate the addition in profits, which is a boon to shareholders. 2. Indicates addition of share capital. The fund flow statement can highlight changes in share capital. 3. Shows addition or reduction in share premium. The fund flow statement shows fluctuations in share premiums. This increases when shares are issued at premium or when preferential shares or debentures are reduced and the funds flow statement shows key information at a glance. 4. Reveals profit or loss of operation. The fund flow statement clearly shows whether an organization is earning profit or sustaining a loss. 5. Reveals addition in long-term borrowings. The statement can show the additional amount borrowed by issuing debentures. 6. Indicates decrease in working capital. The statement shows the reduction in working capital (i.e., when current assets are less than current liabilities). 7. Fund flow statement acts as a guide. The statement allows management to learn about future problems, needs, and fundraising requirements, helping the company to avoid financial problems. 8. Helpful in sound dividend policy. Sometimes, a company may have sufficient profit, yet it is advisable not to distribute dividends due to lack of cash or liquidity. The fund flow statement is useful in informing sound dividend policy. 9. Helpful in long-term borrowings. Before advancing long-term loans, lenders may ask for several years of fund flow statements to learn the firm’s creditworthiness. 10. Useful information for investors. Before investing, some investors study a company’s fund flow statements to know how funds are raised and used (e.g., whether funds are adequate for the payment of interest and principal sum). Payment transaction monitoring for improved fund flow visibility Through valuable data insights, led by information and payments data, a business can improve profitability, optimize revenue and cut costs. Properly analyzed data gives clear visibility into an organization's financial situation. In a payments environment, transaction monitoring identifies performance issues, and detects fraud and other anomalies. Payment analytics tools allow a business to take historical data and apply it to things that are happening to a business right now, creating cash flow visibility. Performance management through analytics is important to any business with a financial department. Every business needs to be able to see their cash flow and have the means to control it. IR's Transact suite of payments solutions IR Transact simplifies the complexity of managing modern payments ecosystems, bringing real-time visibility and access to your payments system. Transact gives organizations unparalleled insights into transactions and trends to help turn data into intelligence, and assuring the payments that keep you cash flow positive. With dynamic visualization tools, businesses easily get a clear view of all this information to make proactive management decisions. Transact offers a thousand points of reference, from a single point of view. Read more about real time analytics here.
<urn:uuid:03470fb2-eb51-47fa-8bfd-22a2c0bce415>
CC-MAIN-2022-40
https://www.ir.com/guides/fund-flow-statement
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00044.warc.gz
en
0.932587
2,337
3.34375
3
The act of interconnecting two or more conductive bodies to establish electrical continuity and conductivity. Listed on the nameplate of a given electric motor, the baseline speed is the full-load or running speed or rpm of the motor when it is operated at its rated voltage, rated full-load current, rated frequency (AC motors), and rated torque (horsepower rating). Branch-Circuit Selection Current (BCSC). Normally abbreviated to the acronym BCSC, this term describes the value in amperes to be used instead of the rated-load current in determining the ratings of the hermetic refrigerant electric motor-compressor branch-circuit conductors, disconnecting means, controllers, and branch-circuit short-circuit and ground-fault overcurrent protective devices wherever the running overload protective device permits a sustained current greater than the specified percentage of the rated-load current. The value of the BCSC will always be equal to or greater than the nameplate-marked rated-load current of the hermetically-sealed refrigerant motor-compressor. The maximum torque that can be developed by a given electric motor with the rated voltage applied at the rated frequency (AC motors), without an abrupt drop in rotor speed.
<urn:uuid:62ae51f6-6391-41f8-a089-078c21c2026c>
CC-MAIN-2022-40
https://electricala2z.com/glossary/electrical-engineering-terms-b/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00044.warc.gz
en
0.862337
274
3.40625
3
The amount of APTs, ransomware and phishing attacks have been growing steadily for the last few years. As a result, cyber security professionals have been looking for better methods to protect organizations. In this article, we will discuss one of those newly popularized methods: cyber security ontology. What is Cyber Security Ontology? Although it started gaining popularity in the recent years, cyber security ontology is not a new concept. It was first coined around 2012 by Carnegie Mellon University’s CERT program. When you hear the term cyber security ontology, it might remind you of the philosophical concept of ontology, which refers to the branch of philosophy that deals with the nature of being. Yet cyber security ontology has nothing to do with philosophy. Instead, this term refers to a set of categories, concepts and ideas within the framework of a specific area or domain. The most prominent feature of a cyber security ontology is the fact that the relationship between all items in the set are illustrated. The idea behind a cyber security ontology is the need for a common language that includes basic concepts, intricate relations and main ideas. With the creation of a proper and cohesive cyber security ontology, the members of the cyber security community across the globe can efficiently communicate and develop a shared understanding regarding the prominent ideas within the field. Moreover, cyber security ontologies are unique in the way that they also include the relationships between each entry within an ontology. This allows the cyber security professionals to make faster and better decisions. Also, being able to see the relationships between incidents, events and concepts provides a valuable insight. Although cyber security ontologies have been gaining popularity over the last few years, there is an ongoing debate regarding whether they are actually useful and necessary. sargue that such taxonomies allow cyber security professionals in different organizations or even in different countries to communicate faster and more efficiently. Moreover, they state that ontologies can be very beneficial for describing critical vulnerabilities, risky exposures and weak spots that can especially harm mobile-enabled organizations and employees. In addition, some organizations report that employing cyber security ontologies helped them discover new product capabilities and use their resources more efficiently. On the other hand, some cyber security professionals believe that cyber security ontologies are rather stagnant and hinder possible updates on the definitions of the items in them. As the attacks change, our defence systems and precautions change. Naturally, definitions of once-straightforward concepts and ideas might need some updates. From our perspective, each organization faces different challenges when it comes to cyber security. That is why ontologies and taxonomies can be very beneficial for some organizations while being completely useless for others. It is up to you and your cyber security professionals to decide whether such an approach would be useful for you. Learn about the model that provides the backbone for how an organization should be assessing and handling vulnerabilities. Integration is one of the most critical features that every security product should have. But, unfortunately, this is not a case when it...
<urn:uuid:39f6eb96-0788-47cd-b3fb-b1d5fe0189a5>
CC-MAIN-2022-40
https://www.logsign.com/blog/what-is-cyber-security-ontology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00044.warc.gz
en
0.937983
624
2.828125
3
Energy saving computer? Check. Good PC power management habits? Of course! Could you still make your computing habits greener? Probably. A recent story from TechTarget described what cloud computing can bring to the table as far as saving energy is concerned. And with market research firm Verdantix predicting that carbon emissions from information communication technology equipment will double from 2 percent to 4 percent of total emissions by 2020, strategies for reducing emissions are more important than ever. “A large percentage of global GDP is reliant on ICT – this is a critical issue as we strive to decouple economic growth from emissions growth,” Paul Dickinson, executive chairman of the Carbon Disclosure Project, said of the numbers from Verdantix. “The carbon emissions reducing potential of cloud computing is a thrilling breakthrough, allowing companies to maximize performance, drive down costs, reduce inefficiency and minimize energy use – and therefore carbon emissions – all at the same time.” TechTarget said a cloud solution is far more efficient and environmentally sound that most other types of premises-based systems, which typically include storage infrastructure, servers and other equipment that could end up creating significant emissions. The website said things like multi-tenancy in the cloud, which means sharing infrastructures, and using a cloud to improve server utilization are great ways to control power usage and limit emissions. Another study by Verdantix said the cloud has some other great perks, including the ability to save companies a lot of money. A study by the group, “Cloud Computing: The IT Solution for the 21st Century,” forecasts that by 2020, firms in the United States may be able to achieve $12.3 billion in energy savings because of the cloud. Do you use the cloud at home or at work? Do you feel like it could help you with PC power management? Let us know your thoughts on this rising technological platform!
<urn:uuid:814e0609-8640-4a71-94f3-133e57b80cd6>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/it-adopts-the-cloud-carbon-emissions-start-to-drop
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00244.warc.gz
en
0.944025
386
2.75
3
Hoppy beers are famous as a driver of craft brewing. But the challenging taste of hops goes far beyond the palate. According to a new study from Scripps Research scientists, the bitter flavor literally reaches into your gut. Moreover, chemicals in hops called isohumulones may help control obesity, type 2 diabetes and other diseases. Intestinal taste receptors detect isohumulones, according to the study. While the work was performed in mice and needs to be confirmed in humans, it’s already known that people also have taste receptors in their gut. Previous research indicates that intestinal taste receptors influence the production of hormones as well as appetite, said Enrique Saez, a Scripps Research associate professor. And hops extracts have been shown to reduce weight gain and decrease insulin resistance. So the study fills a gap in research between the observed effects of hops extracts and the chemicals and molecular mechanism causing the effects, Saez said. The study was recently published in the journal Molecular Metabolism. Isohumulones are being studied by Seattle’s KinDex Pharmaceuticals as therapy for metabolic diseases such as Type 2 diabetes and polycystic ovary syndrome. The company is studying the compounds in people under the name KDT501. KinDex asked Saez and associates to study isohumulones to better characterize what they do. With that knowledge, KinDex could optimize the drugs, said Saez. He is on the KinDex advisory board with colleagues Paul Schimmel and Ben Cravatt. “We were quite surprised when reviewing the literature,” Saez said. “It turned out that these taste receptors are expressed not only in the mouth, but also in the gut, the airway epithelia, the liver, and some other organs.” These receptors appeared to have evolved as protection against eating bitter substances, which are often poisonous, Saez said. Isohumulones work indirectly. The bitterness of beer is measured according to the International Bitterness Units scale, with one IBU corresponding to one part-per-million of isohumulone. When beer is exposed to light, these compounds can decompose in a reaction catalyzed by riboflavin to generate free-radical species by the homolytic cleavage of the exocyclic carbon-carbon bond. The cleaved acyl side-chain radical then decomposes further, expelling carbon monoxide and generating 1,1-dimethylallyl radical. This radical can finally react with sulfur-containing amino acids, such as cysteine, to create 3-methylbut-2-ene-1-thiol, a thiol which causes beer to develop a “skunky” flavor. They stimulate release of a hormone called glucagon-like peptide-1, or GLP-1, that works with insulin to decrease blood sugar levels, he said. The chemicals also promotes satiety. “It makes you feel fuller, and other hormones that this bitter taste receptor also regulates limit the absorption of nutrients in the gut,” he said. “So in effect it probably limits absorbing these potentially poisonous compounds.” A mimic of GLP-1 was developed by San Diego’s Amylin Pharmaceuticals as a diabetes medication, exenatide. It was discovered in an unlikely place, the saliva of the Gila monster. The drug is sold as Byetta and in extended release form as Bydureon. It was attractive enough that Amylin was purchased for $7 billion by Bristol-Myers Squibb in 2012. However, exenatide must be injected, limiting its usefulness, Saez said. “Isohumulones are small molecules that you can eat,” he said. More information: Bernard P. Kok et al. Intestinal bitter taste receptor activation alters hormone secretion and imparts metabolic benefits, Molecular Metabolism (2018). DOI: 10.1016/j.molmet.2018.07.013
<urn:uuid:25b44933-8e47-422c-9682-4cf91df13aa1>
CC-MAIN-2022-40
https://debuglies.com/2018/09/26/beers-bitter-flavor-literally-reaches-into-your-gut/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00244.warc.gz
en
0.940294
885
3.15625
3
This technological advance raises legal concerns: Based on current law, if a robot conceived the idea for an “invention,” this invention might not have the possibility for patent protection in the United States, possibly leaving the owner or lessee of the robot to depend only on Trade Secret law for the protection of the invention. The situation might be different under the laws of other countries. However, owners or lessees of such machines in the United States should be forewarned that they may confront difficulties in obtaining patents or other protection for inventions made by such machines. Currently, there are many areas of technology in which automated machines or robots are involved in the process of invention. For example, electronic circuit design relies on Monte Carlo analysis using Spice, but this involves human inputs and human analysis of the results. Existing DNA/amino acid sequencing machines provide inventors with information that inventors later patent, of course. There is a difference, because such machines are automated and not capable of cognition, and much of the inputs into such machines are provided and selected by humans. Also, the resulting data and results are analyzed and verified later by humans. Another case in point involves high-throughput compound screening to identify promising compounds for pharmaceutical, agricultural and other purposes — but again, the inputs into screening machines are human, and the outputs are analyzed by humans. There are numerous similar examples. In contrast, the robots discussed in the report in Science seem to have an independent ability to generate and verify hypotheses, perhaps leading, in patent parlance, to independent “invention” by the robot, not the human. The issue is whether the owner or controller of the robot would be eligible to obtain patent protection for an invention conceived by a robot. Who not What American patent law (35 U.S.C. Section 101) limits what is considered patentable subject matter, and limits the invention to the discoverer: “Whoever invents . . . may obtain a patent . . . .” Section 101 uses “whoever” — not “whatever.” In addition, 35 U.S.C. Section 102 says that “a person shall be entitled to a patent unless . . .” (emphasis added), and proceeds to set forth a number of exceptions to patentability. That preamble to section 102 limits the ability to patent to a person — probably not extending it to a machine. Thus, a person using a robot that might make an invention may face some serious statutory impediments to patent protection. The situation is compounded by Section 102(f), which states that one cannot obtain a patent if “he did not himself invent the subject matter sought to be patented.” Thus, Section 102(f) prevents one from obtaining valid patent protection if the idea in question comes — even in private — from another source (e.g., a robot). Of course, there is the possibility that the programmer of the robot could be the inventor if the robot were given the hypotheses to test and parameters to evaluate, in which case the human would probably be the inventor on the theory that the robot was simply the “hands of the inventor.” But that does not seem to be the case with the robot reported in the Science article. As mentioned previously, another potential protection for robotic inventions might be found in U.S. Trade Secret law. However, situations would probably arise in which, once a robotic invention were commercialized, the invention could readily be reverse engineered. Reverse engineering of an unpatented product cannot be prevented under Trade Secret law, which could cause the product developer to regret not having patent protection. Also, Trade Secret law does not prevent subsequent independent development by another. European law provides an illustration of how things might be different in other countries. Article 58 of the European Patent Convention sets forth the “entitlement to file a European patent application” thusly: “A European patent application may be filed by any natural or legal person, or any body equivalent to a legal person by virtue of the law governing it.” That language seems to provide some wiggle room for the possibility of a robot being an inventor in Europe. Yet one would still have to name the inventor on a European patent application, which leads to an interesting question: Would the robot’s central processor be listed as the inventor? If so, it might need to be identified by serial number and where it resides. Interesting possibilities.Trade Secret law — at least in terms of reverse engineering and independent development — is quite similar in Europe and the U.S. If a robot were to be or become an “inventor” under the laws of the U.S. or Europe, it would seem that the owner or lessee of the robot would probably be the owner of the “invention” rather as employers are generally the owners of employees’ inventions. However, owners or lessees of such robots should do something akin to what employers do with employees: still get solid written contracts from the developers of the robots to make sure robot inventions are owned by the owner or lessee. Patented by HAL 9000? One final thought: We might someday ask whether a robot that gains true cognition, or self-awareness, should be considered a “person” for the purposes of patent law. Although the question may seem a long way off — and perhaps a bit too much for any court to decide now — that day may be coming sooner than we expect. Robert W. Stevenson and Joseph F. Murphy are patent attorneys and Thomas J. Clare is technical advisor at Caesar, Rivise, Bernstein, Cohen & Pokotilow, a law firm that specializes in intellectual property law and litigation.
<urn:uuid:77cd73d8-1837-4967-9871-7ba9dac9a31f>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/robots-and-the-law-will-the-real-inventor-please-stand-up-66915.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00244.warc.gz
en
0.948438
1,200
2.890625
3
VPN services and proxy servers aim to keep your browsing experience private, but is one better than the other? Each technology has its pros and cons, so if you’re unsure which one to use, this is the article for you. Learn the differences between VPNs and proxy servers. Proxy Server – What Is It? To put it in a nutshell, proxy servers are mediators standing between you and the internet. This means that a proxy server receives your data and sends it further – to your favorite websites, services, and other servers. Normally, when you try to access a website, your computer sends a request and waits for a response. The connection is direct, and the site’s owner sees you (your IP address) as the visitor. But if you access the website through a proxy server, your real IP address is not visible to the website’s owner. Instead, they see the proxy’s address because the request comes from it and not from you. Why Do People Use Proxy Servers? The main purpose of proxy servers is to hide your real IP address. Why would you want to hide it, you may ask. Well, your IP address reveals a lot of information about you, including your physical location – country, sometimes even city. It’s also your identification number, enabling advertisers to track your activity and profile you for their profits. This is why some people choose to use proxy servers. Pros of Proxy Servers: - They hide your IP address and location. - They can be used to access some restricted content, e.g., in countries censored by the government. - Many proxy servers are free. Cons of Proxy Servers: - Proxy servers do not provide data encryption, which means they are no more secure than a direct connection to the internet. - Since most proxy servers are free, it is difficult to say if they are safe. Server owners may have malicious intentions and gather data that passes through their proxies. What is a VPN? A VPN (Virtual Private Network) somewhat resembles proxy servers. The main similarity is that it also acts as a mediator between you and the internet. When you access a website via a VPN, your data goes further to the VPN server. So, where is the difference? The most important distinction between VPNs and proxy servers is that the former encrypts your data. This means that only your computer and the VPN server “know” what is being sent inside the data packets. Nobody else – no advertisers or government agents – has access to your information sent through this encrypted tunnel. Why Use a VPN? VPNs are tools designed to increase your online privacy. They are similar to proxy servers in that they intercede in your connection but offer more excellent protection from hackers, spies, and other “outsiders.” Pros of VPNs: - VPNs provide data encryption and hide your activity from others. - Not even your ISP can see what you’re connecting to when you use a VPN. - They hide your real IP address and physical location. - VPNs can be used to access restricted data in censored countries (this is also safer than a proxy server because of data encryption). Cons of VPNs: - Since all of your data goes through the VPN server, its owner has access to all of it. You must be sure that the VPN you use has a no-log policy. Otherwise, your data may be collected and sold. - Good VPNs are sometimes expensive. The free ones may come with data limits or lower speeds. Should you use a proxy or a VPN? Both proxies and VPNs are designed to enhance your privacy when browsing the web. You should at least think about using one of them, but which one? It all depends on you and your preferences, but here’s a hint: VPNs offer everything that proxies have and more, just like a premium membership. Using a VPN in Canada, the USA, Japan, or any other country can provide you with access to geo-blocked content. Proxies can also do this, but they don’t offer a secure, encrypted connection. Both tools are pretty similar: they can mask your IP address and location. If you’re still unsure which one to use, we encourage you to learn more about the differences between proxies and VPNs to make the best decision.
<urn:uuid:91a84fd2-32b5-4ee0-8705-d03169903bf0>
CC-MAIN-2022-40
https://www.urtech.ca/2022/07/should-you-use-vpn-proxy-or-neither/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00244.warc.gz
en
0.947739
915
2.734375
3
A rising population and urban sprawl have left many wondering about the challenges of feeding future generations. One possible solution focuses on simultaneously maximizing the use of dwindling farmland while implementing more efficient planting, growing and harvesting technology. Regina, Saskatchewan-based Dot Technology Corporation offers their autonomous DOT Power Platform - a U-shaped, diesel-powered unit that’s basically a mechanized frame, or tractor, designed to connect and carry a variety of farming implements. So instead of hooking up to a planter, plow or maneuver spreader, the 4.2-ton DOT connects beneath and on three sides of the field implement with a remote control. In addition to a hauling capacity of 40,000 pounds, the DOT features four hydraulically-driven wheels with individual hydrostatic pumps and a top speed of 12 mph. Similar to the mapping process used by warehouse robots, the farmer uses software and a GPS receiver to generate a path plan for each field, including boundaries and obstructions. Embedded sensors constantly update this information so DOT can make safe and efficient decisions about its navigation path. It also utilizes a Windows Surface Pro tablet to communicate with a local network in transferring data from the field that can include virtual reality mapping, fuel usage and vehicle performance. The DOT can run in full autonomous mode in fields or by remote control closer to barns and equipment sheds. However, autonomous mode cannot be activated if any portion of the implement is outside of the pre-determined field boundary. According to the company, the machine can be run as long as there’s fuel in the 75-gallon diesel tank, saving farmers an estimated 20 percent in fuel, labor, and equipment costs. Next steps will include an upgraded communication system that will allow multiple DOTs to cooperate in the field, and working with additional vendors in developing a wider range of implements. Currently, the only implements available are for planting and spraying. The company hopes to have a half-dozen DOTs on farms in Saskatchewan later this year before ramping up production and commercial availability in 2019.
<urn:uuid:8d165607-9e55-4550-997a-73ff23a3816c>
CC-MAIN-2022-40
https://www.mbtmag.com/home/video/21099396/greener-acres
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00244.warc.gz
en
0.920003
421
2.796875
3
The Internet service providers (ISP), are there to let us get onto the Web. Soon they might start keeping us off the Internet if the US government imposes new laws and regulations. There were hearings held in The US Congress over this issue. It is not a secret for anyone that as the Internet looms larger in American politics, US Congress faces a tough debate over how to balance election campaign finance restrictions and the free speech rights of web sites and internet bloggers. US election campaign finance reform had been debated for years without any major changes to laws. The Bipartisan Campaign Reform Act however was passed in 2002. It banned national political party committees from accepting or spending “soft money” contributions. The decision was challenged in McConnell v. FEC in 2003. “Hard” and “Soft” Money Campaign money in the United States election system comes in two forms: “hard money” and “soft money”. “Hard money” refers to donations made directly to political candidates. These must be declared with the name of the donor, which becomes public knowledge, and are limited by legislation. “Soft money” are money that are not made directly to a candidate’s campaign. Soft money refers to contributions given, at least nominally, to a political party for “party building” activities rather than for the direct support of particular candidates and campaigns. Hoping to influence a Federal Election Commission (FEC) rule-making House leaders promoted, March this year, a bill to exclude the Internet from restrictions on public communications under campaign law. Critics however hinted the real goal is to knock a hole in the 2002 ban against using “soft money” contributions from corporations and labor unions for political advertising. The Wall Street Journal reported that conservative Redstate and liberal Daily Kos political blogs joined forces, saying the FEC must “tread lightly” for fear that new campaign reporting rules will “chill free-spirited discourse online”. Proponents say money can not dominate the market as it might with broadcasts for a single candidate or point of view. Opponents argue that dot-com political ads are potential force, and a blanket exemption for paid content on blogs would invite abuse as well as efforts to unravel campaign-finance limits. This debate is about big ideas and both sides say they receive bipartisan support. A debate in November 2005 showed a narrow House majority of 225 members supported exempting Internet communications. Is Network Neutrality an answer of apprehensions about web’s role in politics? A whole virtual world emerged only a decade since the commercialization of the Internet began. The number of users, the speed of internet connections and the variety of things everyone can do online created web culture. This process has been accomplished with light regulatory and in some countries even out of any regulations. That means officials have not been involved in decision makeing process over how to develop the Net. There were an important “exception to the rule” – the pornography and gambling. Those have been put under regulations over the Internet. Having a whole industry out of regulations has produced some results. Web surfers can make cheap or free phone calls, legally download music, movies and software. They can also watch a network television shows online. Some of the internet business ideas succeeded, others failed. Many of them however would never have been even tested if a regulatory bodies such as national goverments or the U.S. Federal Communications had to say “Yes” first! Do We Need To Regulate What Isn’t Broken Becoming a part of political agenda is a worst thing that may happen to Internet. The web improved a lot out of government control and probably is a better ideia to keep it less regulated. Otherwise we will step back from the idea of open market! The good news is that things like “open access networks” and “free trade” are both conservative an liberal values and they can not be put on partisan political debate! It is better to keep the web clean out of political debate instead of putting restrictions because of the election and political agenda. Discuss this topic at B10WH Web Hosting Forums!
<urn:uuid:b8c89113-da49-4d82-a22d-37d5534c7490>
CC-MAIN-2022-40
https://www.dawhb.com/web-as-a-political-agenda/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00244.warc.gz
en
0.954551
878
2.5625
3
The 2014 Google Science Fair is now open for entries from 13- to 18-year-old students around the world who have great ideas for incredible science projects, and this year’s contest offers some great new prizes for the winners. The call for entries for this year’s fourth annual event was unveiled by Clare Conway of the Google Science Fair team in a Feb. 12 post on the Google Official Blog. “What if you could turn one of your passions into something that could change the world?” wrote Conway in her post. “That’s just what thousands of teens have done since the first Google Science Fair in 2011. These students have tackled some of today’s greatest challenges, like an anti-flu medicine, more effective ways to beat cancer, an exoskeletal glove, a battery-free flashlight, banana bioplastics and more efficient ways of farming.” This year’s competition, which is being conducted in partnership with Virgin Galactic, Scientific American, LEGO Education and National Geographic, is now accepting entries through May 22, and the winners will be announced at a finalist event at Google headquarters in Mountain View, Calif., Sept. 22. The grand prize winner of this year’s Google Science Fair will join the Virgin Galactic team at Spaceport America in New Mexico as the crew prepares for a space flight, and then the winner will be among the first to welcome the astronauts back to Earth after that space voyage, wrote Conway. The winner will also receive a 10-day trip to the Galapagos Islands aboard the National Geographic “Endeavour” and a full year’s digital access to Scientific American magazine for his or her school. Other winners by age category will have a choice between going behind the scenes at the LEGO factory in Billund, Denmark, or spending time at either a Google office or at National Geographic’s offices, wrote Conway. Several new awards are also being added to the competition this year. The Computer Science Award will be given to a project that champions innovation and excellence in the field of computer science, while local award winners whose projects address issues facing their communities will be honored in select locations, according to Google. Also up for grabs is the annual Scientific American Science In Action award that will honor a project that addresses a health, resource or environmental challenge, according to Google. The winner of that prize will receive a year’s mentoring from Scientific American and a $50,000 grant toward their project. “Stay updated throughout the competition on our Google+ page, get inspired by participating in virtual field trips and ask esteemed scientists questions in our Hangout on Air series,” wrote Conway. “If you need help jump-starting your project, try out the Idea Springboard for inspiration.” In the 2013 Google Science Fair, three students, one each from the United States, Canada and Australia, were selected as winners from thousands of entries that came in from more than 120 nations, according to an earlier eWEEK report. The top 15 projects in 2013 included a multi-step system created for early diagnosis of melanoma cancers to the invention of a metallic exoskeleton glove that assisted, supported and enhanced the movement of the human palm to help people who suffer from upper-hand disabilities. In the 13- to 14-year-old age group in 2013,Viney Kumar of Australia was named the winner for his project, called “The PART (Police and Ambulances Regulating Traffic) Program,” while in the 15- to 16-year-old age group, Ann Makosinski of Canada won for her “Hollow Flashlight” project. “ In the 17- to 18-year-old age category, Eric Chen of the U.S. won for his project, a “Computer-Aided Discovery of Novel Influenza Endonuclease Inhibitors to Combat Flu Pandemic,” which was also selected as the competition’s grand-prize winner. Another student participant, Elif Bilgin, of Istanbul, Turkey, was named as the winner of the competition’s Scientific American Science in Action Award and of the Voter’s Choice award with her project creating plastic from banana peels.
<urn:uuid:58960108-1a38-4dfa-9a3d-8478e11b75c4>
CC-MAIN-2022-40
https://www.eweek.com/innovation/google-opens-its-2014-science-fair-for-entries/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00244.warc.gz
en
0.956007
888
2.578125
3
Analytics in Practice Have you ever played the game Twenty Questions? In this classic guessing game, one person chooses a mystery object. Once they’ve chosen, they tell the rest of the players only the broad category this object fits into, such as “place.” The players then try to win the game by identifying the mystery object using only twenty yes or no questions. If you’re new to exploratory data analysis, I encourage you to think about it a bit like Twenty Questions. Just like this game, the final answer is not clear from the get-go. You’ll need to begin by casting a wide net, which you will narrow down as you learn more. Each line of inquiry will build upon those that came before it. Finally, just like in Twenty Questions, an inquisitive nature will be rewarded, because questions are the start of answers. Casting a Wide Net: Visualize Your Raw Data While in the game of Twenty Questions you are given a category as a starting point, in the world of analytics you are given a dataset. Your first mission is to investigate what that dataset includes. Read each column name to get an idea of what information your dataset encompasses. Explore with an open mind and think about what insights different variables might provide for your analysis, report, or business. Your dataset is likely to be much too large to get a visual read on variables simply by skimming columns and rows. Instead, make use of graphical representations to explore variables one at a time, such as histograms, pie charts, or box plots. The intent here is not to memorize every data point, but to understand the general structure of your variables. If you’re using Alteryx Designer for your analysis, be sure to add a Browse Tool to take advantage of the all-new holistic data profiling. Here you can quickly identify the shape of your data through a series of auto generated charts, graphs, and data statistics. Narrowing In: Calculate Descriptive Statistics Have a big picture idea of what your dataset includes? Alright, now let’s take a closer look at the variables you’re particularly interested in. Descriptive statistics, sometimes referred to as summary statistics, give a quick and simple numerical description of your data. This includes mean, median, mode, minimum value, maximum value, and standard deviation. Descriptive statistics may help you identify things that you did not catch when you plotted your variables, such as outliers or skewed distributions. You can calculate descriptive statistics for discrete or continuous quantitative variables, along with categorical variables. Building Upon What You Know: Investigate Relationships Much like in the game of Twenty Questions, it’s important to think critically about what information you’ve gotten so far, and how it all fits together. As you unearth more insights about your data through exploration, you’ll also want to consider the relationships between variables. Bivariate dimensionality is the relationship between two variables per subject, while multivariate dimensionality is a measurement made on many variables per subject. It's also important to consider the degree of correlation between variables. If your data exploration leads you down the path of advanced analytics where you employ regression modeling, using variables that are too closely related may yield results that don’t reveal any new or real insights. Data exploration in the early stages can help you to identify variables that are truly independent, and to avoid multicollinearity. Data exploration allows you to get familiar with your variables, form working hypotheses, and discover complex relationships early on in your analytics process. When exploring, think of yourself like a data detective playing Twenty Questions. Remember to start broad and then narrow in, and to always approach your data with an attitude of curiosity and intrigue. Data always has something to tell us, we just need eyes and ears that are alert to see and listen. Learn more about exploratory analysis, why it's important, and the future of data exploration. Read This Next How to Increase ROI Through Data Democratization See 4 things you should do to increase ROI through democratization. Three Ways 7-Eleven Is Optimizing Its Retail Analytics With Alteryx Throughout the pandemic, 7-Eleven has used Alteryx to make data-driven decisions. KPMG MADA Program Teams with Alteryx SparkED KPMG U.S., an industry leader in data-driven insights, is helping prepare the next generation of audit professionals through its Master of Accounting with Data and Analytics (MADA) program.
<urn:uuid:dfd42e70-47ae-4c6d-9b2c-9d0b9be09c73>
CC-MAIN-2022-40
https://www.alteryx.com/input/blog/a-beginners-guide-to-exploratory-data-analysis
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00244.warc.gz
en
0.905642
955
2.953125
3
About The White PaperCyber threats are rapidly increasing in the business world as cyber criminals continue to discover new ways to evade defenses and target valuable data. According to the ITRC Data Breach Report, over 169 million personal records were exposed in 2015 alone from more than 700 publicized breaches across the financial, business, education, government and healthcare sectors. Impacted by data growth, new technologies and evolving cyber threats, organizations must work even harder to effectively set the strategies, framework and policies for securing all of their information. This white paper will provide best practices and guidelines by answering the following questions: - How is Information Security Governance defined? - What are the misconceptions about Information Security Governance? - Why is Information Security Governance important? - Who is responsible for Information Security Governance?
<urn:uuid:c135531a-2207-409a-9efa-c8aafb59bc0b>
CC-MAIN-2022-40
https://www.diligent.com/insights/white-paper/five-best-practices-for-information-security-governance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00244.warc.gz
en
0.921895
162
2.84375
3
Advanced FPGA-based Host CPU Offload An ANIC adapter is configured to intelligently steer packets in to specific host packet buffers (HPB). The benefit of packet steering is that each thread in a multithreaded application (often utilizing multiple CPU cores) can process packets from its own HPB. In this way a security or networking application can take advantage of parallel processing of data thus achieving higher levels of speed and efficiency. There are three different ways to steer packets into a HPB: - ANIC adapter is configured to use its own internal algorithms to evenly and efficiently distribute or load balance packets across a specified number (from 1 to 64) of HPBs. This is done to ensure that no processing thread is overwhelmed with data while others are starved. - Based upon the results of packet filtering, packets can be steered to specific HPBs. For example, packets that match a specific packet filter rule might all be steered to the same HPB for processing. - Based upon flow classification, packets are steered to specific HPBs. In other words, specific flows are identified and explicitly steered to a specific HPB for processing. Packet traffic is typically transferred across the PCIe bus (DMA) for consumption by the host application. However there may be circumstances under which select traffic must be locally redirected or retransmitted out of one of the ANIC network ports. Packet filtering or flow classification can be used to identify which specific packets or flows must be redirected out a given port.
<urn:uuid:bd79bffe-a54a-471f-b2f0-63c0c00b66cb>
CC-MAIN-2022-40
https://accoladetechnology.com/portfolio-item/packet-steering/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00244.warc.gz
en
0.90009
324
2.515625
3
Textbooks are so last century. The end of the traditional textbook may be near, as schools increasingly look to mobile technology. Schools from Alabama to the Isle of Wight in the U.K. have begun ditching textbooks in favor of downloads on portable electronics like iPads. For example, last year, Lake Minneola High School in Florida distributed about 2,000 iPads to students, according to the Orlando Sentinel. Such technology offers cash-strapped school districts an opportunity to save money. Casey Wardynski, the school superintendent in Huntsville, Alabama, told The Huntsville Times the district expects to save $1.8 million by switching to tablets over textbooks. “It will be the first major transition of a district in the U.S. from paper to digital,” Wardynski said. “We will fade out textbooks and fade in digital books.” Now, schools have to buy new textbooks for a variety of subjects each year. With tablets, the schools invest in the technology once and then provide updates when needed. This way, schools make a big upfront investment, as opposed to one yearly, according to The Huntsville Times. Proponents also say electronic texts better engages students. It also frees up student backpack space. “It will make more information available to the kids and let us broaden the curriculum,” Huntsville school board member David Blair said to The Huntsville Times. “It will also keep the kids from carrying 60 to 70 pound backpacks with all their books.” Still, schools face obstacles in implementing such changes. For one, there is a need to implement classroom control over all of those additional devices to reduce theft and issues with students using the tablets for non-educational purposes, according to the Orlando Sentinel. Cost concerns related to initially purchasing the technology mean that school districts could have issues in giving new technology to students to replace the old textbooks at the beginning. For example, some schools required private donations to purchase new technology, according to the Orlando Sentinel. Textbook publishers already feel the financial heat from new portable technology. Publishers like McGraw-Hill and Houghton-Mifflin saw decreased revenue last year, according to the Wall Street Journal. This helps explain why Houghton decided in May to load some of its content onto Nook tablets. Do you think all schools should replace textbooks with tablets? How does such a change affect classroom control of technology?
<urn:uuid:ded6084f-c941-4fb9-adee-767262fc17d1>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/can-technology-take-on-textbooks
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00244.warc.gz
en
0.936962
504
2.609375
3
Cyberattacks target more than companies and governments. Millions of Americans using high-tech medical devices are also vulnerable to hackers. We got a reminder of that deadly possibility this week as the federal government announced the recall of a device used by people with a chronic illness that can be hacked. This isn’t the first time such a recall has occurred. Fortunately, there have been no reports of deaths due to cyberattacks. We’ll tell you what product was recalled and what steps are being taken to prevent deadly hacks. Hacked insulin pumps could turn deadly The Federal Drug Administration (FDA) announced it is recalling certain Medtronic MiniMed insulin pumps due to potential cybersecurity risks. In the U.S. alone, Medtronic has identified 4,000 patients who are potentially using insulin pumps that can be hacked. The small computerized pumps deliver insulin to a diabetic patient throughout the day using a catheter implanted under the skin. They are an alternative to injections. People with Type 1 or Type 2 diabetes may need an insulin pump when they require insulin to maintain acceptable blood glucose levels. The affected Medtronic devices wirelessly connect to the patients’ blood glucose meter and a continuous glucose monitoring system. Using a remote control, patients send insulin dosing commands to the pumps. Patients also can use a CareLink USB to download data from the insulin pump to monitor their own progress. Cybersecurity vulnerabilities identified in the MiniMed insulin pumps have the FDA concerned that someone other than a patient, caregiver or health care provider could connect wirelessly to a nearby pump and change the settings. This could allow hackers to over deliver insulin to a patient, leading to low blood sugar. Hackers could also stop insulin delivery, leading to high blood sugar and a buildup of acids in the blood. The FDA says it is not aware of any patients being harmed by the Medtronic MiniMed insulin pumps due to hacking. But the agency warns patients and health care providers to remain vigilant against any medical device hacks. “Any medical device connected to a communications network, like Wi-Fi or public or home internet, may have cybersecurity vulnerabilities that could be exploited by unauthorized users,” said Dr. Suzanne Schwartz with the FDA. “However, at the same time, it’s important to remember that the increased use of wireless technology and software in medical devices can also offer safer, more convenient and timely health care delivery.” The recalled pumps are Medtronic’s MiniMed 508 and MiniMed Paradigm series insulin pumps. The FDA says Medtronic is providing alternative insulin pumps with enhanced built-in cybersecurity capabilities. Medtronic also is working with its distributor to identify patients using the affected pumps. Medtronic is unable to effectively update the recalled models of insulin pumps with any software or patch to address the devices’ vulnerabilities. The FDA is working with Medtronic to address the cybersecurity issues. The agency also is helping patients with the affected pumps switch to newer models with better security. The following is a list of the recalled pumps: - MiniMed 508 (all versions) - MiniMed Paradigm 511 (all versions) - MiniMed Paradigm 512/712 (all versions) - MiniMed Paradigm 515/715 (all versions) - MiniMed Paradigm 522/722 (all versions) - MiniMed Paradigm 522K/722K (all versions) - MiniMed Paradigm 523/723 (version 2.4A or lower) - MiniMed Paradigm 523K/723K (version 2.4A or lower) - MiniMed Paradigm 712E* (all versions) - MiniMed Paradigm Veo 554CM/754CM* (version 2.7A or lower) - MiniMed Paradigm Veo 554/754* (version 2.6A or lower) The FDA is urging all patients with the affected pumps to call Medtronic at 1-866-222-2584 for information on replacement devices. Hacking a constant threat The insulin pumps are only the latest electronic medical devices the federal government has recalled or issued warnings about due to cybersecurity flaws. In April, a new study by the Department of Homeland Security found a critical vulnerability in a common health care device that thousands of people depend on each day. Homeland Security found a host of flaws and holes in a range of implantable cardioverter defibrillators (ICD) also made by Medtronic. The devices protect the patient after being implanted near their heart. The device’s onboard computer keeps track of heart rate and performance while relaying this data wirelessly to an internet connected device. Because of this sensitivity, Homeland Security posted a medical advisory warning of security vulnerabilities in a range of defibrillator products made by Medtronic. These devices have wireless antennas that don’t encrypt data when broadcasting, allowing hackers to inject custom code into the ICD with a wireless device of their own. Best-case scenario, the hackers grab private medical data. Worst-case scenario, a patient’s life is put at risk. In 2017, 460,000 pacemakers were recalled due to serious security flaws. That same year, 8,000 bugs made pacemakers hackable. Over the past few years, the FDA has expanded its initiatives to keep medical devices hacker free. You can protect yourself by staying informed with the latest cybersecurity news and alerts with Kim Komando’s free newsletters. Subscribers get frequent updates on new data breaches, security tips and tricks, as well as trusted device recommendations for a safer digital lifestyle.
<urn:uuid:434d5892-b16d-4724-8eb3-d19752528fb4>
CC-MAIN-2022-40
https://www.komando.com/security-privacy/medical-nightmare-electronic-health-devices-can-be-hacked-with-deadly-results/577149/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00244.warc.gz
en
0.923633
1,183
2.625
3
Passwords are your first line of defense against identity theft and hacking. Yet every year, the Federal Trade Commission receives hundreds of thousands of identity theft complaints—some instances of which might have been prevented with stronger passwords. Here are a few stats about passwords that should give you pause: - According to Keeper, a password management tool, about 1 in 5 users has the password “123456.” - The word “password” has been the second most popular choice for three years running. - According to Pew Research, 45% of survey respondents use the same passwords across multiple sites. Given those findings, it’s safe to say quite a few of us are what you might call “password-challenged.” In today’s world of cybersecurity and growing attacks, developing good password hygiene is one of the simplest and most basic steps you can take to keep your and your company’s information safe. It’s everyone’s favorite kind of IT support because it only takes a few seconds. Here’s what you need to know. Passwords protect you from brute-force attacks A brute-force attack sounds pretty terrifying, and rightly so. When this happens, a hacker tries to break into your account, site or applications by guessing your username and password—except instead of trying manually, the hacker uses automated software to quickly run through passwords until it guesses correctly. And if you have one of those frequently used passwords, guess what? You’re going to get hacked first. Luckily, there are a few things you can do to improve your likelihood of surviving an attack: - If your password is on this top 25 list, change it immediately. Hackers know how to use Google, so they’ll be much more likely to target low-hanging fruit. - Don’t repeat passwords across different sites. Hackers often target sites with lower security—like retail and social media platforms—to score passwords. They they’ll try these passwords on higher-security sites, such as financial organizations and email providers. Once they gain access, they can tear through your business’ assets, credit cards and petty cash. - Don’t use “Log in with Facebook” or “Log in with Google” shortcuts. These save time, but relying on them is kind of like having one key that opens your home, your office, your car… You see what we mean. Once hackers have access to that set of passwords, they can enter every site you use. Password managers help you juggle a lot of unique passwords One of the top password recommendations you’ll get these days is to not duplicate passwords on different sites and apps. But how can you do that and keep track of all your credentials when even news sites request a login just to give you the latest updates? The solution is actually a lot simpler than it seems. A password manager is a piece of software that you can use to create a single master key login for all your accounts. This key is heavily encrypted, and is never sent directly to the sites you want to use. Instead, the password manager creates multiple unique complex logins for each of your accounts and inserts them in the background, securely logging you in. Some managers are free, but you’ll need to pay a small monthly fee (usually $1 to $3) for many top applications. Check out MyITpros’ article on password managers to get the full scoop and review our recommendations. Multi-factor authentication gives you extra protection Most security experts will tell you that your best bet is to use multi-factor or two-factor authentication. Essentially, multi-factor logins have two layers: a username and password and a second set of information, like your phone number, a one-time code or a security question. Some sites, like Google, offer this as a settings option automatically and will send you a login code over SMS. But for those that don’t, you’ll have to install it yourself. Using an authentication app has the extra bonus of being a little more secure than SMS codes. Once you install the authentication app on your phone, you’ll be asked to scan an onscreen QR code. Thereafter, you’ll be able to log in directly through your phone—it’s as easy as that. Passwords may be your first line of defense against hacking, but they shouldn’t be your only defense. Keep your data extra-safe using firewalls, cloud security, encryption and a number of other techniques. You can learn more by reading through our security articles on the MyITpros blog or by talking to one of our IT consultants to get a rundown of best security practices. With our IT support, you can go from being password-challenged to a password pro!
<urn:uuid:357650db-b3b7-4c74-b654-37b716224ab0>
CC-MAIN-2022-40
https://integrisit.com/the-protective-power-of-the-password/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00444.warc.gz
en
0.936239
1,011
2.703125
3
Consumers aren’t the only ones benefiting from the internet of things (IoT). Manufacturers are also reaping the rewards of leveraging the industrial internet of things (IIoT) to increase energy efficiency and improve productivity in their factories and more. For some manufacturers, IoTT is a foreign abbreviation, but there’s time to learn. What is IoT? The IoT is a concept many users are familiar with. Even if they don’t understand the granular details, users, at the very least, grasp the basics of how the IoT impacts their daily lives, and even if they don’t, a brief overview of the concepts basics should suffice. Basically, think of the concept like this: Some physical devices connected to the internet can collect and share data with one another. Sounds simple enough, but there’s a little bit more to it than that. Consumers benefit when their connected devices collaborate. Take home automation as an example. Smart devices (simply, devices capable of connecting to other devices and networks) have changed homelife for the better. For instance, now, instead of changing room temperatures at thermostats, users can control room temperatures from their own smartphones. Better yet, some thermostats learn user preferences and adjust accordingly, potentially saving consumers on heating costs. If the IoT can save consumers time, money and energy, what could it do for industries? IoT machines leverage the IoT concept for the industrial setting. The IoT focuses mainly on bringing together machines and devices by connecting them and enabling them to monitor, collect and share data. The IoT’s main goal is to enhance the results of business operations. Manufacturers willing to take a chance on implementing IoT technologies can benefit greatly from improved productivity, predictability and efficiency. How can manufacturers benefit from IoT technologies? One of the biggest benefits of implementing IoT technologies for manufacturers is predictive analytics. For example, instead of stocking replacement parts to prepare for machine failures (this would be considered to be preventative maintenance), a system using predictive analytics would order replacement components only when needed, thereby saving manufacturers time, money and resource in the long term. While predictive analytics is a top benefit for manufacturers, there’s so much more to consider. A top challenge for manufacturers is operating with energy efficiency. Of course, in manufacturing, energy costs are high, so companies within the space are always trying to cut costs by finding ways to become more efficient. IoT technologies help manufacturers with optimizing energy consumption by collecting real-time data from machines operating across numerous facilities and analyzing it to determine where there are inefficiencies. What’s essential to the process is the technology underneath all of it. What should manufacturers know about deploying IoT technologies? Setting up manufacturers for the IoT isn’t easy. Deploying IoT technologies for businesses — especially large corporations — is way more complex than installing smart automation in residential homes. To avoid confusion, IoT deployment should be done by IT experts who’ve already successfully deployed IoT technologies for manufacturers. Also, it’s important to hire an IT provider with knowledge in not only IT but manufacturing. Otherwise, there could be a disconnect between the manufacturer’s goals and the IT provider’s vision. More than anything, it’s important for manufacturers to be able to express objectives clearly and have somewhat of an idea of how they’d like to enhance operations. (A couple of common applications of the IIoT were noted above). Ultimately, the possibilities are endless for manufacturers open to deploying IIoT solutions throughout their factories. Many manufacturers have already implemented the IIoT and are experiencing the benefits of gathering and analyzing large volumes of data.
<urn:uuid:51313e10-0109-416a-b73d-cdda441aee3d>
CC-MAIN-2022-40
https://dartmsp.com/what-manufacturers-should-know-about-iot/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00444.warc.gz
en
0.935821
762
2.828125
3
DNS over HTTPS is a protocol used to encrypt the data in the query used to perform remote Domain Name System (DNS) resolution. By using the HTTPS protocol to perform encryption between the device using the DoH client and the DoH-based DNS resolver, this method aims to prevent eavesdropping and manipulation of DNS data by man-in-the-middle attacks and other misuse of user confidential data DNS over HTTPS was initially proposed by the Internet Engineering Task Force (IETF) in late 2018 following rising concerns over malicious attacks on networks and subscribers using the DNS. Previously, DNS queries had been made in plain text from an app to the recursive DNS server using DNS settings provided by an internet service provider or other network provider. With DoH, DNS queries are disguised as regular HTTPS traffic and sent to special DoH-capable DNS servers (called DoH resolvers). The query is resolved inside a DoH request, and the reply to the user is encrypted as well. Following its introduction, some concerns have been raised about DNS over HTTPS. ISPs see a risk of being cut out of the resolution process by third-party DNS providers, which would make it more difficult to ensure quality of service and provide some value-added services such as parental controls and anti-malware. The use of a different DNS resolver might also introduce increased latency. Still, DNS over HTTPS does appear to address customer concerns about malware, intrusions, data theft, and privacy. To allow operators to take advantage of the opportunities offered by DNS over HTTPS, A10 Networks has developed a DNS security solution using the A10 Networks Thunder® Convergent Firewall (CFW) that allows ISPs to provide DNS over HTTPS services without disrupting their existing DNS infrastructure or investments. The solution helps the carrier ensure the continuity of its existing value-added services and maintain control of service quality. A10 Networks offers DNS over HTTPS (DoH) natively through its Thunder® CFW for those organizations, that want to offer this capability to their subscribers. As demonstrated by service provider production use, the solution can handle the scale and DNS security requirements that DoH will deliver.Download the Solution Brief
<urn:uuid:c1d9923c-6a7e-48c6-aad1-49d8f3c0c64a>
CC-MAIN-2022-40
https://www.a10networks.com/glossary/what-is-dns-over-https-doh/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00444.warc.gz
en
0.943215
442
2.84375
3
What In The World is Ethical Hacking Nowadays, the word is primarily used to identify skilled programmers who gain unauthorized access into computer systems by exploiting weaknesses or deploying bugs. They’re often thought to be motivated by malice, mischief, or money – and sometimes all three. With the persistent popularity of the internet and the ever-expanding evolution of e-commerce, malicious hacking has become the most widely recognized form, a perception supported by its portrayal in various kinds of news media and entertainment. That being said, not all hacking is bad. Which brings us to the second major type of hacking. Ethical hacking, in and of itself, might seem like a contradiction in terms – after all, hacking into somebody’s account or service doesn’t seem particularly ethical. But you may be surprised by the good that it can do. Before we go further, let’s sort out the major differences between malicious and ethical hackers. Malicious hacking is carried out in an attempt to breach the systems or networks of an organization (or individual) in order to compromise important data by stealing it, thereby tarnishing the organization’s reputation as well as its assets. Malicious hackers, often referred to as “Black Hat” hackers, will gladly take advantage of any mistakes made by programmers during the software development process in order to penetrate the security framework of the software. By definition, ethical hacking is the authorized process of intentionally bypassing the security defenses of an organization’s IT infrastructure with the express purpose of identifying any vulnerabilities, weaknesses, and other potential security threats. Afterwards, the ethical hacker notifies the organization of any issues that they discovered while assessing the systems or network and propose solutions in order to help protect the organization’s assets from future attacks by malicious hackers. Granting permission to have your crucial infrastructure ethically hacked by professional cybersecurity experts can go a long way toward improving the overall security posture of your organization. Hiring an outsider to perform this service is generally preferable as it ensures that the ethical hacker uses a systematic and measured approach, thus closely mirroring what an external cyberattack might look like. Key Protocols of - Seek authorization from the organization before performing any security assessment on the system or network. - Define the scope of the assessment and ensure that all work remains within the organization’s predefined legal boundaries. - Report any security breaches and vulnerabilities identified during the assessment, and suggest possible remedies for resolving them. - Respect the privacy of the individual or company whose system or network is being assessed. Abide by all terms and conditions of any non-disclosure agreement required by the assessed organization. - After checking the systems for vulnerabilities, erase all traces of the hack. This will prohibit malicious hackers from infiltrating the system via any identified loopholes. - Inform the software developer or hardware manufacturer of any security risks discovered if said risks were previously unknown. In general, an ethical hacker seeks to answer the following questions: - What kinds of vulnerabilities would a potential attacker see? - What specific information or systems would a hacker most want to access? - What could an attacker potentially do with this information? - How many people might notice the attempted hack? - What is the best way to resolve these vulnerabilities? What Are The Main Benefits of Ethical Hacking? There are four primary benefits of ethical hacking, particularly when compared with the disadvantages that are part and parcel of nearly all malicious hacks. Discover Vulnerabilities from an Attacker’s Point of View Enhance Computer and Network Security An ethical hacker can help determine which security measures are effective, which need to be updated, and which prove to be little deterrent to nefarious cyberattackers. With this knowledge in hand, an organization can make more informed decisions as to how to enhance the underlying security of its IT infrastructure. By doing this, the organization further defends itself against would-be attackers that might seek to exploit the computer network or take advantage of mistakes made by personnel. What Practical Advantages Can Ethical Hackers Bring To Your Organization? They Understand How the “Bad Guys” Think Getting inside the mind of a hacker is no easy task, even if you have a background in IT. Failing to comprehend how hackers think and what they want could be catastrophic to your business – and the bad guys are more than willing to turn your weak spots to their advantage. White Hat hackers may be ethical in their own endeavors, but they know perfectly well how the minds of their questionable counterparts work. They understand how hackers operate, and they can leverage that knowledge to safeguard your network against intrusion. They Know Where to Look Each business network is incredibly complex, with interconnected computers, mobile devices, home-based workers, and traveling employees logging on from the road. Understanding what to look for when assessing an organization’s cybersecurity can be challenging, but ethical hackers know where to start and where potential blind spots are likely to be lurking. They Can Discover Weak Spots You May Have Failed to Notice You may believe that your network is as secure as it can possibly be, but it might have hidden weaknesses that you aren’t aware of. Those weak spots may be imperceptible to you, but a seasoned ethical hacker can recognize them from a mile away. Pinpointing hidden weaknesses in a system’s cyberdefenses is one of the predominant reasons to enlist the services of an ethical hacker. These “good guy” hackers are experts at finding open ports, backdoors, and other plausible entry points into your computer network. Their Testing Skills Are Beyond Compare Testing and retesting your network is an integral part of a successful cyberdefense, but the effectiveness of your strategy depends upon the skillfulness of the testers. If the people testing your network don’t know what to keep an eye out for, this could produce a false sense of security – and culminate in a devastating data breach. With regard to network testing and intrusion detection, ethical hackers’ skills are unsurpassed. With years of experience scrutinizing networks for vulnerabilities, they know how testing should be carried out, and you can count on the accuracy of the results. If you’re a newcomer to the business world, having an ethical hacker as part of your startup team can help you create a superior and more robust network from day one. Constructing a computer network with integrated security features will considerably reduce your susceptibility to breaches and data theft, and bringing White Hat hackers on board gives you an undeniable advantage. Ethical hackers have encountered all kinds of networks, and they know how those systems should be constructed. If you want to create a network that’s fast, scalable, and impervious to hackers, these cybersecurity experts can help you accomplish it. It might seem peculiar to welcome hackers into your company, but the right hackers can truly enhance the security of your organization and your network. Hiring ethical hackers is a phenomenal way to evaluate your cyberdefenses, so you can build a better and more secure corporate network. Data breaches are becoming more common and costly every year. In its latest report, the Center for Strategic and International Studies stated that cybercrime costs an estimated $600 billion per year globally. Most businesses can’t afford to absorb the fines, loss of trust, and other negative impacts associated with data breaches. With malicious hackers discovering newer ways to penetrate the defenses of networks nearly every day, the role of ethical hackers has become increasingly important across all areas. Whether yours is a small, mid-sized, or large business, there’s always a possibility that it could fall victim to a cyberattack. Most businesses deploy some type of IT infrastructure to deliver services to their customers – whether it be computers, laptops, servers, printers, wireless routers, or (most likely) a combination of these. All these devices are in danger of being breached at some point in time by cybercriminals, unless your organization takes measures to ensure that they aren’t vulnerable to attacks. This is the critical role that ethical hackers perform.
<urn:uuid:ea9aae44-5a97-4c0a-b0c7-c4fc118efc40>
CC-MAIN-2022-40
https://dtinetworks.com/what-in-the-world-is-ethical-hacking/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00444.warc.gz
en
0.931904
1,679
3.265625
3
Researchers are using big data analysis on images and videos for a variety of purposes, from managing traffic and detecting autism in children to providing warning of imminent landslides. It goes without saying that big data applications need big data. Uni Research, a company 85 percent owned by the University of Bergen in Norway, has turned that truism into a research strategy by focusing on the developing analytic tools to deal with issues in areas where big data is available, including biotechnology, health, environment, climate, energy and social sciences. The most recent project incubating in Uni Research’s Centre for Big Data Analysis is one that studies still images and videos for a variety of purposes, from detecting autism in children to providing warning of imminent landslides to managing traffic. The trick, said Eirik Thorsnes, team leader of the effort, is to develop algorithms that “mimic the way the human brain distinguishes significant from unimportant information.” Thanks to the “patience” of computers and deep learning techniques, Thorsnes said, his team’s algorithms have already surpassed human levels of image recognition. “After all,” he said, “computers never get tired of looking at near-identical images and may be capable of noticing even the tiniest nuances that we humans cannot see.” Alla Sapronova, an artificial intelligence researcher on the project, said that the algorithms being developed by the team teach computers to recognize visual patterns “in the same way we teach children. I show the computer patterns of input signals and tell it what I expect the output signal to be,” she said. “I repeat this process until the system begins to recognize the patterns. Then I show the computer an input signal, such as an image that it has not seen before, and test whether the system understands what it is." In one trial, Thorsnes’ team accessed a publicly available webcam at Bergen’s busiest traffic intersection to teach computers how to analyze the numbers and types of vehicles moving through. According to Thorsnes, the software can identify traffic patterns, which can help traffic managers, and can be used to alert authorities to safety hazards, such as cars traveling in the wrong direction, accidents and abandoned cars. The software could even be deployed to monitor slopes susceptible to landslides so that traffic could be diverted before a slide occurs. Sapronova said that the software being developed is still “at a very early pilot stage.” Nevertheless, she said, “we know what methods, software tools and hardware to use to obtain the best result in a given situation.” The technology has also been deployed to map the movements of salmon and trout in a river and to detect whether they are wild or farmed fish. "Traditionally, these kinds of analyses have been carried out by people who have to sit and watch hours of video footage," Thorsnes said. NEXT STORY: IGs suggest terror-info sharing improvements
<urn:uuid:48d0e79c-caf6-442a-8726-7d5e8aac7c6e>
CC-MAIN-2022-40
https://gcn.com/data-analytics/2017/04/deep-learning-tools-surpass-humans-in-analyzing-images/317756/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00444.warc.gz
en
0.945976
625
3.328125
3