Purdue Researchers Develop Programmable Switch to Help Lay Groundwork for Quantum Internet (Purdue.edu)

Purdue University engineers have addressed an issue barring the development of quantum networks that are big enough to reliably support more than a handful of users. The method could help lay the groundwork for when a large number of quantum computers, quantum sensors and other quantum technology are ready to go online and communicate with each other.

The team deployed a programmable switch to adjust how much data goes to each user by selecting and redirecting wavelengths of light carrying the different data channels, making it possible to increase the number of users without adding to photon loss as the network gets bigger. If photons are lost, quantum information is lost – a problem that tends to happen the farther photons have to travel through fiber optic networks.

“We show a way to do wavelength routing with just one piece of equipment – a wavelength-selective switch – to, in principle, build a network of 12 to 20 users, maybe even more,” said Andrew Weiner, Purdue’s Scifres Family Distinguished Professor of Electrical and Computer Engineering. “Previous approaches have required physically interchanging dozens of fixed optical filters tuned to individual wavelengths, which made the ability to adjust connections between users not practically viable and photon loss more likely.”

Instead of needing to add these filters each time that a new user joins the network, engineers could just program the wavelength-selective switch to direct data-carrying wavelengths over to each new user – reducing operational and maintenance costs as well as making a quantum internet more efficient. The wavelength-selective switch also can be programmed to adjust bandwidth according to a user’s needs, which has not been possible with fixed optical filters. Some users may be using applications that require more bandwidth than others, similarly to how watching shows through a web-based streaming service uses more bandwidth than sending an email.
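To make the routing idea concrete, here is a small toy model, not the Purdue system itself, of a programmable wavelength-selective switch that maps wavelength channels to users and can be re-programmed when a user joins or needs more bandwidth. The channel count and user names are invented for illustration.

```python
# Toy model of the idea described above: a programmable wavelength-selective
# switch maps wavelength channels (data "colors") to users, and can be
# re-programmed to give a user more or fewer channels (more or less bandwidth)
# without inserting extra lossy hardware such as fixed optical filters.

class WavelengthSelectiveSwitch:
    def __init__(self, num_channels=16):
        # Each channel represents one wavelength slot; None means unassigned.
        self.assignments = {ch: None for ch in range(num_channels)}

    def add_user(self, user, channels_needed=1):
        """Program free wavelength channels to route to a new user."""
        free = [ch for ch, owner in self.assignments.items() if owner is None]
        if len(free) < channels_needed:
            raise RuntimeError("not enough free wavelength channels")
        for ch in free[:channels_needed]:
            self.assignments[ch] = user

    def rebalance(self, user, channels_needed):
        """Re-program the switch so a user gets a different share of bandwidth."""
        for ch, owner in self.assignments.items():
            if owner == user:
                self.assignments[ch] = None
        self.add_user(user, channels_needed)

    def channels_for(self, user):
        return [ch for ch, owner in self.assignments.items() if owner == user]


switch = WavelengthSelectiveSwitch(num_channels=16)
switch.add_user("alice", channels_needed=2)   # a higher-bandwidth user
switch.add_user("bob", channels_needed=1)
switch.rebalance("bob", channels_needed=3)    # bob's needs grew; no new filters required
print(switch.channels_for("alice"), switch.channels_for("bob"))
```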
Source: https://www.insidequantumtechnology.com/news-archive/purdue-researchers-develop-programmable-switch-to-help-lay-groundwork-for-quantum-internet/
I just finished reading an interesting hard science fiction book called The Punch Escrow, by Tal M. Klein (a movie is in the works). What makes the difference between hard and soft science fiction is that hard science fiction is based on science, while soft is, let’s just say, far more imaginative. To be honest, I enjoy both types, and the soft stuff is a ton easier to write. Those pesky physical rules don’t get in the way, and you don’t have to do research. The story takes place several decades in the future, and it revolves around the idea of quantum foam and teleportation. It points out why teleportation may never be practical, but it brings up the idea of human 3D printing, which could be used more effectively for space exploration. However, it also would have a massive number of other uses, both good and bad, which got me thinking about what else could change our future in a massive way. I came up with a list of five potentially world-changing technologies. I’ll close with my product of the week: a book on management that could have a massive effect on your company’s success, based on the black boxes used in airplanes. It’s called Black Box Thinking.

Technology 1: Organic Printing

We can use 3D printers for plastics, ceramics, metals and some blends, but our efforts even to print food have been more in line with automated icing machines for cakes than printing food. If we could print food affordably using nonperishable components, it would mean not only that we would be better able to address the massive amount of global hunger that exists, but also that we potentially could cut the cost of food manufacturing and eliminate most food-borne illnesses. Given that this same technology likely could manufacture drugs and better prosthetics, this single step could have a massive impact on how we live — far beyond the way we eat.

Technology 2: Advanced Bio-engineering

A division of Google is releasing millions of bio-engineered mosquitoes to eliminate those that carry sicknesses. Granted, I do remember that many apocalyptic movies start this way. The ability to manufacture insects that can address certain problems could have a massive impact, good and bad, on our environment. The bad would come from a mistake, or if someone decided to create militarized mosquitoes. In the world of The Punch Escrow, there are mosquitoes that have been engineered to eat pollutants in the air and pee H2O — and characters have to dodge constant pee drenchings from the mosquitoes. Still, bio-engineered life forms could offset much of the damage we’ve done to the world — addressing global warming as well as land, sea and air pollution — and go places that people currently are unable to go.

Technology 3: AI Salting

Artificial intelligence salting is another concept author Klein introduces as a major plot element in The Punch Escrow. AI salting isn’t meal preparation for when we humans eat AIs (boy, talk about a concept that could start a Terminator event); it means a specialized technician teaches an AI to think more like a human. Basically, it is individual AI deep learning of human behaviors. The underlying concept, making computers think more like humans, is critical to make them more effective at interacting with humans and interfacing with us more effectively. If we really can’t tell the difference between an AI and a human, or if an AI handling a human-related task could be made to be empathetic, the interaction and the effectiveness of the AI would be improved vastly.
However, few are focused on the human part, and the challenge to train AIs to be more human-like could change forever the way we interact with and use them. At the very least, it would be a huge step in creating robots indistinguishable from humans and making the Westworld experience real.

Technology 4: Ultracapacitor Batteries

As Elon Musk repeatedly has said, batteries suck. Ultracapacitors can be charged and discharged almost instantly. They don’t have the level of temperature problems that batteries currently exhibit. They are much lighter, which increases efficiency in things like cars, and their life cycle is vastly longer than current batteries. The problem is, they don’t do a good job of storing energy for any length of time. Some recent promising news from the scientific community suggests we may be close to sorting this out. Batteries that could charge instantly and produce far more energy without problems would be a huge step toward making off-grid home power and electric-powered cars far more convenient.

Technology 5: Wireless Power

Ever since Nikola Tesla started talking about being able to broadcast power, it has been a known game-changer. Granted, Tesla may have gotten his ideas from aliens, but if you don’t need batteries, then electric cars, planes, trains and personal electronics become smaller and far more reliable. Qualcomm is working on a technology called “Halo”, initially to charge electric cars without having to plug them in. However, its vision includes putting this technology in roads so that you’d never have to charge your car again — it would charge while you were driving. Rather than replacing a gas pump with a far slower charging station, you would just get rid of it. While not as good as true broadcast power, technology like this could work in cars, planes and offices, and we would never have to worry about charging our personal stuff or cars ever again. A similar technology from WiTricity is being used to develop wireless charging for all our devices and is currently being built into Dell’s laptop charging docks.

Put these technologies together, and we’d have our food coming to us anyplace in any form and at any time we wanted. We’d have bugs making the world a better place to live. AIs would be our friends — not the problem Elon Musk is envisioning (though I kind of question his idea that government should fix this, given how bad it is at fixing things), or they’d just be much better at “taking care” of us — but not in a good way. Finally, if we can get better energy storage and distribution, we end up in a far more reliable and less-polluted world, coming damn close to a future Utopia. Though, as The Punch Escrow points out, if we can’t fix ourselves, the result still could be pretty nasty. Just think of the implications of printing people… As the only sure thing about the future is that it will be very different from the world of today, here’s hoping that is a good thing.

Summer is the time I get caught up on my reading, and after reading The Punch Escrow, I moved to another recommended book that is far more practical. Black Box Thinking is based largely around comparing the healthcare industry to the airline industry, and pointing out that airlines have become massively safer over the years — largely because airlines have black boxes — while hospitals may be the third biggest killer of people. The reason this hits home for me is that it points to hospitals as places where errors are covered up aggressively to avoid liability.
Black boxes, which capture errors but can’t be used in litigation, are used to determine fault — not to assign blame, but to ensure that the mistake never happens again. This one practice has helped transform air travel from one of the least safe ways to travel to one of the safest. The big takeaway is that if you and your company can focus more on mistakes as learning opportunities and on ensuring that they are one-time events, rather than focusing on shooting the poor sap who made the mistake, which is much more typical, you’ll end up not only with a far less hostile working environment, but also a far more successful company. One of my big personal concerns is that we’ll transfer this process of blame and covering up mistakes to our coming wave of ever-more-intelligent machines, which could speed up the related problems to machine speed. I doubt we’d survive that. So, a book that makes workplace environments better, companies more successful, and humans more likely to survive is worth reading, I think, and it’s my product of the week.
Source: https://www.ecommercetimes.com/story/the-5-technologies-we-need-to-change-the-world-84692.html
On November 2, 2016, California Attorney General Kamala Harris released a report outlining best practices for the education technology industry (“Ed Tech”). In Ready for School: Recommendations for the Ed Tech Industry to Protect the Privacy of Student Data, Attorney General Harris noted the need to implement robust safeguards for collection, use, and sharing…

In a blog post published on the Federal Trade Commission (FTC) website, Jessica Rich, Director of the FTC’s Bureau of Consumer Protection, recently stated that: “we regard data as ‘personally identifiable,’ and thus warranting privacy protections, when it can be reasonably linked to a particular person, computer, or device. In many cases, persistent identifiers such as device identifiers, MAC addresses, static IP addresses, or cookies meet this test.” The post (which reiterates Ms. Rich’s remarks at the Network Advertising Initiative’s April meeting) suggests a shift in the FTC’s treatment of IP addresses and other numbers that identify a browser or device. The FTC previously has taken the position that browser and device identifiers are deserving of privacy protections, but the FTC generally has avoided classifying these identifiers as equivalent to personally identifiable information (such as name, email, and address) except in the narrow context of children’s privacy. (The FTC’s rule implementing the Children’s Online Privacy Protection Act defines “personal information” to include a “persistent identifier that can be used to recognize a user over time and across different Web sites or online services.”)… Continue Reading FTC’s Jessica Rich Argues IP Addresses and Other Persistent Identifiers Are “Personally Identifiable”

As part of our continuing coverage of the Congressional Privacy Bill, we provide below a deeper examination and explanation of Title II of the bill, the Do Not Track Kids Act of 2015. The Do Not Track Kids Act of 2015 amends the Children’s Online Privacy Protection Act (“COPPA”) by making its protections more expansive and robust. Specifically, the bill extends COPPA’s protections to teenagers, expands the scope of the entities subject to COPPA’s provisions, and imposes new obligations on those entities. COPPA currently requires websites and online services that knowingly collect information from children under the age of 13 or that are targeted toward children under the age of 13 to make certain disclosures and obtain parental consent before collecting and using personally identifiable information obtained from children. Continue Reading Congressional Privacy Bill: Do Not Track Kids Act of 2015

By Caleb Skeath. As we reported earlier this week, the Congressional Privacy Bill (S. 547/H.R. 1053) contains provisions that would establish a national data breach notice law, along with the Commercial Privacy Rights Act of 2015 and the Do Not Track Kids Act of 2015. Following our analysis of the Commercial Privacy Rights Act, we have analyzed the bill’s data breach provisions below. These provisions would allow for up to 60 days for individual notifications following discovery of a breach, and the bill’s definition of “personally identifiable information” (PII) is significantly broader than any analogous definition within the current state data breach notification laws. Continue reading for an in-depth analysis of the data breach provisions, and stay tuned for forthcoming analysis of the Do Not Track Kids Act of 2015. Continue Reading Congressional Privacy Bill: Data Breach Notice Provisions

By Caleb Skeath. As we reported yesterday, the Congressional Privacy Bill has been released, following the release of the White House’s proposal for a privacy bill in late February. The bill contains the Commercial Privacy Rights Act of 2015, the Congressional counterpart to the White House’s proposal, along with data breach notification provisions and the “Do Not Track Kids Act of 2015,” which proposes substantial revisions to the Children’s Online Privacy Protection Act (COPPA). As with the White House proposal, the Privacy Rights Act would implement a comprehensive regime of substantive privacy requirements. Our analysis of the Commercial Privacy Rights Act is below, and we will separately post further analysis of the data breach provisions as well as the Do Not Track Kids Act. Continue Reading Congressional Privacy Bill: Commercial Privacy Rights Act of 2015

By Caleb Skeath. The House and Senate versions of the Consumer Privacy Bill of Rights have been released, following the release of the White House’s legislative proposal at the end of February. We are reviewing the contents of both bills and will post an update shortly with a more in-depth analysis. Unlike the White House…

SB 1177 prohibits operators of online sites or mobile apps who know that their services are used primarily for K-12 school purposes and whose services are designed and marketed as such (“operators”) from using K-12 student data in four specific ways. First, SB 1177 prohibits operators from engaging in targeted advertising on any website or mobile app (including their own) if the advertising would be based on any information obtained from the operations of its K-12 online site or mobile app. Second, SB 1177 prohibits operators from using information obtained from the operations of the K-12 online site or mobile app to create a “profile” about a K-12 student, unless the profile is created in furtherance of K-12 school purposes. Third, operators are prohibited from selling a student’s information. And, fourth, SB 1177 prohibits operators from disclosing personally identifiable information, unless certain special circumstances exist, such as responding to or participating in judicial process. In addition to the four prohibitions listed above, SB 1177 places two affirmative requirements on operators. The bill requires that operators “[i]mplement and maintain reasonable security procedures and practices” appropriate to the information protected, and to specifically protect the information from “unauthorized access, destruction, use, modification, or disclosure.” In addition, SB 1177 requires operators to delete personally identifiable information regarding a K-12 student upon request by a school or school district. AB 1584 addresses the access and use of K-12 student data by third party vendors. AB 1584 explicitly permits local educational agencies to enter into contracts with third parties to provide online services relating to management of pupil records or to otherwise access, store, and use pupil records in the course of performing contractual obligations. Continue Reading California Strengthens Student Privacy Protections
Source: https://www.insideprivacy.com/tag/personally-identifiable-information/
VoIP is a communications technology that changes the meaning of a "telephone call". VoIP stands for Voice over Internet Protocol and it means "voice transmitted over a computer network." Internet Protocol (IP) networking is supported by all sorts of networks: corporate, private, public, cable, and wireless networks. It goes far beyond the "Internet" part of the acronym. VoIP runs over any type of network. Currently, in the corporate sector, the private dedicated network option is the preferred type. For the telecommuter or home user, the hands-down favorite is broadband. VoIP networks can be accessed by a desktop telephone, a wireless IP or smart phone, a laptop, home PC, and any device with an Internet connection.
Source: https://www.dialogic.com/glossary/voice-over-ip-voip
Because firewalls and other defensive security measures are not failsafe, you need additional tools to detect and respond to security breaches as they occur. A network analyser can detect known (and even some unknown) virus attacks and make the cleanup process much more efficient. A protocol analyser shows you what is happening on your network by decoding the different protocols that devices on the network use to communicate, and presenting the results in human-readable form. Most mature analysers also include some statistical reporting functionality. The usefulness of such a tool for day-to-day troubleshooting is obvious; less obvious (and therefore underutilised) is how essential an analyser becomes when responding to security threats such as hacker intrusions, worms, and viruses.

Even the Best Defenses Fail

Every administrator of a corporate LAN of any size these days has already built strong defenses against hackers and virus attacks. But the viruses and hackers continue to get through. Why? Anti-virus and IDS systems are designed to prevent the incursion of known viruses and attacks. The hackers and “script kiddies” have the same access to all the threat bulletins and Windows patches that you have, and are always looking for the new vulnerabilities. In short, your firewalls and operating systems often won’t get a patch until the damage is already done. Imported disks, deliberate actions by employees, and visitors bringing infected laptops are some other weak spots in your security system that perimeter defenses alone cannot address.

A good network analyser can both help you detect when breaches have already occurred, and make the cleanup/recovery far less painful once a breach has been identified. Viruses and hacker attacks typically generate a recognisable pattern or “signature” of packets. A network analyser can identify these packets and alert the administrator to their presence on the network via email or page. Most analysers let you set alarms to be triggered when a particular pattern is seen. Some analysers can be programmed to send an email or page when these conditions are met. Of course, this assumes that the virus and its signature have been seen before and incorporated into the analyser’s list of packet filters. (A filter specifies the set of criteria under which an analyser will capture packets or trigger an alarm or some other action.)

New viruses and worms have different signatures depending on the vulnerabilities they are trying to exploit, but once systems have been successfully breached, there are a relatively small number of things that hackers actually want to do with your network, the top ones being:

- Use your systems in a Denial of Service (DoS) attack on a third party. A good network analyser can easily identify such systems by the traffic they generate.
- Use your system as an FTP server to distribute “warez” and other illegal files. You can configure an analyser to look for FTP traffic or traffic volume where it is unexpected.

The very nature of viruses and worms is to produce unusual levels of network traffic. High frequency of broadcast packets or specific servers generating an unusual number of packets are logged in the analyser’s record of longer term traffic, allowing the administrator to follow up on suspicious traffic patterns. The analyser can also help in identifying inappropriate traffic which may leave your network open to attack, or may signify potential weaknesses. This would vary with the particular network or corporate policy, but could include automatic notification of traffic such as MSN, NNTP or outbound telnet.

Choosing and Implementing a Network Analyser

To be useful as a corporate security tool, the analyser must be “distributed” so that it covers all the areas of your network. It must also be able to capture and decode all of the protocols from all of the media (Ethernet, WAN, 802.11, etc.) on which your corporate data flows. The other crucial feature is flexible filtering that allows triggered notification.

What “Distributed” Means and Why it is Essential

A network analyser can only capture and decode the information that it can “see.” In a switched network environment, an analyser is only able to see traffic local to the switch. To overcome this, most modern analysers are supplied with multiple agents or probes that are installed on each switch in the LAN. An analyser console can then query the probe for either raw packets or statistical traffic reports. When an analyser is used in a general troubleshooting or monitoring mode, it is nice to have as much visibility as possible. When used in a protection mode, the visibility is vital. So – the more distributed the analyser, the better.

The distribution needs to be reviewed in both qualitative as well as quantitative terms. Look for an analyser that can install probes or agents on the topologies present within both your existing network, and any planned enhancements. Look not only for Ethernet capabilities, but WAN and wireless capabilities if these are either present or possible additions. Probe functionality is another important factor. They should be able to perform all the functions required by the organisation – the capture and decode of packets, and analysis of traffic levels both in terms of stations active as well as applications being used. Application analysis is important because a rapid increase in volumes of email is one of the obvious signs of many viruses. A final consideration would be the method of data transfer between the probe and the analyser’s console or management station. The transfer of data must be minimal (to prevent unnecessary load on the network) and as secure as possible.

Probes need to be placed where they can see the critical points of the network. These would include the network’s default gateway (since all broadcast packets and all packets with unknown destination addresses will be sent here), e-mail server(s) and any other servers deemed as critical or likely to be attacked. In order for a probe to detect a certain device it will ideally be located on a hub onto which the device is also directly connected. If this is not possible – and the device to be protected is connected directly to a switch port – then the switch should be configured to mirror (or span) all traffic from that switch port onto a separate switch port on which the probe is located. For continuous monitoring of viruses and attacks, probes must be implemented. More probes may need to be deployed if some are to be used for general monitoring, and some to be used for protection. Alternatively some analysers are supplied with multi-function probes that can perform both tasks simultaneously. If you want to analyse WAN, WLAN, or gigabit traffic, you must choose a vendor with solutions for those media as well.

Filtering Power and Flexibility

Look for a solution that offers the ability to “roll your own” traffic pattern filters as well as offering packaged filters for known viruses and hacker threats. Another thing to look for is the vendor’s willingness to offer timely updates as new security threats are discovered. A quick response to a breach can mean the difference between an inconvenience for a few users and a disaster for your company. Look for an analyser that can be configured to email or page you when the virus or hacker attack is sensed.

Depth of Analysis Features

Most analysers can tell you what machines are generating the most traffic, what protocols are taking up the most bandwidth, and other such useful information allowing you to detect attacks and infected systems. The most powerful analysers have “expert” functionality available that looks at conversation threads and identifies more subtle problems (missing ACKs, high wireless reassociation counts being two examples) automatically.

Network analysers will never replace your firewall, anti-virus software or intrusion detection system. However, because it is not possible for these precautions to be completely effective, you cannot maintain the security of your network without a network analyser. A good analyser alerts you when the other defenses have failed, and takes much of the pain out of identifying, isolating, and cleaning up compromised machines. Considering the general troubleshooting and monitoring features included “for free” in such tools, the decision to purchase a comprehensive analyser with network security features is easily justified.
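As a rough illustration of the filter-and-alert idea discussed above, the sketch below uses the open-source scapy library to watch for two of the patterns mentioned: FTP traffic where it is not expected, and an unusual volume of broadcast packets from one host. The allowed-server list, threshold and capture window are invented for the example; a commercial analyser would express the same logic through its own filter and alarm configuration.

```python
# Minimal sketch of a "roll your own" traffic-pattern filter with alerting,
# using scapy rather than a commercial analyser. Thresholds, the allowed FTP
# server list and the capture window are illustrative assumptions only.
from collections import Counter
from scapy.all import sniff, IP, TCP

ALLOWED_FTP_SERVERS = {"10.0.0.5"}   # hosts where FTP traffic is expected
BROADCAST_THRESHOLD = 200            # broadcasts per capture window
broadcast_counts = Counter()

def inspect(pkt):
    if not pkt.haslayer(IP):
        return
    ip = pkt[IP]
    # Alert on FTP traffic terminating somewhere it is not expected.
    if pkt.haslayer(TCP) and pkt[TCP].dport == 21 and ip.dst not in ALLOWED_FTP_SERVERS:
        print(f"ALERT: unexpected FTP traffic {ip.src} -> {ip.dst}")
    # Track broadcast volume per source; worms often generate bursts of it.
    if ip.dst == "255.255.255.255":
        broadcast_counts[ip.src] += 1
        if broadcast_counts[ip.src] == BROADCAST_THRESHOLD:
            print(f"ALERT: {ip.src} sent {BROADCAST_THRESHOLD} broadcasts this window")

# Capture for one five-minute window; in practice this would run continuously
# on a probe that can see the traffic of interest (e.g. a mirrored switch port).
sniff(prn=inspect, store=False, timeout=300)
```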
Source: https://it-observer.com/using-network-analyser-security-tool.html
There has been a lot of hype about the so-called ‘supermaterial’ graphene. Composed of a single layer of carbon atoms, arranged in a hexagonal lattice, graphene has been tipped to “kill off cancer cells”, “mend human hearts” and “solve the world’s water crisis”. It is expected to one day be found in hair dye, footwear and food. In terms of recent scientific advances people are most pumped about, the development of graphene is probably only surpassed by the progress being made in quantum computing. Now, a former University of Sydney researcher is bringing the two together and is hoping to commercialise the result.

“There is a need within the quantum computing market to develop componentry that can be integrated into electronic circuitry while remaining functional at room-temperature, allowing practical non-disruptive solutions that could facilitate the wide-scale point-of-use by consumers,” explained USyd fellow Dr Mohammad Choucair, now CEO of the ASX-listed Archer Exploration. While working at the university, Choucair led research into the use of graphene in the production of quantum electronic devices that store and process quantum bits or qubits.

The patent rights are currently jointly held by the University of Sydney and École Polytechnique Fédérale de Lausanne in France, and managed by the university’s Commercial Development and Industry Partnerships group. Archer today announced it had entered negotiations with the University of Sydney for exclusive rights to develop and commercialise intellectual property (IP) related to graphene-based quantum computing technology. The negotiations will facilitate the filing of an international patent application under the Patent Cooperation Treaty, the company said. The IP has the potential to “positively impact the quantum computing industry by developing and integrating critical componentry (qubits) that can operate under practical conditions” as opposed to the far below zero temperatures which are required today. In turn this would reduce “many of the technological barriers to realising practical quantum computing using solid-state materials,” the company said.

“Our negotiations… will allow Archer to leverage our strategic graphite and graphene resources, and our inventory of specialised materials assets held in our Carbon Allotropes business, to find high value, materials-centric, end-to-end solutions to solve one of the most significant problems in our technological age,” Choucair said. “Given the established years of research and results supporting this IP, it has the potential, over a short time frame, to allow Archer to develop and commercialise a world first, practical quantum computing chip, with significantly reduced costs compared to current approaches,” he added.

Morgan Stanley research suggests there will be a US$10bn addressable market for quantum computing within the next ten years. Goldman Sachs said quantum computing will be a US$29 billion industry by 2021. Commercial interest in the potential of quantum computing as a market and business tool has heightened in recent years. In Australia, the Commonwealth Bank of Australia and Telstra are backing the country’s “first quantum computing hardware company”, Silicon Quantum Computing (SQC) Pty Ltd, which is working to develop and commercialise a prototype circuit, which will serve as a “forerunner to a silicon-based quantum computer”. The country is also home to quantum technology start-ups Q-Ctrl and QxBranch.
The University of Sydney is home to one of Microsoft’s quantum research centres, where researchers are pursuing a topological approach to forming qubits. “It is important to note that Australia has globally recognised expertise in quantum materials and is at the forefront of quantum technology. Archer is in a strong position to develop and commercialise strategically relevant IP for long-term company success and business development,” Choucair said.
Source: https://www.cio.com/article/201892/archer-shoots-for-graphene-based-quantum-computing-tech-ip.html
5 Autonomous Vehicle Technology Uses in Shipping and Logistics

Autonomous trucks and vehicles promise significant benefits for an industry that struggles with a growing labor shortage and the demand for shorter delivery times. The American Trucking Association estimates a shortage of as many as 174,500 drivers by 2024, due to an aging workforce and the difficulty of attracting younger drivers. Meanwhile, the rise of e-commerce and shorter delivery times is driving a need to overcome restrictions on hours driven and capital utilization. According to McKinsey, 65% of the United States’ consumable goods are trucked to market. With full autonomy, operating costs would decrease by about 45%, saving the U.S. for-hire trucking industry between $85 and $125 billion.

In addition to improved operational efficiencies, autonomous trucks and vehicles can help lower freight costs, improve truck utilization, reduce logistics costs, improve fuel efficiency — and, of course, reduce delivery times. However, the thought of large trucks driving themselves on highways or in busy urban areas gives rise to a number of concerns — and reasonably so. While progress is being made toward realizing the benefits of autonomous vehicle technology, manufacturers and technology developers are taking baby steps to ensure the right safety technology is in place and society is ready. Here are five ways that autonomous vehicles and trucks will be used for shipping and logistics.
Source: https://www.iotworldtoday.com/2019/10/23/5-autonomous-vehicle-technology-uses-in-shipping-and-logistics/
Network attacks are getting more complicated in today’s security environment. To obtain basic access information, attackers use various methods such as phishing attacks or malware infection. After they enter the relevant IT system, they disguise themselves as users with wide access authorization while trying to increase their privileges. Many institutions do not have the staff, tools or bandwidth to detect any extraordinary activities. After the attacker gets into the network, it may take them days or weeks to discover the weaknesses in the systems. The lateral movements made during this time period need to be detected. Lateral movement refers to the gradual movements of cyber attackers, and the techniques they use to search for important targeted data and assets.

Detecting Compromised Users

Logsign SIEM identifies abnormal behavior of users by means of correlation. For instance, Logsign SIEM creates alerts to warn relevant IT managers in case of access to extraordinary data or systems at extraordinary hours.

Detecting Suspicious Privilege Escalation

The main target is to detect privileged user account accesses. Logsign SIEM immediately identifies users that increase authorization for critical systems.

Command and Control (C&C) Communication

Logsign SIEM may associate the network traffic with the Cyber Intelligence Module to discover malware that communicates with external attackers. This refers to a compromised user account.

Detecting Data Leakage

You can use Logsign Correlation and the Cyber Threat Intelligence (TI) service to analyse incidents that may seem irrelevant – such as USB disc drivers being added, process information, personal e-mail services, cloud storage services or high data traffic through the local network. It can also detect the ciphering of the data on user systems; these abnormal movements on user data may indicate a ransomware attack.

Detecting Lateral Movements

Lateral movements can be detected via alert rules created based on the MITRE ATT&CK framework. A Windows file server acts as file and folder storage that can be accessed by many users. Even though a working environment based on cooperation has many benefits, it may be difficult to prevent unauthorized access by monitoring the authorizations to shared folders. PowerShell, a powerful Windows scripting language and on-board command line tool, is used by both IT specialists and attackers.
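As an illustration of the kind of correlation described above (flagging access at extraordinary hours), here is a minimal sketch in plain Python over already-parsed log events. It is not Logsign's rule language; the event fields and the business-hours window are assumptions made for the example.

```python
# Rough sketch of an "off-hours logon" correlation rule over parsed events.
# Field names and the 07:00-18:59 business-hours window are assumptions.
from datetime import datetime

BUSINESS_HOURS = range(7, 19)

events = [
    {"user": "jsmith", "action": "logon", "host": "fileserver01",
     "time": datetime(2022, 10, 3, 2, 14)},   # 02:14 - suspicious
    {"user": "jsmith", "action": "logon", "host": "fileserver01",
     "time": datetime(2022, 10, 3, 9, 30)},   # 09:30 - normal
]

def off_hours_logons(events):
    """Yield logon events that fall outside the defined business hours."""
    for event in events:
        if event["action"] == "logon" and event["time"].hour not in BUSINESS_HOURS:
            yield event

for event in off_hours_logons(events):
    print(f"ALERT: off-hours logon by {event['user']} on {event['host']} at {event['time']}")
```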
Source: https://www.logsign.com/siem-use-cases/detecting-lateral-movements/
Wireless sniffers are customized packet analyzers specifically designed to capture data over wireless networks. Packet analyzers are software programs, occasionally hardware tools, which will detect, intercept and decode data over a wireless connection. Wireless sniffers are used for many legitimate actions, including detecting, investigating and diagnosing network problems; filtering network traffic; monitoring network security, usage and activity; detecting and identifying network bottlenecks and configuration issues; detecting network vulnerabilities, malware and attempted security breaches and much more. However, they can also be used by malicious attackers to harvest confidential data and sensitive company information.

How are wireless sniffer attacks performed?

Wireless sniffers can be used to monitor network traffic, steal sensitive data such as passwords and credit card information and also can be used to acquire information about the network. Malicious attackers typically use wireless sniffers in areas with unsecured wireless networks such as coffee shops, restaurants, libraries and other public places. Wireless sniffers can also be used in spoofing attacks. In these cases, malicious attackers use the information acquired from the wireless sniffer to disguise their attack as an authorized communication from a legitimate source within the network.

Wireless sniffing can be broken down into two different types of modes: promiscuous and monitor.

- Promiscuous: The wireless sniffer can access and read all data traveling to and from a wireless access point. This enables the sniffer to transmit data which can result in easier detection of the sniffer. This is the most common type of sniffing attack.
- Monitor: This type of wireless sniffer monitors incoming data but does not actually send out anything, making it very hard to detect and locate.
Source: https://checkmarx.com/glossary/how-to-avoid-wireless-sniffers/
(By Becky Bracken) With the use of quantum sensors, researchers have made a breakthrough on the road to creating a new data storage medium, one which cannot be accidentally overwritten by magnetic fields. The innovation is critical, since electric and magnetic fields are how qubits are excited, forming the very basis of quantum computing. The researchers pulled it off using antiferromagnetic materials, which, unlike their ferromagnetic counterparts such as iron, don’t generate a magnetic field. The report added that 90 percent of all magnetically ordered materials are antiferromagnets.

“We can alter the single crystal in such a way as to create two areas (domains) in which the antiferromagnetic order has different orientations,” said Natascha Hedrich, lead author of the study, who carried out the work with an international team coordinated by the Department of Physics and the Swiss Nanoscience Institute at the University of Basel. For this research, the team used a single crystal of chromium(III) oxide (Cr2O3), which they selected because its structure is a nearly perfectly ordered crystal lattice, making it a good candidate to manipulate and observe.

“Thanks to the high sensitivity and excellent resolution of our quantum sensors, we were able to experimentally demonstrate that the domain wall exhibits behavior similar to that of a soap bubble,” team researcher Professor Patrick Maletinsky explained. The domain wall, Maletinsky added, is elastic like a bubble of soap. The domain “soap bubble” is constantly trying to minimize surface energy, which is an accurate predictor of how the antiferromagnetic material behaves. Demonstrating the ability to use domain walls to manipulate the trajectory of antiferromagnetic material is the first step toward the development of a non-magnetic storage medium.

“We show that a wide range of spin clusters with antiferromagnetic intracluster exchange interaction allows one to define a qubit,” the report said. “For these spin cluster qubits, initialization, quantum gate operation, and readout are possible using the same techniques as for single spins.” The team was able to manipulate the crystal’s structure to create small, raised squares which would move the domain wall on demand. The domain wall could be moved around at will using the heat from a localized laser, the researchers explained.

“Next, we plan to look at whether the domain walls can also be moved by means of electrical fields,” Maletinsky said. “This would make antiferromagnets suitable as a storage medium that is faster than conventional ferromagnetic systems, while consuming substantially less energy.”
Source: https://www.insidequantumtechnology.com/news-archive/researchers-make-data-storage-breakthrough-using-antiferromagnets/amp/
Packet Tracer file (PT Version 7.1): https://goo.gl/iJg2cJ
Get the Packet Tracer course for only $10 by clicking here: https://goo.gl/vikgKN
Get my ICND1 and ICND2 courses for $10 here: https://goo.gl/XR1xm9 (you will get ICND2 as a free bonus when you buy the ICND1 course).
For lots more content, visit http://www.davidbombal.com – learn about GNS3, CCNA, Packet Tracer, Python, Ansible and much, much more.

The Point-to-Point Protocol (PPP) provides a standard method for transporting multi-protocol datagrams over point-to-point links. PPP is comprised of three main components:
● A method for encapsulating multi-protocol datagrams.
● A Link Control Protocol (LCP) for establishing, configuring, and testing the data-link connection.
● A family of Network Control Protocols (NCPs) for establishing and configuring different network-layer protocols.

The Challenge Handshake Authentication Protocol (CHAP) (defined in RFC 1994) verifies the identity of the peer by means of a three-way handshake. These are the general steps performed in CHAP: After the LCP (Link Control Protocol) phase is complete, and CHAP is negotiated between both devices, the authenticator sends a challenge message to the peer. The peer responds with a value calculated through a one-way hash function (Message Digest 5 (MD5)). The authenticator checks the response against its own calculation of the expected hash value. If the values match, the authentication is successful. Otherwise, the connection is terminated. This authentication method depends on a “secret” known only to the authenticator and the peer. The secret is not sent over the link. Although the authentication is only one-way, you can negotiate CHAP in both directions, with the help of the same secret set for mutual authentication. For more information on the advantages and disadvantages of CHAP, refer to RFC 1994.

In this lab you need to configure the Point-to-Point Protocol, or PPP. You need to configure Point-to-Point Protocol on the link between ISP 1 and Customer Router 1. You also need to configure Point-to-Point Protocol, but with CHAP, between ISP 3 and Customer 2. In other words, you’re going to configure PPP with CHAP, or Challenge Handshake Authentication Protocol.

This lab consists of required tasks as well as bonus tasks. The required tasks are once again that you need to configure the link between Customer 1 and ISP 1 – this link here – with PPP. You need to configure this link using PPP CHAP and a password of cisco. You then need to configure static default routes on the customer routers pointing to the ISPs. The reason for doing that is that these devices representing the Internet in this topology are running BGP in autonomous systems 65000, 65001 and 65002. So you need to configure the customer routers to use static default routes so that they can send traffic on to the Internet and access the Google DNS server 8.8.8.8. You need to verify that things are working by ensuring that the customer routers can ping the DNS server and that they can ping Cisco.com.

So make sure that you configure both the ISP side and the customer side with PPP between ISP 1 and ISP 2. Configure IP addresses and anything else that’s relevant, and again this side needs to be configured with PPP CHAP. That’s the required portion of the lab, but to make the lab more real world, we have some bonus tasks. In the bonus tasks, you need to create a DHCP pool on the customer routers to allocate IP addresses to the PCs.
Customer Router 1 needs to be configured with this IP address on gigabit 0/0/0, and it needs to allocate IP addresses to the PC in that subnet. Customer Router 2 needs to be configured with this IP address, 10.1.2.1, on gigabit 0/0/0, and you need to configure a DHCP pool on the customer router to allocate IP addresses to this PC in this subnet. Now, without giving it away, think about all the DHCP options that you need to allocate to your PCs to allow the PCs to ping Cisco.com.

The verification for this section is that PC 1 and PC 2 can ping Cisco.com. So think about what’s required from a DHCP point of view but also from a NAT or Network Address Translation point of view. You’re going to have to configure both of these routers with network address translation – and to be specific, it’s actually port address translation – so that the PCs can access the Internet. So make sure that these PCs, which are using RFC 1918 addresses, in other words private IP addresses, can access the Internet, which is a public network.

Notice as an example that the BGP routers on the Internet only know about Network 8; they have no visibility of network 10. You are not going to advertise Network 10 to the Internet. Network 10 is a private IP address; it’s non-routable on the Internet because ISPs will block that network. So can you complete this lab? Can you configure the network with PPP, PPP CHAP, DHCP, Network Address Translation and DNS information?
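As a conceptual aside (not part of the Packet Tracer lab itself), the CHAP response described above can be sketched in a few lines of Python: per RFC 1994 it is the MD5 hash of the CHAP identifier, the shared secret and the challenge, so the secret itself never crosses the link. The identifier and challenge values below are invented for illustration.

```python
# Sketch of the CHAP calculation from RFC 1994: Response = MD5(id || secret || challenge).
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"cisco"                 # the shared secret used in the lab
challenge = os.urandom(16)        # authenticator sends a random challenge
identifier = 1

# The peer computes its response from the challenge and the secret it knows...
response = chap_response(identifier, secret, challenge)
# ...and the authenticator performs the same calculation and compares.
expected = chap_response(identifier, secret, challenge)
print("authentication successful" if response == expected else "link terminated")
```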
Source: https://davidbombal.com/cisco-ccna-packet-tracer-ultimate-labs-ppp-ppp-chap-can-complete-lab-2/
Unlike with alcohol or nicotine, there is not a pharmacological option available to individuals addicted to cocaine to help them stop using the drug. However, researchers at the University of Alabama at Birmingham believe the tool to help individuals treat their addiction may very well exist. Researchers in the School of Public Health are conducting a clinical trial to see whether psilocybin, the active compound found in Psilocybe mushrooms, will help individuals addicted to cocaine stop using the harmful drug. “We aren’t advocating for everyone to go out and do it,” said Peter Hendricks, Ph.D., associate professor of health behavior in the School of Public Health at UAB. “What we are saying is that this drug, like every other drug, could have appropriate use in a medical setting. We want to see whether it helps treat cocaine use disorder.” Nearly 20 people are enrolled in the trial, but researchers are still seeking participants. Participation is free, but the person must currently use cocaine and have a serious motivation to stop using the drug. “Our goal is to create a tool or drug that provides significantly better outcomes for individuals addicted to cocaine than those that currently exist,” said Sara Lappan, Ph.D., a postdoctoral scholar in the Department of Health Behavior. Participants are given a dose of psilocybin and then monitored for six hours. After the participant is no longer under the effect of the drug, researchers track his or her cocaine use. “Our idea is that six hours of being under the effects of psilocybin may be as productive as 10 years of traditional therapy,” Lappan said. Psilocybin is theorized to work from three angles: Biochemically, psilocybin disrupts the receptors in the brain that are thought to be responsible for reinforcing addictive behaviors. Psychologically, it is thought to reduce cravings, increase a sense of one’s self-efficacy and increase motivation. Transcendentally or spiritually, psilocybin is thought to increase one’s sense of purpose and a sense of oneness with a higher power, which have both been shown to be powerful protective factors against addiction. “If our hypotheses are supported, this has the potential to revolutionize the fields of psychology and psychiatry in terms of how we treat addiction,” Lappan said. According to Hendricks, UAB is one of six universities in the world currently investigating the medicinal benefits of psilocybin. The other five are Johns Hopkins University, Imperial College London, New York University, University of California-San Francisco and Yale. Hendricks was also interviewed about how psilocybin may be used to help individuals stop using cocaine by author Michael Pollan in his latest book, “How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence.” The book explores the renaissance of scientific research into these compounds and their potential to relieve several kinds of mental suffering, including depression, anxiety and addiction. If you are interested in participating in this study, contact Lappan at 205-975-7721. Source: Holly Gainer – University of Alabama at Birmingham
Source: https://debuglies.com/2018/07/01/psilocybin-can-help-curb-cocaine-addiction/
The emergence of autism in children has not only been linked to genes encoding synaptic proteins—among others—but also environmental insults such as zinc deficiency. Although it is unclear whether zinc deficiency contributes to autism, scientists have now defined in detail a possible mechanistic link. Their research shows how zinc shapes the connections or ‘synapses’ between brain cells that form during early development, via a complex molecular machinery encoded by autism risk genes. Published in Frontiers in Molecular Neuroscience, the findings do not directly support zinc supplementation for the prevention of autism—but extend our understanding of its underlying developmental abnormalities, towards an eventual treatment. “Autism is associated with specific variants of genes involved in the formation, maturation and stabilization of synapses during early development,” says study senior author Dr. Sally Kim of Stanford University School of Medicine. “Our findings link zinc levels in neurons—via interactions with the proteins encoded by these genes—to the development of autism.” Kim and colleagues found that when a signal is transferred via a synapse, zinc enters the target neuron where it can bind two such proteins: Shank2 and Shank3. These proteins in turn cause changes in the composition and function (‘maturation’) of adjacent signal receptors, called ‘AMPARs’, on the neuron’s surface at the synapse. A mechanistic link Through an elegant series of experiments, the paper describes the mechanism of zinc-Shank-mediated AMPAR maturation in developing synapses. “In developing rat neurons, we found that Shank 2 and 3 accumulate at synapses in parallel with a switch to mature AMPARs. Adding extra zinc accelerated the switch—but not when we reduced the accumulation of Shank 2 or 3,” explains Dr. Huong Ha – the study’s lead author, a former Stanford graduate student. “Furthermore, our study shows mechanistically how Shank2 and 3 work in concert with zinc to regulate AMPAR maturation, a key developmental step.” In other words, zinc shapes the properties of developing synapses via Shank proteins. “This suggests that a lack of zinc during early development might contribute to autism through impaired synaptic maturation and neuronal circuit formation,” concludes co-senior author Professor John Huguenard, also of Stanford University School of Medicine. “Understanding the interaction between zinc and Shank proteins could therefore lead to diagnostic, treatment and prevention strategies for autism.” Will zinc supplements help prevent autism? “Currently, there are no controlled studies of autism risk with zinc supplementation in pregnant women or babies, so the jury is still out. We really can’t make any conclusions or recommendations for zinc supplementation at this point, but experimental work in autism models also published in this Frontiers Research Topic holds promise,” points out Professor Craig Garner of the German Centre for Neurodegenerative Diseases, also co-senior author. Taking too much zinc reduces the amount of copper the body can absorb, which can lead to anemia and weakening of the bones. Furthermore, zinc deficiency does not necessarily imply a dietary deficiency—and could result instead from problems with absorption in the gut, for example. “Nevertheless, our findings offer a novel mechanism for understanding how zinc deficiency—or disrupted handling of zinc in neurons—might contribute to autism,” adds Garner. More information: Frontiers in Molecular Neuroscience, 2018. 
DOI: 10.3389/fnmol.2018.00405, https://www.frontiersin.org/articles/10.3389/fnmol.2018.00405/full. Provided by: Frontiers
Source: https://debuglies.com/2018/11/09/the-emergence-of-autism-in-children-has-not-only-been-linked-to-genes-encoding-synaptic-proteins-but-also-environmental-insults-such-as-zinc-deficiency/
Welcome back to our third instalment on authentication in the Security 101 series. If you haven’t already, you can read part one that introduces authentication and part two which discusses authentication protocols and methods. In part three, we’re going to discuss protecting authentication. Authentication is critical, as it helps to determine who is and who isn’t a valid user so that authorization can take place. It also helps to attribute actions to a specific individual. Even unprivileged users’ authentication is critical because many attacks these days involve privilege escalation. An unprivileged user, guest account or simple test account can act as a foothold to give an attacker a way in. The bottom line is, it’s critical to protect all authentication.

Unique credentials per user

Let’s start with what should seem obvious but is often overlooked. Each and every user should have their own, unique, set of credentials that is known only to them. There should never be a master admin account that is used by several SysAdmins, and every time an admin must unlock or reset an end user’s account, they need to set a temp password that must be changed at next logon. This can be harder to manage, but I recommend not even using the same initial password for new users, as that is something a lot of people are going to know and could be used with a new hire’s account before the day they start. Ensure that users’ passwords are complex, with a mixture of uppercase letters, lowercase letters and numbers, and with a certain minimum length. However, never set a maximum length or require that a password starts or ends with a particular character type. Forcing complexity and length makes it exponentially more difficult to brute-force. Setting a specific pattern makes it much easier to crack and harder for users to remember. Passwords should be changed often.

Unique credentials per system

For domain based systems, using the same credentials for everything on the domain may make sense, but for remote systems, personal email accounts, shopping accounts, credit card and banking sites, social networks, etc. you will need to use a unique set of credentials for every system. You may want to use your email address (or you may not be given a choice!) but use a different password on each site, as it’s very common for attackers to break into one system and then use the stolen credentials to gain access to others. This is where a third party password manager will come in useful.

Separate administrative credentials by system/purpose

Don’t use the same admin account that you administer the domain with for remotely troubleshooting workstations. Pass-the-hash attacks count on grabbing admin credentials from a user’s workstation so that they can be used on key systems. If you have a workstation admin account that is different from your server admin account, the worst an attacker could do is exploit other workstations if they successfully execute a PTH attack. That’s bad, sure, but it’s not as bad as if they got a true domain admin account or could get onto core line of business servers.

Encryption in transit

Cleartext? Just say no! Everything that is sensitive enough to require credentials must use encryption and should use the most recent/strongest available. If you admin systems, you should disable weaker encryption protocols and ciphers on your servers and the workstations you manage. SSL 3 and TLS 1.0 should both be considered insecure.
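As a small client-side illustration of the "disable weaker protocols" advice, the sketch below builds a Python SSL context that refuses anything older than TLS 1.2. Server-side hardening is normally done in the web server or operating system configuration; this is just one example of enforcing a minimum protocol version.

```python
# Build an SSL context that refuses SSL 3 and TLS 1.0/1.1 on the client side.
import ssl
import urllib.request

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # anything older is rejected

with urllib.request.urlopen("https://example.com", context=context) as resp:
    print(resp.status)
```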
Certificate and validation Always secure customer-facing sites with certificates issued by a major public CA. Always use 2048 bit certificates and consider purchasing extended validation certificates if your competition does too. Public perception is important! Never use self-signed certificates for anything end users will access, as the very last thing you ever want to train a user to do is click through a certificate warning! Ensure that all clients can perform certificate validation against CRLs and OCSP endpoints. The best defense against stolen or guessed credentials is to use multi-factor authentication. Whether you use tokens, smartcards, certificates, phone calls, or something else, make sure users must authenticate with something they know, like a PIN, and something they have, like a physical object. This will greatly reduce an attacker’s ability to compromise credentials. Encryption at rest Never store credentials in cleartext. Never store credentials in cleartext. Whether you’re using a flat file, a database, a password manager or an LDAP service, never store credentials in cleartext. Disable stale accounts Parse your systems at least once a month and disable any accounts that haven’t been logged into at least 30 days. For customer-facing systems, you may want to extend that to a year – but whatever the period is, unused accounts are low-hanging fruit for attackers to use. Delete dead accounts If you know a user is no longer employed with the company or a customer has closed their account delete those dead accounts. For stale accounts that you disabled, follow up after another 30 to 60 days after you disabled them and declare them dead, so they are not lying around cluttering up your database. No legacy protocols Disable old legacy protocols like LANMAN and NTLM and ensure that all LDAP is encrypted. Legacy protocols can be easily intercepted and decrypted, leaving your systems at risk. If you follow the above tips, you will greatly reduce the risk to your systems from compromised credentials, and help protect yourself online with all of your personal sites, services and social networking too.
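As a closing illustration of the "never store credentials in cleartext" rule, here is a hedged Python sketch of salted, iterated password hashing using only the standard library. The iteration count is illustrative and should be tuned to your own hardware:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

def hash_password(password: str) -> tuple:
    """Return (salt, derived_key); store both, never the cleartext password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)  # constant-time comparison

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("wrong guess", salt, key))                   # False
```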
<urn:uuid:e6a54ea6-fa13-4f2f-be77-5d407bb98b3d>
CC-MAIN-2022-40
https://techtalk.gfi.com/security-101-authentication-part-3-protecting-authentication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00022.warc.gz
en
0.928634
1,171
3.125
3
More about Digital Transformation - Cloud Computing Deployment Models and Architectures - Cloud Adoption Strategy: What’s the Best Approach for Your Organization? - 8 Digital Transformation Technologies and Their Business Impact - What Is Digital Transformation in Banking? - Digital Transformation in Healthcare: 4 Key Trends - Digital Transformation: Examples from 5 Industries - The Future of Cloud Computing: 5 Trends You Must Know About - 5 Types of Digital Transformation and the Technologies that Power Them - Digital Transformation Strategy: 6 Tips for Success What Is Digital Transformation? Digital transformation is the process of incorporating computer-based technologies into an organization's products, processes and strategies. Which digital technologies are used, how they are implemented, and what are the goals guiding the implementation can differ from company to company. But all digital transformation initiatives aim to improve business results by adopting modern digital technology. A digital transformation initiative can contain many different aspects. At one extreme, it can involve reinventing all facets of an organization, including supply chains, workflows, employee skill sets, the product itself, how customers interact with the organization, and organizational structure. More limited initiatives can involve adopting digital technology to improve a specific business process, product, or service, reduce costs, or solve a specific business problem. In this article: - Why Is Digital Transformation Important? - 5 Types of Digital Transformation - What Are Digital Transformation Drivers and Technologies? - Digital Transformation Use Cases and Industry Examples - How to Create a Digital Transformation Strategy - Cloud Transformation with NetApp Cloud Volumes ONTAP Why Is Digital Transformation Important? The digitalization of society began in the second half of the 20th century and accelerated during the early 21st century. It led to global cultural changes, and a new generation of consumers who are digital-first, discovering and interacting with the world through the lens of a digital device. In this environment, businesses must adapt to appeal to consumers and speak in their language. For many companies, new technologies can make their products obsolete, and are thus at risk of being taken over by competitors who are more agile and technically savvy. Emerging technologies enable new business models, more engaging customer experiences, innovative products and services, and other innovations, and companies must undertake digital transformation to stay competitive. In addition, the pace of digital business transformation is accelerating. Digital transformation cannot be a one-time activity. Organizations must develop a mentality and culture of ongoing development, adapting services and systems faster and more often, continuously incorporating new technology and practices to better meet customer expectations. 5 Types of Digital Transformation 1. Business Process Transformation Business process innovation changes the way companies work internally. It affects how employees gain access to new technology and use it for their day-to-day jobs. This includes automating manual processes and maximizing investments in marketing and R&D by gathering new data and incorporating it into business decisions. A key objective of business process transformation is cost reduction. 
Other objectives include reducing time to market, improving the quality of the products and services, improving customer experience, and improving brand image. 2. Business Model Transformation This type of transformation changes business models to adapt them to the new digital environment. This requires careful consideration of how the core business of the industry operates. A successful change can disrupt an entire industry—like Netflix did for home videos and Amazon for retail. Business model transformation often involves the strategic part of the business to explore the potential of new ways of doing business, beyond what is currently established in the industry. It focuses on innovation and “thinking outside the box” to improve business outcomes. 3. Domain Transformation Domain transformation means transcending traditional boundaries currently facing a brand’s markets and possibilities. For example, the online retailer Amazon launched Amazon Web Services, now the world’s largest cloud computing provider with billions of dollars in revenue. Amazon took existing capabilities (their large, highly advanced data centers) and turned them into a new opportunity that created a whole new market. This potential for domain transformation exists in many industries and is driven by the use of technologies like artificial intelligence, new types of mobile and wearable technology, and the internet of things. 4. Cultural Transformation Digital transformation has a cultural component, which is critical to the success of any digital transformation initiative. For many organizations, this transition can be challenging. Digital transformation must start with education for existing staff, to ensure everyone understands the potential of new technology for improving the business, forging internal collaboration, and creating new methods for engaging customers. Leveraging technology, and creating a process of continuous innovation, requires a workforce that can adapt to change and is willing to continuously learn and develop. These skills and capabilities are necessary to integrate technology into the fabric of a company and transform processes, business models, products, and communications. 5. Cloud Transformation The cloud transformation process helps organizations migrate information systems to cloud computing environments. It can take various shapes—for example, a company can migrate only specific applications, data, or services, and retain some legacy infrastructure, or move their entire infrastructure to the cloud. Another dimension of cloud transformation is ownership—some organizations leverage the public cloud, a third-party data center operated by a cloud provider. Others set up cloud computing infrastructure in-house, which is known as a private cloud. Many combine the two models to create hybrid cloud management of infrastructure. Cloud innovation offers many benefits, including more efficient data sharing and storage, faster time-to-market, and greater organizational scalability and flexibility. At the same time, it raises major organizational challenges including governance, security, and cost control. What Are Digital Transformation Drivers and Technologies? Here are key technologies enabling digital transformation: - Cloud computing—enables organizations to quickly and affordably access computing resources, advanced software, new updates and functionalities. 
Instead of waiting weeks and months to set up on-premises resources, organizations can use the cloud to access resources over the Internet from any location in the world. - Commoditized IT—enables organizations to shift the focus of investments and human resources from infrastructure to innovation that provides value to customers and differentiates the organization in the marketplace. - Mobile platforms—enable work from anywhere in the world, at any time, providing quick access to corporate resources. - Machine learning and artificial intelligence (AI)—provide organizations with intelligent insights that empower more accurate and faster decision-making. It helps inform various strategic areas like marketing, sales, and product development. - Automation—enables organizations to delegate repetitive tasks to machines, freeing up time for human resources to improve productivity and efficiency. - Augmented reality (AR) and virtual reality (VR)—provide access to services remotely in a gamified way that encourages engagement. - Internet of Things (IoT) devices—provide data to inform AI services, which offer automation and personalization. - Edge computing—helps process data at the device level, keeping it private and accessible and enabling low latency data processing. - Blockchain technology is revolutionizing the entire technology landscape, enabling decentralization across numerous industries. Payment processing through ledgers that anonymize transactions is the basic use case. All of these technologies are employed by businesses worldwide to deliver unique services and experiences. Digital transformation strategies help organizations keep up with this ever-changing pace, identify relevant technologies, and implement them at scale and on budget. Learn more in our detailed guide to digital transformation technologies Digital Transformation Use Cases and Industry Examples Here are some of the common objectives of digital transformation in organizations: - Improve customer experience—use analytics to build richer content experiences and forge deeper relationships with audiences across multiple channels. - Rethink computing infrastructure—migrate from a physical data center to a cloud platform to accelerate development, improve reliability, and shorten time to market. - Improve internal collaboration—use collaboration tools like video conferencing, online task management, and messaging platforms to boost productivity of employees. - Improve content accessibility—most organizations rely on content, either for internal activities or as part of customer-facing activity. Digital transformation can make content accessible in new ways through enterprise search and new delivery platforms. Here are some examples of how digital transformation can contribute to organizations in different industry sectors: - Manufacturing—the internet of things (IoT) can increase efficiency and profitability by enabling predictive maintenance and process improvement, reducing machine downtime and scrap. - Retail—digital technology uses digital devices, channels, and platforms to create an improved customer experience. For example, many retailers are extending the shopping experience from brick-and-mortar stores to websites and web applications. Chatbots, artificial intelligence and advanced data analytics enable retailers to deliver personalized recommendations. Digital technology is also providing a more engaging in-store experience. 
- Healthcare—the healthcare industry is introducing a range of digital capabilities that can adapt medical services, making them more patient-centric and value-based. For example, many healthcare providers are offering virtual medical appointments and networked electronic health records (EHR). Digital capabilities can benefit not only patients but also providers and payers, while reducing service provisioning costs. - Smart cities—by combining existing physical infrastructure with digital technologies, digital smart cities are becoming more efficient, providing citizens with a higher quality of life. For example, smart cities can monitor and solve environmental issues, optimize public transport, and promote citizen participation and collaboration through digital means. Smart cities leverage technologies such as sensors, artificial intelligence, and video analytics. - Banking—financial institutions are transitioning from the traditional operating model of branch offices with bankers providing face-to-face service, to a variety of digital models. Banks are rolling out web applications and mobile applications to provide access to bank services, and are collaborating with FinTech startups to provide new mobile payment solutions. Learn more in our guide to digital transformation in banking See more examples in our detailed guide: digital transformation examples How to Create a Digital Transformation Strategy Here are a few ways your company can create a successful digital transformation strategy. Learn more in our detailed guide to digital transformation strategy Identify and Measure Business Goals Start by clearly identifying the business drivers of your digital transformation initiative. Consult with stakeholders to see where technology can have the biggest impact, and make sure the relevant teams understand the motivation and buy into the project, even before you start. The most effective digital transformation goals come with key performance indicators (KPIs) that can be tracked as they reach their goals, helping you justify the effort and investment to everyone involved. In many organizations, there is insufficient internal expertise to lead a digital transformation. There are several ways to bolster internal skill sets. You can hire full-time employees with digital transformation experience or bring in a consulting firm. As the digital transformation project evolves, many companies hire specific roles required in IT, marketing, analytics, and other areas. Build on your Strengths When considering a digital transformation project, think about the strengths of your company today—your existing customer base, successful products, and the values your brand has become known for. Make sure you are not erasing any existing value. Digital transformation should be a natural extension of your previous efforts, preserving your company’s strengths and taking them to new heights, rather than erasing them and starting anew. For example, a retailer with a strong brand image might be tempted to revitalize their brand and create a new look and feel for the digital space. However, continuing the same logo, branding, and voice from existing retail stores and traditional advertising could make the digital efforts more successful. Establish New Operational Policies Before you deploy a new tool or technology, think carefully about how you want your employees to use it. 
When procedures are not clearly defined or are not practical in the context of current business operations, employees will naturally return to the old way of doing things. Work with departments and teams to define operational policies that will leverage new technologies, provide value to employees, and make them more productive. This often requires a cultural transformation and ongoing educational process across the company. Establish an Iterative Transformation Process Digital transformation is not done overnight. Make small changes, evaluate their value, and continue from there. Don’t be afraid to revert changes or change strategy according to the results and feedback from the field. A digital transformation project that evolves through a series of stages is more likely to succeed than a “big bang” change. Keeping the strategy agile, will help you learn what works and rapidly adapt to changing circumstances. Consider Following a Digital Transformation Framework Several analysts and consulting firms have developed digital transformation frameworks, which you can use instead of building your strategy from scratch. Starting from a recognized framework ensures you don’t miss important steps and avoid repeating mistakes that other organizations have made. This can be especially important if your management team has no experience in digital transformation. Of course, it is important to adapt frameworks to your organization’s specific goals and requirements. Cloud Transformation with NetApp Cloud Volumes ONTAP Cloud computing is at the core of digital transformation, elevating it from the adoption stage of digital technology to include also the tools, rebuilding process, and the experience of a virtual environment that is accessible from anywhere. In order for an organization to achieve its goals and secure future viability, it needs to adopt a cloud-first or hybrid cloud management strategy. NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP capacity can scale into the petabytes, and it supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more. NetApp and Cloud Volumes ONTAP play a key role in the cloud transformation process, helping enterprises move workloads and data to the cloud securely, manage them efficiently, and integrate them with modern cloud technologies. This frees the organizations from the burden of managing large-scale storage infrastructure and allows them to focus on their core business. In particular, Cloud Volumes ONTAP assists with cloud migration in digital transformation projects. Learn more about how Cloud Volumes ONTAP helps with lift and shift cloud migration. Read how Cloud Volumes ONTAP helps customers in these Cloud Migration Case Studies. Read More About Digital Transformation There’s a lot more to learn about Digital Transformation. To continue your research, take a look at the rest of our blogs on this topic: 8 Digital Transformation Technologies and Their Business Impact Digital transformation sometimes involves changing existing services into digital form, but it can be even more impactful when it uses technology to improve existing services, or even create completely new ones. 
Learn about the exciting technologies driving business transformation - including cloud, AI, IoT, AR, and additive manufacturing. What Is Digital Transformation in Banking? Digital transformation aims to integrate computer technologies into an organization's business processes, strategies, and products. In the banking sector, digital transformation requires migrating to digital and online services. Learn how this trend is changing the industry and discover cutting edge technologies are being adopted by banks worldwide. Read more: What Is Digital Transformation in Banking? The Future of Cloud Computing: 5 Trends You Must Know About Cloud computing has grown into a major paradigm in the tech world. It enables ubiquitous and simple on-demand access to shared computing resources via configurable Internet services. Get a brief history of cloud computing, current adoption trend, and key trends driving the cloud computing industry forward. Digital Transformation in Healthcare: 4 Key Trends Digital transformation involves incorporating computer-based technologies into the organization's processes, strategies, and products. Understand how digital transformation in healthcare can benefit healthcare providers and patients, and discover technological trends driving the revolution. Digital Transformation: Examples from 5 Industries Digital transformation involves building digital technologies into an organization's products, processes, and strategies. Understand the general use cases of digital transformation and see digital transformation examples from banking, healthcare, retail, and additional industries. Digital Transformation Strategy: 6 Tips for Success Any digital transformation requires a well-planned strategy to help businesses achieve their desired outcomes. Learn 6 tips that can make your digital transformation strategy a success and help you build a winning team. Cloud Adoption Strategy: What’s the Best Approach for Your Organization? Adopting the cloud is an important part of a digital transformation. Choosing a cloud adoption strategy will be a crucial decision point along the way. This article dives deep into each aspect of today’s cloud architecture options, looking at the benefits and the drawbacks to each approach— from hybrid deployment that keeps one foot on-prem to cloud-only deployment—so you can see which is right for you. Cloud Computing Deployment Models and Architectures If the cloud is part of your digital transformation, it’s key to understand the basic cloud computing models and deployment architectures available. Knowing the difference will be key to making your cloud adoption a reality. In this post we’ll give you a primer on the basic structure of the cloud’s offerings, and the different ways that users are adopting the cloud. Read more in: Cloud Computing Deployment Models and Architectures
<urn:uuid:206f686c-4965-4ba7-b994-9e77752a3475>
CC-MAIN-2022-40
https://cloud.netapp.com/blog/cvo-blg-5-types-of-digital-transformation-and-the-tech-that-powers-them
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00022.warc.gz
en
0.893339
3,550
2.6875
3
Blockchain is a type of distributed ledger for maintaining a permanent and tamper-proof record of transactional data. A Blockchain functions as a decentralised database that is managed by computers belonging to a peer-to-peer (P2P) network. Industry experts share with Intelligent CIO Africa how enterprises are adopting Blockchain technology to drive finance and trade on the continent. In the past, Blockchains were commonly associated with digital currencies such as Bitcoin or alternate versions of Bitcoin like Bitcoin Cash. Now, Blockchain applications are being explored in many industries as a secure and cost-effective way to create and manage a distributed database and maintain records for digital transactions of all types. Mervyn George, Innovation Strategy Lead, SAP Africa, said the shared, distributed and decentralised capabilities of Blockchain are finding their way into the mainstream with great potential. George said ambitious entrepreneurs and innovators are investing long hours and mind-boggling sums of money to rework anything from automating finance and restoring public trust in healthcare to feeding over 10 billion people by 2050 and providing disaster relief to remote villagers. “Blockchain is a distributed ledger, which involves recording transactions in blocks that are linked and secured by cryptography and then verified and stored across a network, making the ledger resistant to modification,” he said. “The interesting part is that this combination of capabilities in computing, connectivity and cryptography has applications not only in the financial world, but in any transactional environment, including a decentralised personal data management system that ensures users own and control their data.” He pointed out that this is important because identification provides a foundation for human rights. “An estimated 1.1 billion people worldwide cannot officially prove their identity and we simply don’t know how many of the world’s more than 200 million migrants, 21.3 million refugees, or 10 million stateless persons have some form of identification. Many of these unidentified people are African,” he said. Hamilton Ratshefola, Country General Manager, IBM, said Blockchain technology is changing the usual course of businesses in every industry. “There are many exciting examples – from food safety, smart contracts, healthcare, education, cross border payments, to luxury goods – where we have seen this technology work and make industries smarter, more efficient, resulting in strengthened trust,” he said. Ratshefola pointed out that across the continent, Blockchain adoption is rising – and the industry is seeing this from both large enterprises and increasingly, individual entrepreneurs. He said many business leaders are attracted to Blockchain, for the business model benefits. “CEOs are recognising that Blockchain is useful for their businesses and more than half of mid-sized businesses and considering deploying Blockchain solutions,” he said. Bernard Bussy, Software Engineer at Andile, saidBlockchain is an enabling technology that leads to other innovations that create the use cases businesses want. “At Andile we currently use Blockchain technologies in building larger platforms. The users of the platform don’t necessarily see the Blockchain in action – it operates behind the scenes while they gain the benefits. There are numerous start-ups and enterprises globally and in South Africa working on Blockchain,” he said. 
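The "blocks linked and secured by cryptography" idea described above can be illustrated in a few lines of code. The Python sketch below is a toy, in-memory illustration only — not a real distributed ledger — showing how hashing each block over the previous block's hash makes tampering detectable:

```python
import hashlib
import json
import time

def make_block(transactions: list, prev_hash: str) -> dict:
    """Create a block whose hash covers its contents and the previous block's hash."""
    block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain: list) -> bool:
    """Recomputing every hash detects any tampering with earlier blocks."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["genesis"], prev_hash="0" * 64)
chain = [genesis, make_block(["A pays B 10"], genesis["hash"])]
print(chain_is_valid(chain))             # True
chain[0]["transactions"] = ["tampered"]
print(chain_is_valid(chain))             # False
```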
Busy said the most well-known is probably the Blockchain development being done at major banks such as Singapore Central Bank and JP Morgan. “As for its state in Africa, this isn’t easy to answer. But if we go by cryptocurrencies, the most popular current use-case for Blockchain, most African countries haven’t passed legislation to regulate and guide the industry. South Africa, Senegal, Sierra Leone and Tunisia are the few trailblazers in that regard,” he observed. “The South African Reserve Bank has also been very progressive around Blockchain, cryptocurrency conversations and has launched a number of initiatives in this regard.” According to George, smart contracts that reside on the Blockchain ledger are a way in which trust in transacting with businesses in Africa can be restored. “For decades, foreign investment in Africa has been jeopardised by corruption, fraud and the misappropriation of funds. The advent of Blockchain – and the adoption of the technology across Africa – will have a significant impact in supporting Africa’s entrepreneurial spirit by providing transparency and accountability for all finance and service level transactions,” he said. According to Ratshefola, the financial services and telecommunications sectors in Africa have been early adopters of the technology especially in the areas of cross border trace and insurance. “Globally we have more than 500 client engagements globally in Blockchain, touching industries like education, food safety, identity, insurance, luxury goods, supply chain management and trade finance,” he said. “Dozens of live Blockchain networks are currently running on the IBM Blockchain Platform, including IBM Food Trust and TradeLens to name just a couple.” He said across the African continent the industry is seeing how Blockchain increases economic efficiency, security, transparency and simplicity – leading to less administration, duplication and friction in Africa. “And the applications are endless, since virtually anything of value can be traced and traded without a central point of control – making it a big win for business on the continent as it entices all to participate in it and benefits all,” he said. Ratshefola said in Agriculture, IBM Research and agtech start-up Hello Tractor have developed an AI and Blockchain-driven platform for Africa’s farmers. This cloud-based service aims to support Hello Tractor’s business of connecting small-scale farmers to equipment and data analytics for better crop production. “In Kenya, Twiga Foods ran a Blockchain pilot to better process and expand the reach of, microloans to small fruit and vegetable kiosk owners. Credit scores serve as a proxy for trust and loans are issued via smartphone, making them significantly more accessible, secure and efficient than before,” he said. “In Mining, Ford Motor Company, Huayou Cobalt, LG Chem and RCS Global have joined forces with IBM to ensure the responsible sourcing of industrially mined cobalt, by using Blockchain technology to trace and validate ethically sourced minerals. CHO, a Tunisian olive oil producer, uses IBM Blockchain to create a provenance record that traces their Terra Delyssa extra virgin olive oil from retailer back to the tree.” “What we need to remember when implementing Blockchain is that it’s still relatively new technology and basically described as a ledger of transactions that can’t be corrupted. It should not however, replace the other security checks and balances that still need to take place in a process framework,” added George. 
He explained that Blockchain should be implemented in a manner that aligns with the security policies of a business. “Similarly, smart contracts also should not be implemented in isolation. There needs to be facilitation and collaboration between the coders and the legal side of a business to ensure the contracts are compliant with the legislation of the territories in which they will be applied,” he said. IBM’s Ratshefola explained that the promise of Blockchain is that it enables new means of exchanging business value in a decentralised manner – from facilitating the transfer of assets, to rewiring record-keeping processes, to supporting data sharing and preventing data tampering. “But as with all new technologies, it’s important for businesses to first be clear on the desired outcomes and determine the Blockchain approach from that,” he said. “The type of Blockchain approach that’s optimal for a business depends on the industry and business goals. Businesses must decide on the right mix that will work for the problem they’re trying to solve and adopt the right type of Blockchain (public or private).” According Bussy, the most common mistake with all technologies, not just Blockchain, is starting without a goal in mind. He added that technology for the sake of it quickly goes nowhere and that’s doubly true for Blockchain. “It’s also a complex technology requiring a variety of skill sets, so there are risks and costs associated with Blockchain development. The typical places you encounter Blockchain currently are either as part of a new product, such as the fintech platform Mesh,” he said. “Trade, as part of a sector-wide effort, such as diamond tracking, or as part of long-term innovations emerging from what banks and retailers like Walmarts are doing.” Looking ahead, George said current factors that will drive Blockchain implementation across the continent include the growth and development of the technical skills pool, along with the expansion of infrastructure to support greater bandwidth availability and decreasing data costs. “These elements will be vital in setting up the essential processing nodes required by Blockchain networks to make the investment in the technology feasible. The number of use cases are also steadily growing and as they prove the technology benefits and ROI, we can expect to find broader implementation,” he said. At IBM Ratshefola said Blockchain is doing for new business models what the web did for e-commerce and product-sharing services. “Anywhere you have complex business problems – from tracking shipping details to streamlining payments – Blockchain is a good solution,” he said. “It has the ability to completely transform how businesses operate. Blockchain is an independent and neutral technology that is here and already being used in many ways, well beyond powering new digital currencies – and is effectively helping businesses implement smarter processes and run more effectively by changing how they view and conduct trusted transactions.”Click below to share this article
<urn:uuid:7b9a93db-07ef-47bf-8ca1-3ff1c9db8433>
CC-MAIN-2022-40
https://www.intelligentcio.com/africa/2020/11/05/blockchain-is-propelling-finance-trade-and-government-initiatives-in-africa/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00022.warc.gz
en
0.94635
1,992
2.828125
3
A Binary Lookup algorithm is much like the Secure Lookup algorithm but is used when entire files are stored in a specific column. This algorithm replaces objects that appear in object columns. For example, if a bank has an object column that stores images of checks, you can use a Binary Lookup algorithm to mask those images. The Delphix Engine cannot change data within images themselves, such as the names on X-rays or driver's licenses. However, you can replace all such images with a new, fictional image. This fictional image is provided by the owner of the original data.

Creating a Binary Lookup Algorithm via UI

1. At the top right of the Algorithm tab, click Add Algorithm.
2. Select Binary Lookup Algorithm. The Create Binary SL Algorithm pane appears.
3. Enter an Algorithm Name. This MUST be unique.
4. Enter a Description.
5. Select a Binary Lookup File on your filesystem. A maximum of 50 Lookup Files can be selected.

For information on creating Binary Lookup algorithms through the API, see API Calls for Creating Algorithms - Binary Lookup.
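The documented behaviour — swapping each stored object for one of up to 50 owner-supplied lookup files — can be sketched outside the product as follows. This Python snippet illustrates the general idea only, not the Delphix implementation, and the function names are hypothetical:

```python
import hashlib
from pathlib import Path

def load_lookup_files(folder: str) -> list:
    """Read replacement objects (e.g. stock cheque images) supplied by the data owner."""
    return [p.read_bytes() for p in sorted(Path(folder).iterdir()) if p.is_file()]

def mask_binary(original: bytes, lookup_files: list) -> bytes:
    """Replace a stored object with one of the lookup files.

    Hashing the original keeps the choice deterministic, so the same input
    is always masked to the same fictional object across runs.
    """
    digest = hashlib.sha256(original).digest()
    index = int.from_bytes(digest[:4], "big") % len(lookup_files)
    return lookup_files[index]

# Hypothetical usage with in-memory stand-ins for the lookup files:
replacements = [b"<fictional image 1>", b"<fictional image 2>"]
print(mask_binary(b"original cheque scan bytes", replacements))
```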
<urn:uuid:d1bb8477-e2ff-4277-a2c6-65743f76c883>
CC-MAIN-2022-40
https://maskingdocs.delphix.com/Securing_Sensitive_Data/Algorithms/Algorithm_Frameworks/Binary_Lookup/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00222.warc.gz
en
0.807504
254
3.203125
3
Ransomware is an ever-evolving form of malware that employs encryption to hold a victim’s information at ransom. It is one of the biggest cyber security issues today, facing individuals and businesses alike. Victims often only realize that they’ve been compromised when files, servers, and other systems have been encrypted and they are presented with a ransom note demanding payment in cryptocurrency for the decryption key. So how do you combat these threats? How to Stop Ransomware Attacks In Their Tracks Fortunately, even if cybercriminals are already inside your network, it is not necessarily too late to prevent a full-blown ransomware attack. If your organization has a good threat-hunting strategy, you can detect strange or suspicious activity and counter the threat before it becomes a major problem. Criminals can spend weeks in the network before triggering a ransomware attack, so even if initial security measures fail, this delay can provide an opportunity to prevent real damage from occurring. The US Department of Commerce’s National Institute of Standards and Technology (NIST) cyber security framework (CSF) lists five main functions of securing networks: Identify, Protect, Detect, Respond, and Recover. Many organizations rely too heavily on the “protect” aspect as their main line of defense, without a clear strategy for detecting and responding to threats that bypass protections. “My team will see an initial malware family like QBot – then the adversaries will look around the environment, do some reconnaissance, and then they install a tool called Cobalt Strike, then they move laterally. It’s the same playbook – ransomware is coming,” said Katie Nickels, director of intelligence at Red Canary. It’s common for cybercriminals to gain access to a network and install malware to help examine the environment they’ve compromised — followed by a standard set of practices during the days or weeks they’re in the network. These are very detectable, predictable behaviors. If you have a good knowledge of your network, this activity can be identified, removed, and remediated before the problem grows to become a full-scale ransomware attack. The Importance of Threat-Hunting Capabilities Early detection is the key to preventing a ransomware attack. Threat hunting is a proactive security search through networks, endpoints, and datasets to detect and isolate advanced threats that evade existing security solutions. Smaller businesses or those without a significant IT or information security budget could struggle to engage in threat hunting themselves, but it’s much less costly than falling victim to a ransomware attack. If you don’t already have threat-hunting capabilities on your team, partner up within the ecosystem because threat hunting is the best way to identify threats that slip through perimeter-based security architectures. When you’re able to see that your network has been compromised, preventing an incident — or at least reducing the impact — is possible. Keeping a ransomware attack restricted to one part of the network is better than allowing it to spread across the whole environment. The harsh reality is that ransomware attacks are not going anywhere anytime soon, so the best approach is to provide and develop more effective tools to detect and prevent them from occurring in the future. Protect Your Network From Ransomware with Mindcore Mindcore is a leading provider of cyber security solutions in New Jersey and Florida, customized to fit your organization’s specific needs. 
After conducting a thorough evaluation, our team will prevent hackers from exploiting your IT vulnerabilities using state-of-the-art technology. Contact us today for more information about our cyber security services or to schedule a consultation with a member of our cyber security team.

Matt Rosenthal is a technology and business strategist as well as the President of Mindcore, the leading IT solutions provider in New Jersey. Mindcore offers a broad portfolio of IT services and solutions tailored to help businesses take back control of their technology, streamline their business and outperform their competition.
<urn:uuid:e651d2ab-dded-4c85-943f-58c7817d8b98>
CC-MAIN-2022-40
https://mind-core.com/blogs/cybersecurity/how-to-prevent-ransomware-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00222.warc.gz
en
0.929747
819
2.78125
3
Enabling Technology is a tool of independence for this young woman Celeste was unsure what she wanted to do as far as working, but one thing was for sure –– she wanted to be around people and help them. After an overwhelming job-hunting day at the mall, Celeste and her Emory Valley Center Job Developer went to McDonald’s for lunch. There, they noticed someone cleaning the tables and helping the customers as they came in. Celeste’s staff turned to her and asked, “Do you think you would like a job like that since you have a bubbly personality and want to be around people?” Celeste wanted to try it out before deciding, so she went back to shadow the lobby attendant. She loved it. It went so well that she decided to apply and was hired. Emory Valley Center (EVC) staff supports more than 3,100 children and adults with blindness, deafness, intellectual and developmental disabilities, and other complex health conditions. One way they are helping clients like Celeste live a more independent life is through Enabling Technology. Enabling Technology ranges from a cellphone with programmed reminders to a device that allows people to stay home for a certain amount of time without a Direct Support Professional. For example, Celeste uses a Mobile Personal Emergency Response (mPERS) pendant. It is a two-way communication device that will enable her to contact her job coach when she needs assistance at work. Two-way voice communication and Short Message Service (SMS) texting provide prompts to help Celeste stay on task, reminders for breaks, and recognition for completing tasks. Celeste’s device also helps her family confirm her arrival and departure from pre-determined locations. It has not only made Celeste more independent at her job, but it also makes her family feel good knowing that she is safe and supported while at work. With the support of their Maximus grant, EVC continues to lead the state of Tennessee in providing clients like Celeste with opportunities and tools to gain the independence they desire and deserve using Enabling Technology. Celeste has been at her job as a lobby attendant since May 2018 and loves it. Her co-workers and customers love her, especially her big smile and laughter. Learn more at emoryvalleycenter.org.
<urn:uuid:b99ff696-99d7-496b-b212-f77bfffa13ab>
CC-MAIN-2022-40
https://maximus.com/article/enabling-technology-tool-independence-young-woman
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00222.warc.gz
en
0.971389
482
2.5625
3
Storage - Managing z/OS Data Using DFSMS Constructs 2.4

This course introduces you to the family of DFSMS products that are used to manage z/OS data and then focuses on the creation and implementation of data, storage and management classes, as well as storage groups, to automate processes in the storage environment.

Audience: This course is suitable for Operators, System Programmers, Storage Administrators and other staff who are responsible for monitoring and managing data storage in a z/OS environment.

Prerequisites: General knowledge of ISPF and z/OS data processing concepts.

After completing this course, the student should be able to:
- Describe how DFSMS is Used to Manage the Storage Environment
- Identify the purpose of DFSMS components
- Display SMS Storage Information using ISMF
- Describe how ACS Routines are Created and Activated

Course topics:
- Introduction to DFSMS
- Storage Requirements in Today's Data Center
- Backup and Recovery
- The Role of DFSMS in Managing Storage
- Using SMS Constructs
- How Constructs are Used
- Automatic Class Selection Routines
- What is an Automatic Class Selection (ACS) Routine?
- The ACS Programming Language
- Creating and Testing an ACS Routine
- Activating ACS Routines
- ACS Routine Execution
- Who Uses ISMF?
- Generating Data, Class and Storage Lists
- ISMF Tasks for Storage Administrators
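Real ACS routines are written in the SMS ACS language and activated through ISMF, which the course covers. Purely as a conceptual illustration of the selection logic they implement — the first matching data set name filter wins — here is a hypothetical Python sketch; the masks and class names are invented for the example:

```python
import fnmatch

# Hypothetical mappings: data set name masks -> storage class, in priority order.
STORAGE_CLASS_RULES = [
    ("DB2.*",          "SCDBFAST"),   # database data sets get the fastest storage class
    ("PROD.BACKUP.*",  "SCBACKUP"),
    ("*",              "SCSTD"),      # catch-all, like an OTHERWISE clause
]

def select_storage_class(dsn: str) -> str:
    """Mimic an ACS routine: the first matching filter determines the class."""
    for mask, storclas in STORAGE_CLASS_RULES:
        if fnmatch.fnmatch(dsn, mask):
            return storclas
    return "SCSTD"

print(select_storage_class("DB2.PAYROLL.TS001"))   # SCDBFAST
print(select_storage_class("TEST.USER.DATA"))      # SCSTD
```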
<urn:uuid:7d9a441d-355b-451d-bed6-76b68e94fd05>
CC-MAIN-2022-40
https://bmc.interskill.com/course-catalog/Storage24-Managing-zOS-Data-Using-DFSMS-Constructs.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00222.warc.gz
en
0.772397
346
2.6875
3
Save to My DOJO Anyone who has ever had to replace a hard disk because their old disk was simply too small to accommodate all of their data understands that there is a direct cost associated with data storage. In an effort to reduce storage costs, many organizations turn to data reduction technologies such as deduplication and compression. What is Deduplication? Data deduplication is a technology for decreasing physical storage requirements through the removal of redundant data. Although deduplication comes in several forms, the technology is most commonly associated with backup applications and associated storage. Storage volumes are made up of individual storage blocks containing data (although it is possible for a block to be empty). Deduplication works by identifying blocks containing identical data and then using that information to eliminate redundancy within the backup. Imagine for a moment that you needed to back up a volume containing lots of redundant storage blocks. It does not make sense to back up what is essentially the same block over and over again. Not only would the duplicate blocks consume space on the backup target, but backing up redundant blocks increases the duration of the backup operation, as well as the amount of bandwidth consumed by the backup process (which is an especially important consideration if you are backing up to a remote target). Rather than backing up duplicate copies of a storage block, the block can be backed up once, and pointers can be used to link data to the block on an as-needed basis. There are three main ways in which the deduplication process occurs. First, deduplication can occur inline. Inline deduplication is a process in which data is deduplicated in real-time as the backup is running. When inline deduplication is used, only non-redundant storage blocks are sent to the backup target. The second form of deduplication is post-process deduplication. When post-process deduplication is used, all storage blocks are backed up, regardless of whether they are unique or not. Later, a scheduled process deduplicates the data that has been backed up. Post-process deduplication is effective, but it requires the backup target to have sufficient capacity to accommodate all of the data that is being backed up at its original size. Furthermore, data is not deduplicated prior to being sent to the backup target, which means that bandwidth consumption is higher than it would be with inline deduplication. The third main type of deduplication is global deduplication. Global deduplication is essentially a combination of inline and post-process deduplication. Imagine, for example, that you needed to back up a dozen different servers, and you use inline deduplication to reduce the volume of data that is being sent to the backup target. Even though deduplication has been used, redundancy may still exist on the backup target because there may be data that exists on multiple servers. If, for example, all of the servers are running the same operating system, then the operating system files will be identical from one server to the next. Because redundancy could exist within the backup target, post-process deduplication is used as a secondary means of eliminating redundancy from the backup target. Disadvantages of Deduplication Deduplication can be a highly effective way to reduce the backup footprint. However, deduplication only works if redundancy exists. 
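Whether that redundancy exists is easy to check: split the data into blocks, fingerprint each block, and count how many fingerprints are unique. The Python sketch below is a simplified, hedged illustration of fixed-size block deduplication — real products use more sophisticated chunking and indexing:

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real products may also chunk at variable boundaries

def deduplicate(data: bytes):
    """Split data into blocks, store each unique block once, keep pointers for the rest."""
    store = {}      # fingerprint -> block contents (unique blocks only)
    pointers = []   # one fingerprint per logical block, in order
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        store.setdefault(fingerprint, block)
        pointers.append(fingerprint)
    return store, pointers

def reconstruct(store, pointers) -> bytes:
    return b"".join(store[fp] for fp in pointers)

data = (b"A" * BLOCK_SIZE) * 8 + (b"B" * BLOCK_SIZE) * 2   # highly redundant sample
store, pointers = deduplicate(data)
print(len(data), "bytes logical,", sum(len(b) for b in store.values()), "bytes stored")
assert reconstruct(store, pointers) == data
```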
If every storage block is unique, then there is nothing to deduplicate (although some deduplication algorithms work on the sub-block level). A secondary disadvantage to deduplication is that the deduplication process tends to be resource-intensive. Inline deduplication, for instance, usually requires a significant amount of memory and CPU time, whereas post-process deduplication requires memory and CPU time, and also creates significant storage I/O. As such, it is necessary to ensure that you have sufficient hardware resources to support the use of deduplication. What is Compression? Like data deduplication, compression is a technique that is used to reduce the storage footprint of data. Whereas deduplication commonly occurs at the block level, however, compression generally occurs at the file level. Data compression can be classified as being either lossy or lossless. Lossy compression is used in the creation of digital media files. The MP3 format, for example, allows an entire song to be stored in just a few MB of space but does so at the cost of audio fidelity. The lossy conversion process reduces the recording’s bitrate to save storage space, but the resulting audio file may not sound quite as good as the original recording. In contrast, compression is also used in some data backup or data archiving applications. Because data loss is undesirable in such use cases, lossless compression is used. Like lossy compression, lossless compression reduces a file’s size by removing information from the file. The difference is that the information removal is done in such a way that allows the file to be reconstructed to its original state. There are numerous lossless file compression algorithms in existence, and each one works in a slightly different way. Generally speaking, however, most lossless file compression algorithms work by replacing recurring data within the file with a much smaller placeholder. How Does Compression Work? Compression can be used to reduce the size of a binary file, but for the sake of example, let’s pretend that we have a text file that contains the following text: “Like lossy compression, lossless compression reduces a file’s size by removing information from the file. The difference is that the information removal is done in such a way that allows the file to be reconstructed to its original state.” The first step in compressing this text file would be to use an algorithm to identify redundancy within the file. There are several ways of doing this. One option might be to identify words that are used more than once. For example, the word ‘compression’ appears twice, and the word ‘the’ appears four times. Another option might be to look at groups of characters that commonly appear together. In the case of the word compression, for example, a space appears both before and after the word, and those spaces can be included in the redundant information. Similarly, the block of text uses the word lossy and the word lossless. Even though these words are different from one another, they both contain a leading space and the letters l, o, s, and s. Hence “ loss” could be considered to be part of the redundant data within the text block. There are many different ways of identifying redundancy within a file, which is why there are so many different compression algorithms. Generally speaking, once the redundancy is identified, the redundant data is replaced by a placeholder. An index is also used to keep track of the data that is associated with each placeholder. 
To give you a simple example, let's pretend that we wanted to compress the previously referenced block of text by eliminating redundant words and word fragments (I won't worry about spaces). Normally the placeholders would consist of an obscure binary string, but since this block of text does not include any numbers, let's use numbers as placeholders for the sake of simplicity. So here is what the compressed text block might look like:

"Like 8y 1, 8less 1 reduces a 2's size by 9ing 6 from 32. 3 difference 45369al 4 done in such a way 5 allows 327 be reconstructed 7 its original state."

By compressing the data in this way, we have reduced its size from 237 bytes to a mere 157 bytes. That's a huge reduction, even though I used a super simple compression technique. Real-life compression algorithms are far more efficient. Of course, in reducing the data's footprint, I have also rendered the text unreadable. But remember that one of the requirements of lossless compression is that there must be a way to reconstruct the data to its original state. For that, we need a reference table that maps the removed data to the placeholder values that were inserted in place of the data. Here is a very simple example of such a table:

1 = compression
2 = file
3 = the
4 = is
5 = that
6 = information
7 = to
8 = loss
9 = remove

Keep in mind that the index also has to be stored, and the overhead of storing the index slightly reduces the compression's efficiency.

Disadvantages of Compression

Even though compression can significantly reduce a file's size, there are two main disadvantages to its use. First, compressing a file puts the file into an unusable state, as you saw with the compressed text block above. Second, not all data can be compressed. A file can only be compressed if there is redundancy within the file. A compressed media file such as an MP3 or an MP4 file is already compressed and usually cannot be compressed further. Similarly, a file containing random data may not contain a significant amount of redundancy and may therefore not be significantly compressible.

These Technologies in Action

We've discussed how these technologies work and some disadvantages to plan for, but we haven't really talked about the real-world benefit of deduplication and compression. As mentioned earlier, the main place you see these technologies in use is in the realm of backup. Backup, being an archival type of service, aims to keep large amounts of data for long periods of time, and the more efficiently you can use the storage, the lower the cost. For example, let's say we have a working data set of 50 TB, with roughly 20% similarity across the data. The like blocks are first deduplicated, bringing our data set down to 40 TB. Then let's say we get a compression ratio of 30%. Combined, this brings the storage requirement for the base set of data down to 28 TB instead of 50+. It's important to keep in mind that all data deduplicates and compresses differently. On top of that, certain kinds of deduplication technologies work better in certain situations. For example, when you're sending data to an offsite location, inline deduplication is far superior because only the blocks that need to go are actually sent across the wire. Your mileage may vary depending on your particular organization, but in all cases both technologies will provide cost savings in terms of the storage needed.
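The behaviour described above is easy to reproduce with the standard zlib module: redundant data shrinks dramatically, random data barely shrinks at all, and decompression restores the original exactly. The last lines of the hedged Python sketch below simply re-run the article's 50 TB example; all figures are illustrative:

```python
import os
import zlib

text = b"Like lossy compression, lossless compression reduces a file's size. " * 50
random_bytes = os.urandom(len(text))           # stands in for already-compressed media

for label, payload in (("redundant text", text), ("random data", random_bytes)):
    packed = zlib.compress(payload, level=9)
    assert zlib.decompress(packed) == payload  # lossless: the original is fully recovered
    print(f"{label}: {len(payload)} -> {len(packed)} bytes")

# Re-running the worked example: 50 TB logical, ~20% duplicate blocks, then ~30% compression.
logical_tb = 50
after_dedup = logical_tb * (1 - 0.20)            # 40 TB
after_compression = after_dedup * (1 - 0.30)     # 28 TB
print(f"{logical_tb} TB logical -> {after_dedup:.0f} TB -> {after_compression:.0f} TB stored")
```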
<urn:uuid:8eda38a6-c0ed-49c0-baea-7fa5182665f0>
CC-MAIN-2022-40
https://www.altaro.com/backup-dr/backup-deduplication-compression/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00222.warc.gz
en
0.936917
2,257
3.359375
3
A study by the Electric Power Research Institute, covering power distribution at 24 utilities across the U.S., found that 85% of power disturbances are caused by voltage dips or swells, harmonics, and wiring and grounding problems, and estimated the resulting financial losses at more than $156 billion per year.

In industrial plants, switching heavy loads on and off can cause voltage sags and swells that push network voltages well beyond their optimal operating parameters. Although much equipment is designed to ride through some variation, these dips and swells are often the culprits behind shutdowns and process outages.

In the current business climate, many corporations are considering locally generated, renewable energy sources such as solar and wind. Distributed generation sources frequently require switch-mode power supplies in their electrical installations, and as more power electronics and switching supplies are added, harmonics play a bigger part in causing problems for industrial equipment. These power supplies can inject harmonics into the system and corrupt the power output, so that everything linked to the supply system is affected, including transformers and cables. Plant managers see the effects of large harmonic currents as network elements are pushed past their normal capacity; in several instances, increases in total losses of only 0.1% to 0.5% on a network can trigger the tripping of protection components.

Several other events can also degrade power quality: unbalanced loading of phases, faulty wiring and grounding methods, load interactions, EMI/EMC, and the switching of large reactive systems.

Power Quality Standards

Maintaining power quality requires a solid monitoring and reporting method. The standards established by the industry are IEC 61000-4-30 Class A and Class S, IEC 61000-4-7 for harmonic measurements, and IEC 61000-4-15 for flicker. Utilities and operators have adopted these power quality standards to set the parameters for operations, and failure to meet them can result in steep penalties. Standards not only give a comprehensive baseline for real-world applications; they also boost user confidence in the data used to answer questions about these events. Measurement precision is the key to generating solid and repeatable results.

Here are a few common power monitoring devices:

1. Electricity Usage Sensor

This is a device that monitors electrical consumption. It takes the form of a current transformer (CT) clamp or an optical sensor. CT sensor clamps gauge the electromagnetic field of a power cable, much like the clamp meters that electricians carry around, and wrap around the main supply cable or sub-circuit wires in your meter board. Optical sensors work by reading the pulse output on your utility or kWh sub-meter: a red LED that blinks in time with your power consumption. Some optical sensors can even read the more traditional mechanical electricity meters that use spinning disks.
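Converting those pulses into figures you can act on is simple arithmetic. A minimal Python sketch, assuming a meter constant of 1,000 impulses per kWh — check the rating printed on your own meter, as it varies:

```python
IMPULSES_PER_KWH = 1000          # assumed meter constant; printed on most kWh meters

def energy_kwh(pulse_count: int) -> float:
    """Total energy represented by a number of LED pulses."""
    return pulse_count / IMPULSES_PER_KWH

def average_power_kw(pulse_count: int, seconds: float) -> float:
    """Average power over the interval in which the pulses were counted."""
    hours = seconds / 3600
    return energy_kwh(pulse_count) / hours if hours else 0.0

# 250 pulses counted over a 15-minute window:
print(energy_kwh(250))             # 0.25 kWh consumed
print(average_power_kw(250, 900))  # 1.0 kW average demand
```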
But you cannot tap into a Zigbee signal without the correct gear.
433MHz – the same frequency employed by some remotes, weather stations, and other equipment; it can send data up to 70 meters away.
Wi-Fi – offers a longer wireless range than the two choices above.
The receiver may be a display screen, an internet router, a separate hub, or even a mobile phone. Sometimes the receiver is an internet gateway that lets you view the information elsewhere. Receivers may be battery-powered or plugged into an electrical socket.
Uses of Data Center Power Monitoring Systems
- Watch your energy consumption before you get saddled with an exorbitant power bill
- Optimize solar power provided on-site with solar PV panels
- Inform your family or colleagues about your electricity expenditure
- Discover your stand-by, after-hours, or overnight energy consumption
- Know early if the power company is overcharging you
- Discover the actual usage of your air conditioning, electric hot water, lighting, and more
- Learn to adjust your daily practices to help you conserve energy
- Lessen your reliance on fossil fuels and reduce your carbon footprint
The monitoring of data center power has remained largely stagnant for the past two decades. With the emergence of high-powered processors, monitoring has become far more capable, bringing a number of helpful additions that make data centers more efficient. These include waveform capture, CBEMA ITIC monitoring, and common circuit creation. Here's a brief overview of these additions:
Waveform capture is a feature that instantly takes a high-resolution snapshot of voltage and current waveforms after a certain limit is crossed. To be of any use, these snapshots must include pre- and post-event data, which requires a continuous rolling log. The system should be able to retrieve past captures at a resolution of 40 points per cycle, and captures should be performed on all current channels for multi-circuit and branch-circuit meters. The advantages of waveform capture are:
- Crucial diagnostic information for investigating errors, disruptions, and outages. A typical example is a disturbance generated on the load side of the bus that spreads across the bus and hits other essential loads. If waveforms are monitored on all branch circuits or feeder breakers, the circuit responsible can be clearly identified.
- Preventive load-failure analysis: manual captures can be compared against benchmark captures to pinpoint growing aberrations in the loads. The technique can also be applied to failing pumps and chillers, and can even identify sagging power supplies before the load is disrupted. This analysis may also be performed by an artificial-intelligence script within the monitor that raises early alerts about changes in load signatures.
CBEMA ITIC Logging
The standard voltage window for IT loads is the CBEMA ITIC curve. Excursions outside this window can disrupt essential loads, but many of these breaches are fleeting and happen either too fast or too slowly to identify, lasting anywhere from microseconds to minutes. Recognizing variations as they occur requires a monitor that samples the voltage at speeds of 40 kHz – well beyond the ability of legacy processors. The emergence of faster processors is now lowering the cost of manufacture and helping to make this technique of monitoring the new standard at the branch circuit level.
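The rolling pre/post-event capture described above is straightforward to prototype. The following is a minimal, illustrative Python sketch – not AKCP's implementation – that keeps a ring buffer of RMS voltage readings and saves a window around any excursion beyond chosen sag/swell thresholds; the 90%/110% limits, the 230 V nominal, and the buffer sizes are assumptions made for the example, not values from the article.

from collections import deque

NOMINAL_V = 230.0              # assumed nominal RMS voltage
SAG_LIMIT = 0.9 * NOMINAL_V    # illustrative sag threshold (90% of nominal)
SWELL_LIMIT = 1.1 * NOMINAL_V  # illustrative swell threshold (110% of nominal)
PRE_SAMPLES = 128              # samples kept before the trigger
POST_SAMPLES = 128             # samples kept after the trigger


def capture_events(samples):
    """Yield (index, kind, waveform) for every sag/swell excursion.

    `samples` is any iterable of RMS voltage readings; the captured
    waveform holds up to PRE_SAMPLES points before and POST_SAMPLES
    points after the sample that crossed a threshold.
    """
    history = deque(maxlen=PRE_SAMPLES)   # rolling pre-event log
    pending = None                        # (trigger index, kind, window so far)

    for i, v in enumerate(samples):
        if pending is not None:
            idx, kind, window = pending
            window.append(v)
            if len(window) >= PRE_SAMPLES + POST_SAMPLES:
                yield idx, kind, list(window)
                pending = None
        elif v < SAG_LIMIT or v > SWELL_LIMIT:
            kind = "sag" if v < SAG_LIMIT else "swell"
            window = list(history) + [v]  # pre-event data plus trigger sample
            pending = (i, kind, window)
        history.append(v)


# Example: a synthetic feed with a short sag in the middle.
feed = [230.0] * 200 + [180.0] * 5 + [230.0] * 200
for index, kind, waveform in capture_events(feed):
    print(f"{kind} at sample {index}, captured {len(waveform)} points")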
Using a data center power monitoring system is important in securing the operational stability of the essential bus. To be useful, the data recorded for an aberration needs to include the duration, magnitude, and time of the occurrence. It also helps if the meter can classify the type of event, such as sag, swell, transient, or other forms. With this information documented, any occurrence can be thoroughly investigated for a root cause; without it, these occurrences can go uncorrected until they disrupt the essential bus. Vigilant monitoring is an important early-warning measure that alerts an operator to warning signs before an untoward event happens.
Common Circuit Creation
With multi-circuit monitors, readings are commonly displayed as per-circuit data, leaving it up to the BMS/DCIM to calculate the result for any multi-phase circuit. To be useful, monitors should be able to group circuits and add up the result for the circuit breaker. This characteristic also facilitates the mathematical calculation of the neutral even without a neutral CT, and this method is a hundred percent precise. Common Circuit Creation also integrates easily with BMS/DCIMs.
Here we have presented a few tools and techniques that give you a glimpse of the next generation of data center power monitoring, along with the most practical everyday uses of data center power monitoring systems. They will allow you to adjust your habits and your power consumption to get the best use of the energy that you consume, while lessening your reliance on fossil fuels and reducing your carbon footprint. The technology changes and advances daily, and constant vigilance and monitoring of brand-new techniques and tools will keep you up to date with the latest in this field. With these helpful tips, you can maximize your income stream and increase the efficiency of your power sources. We hope that this article was both helpful and informative.
As part of a project to monitor remote sites run by a hybrid generator and solar panel system, AKCP has provided Powercity Generators with an intelligent monitoring solution. The SP2+E interfaced via Modbus to the control panel of the generator, polling data on generator runtime, engine speed, oil pressures, and fuel levels. The data was transmitted at some sites via Ethernet, and at others over a longer distance via LoRa wireless communications.
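To make the Modbus polling concrete, here is a hedged Python sketch of a telemetry poller. The register addresses, scaling factors and alarm limits are hypothetical placeholders – the actual SP2+E and generator-controller register map is not published in this article – and the read_holding callback is a stand-in for whichever Modbus client library is actually in use.

# Illustrative generator-telemetry poller. All register addresses, scaling
# factors and alarm limits below are invented for the example.
import time
from typing import Callable, Dict

REGISTER_MAP = {
    "runtime_hours":     {"address": 0x0010, "scale": 1.0},
    "engine_rpm":        {"address": 0x0012, "scale": 1.0},
    "oil_pressure_kpa":  {"address": 0x0014, "scale": 0.1},
    "fuel_level_pct":    {"address": 0x0016, "scale": 0.1},
}

ALARM_LIMITS = {"fuel_level_pct": 20.0, "oil_pressure_kpa": 150.0}  # assumed


def poll_generator(read_holding: Callable[[int], int]) -> Dict[str, float]:
    """Read each mapped register and return scaled engineering values.

    `read_holding` is a stand-in for your Modbus client: a callable that
    takes a register address and returns its raw 16-bit value.
    """
    reading = {}
    for name, spec in REGISTER_MAP.items():
        raw = read_holding(spec["address"])
        reading[name] = raw * spec["scale"]
    return reading


def check_alarms(reading: Dict[str, float]) -> None:
    for name, low_limit in ALARM_LIMITS.items():
        if reading.get(name, float("inf")) < low_limit:
            print(f"ALERT: {name} = {reading[name]} below {low_limit}")


if __name__ == "__main__":
    fake_bus = lambda address: 500      # fake transport for demonstration
    snapshot = poll_generator(fake_bus)
    check_alarms(snapshot)
    print(snapshot)
    time.sleep(1)  # a real poller would loop; LoRa links may want long intervals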
<urn:uuid:1ff25938-553c-4f2b-b331-f19ddaefe096>
CC-MAIN-2022-40
https://www.akcp.com/blog/next-generation-of-data-center-power-monitoring/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00222.warc.gz
en
0.918997
1,878
3.203125
3
Are you taking time to care for your teeth? It’s one of the most important preventative measures you can take for your oral health and overall health. Suppose you don’t properly care for your teeth and gums. In that case, it can lead to many issues, including infections of the mouth, facial pain, gum disease, and more serious health issues, including infertility, diabetes, kidney disease, dementia, heart disease, cancer, and stroke. Don’t worry. There are more than a few things you can do to care for your teeth and gums naturally. Keep reading to learn what these things are and how you can ensure good oral health without using chemicals and other potentially harmful products. Start from the Inside Out Having a healthy mouth is something that involves your whole body. If you want to keep your mouth healthy, your body requires fat-soluble vitamins and minerals. While the vitamins and minerals support your entire body, they also help create more mineral-rich saliva that is used to protect your teeth. A great way to get these nutrients is by using herbal infusions. The right nourishing herbal infusion includes higher levels of minerals and vitamins that are easily absorbed and that help to support healthy teeth. Saliva and Oral Health Saliva is the way your body protects your teeth. From a practical perspective, teeth are re-mineralized as saliva moves over them. However, it’s necessary to consume the proper nutrients, or your saliva won’t have the minerals necessary to strengthen and protect your teeth. You can also use natural toothpaste, which will help you provide your teeth with an additional layer of protection. Watch Your Diet Make sure you eat a balanced diet that includes dairy, meats, vegetables, and fruits. Doing this will ensure you have all the nutrients required to keep your teeth healthy and strong. Remember, acidic foods will increase the possibility of tooth decay since they break down the enamel and let bacteria make it into your teeth. While this is true, you should not completely avoid acidic foods. Some are quite beneficial to your body. It’s also possible to reduce the impact of acidic foods. You can do this by rinsing your mouth out with water after you eat. You can also drink acidic liquids, like soda and black coffee, using a straw, which ensures it bypasses your teeth completely. Make sure you don’t brush your teeth right after eating. The acids from your food weaken the enamel, which makes it vulnerable when you brush. It’s also important to limit how much sugar you consume. While sugar is not acidic on its own, the microbes in your mouth will thrive on it. The microbes produce acid that will cause serious damage to your teeth. Additionally, excess sugar is the cause of many health issues. Vegetables and Fruits Eating peppers, carrots, celery sticks, and apples all help your teeth be healthy and strong because this triggers saliva production, stimulates the gums, and gives your body the nutrients required to strengthen your teeth. You can use sesame seeds as a type of oral scrub. Chew them, but don’t swallow. You can then use your toothbrush to rub the sesame seeds over your teeth, similar to how you would use toothpaste. The chewed seeds will gently remove tartar and plaque on your teeth without causing damage. Flossing is essential. Unfortunately, many people don’t do it, or they don’t do it properly. It’s a good idea to learn how to floss your teeth properly and do this daily for good oral health. 
Keeping Your Teeth Healthy Naturally If you want to keep your teeth healthy naturally, start with the tips and information above. Each of the steps here will help you maintain healthy, strong teeth for your life. Being informed and knowing what to do to keep your teeth healthy will pay off and set you up for minimal oral health issues now and down the road. Publish Date: December 1, 2021 2:55 PM
<urn:uuid:e7c4d9e7-b213-479c-8571-a45daafb0c71>
CC-MAIN-2022-40
https://www.contactcenterworld.com/blog/mytechblog/?id=de2cb2d2-69ea-4a0b-92f3-795581dd2ff2
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00222.warc.gz
en
0.94013
850
3.078125
3
Previously we have had a look on the ways wireless networks can be installed and configured. We are aware of the fact that how difficult it is to maintain and run a wireless network. The popularity of wireless networks has led to the emergence of more technical and large networks. Many of the people, all over the world, are using wireless networks for various purposes. However, as the popularity of these networks has increased, a lot of problems associated with them have also emerged. These problems might occur with the running of the network or with the devices in use. All of these issues call for the need for an efficient method to troubleshoot and eliminate these problems. It is quite essential that you know various ways in which you can troubleshoot the wireless systems to ensure their smooth operation. You will encounter a number of problems related to wireless networks relating to interference, signal strength, incompatibilities, configurations, latency, encryption type, bounce, SSID Mismatch, Incorrect switch placement and incorrect channel. It is quite important that you understand these problems and devise effective ways to ensure that they are eliminated. Below, you will find in detail the procedure to get rid of each of these problems and to troubleshoot the network. One of the most prominent problems faced by wireless networks is interference. We are aware of the fact that wireless devices function by transmitting signals in the form of radio waves. This is the reason that any device close to a WAP which produces radio or even micro waves tend to cause interference. This interference leads to the disruption of the signal. It may also let the signal divert from its original path and result in the client receiving weak signals. This is the reason why it is essential to place the WAP and the client in a place where there are not electrical devices which transmit radio waves. In addition to this, you should also ensure that there are no metal objects close to the place where you set up your wireless network, as metal objects also tend to disrupt signals. It is very important to ensure that there is no interference so that the client can receive the maximum amount of signals and get the best output. When you set up a wireless network, one of the most essential issues to keep into consideration is the signal strength which the clients will receive. This means that you have to ensure that the signals reach the client with a high strength. In this regard, the location of the WAP is quite essential. You should keep the WAP in a place which is central to ensure that all the clients receive the signals. You should also considering using the most appropriate antenna type. For example, if the signal has to be transmitted in all directions, they you should consider using an Omnidirectional antenna. In addition to this, your wireless network should also be configured in a way that it cannot be connected by unauthorized clients. You should also be aware of the fact that you tend to receive high signal strength if you are close to the WAP. The signal strength might also get reduced if there are a lot of walls between the WAP and the client. The aspect of security impacts the way in which wireless networks are configured. Many of the people usually hide their SSIDs to add an extra layer of security to their network. However, they do not consider that this might make things quite difficult for the client. 
If the SSIDs are hidden, then there is a lengthy procedure to find it on the client's device and then perform some settings to connect to it. This is one problem which many people wish to avoid as in this case every client will have to be provided with a set of instructions which they will have to follow to connect to the network. It would be much better if the SSIDs are broadcasted openly but the password is strong enough to increase security. This password will also have to be communicated with the clients, but it will be much less of a problem than communication of complete instructions. The issue of compatibility between various wireless standards is also an essential issue to concentrate on. We are aware of the fact that there are 4 different types of wireless standards; 802.11a, 802.11b, 802.11g, 802.11n. These wireless standards are configured in the devices according to the frequency at which the device transmits signals. However, the main problem is that any two devices which want to communicate over a wireless network must have the same wireless standards. This is an essential requirement. However, in order to cater to this issue, the best solution is using the backward compatible devices. These devices can communicate with all of the other wireless standards. This will ensure that there are not compatibility issues. In case the devices are not backward compatible, you should wisely select the type of wireless standard to be configured on each device to ensure compatibility. The setting up of frequencies is usually done by using channels. It is vital to realize that you have a band of frequencies available from which you can choose the best frequency for your network. A channel basically comprises of a number of frequencies which are combined together. These channels are created to increase the bandwidth which the user is provided with. However, to allow the WAP and the client to communicate, both of them should be set on the same channel. This will ensure that the signals transmitted by the WAP are picked by the client. However, you should also take certain precautions. While setting up channels, you should ensure that you choose the chancel which does not contain a frequency range which is being used by any other device in the surrounding area. This is done to prevent overlapping and stop any kind of interference. In addition to this, you should also ensure that the wireless network and the client are working on the same channel, otherwise there will be no connection established. The most appropriate channels to be used for wireless networks are 1, 6 and 11. Latency is the time which it takes the wireless signals to be transmitted from the WAP to the client. The lower the latency, the better is the efficiency of the network. There are a number of factors which affect the latency of a network and make the signals reach their destination in more time. Most of the factors which we discuss above can result in latency problems. Interferences can cause the signals to be disrupted and reach their destination late. Furthermore, the placement of WAP and the selection of antenna types can also lead to the variation in latency. Hence, all of these factors need to be kept in consideration while dealing with latency issues. It is quite important nowadays to transfer the data over a network in an encrypted form. This will increase the security of data and anyone who intercepts the data will not be able to decipher it in any way. 
In this regard, there are a number of protocols which provide security to the wireless networks. However, the main issue that requires consideration is that the WAP and the client should both be set up with the same encryption type. This means that the protocol providing encryption at the WAP should be the same one providing encryption to the client. If the encryption type is not same, then it will not be possible for the WAP and device to connect. In addition to this, you should also make sure that the encryption type used is the most appropriate with the required situation as different encryption protocols provide different types of security. We are aware of the fact that wireless signals are transmitted in the form of radio waves. This means that these waves have the ability to bounce off reflective surfaces, such as metal surfaces. This causes the radio waves to bounce off the surface if there is any object between the WAP and the client. This will result in a weak signal being received at the client. It is essential to carry out a check to ensure that there is not any bounce in the signals so that the client receives the highest strength of signals. In this regard, the first step should be to make sure that there is not object, especially a metal object, between the WAP and the client. If there is any, you should consider changing the position of your WAP to ensure full signal strength. SSIDs are referred to as Service Set Identifiers. Whenever a wireless network is launched, its SSID is broadcasted in the surrounding areas which lie under the range of the network. Every device in that range will be able to search for this SSID and view it in the list of networks available in that area. This SSID will actually be the name which is given to the network. The clients will be able to connect to the network by selecting the SSID of your network and then entering necessary details to join it. However, in a crowded area where there are a number of WAPs providing wireless signals, the problem of SSID mismatch arises. In this environment, it might be difficult for the client to identity which network is yours. For example, if your clients are having trouble joining your network, this means that they are probably trying to connect to another network in the area. In this regard, you will check the device of client regarding the network details to let the client know that which network is yours to which they can connect. Thus, you should make sure that your SSID is distinct from the other networks in the network to eliminate any such problems. A WAP is also referred to as a Wireless switch. There are a lot of problems which arise due to the incorrect placement of the WAP and its antenna. This may cause the client to receive weak signals and experience poor network functionality. In order to avoid this, it is essential that you place your wireless switch and its antenna in the correct place. You should make sure that the wireless switch is placed in a central location from where the signals could be transmitted in all directions. You can also connect the antenna with a wire and place it in a location where the WAP cannot be placed in order to make sure that the maximum signals are received. However, in case it is not possible to find the correct place for setting up the switch, you can also consult other companies which provide special services to ensure that all the switches and clients are placed so that the signals received are maximal. 
This is essential in situations where there are a lot of WAPs in the network. The process of troubleshooting wireless networks is highly significant. Before you begin troubleshooting, you should be able to identify the actual reason the problem is occurring, which requires in-depth knowledge of how wireless networks and their components operate. You should make sure that none of the issues discussed above are present in the network so that it can perform at its best. If any related problem arises, you can use the knowledge in this chapter to eliminate it. Be aware that more than one issue can contribute to a single problem, so you should check the system thoroughly after you carry out the troubleshooting. Keep in mind that if you do not perform this delicate process carefully, you may create new problems while eliminating a previous one. If you keep all of these factors in mind and follow the steps indicated, you will be able to master the process of troubleshooting wireless networks.
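Picking up the earlier advice to use channels 1, 6 and 11 in the 2.4 GHz band, the short Python helper below shows why those three do not overlap. It assumes the classic 22 MHz channel width, and the centre-frequency formula covers channels 1–13 only; it is an illustration, not part of any exam material.

CHANNEL_WIDTH_MHZ = 22


def centre_frequency(channel: int) -> int:
    """Centre frequency in MHz for 2.4 GHz channels 1-13."""
    if not 1 <= channel <= 13:
        raise ValueError("only channels 1-13 are handled here")
    return 2407 + 5 * channel


def channels_overlap(a: int, b: int) -> bool:
    """Two channels overlap if their centres are closer than one channel width."""
    return abs(centre_frequency(a) - centre_frequency(b)) < CHANNEL_WIDTH_MHZ


if __name__ == "__main__":
    for mine in (1, 6, 11):
        clashes = [ch for ch in range(1, 14)
                   if ch != mine and channels_overlap(mine, ch)]
        print(f"channel {mine} ({centre_frequency(mine)} MHz) overlaps with: {clashes}")
    # Channels 1, 6 and 11 never appear in each other's clash lists,
    # which is why they are the recommended non-overlapping trio.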
<urn:uuid:4873e7cf-a6cc-4d71-8cd3-cf97886cce27>
CC-MAIN-2022-40
https://www.examcollection.com/certification-training/network-plus-how-to-troubleshoot-common-wireless-problems.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00222.warc.gz
en
0.964215
2,390
2.8125
3
Women began dropping out of ICT roles in 1984, the same year personal computers were on the rise, according to Lynwen Connick, First Assistant Secretary – Cyber Policy and Intelligence Division, the Department of the Prime Minister and Cabinet. Speaking as the keynote at the packed out Females in IT and Telecommunications (FITT) International Women’s Day lunch event in Sydney on Wednesday, Connick described how, when she chose to pursue a career in computer science, it was quite normal for women to work in this sector. At the time, she said, computers were not yet being bought for the home and her only exposure to computers up until university had been on television. There was also a known history of fashionable female ICT pioneers, such as Ada Lovelace – English mathematician and countess, said to have created one of the world’s first programming languages. Connick’s slide demonstrating the drop-off of women in IT careers from 1984 onwards “Once I got to university it was clear computing was an exciting field to study and be working in. In those days many women were working in computing, and many of the world’s ICT pioneers were women. But when I graduated something happened; it all went wrong,” said Connick. “In 1984, the number of women studying computer science began to stagnate. Women were going into other science subjects, but computer science numbers were dropping.” According to Connick, this also coincided with when computers were becoming popular in the home and schools had begun teaching computer science as a subject. “I hadn’t studied computing at school, and neither had half my cohort. So it must have been something in the way children were being exposed to ICT in school that was changing how they wanted to work in this field,” she concluded. By observing the natural behaviour in her children, having both a son and a daughter, Connick said she noticed that when in preschool, all the children were gathered around computers in equal number, but by around mid-primary school it was the boys that were laughing around a screen in the classroom, while the girls instead stood outside chatting. “At that time, personal computers, computer games, even television programs and movies, the technology in them was all marketing towards boys and men, so ICT really started to become a boy thing,” she said. Connick made efforts to encourage her daughter to become interested in computing by buying a book that taught children how to code in Python, but this wasn’t enough to drive enthusiasm in an ICT career. “I asked my daughter if she would consider a career in IT and she said no. I was shocked – both her parents enjoyed careers in IT, she was good at mathematics and science, so why not? And she told me, she didn’t know what a career in ICT would be like. So we failed a bit there,” she said. Further, currently across all of Asia-Pacific, Australian girls between 15 and 19 years old are the least interested in studying or pursuing a career in STEM, according to recent research by MasterCard. While a renewed focus on getting kids involved in STEM (science, technology, english and mathematics) subjects is very important, along with teaching young students how to code, Connick said it’s also crucial that we let kids know that there is a lot more variety to a career in ICT than just coding, particularly when programmers are often stereotyped as men in hoodies leaning over computers, in dark rooms. “That’s some knowledge we need to impart. 
I’m very lucky to have an amazing job, just one of the many amazing jobs I’ve had in my career, and that’s just one example,” she said. “I get to advise the prime minister of cyber security policy, I participate in international events, there are so many different things you can do with a career in IT and we must sell the message about how diverse our jobs are.” The National Innovation and Science Agenda has recognised the need for a more concerted effort nationally to overcome cultural, institutional and organisational factors that discourage girls from studying STEM subjects. Last year the government pledged $13 million, budgeted over five years, to encourage more women to choose and stay in STEM research, related careers, startups and entrepreneurial firms. The initiative includes the expansion of the Science in Australia Gender Equity pilot to cover more Australian science and research institutions; establishing a new initiative under the Male Champions of Change project to focus on STEM-based industries; and partnering with the private sector on initiatives to celebrate female STEM role models and foster interest in STEM among girls and women. The FITT International Women’s Day panel (photo: Holly Morgan) Around 600 men and women attended this year’s FITT International Women’s Day lunch at Doltone House in Pyrmont to discuss challenges still facing women in business and technology today, as well as applaud progress in gender equality, such as an increased percentage of women on Australian Boards in the past year. Attendees were also able to ask questions of a panel that included Connick, Microsoft Australia managing director, Pip Marlow; SAGE steering committee co-chair, science in Australia gender equality, Susan Pond; Qlik vice president and regional director A/NZ, Sharryn Napier; GoDaddy managing director, Tara Commerford, and Intel Australia managing director, Kate Burleigh. “Events like this are also a great way to raise awareness, build momentum and share our passion for ICT, highlight successful women in the sector, and hopefully encourage others to get involved,” said Connick.
<urn:uuid:4a752764-b139-42eb-a50e-3c6cbcb8e5e4>
CC-MAIN-2022-40
https://www.cio.com/article/201879/1984-was-the-year-everything-changed-for-women-in-it-lynwen-connick.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00222.warc.gz
en
0.979832
1,192
2.625
3
Biometrics used to protect livestock
A new livestock application of biometrics promises a way to protect sheep from big bad wolves. Swiss biologists have developed a biometric sheep collar that registers changes in a sheep's heart rate to indicate wolf attacks. The device is ultimately designed to alert shepherds to attacks via smartphone text message and to fend off the wolves by releasing a repellent. The research team, which includes highly regarded Swiss biologist Jean-Marc Landry, put high-tech collars, similar to those used by runners, on 12 sheep and then placed them in an enclosure with muzzled Czechoslovakian wolfdogs. The wolfdogs circled the sheep before attempting to attack, and collar readings indicated a substantial rise in the sheep's heart rate. The team plans to test a new version of the device this autumn. The next version of the collar will include a built-in wolf-repelling device, in the form of a non-lethal spray or a sound repellent, which will activate when a sheep's heart rate rises above 200 beats per minute. The normal heart rate for sheep is between 60 and 80 beats per minute. Landry and other scientists have outlined both their invention and their intentions in a research paper. The device is designed to protect livestock, thereby reducing protection costs and providing producers with a more high-tech means of protection than sheepdogs. The collar will ultimately be used to protect sheep in France, Switzerland and Norway in 2013.
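The collar's firmware has not been published, so the following Python fragment is only a toy sketch of the trigger rule the article describes – a smoothed heart rate above 200 beats per minute releases the repellent and sends an alert. The five-reading smoothing window and the messaging hooks are assumptions made purely for illustration.

NORMAL_RANGE = (60, 80)      # resting heart rate for sheep, beats per minute
TRIGGER_BPM = 200            # threshold quoted in the article
WINDOW = 5                   # number of readings to average (assumed)


def should_trigger(recent_bpm):
    """Return True when the smoothed heart rate exceeds the trigger level."""
    if len(recent_bpm) < WINDOW:
        return False
    smoothed = sum(recent_bpm[-WINDOW:]) / WINDOW
    return smoothed > TRIGGER_BPM


def on_trigger(release_repellent, send_sms):
    release_repellent()                      # non-lethal spray or sound
    send_sms("Possible wolf attack detected by collar 12")


# Example with a simulated spike in heart rate:
readings = [72, 75, 80, 150, 210, 230, 240, 235, 228]
history = []
for bpm in readings:
    history.append(bpm)
    if should_trigger(history):
        on_trigger(lambda: print("repellent released"),
                   lambda msg: print("SMS:", msg))
        break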
<urn:uuid:6e237f15-c7e1-466e-bb1e-1483c46c210c>
CC-MAIN-2022-40
https://www.biometricupdate.com/201208/biometrics-used-to-protect-livestock
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00222.warc.gz
en
0.918178
312
3.015625
3
Intrinsic functions are subprograms that are built into the ACUCOBOL-GT library. They save time by simplifying common tasks that your COBOL programs might need to perform. For example, intrinsic functions can perform statistical calculations, convert strings from upper to lower case, compute annuities, derive values for trigonometric functions such as sine and cosine, and perform general utility tasks such as determining the compile date of the current object file. Intrinsic functions are sometimes called built-in or library functions. To access an intrinsic function, you include it inside a COBOL statement (typically a MOVE or COMPUTE statement). Here's an example of a statement that uses the min intrinsic function: move function min(3,8,9,7) to my-minimum. This COBOL statement can be translated into: move the result derived from performing the min function on the literals "3, 8, 9, and 7" to the variable my-minimum. Note the presence of the required word function, followed by the name of the function (min) and then its parameters. This required word can be replaced by a $ sign, as shown in the following example. Each intrinsic function is evaluated to a data value. This value is stored in a temporary storage area that you cannot access directly in your program. The only way to get the derived value of an intrinsic function is to provide the name of a data item into which the resulting value should be placed. In the example shown above, the variable my-minimum receives the derived value of the min function. In the example above, the parameters passed to the min function are literals. It is also permissible to pass data items, as shown here: compute my-sine = $sin(angle-a). However, if your COBOL program is compiled for 31-digit support (-Dd31), numeric functions are computed using special floating point arithmetic that is accurate to approximately 33 digits, regardless of the floating-point representation on the host machine. The functions that return a double include: ABS, ABSOLUTE-VALUE, ACOS, ANNUITY, ASIN, ATAN, COS, LOG, LOG10, MEAN, MEDIAN, MIDRANGE, NUMVAL, NUMVAL-C, PRESENT-VALUE, RANDOM, REM, SIN, SQRT, STANDARD-DEVIATION, TAN, and VARIANCE.
<urn:uuid:c13b00e5-6b0b-4a37-a39c-b518b1222b2e>
CC-MAIN-2022-40
https://www.microfocus.com/documentation/extend-acucobol/1031/extend-Interoperability-Suite/BKPPPPINTRS001.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00222.warc.gz
en
0.877952
517
3.390625
3
Encryption Does Not Equal Integrity
Intercepted e-mail can become he-said-she-said without you knowing it.
- By Roberta Bragg
Sometimes I wonder what my purpose in life is. I used to think it was helping people and organizations improve their information security posture. In effect, help them stand up straight, do the right thing, lock down systems, train users, prepare defenses against the bad guys. I used to think I could make a difference simply by teaching people what I knew. Then everyone got into the act. It seemed like everyone knew lots about everything, security-wise. Who needs an evangelist when everyone goes to church? Recently, however, I've discovered a big secret. Some of those people preaching security have their definitions confused. Now, I'm not talking about differences of opinion. It's true that in the information security field we sometimes have different definitions for the same thing or disagree on terminology. For example, you may say certificate authority, while others say certification authority. You may believe that a VPN means encryption; I say, strictly speaking, a VPN is a tunnel that may or may not be encrypted. In the former example it's more a naming issue; for the VPN, it could get you into trouble if you think your data is encrypted when it's not. Please understand that encryption does not mean integrity. Just because the data is encrypted before being sent doesn't guarantee it doesn't change before it's delivered. You might think it's so, and it's easy to see why. After all, if you can't unscramble the encrypted message, how can you change what it says? Let me give you an example of how you can, using my friends Alice, Bob and Chester. Alice wants to send a confidential message to Bob. Fortunately, they both work for the Acme Roadrunner Delivery Company. Acme has installed Microsoft PKI, so Alice uses Outlook to compose and send an encrypted message to Bob. Behind the scenes, the e-mail is encrypted with a secret key; the key is encrypted using Bob's public key and sent with the message. When Bob receives the e-mail, his private key is used to decrypt the secret key and the secret key is used to decrypt the message. Because Bob is the only one who has access to his private key, he's the only one who can read the message Alice sent. The question is, does the message say what Alice wrote before she sent it? Here's where Chester comes in. Chester also works for Acme. Because Bob's public key is, well, public, Chester can also use it to send an encrypted message to Bob. The message will be identified as coming from Chester. But what if Chester intercepts Alice's message and replaces the message part of her e-mail with the message part of the e-mail he composed, then sends this altered message to Bob? Chester can't read the message Alice sent, but that doesn't matter. The message Bob receives can be decrypted by Bob; thus, Bob gets the message that Chester sent, not the one Alice sent, although he thinks it's from her. Now, imagine the havoc Chester could cause. Then substitute some interesting public personalities for our little trio. Note that I've made this sound easier than it is, but that's not important. What's important is that it could happen, and public key/private key cryptography is not the only encryption technology subject to this type of attack. My point is this: Encryption does not guarantee integrity. We need to use other cryptographic tools, such as digital signatures or specific integrity algorithms such as SHA1.
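To make the point concrete, here is a small Python sketch using only the standard library. It is not how Outlook or S/MIME work – in the Alice-and-Bob scenario the proper fix is a digital signature – but an HMAC over the ciphertext shows the general idea: the receiver rejects any message whose integrity tag does not verify, so a substituted payload is caught. The ciphertext values are opaque stand-ins rather than output from a real cipher, and the shared integrity key is assumed to have been exchanged out of band.

import hmac
import hashlib

INTEGRITY_KEY = b"shared-secret-known-to-alice-and-bob"   # assumed out-of-band


def protect(ciphertext: bytes) -> bytes:
    """Sender side: append an integrity tag to the encrypted payload."""
    tag = hmac.new(INTEGRITY_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag


def verify(message: bytes) -> bytes:
    """Receiver side: raise if the payload was swapped or altered in transit."""
    ciphertext, tag = message[:-32], message[-32:]
    expected = hmac.new(INTEGRITY_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed - do not trust this message")
    return ciphertext


alice_ciphertext = b"opaque bytes Alice encrypted for Bob"
chester_ciphertext = b"opaque bytes Chester encrypted for Bob"

in_transit = protect(alice_ciphertext)

# Chester swaps the encrypted body but cannot forge the tag without the key.
tampered = chester_ciphertext + in_transit[-32:]

verify(in_transit)            # fine: Bob gets what Alice protected
try:
    verify(tampered)          # caught: the substituted body fails the check
except ValueError as err:
    print(err)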
So maybe you don't have a Chester working for you and you feel this is a very unlikely scenario. That doesn't matter. In your organization you may decide that the risk of such an attack is low enough that you don't need to add integrity checks to your e-mail. But you should still be clear on the difference between encryption, scrambling (which keeps information confidential) and integrity (which guarantees that data remains the same). Someday it might make a difference. Roberta Bragg, MCSE: Security, CISSP, Security+, and Microsoft MVP is a Redmond contributing editor and the owner of Have Computer Will Travel Inc., an independent firm specializing in information security and operating systems. She's series editor for Osborne/McGraw-Hill's Hardening series, books that instruct you on how to secure your networks before you are hacked, and author of the first book in the series, Hardening Windows Systems.
<urn:uuid:b71d0ac5-bad7-4324-bc50-0d3ae4454f9d>
CC-MAIN-2022-40
https://mcpmag.com/articles/2004/10/18/encryption-does-not-equal-integrity.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00222.warc.gz
en
0.937068
1,000
2.75
3
CCL's Principal Analyst Alex Caithness asks the question: After SQLite, what comes next? A must-read primer on LevelDB - tomorrow's ubiquitous format? SQLite has become a ubiquitous data storage format for digital forensic practitioners to consider. First popularised by smart phone platforms it now forms part of almost every investigation in one form or another. SQLite’s ubiquity was built upon the growing market share of the platforms that used it extensively (we’re talking iOS and Android – name a more iconic duo, I’ll wait), so it’s interesting to ask the question: what’s the next platform, and what’s the next data format? Although web browsers provide APIs (Application Programming Interface) for opening and saving data to a user’s local machine, making use of them is often clunky. On one side, making use of the user’s host system requires an understanding of the operating system currently in use, and on the other the browser tries to “sandbox” Web Apps, separating them from each other, and restricting their access to the host system for security reasons. Because of this, data persistence is often achieved by some combination of storing data in the cloud and making use of a variety of browser managed storage mechanisms; keep this last part in mind as you read on – we’ll be returning to this idea. Spoiler: Chrome won. Safari and Chrome actually have a shared lineage in their rendering Engine – Safari using WebKit and Chrome using Blink, which is based on WebKit. From that we can see that around 82% of browsers currently in use are based on the WebKit rendering engine, which is probably a good thing for standardisation, but not so great for competition. But it actually goes further (or ‘gets worse’, depending on your how you look at things) – on the chart above alone, Opera, Android Browser and Samsung Internet are all based on the Chromium codebase (Chromium being the open source version of Chrome) and the new version of Edge that now ships with Windows 10 is also based on Chromium. As these browsers are all sharing aspects of a single codebase many of the artefacts associated with these browsers will also be shared – and convergence is usually good news from a digital forensics standpoint. Because of this convergence I’ve started referring to these browsers that borrow heavily from Chrome (or Chromium) as being “Chrome-esque”. Chrome is winning a war on another front as well – but we’ll get to that in a bit. As Web Apps become more and more common, the need to persist data between sessions is often necessary to properly provide the functionality on offer. A word processor running the browser is all well and good, but if you can’t save your work as you go, it becomes a risky business. Of course, storing data in the cloud has become standard practice – being able to access your data from any location and on any device is an expectation of most users. That being said, web apps should also operate offline in situations where that functionality makes sense – as would be the case for a word processor for example. For this, data needs to be stored locally, if only ephemerally. Local data storage can also be used for speed, where resources need to be accessed multiple times, for tracking, to provide functionality for browser extensions, and so on. As mentioned, browsers tend to abstract websites away from the underlying file system and instead provide APIs that provided storage to websites via an API. One such API is IndexedDB. 
To create an analogy with “classic” relational databases, you might consider each domain to have a single database which contains multiple “object stores” (tables), which contain a sequence of objects (records or rows). Records can be addressed based on their primary key, or any index field set on the object store. For a concrete example of IndexedDB being used in the wild, try creating a document in Google Docs and then setting it to be “Available offline”; the data is stored by IndexedDB in the “Documents” object store, rather than on the local filesystem. Compared to other storage APIs, IndexedDB shines in terms of its flexibility – good old Cookies provide only a very small storage space per record and needs to be text based; Web Storage (Local Storage and Session Storage) is again text-based only; Web SQL and Application Cache (aka AppCache) are both deprecated legacy technologies (and never formally standardised in the case of Web SQL). Finally there is the “FileSystem API” which is certainly interesting as it allows a Web App to make use of a file-system-like structure to store data locally; it is currently still officially an experimental API, although it is well supported by mainstream browsers. The FileSystem API on Chrome also makes use of the same technology as IndexedDB as it happens, albeit storing data in a different structure – but that’s for another blog. The thing to keep in mind about IndexedDB is that it is an API: a way for developers to interact with the browser in order to store and retrieve data – how the browser chooses to store the data is an implementation detail that is up to each browser (it’s just odds on that the browser will be Chrome, or Chrome-esque). Before we jump into the bits and bytes of how IndexedDB is stored by our Chrome-esque browsers, let’s talk about Web Apps again for a moment. We’ve established that modern web development allows one to create incredibly rich app experiences, but the desktop still needs apps, and they’re going to be running natively on the operating system, right? The thing is: web development technologies have matured so far now that in many cases it can be desirable to create a Web App because you can deploy that anywhere that can run a browser. But what if you want your app to look and act like a “real app”? So how common are these applications? Well if you’re running an up-to-date version of Windows, you almost certainly have at least one Electron based application already installed: Skype. Yep: Skype is based on Electron which means that it’s Chromesque and at the time of writing much of the chat information is persisted through IndexedDB. But a quick survey of apps we’ve investigated we’ve seen the following: • GitHub Desktop • Signal (desktop) • Microsoft Teams • WebTorrent (desktop) • WhatsApp (desktop) • Microsoft Yammer All built on top of Electron; all Chrome-esque. The answer in this case is LevelDB. And if you’ve ever seen a folder like the one below, you’ve seen LevelDB: LevelDB is an on-disk key-value store where the keys and values are both arbitrary blobs of data. Each LevelDB database occupies a folder on the file system. The folder will contain some combination of files named “CURRENT”, “LOCK”, “LOG”, “LOG.old” and files named “MANIFEST-######”, “######.log” and “######.ldb” where ###### is a hexadecimal number showing the sequence of file creation (higher values are more recent). 
The “.log” and “.ldb” files contain the actual record data; the other files contain metadata to assist in reading the data in an efficient manner. When data is initially written to a LevelDB database it will be added to a ”.log” file. “Writing data” to LevelDB could be adding a key to the database, changing the value associated with a key or, indeed, deleting a key from the database. Each of these actions will be written to the log as a new entry, so it is quite possible to have multiple entries in the log (and, indeed, “.ldb” files) relating to the same key. Each entry written to LevelDB is given a sequence number, which means that it is possible to track the order of changes to a key and recover keys that have been deleted (indeed, it is actually more effort to exclude old and deleted entries when reading the database!). When a log file reaches a particular size (4 MB by default), the data it holds will be converted into a permanent table file (a “.lbd” file) and a new log file started. The ldb files that are created by converting a log file are said to be “level 0” files and will contain the same data from the entries found in the log file - including all the updated or deleted records. As the number of level 0 files reaches a threshold (4 files by default), a “compaction” will be performed where the records from the level 0 files will be sorted and deduplicated based on their keys and moved into a new “level 1” file. These compactions continue in the same vein through the lifetime of the database, adding new levels as records from earlier levels are compacted, further sorting the data and removing redundant data. The deduplication of keys means that records in level 1 files and up no longer contain the old or deleted versions of records (although, there may still be newer versions of those records available in earlier levels or in the current logs). The LevelDB “.log” files are broken up into blocks of a fixed size (32 kB). The blocks contain a block header followed by data related to log entries. The header is 7 bytes long and has the following format: As data can be added to LevelDB in batches, more than one block in the log file may be needed to contain all records for that batch. In this case a number of blocks can be designated as forming part of a batch: the first block being marked as the “Start” of the batch; the final block being marked as the “Last”; and all intermediate blocks marked as being “Middle” blocks. If all records from a batch fit on a single block it will be marked as being a “Full” block (multiple batches could be stored in a single Full block if they fit). The data is written to the log in the “batch format” which begins with a 12 byte batch header: This is followed by a number of record format entries as specified in the batch header: The sequence number of the first record will be the value given in the batch header, all subsequent sequence numbers can be inferred by incrementing that value for each record. The record format makes use of VarInts (variable length integers) for encoding numeric values; more details can be found here: https://developers.google.com/protocol-buffers/docs/encoding#varints From a data recovery perspective, the log file format offers some great benefits: keys and values are stored in full, without any compression applied, therefore string searches and carving will be successful in a lot of cases. 
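The field layouts in question are spelled out in LevelDB's own log_format.md and write-batch source: the 7-byte block header is a 4-byte checksum, a 2-byte little-endian length and a 1-byte record type, and the 12-byte batch header is an 8-byte sequence number followed by a 4-byte record count. On that basis, a rough Python reader – no checksum verification and no reassembly of batches that span multiple blocks, so treat it purely as a sketch – might look like this:

import struct

BLOCK_SIZE = 32 * 1024


def read_varint(buf, pos):
    """Decode a little-endian base-128 varint, return (value, new_pos)."""
    value = shift = 0
    while True:
        byte = buf[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, pos
        shift += 7


def parse_batch(data):
    """Yield (sequence, state, key, value) for each record in a batch."""
    seq, count = struct.unpack_from("<QI", data, 0)   # 8-byte seq, 4-byte count
    pos = 12
    for i in range(count):
        state = data[pos]            # 1 = live (put), 0 = deleted
        pos += 1
        klen, pos = read_varint(data, pos)
        key = data[pos:pos + klen]
        pos += klen
        value = b""
        if state == 1:
            vlen, pos = read_varint(data, pos)
            value = data[pos:pos + vlen]
            pos += vlen
        yield seq + i, state, key, value


def parse_log(path):
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            pos = 0
            while pos + 7 <= len(block):
                _crc, length, rec_type = struct.unpack_from("<IHB", block, pos)
                pos += 7
                if length == 0 and rec_type == 0:
                    break                      # trailing padding in the block
                payload = block[pos:pos + length]
                pos += length
                if rec_type == 1:              # FULL record: a whole batch
                    yield from parse_batch(payload)
                # FIRST/MIDDLE/LAST fragments (types 2/3/4) need reassembly.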
It is worth considering that as batched data can flow across block boundaries it will be interrupted by the 7 byte block header if it spans multiple blocks. If searching and carving is the primary concern then, it is worth pre-processing the log by removing 7 bytes every 32 kB to optimise the chances to recover data. The ldb file format it a little more involved than its log file counterpart. The ldb files are made up of Blocks (slightly confusingly this term is reused in the Log, but the structure and contents are different). Unlike the blocks in the log files, blocks in ldb files do not have a fixed length. All of the blocks in the ldb files are structured the same way, but do different jobs: data is held in Data blocks, metadata is stored in Meta blocks, the locations and sizes of Meta blocks are stored in the Meta Index block, the locations and sizes of Data blocks are stored in the Index block. The footer of the ldb file is a special case: it is 48 bytes in length and is always found at the end of the file. It begins with the locations and sizes of the Index block and Meta Index block; the final 8 bytes contain the file signature: 0x 57 FB 80 8B 24 75 47 DB. Whenever I’ve mentioned the “location and size” of a thing in the ldb file, this information is stored as what the developers of LevelDB describe as a BlockHandle structure. A BlockHandle is made up of two little endian VarInts, the first being the absolute offset, the second being the size in bytes. Blocks will always contain zero or more BlockEntry structures followed by a “restart array” (the restart array is useful for reading the data if you need to do it fast by skipping keys, but it isn’t actually essential for reading the data sequentially). Blocks will always be followed by a 5 byte long “trailer” structure; the length of the trailer is not included in the length of a block as defined by a BlockHandle. The block trailer is made up of the following values: You might have noticed that the table above mentions compression. Data in ldb files can be compressed using the Snappy compression algorithm (https://github.com/google/snappy) designed by Google. Snappy is a compression algorithm where speed is always preferred over high compression ratios. Because of the way that it operates, it is normal to see data appearing to be more or less uncompressed and legible towards the start of a block but becomes more and more “broken” looking as repeated patterns are back-referenced. It is important to understand that this compression is in use in the long-term storage for LevelDB records: naïve searching and carving is unlikely to be successful when applied to this data, without first decompressing the contents of the blocks. The BlockEntry structures in the Data blocks contain the keys and values for the records. The format of the BlockEntry is: In the table above there are two different lengths for the key – the shared key and the inline key. The keys in a single block are encoded using key sharing. Key sharing means that when a key shares a common prefix with the previous key (which is not unlikely as the keys should be ordered as far as possible), rather than storing the shared start of the key, only the part at the end that is different will actually be stored inline, and the shared prefix can be inferred from the previous entry, eg: As with the compression, it is important to note that key sharing is used in the ldb files. 
As before, if string searching is being used to identify data, if that data tends to reside near the start of a key, for records in close proximity this data may be “shared out” and not be captured until the keys have been expanded. In the final 8 bytes of every key in the ldb file there is additional metadata (this is not part of the key as a database user would see it). The metadata contains a 56-bit sequence number in the 7 most significant bytes and a state in the least significant byte, which is the state of the record where zero is deleted and one is live.
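That trailing metadata is easy to pull apart: the last 8 bytes of each internal key form a little-endian 64-bit integer whose low byte is the record state and whose upper 56 bits are the sequence number. A small helper, again assuming keys taken from a decoded table block:

def split_internal_key(raw_key: bytes):
    """Split an .ldb internal key into (user_key, sequence, state)."""
    user_key, trailer = raw_key[:-8], raw_key[-8:]
    packed = int.from_bytes(trailer, "little")
    sequence = packed >> 8          # 56-bit sequence number
    state = packed & 0xFF           # 1 = live record, 0 = deleted
    return user_key, sequence, state


# Example: a key whose trailer encodes sequence 513, state "live".
example = b"user-key" + ((513 << 8) | 1).to_bytes(8, "little")
print(split_internal_key(example))   # (b'user-key', 513, 1)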
<urn:uuid:f842d572-fa14-44cc-8b52-3628d040b45d>
CC-MAIN-2022-40
https://www.cclsolutionsgroup.com/post/hang-on-thats-not-sqlite-chrome-electron-and-leveldb
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00222.warc.gz
en
0.925921
3,826
2.625
3
Written By Pravin Mehta Updated on July 22, 2022 Min Reading 3 Min The Gramm-Leach-Bliley Act (GLBA) is a United States federal law that regulates the companies designated as “financial institutions” on how they handle their customer’s nonpublic personal information or NPI. GLBA mandates financial institutions to ensure the security, confidentiality, and integrity of their customer’s NPI including names, addresses, phone numbers, bank statement, social security number, credit history, etc. It also obligates financial institutions to notify the customers about their information-sharing practices and inform customers of their right to “opt-out” if they don't wish their information to be shared with third-party affiliates. The Federal Trade Commission (FTC) enforces compliance with GLBA. GLBA is also known as - Failure to comply with GLBA can lead to significant penalties and even imprisonment, as discussed in a later section of this article. Key objective of GLBA is to ensure the confidentiality of customers' financial information and Personally Identifiable Information (PII) through the implementation of proper privacy and security standards, as follows: 1. Privacy Standards According to GLBA Privacy standards, organizations must notify their customers about their information-sharing practices. The customers must be provided with a means to opt-out of unnecessary sharing, if needed. 2. Security Standards The security standards of GLBA seek: Along with Financial Privacy Rule and protection against social engineering or pretexting (Pretexting Provision), GLBA also lays down an explicit compliance component called the Safeguards Rule. The Safeguards Rule obligates financial institutions to develop and implement detailed information security plans to protect the Nonpublic Public Information (NPI) of customers. All companies or entities designated as a “financial institution” as per the law fall under the ambit of GLBA. As per FTC guidance, “the Rule applies to all businesses, regardless of size, that are “significantly engaged” in providing financial products or services.” These businesses may include The GLBA applies to any company providing financial products and services like those mentioned above irrespective of the size of their business. It is important to note here that as per 16 CFR § 682.3 - Proper disposal of consumer information, “all persons subject to the GLBA and the FTC “Safeguards Rule” must properly dispose of nonpublic personal information (NPI) by taking reasonable measures to protect against unauthorized access to or use of the information in connection with its disposal.” As per FTC, NPI is any "personally identifiable financial information" that a financial institution collects about an individual with regard to providing a financial product or service, unless that information is otherwise "publicly available." The scope of NPI includes— Nonpublic Personal Information (NPI) commonly includes the following details of a customer: In most cases, publicly available information is not considered as NPI. GLBA compliance is a part of the Federal Trade Commission (FTC). According to the GLBA, the three main components that a company needs to meet are: 1. The Financial Privacy Rule The first component of the GLBA compliance checklist is the Financial Privacy Rule. The main purpose of the Financial Privacy Rule is to provide an agreement between financial institutions and their customers regarding the protection of their Non-Personal Information (NPI). 
According to the Financial Policy Rule, a business should provide suitable notices of its privacy norms and policies to consumers. Consumers are defined as those individuals who are using the product or services of the business. The notice should include details regarding: The Financial Policy Rule provides details pertaining to the collection and disclosure of private financial information to regulate the sharing of Non-Personal Information (NPI) of your customers with external agencies. It also requires businesses to provide the customers the choice to opt-in or out of having their NPI (Non-Personal Information) disclosed to non-affiliated third parties. 2. The Safeguards Rule According to the Safeguards Rule, financial institutions must develop, implement, and maintain a detailed information security plan that explains how the business is protecting customers' and previous customers' nonpublic personal information (NPI). The ‘Safeguards Rule’ should provide details regarding the measures adopted towards building up an NPI protection plan for better cybersecurity including: 3. The Pretexting Provisions The Pretexting Provisions rule supports financial institutions to build up protection against the problem of social engineering or pretexting. This usually happens when a fraud person impersonates the account holder by telephone, mail, or often by phishing or spear-phishing and tries to get unauthorized or fraudulent access to personal information. The most popular compliances for Pretexting Provisions include: GLBA non-compliance penalties can be quite serious for financial institutions. They include both monetary fines as well as imprisonment. The GLBA non-compliance penalties include: GLBA non-compliance also means a damage to the organization’s reputation. In the past, companies such as PayPal and Venmo had also faced GLBA non-compliance issues and had to reach settlements with the FTC. Following are some of the key items that can help attain compliance with GLBA regulations especially the Safeguards Rule that mandates a documented information security plan to protect customer information: A significant provision of GLBA is anchored on safeguarding the nonpublic personal information of customer through implementation of an information security plan. The Safeguards Rule identifies three key areas for ensuring information safety and thereby compliance, which include Employee Management and Training, Information Systems, and Detecting and Managing System Failures. Of these areas, Information Systems deals with collection and storage of NPI, i.e. what information is being collected, how it is stored and whether there is a business need to collect the information. Data erasure technology can help businesses meet the compliance needs concerning this Information Systems provision in the Safeguards Rule of GLBA. Software tools such as BitRaser can overwrite (i.e. erase) all the unwanted or redundant NPI stored on hard drives & SSDs of computers and servers to safeguard the data from breach or unwanted exposure. By wiping clean the data storage media along with offering tamperproof certificate and reports of erasure, BitRaser can guarantee compliance with GLBA. 
BitRaser is NIST Certified |US Department of Defense, DoD 5220.22-M (3 passes)| |US Department of Defense, DoD 5200.22-M (ECE) (7 passes)| |US Department of Defense, DoD 5200.28-STD (7 passes)| |Russian Standard – GOST-R-50739-95 (2 passes)| |B.Schneier’s algorithm (7 passes)| |German Standard VSITR (7 passes)| |Peter Gutmann (35 passes)| |US Army AR 380-19 (3 passes)| |North Atlantic Treaty Organization-NATO Standard (7 passes)| |US Air Force AFSSI 5020 (3 passes)| |Pfitzner algorithm (33 passes)| |Canadian RCMP TSSIT OPS-II (4 passes)| |British HMG IS5 (3 passes)| |Pseudo-random & Zeroes (2 passes)| |Random Random Zero (6 passes)| |British HMG IS5 Baseline standard| |NAVSO P-5239-26 (3 passes)| |NCSG-TG-025 (3 passes)| |5 Customized Algorithms & more|
<urn:uuid:fc122b4b-0d80-4fee-ad68-20dee6da3e02>
CC-MAIN-2022-40
https://www.bitraser.com/article/the-basics-of-gramm-leach-bliley-act-worth-knowing.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00422.warc.gz
en
0.914108
1,630
2.875
3
Global warming may hit the eastern United States harder than scientists thought, resulting in extremely hot summers. Average summer high temperatures could reach almost 10 degrees higher in 2080 than they do today. Eastern U.S. summer daily highs currently average in the low to mid-80s Fahrenheit. They will likely soar into the low to mid-90s during typical summers in the 2080s, according to a new NASA study, the results of which were announced this week and appear in the journal Climate. In extreme seasons with little rain, daily high temperatures in July and August could average between 100 and 110 degrees Fahrenheit in cities such as Chicago, Washington and Atlanta, the researchers found. The researchers analyzed nearly 30 years of observational temperature and precipitation data, and also used computer model simulations that considered soil, atmospheric and oceanic conditions, and projected changes in greenhouse gases. The simulations were produced using a widely used weather prediction model coupled with a global model developed by NASA’s Goddard Institute for Space Studies. “What we found is that most conventional models tend to simulate precipitation too frequently in the summer, at least in the eastern United States,” Leonard Druyan, senior research scientist at Columbia University and one of the authors on the paper, told TechNewsWorld. Zeroing in on the relevant grid elements in a conventional model, for example, “it indicated that some 65 percent of summer days would have rain, whereas observations show that it rains more like 48 percent in those areas,” Druyan said. Observed daily temperatures are usually higher on rainless days and when precipitation falls less frequently than normal. More Realistic Simulation When the research team, led by Barry Lynn of NASA’s Goddard Institute for Space Studies and Columbia University, used a regional forecast model specific to the eastern United States, “we found we could simulate a more realistic precipitation frequency,” Druyan said. “When we did that, the temperatures that came out were much higher.” Specifically, while a conventional model, which predicts frequent rainfall, forecasts an increase in temperature of just over 5 degrees Fahrenheit by 2080, NASA’s model, which expects less rain, predicts an increase of almost 10 degrees, he said. The Trouble With Models “We need to tread cautiously when we decide how much weight to put into the results of prediction models, because so far, models haven’t replicated what’s happened very well,” cautioned Kevin Trenberth, head of the climate analysis section at the National Center for Atmospheric Research, who was lead author of a similar study by the Intergovernmental Panel on Climate Change (IPCC). In fact, the United States east of the Rockies hasn’t warmed up as much as other parts of world have so far, or as much as the models predict, and the reason is that it is actually wetter and cooler than it used to be, with much heavier rainfalls, Trenberth said. “If you look at the 20th century for the contiguous 48 states, precipitation has increased by 7 percent in the United States, and most of that was after 1970,” Trenberth said. “Very heavy rainfalls have increased by 20 percent,” as evidenced by heavy rains and flooding in the Northeast in the last year, he said. A Different Starting Point Because many climate models predate this increased precipitation in the eastern U.S., “when they predict, they are actually starting from a higher level of temperatures,” Trenberth explained. 
Nevertheless, whereas temperatures so far have been cooler than most models predict for the Eastern United States, “if the atmospheric circulation reverts to one with more settled, drier weather, that could create the potential for increased warming,” Trenberth said. The Bottom Line Regardless of how far temperatures actually increase, the fact remains that greenhouse gas emissions are to blame. An assumption used in most modeling, including NASA’s, is that carbon dioxide emissions will continue to increase by about 2 percent a year, which is considered the “business as usual” scenario by the IPCC. So, if steps can be taken to reduce the growth rate to less than 2 percent, some of this heating might be staved off. “A lower rate of increase of these greenhouse-gas concentrations would give you less of an effect,” Druyan said.
<urn:uuid:a559c5f5-f49a-4b52-a913-e996fd649aae>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/eastern-u-s-summer-temps-could-reach-blazing-new-highs-by-2080-57348.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00422.warc.gz
en
0.956999
925
3.421875
3
As artificial intelligence (AI) programs become more powerful and more common, organizations that use them are feeling pressure to implement ethical practices in the development of AI software. The question is whether ethical AI will become a real priority, or whether organizations will come to view these important practices as another barrier standing in the way of fast development and deployment. A cautionary tale could be the EU General Data Protection Regulation (GDPR). Enacted with good intentions and hailed as a major step toward better, more consistent privacy protections, GDPR soon became something of an albatross for organizations trying to adhere to it. The GDPR and other privacy regulations that followed were often seen as just adding more work that kept organizations from focusing on projects that really mattered. Organizations that attempt to solve for each new regulation in a silo end up adding significant overhead and making themselves vulnerable to competitors in terms of agility and cost effectiveness. Could an emphasis on ethics in AI go the same route? Or should organizations realize the risks—as well as their responsibilities—in putting powerful AI applications into use without addressing ethical concerns? Or is there another way to deal with yet another area of quality without the excessive burden? AI bias is human bias AI programs are undoubtedly smart, but they’re still programs; they’re only as smart as the thought—and the programming—put into them. Their ability to process information and draw conclusions on their own adds layers to the programming that aren't necessary with more traditional computing programs, in which accounting for obvious factors is relatively simple. When, for example, an insurance company is determining the cost of a yearly policy for a driver, they typically take data like gender and ethnicity out of the equation to come up with a quote. That’s easy. But with AI, it gets complicated. You don’t micro-control AI. You give it all the information, and the AI decides what to do with it. AI starts out with no understanding of the impact of factors such as race, so if programmers haven’t limited how data can be used by the AI, you can wind up with racial data being used, thus creating AI bias. There are many examples of how bias creeps into AI programs, often because of incomplete data. One of the most infamous examples involved the Correctional Offender Management Profiling for Alternative Sanctions, known as COMPAS, an algorithm used in some U.S. state court systems to generate sentencing recommendations. COMPAS used a regression model to predict whether someone convicted of a crime would become a repeat offender. Based on the data sets put into the system, the model predicted twice as many false positives for recidivism for Black offenders as for white offenders. In another example, a health care risk-prediction algorithm used on more than 200 million U.S. patients to determine which ones needed advanced care was found to be preferential toward white patients. Race wasn’t a factor in the algorithm, but health care cost history was, and it tended to be lower for Black patients with the same conditions. Compounding the problem is that AI programs aren’t good at explaining how they reached a conclusion. Whether an AI program is determining the presence of cancer or simply recommending a restaurant, its “thought” processes are inscrutable. And that adds to the burden of programming in ethics up front.
Ethics and privacy together Continued improvements in AI have potentially far-reaching consequences. The Department of Defense, for one, has launched a slew of AI-based initiatives and centers of excellence focused on national security. Seventy-six percent of business enterprises are prioritizing AI and machine learning in their budgeting plans, according to a recent survey. Alongside the ethical concerns of AI’s role in decision-making is the inescapable issue of privacy. Should an AI scanning social media be able to contact authorities if it detects a pattern of suicide? Apple, as an example, is considering a plan to scan users’ iPhone data for signs of child abuse. Considering the ethical and potential legal implications, it makes sense that privacy and ethics get folded into the same security process as organizations plan on how to address ethics. The two should not be treated separately. As these and other programs move forward, new guidelines on ethics in AI are inevitable. This will create even more work for teams trying to get new products or capabilities into production, but it also raises issues that can’t be ignored. Successful AI ethics policies will likely depend on how well they are integrated with existing programs. Organizations’ experience with GDPR can offer a good example. Where it once was seen primarily as a burden, some organizations that are integrating it into their security processes have gained a lot more maturity by treating privacy and security as one bucket. Consider future regrets Ultimately, it comes down to programmers baking in certain guidelines and rules on how to treat various types of data differently, and how to make sure that data segregation is not happening. Integrating these guidelines into overall operations and software development will depend on an organization’s leaders making ethics a priority. Enterprises should be addressing ethics and security together, leveraging systems and tools they use for security for ethics. This will ensure effective management of the software development lifecycle. I would go so far as to say that ethics should be considered an essential part of a threat modeling process. The question organizations should ask themselves is: Five years down the road, looking back at how you handled the question of ethics in AI, what could be your regrets? Considering the history of how the impact of other game-changing technologies (e.g., Facebook) were overlooked until legal issues arose, the potential for regret may well be in not taking it seriously and not acting proactively until it becomes a pressing priority. People tend to address the loudest problem at the time, the squeaky wheel getting the most attention. But that’s not the most effective way of handling things. The ethical implications of AI need to be confronted now, in tandem with security.
<urn:uuid:3df0ca7a-3c74-46c3-ba27-13d3bce7d928>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2021/10/26/ethics-in-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00422.warc.gz
en
0.951076
1,234
2.859375
3
Librarians like our acronyms, but we’re not the only profession to indulge in linguistic gymnastics. The technology field is awash in acronyms: HTTP, AWS, UI, LAN, I/O, etc. etc. etc. One acronym you might know from working in libraries, though, is OSS – Open Source Software. Library technology is no stranger to OSS. The archived FOSS4LIB site lists hundreds of free and open source library applications and systems ranging from integrated library systems and content management systems to metadata editing tools and catalogs. Many libraries use OSS not specific to libraries – a typical example is installing Firefox and LibreOffice on public computers. Linux and its multitude of distributions ensure that many library servers and computers run smoothly. It’s inevitable, though, that when we talk about OSS, we run into another acronym – FUD, or Fear, Uncertainty, and Doubt. FUD is commonly used to create a negative picture of the target in question, usually to the benefit of the person spreading the FUD. In the technology world, OSS often is depicted by proprietary software companies as being inferior to proprietary software – the Microsoft section in the FUD Wikipedia page gives several good examples of such FUD pieces. It should be no surprise that FUD exists in the library world as well. One example comes from a proprietary software company specializing in library management systems (LMS). We’ll link to an archived version of the page if the page is taken down soon after this post is published; if nothing else, companies do not like being called out on their marketing FUD. The article poses as a piece about the disadvantages of an LMS. In particular, the company claims that OS LMSes are not secure: they can be easily breached or infected by a computer virus, or you can even lose all your data! The only proposed solution, of course, is to have the proprietary software company handle all of these problems for you! The article is a classic example of OSS FUD – the use of tactics to sow fear, hesitation, or doubt without providing a reasoned and well-supported argument for the claims made in the article. However, this is probably not the first time you ran into the idea that OSS is insecure. A common talking point about OSS insecurity is that OSS security bugs stay unaddressed in the software for years. For example, the Heartbleed bug that caused so much havoc in 2014 was introduced into the OpenSSL code in 2012, resulting in a two-year gap where bad actors could exploit the vulnerability. You’ve also probably run into various versions of the thinking around OSS security that Bruce Schneier describes below: “Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll — in extreme cases — sneak back-doors into the code when no one is looking.” OSS is open for all to use, but it’s also available for all to exploit if you go down the path described in the above line of thinking. The good news is that, despite the FUD, OSS is not more insecure than its proprietary counterparts. However, we also must be wary of the unchecked optimism in statements claiming that OSS is more secure than proprietary software. The reality is that open source and proprietary software are subject to many of the same information security risks, mixed with the unique risks that come with each type of software.
It’s not uncommon for a small OSS project to become dormant or abandoned, leaving the software vulnerable due to a lack of updates. Conversely, a business developing proprietary software might not prioritize security tests and fixes in its work, leaving their customers vulnerable if someone exploits a security bug. While there are differences between the two examples, both share the risk of threat actors exploiting unaddressed security bugs in the software. OSS, therefore, should be assessed and audited like its proprietary counterparts for security (and privacy!) practices and risks. The nature of OSS requires some adjustment to the audit process to consider the differences between the two types of software. A security audit for OSS would, for example, take into account the health of the project: maintenance and update schedules, how active the community is, what previous security issues have been reported and fixed in the past, and so on. Looking at the dependencies of the OSS might uncover possible security risks if a dependency is from an OSS project that is no longer maintained. Addressing any security issues that might arise from an audit could take the form of working on and submitting a bug fix to the OSS project or finding a company that specializes in supporting OSS users that can address the issue. As we wrap up Cybersecurity Awareness Month in the runup to Halloween, let’s get our scares from scary movies and books and not from OSS FUD.
<urn:uuid:d7aa0f2b-3ae7-4f70-8c54-80b86c3c8727>
CC-MAIN-2022-40
https://ldhconsultingservices.com/2021/10/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00422.warc.gz
en
0.93932
1,030
2.875
3
Jeeva has just dropped the world’s lowest power wireless chip, Parsair, a breakthrough in IoT innovation. Jeeva has announced the world’s lowest power wireless chip for streaming real-time sensor data. Jeeva’s Parsair™ chip consumes 100 times less power than typical Bluetooth and enables many novel use cases previously out of reach due to cost, size and power constraints. The low-power nature of this new wireless chip can enable densely deployed sensors to communicate at an unprecedented scale. As the demand for “connected things” continues to grow exponentially, low-power radio and battery technologies have failed to keep up with large-scale Internet of Things (IoT) deployments. “Until now, devices could continuously stream wireless data, rapidly draining their batteries, or could transmit data intermittently to try and stretch battery life,” said Scott Bright, CEO of Jeeva. “Parsair™ makes it possible to truly stream data without draining the battery, which will be game-changing for a lot of different industries and applications.” Jeeva’s Parsair chip achieves this breakthrough capability by enabling communication using reflections rather than generating a radio signal of its own. A nearby wireless router transmits radio signals which the chip reflects to communicate data. Since reflecting energy consumes significantly less power than emitting energy, this approach can enable wireless communication with decades-long battery life. Using Jeeva’s pioneering technology, the reflected signal is made to look exactly like a standard radio packet in one of several supported radio protocols, making it possible to integrate easily with commodity hardware and existing product ecosystems. The ability to continuously stream data enables a range of new devices and applications, unlocking the potential for low-power streaming audio devices, high-bandwidth accelerometer sensors, and other highly interactive devices that last years on a small coin cell battery. The Parsair chip supports data rates up to 1,000 kbps and connected range up to 100 meters, all at far lower power than any conventional radio and with a silicon footprint of just over 1 square millimetre. In addition to enabling streaming applications, the chip can build wireless sensor networks to solve multiple critical business problems. Applications include consumable product monitoring and cold-chain tracing for both perishable products and vaccines. Consumer and medical product customers are deploying the chip to enable automated replenishment, inventory management, and asset proximity tracking. “This chip provides low-latency, item-level data from places and things that were never before possible,” said Bright. “It shows the industry that it’s possible to sidestep conventional tradeoffs and get fully-featured wireless connectivity at very low power and meagre cost.” Because the breakthrough chip has broad applicability, it is initially being made available to a select group of customers with whom Jeeva works closely. “We’re supporting qualified customers to accelerate the development, integration and testing of customized edge-to-cloud connectivity solutions,” said Bright. “We have several reference designs to help accelerate specific custom applications.
Our platform and wireless chip are ready for pilot-stage deployment now, and we’re scaling to high-volume availability later this year.”
<urn:uuid:e8d9346c-54b3-45e9-a627-b93a4524da3b>
CC-MAIN-2022-40
https://tbtech.co/innovativetech/iot/jeeva-reveals-worlds-lowest-power-wireless-chip/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00422.warc.gz
en
0.921345
735
2.5625
3
While the term may be relatively obscure, we benefit from machine learning everyday. Watch a few British crime dramas and suddenly your Netflix queue is populated with a stream of related movies and TV shows. These highly curated suggestions are based on the viewing patterns of thousands of similar customers. With each new user or selection, these high-powered computers become smarter — learning over time until they know us almost as well as we know ourselves. What is Machine Learning? At the most basic level, machine learning is another name for teaching computers to learn for themselves. A way to predict, prevent and respond to fraud, machine learning is a powerful blend of applied math and computer science. By teaching computers how to behave and perform complex tasks, machines are able to predict future outcomes. Using statistical learning, these highly sophisticated computers use rational, if-then statements to identify boundaries, define data patterns and make highly accurate predictions. While machine learning is an asset for almost any industry, it is becoming an increasingly powerful risk management tool for companies hoping to identify and prevent fraud. As technology evolves, online fraud is becoming more prevalent and more damaging, particularly for retailers who operate across platforms (apps, desktop and mobile). As e-commerce grows, companies are experiencing a proportional rise in fraudulent transactions. In 2014, 65% of organizations with revenues in excess of $1 billion were victims of online payment fraud, and by 2015 every $100 of fraud actually cost merchants $223. With over 90% of companies experiencing online fraud, machine learning gives retailers access to powerful tools that were previously unattainable. The Evolution of Machine Learning Throughout their 60-year existence, computers have been a grand experiment in machine learning — making strides in complexity and accuracy with each passing year. Early computers were only able to read digits and text, but today’s computers are trained to recognize context and sentiment in ways that help them predict a user’s behavior. This kind of technology is the driving force behind corporate chatbots and virtual assistants like Siri. In the financial realm, machine learning can examine and analyze thousands of data sets to detect and reject fraudulent transactions — saving time and manpower in the process. Machine Learning in the Real World While media and retailers have been using machine learning for years to echo and amplify consumer tastes based on prior purchases and viewing habits, machine learning is also improving the operability of self-driving cars and automated assistants. Though machine learning uses AI to make decisions in the same way a human brain does, these powerful computers employ complex algorithms to analyze millions of individual transactions, making real-time evaluations quicker and with more accuracy than any human. As payments spread across platforms, from credit and debit cards to ATMs, smartphones, desktops and mobile devices, machine learning has become an increasingly useful tool for combating fraud. Unfortunately, criminals have also become more adept at using big data and analytics to disguise their crimes, making it harder for businesses to approve or reject transactions. Detecting system vulnerabilities, fraudsters employ big data and the dark web to seek out vulnerabilities and maximize their monetary gains. 
To fight back in this brave new world of fraud, companies must be able to fend off attacks in real time, making accurate cause-and-effect predictions within seconds. Computers then use these insights to adjust their algorithms — learning and processing at speeds far faster than the human mind. Besides its advantages in speed, machine learning can accurately identify fraudulent transactions by constantly processing and analyzing new data sets, thus minimizing the time and expense of costly manual reviews. How Machine Learning Fights Fraud Soon after the first credit card was invented more than 50 years ago, digital fraud became a normal cost of doing business. Back in those days, big data consisted of physical books listing thousands of hot credit card numbers that were pored over by fraud detection specialists. While the process has become automated, identifying suspicious activity remains a necessary evil, with businesses determining, in a matter of seconds, whether to approve or decline a transaction. If they incorrectly identify a transaction as fraud, companies run the risk of losing a customer. As computers become more robust and statistical analysis improves, machine learning is replacing legacy fraud management systems. With a huge amount of data attached to each transaction, the buying habits and patterns of good and bad customers can be identified long before any fraud occurs. According to the Global Fraud Attack Index, at the beginning of 2015, less than $2 out of every $100 was subject to a fraud attack, but by the beginning of 2016 it was $7.30 out of every $100. With fraud attempts almost inevitable these days, the manpower required to prevent fraud can be almost as expensive as the fraud itself. Enter machine learning, which utilizes both historical and live data to create patterns that can predict customer behavior. From identity verification and payment authorization to checkout scoring and merchant underwriting, machine learning is useful for analyzing large amounts of data for decision-making purposes. By replacing time-consuming, rules-based management tools, organizations can use machine learning to leverage the power of big data — performing analytics and delivering risk scores efficiently, in real time and with greater accuracy. Drawbacks to Machine Learning As amazing as machine learning can be, even the most advanced machines can’t replace the power of human decision making. While computers can make decisions based on patterns, they do not do well with aberrant data like holiday shopping patterns. Furthermore, machine learning is only as accurate as the data it is given, so inappropriate or erroneous data will result in irrelevant fraud scores. There must be enough relevant data to identify legitimate cause-and-effect relationships. And there is always the risk that a machine may teach itself the wrong thing. Self-learning models are great at identifying fraud, but it is almost impossible for a human to track, control or adjust what the machine learns, which could be bad if it draws incorrect insights and begins blocking good customers. While computers excel at detecting objective trends and links that humans are unable to spot, machines may be able to learn, but they are still unable to think. They can only process the data they are given, and they are unable to find insightful solutions to problems. Fraudsters are almost impossible to predict, so it’s often difficult — even for machines — to tell the difference between a criminal and a genuine customer.
This problem is exacerbated by the fact that hackers do their best to make their profiles look more convincing by adopting the characteristics of a good buyer. And while machines can reduce labor costs and improve fraud detection precision, they won’t replace humans completely. Machines excel at “big data” and finding patterns in huge datasets, but fraud is often about small data. Many companies use machine learning up to a point, and then hand problems over to manual reviewers who examine transactions flagged as fraudulent. According to the Annual Fraud Benchmark Report of 2016, 83% of U.S. merchants still rely on manual reviews. Even though 30% of all declined orders marked as fraudulent are likely legitimate, fraud attacks still increase every quarter. Since businesses do best when transactions are user-friendly and hassle-free, machine learning can still help protect company revenue and customer data. The Best of Both Worlds Perhaps the best method of fraud prevention and detection is for human intelligence and artificial intelligence to work together. While machines examine transactional data and consumer buying patterns, fraud specialists can use their logic and fraudster know-how to continually improve fraud detection techniques. Fast, smooth and designed to scale, machine learning is capable of making accurate decisions in fractions of a second — helping businesses become more efficient and protecting their revenue, reducing fraud losses, streamlining manual review, increasing sales, maintaining customer loyalty and reducing false positives. While machine learning is helping businesses and financial companies stay one step ahead of fraudsters, Point-to-Point Encryption (P2PE) solutions that secure payment data the moment a card is swiped can add another layer of protection for your business. To keep your company safe and secure in the years to come, contact Bluefin today to learn more about our seamless P2PE solutions.
<urn:uuid:74173b46-402c-4f54-96e3-d7e216e2f1b7>
CC-MAIN-2022-40
https://www.bluefin.com/bluefin-news/can-machine-learning-fight-fraud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00422.warc.gz
en
0.942668
1,639
3.328125
3
SNMP is one of the most widely used communication protocols for remote monitoring systems. That's because it allows the real-time exchange of information between network devices - and it also allows notifications of events to be sent to technicians. If you are already familiar with SNMP, you probably know that every version of this protocol (SNMPv1, SNMPv2c, and SNMPv3) supports three request types - Get, GetNext and Set. However, the newer versions of SNMP also support an additional request message called GetBulk. Let's take a look at GetBulk and how it works. SNMP defines two types of devices - agents and managers. Agents are devices that gather information about monitored equipment and send notifications to the manager. Managers are the devices that will query change-of-status information and receive notifications. In remote monitoring systems, agents are normally RTUs and managers are the central master stations. As I've said before, every version of the SNMP protocol supports the same three request messages: Get, GetNext, and Set. Although these messages are supported by all the SNMP versions, v2c and v3 also support an additional request type called GetBulk. A GetBulk request executes multiple GetNext requests and returns all the results in a single response. In addition to requesting OIDs like every other SNMP request type, GetBulk gives you two additional capabilities: the non-repeaters and max-repetitions fields. An Object Identifier (OID) is a value that uniquely identifies managed objects and their statuses in a MIB hierarchy. Now, imagine that you want to collect utilization information for all interfaces on an agent. One of the problems with utilization information is that you need a piece of reliable time information to be able to calculate a rate from the change between two retrieved utilization values. You can't simply use the clock on the management system because the delay between replies can vary request by request (network delay can be 50ms on one request, then 150ms on the next, and then 2 seconds), which makes the value quite unreliable. In order to work around the unreliable clock issue, you can use the sysUpTime value on the agent. This is the value representing the time since the agent was last initialized, in 1/100 of a second (10 milliseconds). Now, your request will need to retrieve sysUpTime and utilization information for all interfaces. In the following example, let's assume that the device has 10 interfaces with consecutive instance values. This is what the request would look like:

non-repeaters = 1
max-repetitions = 10
OIDs requested: sysUpTime.0, ifInOctets

The agent will respond to this request with:

sysUpTime.0 : (TimeTicks)<some value>
ifInOctets.1 : (Counter32)<some value>
ifInOctets.2 : (Counter32)<some value>
ifInOctets.3 : (Counter32)<some value>
ifInOctets.4 : (Counter32)<some value>
ifInOctets.5 : (Counter32)<some value>
ifInOctets.6 : (Counter32)<some value>
ifInOctets.7 : (Counter32)<some value>
ifInOctets.8 : (Counter32)<some value>
ifInOctets.9 : (Counter32)<some value>
ifInOctets.10: (Counter32)<some value>

As you can see from this example, the non-repeaters value has instructed the agent to treat the first OID in the request (sysUpTime.0) as a plain Get request. The remaining OIDs (here just ifInOctets) had the GetNext operation performed max-repetitions times (in this case 10), and all values were returned in a single response. GetBulk is significantly more efficient than the other messages when multiple consecutive values need to be obtained. The industry's best practice is to use GetBulk requests whenever possible.
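If you want to issue this kind of request from a management script rather than from a dedicated master station, the pattern is easy to reproduce. The sketch below uses the pysnmp Python library; the agent address and the "public" community string are placeholder assumptions, not values from the example above. It treats sysUpTime.0 as a one-shot Get via non-repeaters and walks ifInOctets with max-repetitions, mirroring the request just described.

```python
# Minimal SNMPv2c GetBulk sketch with pysnmp (assumed library choice).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, bulkCmd)

for (error_indication, error_status, error_index, var_binds) in bulkCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),        # SNMPv2c community (placeholder)
        UdpTransportTarget(('192.0.2.10', 161)),   # hypothetical RTU/agent address
        ContextData(),
        1,    # non-repeaters: fetch the first OID (sysUpTime.0) only once
        10,   # max-repetitions: walk the remaining OIDs up to 10 rows per request
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysUpTime', 0)),
        ObjectType(ObjectIdentity('IF-MIB', 'ifInOctets')),
        lexicographicMode=False):                  # stop once ifInOctets is exhausted
    if error_indication:
        print(error_indication)
        break
    if error_status:
        print('%s at %s' % (error_status.prettyPrint(), error_index))
        break
    for var_bind in var_binds:
        print(' = '.join(x.prettyPrint() for x in var_bind))
```

Command-line tools such as net-snmp's snmpbulkget follow the same idea: one request goes out, and many var-binds come back in a single reply.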
In the sysUpTime/ifInOctets example above, for instance, you have retrieved data that would require one Get message and ten GetNext requests to obtain without GetBulk. Also, note that if all you want to do is perform consecutive GetNext operations in a GetBulk request, you'll need to set the non-repeaters value to 0. As you might notice, GetNext and GetBulk are similar command options, but they have key differences. With GetNext, your master station needs to keep asking for each item one-by-one until it reaches the end of the list. So, for example, if you want to get a list of all interface names from a device, you might end up sending 20 requests to the agent (or however many interfaces the device has - 2, 10, or even 30) and receiving just as many replies back. When using GetBulk requests, your master station will send only one request asking for an item and all following items up to a limit. Usually, the number of requests is greatly reduced through GetBulk. In some scenarios, the complete data set can potentially be returned with just one request/reply pair, but you normally won't know until you've sent the request. GetBulk provides some savings in terms of bandwidth. But, even more importantly, it also reduces the effects of a long wait for responses (latency time). For example, imagine you have a high latency time, maybe over a LAN or WAN, where the request/response exchange takes 50ms. The twenty GetNext requests required for the previous example will add up to a full second (1000ms) before all the data has been collected from the agent. With GetBulk, the request/response exchange will take only 50ms. In theory, this means that you can monitor 20 times as many systems in the same given timeframe. In practice, you would see a bit of additional overhead, but not much at all. If you can choose at all, it's better to always use GetBulk. This is because you'll have to send fewer requests and you'll have a much better information retrieval speed. Now, let's use a simpler analogy. Imagine someone in your household just bought some groceries and asked you to bring them inside the house from the car. Instead of taking each item inside one by one (Get and GetNext), you might find that it is more efficient and less time consuming to simply use a bag to bring all the food inside (GetBulk). GetBulk is not a widely used capability of the SNMP protocol. This is mostly because it is not available in SNMPv1, and many devices still support only this version. In order to have the GetBulk capability, you need equipment that works with SNMPv2c or v3. These newer versions enable you to take advantage of this feature's benefits. However, if you have different devices working with different versions of SNMP, the industry's best practice is to invest in a multiprotocol master station. A multiprotocol master station is able to support many different devices that are enabled to work with many different protocols. This means that, if you have SNMPv2c- and v3-enabled equipment, you'll be able to integrate it all under the same interface. You'll keep your initial investment in your older devices while also getting the benefits of your most modern gear. The T/Mon LNX is a known example of an efficient multiprotocol master station. It provides you with scalability and robustness for your network and an intuitive web interface. The T/Mon is a feasible and cost-effective way to combine the benefits of all your devices and protocols in a single screen.
Before embarking on the development of your SNMP project, it's important to ensure that all your critical requirements are being met. You need a competent vendor that will work with you to evaluate your scenario and pinpoint all your needs while also finding some monitoring opportunities to optimize your network. But, none of this is valuable if you can't have a perfect-fit solution. We are a US-based, vertically integrated monitoring systems manufacturer, so we are experts in what it takes to deliver a truly powerful solution. Talk to us today, and let's protect your network.
<urn:uuid:49685ba9-9a7e-4c0f-888e-840bd6723fef>
CC-MAIN-2022-40
https://www.dpstele.com/blog/what-is-a-snmp-getbulk-request.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00422.warc.gz
en
0.927559
1,827
2.609375
3
Wireless, WiFi, and Bluetooth are all common terms people use, but they do not always understand the nuances between what they do and how they work. Wireless and WiFi are used synonymously and are the same thing: WiFi is short for wireless fidelity, which is often shortened to wireless, both of which mean a wireless network. Contrarily, there are differences between how Bluetooth and wireless technologies support our devices. This post discusses the differences between wireless and Bluetooth, as well as how and when to use each. The Difference Between Wireless and Bluetooth and When to Use Each Every day we connect devices to wireless and use Bluetooth, mostly without thinking about how either works. The way these technologies work, and what purpose they can be used for, does in fact vary. However, there is one thing they do have in common: once we connect a device to a wireless network or Bluetooth device, the connection information remains so we can connect to those networks and devices automatically later unless we remove the connection. Wireless networks are typically broadcasted by access points that are connected to, and supported by, other local network hardware. Basically, access points push out a radio signal on certain frequencies that is known as wireless. Wireless networks can also exist without broadcasting themselves, meaning if you were to search for wireless networks on a device, a network not being broadcast would not be seen by the device. When you connect to a wireless network, you can use it to access other devices on that network, including printers, and also use it to access the internet as long as internet access is available. This is an important distinction because you can be connected to a wireless network with a phone, laptop, tablet or other device and access local resources like printers, etc., even if the internet goes down. These other devices are still available to you because a wireless network still broadcasts itself even if it cannot pass traffic to the internet which is external to the local network. In the simplest terms, wireless is a technology that provides access to the internal devices on the network it is being broadcast from, as well as the external internet. Bluetooth uses a radio frequency, similarly to wireless networks, but uses it for a different reason. First of all, Bluetooth has a limit of 10 meters, or just over 30 feet. Additionally, Bluetooth connects devices that support Bluetooth to one another for specific purposes. One common example is connecting wireless keyboards and mice to computers. There is an irony that these are called wireless, because they do not use a wireless network, but rather do not have physical "wires", or cords, connecting them to the computer so they are labeled "wireless". These keyboards and mice work by connecting a USB dongle into a computer. The USB dongle provides Bluetooth so that the keyboard, mouse, or keyboard/mouse combo (which also have Bluetooth built-in) can connect to the computer. This connection does not need access to the wireless network to work and will work even if all devices are "offline", aka off network. This means if the internet went down, you would not be able to search or access the web, but your keyboard and mouse connected to the computer using Bluetooth would work fine. Another example of how Bluetooth is used is airdropping items between Apple devices (Mac's, tablets, iPhones). 
When you use the AirDrop feature to share files, contacts, images, videos, etc., Bluetooth is used to locate available devices near you that you might want to share with. When you initiate a file transfer, a direct device-to-device Wi-Fi connection is used to pass those items, but Bluetooth is what finds and makes a connection with other devices that allows the passage of data between them. One last example is fitness trackers and smart watches, which use Bluetooth to connect these devices with your phone. This allows messages and calls to also appear on your smart watch, and exercise and health data to flow from a fitness tracker to the app installed on your phone. In the simplest terms, Bluetooth connects devices so they can communicate and data can flow between them, but it has nothing to do with the internet. Wireless and Bluetooth are very common technologies that most of us use seamlessly in our everyday lives. However, there are some big differences between how the two are supplied, what they do and where they are best used. Wireless provides access to other network devices as well as the internet. Bluetooth is used in a more specifically targeted way to connect two or more devices for the purposes of passing data between them outside of the internet. As always, knowing how a technology works helps ensure you get what you need when you need it and that it works the way you expect!
<urn:uuid:6e90baea-4732-4f3e-9dbc-26ea89134523>
CC-MAIN-2022-40
https://blogs.eyonic.com/the-difference-between-wireless-and-bluetooth-and-when-to-use-each/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00422.warc.gz
en
0.957712
949
2.875
3
The Department of Veterans Affairs' innovation chief is eyeing novel uses of new technology in the pursuit of continuing to improve veteran care, and one area he sees growing is in point-of-care manufacturing. "One of the market dynamics that's shifting predominantly in health care is that concept of point-of-care manufacturing. This is using additive technology,” said Dr. Ryan Vega, VA’s chief officer of Healthcare Innovation and Learning, during an ACT-IAC conference. “If you think of traditional manufacturing, it's subtractive. ‘Additive’ manufactures something layer by layer.” Using computer-aided design (CAD) or 3D object scanners, additive manufacturing creates health care solutions with precise geometric shapes. These are built layer by layer with a 3D-printing process, which contrasts with traditional manufacturing that often requires machining or other techniques to remove surplus material. Vega explained that the rise of 3D printing in medicine has enabled VA to create and replicate personalized veteran care. VA launched a series of new medical 3D-printing applications in March to provide advanced prosthetics care. The agency is working to find more advanced and customizable ways of restoring mobility and limb function to veterans who have suffered critical injuries. VA’s 3D-printing network has been tasked with a range of local pilot programs, conducting autonomous research to personalize and improve veteran care. VA has also used 3D printing to treat hearing loss. VA’s integrated 3D printing network team designed and created a 3D-printed stent that can be inserted in the external ear canal to keep it from collapsing and allow sound to pass through. The device is not surgically implanted and can be easily removed by the patient. “We were able to come up with and, in a matter of months, actually get a developed medical device. So, you think of the idea of either surgery or coming up and making a medical device at the point of care, that's in some ways revolutionary in terms of how we're going to think about care and service for veterans,” Vega said. VA is also developing the use of extended reality (XR), which can be used to help treat post-traumatic stress disorder (PTSD), anxiety and depression, as well as in clinical applications like pain management. "We're seeing the use of extended reality, whether it's augmented or virtual, for a whole host of different types of care delivery, whether it's virtual reality for physical therapy in the home or ... surgical navigation and operating,” Vega said. In a surgical setting, surgeons can use XR to see below patient layers by using advanced computer spatial technology that takes a CT scan or an MRI and layers it over the patient in the operating room. This allows the surgeon to see below the surface before they make the cut, leading to more precise, targeted interventions in the operating room. “Those two types of emerging technologies are really growing. I think you're going to see those markets change drastically and have a big impact on how we experience health care,” Vega said. To support ongoing innovation, VA is focusing on human capital development to support new health care solutions. VA is continuing to accelerate workforce training programs to ensure that systems and solutions can effectively deliver on the agency’s mission and veteran needs. Vega said it's critical that new procedures and technologies are incorporated into training. 
By building in new modalities of care, trainees can conduct virtual care, telehealth and patient monitoring. “It's not to play on the rhetoric of, ‘You need to incorporate into the workflow.’ We may develop better workflows. If we're not pushing the solutions into the training environment, we're going to have physicians coming out that aren't ready to use these [innovations],” Vega said. Private-public and interagency partnerships will enable VA to deliver novel devices and innovations to veterans quickly. Instead of taking a technology-centered approach, the agency is developing robust care models, focusing on sustainment, and then looking to technology to enable these frameworks. Vega noted that as the venture capital model shifts, industry should design solutions that transcribe a “new standard of care,” as opposed to a one-off solution. “The core of innovation is creating value. In order to do that, you have to invest in your people and the infrastructure,” Vega said. “The idea is that you can bring the best of the private sector and government together to co-design solutions that meet the mission of the agency, or joint collaborations that meet broad perspective — so benefits, delivery, digital, cyber — that's really where you find the acceleration of innovations.”
<urn:uuid:e416e582-f5d5-48c4-bbd6-17982276add1>
CC-MAIN-2022-40
https://governmentciomedia.com/emerging-uses-3d-printing-manufacturing-are-transforming-veteran-care
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00622.warc.gz
en
0.954363
983
2.59375
3
Just as people must maintain personal hygiene, organizations must maintain regular cyber hygiene for healthy outcomes, but it’s critical they don’t neglect the tools and processes that mitigate cyber risk — the most serious threats to our security — says Bryan Ware in a piece for Network World. Typically, the discussion around the need for risk management has focused on cyber hygiene and ultimately, compliance. What we really need, Ware argues, is a holistic risk framework and a solid commitment to risk-based measurements in order to accurately understand and defend against the most serious cybersecurity threats facing our country. Cyber hygiene, although valuable, doesn’t protect against the most serious risks; these often require something much more analytically sound and scientifically grounded. Additionally, such a framework should ask important questions like “which threats are most likely to occur?” or “what are our greatest vulnerabilities?” Translating these into business terms is key, and measuring them so that risks and countermeasures can be prioritized is essential.
<urn:uuid:2a9148e1-fd73-4d88-a880-32592adb3712>
CC-MAIN-2022-40
https://haystax.com/network-world-cyber-hygiene-isnt-enough-says-haystax-technology-ceo-bryan-ware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00622.warc.gz
en
0.956072
202
2.546875
3
In recent years, deep learning has revolutionized computer vision. And thanks to transfer learning and amazing learning resources, anyone can start getting state-of-the-art results within days and even hours, by using a pre-trained model and adapting it to their domain. As deep learning is becoming commoditized, what is needed is its creative application to different domains. Today, deep learning in computer vision has largely solved visual object classification, object detection, and recognition. In these areas, deep neural networks exceed human performance. Even if your data is not visual, you can still leverage the power of these vision deep learning models, mostly CNNs. To do that, you have to transform your data from the non-vision domain into images and then use one of the models trained on images with your data. You will be surprised how powerful this approach is! In this post, I will present 3 cases where companies used deep learning creatively, applying vision deep learning models to non-vision domains. In each of these cases, a non-computer vision problem was transformed and stated in such a way as to leverage the power of a deep learning model suitable for image classification.

Case 1: Oil Industry
Beam pumps are often used in the oil industry to extract oil and gas from under the ground. They are powered by an engine connected to a walking beam. The walking beam transfers rotational motion of the engine to the vertical reciprocating motion of the sucker rod that acts as a pump and transfers oil to the surface.

A walking beam pump, also known as a pumpjack.

Like any complex mechanical system, beam pumps are prone to failures. To help with diagnostics, a dynamometer is attached to the sucker rod to measure the load on the rod. The measured load is then plotted to produce a dynamometer pump card that shows the load across parts of the rotation cycle of the engine.

An example dynamometer card.

When something goes wrong in the beam pump, dynamometer cards will change their shape. Oftentimes an expert technician will be invited to examine the card and make a judgment call about which part of the pump is malfunctioning and what needs to be done to fix it. This process is time-consuming and requires very narrow expertise to be carried out efficiently. On the other hand, this process looks like it could be automated, which is why classical machine learning systems were tried; they did not achieve good results, with only around 60% accuracy. One of the companies that applied deep learning to this domain is Baker Hughes [1]. In their case, dynamometer cards were converted to images and then used as inputs to an ImageNet-pretrained model. Results were very impressive – accuracy went up from 60% to 93% by just taking a pretrained model and finetuning it with new data. After further optimizations of model training, they were able to achieve an accuracy of 97%.

An example of a system deployed by Baker Hughes. On the left, you can see the input image, and on the right is a real-time classification of failure mode. The system runs on a portable device, and classification time is shown in the lower right corner.

Not only did it beat previous classical machine learning based methods, but the company could now be more efficient by not needing beam pump technicians to spend time trying to diagnose a problem. They could come and start fixing mechanical failures immediately. To learn more, you can also read a paper that discusses a similar approach [2].
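The article does not include Baker Hughes' actual code, but the general pattern it describes (take an ImageNet-pretrained CNN and fine-tune it on dynamometer card images) is straightforward to sketch. The Keras snippet below is my own illustration under stated assumptions: the image size, the number of failure classes, and the dataset objects are all placeholders.

```python
import tensorflow as tf

NUM_FAILURE_MODES = 5  # hypothetical number of pump failure classes

# ImageNet-pretrained backbone with its classification head removed.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # start by training only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_FAILURE_MODES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects of (card_image, label)
# pairs built from the rendered dynamometer card images, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

A common next step, once the new head has converged, is to unfreeze the top convolutional blocks and continue training with a much lower learning rate.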
Case 2: Online Fraud Detection
Computer users have unique patterns and habits when they use a computer. The way you use your mouse when you browse a website or type at a keyboard when composing an email is unique. In this particular case, Splunk solved a problem [3] of classifying users by the way they use a computer mouse. If your system can uniquely identify users based on mouse usage patterns, then this can be used in fraud detection. Imagine the following situation: fraudsters steal someone’s login and password and then use them to log in and make a purchase at an online store. The way they use the computer mouse is unique to them, so the system can easily detect this anomaly, prevent fraudulent transactions from taking place, and also notify the real account owner. The solution was to convert each user’s mouse activity on each web page into a single image. In each image, mouse movements are represented by a line whose color encodes mouse speed, and left and right clicks are represented by green and red circles. This way of processing the raw data solves two problems at once: first, all images are of the same size, and second, image-based deep learning models can now be used with this data. Splunk used TensorFlow + Keras to build a deep learning system for classifying users. They performed 2 experiments:
- Group classification of users of a financial services website – regular customers vs. non-customers while accessing similar pages. A relatively small training dataset of 2000 images. After training a modified architecture based on VGG16 for only 2 minutes, the system was able to recognize these two classes with above 80% accuracy.
- Individual classification of users. The task is, for a given user, to predict whether it is this user or an impersonator. A very small training dataset of only 360 images. The model was based on VGG16 but modified to take account of the small dataset and reduce overfitting (probably dropout and batch normalization). After 3 minutes of training it achieved an accuracy of about 78%, which is very impressive considering the very challenging nature of the task.
To read more, please refer to the full article describing the system and experiments.

Case 3: Acoustic Detection of Whales
In this example, Google used convolutional neural networks to analyze audio recordings and detect humpback whales in them [4]. This can be useful for research purposes, such as tracking individual whale movements, song properties, the number of whales, etc. It is not the purpose that is interesting, but how the data was processed to be used with a convolutional neural network, which needs images. The way to convert audio data to an image is by using spectrograms. Spectrograms are visual representations of frequency-based features of audio data.

An example of a spectrogram of a male voice saying “nineteenth century”.

After converting audio data to spectrograms, Google researchers used a ResNet-50 architecture for training the model. They were able to achieve the following performance:
- 90% precision: 90% of all audio clips classified as whale songs actually contain whale songs.
- 90% recall: given an audio recording of a whale song, there is a 90% chance it will be labeled as such.
This result is very impressive and will definitely help whale researchers. Let’s switch focus from whales to what you can do when working with your audio data.
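As a concrete starting point, here is a small sketch (my own illustration, not code from the Google project) that turns an audio clip into a mel spectrogram image with the librosa library. The file path and parameter values are placeholders you would tune for your own recordings.

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Load a clip (placeholder path); sr=None keeps the file's native sample rate.
y, sr = librosa.load("clip.wav", sr=None)

# Mel spectrogram; n_mels and hop_length are tunable, domain-dependent choices.
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                   hop_length=512, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)  # log scale is friendlier to CNNs

# Save the spectrogram as a plain image that a pretrained CNN can consume.
fig, ax = plt.subplots(figsize=(4, 4))
librosa.display.specshow(S_db, sr=sr, hop_length=512,
                         x_axis="time", y_axis="mel", ax=ax)
ax.set_axis_off()
fig.savefig("clip.png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
```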
When creating a spectrogram, you can select the frequencies to be used, and that choice will depend on the type of audio data that you have. You will want different frequencies for human speech, humpback whale songs, or industrial equipment recordings, because in each case the most important information is contained in different frequency bands. You will have to use your domain knowledge to select that parameter. For example, if you are working with human speech data, your first choice should be a mel-scaled spectrogram (or mel-frequency cepstral coefficients, MFCCs).

There are good packages for working with audio. Librosa is a free audio-analysis Python library that can produce spectrograms using the CPU. If you are developing in TensorFlow and want to do spectrogram computation on the GPU, that is also possible.

Please refer to the original Google AI blog article to learn more about how Google worked with the humpback whale data.

To summarize, the general approach outlined in this post follows two steps. First, find a way to convert your data into images; second, use a pretrained convolutional network or train one from scratch. The first step is harder than the second — this is where you have to be creative and think about whether the data you have can be converted to images. I hope the examples I provided are useful for solving your problem. If you have other examples or questions, please write them in the comments below.
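As a concrete companion to the spectrogram discussion above, here is a minimal sketch using librosa to turn an audio clip into a mel-scaled spectrogram image that a CNN could consume. The sample rate, FFT size, and frequency band are illustrative choices only, not the settings used in the whale work.

```python
# Sketch: turn an audio clip into a mel-scaled spectrogram image for a CNN.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("clip.wav", sr=None)       # keep the file's native sample rate
S = librosa.feature.melspectrogram(
    y=y, sr=sr,
    n_fft=2048, hop_length=512,                  # illustrative analysis window settings
    n_mels=128, fmin=20, fmax=sr // 2)           # frequency band: adjust to your domain
S_db = librosa.power_to_db(S, ref=np.max)        # log scale, as usually fed to CNNs

plt.figure(figsize=(4, 4))
librosa.display.specshow(S_db, sr=sr, hop_length=512)
plt.axis("off")
plt.savefig("clip_spectrogram.png", bbox_inches="tight", pad_inches=0)
```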
<urn:uuid:ff356c72-8a4d-4e13-a844-cb38c79de091>
CC-MAIN-2022-40
https://resources.experfy.com/ai-ml/deep-learning-vision-for-non-vision-tasks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00622.warc.gz
en
0.936132
1,839
2.78125
3
What is Code Injection?

GRIDINSOFT TEAM

Code injection (a.k.a. remote code execution) is an attack based on the input of improper data into a program. If hackers manage to exploit program vulnerabilities, they may succeed in injecting malicious code through an input line or an uploaded file, with the subsequent execution of this code. Those files usually exist as a DLL or a script placed somewhere on your disk. Hackers may download them after the initial compromise and use them only when the time has come.

The actions performed by the malicious code may go beyond the hackers' user clearance, and their effects are limited by the capabilities of the programming language in which the program is written. As a result, the program's execution can be distorted; data can be removed, altered, sent somewhere (stolen), or accessed without clearance. Access for other service users can be denied, and even host takeover is possible. Another probable consequence of code injection is the introduction and subsequent spreading of a computer virus or internet worm.

Command Injection vs. Code Injection

Speaking of code injection in cybersecurity, we mean a certain type of attack — a special case of a wider group of cyber offenses, the malicious code attacks. The latter usually refers to so-called command injection attacks rather than code injection. Attacks of both types exploit the vulnerabilities of software environments.

Command injection uses the program wherein it is introduced to execute commands within a wider environment. For example, a malicious website script or a script-fitted Excel file can initiate the execution of Windows Shell commands, therefore working beyond the application from which they originate.

The field of action for a code injection attack is limited by the program where it is executed. For example, there are programs where users input search requests and commands, such as databases. Introducing specially crafted data can break such a program and perform actions that would normally be unavailable. For example, hackers can use the request line of an SQL database to tamper with the data in it. That would be code injection.

How Is Code Injection Possible?

Code injection exploits the vulnerabilities of an interpreter — a program that executes instructions directly without compiling them into machine code. The environments most susceptible to code injection are SQL, LDAP, XPath, and NoSQL. Code injection attacks are, of course, also possible in operating system scripting, SMTP headers, program arguments, XML parsers, etc. Data format, the characters used in queries, the amount of data being input, and similar factors are the tools of code injection attacks.

Sometimes, code injection works like a hacker-launched pun: an equivocal wordplay that confuses the interpreter, either leading to the immediate execution of the desired malicious instruction or bringing the program into a state vulnerable to another injection.

Non-Malicious Use of Code Injection

Code injection is not necessarily a harmful tool for program overriding. Experienced users can knowingly use it to detour some procedures or perform actions unintended by the program. In databases, for example, it is possible to use code injection to create a column for search query results that the program previously didn't display. This allows the implementation of search-result filters based on the criteria of these newly introduced parameters. In file hosting services, it is possible to use code injection to parse data from online resources in an offline program.
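To illustrate the SQL request-line example mentioned above, here is a minimal sketch of an injectable query and its parameterized fix. The table, column, and input values are hypothetical, and the example uses an in-memory SQLite database purely for demonstration.

```python
# Illustrative only: how string-built SQL enables injection, and the parameterized fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # attacker-controlled value

# Vulnerable: the input becomes part of the SQL text, changing the query's logic.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())             # returns every row in the table

# Safer: with a parameterized query the driver treats the value as data, never as SQL code.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns no rows
```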
The number of benevolent uses of code injection is virtually unlimited. However, cases of "good" code injection are as hard to find documented as cases of malicious code injection. Actually, the trials aimed at finding the vulnerabilities in question can also be called cases of benign code injection.

It is also possible to cause code injection accidentally. Users may unintentionally use symbols that are reserved to have some function in the environment they are working with — for example & or @. The simplest example is the unintended tagging of a person in a group chat of an instant messenger application by using @ before the name of the chat member. Single and double quotes are also candidates for an accidental code injection trigger, since one of these pairs can be used by software developers for special purposes.

Malicious Code Injection and Its Effects

The effects of injecting malicious code, written intentionally to harm, vary. They are mostly unauthorized access, privilege escalation, and obtaining information via hacking. The attacks can be performed on the client's side (if the application validates the input data on the client's side, for example, in the browser) or on the server side (if the validation takes place on the server).

Client-side code injection includes:

- SQL code injection is a rampant practice that targets SQL databases via queries, allowing hackers to access desired data from the structure and even obtain sensitive data, such as sign-in credentials or information on the configuration of the attacked program itself.
- Python code injection is used against applications written in Python. If the vulnerability is exploited well, the hacker gets the full scope of data manipulation. The range of possible consequences is broad, from insignificant to grave, depending on the hackers' intentions.
- HTML code injection, a.k.a. cross-site scripting (XSS), allows criminals to access cookies, session tokens, and other data related to other users as they visit the targeted webpage. It is important to note that HTML code injection can be performed on a trustworthy website. The code injected by a hacker then targets the visitors of the page. It can collect their data, initiate the download of malware to their machines, etc.

Server-side code injection includes:

- PHP code injection becomes possible if a PHP-written program has validation flaws that allow criminals to alter the program's code execution by introducing their own code, with various thinkable consequences.

How to protect yourself?

Protection against code injection includes safety measures for developers and precautions for users who can fall victim to such attacks. We will touch on the latter within this post.

Stay away from questionable websites. Many malicious code attacks originate from untrustworthy web resources. Watch out for the absence of SSL certificates on websites, recognizable by HTTP (instead of HTTPS) in the address bar. Along with counterfeit sites reached from dubious links, beware of DNS hijacking practices. Those tricks may lead you right to a server controlled by crooks.

Keep your software updated. Most code injection cases happen because of software vulnerabilities. Careless validation of loaded DLLs, or the ability to slip arbitrary code or commands through to PowerShell for execution — such breaches have been found even in the most popular programs.
Software vendors check their software regularly and release the security patches that can save you time and money. Install an anti-malware solution to keep your system protected from malware that can be installed via code injection. Not every anti-malware program will do — you need one with on-run (real-time) protection. However, the ideal solution for preventing code injection is an EDR system. It acts as a monolithic shield for the whole network, rather than scattered security apps on each computer. Such a program will effectively counteract the threats above and malware injection attempts.
<urn:uuid:03f071ef-57de-4a30-ba72-fba9774753ea>
CC-MAIN-2022-40
https://gridinsoft.com/code-injection
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00622.warc.gz
en
0.899021
1,586
3.8125
4
Internet Control Message Protocol (ICMP) is a network layer protocol from the OSI model which provides troubleshooting, control, and error-message services. It is commonly used by network administrators to troubleshoot Internet connections via diagnostic utilities such as ping and traceroute. ICMP for Internet Protocol version 4 is called ICMPv4, and for Internet Protocol version 6 it is called ICMPv6.

SOME OF ICMP'S FUNCTIONS ARE TO:

- Announce network errors – for example, when a host or network is unreachable due to a link failure or some other reason. A transport layer packet directed at a port number with no receiver attached is also reported via ICMP.
- Announce network congestion – when a router receives packets at a much faster rate than it can forward them and begins buffering too many packets, it will generate ICMP Source Quench messages. Directed at the sender, these messages ask for the rate of packet transmission to be slowed.
- Troubleshooting – ICMP supports the Echo function, which sends a packet on a round-trip between two hosts. A common network management utility built on it is ping, which transmits a series of packets, measuring average round-trip times and computing loss percentages.
- Announce timeouts – if an IP packet's TTL field drops to zero, the router discarding the packet will often generate an ICMP packet announcing this fact. Traceroute is a utility which maps network routes by sending packets with small TTL values and watching the ICMP timeout announcements.

Related: ICMP vs IGMP

The ICMP header starts after the IPv4 header and is identified by IP protocol number '1'. All ICMP packets have an 8-byte header and a variable-sized data section. The first 4 bytes of the header have a fixed format, while the last 4 bytes depend on the type/code of the packet.

Type – 8 bits in size; specifies the format of the ICMP message.

Code – 8 bits in size; further qualifies the ICMP message, with values that depend on the Type field.

ICMP Header Checksum – 16 bits in size. This is the 16-bit one's complement of the one's complement sum of the ICMP message, starting with the Type field. The checksum field should be cleared to zero before generating the checksum.

Data – variable length; contains the data specific to the message type indicated by the Type and Code fields.

Related: ICMP Redirects
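As a rough illustration of the header layout and checksum rule described above, here is a short sketch that packs an ICMP Echo Request and computes its checksum. It is for illustration only; actually sending the packet would require raw sockets and elevated privileges.

```python
# Sketch: build an ICMP Echo Request header and compute its checksum
# following the one's-complement rule described above.
import struct

def icmp_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                              # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) + data[i + 1]        # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back into the low 16 bits
    return ~total & 0xFFFF                           # one's complement of the sum

def build_echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
    icmp_type, code = 8, 0                           # Type 8 = Echo Request, Code 0
    header = struct.pack("!BBHHH", icmp_type, code, 0, identifier, sequence)  # checksum zeroed
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", icmp_type, code, checksum, identifier, sequence) + payload

packet = build_echo_request(identifier=0x1234, sequence=1, payload=b"ping")
print(packet.hex())
```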
<urn:uuid:14393828-8e18-431a-840a-d5c281f3f6ad>
CC-MAIN-2022-40
https://ipwithease.com/internet-control-message-protocol-icmp/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00622.warc.gz
en
0.887448
533
3.734375
4
What Are Route Pattern Wildcards?

It is estimated that there are more than 10 billion possible phone numbers in the United States. When configuring routes in CUCM, it would simply be untenable to list every possible number required in a route table. In this post, we will discuss how to create generalized route patterns. These generalized route patterns allow you to easily cover a wide swath of numbers without sacrificing hundreds of hours of configuration. First, however, let's briefly review what a route pattern is. Then, we'll discuss how to implement route pattern wildcards.

What is a Route Pattern?

A route pattern is a string of numbers that CUCM uses to determine where to route calls. For example, a simple route pattern could be a number like 812-555-4001. A route pattern has a route list associated with it. However, as we'll see later, this does not take into account the external routing number, which is usually 9.

So if an end user calls the aforementioned number, CUCM finds it in the route pattern table. Then, it looks at the associated route list and route group to determine how that external call should be routed. For instance, the call can be routed through either a SIP trunk or a gateway. The key takeaway is this: a route pattern gives CUCM the ability to call numbers outside of itself — whether that is a PSTN, an ITSP, or some other CUCM cluster.

Entering a route pattern is simple enough, but what if there are hundreds of numbers to enter? That's where wildcards come into play.

What is a Wildcard?

To put it simply, a wildcard in computer science is a symbol attached to a string of information. The symbol essentially says, "apply a specified pattern to the given number sequence." In CUCM, a wildcard can be displayed as an X. However, there are several other route pattern wildcards at our disposal. The purpose of a wildcard in CUCM is to make route patterns more concise and easier to read. After all, the only alternative would be to program thousands of route patterns for every possible phone number! Let's take a look at some examples to make this clearer.

The X wildcard is used to specify a digit in the range 0-9. It is probably the most common wildcard seen in a route table. Let's say you work on a sales team that needs to reach out to all potential customers whose numbers begin with 812-365. So 812-365-8888 would be a valid phone number, but 812-555-1234 would not. In CUCM, one option would be to add each possible number individually. Unfortunately, this would mean creating nearly 10,000 different phone numbers. No thanks! Instead, use wildcards to create one route pattern that looks like this: 812-365-XXXX. Each of these Xs represents a digit 0-9. This is far easier to write, maintain, and troubleshoot.

Discard Digit Wildcard

The discard digit wildcard is represented as a dot. The dot wildcard separates the CUCM access code from the directory number. For example, say your organization requires a user to dial 9 before making an external call. That would mean if they wanted to call a family member, they'd have to dial 9-<the phone number> — so 9-808-555-1234. The discard digit wildcard tells CUCM to strip that access code before the call is routed. In CUCM, the discard wildcard could be used like this: 9.[2-9]XXXX. This wildcard expression can be translated to, "discard the nine when users make local calls."

You may have noticed that this pattern sequence leveraged a wildcard we have not discussed yet: the bracket. Let's delve into that one right now.
The bracket wildcard [ ] means that the digit must be in the range specified between the brackets. Recall our previous example, 9.[2-9]XXXX. In that example, the numbers 2 through 9 are enclosed in brackets. In the United States, numbers that begin in that range are public or private telephone listings — in other words, local phone calls. Notice that 1 is omitted. If a phone number begins with a 1, then it is some sort of 1-800 number, which is generally a customer service line. Because we do not want employees calling 1-800 numbers on company time, it can be omitted using the bracket wildcard.

The @ (At) Wildcard

The @ wildcard is an especially convenient tool, but it can only be used once per route pattern. The @ wildcard matches all National Numbering Plan numbers. For instance, you may have certain employees who need access to every phone number they could possibly call. The pattern sequence 9.@ would take care of that perfectly (assuming external routing is done with a nine). Keep in mind that this allows for any phone number — including 1-800 numbers and international calls.

The Question Mark Wildcard

In our previous example, we looked at the pattern 9.[2-9]XXXX. Recall that four Xs mean the user can dial four digits, each of them being 0-9. However, what if we wanted to allow them to dial as many digits as they want? One example of this could be handling international calls, where it may not be clear how long phone numbers are in each country. The question mark (?) wildcard is the perfect solution when you need to handle an indeterminate number of digits. The question mark matches zero or more occurrences of the preceding digit or wildcard value. Our previous example could be replaced with the following pattern: 9.[2-9]X?. In this example, we are allowing any call that begins with 2 through 9 and then as many digits as the user desires after that requirement is met. Notice that the previous example limits the user to four digits, while the question mark lets them dial as many digits as they would like. Remember that the question mark wildcard is also valid if the preceding symbol is not matched at all, so in our previous example, simply dialing the number 2 would be a valid sequence.

To understand CUCM, it is a requirement to have a thorough understanding of route pattern wildcards. One of the most important aspects of CUCM administration is writing, reading, and designing them. In this post, we discussed what a route pattern is — a sequence of numbers and/or wildcards used for routing calls. After that, we went over what a wildcard is: a symbol used to represent a blanket of possibilities. We then explained the five following wildcards:

- X Wildcard. Specifies any digit between 0 and 9.
- Bracket Wildcard. Specifies a range of digits that the user is required to dial.
- Discard Digit Wildcard. A wildcard represented by a dot that omits the digits that precede it. The discard wildcard is generally used to omit the external access code (in this instance, the number 9).
- The @ Wildcard. A wildcard that symbolizes all numbers in the National Numbering Plan.
- Question Mark Wildcard. Allows a user to repeat the preceding digit or wildcard an indeterminate number of times, up to and including zero times.

While we went over several wildcards here, route patterns allow for seven more. A rough sketch of how a few of these wildcards behave follows below.
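For intuition only, here is a small sketch that approximates a few of these wildcards with regular expressions and tests some dialed strings against them. This is a simplification for illustration — it is not how CUCM digit analysis works internally, and the @ translation in particular is a crude stand-in.

```python
# Rough sketch: check dialed digits against a few CUCM-style wildcards by
# translating them to regular expressions. Illustration only, not CUCM's logic.
import re

def pattern_to_regex(pattern: str) -> str:
    out = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "X":
            out.append("[0-9]")          # any single digit
        elif ch == ".":
            out.append("")               # discard marker: ignored for matching purposes
        elif ch == "?":
            out.append("*")              # zero or more of the preceding digit/wildcard
        elif ch == "[":
            j = pattern.index("]", i)    # copy a [2-9]-style range as-is
            out.append(pattern[i:j + 1])
            i = j
        elif ch == "@":
            out.append("[0-9]+")         # crude stand-in for "any NANP number"
        else:
            out.append(re.escape(ch))
        i += 1
    return "".join(out)

for dialed in ["98125554001", "92", "918005551212"]:
    matched = bool(re.fullmatch(pattern_to_regex("9.[2-9]X?"), dialed))
    print(dialed, matched)   # the 1-800 string fails because 1 is outside [2-9]
```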
Wildcards may seem like a small step in getting a CCNA certification, but they are the backbone of route tables. Once these are memorized, I encourage you to understand what the rest of them do.
<urn:uuid:74244d0d-0b30-4b6f-ac40-88f004d629f0>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/technology/networking/what-are-route-pattern-wildcards
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00622.warc.gz
en
0.926788
1,633
3.09375
3
Facebook has a new US patent which will allow it to control personal Internet of Things (IoT) devices. With the Internet of Things now starting to become established in the home, this could represent a big addition to Facebook's range of activities — and a change in direction. The social network is currently focussed on allowing people to interact with other people. This new patent would augment that, and allow people to interact with things.

The patent includes examples of the kinds of 'machines' that might be controlled. These include a thermostat, an automobile, a drone, a toaster, a computer, a refrigerator, an air conditioner, a robot, a vacuum, an actuator, and a heater.

Share and share alike

The patent also allows for a person to give control of their IoT devices to other people. This could be family, friends, work colleagues and so on. It could, for example, mean all the members of a family would be able to set temperature controls remotely.

There are plus points for end users here. Providing a centralised control system for all the IoT devices in a home or other location, regardless of the company they were bought from, could save people from having to install multiple apps for fine-level control, and provide instead an integrated, all-in-one solution.

IoT and a wider remit

It would allow Facebook to extend its central role in people's lives as the IoT starts to take off in the home and at other locations, and so could extend the company's reach beyond its current focus on social interactions.

Co-founder and CTO of EVRYTHNG, an IoT platform builder, Dominique Guinard, told Internet of Business: "Allowing friends to control devices is an interesting concept. Actually we proposed a standard way of achieving this with social networks in 2010 in a project called the Social Web of Things that was using Facebook to share control and sensing on real-world devices. The security issues are not necessarily greater than with other IoT systems as sharing can be based on standard authentication systems."

Guinard went on to tell IoB: "In terms of Facebook gaining access to critical information this indeed could be a concern but if implemented correctly Facebook does not need to have direct access to the devices (e.g., by using an authentication proxy that gives access to Things through Facebook). What they would however probably gain access to are the usage patterns of our physical devices which is highly valuable information and, in some cases, highly private information."
<urn:uuid:d1ba7f1c-bd98-4709-b9c8-c4dfd0bf6dbf>
CC-MAIN-2022-40
https://internetofbusiness.com/facebook-wants-talk-iot-home-tech/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00622.warc.gz
en
0.959341
536
2.546875
3
The question still remains as to how end-users can protect themselves from hackers. End-users deal with confidential data. Consider the following confidential data we use on a daily basis:

- Personal email and password
- Corporate email and password
- Facebook login ID and password
- LinkedIn login ID and password
- Internet banking login ID and password
- Credit card login ID and password
- Healthcare data accessed online
- Tax information accessed online

The above credentials can be stolen and misused. Malware can be remotely installed on computers to steal data and transmit it across the globe, where hackers compile, misuse, and sell the credentials to the highest bidder. A powerful anti-malware solution is the only way to ensure endpoint security.

Apple is going one step further to ensure user authentication is impeccable. Apple plans to use biometrics in the near future and has acquired AuthenTec, a company specializing in biometric security. Imagine fingerprint recognition being integrated into smartphones. Scan your fingerprint and gain access to emails, Facebook, online banking, and hundreds of websites where your fingerprint can be used to authenticate your identity! Not to mention, you may no longer need to remember the numerous user IDs and passwords.

Fingerprint recognition for end-users may not be science fiction anymore; it may be widely adopted and thereby turn into reality. Fingerprints may not be the final solution, but they are the first step towards the final solution against hackers. Voice recognition and iris scans may follow soon.
<urn:uuid:e92c3c73-b2ec-48ab-ae04-bddfcadda281>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/biometrics-the-final-frontier-in-endpoint-security
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00622.warc.gz
en
0.885839
312
2.9375
3
What is svchost.exe? Is it a virus?

A svchost.exe or Service Host file is a legitimate system process in the Windows operating system. However, users tend to confuse it with a virus because hackers can disguise malicious activities as integral system parts. The official svchost.exe has a designated location and represents one of the essential components of Microsoft Windows. Therefore, a legitimate svchost.exe is safe in most cases, unless an infection hides behind it. But how do you differentiate between a trusted and a fake Service Host process in your Task Manager? Let's figure out the official purpose of this Windows component and when it might be a threat.

What is svchost.exe in Windows? What does it do?

A svchost.exe process is an essential component of Windows services. It is found in %SystemRoot%\SysWOW64\ or %SystemRoot%\System32. Other locations for this .exe file could be a red flag. Furthermore, since Windows uses this process for many tasks, its RAM usage might be higher than you expect.

The official responsibility of svchost.exe is to host services and optimize the use of system resources. It does so by launching dynamic-link libraries (DLLs). Since Windows cannot activate DLLs directly, it dedicates svchost.exe to do this job. Thus, Service Host processes keep your Windows device running as efficiently as possible. Killing them could adversely affect your device and prevent it from working properly.

Why are there multiple svchost.exe processes active?

Multiple Service Host processes might be active in your Task Manager simultaneously because each process hosts different services. For instance, Windows Defender uses svchost.exe for tasks like reaching available updates. A separate process could be in charge of managing other network-related procedures. Having multiple Service Host processes also mitigates issues: if one process halts, the others can continue functioning.

Why is svchost.exe using so much memory?

Svchost.exe needs computer resources to operate, mainly memory and CPU. When your PC performs an action associated with it, the use of these resources can increase. For the most part, usage from offline operations can rise but won't be as heavy as from operations that reach the internet. For instance, a svchost.exe netsvcs process will show significant resource use when Windows installs updates. Such actions can lead to a substantial spurt in memory and CPU usage. Consumers can find these upsurges suspicious, but usually they are normal, and usage levels should return to normal after the computer finishes setting up updates.

However, if a process consumes 90-100% of available resources, it could indicate a problem. One reason behind this could be that malicious activity has disguised itself as this legitimate Windows process. However, this might not always be the case.

Can svchost.exe be dangerous?

Svchost.exe can only be dangerous if your computer has a malicious program running. Such processes can hide behind the names of critical services, such as Service Host, and thus dodge detection longer, since users will assume them to be safe. Infections can be responsible for many malicious activities, like collecting data from your PC and sending it to hackers. In this case, the fake Service Host process will continuously gobble a significant amount of resources and bandwidth, and consumption will not go down regardless of your activities.
How to stop svchost.exe from using the internet

Generally, you should not prevent a legitimate Service Host file from using the internet. It is possible to stop BITS (Background Intelligent Transfer Service). However, if you want the process to be less resource-consuming, halt the activities associated with that Service Host.

Can you delete the fake svchost.exe? Signs of a malicious process

You should not delete legitimate svchost.exe files. However, there are signs that this process conceals more disturbing activities:

- You find svchost.exe outside %SystemRoot%\SysWOW64 or %SystemRoot%\System32. For instance, the process should be suspicious if a random folder like Music or Downloads contains it.
- Open Task Manager and find the questionable svchost.exe process. Pick the Processes tab, right-click on Service Host, and select Properties. Opt for Details, and see the name under Copyright. If it states anything but Microsoft Corporation, it might be dangerous.
- The Service Host process utilizes the maximum amount of CPU regardless of what you do. Resource usage can exceed normal levels as soon as you boot your computer.
- You find the process in the regular folders, but its name differs slightly. For instance, instead of svchost.exe, it is svcchost.exe or svhost.exe.

The best action is to use trusted antivirus software and scan your system. You can also inspect the file individually. A more tech-savvy solution is getting rid of the file manually:

- Open Task Manager and right-click on the svchost.exe process. Choose the Open file location option.
- Keep the folder open.
- Return to the Task Manager and right-click on the process. Select End task.
- Stop each process within the targeted Service Host in the same way.
- Return to the folder and delete the svchost.exe file like any other.

Ways malware disguised as svchost.exe can enter your device

Malware has many venues for distribution. Here are some common ways to accidentally receive an infection that will pretend to be svchost.exe:

- Phishing emails. Links or files in emails can cause many issues. Attachments like PDFs or Word documents are among the most popular file types for distributing malware. Additionally, random links can aim to capture personal details or urge you to download malicious software.
- Unknown software. Avoid downloading programs from unknown sources and developers. They could be dangerous or, at the very least, unwanted tools.
- Drive-by downloads. Such downloads happen without users' knowledge, and you can trigger them by clicking on links or pop-ups.
- Vulnerability exploitation. Update your software as often as possible. Malware can slip into your device through unpatched flaws that facilitate its arrival.
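For readers comfortable with scripting, here is a small heuristic sketch that automates the location check described above by flagging svchost.exe processes running outside the expected system folders. It assumes the third-party psutil package, is meant to run on Windows, and is a quick check — not a replacement for proper anti-malware software.

```python
# Heuristic sketch: flag svchost.exe processes NOT running from the usual Windows
# system folders. Requires `pip install psutil`; intended for Windows.
import os
import psutil

system_root = os.environ.get("SystemRoot", r"C:\Windows")
EXPECTED_DIRS = {
    os.path.join(system_root, "System32").lower(),
    os.path.join(system_root, "SysWOW64").lower(),
}

for proc in psutil.process_iter(["pid", "name", "exe"]):
    name = (proc.info["name"] or "").lower()
    if name != "svchost.exe":
        continue
    exe = proc.info["exe"] or ""              # may be empty if access is denied
    folder = os.path.dirname(exe).lower()
    if folder not in EXPECTED_DIRS:
        print(f"Suspicious: PID {proc.info['pid']} running from {exe or 'unknown path'}")
```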
<urn:uuid:55ae2d78-1d88-4eeb-9a91-4df28a0e7d16>
CC-MAIN-2022-40
https://atlasvpn.com/blog/what-is-svchost-exe-is-it-a-virus
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00622.warc.gz
en
0.919416
1,339
2.515625
3
New Technology is Reducing Trauma for Child-abuse Investigators

Investigating crimes against children often comes at a heavy psychological price. However, new technology is able to reduce the trauma law enforcement faces when investigating these cases. This article discusses how:

- Artificial intelligence (AI) and machine-learning algorithms allow investigators to streamline the process of collating, analyzing, and reporting on evidence.
- Technology can speed up these investigations and bring criminals to justice faster.
<urn:uuid:ca31867c-180a-46ce-976b-94f288a870a1>
CC-MAIN-2022-40
https://cellebrite.com/en/new-technology-is-reducing-trauma-for-child-abuse-investigators/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00622.warc.gz
en
0.903046
115
2.734375
3
This article describes how Infrastructure-as-a-Service helps businesses quickly get the computing capacity they need, reduces capital expenditures on IT equipment, frees the IT department from routine tasks, and simplifies scaling.

What is IaaS

Infrastructure-as-a-Service (IaaS) is the provision of computing resources for rent. The IT infrastructure is owned and managed by the cloud provider. Using virtualization technology, the provider creates several virtual servers on one physical server. Each customer gets one or more virtual servers, depending on their needs, and connects to them via the Internet or other networks.

As for the architecture, IaaS includes the same components as a local data center: physical servers, storage systems, and network components. But unlike physical infrastructure, the virtual one is easier to scale — resources can be increased or decreased depending on the load. The company does not need to keep reserve capacity for peak loads.

A common mistake is to think that the cloud provider is fully responsible for the health, security, and protection of the cloud infrastructure. Providers only guarantee the physical security of the data center, ensure fault tolerance, protect the network, and so on. The customer is responsible for the security of its virtual resources — servers and virtual machines, their protection, data backup, and access control.

Use cases for IaaS

Backup and disaster recovery. IaaS enables recovery of, or access to, applications in the event of a disaster without the high cost associated with additional hardware and staff. Organizations can consolidate their disaster recovery systems into a single environment, ensuring data security.

Development and testing. IaaS enables teams to set up flexible and scalable environments for application testing and development. It speeds up development and, consequently, the release of new solutions to the market.

Website hosting. Hosting a website on a provider's cloud computing infrastructure makes it more reliable and robust. This is especially true for online stores and large projects with sudden bursts of traffic.

IaaS financial benefits

Building your own IT infrastructure is a big investment in hardware and software. Such purchases are often very time-consuming — the process itself, from purchase to installation, can take from several months to several years. You have to select the configuration, approve the purchase, and include it in the annual budget. At the same time, it is difficult to predict the total cost of ownership (TCO) of the IT infrastructure. Besides the initial cost, you need to take into account all repairs, maintenance, and upgrades during the equipment's lifetime, plus the cost of additional equipment, support, and staff. This is not an easy task.

With cloud services, organizations pay only for the resources used and do not keep extra capacity in case of sudden growth. This is one of the main benefits of moving to cloud hosting.

Cloud providers bill IaaS services using one of these models:

Subscription. Subscription pricing can be more favorable for long-term contract customers but also locks you into the provider, which can be a disadvantage if either your needs change or your experience with the provider does not meet your expectations.

Pay-as-you-go. An hourly payment system that only requires users to pay for what they use.
This is advantageous because it makes it easy to change cloud providers if necessary, and your bill may increase or decrease based on usage.

At Cloud4U, we have all the necessary tools to migrate and deploy an organization's virtual infrastructure. There is no need to migrate the entire infrastructure at once — it is better to start with certain services that are easier to deploy in the cloud than on local capacity. Over time, you can also migrate other services to the cloud.
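As a back-of-the-envelope illustration of how the two billing models above can compare, here is a small sketch. All prices and utilization figures are made up for illustration; real provider pricing varies widely.

```python
# Back-of-the-envelope comparison of subscription vs. pay-as-you-go billing.
# All prices and usage figures below are made up for illustration.
HOURS_PER_MONTH = 730

SUBSCRIPTION_PER_VM = 40.00   # assumed flat monthly fee per virtual server
PAYG_RATE_PER_HOUR = 0.08     # assumed hourly rate per virtual server

def monthly_cost(vms: int, utilization: float) -> tuple[float, float]:
    """utilization = fraction of the month the servers are actually running."""
    subscription = vms * SUBSCRIPTION_PER_VM
    pay_as_you_go = vms * PAYG_RATE_PER_HOUR * HOURS_PER_MONTH * utilization
    return subscription, pay_as_you_go

for util in (0.25, 0.50, 1.00):
    sub, payg = monthly_cost(vms=10, utilization=util)
    print(f"utilization {util:>4.0%}: subscription ${sub:,.2f} vs pay-as-you-go ${payg:,.2f}")
# Low utilization tends to favor pay-as-you-go; steady full-time load tends to favor subscription.
```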
<urn:uuid:14d0bb7c-3a62-47d1-ac2a-05b1adb971c2>
CC-MAIN-2022-40
https://www.cloud4u.com/blog/infrastructure-as-a-service-explained/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00622.warc.gz
en
0.937174
749
2.5625
3
"Fair is foul, and foul is fair... Oftentimes, to win us our harm, the instruments of darkness tell us truths, win us with honest trifles, to betrays in deepest consequence" Macbeth, Act 1 Is Fairness the new oil? I think that "new oil" phrase is dumb, but there is a lot of activity in Fairness in AI lately. It is, however, a complex problem. It's easy to calculate the MTBF (mean time between failure) of a device or derive endless statistics from a model's training and the output. How does one measure whether the results are fair? Much of the literature on Fairness is intensely academic, but an emerging consensus places equal emphasis on statistics and human perceptions. One intriguing paper is The (I'm)Possibility of Fairness: Different Value Systems Require Different Mechanisms for Fair Decision Making, which I'll attempt to summarize. I was also briefed by a company recently, Monitaur.ai, which claims to support all types of X(AI) - explainability by plugging into a running model and capturing detailed data of its operation as well as incremental data. They call this MLA machine learning assurance ,and describe it as: Machine Learning Assurance (MLA) is a controls-based process for ML systems that establishes confidence and verifiability through software and human oversight. The objective of MLA is to assure interested stakeholders that an ML system is functioning as expected, but in particular, to assure an ML system's transparency, compliance, Fairness, safety, and optimal operation. Customers are using this product to understand bias, disparate impact, outliers, and data and model drift. Monitaur does not, however, provide fairness analysis methodology. That is up to the client. So why is Fairness impossible? It isn't. The (Im)Possibility of Fairness paper takes a fairly deep dive into the subject, with an interesting framework for establishing Fairness. It starts with some assumptions: - The world is structurally biased (inequities to the systemic disadvantage of one social group compared to other groups with whom they coexist) and makes biased data. - Observation is a process. When we create data, we choose what to look for. - Every automated system encodes a value judgment. Accepting training data as given implies structural bias does not appear in the data and that replicating the data would be ethical - Key conclusion: Different value judgments can require contradictory fairness properties, each leading to different societal outcomes. Researchers and practitioners must document data collection processes, worldviews, and value assumptions. - Value decisions must come from domain experts and affected populations; data scientists should listen to them to build values that lead to justice. To design fair systems, there must be solid agreement on what it means to be fair. One definition is individual Fairness, which is defined as individuals with like characteristics (within the model's scope) who should receive the same evaluation or treatment. This involves a seriatim (variance by individual from the expected outcome) combined with any number of analytical methods determining which features influenced the outcome. The more or less opposite point of view holds that demographic groups should, mostly, have similar outcomes, despite variation between members. The group fairness definition is in line with civil rights law in the US and UK, and is somewhat controversial. It evolved into a concept of disparate impact (I wrote about this in a previous article). 
There is some agreement among academics that, depending on which type of Fairness you aim for — individual or group — the definitions and their implementations are at odds because of different beliefs about the world. The two worldviews are incompatible:

- What-you-see-is-what-you-get (WYSIWYG): data scientists typically use whatever data is available, without modification.
- We're-All-Equal (WAE): within the scope of the model, all groups are the same.

More importantly, a single algorithm cannot logically accommodate both simultaneously, so data scientists and AI developers must be clear at the beginning about which worldview they take.

In the individual fairness model, the assumption is that the observation processes that generate data for machine learning are structurally biased (the first bulleted assumption above). As a result, there is justification for seeking nondiscrimination against individuals. If you believe that (your observed) demographic groups are fundamentally similar, group fairness mechanisms guarantee the adoption of nondiscrimination: similar groups receiving equal treatment.

- Under a WYSIWYG assumption, individual Fairness can be guaranteed.
- Under a WAE assumption, nondiscrimination can be guaranteed.

Algorithms make predictions about individuals as a mapping from information about people — a feature space — to a space of decisions — a decision space. Thinking about it, it is easy to imagine two different types of spaces: construct spaces and observed spaces. Construct spaces are what we imagine is in the data in the feature space (for example, people with low FICO scores are an elevated risk for auto insurance). Constructs are the idealized features and decisions we wish we could use for decision-making. Observed features and decisions are the measurable features and outcomes that are used to make decisions. These two distinctions are the framework for deriving a mathematical model proving the incompatibility of different fairness models based on the data scientist's worldview.

The Construct Feature Space (CFS) represents our best current understanding of the underlying factors and is contingent on ideas about how to decide in that context. In other words, the modeler may be selecting features to illuminate attributes that aren't in the data, such as productivity or threat level, projecting their own leanings and prejudices.

The Construct Decision Space (CDS) is the space of idealized outcomes of a decision-making procedure. For example, this includes what processes to redesign or what steps to take to rectify security gaps. The problem is, these outcomes are not explicitly derived in the model and depend on interpretation.

The Observed Feature Space (OFS) contains the observed information about people — data generated by recording, or by third parties, such as transactions.

The Observed Decision Space (ODS) is the observed decisions from a decision-making procedure, generated by an observational process.

This brings us to the issue of Fairness. Individual Fairness means similar individuals (for the task) in the CFS should receive similar CDS decisions.

Fairness and Nondiscrimination

Existing data science and machine learning, at their most basic, create transformations (through the operation of the algorithms) between the features and the observed decisions. These are not just academic exercises. They are applied as decision-making tools in the real world.
On the other hand, Fairness is a mechanism that maps individuals to Construct decisions based on their Construct features. Algorithmic Fairness aims to develop real-world mechanisms that match the conclusions generated by these Construct mechanisms.

Individual Fairness. Fairness is an underlying and potentially unobservable phenomenon about people. Since the solution to a study is a mapping, Fairness defined on an individual basis means that similar individuals (for the model) receive similar decisions.

Nondiscrimination. While the fairness definition applies to individuals, groups may share characteristics (such as race or gender), and Fairness compares these group characteristics (or combinations of them). Group membership can be determined based on immutable characteristics, or on those protected for historical reasons. It is usually considered unacceptable (and sometimes illegal) to use group membership as part of a decision-making process. Therefore, nondiscrimination is defined as follows: groups who are similar (for the task) should, as a whole, receive identical decisions.

I've summarized much of the theoretical positioning and mainly presented the conclusions. In previous articles about Fairness, I wrote about the issues, the many technical tools available, and their limitations. I thought these ideas about worldviews, constructs, and their implications were interesting enough to cover. The gap between what is and what is taken for granted is natural, but awareness is needed to apply the proper framework to evaluate Fairness.
<urn:uuid:9d10c1b3-4952-4c29-bbe9-6267bc9ddb26>
CC-MAIN-2022-40
https://diginomica.com/fairness-ai-practical-possibility-new-angle-designing-ethical-systems
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00022.warc.gz
en
0.94508
1,667
2.578125
3
Every day, technology advances. Every year, new innovations are introduced to make living easier. We live in a 21st-century world that values innovation and creativity. In today's world, a semiconductor test system is also essential. Testing equipment is used by the tech industry to ensure that its digital-technology-based products function as expected. Professionals examine your laptops, telephones, and other electronic gadgets before they are released on the market. Semiconductor testing entails numerous tests and trials of a device's various operations. Some, however, point out the drawbacks of testing. Let's take a look at the advantages and disadvantages of semiconductor testing.

A Quick Overview of Semiconductor Testers

The most widely utilized semiconductor tester is automated test equipment (ATE). This tester sends electrical signals to the semiconductor under test and compares the output signals to predicted values. As previously said, the goal of the testing is to uncover problems or determine whether the item performs as intended. Memory testers, logic testers, and analog testers are just a few examples of semiconductor testers.

The testing is done in two stages: wafer testing and package testing. Wafer testing is done with a probe card, while package testing is done with a test socket. The system-level test, which necessitates the participation of engineers, is one of the most critical tests. The test can cover a single chip or numerous chips in a package.

Semiconductors are important in a variety of industries, including medical, automotive, and aerospace. It's best to look for safe, quick-to-respond testing that produces accurate findings.

Bring Efficiency to the Table: The most significant benefit of semiconductor testing is that it can detect flaws. Before a smartphone is released to the market, it must pass tests to guarantee that it is free of defects. Everything is thoroughly tested, from cameras to touch-screen functionality. The purpose of ATE semiconductor testing is to ensure long-term performance.

Reduce the amount of energy you use: Today's testing methodologies have improved, including the testing equipment's safety and durability. The testing circuits use less energy, which saves money on utility expenses.

Less Noise and a Longer Life: Semiconductor testing equipment can handle a wide range of voltages. In addition, unlike typical testing systems, there is no or very little humming noise during the testing procedure. Modern testing equipment also has a long service life and can be used for extended periods.

Core Disadvantage: Poor response in the ultra-high-frequency range. When evaluating devices in the high-frequency spectrum, some testers perform poorly. That depends on the testing equipment's quality and other factors, so make sure you choose wisely when making a purchase.

In today's world, semiconductor testing is more widespread than ever. Make sure to perform it before releasing smart technology devices to the world, especially in the mobile business and a few other high-tech areas. To learn more about semiconductors and test systems, browse the internet — the information you need is readily available.
<urn:uuid:73e8b590-983d-4323-bf57-e79f4eb2089e>
CC-MAIN-2022-40
https://ihowtoarticle.com/semiconductor-test-systems-benefits-and-drawbacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00022.warc.gz
en
0.920645
670
2.953125
3
The latest IP cameras have much better video quality than the early analog CCTV cameras. Even though they both capture video, IP cameras do it dramatically better. The reason: they contain high-performance digital processing computers. These computers provide reduced noise, improved wide dynamic range, reduced smearing, and enhanced low-light performance. This article reviews how these processors work and why they are important to the total IP camera system performance.

How Computers Improve the Video Quality

In the past, analog video cameras used low-noise amplifiers and filtering circuits to improve the video image. The new IP cameras add special-purpose digital signal processing computers that greatly improve the quality of the video. Digital signal processing uses frame buffering and sometimes complex Fourier transforms to improve image quality. The amount of computer power available determines how much video improvement can be obtained. For example, the higher the resolution of the camera, the faster the processor needs to be. Today, video processors are fast enough to support cameras with up to 2 megapixels of resolution. IP camera manufacturers continue to increase the performance of their video processing computers. As the computers get faster, they will be able to handle the higher-resolution cameras, including the latest 4K (8 megapixel) IP cameras.

Improving Image Quality

Sony, Axis, and Samsung all offer digital signal processors. For example, Sony has the IPELA Engine system, which provides a number of separate functions such as View-DR™, which enhances the wide dynamic range of the camera, and XDNR™ technology, which reduces noise.

The latest IP cameras from Samsung include powerful signal processing that improves the image quality. They call this "Samsung Super Noise Reduction" (SSNR). The image processing is achieved by buffering a frame of video and then comparing groups of pixels. Using special digital processes, which they call an "Adaptive Temporal Infinite Impulse Response Filter", Samsung is able to smooth out the noise in the image. This also greatly reduces the smearing that can occur when objects are moving at night. Their "Motion Edge Adaptive 2D filter" process detects the edges of objects and reduces smear. Here is an example of the image noise reduction achieved:

Why this is important? Clearer images not only allow us to see things better at night, they also reduce the amount of video storage required. When it's dark, low-light video includes a lot of noise. This noisy video can't be compressed effectively, so it uses a lot more bandwidth than clean video. The higher the bandwidth, the more storage is required. The processed video cleans up the noise, so it uses almost one quarter of the storage when viewing low-light scenes. Improved video greatly increases the effectiveness of your IP camera system.

Wide Dynamic Range

The dynamic range of a camera determines the range of light that can be seen. Some scenes can have bright sunlit areas as well as areas in dark shade. Wide dynamic range allows us to see everything. Digital processing has greatly increased the wide dynamic range (WDR) available. To increase WDR, the computers in the camera average a number of frames together (using slightly different amplification levels for each frame). The dark frames and the light frames are averaged to create one frame with greatly increased dynamic range. Wide dynamic range is increased from 40 dB to over 120 dB using digital processing.
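To give a feel for the temporal processing described above, here is a toy sketch of frame-to-frame noise reduction using a simple exponentially weighted running average (a basic IIR filter). Real camera processors such as Samsung's SSNR add motion- and edge-adaptive logic that is not modeled here, and the parameters are arbitrary.

```python
# Toy sketch of temporal noise reduction: an exponentially weighted running average
# across frames. Illustration only; not a vendor's actual algorithm.
import numpy as np

def temporal_denoise(frames, alpha=0.2):
    """frames: iterable of 2-D grayscale arrays; alpha: blend weight of the newest frame."""
    average = None
    for frame in frames:
        frame = frame.astype(np.float32)
        if average is None:
            average = frame.copy()
        else:
            # Blend the new frame into the running average; lower alpha = stronger smoothing
            average = alpha * frame + (1.0 - alpha) * average
        yield average

# Demo: a static scene with additive noise becomes visibly cleaner over time.
rng = np.random.default_rng(0)
scene = np.full((120, 160), 100.0)                                 # a flat grey "scene"
noisy_frames = [scene + rng.normal(0, 20, scene.shape) for _ in range(30)]

denoised = list(temporal_denoise(noisy_frames))
print("noise std before:", float(np.std(noisy_frames[-1] - scene)))
print("noise std after :", float(np.std(denoised[-1] - scene)))    # noticeably smaller
```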
Why this is important? WDR means that we can see scenes that have very dark areas as well as very bright areas. For example, the new cameras allow us to view a person inside a room that has a very bright window in the background. We are able to see not only the person's face inside, but also the faces of the people outside the window in bright sunlight.

The special computer processors in the camera provide more than improved video images. They also provide many analytic functions. For example, Samsung currently offers the following analytics (specific analytics may vary by model):

- Audio detection
- Auto tracking
- Face detection
- Heat mapping
- Intelligent motion detection
- Tampering/scene change detection

Why are analytic functions important? These features are used to notify a security person when an alarm condition occurs. This means the security person doesn't have to look at the video all the time. Your IP camera security system becomes much more powerful. It can actually reduce the number of security people required.

As you can see, IP cameras include some very powerful digital computers. They can improve the quality of the video as well as provide analytic functions that improve the functionality of the total IP camera system. If you need help selecting the best IP camera or any of the other components that make up the total IP camera system, please contact us. We can be reached at 1-800-431-1658 in the USA, and 914-944-3425 everywhere else, or just use our contact form.
<urn:uuid:ad21102c-e3eb-4d8a-854e-7b680fccf5a1>
CC-MAIN-2022-40
https://kintronics.com/how-digital-processing-in-the-ip-camera-improves-video-quality/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00022.warc.gz
en
0.912758
1,024
2.84375
3
How the Internet of Things Impacts Businesses and What to Do With It

The Internet of Things (IoT) is the ever-growing collection of devices — ranging from smartphones to sensor-equipped manufacturing robots — that are interconnected via the internet. Thanks to this interconnection, they can send and receive data — a capability that facilitates a wide range of actions in both everyday life and in business. IoT is so popular that by 2025, the analysts at International Data Corporation (IDC) predict, more than 55.7 billion connected devices will be in the marketplace, with 75 percent of them connected to an IoT platform.

IoT has become an integral part of many workplaces. Regardless of your industry, there's a good chance IoT devices play a role in the success of the business. Whether you use something simple, like smart lightbulbs for an efficient office environment, or something complex, like a network of industrial machines detecting quality control issues in a manufacturing process, IoT is there.

What Is IoT?

IoT is a broad term that refers to many types of devices connected to the internet. These devices can be everything from shipping labels and speakers to cars and planes. They might include smart sensors, lightbulbs, security systems, and heavy industrial machinery — all items that communicate back to the internet and work with it. There is also the Industrial Internet of Things (IIoT), which refers to the same principles but is used in business settings and devices, such as a piece of manufacturing equipment.

So why is IoT important for businesses? IoT in business can take many different forms, but it often involves collecting data on behavior, processes, and other conditions. Many IoT devices also can take action to correct, improve, or otherwise use this data to enact some kind of change. There are many possibilities, and the benefits of IoT are wide-reaching.

The Beneficial Impact of IoT on Businesses

Considering the explosion of IoT-connected devices, perhaps you're wondering, "How does IoT affect business?" The short answer is, "In every way." Accessibility to big data sets, along with the autonomous collection and exchange of data, means that it is becoming easier to gain insights into things like customer behaviors and product performance. IoT also facilitates the continuous optimization of business processes and even impacts employee engagement and performance. In certain industries, IoT in business can instruct systems to autonomously execute transactions in supply chains when certain conditions have been met.

There are many exciting new technologies that make the future of IoT incredibly versatile. Some of these include battery-free sensors, extensive wearable technology, and "tiny" machine learning microcontrollers. There are also many network changes occurring to improve the performance of IoT devices. For instance, network slicing can be used to deliver low-latency, high-bandwidth connections for greater reliability in mission-critical devices.

Combine IoT with technologies like artificial intelligence (AI) and 5G, and there are endless possibilities for businesses. Entire cities are becoming powered by IoT, and security is improving every day. Large manufacturing operations can connect every machine in the facility to a remote monitoring system. Utility companies can remotely collect data from smart meters and connected infrastructure. Health care devices can use IoT to communicate a patient's status to physicians. Farmers can optimize their harvest with IoT analysis.
It’s an extraordinary asset to many businesses. In short, IoT will allow businesses to better help their customers and manage their workforces, as well as improve their products, services, and processes. How IoT Helped During the COVID-19 Pandemic With the events of 2020, what is new in IoT? Some of the newest IoT technologies were used to address the problems presented by the global pandemic: - Smart building features: Safe facilities were key to preventing the spread of COVID-19, and IoT helped control access and monitor environmental characteristics, such as particulate matter and volatile organic compounds (VOCs). IoT can connect to heating, ventilation, and air conditioning (HVAC) systems, cleaning robots, air quality monitoring, and more, to offer situation-specific, flexible building maintenance that supports safe behaviors. - Touchless service: Limiting physical contact between employees, customers, and devices was promoted with the help of IoT. From new payment methods to automatic temperature and health screening upon entry, IoT can significantly decrease the need for contact. - Monitoring: Many facilities have used IoT to monitor and track the risk of infection. They might use sensors to assess how many people are in a certain part of the building and increase disinfection practices. They can also track the temperatures of incoming guests to monitor risk. Overall, IoT and associated technologies have been an invaluable resource in the fight against the pandemic. How to Leverage IoT for Your Business When it comes to how to use the Internet of Things in businesses, the key thing to remember is that new levels of communication and interconnectedness have significant payoffs for almost any business. That’s why the way in which each business decides to leverage IoT within its respective industry and sector is an important choice. It’s not a one-size-fits-all approach, but rather a highly customized method of gaining a deeper understanding of enhancing and executing specific business objectives. With this in mind, there are several key points or benefits from IoT that many businesses can explore for their own individual plans for improvement: - Understand big data: The collection and analysis of big data can offer insights into many factors that are essential to the effective running of a business. First and foremost, IoT can provide insights into all-important market trends as well as product performance. You can leverage these insights to help inform your short- and long-term business strategies. - Engage each and every customer or client: IoT can provide you with data about each individual customer, so you can deliver personalized service. And with IoT devices connecting you to your consumer base, you can analyze data to better understand each stage of your customers’ purchasing cycles — from how they research, to how they buy and even use your products and services. This will enable you to create more focused and effective marketing campaigns. - Remote workforce: Research shows that remote work is on the rise. With IoT, the ability for your remote workforce to be even more connected to everything from files to inventory equals greater productivity and a wider range of tasks that can be accomplished remotely. - Expand your presence: From smarter marketing campaigns keeping you connected to your clients to better communications with all members of your workforce, IoT allows your business to expand its presence both with consumers and employees. 
What’s more, through collaboration solutions like Presence from Consolidated Technologies, Inc., you achieve levels of flexibility and speed that empower your presence while it expands. Take Off With the IoT and Consolidated Technologies, Inc. Clearly, with IoT and its business applications set to expand, you need an IT and communications partner that can guide you through the plethora of cutting-edge innovations so you can successfully leverage this new technology and the resulting data into your business model. That’s why you want Consolidated Technologies, Inc. looking to the future with your business in mind. We don’t just offer tech solutions — we understand the challenges of your business in order to deliver next-generation vision! Contact us today for more information and answers to all your IoT and IT-related questions.
<urn:uuid:5dcc1294-a93c-4fb7-9fe2-6eff4257addd>
CC-MAIN-2022-40
https://consoltech.com/blog/iot-in-business/amp/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00022.warc.gz
en
0.947239
1,528
2.84375
3
One of the main concerns of the organizers of the Olympic Games to be held in Athens this summer is security: not only physical security, but computer security as well. The emphasis placed on avoiding problems with the computers that will manage huge amounts of data during the games will be proportional to the magnitude of this global event. The information that must be protected at any Olympic Games is so valuable that it justifies all efforts to guard it. However, in companies, where the scale of the IT structure is not usually on the level of the Olympic Games, financial investment in security is not always enough to protect information. On the one hand, it is possible that security investment is insufficient, and therefore inefficient. On the other hand, it is just as absurd to overprotect a system as it is to leave it unprotected, since in that case the money invested becomes money wasted.

When you evaluate the expenditure to be made on an IT security structure, there are three aspects that must be taken into account.

First, you must know the value of the data or systems to be protected. This is probably some of the most difficult information to obtain in a company. How much is a company's know-how worth? Or, even more difficult, what is the current value of the project for a new product that is still at the development stage? The number of variables to be considered is endless and, in many cases, impossible to quantify objectively. The best way to obtain this data is through indirect calculation, that is, by measuring not total losses, but the financial loss caused by the loss of information. Just imagine, for example, the cost of having your company's network halted for an hour. If you divide your annual turnover by the number of working hours, you will see the cost of having your servers at a standstill for an hour.

The second aspect to be considered is the investment to be made in security systems. Under no circumstance should you have a budget that exceeds the value of the information to be protected. This would be like keeping an old stained rag in a safe, where the cost of the safe is greater than that of the cloth. A security system like this would be redundant. (Unless, of course, the rag was stained by Leonardo da Vinci and called the Mona Lisa; then maybe some additional expenditure on extra security measures might be in order.)

Finally, you have to calculate how much it would cost for an attacker to breach your security measures and access the protected information. This figure should be very high; that is, obtaining the information must be far more costly than the information itself is worth. In this way, you set up an intangible barrier that is very difficult to get over, since, if it is not worth breaking into a system, almost nobody will try to do it. At least, most attackers will be dissuaded from trying.

As usually happens when you try to assess a security risk, establishing the right measuring standards is rather complicated, as there is no perfect metric and, even if there were, it would need to be capable of adapting to every business alternative. In fact, a parameter which is valid for one business vision is completely different for another, irrespective of how similar the businesses might be. Luckily, you can be helped by computer security experts with the necessary experience and knowledge to draw up a close approximation of your IT security needs and the investments to be made.
By contrast, establishing an investment policy based on the opinions of people who lack that knowledge can lead to highly undesirable effects. To sum up, leave computer security to experts who keep up to date with the field and understand the issues involved. This is the best way to ensure that you invest exactly what you need in security systems, no more, no less.
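As a rough, worked illustration of the indirect calculation and the three comparisons described above, the sketch below estimates the hourly cost of downtime from annual turnover and checks a proposed budget against the value of the information and the estimated cost of an attack. All figures and the function name are hypothetical and exist only to make the comparison concrete.

```python
# Hypothetical figures - replace them with your organisation's own estimates.
annual_turnover = 40_000_000      # yearly revenue
working_hours_per_year = 2_000    # hours of business operation per year

# Indirect calculation: the cost of one hour of network downtime.
hourly_downtime_cost = annual_turnover / working_hours_per_year
print(f"One hour of downtime costs roughly {hourly_downtime_cost:,.0f}")  # 20,000

def investment_is_sensible(information_value, proposed_budget, attacker_cost):
    """Apply the two rules of thumb from the article: spend less than the
    information is worth, and make an attack cost more than the information
    would yield."""
    return proposed_budget < information_value and attacker_cost > information_value

# Example: data valued at 2,000,000, a 300,000 security budget, and controls
# estimated to cost an attacker 5,000,000 to defeat.
print(investment_is_sensible(2_000_000, 300_000, 5_000_000))  # True
```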
<urn:uuid:d17bd523-c030-4cdd-a579-6cb5781f4793>
CC-MAIN-2022-40
https://it-observer.com/how-much-should-you-invest-it-security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00022.warc.gz
en
0.964854
759
2.578125
3
The bring-your-own-device (BYOD) movement, a recent trend that has been redefining the corporate world, offers some intriguing possibilities for implementation in schools. It can also introduce a new set of challenges for educators, from security concerns to classroom control policies. Using school-funded computing devices to take advantage of the expanded opportunities for classroom exploration that the internet offers may simply not be a viable possibility for some schools, but the costs would be defrayed by students bringing their own devices. Other schools could benefit from devices like tablets, which can increase mobility within the classroom. Some critics and analysts, including StudyMode CEO Blaine Vess, have argued that advanced technology in the classroom is a vital component of 21st century learning, from the dual perspectives of expanded resources and teaching children to interact with the technology that will become an important part of their adult lives. While he acknowledged that smartphones can be sources of student distraction, he appealed to the reality of 21st century lifestyles – that smartphone use is everywhere, all the time, and it may be impractical to wage a battle against it in the classroom. "Banning mobile devices from the classroom would likely have the opposite effect, with students rebelling, ignoring the policy and in turn, hindering effective performance," wrote Vess. "If the current workforce is any indication of the future, today's students need to have the freedom to work on their own devices as adults, so they can hone the essential skills of filtering out distractions proposed by mobile freedom early on." Strategies for mobile device management in schools Mobile device management often falls to an entity that is not the device's actual user – corporations for business users or parents for children – practices that educators will have to emulate. The first step to BYOD classroom management could be in determining which device will be allowed to augment or substitute for classroom computers. Smartphones may be a better solution than laptops since they're easier for students to transport, and because they're cheaper, better bridging the gap between students' family incomes. By embracing BYOD instead of shunning it, teachers can better enforce mobile application control. Teachers can instruct students on what applications can and cannot be downloaded and used in the classroom. Blocking applications can help keep distracting and potentially hazardous downloads off of students' devices and the school's network. While BYOD necessitates more user responsibility for his or her device, schools can protect their networks with software layered security while teaching students about safe techniques for computer usage.
<urn:uuid:4eb96a9b-7f33-49bc-ac3d-50ce3ef796e8>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/classroom-management-strategies-for-byod
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00223.warc.gz
en
0.95976
506
3.15625
3
Many connected device makers have yet to prioritize security and that makes everyone more vulnerable. Jason Haddix is the head of trust and security at Bugcrowd. Today, internet of things devices outnumber humans. Internet-enabled children’s toys, household appliances, automobiles, industrial control systems and medical devices—new IoT devices are being designed and released every day but many of these devices are built with little-to-no security in place. Given the rapid growth of these devices and unregulated market, it’s no surprise that these devices represent a growing threat as well as a major opportunity for hackers. How Manufacturers Play a Role in IoT Insecurity The sheer number and types of the devices being networked and connected to cloud interfaces and on-the-internet APIs are one of the greatest challenges in security today. Each device has its own set of technologies, thus its own set of security vulnerabilities. Add to that the pressure to rush to market and meet consumer demand, many manufacturers have simply not implemented a robust security review process. » Get the best federal technology news and ideas delivered right to your inbox. Sign up here. What’s especially concerning is that IoT manufacturers are collecting large amounts of life pattern behavior on their users, as well as access to home and work networks. This is a treasure trove of useful data for those that would target phishing attacks or product marketing, or pivot off these relatively insecure devices to compromise other systems on the network that contain more valuable data. Unfortunately, many internet-enabled device manufacturers have not yet fully realized that they are now complex software vendors, shipping not only the embedded control system on a toy or vacuum, but frequently also managing mobile applications across multiple platforms, web applications, cloud storage, and web APIs. They have a responsibility to ensure product security throughout the life of the device. However, many IoT devices have poor software update mechanisms that compound the impact of design flaws and security vulnerabilities. As a result, while the attacks we've seen in the last year have been massive, they’re built around trivial vulnerabilities. The Mirai botnet, for example, has grown by exploiting IoT devices with weak or default passwords. It was responsible for unleashing one of the largest DDoS attacks to date and is still at large. It's important for IoT vendors who haven't prioritized security to take attacks as a wake-up call, and understand that we're entering a period where there is a very real, calculable, and painful impact to having insecure products. These types of attacks will only grow until the industry gets a handle on the issues of IoT security. Defenders: Evolving today’s policies IoT security is in the standards phase right now, which means legislators haven’t yet prescribed specific policies around what security devices need to have in place for manufacturers to ship them. Some existing efforts have been made to classify the devices by the confidentiality of data connected devices handle, but even this proves to be troublesome with such a large diversity of devices. Another challenge is the physical aspect of security when it comes to IoT devices. Should they be held to a standard that requires not only protection from remote exploitation, but also having protections from reverse engineering a device that an adversary has physical access to? 
If so, the requirements become very high in the development and electrical engineering aspects of these devices and systems. Simple policies (that should be enforced by the FCC or some other regulatory/industry-council) should require annual third-party security testing on both the device and the websites or APIs it uses. This should mimic the likes of what the PCI Security Standards Council mandates for payment processors. In addition, minimal standards should be enforced, like the use of HTTPS or SSL in all communications, forced changing of default administration passwords, two factor authentication options, encryption of data at rest, and logging. Several projects have been spun up to threat model (generically) the IoT landscape that would work as standards for policy, including the OWASP Internet of Things (Security) Project. The industry has learned some major lessons around IoT security in the last year. However, change takes time. Security isn’t a destination, it’s a process. The adversary is going to continue to find new ways to attack devices, and the industry needs to be better prepared. That’s why securing IoT begins at the bottom: bringing together security experts that can engage in this process. IoT is an enormous category so creating a set of standard requirements will be challenging yet it’s vital to start identifying vulnerabilities no matter how minor or obvious, and making changes to move the security maturity of this market forward.
<urn:uuid:17295026-1e3a-4804-9b11-a05daac47749>
CC-MAIN-2022-40
https://www.nextgov.com/ideas/2017/04/hackers-pov-internet-things-security/137292/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00223.warc.gz
en
0.953493
956
2.546875
3
Viruses are malware that can infect your computer and cause havoc, and make you lose more than just sleep. You can potentially lose your data and your identity if a hacker can get through your computer’s defences via a virus. What Is An Antivirus Program An Antivirus program is designed specifically to prevent any virus from entering your system, detects any virus located in your computer and resolves any issues by removing the unauthorized viruses from your computer. Antivirus programs have now been designed to combat a large range of malware instead of only viruses. Why Do You Need An Antivirus Program It is important for users to have antivirus tools and programs installed in their computers as without one, your computer will be riddled with viruses within minutes of getting connected to the online world. Viruses are never ending, bombarding your computer in an endless barrage. Can you imagine that thousands of new viruses are created on a daily basis? This is one reason why antivirus programs have to be up to date and relevant. This is also why there are regular updates so that the database has the new viruses on record so that they can identify and remove any viruses that may enter your computer or network. How Do Antivirus programs Work Antivirus programs offer certain basic features and protections like: - Scanning files for any viruses - Automatically running scheduled scans - Allowing scans of any files or drives at any moment - Removal of infections, viruses and malicious code - Display the status of your antivirus scans - Malware are malicious code that can illegally enter your network or computer with the intent of destruction, ransom or theft of confidential data. What Is An Anti-Malware Program An antimalware program is software that prevents, detects and removes malware from your system. Your computer is sure to get infected with viruses, Trojan horses, worms, ransomware, key loggers and more, if it is not protected. Why Do You Need An AntiMalware Program Malware or malicious software can install itself on your computer without your knowledge via suspicious emails, dubious websites and more. This malicious code can then wipe out your hard drive; can cause identity theft, get access to all sensitive private information and more. Your computers should have their own antivirus tools, but antimalware programs are much powerful and act like a strong barrier against all the cyber threats that would be bombarding your computers. With all of us being more connected online than ever before, it is time we paid more attention to cyber threats and online protection. AntiMalware programs work on the same principle as antivirus programs, but as malware is an umbrella term under which viruses fall, it is safe to say that antimalware programs are a lot more powerful and reliable when it comes to dealing with many forms of malware. Different Types of Malware There are so many malware programs out there, that you could get dizzy just thinking about them. This will help break down what malware is and the various malware there are, for easy understanding. Short for malicious software, malware is software that contains malicious code that carries the intent to compromise your confidential data and causing harm to the host computer. Various types of Malware include: This is the most commonly known form of malware. It self replicates and can be transferred via infected programs, security loopholes in web apps, infected documents and downloads and more. 
They can harm your computer and do much more damage.

Worms exploit security faults in your system and, in comparison to viruses and other malware, they seem relatively harmless. They'll use up your bandwidth, which is why your speed may seem slow, but they can also damage the host computer, as they can contain certain 'payloads' that can steal confidential data and more. Worms can self-replicate and spread without human intervention (like opening files, etc.), and can send out mass emails with infected files attached to all contacts in the infected host computer's address list.

If you have a computer, chances are high that you may have come across the term Trojan. This is a tricky piece of malware: it tricks you into thinking that it is a useful program that you need, so that you will download and install it. Trojans can give access to hackers who can then steal confidential data, take control of your system to monitor computer activities, modify files, install other malware programs and more.

As the name suggests, spyware monitors and 'spies' on you without your knowledge. It can accompany a Trojan, or can get through security cracks in your system.

Get ready for annoying pop-up ads you just don't want to see. Adware consists of sponsored ads that generate revenue for the advertisers – but not in a legitimate way. It can cause annoying pop-ups and can also contain spyware, which is harmful to your computer.

Ransomware can hold your computer hostage as it steals confidential data, or locks down your system so you cannot access it. It can spread like a worm or can get through a security loophole in your system.

Rootkits are really stealthy malware that are virtually undetectable with simple security measures. Rootkits hold a wide array of malware tools that can be used for malicious intent, making them hard to remove. In the meantime, they can tweak your security software (uh oh), modify system configs, control your computer, remotely access sensitive data and much more.

Best Antivirus and Anti-Malware Software

Avast is a well-known antivirus program that remains one of the best to download and use. Avast antivirus and anti-malware programs tend to form a complete package that will help keep cyber threats at bay. It protects against online threats arriving via your incoming email, IMs, P2P connections, local file transfers and more. You can also opt for the free Avast Home version, which works just as well and is a great replacement for other antivirus competitors. It's easy to set up and install, has a good reputation backing it up, and works on all Windows versions and more. All you have to do is register it to use it for more than a month, but hey – it's free. The home version is great and free, but Avast is also great antivirus software for your business, offering different tier plans according to your budget.

AVG also has a solid fan base, and the reliability of its antivirus software has continued to increase the number of AVG users. However, for those who didn't know, AVG was bought by Avast. We all know Avast has a solid reputation as well, but the two companies do not overlap and operate separately. The AVG antivirus software includes AVG Zen, which allows you to check AVG security on all your devices. The free version offers a basic security plan, but if you really want to protect your computer, you'll have to upgrade to a paid version which will offer full protection.
The full protection includes extra security when making online payments, added security against hackers and unauthorised access to your confidential data. AVG is still a pretty good antivirus program, but it doesn't have a quick scan option – a feature that is handy for those times when you suspect something may be up with your computer but don't have the time for a full scan. This feature is present in Avast and Bitdefender. However, if you do the upgrade, you'll have access to an enhanced firewall, dedicated ransomware protection, encryption software and much more, including its file shredder. Overall, AVG has a great array of customization options for an antivirus program.

Avira is one of the oldest companies on this list, and the free version is great, but with many features missing, it's better to upgrade to the paid edition so you have access to Web Protection, Game Mode and Mail Protection, which are part of the Internet Protection panel. It has improved malware protection and, with regular updates, Avira is an option to consider when thinking about which antivirus or antimalware program to get for your home or business. It can detect different types of malware very well. The only downside we could think of is that it takes ages to do a full scan. Even if you do a repeat scan, things won't go any faster; it will still take the same amount of time as the first scan, which is hours more than any competitor. This, of course, also makes your computer a little slower than usual.

ESET is more than just an antivirus program. It teaches you how to handle your security through a series of educational modules, so you end up smarter and more knowledgeable about online security than ever before with its cybersecurity training. It is also geared towards social media users, scanning your social media accounts for any malicious code or content that could potentially harm your computer.

Kaspersky is also a top favourite amongst many users, and it helps keep your digital experience danger-free. Kaspersky is a company that has specialized in antivirus and antimalware programs and tools that prevent ransomware and other forms of malware attacks from targeting you and your computer. This internet security suite handles not only viruses, but also other malware that could enter your system.

If you have a large family and each of your family members is glued to the internet in some form or another, helping them keep their private data private is more important than ever before. McAfee turns out to be a steal of a deal, as a small amount can have your whole family and all connected devices covered under its protection plan. This offers a much better deal, as the fast scanning can set your mind at ease, especially if you suspect foul play at work. Just remember that every action has an equal and opposite reaction – and for this reason, your computer system will slow down. This is the economical choice among antivirus and antimalware programs.

Norton is a world leader in the antivirus and antimalware segment, and even years down the line after its launch, it doesn't disappoint. Its malware detection is simply amazing, and we have to give a thumbs up for the simple and easy UI. You have a myriad of features to choose from that will leave your devices and computers in safe hands. The only downside is that you may be stuck with a certain pricing tier according to the number of connected devices.

Many more of us connect with loved ones far away via video calls or handle work meetings via video conferencing.
This has led Bitdefender to create a feature that protects the privacy of your webcam. It offers great malware protection. While it lacks VPN and backup software, it takes the lead for PC protection with additional security features, excellent malware defences, super quick scans and much more.

As our technology evolves, so do hackers. Much malware is created to imitate real programs, fooling people into downloading it because they think it is something they need. Malwarebytes is an antivirus and antimalware program that can disguise itself so that it can counteract malicious code stealthily. While many other companies' programs tend to turn a blind eye to annoying pop-ups, adware and the like, Malwarebytes manages to address all of these issues while effectively protecting your computer from cyber threats.
<urn:uuid:1d03fda2-93e9-4b63-bf1b-a1bd5014f25f>
CC-MAIN-2022-40
https://bluegadgettooth.com/online-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00223.warc.gz
en
0.944761
2,413
3.28125
3
There are many types of phishing attack nowadays, to the extent it can be tricky to keep up with them all. We have unique names for mobile attacks, postal attacks, threats sent via SMS and many more besides. However, we often see folks mix up their spears and their whales, and even occasionally confuse them with regular phish attempts. We're here to explain exactly what the difference between all three terms is. What is a phishing attack? Think of this as the main umbrella term for all phishing attempts. It doesn't matter if it's a spear, a whale, a smish or a vish, or anything else for that matter. They're all able to be grouped under the banner of "phishing". This is where someone tries to have you login on an imitation website. This site may emulate your bank, or a utility service, or even some form of parcel delivery. They get you on the site in the first instance by sending a fake email, or text, or some other missive. The bogus message will emulate the real thing, and may be very convincing in terms of looking like the genuine article. They may also use real aspects of the actual website inside the email. The phishing page, too, may steal real images or text from the genuine website. It'll ask you for logins, or payment details, or both. Depending on what the phishers intend to do with stolen accounts, you may find they change your logins too. What is spear phishing? Regular phishing attacks are blasted out to random recipients in their hundreds, thousands, or hundreds of thousands. The sky is the limit. The attackers are hoping that if just a few people respond, they'll be able to make their ill-gotten gains pay off. It's potentially low risk, high reward. Spear phishing, by contrast, is when the phisher targets specific people. It could be individuals, or people at a certain business. The intent may be financial, or it could be a nation state attack targeting folks in human rights, or legal services, or some other sensitive occupation. What is whaling? Whaling is the gold standard for targeted phish. They're the biggest and most valuable people or organisations to go after. "Whales" are typically CEOs or other people crucial to the running of a business. They'll have access to funds or be deeply embedded in payment processes/authorisation. CEO/CFO fraud, where scammers convince employees that the CEO/CFO needs large sums of money wired overseas, is common. This is also more broadly known as a business email compromise scam. The only way you'll likely run into this attack if you're not a CEO/CFO/similar is if you work in a department tied to money transfers. For example, in payroll, or some other financial aspect of the organisation. You'll need to keep an eye out for bogus wire transfer requests, and the business should have processes and safeguards in place to combat CEO/CFO fraud attempts. We have a longer guide to avoiding spear phishing here. We also have a more general guide to detecting phishing attacks, which will hopefully help keep you safe from harm no matter what variety of phish you're facing.
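If you want to sanity-check a link programmatically, the idea behind "make sure the URL matches the intended site" can be sketched in a few lines of Python. This is a simplified illustration only: a production check would also need to handle punycode lookalikes, URL shorteners, and public-suffix rules.

```python
from urllib.parse import urlparse

def matches_expected_domain(link, expected_domain):
    """True only when the link's host is the expected domain or one of its
    subdomains. A lookalike such as paypal.com.account-check.example fails,
    because its registered domain is account-check.example, not paypal.com."""
    host = (urlparse(link).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

print(matches_expected_domain("https://www.paypal.com/signin", "paypal.com"))                  # True
print(matches_expected_domain("http://paypal.com.account-check.example/login", "paypal.com"))  # False
```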
<urn:uuid:cb928e29-09eb-4ae9-b75c-b2ac93abbc80>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2021/12/spear-phish-whale-phish-regular-phish-whats-the-difference
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00223.warc.gz
en
0.961612
674
2.90625
3
The ‘Internet of Things’, a term coined by Kevin Ashton in the year 1999, began gaining momentum over the last decade given to the recent advancements attributed by the human mind’s inquisitive nature and the need to widen the scope of technology beyond the confines of industrial use cases. A report claims that by the year 2021, the world will witness an increase in the number of IoT devices by 35 billion. While IoT has found its potential in several industries such as infrastructure, military, consumer IoT has promised the most favourable growth for tech enthusiasts in the current decade. Major players in the industry, Amazon, Google, and Apple, nurtured tech enthusiasts’ interest in bringing artificial intelligence to everyday life. Light fixtures, home appliances, home security systems, and even entertainment are integrated with IoT. To that end, it is interesting to note that Statista predicts the home IoT market to have a growth of $53.45 billion by 2022 The potential of cellular IoT in today’s connected world Today, with increasing worldwide penetration of the Internet of Things (IoT) and fast-paced adoption of smartphones, cellular IoT has become one of the significant growth contributors for the smart home segment. When the tech giants created convivial smart assistants and other everyday devices for the smart home segment, they designed the cellular IoT to tap into the existing cellular technology to create a sensor-based communication line between the phone and the device. As a result, the use of 2G/3G/4G increased multifold, with almost 3.5 Billion predicted cellular IoT connections in 2020. The growing number of internet users and the rising need to use voice-control technology is a testament to smart home devices’ evolving capabilities. With the emergence of 5G and advanced machine learning capabilities, cellular IoT showcases vast potential to penetrate the consumer IoT network. IoT Use cases for your smart home Deep diving further into how cellular IoT revolutionised the smart home segment, four use cases depict the true potential of smart devices in everyday life. As our lives get busier by the day, the need for automation grew. Home automation is increasingly becoming popular as IoT allows devices to ‘interact’ with each other to create a ‘smart home.’ It has become common practice in households with smart assistants to use voice commands to control thermostats, electronic devices like coffee machines, and even larger appliances, including washers, refrigerators, and more. The growing popularity of IoT is not merely because of voice-command accessibility. Automating daily tasks or creating routines has been the driving factor for the smart home concept. However, the range of devices that can be controlled is dependent on the control protocols. While WiFi is the most commonly used and popular form of IoT control, LTE-M is gaining momentum among tech fanatics for more advanced automation tasks such as sending alerts to purchase essentials based on consumption patterns. A simple and secure alternative to Wi-Fi, LTE-M is perfect for building a sustainable smart home solution that is cost-efficient and includes high connectivity capability. Home security and monitoring The growth of security devices holds a formidable place in the comprehensive list of IoT usability. Smart security systems are now the most promising component of the smart home segment. 
With the growing use of the internet, it is reported that given a choice, 56% of consumers would switch home security providers to monitor their home remotely using smartphones. Door locks, security cameras, doorbells, motion detectors, and facial scanners are some of the prominent smart devices used in establishing a home security system. These devices leverage cellular IoT’s promising capabilities to detect anomalies, record data for future review. Using Zigbee or Z-wave technology, smart security systems establish wireless communication with the central hub and capitalises 4G cellular connectivity for instant access. Thanks to the highly secure and easy accessibility of security systems with smartphones, wireless smart home security is gaining momentum. Smart utility metering Considered a potential tool to reduce household energy consumption, home utility metering systems are gaining popularity to monitor a range of consumption metrics from electricity, gas, and water. According to a recent report, the smart meter market’s size is expected to be $12 billion by 2024. Interestingly, cellular communications have proven to provide reliable connectivity options for smart metering infrastructure through low latency in 4G LTE. As the reach of modern cellular networks expands, home utility metering is becoming easy and inexpensive to install. Smart entertainment devices have infiltrated the entertainment segment as hobbyists, and other non-tech users shift towards voice-command operated devices to control their entertainment units right from playing music, opening streaming apps to interacting with other devices like smartphones to answer calls. It is interesting to note that significant players in the audio system and TV industry, including Bose, Sonos and countless others have developed smart devices that include voice controls and smart apps to replace the traditional remote. As smartphones play a pivotal role, VoLTE is also used to direct calls to smart entertainment devices, thus creating a newer need for smart entertainment units. Smart TVs, Bluetooth music devices, and wireless speakers have become the sector’s promising offerings as their Wi-Fi dependency is constantly diminishing. Smart assistants, including Alexa and Siri, have since evolved to make the hands-off technology work better. Media streaming devices such as Amazon Fire TV Stick and Apple’s Homepod are also gathering a fanbase for using NLP and voice recognition software to improve user experience. Smart lighting system Owing to the retail cost, smart lights were once a distant dream but no more. IoT powered lighting solutions allow users to control devices directly from their smartphones and other voice-command devices. Apart from the freedom to control lighting inside the house, these smart lights gain momentum as they are considered a source of clean energy. Utilising cellular technology, including NB-IoT and other 5G compatible connectivity infrastructure, electronic companies worldwide cater to the growing need for smart solutions both from technology innovation and environmentally conscious standpoints. Witnessing the gamut of offerings currently available in the smart home segment, a recent report states that smart-home technology will have a different landscape in the future with a market size of USD135.3 billion by 2025. Some of the promising areas of smart home IoT include Artificial intelligence that can use data to understand a person’s habits and robotic appliances that will innovate to support the elderly. 
Having said that, it is interesting to draw a parallel to the usage of IoT between smart homes and smart cities, from security systems such as surveillance cameras to infrastructural automation like controlling traffic lights. Tech giants, particularly mobile carriers and networking companies, are becoming increasingly interested in expanding their service scope to become more cellular IoT friendly. Interestingly, the passage of cellular IoT in the industry is gaining further momentum thanks to 5G and cellular enable sensors that have overtaken the need for WiFi to create smart devices. As cellular IoT bridges the gap between instant connectivity and automation, we are inching towards a fully ‘smart’ world. Written by Ajit Thomas, Co-Founder & CMO, Cavli Wireless
<urn:uuid:582d2754-a5dd-4dd1-b017-7931fdf12b8f>
CC-MAIN-2022-40
https://disruptive.asia/cellular-iot-transforming-smart-home-segment/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00423.warc.gz
en
0.930587
1,473
2.59375
3
Encryption protects end-user privacy, and its adoption is increasing rapidly. Unfortunately, one side-effect of the increased use of encryption is the erosion of visibility for network defenders. Encrypted Traffic Analysis is a way to restore network visibility for defenders while maintaining privacy for users. To understand encrypted traffic analysis, we first need to define cryptanalysis. Cryptanalysis is derived from two Greek root words, CRYPT – hidden, ANALYSIS – loosen, and investigates the hidden aspects of communication systems. Historically, there are two kinds of cryptanalysis in the context of network security: breaking encryption and side-channel analysis of potential information "leaks." Encrypted traffic analysis is a side-channel analysis that allows network defenders to identify malware communications and threat actors hiding activity in secure encrypted traffic.

There are three levels and categories of Encrypted Traffic Analysis:

| Level | Complexity | Category | Method |
|---|---|---|---|
| Level 1 | Simple | Traffic Analysis | Network Transaction Monitoring |
| Level 2 | Enhanced | Certificate Analysis | Deep Packet Inspection |
| Level 3 | Advanced | Cryptanalysis | Deep Packet Dynamics (DPD) |

- LEVEL 1: Traffic Analysis – Information available in the network transaction (IP address, ports, protocol, and timing)
- LEVEL 2: Certificate Analysis – Looking at the particulars of the encryption used (cipher suites and extensions, etc.)
- LEVEL 3: Deep Packet Dynamics (DPD) – Looking at network traffic characteristics and traits, such as patterns in the sequence of packet lengths and times. Learn more about DPD.

Techniques for Encrypted Traffic Analysis

ThreatEye groups ETA techniques into three main categories:
- Unique identification of network entities such as devices, domains, IPs, users, and connections
- Identification of meaningful relationships between network entities on the globe, in the network, and with similar features
- Observation of changing behavior of network entities over time with comparisons to established baselines

| Level | Unique identification | Meaningful relationships | Behavior over time |
|---|---|---|---|
| Level 1: Traffic Analysis | Protocol Fingerprint – each machine has a protocol fingerprint based on the services it utilizes or provides | Shared IP or ASN – often multi-tenant servers host multiple malicious sites in the same location | Pattern of life/time of day – traffic at odd hours of the day or night can indicate malicious traffic |
| Level 2: Certificate Analysis | TLS Fingerprinting – unique combinations of cipher suites and extensions | Malware use of TLS – identify malware propensity with specific fingerprints | Novel Fingerprints – the emergence of new fingerprints can indicate the presence of malware or other unwanted software on the network |
| Level 3: Deep Packet Dynamics | OS Fingerprinting – identify host and IoT device types from "instinctive" packet header details | Application ID – characterize applications based on similar byte patterns of typical usage | Interactive Sessions – detect usage of Remote Access Toolkits (RATs) by identifying the characteristic patterns of transmission of individual keystrokes |

Encrypted Traffic Analysis to Uncover Command & Control (C2) Activity

Malicious threat actors and malware system operators communicate with infected target systems using a set of techniques called Command and Control (C2). Threat actors employ C2 techniques to mimic expected, benign traffic using common ports and standard encryption protocols to avoid detection. Despite these precautions, Encrypted Traffic Analysis with machine learning effectively uncovers different types of C2 activity.
Level 1 Defends Against: Beaconing An infected system uses beaconing to reestablish contact with the control infrastructure. This activity is characterized by sending identical messages at a specified interval. When repeated messages surface, Level 1 ETA recognizes potential beaconing activity by capturing patterns within both the communication intervals and the byte totals in both directions. Level 2 Defends Against: TLS Fingerprinting The encryption software libraries used by malware often differ from the encryption libraries used by the browser, apps, and other legitimate software. When beaconing activity identifies a suitable command host, an encrypted C2 protocol initiates a secure connection using these same libraries. These events create a distinctive signature that can be identified on the network. ThreatEye will identify new encrypted sessions, both through libraries used and by identifying encrypted fingerprints, alerted as “new tls sha1 found” and “new tls ja3s found”. Level 3 Defends Against: Sequence of Packet Lengths Once a secure connection is made, communication between the C2 infrastructure and the infected target begins. Due to the specific nature of the C2 commands, the number and size of the packets being exchanged over this connection often have characteristic signatures that distinguish them from typical web traffic. Here, real-time analysis of packet traits like these can yield signature deviations that point to C2 activity. In summary, ETA combined with machine learning techniques effectively identifies malicious C2 activity on the network. Despite having no visibility into the content of the exchange, ETA tells us a great deal about encrypted traffic and provides valuable insights to aid network defenders. Defending Against Exfiltration with Encrypted Traffic Analysis Once a threat actor has identified information of value, they must find a way to transport that data out of the network to resources they control. Bulk transfers of large data sets are easily detectable; therefore, attackers use other, stealthy techniques to exfiltrate data. Level 1 Defends Against: “Low and Slow” Rather than exfiltrating the data in a single transfer, threat actors can choose to release small amounts of data over time. Basic traffic analysis recognizes this “low and slow” technique by tracking byte totals over time. Level 2 Defends Against: Tunneling Tunneling encapsulates one protocol—or layer—of encryption within another one. This type of traffic has a different packet dynamic profile than standard traffic on that port. ThreatEye’s parsing capabilities can even detect nested layers of encryption. Some forms of tunneling, such as DNS tunneling, are also detectable by analyzing the ratio of bytes transferred in each direction during a connection. Level 3 Defends Against: Cloud Service Each cloud application has a highly recognizable packet dynamics fingerprint tied to its typical usage. Exfiltration to cloud-based accounts requires extensive data transfer. Profiling behavioral usage can highlight and identify exfiltration events outside regular enterprise activities. Encrypted Traffic Analysis, coupled with machine learning capabilities, evaluates complex data patterns over time and highlights which activities grade as normal (potentially benign) or abnormal (potentially malicious)—all without access to the content of the data being transferred.
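As an illustration of the Level 1 analysis described above, here is a minimal sketch of how beaconing might be flagged from connection metadata alone, by scoring the regularity of send intervals and message sizes. This is not ThreatEye code; the event format and the thresholds are assumptions chosen for readability.

```python
from statistics import pstdev

def looks_like_beaconing(events, max_interval_jitter=2.0, max_size_jitter=16.0):
    """events: list of (timestamp_seconds, bytes_sent) tuples for a single
    source/destination pair, ordered by time. Flags traffic whose send
    intervals and message sizes are nearly constant - the repeated,
    identical-message pattern of C2 beaconing. Thresholds are illustrative."""
    if len(events) < 5:                       # too few samples to judge
        return False
    times = [t for t, _ in events]
    sizes = [s for _, s in events]
    intervals = [b - a for a, b in zip(times, times[1:])]
    return pstdev(intervals) < max_interval_jitter and pstdev(sizes) < max_size_jitter

# Example: a check-in roughly every 60 seconds carrying about 310 bytes each time.
sample = [(0, 312), (60, 310), (121, 311), (180, 312), (240, 309), (300, 311)]
print(looks_like_beaconing(sample))  # True
```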
<urn:uuid:95707378-e357-41ec-87f9-2f2a2b6f43a4>
CC-MAIN-2022-40
https://www.liveaction.com/encrypted-traffic-analysis/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00423.warc.gz
en
0.866231
1,366
2.78125
3
There is no silver bullet when it comes to cyber security. Organizations with multi-million dollar IT budgets still make headlines for being successfully breached, and even government intelligence organizations can't keep their hacking tools secret despite having some of the strongest protections and strictest policies on the planet. While providers of software solutions, apps and services, and hardware can deliver quality security solutions, the difference between stopping a breach and falling victim to one often comes down to human oversight. Unfortunately, technical security protections are often easily undermined by social engineering and human error. In fact, according to CompTIA's 2016 International Trends in Cybersecurity report, 58 percent of security breaches are caused by human error, versus 42 percent caused by technology error.

For example, look at Sony Pictures' catastrophic data breach, where the company lost employee personal information, emails, and even copies of unreleased films. When the dust finally settled around this attack, evidence suggested that the intruders began with credentials harvested from spear-phishing campaigns that deceived employees. Sometimes attackers don't even need to trick employees into giving up their credentials; they can just guess an over-simplified password. According to Verizon, 63 percent of all intrusions involve stolen, weak, default or easily guessed credentials. If human error plays such a big role in protecting a network, what are some of the security best practices that organizations should teach employees? Let's examine two primary areas: password protection and recognizing phishing attacks.

Why are weak credentials such a critical factor in data breaches? Because when people are given the choice between security and convenience, they often opt for convenience. Password reuse is a prime example of this. The 2012 Dropbox data breach succeeded because a Dropbox employee used the same password for his corporate account and his personal LinkedIn account, which was compromised during the LinkedIn breach earlier that year. Common password policies include using a range of alphanumeric and special characters, and requiring employees to change passwords every few months. Recent password dumps have shown, however, that employees instead opt to just change a single character in their password when asked to update or reset it.

One could make the argument that relaxing certain policies and protections could increase password security, if done in the proper context. The U.S. National Institute of Standards and Technology (NIST) recently released a draft of its upcoming digital identity guidelines document. In it, they recommend against password composition rules that require complex, hard-to-remember passwords. Instead, they encourage companies to have employees use longer, more easily remembered passphrases, such as TelevisionBrainsHurtEverything or SometimesDoggyOthersChair. Organizations should also encourage the use of password managers, as they solve the problem of both password reuse and password complexity. While they do have the drawback of allowing all accounts to effectively become unlocked by a single master password, users are much more likely to create and remember a single highly complex password, such as Min97$XP19*244, than multiple complex passwords for each individual service. Improvements that will benefit users are being made on the technical side as well.
For example, the Wi-Fi Alliance, which oversees the “Wi-Fi Certified” designation for wireless devices, recently launched the Wi-Fi Passpoint standard that improves both usability and security on public Wi-Fi hotspots for connecting clients. Instead of unencrypted (open) hotspots or entering a shared key, the Wi-Fi Passpoint program enables hotspot users to create a single Wi-Fi Passpoint account. People use this single account, saved on their mobile device, to automatically connect to any Wi-Fi Passpoint hotspot protected by WPA2 Enterprise security. Phishing emails also rely on human error to function, so organizations need to train their employees to make better security decisions. Teaching your employees how to spot a phishing attack could be the difference between enjoying your weekend, or spending it restoring backups after a ransomware infection. Being suspicious of unsolicited emails is step one towards spotting a phishing attack. Phishing emails are designed to look like a legitimate message, whether it be a notice to reset your Apple.com password or a shipment tracking confirmation with a zip file containing malware attached. Most phishing emails have one very common trait; they are unexpected. If you didn’t request your apple.com password be reset, chances are that password reset email is fake. If you haven’t ordered anything from an online store recently, that shipment tracking info is bogus. The most successful phishing attacks are surprisingly convincing, spoofing the typical format of a legitimate website or email. However, one thing phishing attacks cannot do is use the legitimate URL of the website they are mimicking (outside of very specific cases). Always treat any web links you receive in email messages as suspicious, and double check that the URL matches the intended site. Instead of clicking links in the email, browse directly to the organization’s website and find the desired page. Cyber criminals usually go for the weakest link when looking to infiltrate a network. More often than not, that link is a human. That means organizations need to be diligent when educating and creating policies for their employees. Finding a balance between hardcore “NSA-level” security and usable security is key. Providing phishing education, requiring password managers and encouraging employees to use longer passphrases will help your organization fight the cyber security human condition, and improve your overall security posture.
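Following the passphrase guidance discussed earlier, here is a small sketch of a word-based passphrase generator built on Python's secrets module. The short word list is a stand-in for illustration only; a real generator should draw from a large dictionary.

```python
import secrets

# Stand-in word list for illustration; a real generator should use a large
# dictionary such as the EFF diceware list of 7,776 words.
WORDS = ["television", "brains", "hurt", "everything", "sometimes", "doggy",
         "others", "chair", "harbor", "violet", "cactus", "maple"]

def passphrase(n_words=4):
    """Pick n_words uniformly at random with a CSPRNG and join them."""
    return "".join(secrets.choice(WORDS).capitalize() for _ in range(n_words))

print(passphrase())  # e.g. "MapleBrainsChairViolet"
```

With a 7,776-word dictionary, a four-word phrase carries roughly 51.7 bits of entropy (4 × log2 7776), comparable to a fully random 8-character password drawn from all printable characters, yet far easier to remember.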
<urn:uuid:5013b33a-bee2-4aa9-ae38-1f8bb876aed7>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2017/05/03/cyber-security-human-condition/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00423.warc.gz
en
0.936184
1,129
2.953125
3
Email was one of the earliest forms of communication on the internet, and if you’re reading this you almost undoubtedly have at least one email address. Critics today decry the eventual fall of email, but for now it’s still one of the most universal means of communicating with other people that we have. One of the biggest problems with this cornerstone of electronic communication is that it isn’t very private. By default, most email providers do not provide the means to encrypt messages or attachments. This leaves email users susceptible to hackers, snoops, and thieves. So you want to start encrypting your email? Well, let’s start by saying that setting up email encryption yourself is not the most convenient process. Not only must the sender have the means to encrypt an email, but the recipient of your encrypted email must have the means to decrypt it. You don’t need a degree in cryptography or anything, but it will take a dash of tech savvy. We’ll walk you through the process in this article. How email encryption works Encryption, put simply, is no more than scrambling up the contents of a message so that only those with a key can decrypt it. Sort of like those puzzles you did in school where every letter of the alphabet had to be converted to some other letter of the alphabet so as to decode the final message. Computers make the scrambling far more complex and impossible for a human to crack by hand. When you encrypt an email, its contents are scrambled, and only the recipient has the key to unscramble it. To make sure only the intended recipient can decrypt the message, email encryption uses something called public key cryptography. Each person has a pair of keys–the digital codes that allow you to encrypt and decrypt messages. Your public key is stored on a key server where anyone can find it, along with your name and email address. Conversely, you can find other people’s public keys on keyservers to send them encrypted email. When you encrypt an email, you use the recipient’s public key to scramble the message. Due to the technology behind this type of cryptography, the public key cannot be used to decrypt it. The email can then only be decrypted by the recipient’s private key, which is stored somewhere safe and private on his or her computer. Note that you cannot send encrypted email to someone without access to their public key. We’ll talk about a couple different types of email encryption and explain how key sharing works in each. Types of email encryption There are two main types of email encryption methods you need to know exist: S/MIME and PGP/MIME. In order for the recipient to decrypt an email encrypted by the sender, both parties must use the same type of encryption. S/MIME is built into most OSX and iOS devices. When you receive an email sent from a Macbook or iPhone, you’ll sometimes see a 5-kilobyte attachment called “smime.p7s”. This attachment verifies the identity of the receiver so only he or she can read the email. - Recipients must be in sender’s organization or have received at least one signed email from the sender in the past - S/MIME relies on a centralized authority to choose the encryption algorithm and key size - Easy to maintain - Harder to set up with web-based email clients like Gmail - More widely distributed thanks to Apple and Outlook built-in support The other heavyweight in email encryption is PGP/MIME, which is what we’re going to focus on in the latter part of this tutorial. 
- Recipient must have both public and private encryption keys, and the public key must be available to sender - Relies on a decentralized, distributed trust model - Fairly easy to use with web-based email clients - Free to get a certificate, which S/MIME is usually not (you buy an S/MIME certificate when you buy an iPhone or Macbook) - Choose how you encrypt and how well-encrypted the messages you receive must be - Not widely supported by email clients, so requires third-party tools This makes PGP/MIME cheaper and more flexible, but before we get into that, we’ll look at the S/MIME encryption features built into Outlook and Apple products. Encrypting email with Outlook Before you start sending secret admirer notes on Outlook, a couple requirements stand in your way. The first is that you must have a digital certificate. If you don’t already have a digital certificate, either one you created or from your organization, then you’ll need to create one: - Go to File > Options > Trust Center > Trust Center Settings > Email Security, Get a Digital ID. - Choose which certification authority you want to receive a digital ID from (we recommend Comodo). - You will receive your digital ID in an email. Now that you have a digital certificate/ID, follow these instructions to get it into Outlook: - Select Tools > Options and click the Security tab - Input a name of your choice into the Security Settings Name field - Make sure S/MIME is selected on the Secure Message Format box - The Default Security Setting should be checked - Under Certificates and Algorithms, go to the Signing Certificate section and click Choose - In the Select Certificate box, choose your Secure Email Certificate if it hasn’t been selected by default - Check Send these Certificates with Signed Messages - Click OK to save your settings and return to Outlook Okay, so now you’ve got a digital signature to put on your emails, but they won’t appear by default. To attach your digital signature: - Click New Message - Go to Tools > Customize and click the Commands tab - In the Categories list, choose Standard - In the Commands list, click Digitally Sign Message - You can click and drag the listing onto your toolbar, so from now on just click that to add your digital signature - While we’re at it, click and drag Encrypt Message Contents and Attachments onto the toolbar as well At this point we want to remind you that digitally signing an email is not the same as encrypting it. However, if you want to send someone an encrypted message on Outlook, that person needs to have sent you at least one email with their digital signature attached. This is how Outlook knows it can trust the sender. Conversely, if you want to receive an encrypted email from someone else, you’ll need to send them one unencrypted email first with your digital signature on it. This is a tedious downside to S/MIME. You can digitally sign your email just by clicking the new Sign button before sending. Now that you have each other’s digital signatures and certificates saved into your respective key chains (address books), you can start exchanging encrypted emails. Just click the Encrypt button that we added before hitting send, and that’s all there is to it! Encrypting email on iOS S/MIME support is built into the default email app on iOS devices. Go into the advanced settings, switch S/MIME on, and change Encrypt by Default to Yes. Now when you compose a new message, lock icons will appear next to recipients’ names. Simply click the lock icon closed to encrypt the email. 
iOS consults the global address list (GAL), a sort of keyserver for S/MIME certificates, to find contacts in your exchange environment. If found, the lock icon will be blue. You’ll probably notice a red lock icon next to some recipients’ email addresses. This means they are either not in your exchange environment (e.g. you don’t work at the same company) or you haven’t installed that person’s certificate, and you cannot send them encrypted messages. In this case, the process is similar to Outlook above. That person needs to send you at least one email with a digital signature attached. The option to attach signatures to your emails by default is found in the same advanced settings menu as the encryption options. When you receive that email, do the following: - Click the sender’s address - A red question mark icon will appear indicating the signature is untrusted. Tap View Certificate - Tap install. When done, the install button will change colors to red and say “Remove.” Click Done on the top right corner. - Now when you compose a message to that person, the lock icon will be blue. Tap it to close the lock and encrypt your message. OSX email encryption To send encrypted messages in the default mail program in Mac OSX requires the same condition as iOS and Outlook: you must first have the recipient’s digital signature stored on your device. When you compose a message and type in the recipient’s email, a checkmark icon will appear to show the message will be signed. Next to the signature icon, a lock icon also appears. Unlike iOS where you can select which recipients will receive encrypted email and which don’t, OSX is an all-or-nothing affair. If you don’t have the certificate for all of the recipients, the email cannot be encrypted. Remember to sign emails only after you’ve finished writing them. If it’s been altered, the certificate will show up as untrusted. Android email encryption On Android, you’ve got a couple options for how to encrypt your email. The CipherMail app allows you to send and receive S/MIME encrypted mail using the default Gmail app and some 3rd-party apps like K-9. It follows the same certificate rules as what we already discussed above. The other option is to use PGP/MIME, which requires both an email app and a keychain to store certificates. PGP requires a bit more setup, but you don’t need to receive someone’s digital signature in advance to send them encrypted email. OpenKeychain is a simple and free keychain tool for storing other people’s certificates and PGP public keys. It works well with K-9 Mail, but some other email apps might also be compatible. In OpenKeychain, you can create your own public and private keys. Input your email address, name, and password, and it will generate these keys for you. If you have an existing key, you can import it. To use a generated key with other devices and apps, you may export it. OpenKeychain also helps you search for other people’s public keys online so you can send them encrypted email. After you’ve added someone’s public key to your keychain, they will be saved for more convenient use later. To use OpenKeyChain with an email app, go into the email app’s settings and make OpenKeyChain your default OpenPGP provider. This process varies from app to app, but it should just take a bit of digging through settings menus to find it. Not all email apps (including Gmail) will support encryption, however. 
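To make the public/private key workflow behind PGP concrete, here is a minimal, illustrative Python sketch. It assumes the third-party python-gnupg package and a local GnuPG installation (neither is required by the apps above), and the names, addresses, and passphrase are placeholders rather than anything from this guide:

import gnupg  # third-party package: pip install python-gnupg (needs a local GnuPG binary)

gpg = gnupg.GPG(gnupghome="/tmp/demo-keyring")  # throwaway keyring just for this demo

# 1. Generate a key pair for Alice (normally done once; keep the private key safe).
key_settings = gpg.gen_key_input(
    name_real="Alice Example",
    name_email="[email protected]",
    passphrase="correct horse battery staple",
)
alice_key = gpg.gen_key(key_settings)

# 2. Export the public half -- this is what gets uploaded to a keyserver or
#    handed to anyone who wants to send Alice encrypted mail.
alice_public = gpg.export_keys(alice_key.fingerprint)
print(alice_public[:64], "...")

# 3. A correspondent who has imported Alice's public key encrypts to her address;
#    here we use the same keyring, so the key is already present.
encrypted = gpg.encrypt("Meet at noon.", ["[email protected]"])
print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into an email body

# 4. Only Alice's private key (plus her passphrase) can decrypt it.
decrypted = gpg.decrypt(str(encrypted), passphrase="correct horse battery staple")
print(decrypted.data)

The armored text printed in step 3 is exactly the kind of block a mail plugin pastes into the message body; only the holder of the matching private key and passphrase can turn it back into readable plaintext.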
Webmail encryption (Gmail) For web-based email clients like Gmail, we recommend a PGP/MIME encryption solution, as they are far easier to incorporate than S/MIME. For the purposes of this tutorial, we’re going to use a Chrome extension called Mailvelope with Gmail. Most browser extensions work in a similar manner, however, and follow the same basic principles. You can also consider EnigMail, GPGTools, and GNU Privacy Guard. To get started, install the extension and open the options menu. Start by generating your own key: enter a name, email, and password and click Generate. Most email encryption extensions come with a built-in key generator and key ring. If you already have a key, just select the option to import it via copy and paste. Now you’ve got an encryption key, but it doesn’t do much good if no one can find your public key to send you encrypted mail. You can upload your public key to a keyserver. We suggest MIT’s keyserver because it’s popular, free, and easy to use. - In the Mailvelope settings, navigate to Display Keys and click on the one you just made. - Go to Export to see the plain text of your public key. Copy it to your clipboard. - Head to the MIT PGP Keyserver and paste your key into the “Submit a Key” field and hit submit. Now go back to the MIT keyserver homepage and search the name you entered. You should see your key listed. Take note of the key ID, which is displayed both in the Mailvelope settings and on the MIT listing. This is useful if you have the same name as someone else on the keyserver because it serves as a unique identifier. Journalists, for instance, often publish their key ID onto their online profiles and social media so sources know for certain that they are emailing the right person. While we’re on the MIT keyserver site, you can use it to search for the public keys of others. Click on the key ID of the person you are searching for to display the plain text of their key. Copy it and paste it into the “import” section of Mailvelope to add it to your keyring. Now that you’ve added recipients to your key ring and made your own public key available to others, you can start sending and receiving encrypted mail. Mailvelope adds a button to the Gmail composer that opens another window where you can type out the message you want to encrypt. When you’re done, hit the encrypt button, choose the recipient, and transfer the encrypted text into the email. You can add unencrypted text in the email as well, but don’t tamper with the encrypted text. When you receive an encrypted email, the browser extension you chose should automatically recognize it and offer to decrypt it. The recipient will need an extension or some sort of PGP decryptor app on their end. In Mailvelope’s case, I just click the icon that appears hovering over the encrypted text, enter my password, and voila! The downside to Mailvelope, and indeed most web-based encryption extensions, is that they don’t encrypt attachments. You can use Gnu Privacy Guard to encrypt attachments with PGP before uploading them, which allows you to encrypt using the same key pair. Or you can opt for any one of these file encryption apps. Burner email addresses Encryption only hides the content of the message, not the sender’s email address. For any number of reasons, a time may come when you need to send an email anonymously to hide your identity. To do this, a few burner email services will give you a temporary “fake” email address. Guerrilla Mail is our top choice. 
You can set up a disposable email address from which you can send and receive messages. It includes a password manager so you don’t have to memorize passwords for multiple burner accounts. Best of all, it’s completely web-based with no registration required, which makes hiding your identity that much more effective. Zmail is another solid option for sending fake email if you prefer a desktop client rather than a web app. Best practices for protecting your email Nine out of 10 viruses that infect computers come from email attachments. No level of encryption will protect you from being careless. It’s therefore very important to scan all email attachments before opening them, especially from senders you don’t recognize. Viruses disguised as Microsoft Office documents are especially common. Many email clients, including Gmail, will automatically scan attachments for you, but others will require you do so manually. Don’t click on links in emails from unreliable sources. In fact, just don’t open emails altogether if they don’t look trustworthy. A spam blocker will go a long way toward avoiding these. If you email a large group of people, use BCC so spammers can’t get a hold of the list. Conversely, if someone includes you in a long list of CC’ed email addresses, don’t hit “reply all” without carefully considering the alternatives. Finally, set a strong password on your email account. Read through our guidelines if you’re not sure what constitutes a strong password or use a password strength checker if you’re still unsure how strong yours is. Related: Cyber security statistics Alternative email encryption apps If fiddling with certificates and key pairs sounds like too much trouble, you can use an off-the-shelf encrypted email client. Tutanota is one such secure email service, with apps for mobile and a web mail client. It even encrypts your attachments and contact lists. Tutanota is open-source, so it can be audited by third parties to ensure it’s safe. All encryption takes place in the background. Hushmail is a paid web-based email client that allows you to send encrypted email to anyone even if recipients don’t have any email decryption tools. Recipients will receive an email notification to let them know they need to visit the Hushmail site, enter the code provided in the notification, and then correctly answer your challenge question. Check out our full tutorial on how to use Hushmail. While we can vouch for Tutanota and Hushmail, it’s worth mentioning that there are a lot of email apps out there that claim to offer end-to-end encryption, but many contain security vulnerabilities and other shortcomings. Do your research before choosing an off-the-shelf secure email app. Be wary of encrypted email apps that don’t use S/MIME or PGP/MIME Many apps and email services out there promise email encryption but don’t use S/MIME or PGP/MIME. These are indeed much easier and faster to set up, but be aware that they roll their own encryption and may not strive for the same privacy standards. SafeGmail and Virtru are examples of these, and we don’t recommend them. We encourage you to upload your public PGP key to a keyserver, but it’s not required. Instead, you can just send the plain text of your public key to the person(s) that you want to receive encrypted emails from. Email encryption provides a secure means of sending messages containing sensitive material as well as a means for others to send you sensitive material. Journalists use it to correspond confidentially with sources. 
Businesses use it to relay trade secrets and classified documents. Lawyers use it to keep sensitive client and case information safe. You get the idea. In our opinion, email encryption is something you should have readily available when the need arises, but it’s not necessary for everyday communication. Related: Looking for a VPN to protect your privacy? See our list of the best VPN services.
<urn:uuid:cf429bc9-63b2-43bc-b4e6-3310dcc7f998>
CC-MAIN-2022-40
https://www.comparitech.com/blog/vpn-privacy/how-to-encrypt-email/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00423.warc.gz
en
0.915512
4,019
3.53125
4
New path-breaking digital technologies are being introduced into our daily lives with amazing speed. Some of these make it increasingly difficult to differentiate between real and fake media. Among more recent development adding to the problem is the emergence of deepfakes. What is Deepfake Technology? Deepfakes are hyper-realistic videos. They leverage artificial intelligence (AI) to portray an individual as saying and doing things that, in reality, never happened. Considering social media’s tremendous reach and incredible speed, these craftily made deepfakes can quickly reach millions of people. It also can impact society negatively if the creator has made these videos with ulterior motives. The game-changing factor of deepfakes is the sophistication, scale, and scope of the technology involved. Practically anyone with a computer can create fabricated fake videos. The scary part also is that these videos are virtually indistinguishable from genuine media. How Does Deepfake Technology Work? The disruptive technology of deepfake videos is digitally manipulated to illustrate people as saying and doing things that never happened. Deepfakes leverage the power of neural networks. So, they scrutinize huge volumes of data samples to study a person’s facial expressions, gestures, voice, and other nuances. The footage of two people is fed into the deep learning algorithm to train it to swap faces. Deepfakes use facial mapping technology and AI to swap a person’s face on video into another person. Deepfakes use real footage and hence are difficult to detect. The audio sounds authentic. Individuals and computers optimize the videos to go viral on social media quickly. Ordinary viewers assume that the video they are looking at is genuine. The early deepfakes had a doctored look to them. However, rapid and significant technological advances have made it harder to tell a real video from a fake. Like all other technologies, deepfakes have undergone a major development curve on the path to realizing their full potential. With algorithms improving, video makers need less source material to produce a more accurate deepfake. What are the Benefits of Deepfake Technology? While deepfake videos are slammed for their nefarious intent, the technology creates many benefits too. Various industries use it too, including education, films, digital communications, social media, healthcare, applied sciences, gaming, e-commerce, entertainment, etc. Deepfake Technology in the Film-Making Industry The use of deepfake technology increased dramatically in the film industry over the years. - The technology is used for composing digital sounds for actors who are unable to lend their voice to dubbing for various reasons. - It also comes in handy in updating any portion of the footage of a film. - Producers can save huge sums spent on reshooting. - Filmmakers can now recreate the traditional scenes in movies. - They can even produce new movies for fans of popular actors who have passed away. Deepfake technology can help create stunning special effects and featured-face editing. It is a highly useful tool for post-production work, certainly. It also can help enhance video quality more professionally. Deepfake Technology In The Gaming/Entertainment Industry Deepfake technology is the perfect gift for players and developers in the gaming industry. Because the technology enables users to play multiplayer games and engage in virtual chats. 
Their natural-sounding and smart assistance features of them help create digital twins of people effortlessly. They also present similarly in virtual reality. Indeed, it boosts human relationships and assists in improved online interaction. Digital businesses are becoming easier and more efficient with the application of deepfake technology. It has the potential to transform the shape and functioning of major industries such as e-commerce and advertising. Popular brands can use the technology to tie up with supermodels. These brands craft fake models to appear like real ones using deepfake technology. They can display their fashion outfits on popular models. They can change the skin colors, weight, body structure, and heights of models for a greater variety of displays. So, by using deepfake, they can generate targeted fashion ads in tune with the trends and needs of their audience. Online clothing businesses also can benefit hugely from the technology. Threats of Deepfake Technology With the increasing sophistication of deepfake technologies, their future applications can be potentially terrifying. Using deepfake technology, content can be edited in any way. It can be represented out of context and is misleading. Missing context can be in the form of misrepresentation and isolation. In misrepresentation, the information presents erroneously and deceitfully. For instance, videos showing one country attacking another from an entirely different conflict years earlier present in a different context. An example of isolation is when a small clip is taken out of a longer video. The core context is removed to create a false narrative of the events. There are two forms of deceptive editing. In omission, they edit out various portions of a video. The rest presents as a complete narrative. In splicing, two videos combine to alter the core narrative intensely. Commonly people use them for interviews. So, the audio of the interviewee moves elsewhere in the clip to create a drastically different narrative. Use in Criminal Activity Deepfakes have become more realistic, and they are becoming faster to make. That creates a major problem because they are a powerful tool for weaponization. Audio deepfake to create synthetic voices means another tool for criminals. Deepfake is a remarkable technology that can provide practical, real-life applications. Yet, we have seen that most of its recent uses are deceitful. Indeed, with enhanced technology, this platform’s misuse is increasing daily. So, users must check the originality of everything they see, hear or browse online. In the deepfake world, it is vital to define the legitimacy of every piece of information spread via media.
<urn:uuid:a382e100-5bc8-4e7e-9e7b-dc44acceb156>
CC-MAIN-2022-40
https://www.baselinemag.com/uncategorized/what-are-deepfakes-major-opportunities-and-threats/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00423.warc.gz
en
0.936422
1,208
3.203125
3
The rapid increase in mobile users has driven strong growth in the mobile app industry. Mobile application development agencies help businesses plan and design distinctive, robust mobile apps that meet their clients' requirements. People depend on mobile apps for more and more everyday tasks, and these applications generate and consume large amounts of data. To assess and manage that data effectively, you need a dedicated data management approach. This is where big data analytics comes in: it helps organizations draw data-driven insights from their apps.
What is Big Data?
Today, tools, devices, and people create data in enormous, diverse, and fast-changing volumes, and handling it requires scalable, purpose-built technology for collection, storage, and analytics. Big data technology processes the gathered data to derive rich, near-real-time business information about productivity, performance, profit, users, shareholder value, and risk. Speed is a defining factor in the world of big data: conventional analytics concentrates on historical data, whereas big data analytics can also take real-time data into account. Well-known international brands that have used big data to improve their operations include:
- Capital One
- American Express
What Is the Requirement of Big Data in Mobile App Development?
According to a Statista survey, the global number of mobile application downloads was 205 billion in 2018 and is predicted to reach 258.2 billion by 2022. This vast user base creates large volumes of raw data. Raw, unstructured data requires high-level analytics to test the numbers and produce precise insights from them; this is where big data plays its role. Big data tools enable mobile app developers to gather, streamline, and assess diverse data sets to identify patterns in customer preferences, and then turn those insights into innovative, forward-looking mobile applications. Popular big data tools include Hive, Cloudera, Hadoop, Tableau, and Spark, which help developers build robust mobile applications and integrate new features.
How Does Big Data Work?
Big data generally follows a simple sequence: Data Collection -> Integration -> Handling -> Analysis. The data received at the start feeds the rest of the procedure.
1. Data Collection
Data is collected primarily through applications such as Instagram, Facebook, Twitter, and YouTube, as well as from phone calls.
2. Data Integration
After collection, the data is processed and consolidated so that business analysts can work with it easily. This gives them a clear view of current big data trends and customer insight before a mobile app is built.
3. Data Handling
Put simply, data is stored and managed in a way that makes it easy to analyze. Businesses keep the data in the cloud, on premises, and elsewhere so that it stays safe and available when required.
4. Data Analysis
Analysts then examine and process the stored data. This assessment helps them make informed, data-driven decisions based on the requirements at hand, and it surfaces the opportunities a business needs to reach the next level.
Before delving into the role of big data in mobile app development, you should understand this workflow, because raw data on its own has no value: it must be analyzed so that the significant information can be extracted and put to use.
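As a rough illustration of the Collection -> Integration -> Handling -> Analysis flow described above, here is a small sketch using Apache Spark, one of the tools just mentioned. The input path, schema, and column names are hypothetical, not part of any real pipeline:

# Minimal sketch of an app-analytics pipeline with PySpark.
# Paths and field names are made up for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("app-usage-analysis").getOrCreate()

# Collection / Integration: raw JSON event logs exported by the mobile app.
events = spark.read.json("s3://example-bucket/app-events/*.json")

# Handling: keep only well-formed events and normalise the timestamp.
clean = (
    events
    .where(F.col("user_id").isNotNull())
    .withColumn("event_date", F.to_date("event_time"))
)

# Analysis: daily active users and the most-used screens, the kind of
# aggregate an app team feeds back into feature planning.
daily_active = clean.groupBy("event_date").agg(F.countDistinct("user_id").alias("dau"))
top_screens = (
    clean.groupBy("screen_name")
         .count()
         .orderBy(F.desc("count"))
         .limit(10)
)

daily_active.show()
top_screens.show()

Aggregates like daily active users or most-visited screens are exactly the kind of output that feeds the user-experience and marketing decisions discussed in the next sections.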
Four V’s of Big Data The big data generally works on the 4 V’s. It helps understand the ideology of data that is assessed by the businesses that reputed mobile app development agencies take care of. This is the main factor that analysts count in big data as it defines the speed of analysis. Every second, the data is obtained from search history, order history, mobile app messages, Twitter data feeds, clickstreams, and more. Moreover, the data volume helps determine the value also for making sure that the data stream is handled. Data production is done at a rate of speed that holds much importance during assessing and utilizing it. Moreover, speed relies on the data volume that is created or processed for use for the business. It incorporates the data stream in websites, social platforms, mobile devices, and more. The big data incorporates many varieties as data arrives in several types. Whether it is video, voice, text, and more, data is all around us. The data is gathered in the unstructured, semi-structured form, which is changed to a structure. The structured data is established in the order after processing for drawing out the precise data that is required. Big data is sourced actively from different places; and as an outcome, you need to check the quality or veracity of the data that can be significant for the organizations. What Makes Big Data a Significant Factor in Mobile App Development? The necessity of big data is huge as it epitomizes data collected from video and voice recordings, machine data, social media, continued preservation, and structured and unstructured data logging. The role of big data in mobile app development is not only for assessing the data and providing you insight, but it runs more intensely. So, what makes big data a meaningful and important factor in mobile app development? Let’s explain! 1. Analysis of User Experience The mobile app development industry is growing rapidly. Aside from customer requirements, developers are susceptible to know the ways users utilize their mobile apps. Hence, using big data app development, you can conduct a detailed analysis of user experience. As an outcome, it provides a detailed analysis of user engagement for every feature and page. You can utilize similar data for preparing a list of everything that users require, want to change, or enhance. 2. Real-Time Data Availability Customers’ needs change almost daily and rapidly. They need something exclusive and want organizations to offer useful solutions. Henceforth, organizations are also transforming their trends, requirements, and tactics. Organizations are utilizing big data for boosting mobile app development process for being aware of the continuous change and new client needs. It provides awareness into the world of clients on what they need and what can be great for them. It is highly prospective because of real-time data assessment. It helps organizations take data-driven and real-time decisions that can improve user experience. 3. Customized Marketing Campaigns User behavior data analysis incorporating requirements, likes, dislikes, and expectations can create customized marketing campaigns. With big data, you can assess buying patterns, demographic data, and users’ social behavior for changing your marketing plans as per their present requirements. By creating the right tactics, you can fuel engagement, drive adoption, build app revenue, and boost satisfaction. 4. Client Requirements High-quality applications always meet users’ requirements. 
They lookout for more managed mobile app development cost. Big data helps you assess the huge data flow that users create regularly. It would benefit you with reviving awareness. Big data brings up users’ requirements in the present marketplace. Moreover, you can generate concepts for quality and innovative mobile applications, by understanding the interaction and reaction of the users of various lifestyles, locations, backgrounds, and age groups. 5. Sales Conversions The main mottos of creating a mobile app are helping in customer interaction, fixing client issues at some clicks, boosting revenue, etc. Big data is not restricted to only collecting local data and utilizing it for understanding the users. The mobile applications go primarily in a particular nation or globally. Thereby, the use of big data is important to enhance mobile app development process. The developers research the data for making sure that they can provide an enhanced and better solution. Moreover, a better understanding makes it easier for developers to include mobile app monetization models in it. 6. Social Media Analytics Big data analytics help organizations recognize mentions of their items on social platforms. These mentions can be complaints or dynamic media like video and images, and client reviews. Organizations utilize social networks for benefitting positive client experiences and respond to the client who mentions their organizations. You can discover greater ways of product selling if you know your clients better and understand their ways of interaction with several social networks. 7. Connected Devices In case you know about IoT-based devices, you should also know that they will stay for a long time. Moreover, there has been a large influence of IoT on mobile app development and big data is incorporated in it. IoT is a superb way of moving ahead using automation and develop in the market with the help of the right tech use of big data. IoT helps ease complicated procedures and connect gadgets with apps for users. So, what is the role of IoT devices and big data in app development? IoT devices help gather data from users and assess them for actionable awareness. These help developers get user-friendly and result-oriented applications that can influence the market. Future of Big Data in the Mobile App Industry It can be expected that there will be over $581.9 billion in revenue by the end of 2020 produced in the mobile app industry. This is usually because of a standard shift and an increase in users in mobile app technology. When it comes to the future of the mobile app, big data is included in digital technologies. Since big data plays a pivotal role in mobile app development, it is predicted to be with us in the future also. Due to its simple performance and advanced features, it has become an essential part of the mobile app industry. Henceforth, investment in big data analysis is one of the best decisions for agencies. If you explore the mobile app development industry, you can see that developers acquire much using big data. It helps enhance the app development process, offering a perfect user experience. Mobile applications transform more than web applications. They are higher in demand because of the simplicity of use and easy display. Thereby, developers should work hard for offering an engaging and unique user experience. Big data offers a massive amount of data regarding location, user needs, and choices. 
To stay ahead of the competition, organizations should use the data obtained by big data analytics effectively.
<urn:uuid:3685b6c2-129d-490b-83e5-ffa7d03e4b12>
CC-MAIN-2022-40
https://www.crayondata.com/role-big-data-mobile-app-development/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00423.warc.gz
en
0.91417
2,102
2.5625
3
Types of Optical Fiber Dispersion and Compensation Strategies What Is Optical Fiber Dispersion? Optical fiber dispersion describes the process of how an input signal broadens/spreads out as it propagates/travels down the fiber. Normally, dispersion in fiber optic cable includes modal dispersion, chromatic dispersion and polarization mode dispersion. Modal dispersion is a distortion mechanism occurring in multimode fibers and other waveguides, in which the signal is spread in time because of different propagation velocity for all modes. As we know, light rays entering the fiber at different angles of incidence will go through different paths/modes. Some of these light rays will travel straight through the center of the fiber (axial mode) while others will repeatedly bounce off the cladding/core boundary to zigzag their way along the waveguide, as illustrated below with a step-index multimode fiber. Whenever there is a bounce off, modal dispersion (or intermodal dispersion) happens. The longer the path is, the higher the model dispersion will be. For example, the high-order modes (light entering at sharp angles) have more model dispersion than low-order modes (light entering at smaller angles). Multimode fiber can support up to 17 modes of light at a time, suffering much modal dispersion. Whereas, if the fiber is a single mode fiber, there will be no modal dispersion since there is only one mode and the light enters along the fiber axis (enters in axial mode) without bouncing off the cladding boundary. However, things are different if one uses a graded-index multimode fiber. Although the light rays travel in different modes as well, the modal dispersion will be greatly decreased because of the various light propagation speeds. For more details, refer to Step-Index Multimode Fiber vs Graded-Index Multimode Fiber. Chromatic dispersion is a phenomenon of signal spreading over time resulting from the different speeds of light rays. The chromatic dispersion is the combination of the material and waveguide dispersion effects. Material dispersion is caused by the wavelength dependence of the refractive index on the fiber core material. Waveguide dispersion occurs due to dependence of the mode propagation constant on the fiber parameters (core radius, and difference between refractive indexes in fiber core and fiber cladding) and signal wavelength. At some particular frequency, these two effects can cancel each other out giving a wavelength with approximately 0 chromatic dispersion. What’s more, chromatic dispersion isn’t always a bad thing. Light travels at various speeds at different wavelengths or materials. These varying speeds cause pulses to either spread out or compress as they travel down the fiber, making it possible to customize the index of refraction profile to produce fibers for different applications. For example, the G.652 fibers are designed in this way. Polarization Mode Dispersion Polarization mode dispersion (PMD) represents the polarization dependence of the propagation characteristics of light waves in optical fibers. In optical fibers, there is usually some slight difference in the propagation characteristics of light waves with different polarization states. When the light is defined as an energy wave or energy region, it possesses 2 mutually perpendicular axes, namely the electromotive force and magnetomotive force. The moment the energy inside these two axes transfers at different speeds in a fiber, PMD occurs. 
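To give a feel for the relative size of the three dispersion mechanisms described above, the following back-of-the-envelope Python sketch applies the standard textbook approximations. The parameter values are illustrative examples, not specifications of any particular fiber:

import math

c = 3e8  # speed of light, m/s

# Modal dispersion in a step-index multimode fiber:
#   dT ~ (L * n1 / c) * delta, where delta = (n1 - n2) / n1
L = 1_000.0          # fiber length, m (1 km)
n1, n2 = 1.48, 1.46  # core / cladding refractive indices (example values)
delta = (n1 - n2) / n1
modal_spread_s = (L * n1 / c) * delta
print(f"modal spread over 1 km  : {modal_spread_s * 1e9:.1f} ns")

# Chromatic dispersion in single mode fiber:
#   dT ~ |D| * L * d_lambda
D = 17.0             # ps/(nm*km), typical G.652 value near 1550 nm
L_km = 80.0          # link length, km
d_lambda = 0.1       # source spectral width, nm
chromatic_spread_ps = D * L_km * d_lambda
print(f"chromatic spread, 80 km : {chromatic_spread_ps:.0f} ps")

# Polarization mode dispersion grows with the square root of length:
#   dT ~ PMD_coeff * sqrt(L)
pmd_coeff = 0.1      # ps/sqrt(km), example coefficient
pmd_spread_ps = pmd_coeff * math.sqrt(L_km)
print(f"PMD spread, 80 km       : {pmd_spread_ps:.2f} ps")

Even with these rough numbers the ordering is clear: modal dispersion dominates in step-index multimode fiber, chromatic dispersion is the main concern on single mode links, and PMD only becomes significant once data rates get high.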
PMD has only a small effect on networks running below 2.5 Gbps, even over transmission distances beyond 1000 km. As speeds increase, however, it becomes a more important parameter, especially above 10 Gbps. In addition to the inherent PMD introduced during glass manufacturing, PMD can also be affected or caused by fiber cabling, installation, and the operating environment of the cable.
How to Compensate for Fiber Dispersion?
Although optical fiber dispersion does not weaken the signal, it blurs the signal and shortens the distance it can travel inside the fiber. For example, a 1-nanosecond pulse at the transmitter may spread to 10 nanoseconds at the receiver, so the signal can no longer be received and decoded properly. It is therefore important to reduce dispersion, or compensate for it, in long-haul transmission such as DWDM systems. Below are three common compensation strategies.
Dispersion Compensation With DCF
In the Dispersion Compensating Fiber (DCF) technique, a fiber with a large negative dispersion is deployed alongside the standard transmission fiber. The dispersion accumulated in the conventional fiber is reduced, or even cancelled, by the compensating fiber, whose dispersion has a large value of the opposite sign (a short sizing sketch is given after this section). Three arrangements are commonly used: pre-compensation, post-compensation, and symmetrical compensation. DCF is also used extensively to upgrade installed 1310nm-optimized fiber links for operation at 1550nm.
Dispersion Compensation With FBG
A Fiber Bragg Grating (FBG) is a reflective device consisting of an optical fiber whose core refractive index is modulated over a defined length. The grating reflects light propagating through the fiber when its wavelength matches the grating period. Applying FBGs can dramatically reduce dispersion effects in long transmission systems, for example over spans of 100 km. FBGs are a promising approach for dispersion compensation because they are passive, fiber-compatible optical elements with low insertion loss and low cost. Beyond dispersion-compensating filters, FBGs are also used as sensors, as wavelength stabilizers for pump lasers, and in narrowband WDM add/drop filters.
Dispersion Compensation With EDC
Electronic Dispersion Compensation (EDC) uses electronic filtering (also known as equalization) to compensate for dispersion in an optical communications link. The filter is placed in the communications channel to counteract the signal degradation caused by the medium. EDC is typically implemented with a transversal filter whose output is the weighted sum of a number of time-delayed inputs. An EDC solution can automatically adjust the filter weights according to the characteristics of the received signal, a process known as adaptation. EDC can be used in both single mode and multimode fiber systems, and it can be combined with other functions on 10-Gbit/s receiver ICs. This yields significantly reduced transmitter cost in single mode fiber systems, or increased transmission distance in multimode systems, at a small receiver cost penalty.
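As promised above, here is the DCF sizing sketch. The scheme works because the compensating fiber's large negative dispersion cancels the dispersion accumulated in the transmission span, D_span * L_span + D_dcf * L_dcf ~ 0. The values below are typical textbook magnitudes, not data for any specific fiber product:

# Illustrative DCF length calculation (all numbers are examples).
D_span = 17.0     # ps/(nm*km), standard single mode fiber near 1550 nm
L_span = 80.0     # km of transmission fiber
D_dcf = -100.0    # ps/(nm*km), typical order of magnitude for DCF

accumulated = D_span * L_span            # ps/nm built up over the span
L_dcf = -accumulated / D_dcf             # km of DCF needed to cancel it
residual = accumulated + D_dcf * L_dcf   # should be ~0 ps/nm

print(f"accumulated dispersion : {accumulated:.0f} ps/nm")
print(f"required DCF length    : {L_dcf:.1f} km")
print(f"residual dispersion    : {residual:.1f} ps/nm")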
Although optical fiber dispersion spreads signals in time and distorts them in several ways, it is not always harmful to telecom transmission. In wavelength division multiplexing systems, a small amount of residual dispersion is in fact desirable, because it helps suppress nonlinear effects such as four-wave mixing between channels.
<urn:uuid:c9c4cc1c-9027-4fce-bc27-5c8b950c5378>
CC-MAIN-2022-40
https://community.fs.com/blog/types-of-optical-fiber-dispersion-and-compensation-strategies.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00623.warc.gz
en
0.91193
1,463
3.4375
3
With the blogs in this series I want to reach not only my typical audience, security professionals, but especially less security aware people to help them improve their personal security. If you think the content is helpful for people you know, share it with them! For us adults, it's hard to stay safe online and not to fall for the latest deception techniques used by criminals. Children are even a lot more vulnerable to the potential dangers of using the internet. If you have children you should educate them about online security and privacy. In this post I will give you some advice that hopefully helps to keep your children safe online. Keep your devices secure I've talked about basic device security throughout this blog series. But I want to reiterate the importance quickly. Make sure the devices your children use: - Run antivirus sofware - Are kept up to date: the operating system and other applications must be updated regularly - Are backed up regularly and securily For more tips about mobile device security read this post. Learn your children the online security and privacy basics The online behavior you want to learn your children is pretty similar to how you want them to behave in the real world. The big difference is that the risks are a lot bigger online. I listed a few tips here. - Show them how computers work. Sit next to them regularly and help them. - Tell them what they can and can not share online. Explain why they shouldn't share personal information about themselves, family and friends. - Emphasize that they should not interact with people they don't know. Even not if they seem really nice, on the internet nothing is what it seems. If a person starts asking them personal questions they should ask themselves why this person wants to know all of this information. - Tell them that if things look to good to be true, they mostly aren't true. - Explain that criminals send malicious links and documents in order to steal personal information like usernames and passwords and that they even do so by impersonating people you know vie email or text messages. - Tell them not to download software from shady websites. Learn them what kind of sites to avoid. In any case don't scare your children with horror stories, but tell them for instance that people lost money because of clicking on malicious links. Or people's computer no longer worked because of viruses that got installed on their machines after downloading a malicious file. It's also very important that they know they should contact you whenever they doubt about something or feel something is not ok. Encourage them to ask questions and assure them that you will not be angry if something happens. It's not their fault, anyone can fall for the devious tricks from online criminals. As a parent you should know what your children are doing on the computer or on their mobile devices and block content that's not suitable for them. It's important that you know who your children are talking to online. If you notice they are chatting with people they don't know you should tell them to stop the conversation. Also look what's the conversation about and if it's really inapproprate report the abuse. But even if the people seem familar, impersonation is still possible. And cyberbullying by "friends" or class mates is not that uncommon either. If your children use the computer or mobile devices make sure you can see what they are doing. 
You don't have to stand behind the computer or look at their tablet all the time, but when you're in the same room as them you will have at least a better view of what they are doing and you can observe their reactions. If they look worried there might be something going on online. There's also parental control software available that can help to follow up and restrict what your children are allowed to do on the internet. Typically these applications can block websites or applications, set time limits and monitor the online behavior of your children. Windows has a built-in parental control feature which offers the following options: macOS also has a parental control feature. In this video you can see how you can enable it. If you want to enable it for iOS watch this video. If you have an Android device you can find a step by step guide here. Some apps also foresee a version that's appropriate for kids and that offers parental control options. For instance the YouTube Kids app. As we have seen there's a lot you can do to help protect your children online. Tools like parental control can be certainly helpful, but it's equally - if not more - important that you inform your children how to deal with the risks that are inherently associated with the internet. That's all for today, tomorrow more security tips in part 26 of this blog series. In the meantime stay safe online!
<urn:uuid:61c7d6c4-c43e-4e4a-9892-24d261917c59>
CC-MAIN-2022-40
https://johnopdenakker.com/help-your-children-stay-safe-online/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00623.warc.gz
en
0.96172
989
3.015625
3
In order to increase the transmission distance of optical signals, many technologies, like the TDM (time division multiplexing) and WDM (wavelength division multiplexing), have been used. Except for that, several optical components like single mode fiber optic cables, optical amplifiers and dispersion compensating modules (DCMs) are also put into use to realize the goal. Today, this article intends to illustrate the solutions to achieve longer transmission distances with DWDM technology. When it comes to long-haul optical transmissions, DWDM (dense wavelength division multiplexing) is a topic that cannot be ignored. DWDM technology enables different wavelengths to transmit over a single optical fiber. Different wavelengths are combined in a device—Mux/Demux which is short of multiplexer/demultiplexer. The DWDM Mux/Demux provides low insertion loss and low polarization-dependent loss for optical links. Here take a 8CH DWDM Mux/Demux for example to illustrate how to extend distance in long haul transmission. The first solution is suitable for applications that are less than 50km. The picture below shows a unidirectional application with 8CH DWDM Mux/Demux. As we can see, in this links, the DWDM Mux/Demux transmits 1550nm signal over one single mode fiber. The eight different signals from the transmitters are multiplexed into 1550nm signal by the 8CH DWDM Mux. Then they go through the single mode fiber and are separated into the original wavelengths by the DWDM Demux. The use of DWDM Mux/Demux and single mode fiber allows the system to transmit over 50km without optical amplifier or DCM. Notes: this solution is the basic application of DWDM Mux/Demux in a relative long distance comparing to CWDM technology which suits short distance deployment. Different from the first solution, if the link distance is longer than 50km, this solution can be taken into account. Optical signal loss will become greater as the links are getting longer, which means an optical amplifier module or dispersion compensator is needed. Therefore, to achieve a satisfying signal quality in long-distance transmission, an EDFA which can boost the weakened optical signals is added in this solution (as shown in the picture below). This DWDM configuration is similar to the former one, but with the EDFA, the link distance on the single mode fiber is up to 200km. However, sometimes an EDFA is not enough to achieve a quality signal, especially in some long haul systems like CATV system. Because these systems often have a high requirement for the quality of optical signal. Therefore, as we can see in the following picture, except for the DWDM Mux/Demux and EDFA, there is also a DCM. This solution is a point-to-multipoint long haul system deploying a DCM to extend the transmission distance. From the picture, the EDFA is placed midway between the transmitter and receiver in the transmission path. And in order to ensure the quality of the whole transmission, a DCM module is added in this link to deal with the accumulated chromatic dispersion without dropping and regenerating the wavelengths on the link. Notes: all the three solutions are unidirectional transmission on single mode fiber cables. If a network requires bidirectional transmission to transfer eight signals, you can use a 16CH DWDM Mux/Demux over single fiber or a 8CH DWDM Mux/Demux over dual fiber. WDM technology, especially the DWDM, is the critical step to go into the super-long distance transmission in optical communication. 
This post has introduced three basic solutions for long-haul transmission built around DWDM Mux/Demux units. All of the components involved, including 8-channel and 16-channel DWDM Mux/Demux units, EDFAs, DCMs, and optical modules, are available from FS.COM. If you have any questions or requirements, please contact us via [email protected].
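As a rough, illustrative sanity check on why links like those above need an EDFA once they pass a few tens of kilometers, the following Python sketch runs the basic power-budget arithmetic. Every value is a generic placeholder, not a specification of any FS.COM product:

# Simple optical power-budget estimate (illustrative numbers only).
tx_power_dbm = 0.0          # transmitter launch power per channel
rx_sensitivity_dbm = -24.0  # receiver sensitivity
mux_demux_loss_db = 2 * 4.0 # insertion loss of the Mux plus the Demux
margin_db = 3.0             # safety margin for connectors, splices, ageing
fiber_loss_db_per_km = 0.22 # typical single mode attenuation near 1550 nm

budget_db = tx_power_dbm - rx_sensitivity_dbm - mux_demux_loss_db - margin_db
max_span_km = budget_db / fiber_loss_db_per_km
print(f"loss budget          : {budget_db:.1f} dB")
print(f"unamplified reach    : {max_span_km:.0f} km")

# With an in-line EDFA adding gain, the reach extends accordingly (ignoring
# dispersion and noise, which is why a DCM and careful amplifier placement
# matter on real long-haul links).
edfa_gain_db = 20.0
print(f"reach with one EDFA  : {(budget_db + edfa_gain_db) / fiber_loss_db_per_km:.0f} km")

With these placeholder values the unamplified reach lands near the 50 km figure quoted for Solution 1, and adding amplifier gain pushes it toward the longer spans of Solutions 2 and 3, which is also the point at which accumulated chromatic dispersion, and hence a DCM, starts to matter.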
<urn:uuid:96ff2e73-0a98-4fda-8001-320cde3d6bbc>
CC-MAIN-2022-40
https://www.fiber-optic-components.com/tag/dwdm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00623.warc.gz
en
0.912712
850
3.53125
4
“History is a cyclic poem written by Time upon the memories of man.” – Percy Bysshe Shelley, English poet.
History is the study of the past: the people, societies, events, and problems of earlier times, as well as our attempts to understand them. It is a pursuit common to all human communities. History can take the form of a tremendous story, a rolling narrative filled with great personalities and tales of turmoil and triumph. Each generation adds its own chapters to history while reinterpreting and finding new things in the chapters already written. Let’s discover what happened on August 18 in world history:
1920: 19th Amendment ratified thanks to one vote
A dramatic battle in the Tennessee House of Representatives came to an end with the state ratifying the 19th Amendment to the U.S. Constitution on August 18, 1920. After decades of struggle and protest by suffragettes across the country, the decisive vote was cast by a 24-year-old representative who reputedly changed his vote after receiving a note from his mother. America’s suffrage movement was founded in the mid-19th century by women who had become politically active through their work in the abolitionist and temperance movements. In July 1848, 200 woman suffragists, organized by Elizabeth Cady Stanton and Lucretia Mott, met in Seneca Falls, New York, to discuss women’s rights. After approving a proposal asserting the right of women to educational and employment opportunities, they passed a resolution declaring that “it is the duty of the women of this country to secure to themselves their sacred right to the elective franchise.”
1795: George Washington signs the Jay Treaty with Britain
On August 18, 1795, President George Washington signed the Jay (or “Jay’s”) Treaty with Great Britain. The treaty, officially known as the “Treaty of Amity, Commerce and Navigation, between His Britannic Majesty and The United States of America,” attempted to defuse the conflict between England and the United States that had risen to renewed heights since the end of the Revolutionary War. The United States government objected to English military posts along America’s northern and western borders, and to Britain’s violation of American neutrality in 1794, when the Royal Navy seized American ships in the West Indies during England’s war with France. The treaty, written and negotiated by Supreme Court Chief Justice (and Washington appointee) John Jay, was signed in London in 1794 by Britain’s King George III.
<urn:uuid:3410eec1-0604-4a09-ac4b-a4e23179413a>
CC-MAIN-2022-40
https://areflect.com/2020/08/18/today-in-history-august-18/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00623.warc.gz
en
0.966701
534
3.859375
4
What Is MITRE ATT&CK?
MITRE ATT&CK is a matrix of tactics and the techniques used to accomplish them, which threat hunters, defenders, and red teamers use to classify attacks and assess an organization's risk. Threat hunters identify, assess, and address threats, while red teamers act like threat actors to challenge the IT security system. The objective of the MITRE ATT&CK framework is to strengthen the steps taken after an organization has been compromised, so that the cybersecurity team can answer important questions about how the attacker penetrated the system and what they did once they got inside. As information is collected over time, a knowledge base forms, serving as an ever-expanding tool that teams can use to bolster their defenses. Using the reports generated with MITRE ATT&CK, an organization can determine where its security architecture is vulnerable and which gaps to remedy first, according to the risk each presents. For threat hunters, the framework is an opportunity to analyze and evaluate the techniques attackers use. It is also a useful tool for assessing how much visibility an IT team has achieved across the network, specifically when it comes to cyber threats.
Origin of the ATT&CK Framework
The MITRE Corporation began developing MITRE ATT&CK in 2013. But what does MITRE stand for? It means MIT Research Establishment. The term "ATT&CK" is an acronym for Adversarial Tactics, Techniques, and Common Knowledge. The framework was first presented to the public in May 2015 and has been revised several times since. The MITRE Corporation is a nonprofit organization set up to support government agencies in the U.S. The ATT&CK framework was created to provide a straightforward, detailed, and replicable strategy for handling cyber threats, and the underlying concept driving it is to use past experience to inform future cyber threat detection and mitigation.
The Techniques and Tactics of the ATT&CK Framework
There are three different kinds of ATT&CK matrices: Enterprise ATT&CK, PRE-ATT&CK, and Mobile ATT&CK, and each covers its own set of tactics and techniques. The Enterprise ATT&CK matrix consists of tactics and techniques that apply to Windows, Linux, and macOS systems. When one of these operating systems is penetrated, the Enterprise matrix helps identify the nature of the threat and captures information that can be used to defend against it in the future. The Mobile ATT&CK matrix has the same objective, but it applies to mobile devices. The PRE-ATT&CK matrix focuses on the techniques and tactics attackers use before they attempt to penetrate a system or network. An ATT&CK matrix is arranged in columns: each column is a tactic, which is what the attacker aims to accomplish, and the techniques listed under it are the methods used to achieve that tactic. This information can be used in an ATT&CK evaluation to gain insight into the attacker's methodology. There are 11 tactics in the Enterprise ATT&CK matrix:
- Initial access
- Execution
- Persistence
- Privilege escalation
- Defense evasion
- Credential access
- Discovery
- Lateral movement
- Collection
- Exfiltration
- Command and control
Each tactic is essentially a goal of the attacker. If cyber criminals are able to accomplish these individual goals, they are one step closer to their objective. In some cases, an attack will not seek to realize every tactic, because some may go beyond what the attacker seeks to do.
For example, an attacker may not want their attack to perform lateral movement if they simply want to steal information from a specific computer. In this case, the MITRE ATT&CK matrix may not have entries in the “Lateral Movement” section. To illustrate how the techniques and tactics come to play in ATT&CK, suppose an attacker wants to access a network to install mining software. Their objective is to infect as many workstations as possible within the network, thereby increasing the yield of the mined cryptocurrencies. The end goal necessitates several smaller steps. Initially, the attacker has to get inside the network. They may use spear-phishing links, for example, that are sent to one or more users on the network. Then, to escalate their privileges, they may use process injection, which involves injecting code to get around defenses and elevate privileges. Once inside the network, the miner may try to infect other systems. In this attack, the miner had to use a few different tactics. When they used spear phishing, they did so to attain Initial Access. This got them inside the network. Then, when they used process injection, they achieved the tactic of Privilege Execution. Further, as the miner infected other systems, they used the tactic of Lateral Execution. The ATT&CK report would outline how the miner accomplished each tactic and also the techniques used to get them done. As security personnel analyze the results, they can ascertain not just the methods used but also why they were successful. For example, the phishing attack could only have been effective if someone clicked on a link. This raises important questions such as: - Does all staff in the organization understand how to avoid phishing attacks? - Are employees and management personnel educated regarding what a phishing attack looks like? - Was there something about the target’s behavior, browsing habits, position, or personal network safety practices that made them a more likely target? - What did the attack actually look like? How likely were other employees to have fallen for it? - How can this information be used in future cybersecurity training? Strengths and Challenges of MITRE ATT&CK MITRE formalizes the process of categorizing attacks and allows for a common language when different security teams have to communicate with each other. MITRE provides you with a system you can use to consistently address threats. However, MITRE also presents challenges because it’s only a security framework, which means it may or may not work in a real-life scenario. For instance, if one company decides that the cyber risk associated with a threat is higher than that of another, the steps MITRE requires may end up being applied differently—even though both are facing the same threat. How Does ATT&CK Help in Sharing Threat Intelligence? Even though this framework is not new, it has become more and more popular as a tool for helping organizations, the government, and end-users combine efforts to combat cyber threats. Threat intelligence gives organizations, IT departments, and individual users an advantage when it comes to spotting and preventing cyber threats. Furthermore, with MITRE ATT&CK reports being generated on a consistent basis, the collection of threat profiles grows larger and more relevant. Over time, the portfolio of threats can help users prevent more types of attacks. However, it is important to keep in mind that MITRE ATT&CK matrices are not a foolproof solution. 
While an attack may be well-described and the report contains a high level of detail, that does not mean that the same kind of attack cannot be accomplished using other techniques. To again use the cryptomining example, the objective could have still been accomplished using whale phishing. While whale phishing merely goes after “bigger fish” in the organization, this may considerably change the nature of the attack. Specifically, the methods used to make the initial penetration successful may have taken more time to develop, perhaps incorporating social engineering or gathering personal data to help disguise the attacker’s approach. As a result, the MITRE ATT&CK report that began with a spear-phishing attack may have little relevance to one with the same objective but different initial steps. To prevent succumbing to this vulnerability in the MITRE ATT&CK format, it is best to: - Assume there are multiple ways to successfully execute ATT&CK techniques. - Log the test results carefully so it can be easier to see the gaps attackers can use to their advantage, as well as specific techniques to accomplish tactics. - Research the different methods attackers use and then test them against your current defenses, noting which protections work well and which fall short. - Examine which tools do the best job of protecting your network, as well as where there are gaps that can threaten your system. - Make sure you stay up to date with the most recent attack methods and continually test your strategies to defend against them. It is also important to remember that not all attacks within one category behave the same and can be stopped using the same methods. For example, there are several different ways of getting ransomware into a network. An attacker can use drive-by downloading or it can be a more targeted assault, such as one that employs a Trojan horse. 5 Uses Cases of the MITRE ATT&CK Framework Here are five different ways enterprises can use MITRE: - Sharing information between organizations regarding how threats behave - Keeping track of the techniques, tactics, and procedures (TTP) threat actors use over time - Emulating the behavior and tactics of different types of hackers for internal training purposes - Mapping out the connections between the tactics malicious actors use and the kinds of data they are after - Figuring out which tactics are used the most frequently so cyber defense teams can keep an eye out for them MITRE ATT&CK Framework: Understanding the Behaviors and Techniques That Hackers Use Against Organizations MITRE removes ambiguity and provides a common vocabulary for IT teams to collaborate as they fight threats. This is because, with the ATT&CK framework, the techniques hackers use are broken down, step-by-step. As a result, cybersecurity teams can communicate more clearly about MITRE ATT&CK techniques. MITRE ATT&CK vs. Cyber Kill Chain vs. Diamond Model The MITRE ATT&CK framework is designed to address a broad range of attacks that could impact many different types of organizations. The Cyber Kill Chain, on the other hand, was developed by Lockheed Martin for the military, and it segments an intrusion into seven specific phases: reconnaissance, weaponizing, attack delivery, exploitation of the target, installation of malicious software, command and control (C2), and actions taken on objectives. How Fortinet Can Help? 
Network Detection and Response (NDR) uses artificial intelligence and other analytics to identify suspicious network activity outside of the norm, which may be an indicator of a cyber attack in progress. FortiNDR enables full-lifecycle network protection, detection, and response. It covers both network traffic and file-based analysis, along with root-cause identification. FortiNDR identifies new threats as they emerge, so you can instantly adapt threat containment and protection to new attacks.
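As a practical complement to the framework described above, MITRE publishes the entire ATT&CK knowledge base as STIX 2.x JSON, so the catalog of tactics and techniques can be explored programmatically. The following Python sketch is illustrative only: it assumes network access and that the public mitre/cti GitHub repository still serves the enterprise-attack.json bundle at the path shown, and it simply lists the techniques mapped to the Initial Access tactic.

import json
import urllib.request

# Public STIX bundle for Enterprise ATT&CK (path assumed; adjust if the repository layout changes).
URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
       "enterprise-attack/enterprise-attack.json")

with urllib.request.urlopen(URL) as resp:
    bundle = json.load(resp)

# Techniques are "attack-pattern" objects; tactics appear as kill-chain phase names.
for obj in bundle["objects"]:
    if obj.get("type") != "attack-pattern" or obj.get("revoked"):
        continue
    phases = [p["phase_name"] for p in obj.get("kill_chain_phases", [])
              if p.get("kill_chain_name") == "mitre-attack"]
    if "initial-access" in phases:
        # The technique ID (for example T1566) is stored in the external_references list.
        ext_id = next((r["external_id"] for r in obj.get("external_references", [])
                       if r.get("source_name") == "mitre-attack"), "?")
        print(ext_id, obj["name"])

Teams that already map detections to ATT&CK can use the same bundle to cross-check which tactics their alerting actually covers.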
<urn:uuid:117acea3-77ab-409e-804d-03c3d78a6294>
CC-MAIN-2022-40
https://www.fortinet.com/tw/resources/cyberglossary/mitre-attck
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00623.warc.gz
en
0.938133
2,248
2.78125
3
What is FISMA (Federal Information Security Management Act)? The Federal Information Security Management Act (FISMA) is a United States federal law enacted as Title III of the E-Government Act of 2002. It requires federal agencies to implement information security programs to ensure their information and IT systems' confidentiality, integrity, and availability, including those provided or managed by other agencies or contractors. Information security policy FISMA recognizes "the importance of information security to the economic and national security interests of the United States" and is aimed at all federal agencies. It mandates that directors of federal agencies oversee information security policies and practices that: - Provide information security protections that adequately reflect the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems. - Comply with the requirements of FISMA and related policies, procedures, standards, and guidelines, as developed by NIST. - Ensure that information security management processes are integrated with agency strategic and operational planning processes. FISMA implementation and compliance FISMA tasked the National Institute of Standards and Technology (NIST) with developing information security standards (Federal Information Processing Standards) and guidelines setting out minimum requirements for information security systems (published as Special Publications in the 800-series). NIST has also developed a six-step Risk Management Framework (RMF) to enable agencies to achieve compliance with FISMA: - Categorize information system - Select security controls - Implement security controls - Assess security controls - Authorize information system - Monitor security controls Each step in the RMF security lifecycle is supported by a Federal Information Processing Standard (FIPS) or Special Publication (SP). For example, Federal Information Processing Standard Publication 199 (FIPS 199) establishes security categories for information and information systems and supports Step 1 of the RMF. Compliance with FIPS is mandatory, and agencies must follow NIST guidance. Annual reporting and auditing Agencies must undertake an annual independent evaluation to determine the effectiveness of their information security policies, procedures, and practices. The results of this audit form the basis of a report on the adequacy of the agency’s information security posture and the state of its compliance with FISMA, which must be submitted to the Office of Management and Budget (OMB) annually. The report is then submitted to Congress, which provides funding to each agency. Although there are no formal penalties for failing to comply with the law’s requirements, there are several notable disadvantages. All federal agencies are graded annually on their FISMA compliance programs, and FISMA scorecard results are publicly available. - A low grade reflects poorly on the agency, and the reputational damage caused by the resulting negative media coverage can have profound effects. - IT breaches have occurred after poor security postures became common knowledge; agency officials have had to resign, and CIOs have had to testify before Congress. - Agencies that fail to comply with FISMA may be subject to increased oversight or see their budgets reduced by the OMB. As of March 2012, only seven of 24 agencies were more than 90% compliant with FISMA, according to FCW.
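To make Step 1 of the RMF concrete: FIPS 199 rates the impact of a loss of confidentiality, integrity, and availability as LOW, MODERATE, or HIGH, and the overall system category is commonly taken as the high-water mark across the three objectives. The short Python snippet below only illustrates that logic and is not an official tool; the example ratings are invented.

LEVELS = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

def overall_category(confidentiality: str, integrity: str, availability: str) -> str:
    """High-water mark across the three FIPS 199 security objectives."""
    ratings = (confidentiality.upper(), integrity.upper(), availability.upper())
    return max(ratings, key=lambda rating: LEVELS[rating])

# Hypothetical example: a public-facing system with modest availability needs.
print(overall_category("LOW", "MODERATE", "LOW"))   # -> MODERATE

The resulting category then drives which baseline of security controls is selected in the next RMF step.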
ISO 27001 and FISMA The relevance of the NIST standards developed for FISMA is limited to the mandates provided by the US legislation. ISO 27001 on the other hand is an international Standard that is relevant globally and is often used by organizations with a global presence. It may be appropriate for organizations with an international footprint to consider conformance to both frameworks. This is true for multinationals such as Google, Microsoft, and Salesforce, which are compliant with both. ISO 27001 is the internationally recognized best-practice Standard that lays out the requirements of an Information Security Management System (ISMS). The latest version of the Standard, ISO 27001:2013, is simple to follow and has been developed with business in mind. It presents a comprehensive and logical approach to developing, implementing, and managing an ISMS. It also provides associated guidance for conducting risk assessments and applying the necessary risk treatments. In addition, ISO 27001:2013 has been developed to harmonize with other standards, so auditing other ISO standards will be an integrated and smooth process, removing the need for multiple audits. Purchase your copy of the standard today >> The additional external validation offered by ISO 27001 registration is likely to improve an organization’s cybersecurity posture while providing a higher level of confidence to customers and stakeholders—essential for securing certain global and government contracts.
<urn:uuid:3dea65c8-b7e8-48bc-ab95-da72c62269c8>
CC-MAIN-2022-40
https://www.itgovernanceusa.com/fisma
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00023.warc.gz
en
0.922102
966
2.875
3
Some of the most talented and dangerous cyber warriors and criminals come from Russia, a long-time meddler in other nations' digital systems. Beyond carrying all of our phone, text and internet communications, cyberspace is an active battleground, with cybercriminals, government agents and even military personnel probing weaknesses in corporate, national and even personal online defenses. Some of the most talented and dangerous cyber crooks and cyber warriors come from Russia, which is a longtime meddler in other countries’ affairs. Over decades, Russian operators have stolen terabytes of data, taken control of millions of computers and raked in billions of dollars. They’ve shut down electricity in Ukraine and meddled in elections in the U.S. and elsewhere. They’ve engaged in disinformation and disclosed pilfered information such as the emails stolen from Hillary Clinton’s campaign chairman, John Podesta, following successful spearphishing attacks. Who are these operators, why are they so skilled and what are they up to? Back to the 1980s The Russian cyber threat dates back to at least 1986 when Cliff Stoll, then a system administrator at Lawrence Berkeley National Laboratory, linked a 75-cent accounting error to intrusions into the lab’s computers. The hacker was after military secrets, downloading documents with important keywords such as “nuclear.” A lengthy investigation, described in Stoll’s book “The Cuckoo’s Egg,” led to a German hacker who was selling the stolen data to what was then the Soviet Union. By the late 1990s, Russian cyberespionage had grown to include the multiyear Moonlight Maze intrusions into U.S. military and other government computers, foretelling the massive espionage from Russia today. The 1990s also saw the arrest of Vladimir Levin, a computer operator in St. Petersburg. Levin tried to steal more than US $10 million by hacking Citibank accounts, foreshadowing Russia’s prominence in cybercrime. And Russian hackers defaced U.S. websites during the Kosovo conflict, portending Russia’s extensive use of disruptive and damaging cyberattacks. Conducting advanced attacks In more recent years, Russia has been behind some of the most sophisticated cyberattacks on record. The 2015 cyberattack on three of Ukraine’s regional power distribution companies knocked out power to almost a quarter-million people. Cybersecurity analysts from the Electricity Information Sharing and Analysis Center and the SANS Institute reported that the multistaged attacks were conducted by a “highly structured and resourced actor.” Ukraine blamed the attacks on Russia. The attackers used a variety of techniques and adapted to the targets they faced. They used spearphishing email messages to gain initial access to systems. They installed BlackEnergy malware to establish remote control over the infected devices. They harvested credentials to move through the networks. They developed custom malicious firmware to render system control devices inoperable. They hijacked the supervisory control and data acquisition system to open circuit breakers in substations. They used KillDisk malware to erase the master boot record of affected systems. The attackers even went so far as to strike the control stations’ battery backups and tie up the energy company’s call center with thousands of calls. The Russians returned in 2016 with more advanced tools to take down a major artery of Ukraine’s power grid. 
Russia is believed to have also invaded energy companies in the U.S., including those operating nuclear power plants. Top-notch cyber education Russia has many skilled cyber operators, and for good reason: Their educational system emphasizes information technology and computer science, more so than in the U.S. Every year, Russian schools take a disproportionate number of the top spots in the International Collegiate Programming Contest. In the 2016 contest, St. Petersburg State University took the top spot for the fifth time in a row, and four other Russian schools also made the top 12. In 2017, St. Petersburg ITMO University won, with two other Russian schools also placing in the top 12. The top U.S. school ranked 13th. As Russia prepared to form a cyber branch within its military, Minister of Defense Sergei Shoigu took note of Russian students’ performance in the contest. “We have to work with these guys somehow, because we need them badly,” he said in a public meeting with university administrators. Who are these Russian cyber warriors? Russia employs cyber warriors within its military and intelligence services. Indeed, the cyber espionage groups dubbed APT28 (aka Fancy Bear) and APT29 (aka Cozy Bear and The Dukes) are believed to correspond to Russia’s military intelligence agency GRU and its state security organization FSB, respectively. Both groups have been implicated in hundreds of cyber operations over the past decade, including U.S. election hacking. Russia recruits cyber warriors from its colleges, but also from the cybersecurity and cyber crime sectors. It is said to turn a blind eye to its criminal hackers as long as they avoid Russian targets and use their skills to aid the government. According to Dmitri Alperovitch, co-founder of the security firm CrowdStrike, when Moscow identifies a talented cyber criminal, any pending criminal case against the person is dropped and the hacker disappears into the Russian intelligence services. Evgeniy Mikhailovich Bogachev, wanted by the FBI with a reward of $3 million for cyber crimes, is also on the Obama administration’s list of people sanctioned in response to interference in the U.S. election. Bogachev is said to work “under the supervision of a special unit of the FSB.” Allies outside official channels Besides its in-house capabilities, the Russian government has access to hackers and the Russian media. Analyst Sarah Geary at cybersecurity firm FireEye reported that the hackers “disseminate propaganda on behalf of Moscow, develop cyber tools for Russian intelligence agencies like the FSB and GRU, and hack into networks and databases in support of Russian security objectives.” Many seemingly independent “patriotic hackers” operate on Russia’s behalf. Most notably, they attacked critical systems in Estonia in 2007 over the relocation of a Soviet-era memorial, Georgia in 2008 during the Russo-Georgian War and Ukraine in 2014 in connection with the conflict between the two countries. At the very least, the Russian government condones, even encourages, these hackers. After some of the Estonian attacks were traced back to Russia, Moscow turned down Estonia’s request for help -- even as a commissar in Russia’s pro-Kremlin youth movement Nashi admitted launching some of the attacks. And when Slavic Union hackers successfully attacked Israeli websites in 2006, Deputy Duma Director Nikolai Kuryanovich gave the group a certificate of appreciation. 
He noted that “a small force of hackers is stronger than the multi-thousand force of the current armed forces.” While some patriotic hackers may indeed operate independently of Moscow, others seem to have strong ties. Cyber Berkut, one of the groups that conducted cyberattacks against Ukraine, including its central election site, is said to be a front for Russian state-sponsored cyber activity. And Russia’s espionage group APT28 is said to have operated under the guise of the ISIS-associated CyberCaliphate while attacking the French station TV5 Monde and taking over the Twitter account of U.S. Central Command. One of many cyber threats Although Russia poses a major cyber threat, it is not the only country that threatens the U.S. in cyberspace. China, Iran and North Korea are also countries with strong cyberattack capabilities, and more countries will join the pool as they develop their people’s skills. The good news is that actions to protect an organization’s cybersecurity (such as monitoring access to sensitive files) that work against Russia also work against other threat actors. The bad news is that many organizations do not take those steps. Further, hackers find new vulnerabilities in devices and exploit the weakest link of all -- humans. Whether cyber defenses will evolve to avert a major calamity, from Russia or anywhere else, remains to be seen.
<urn:uuid:980f8009-f6a6-4b46-8b34-5d4a7aa757ea>
CC-MAIN-2022-40
https://gcn.com/cybersecurity/2017/08/tracing-the-sources-of-todays-russian-cyber-threat/312871/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00023.warc.gz
en
0.947529
1,696
2.546875
3
If you're one of those who knows a bit of networking but you feel uncomfortable touching AWS networking resources, then this article is for you. We're going to go through real AWS configuration and you can follow along to solidify your understanding. I'm going through the process of what I personally do to create 2 simple virtual machines, one in a private subnet and another one in a public subnet running Amazon Linux AMI instance. I will assume you already have an AWS account and corresponding credentials. If not, please go ahead and create your free tier AWS account. Just keep in mind that Amazon's equivalent to a Virtual Machine (VM) is known as EC2 instance. In short, we can think of Virtual Private Cloud (VPC) as our personal Data Centre. Our little private space in the cloud. Because it's our personal Data Centre, networking-wise we should have our own CIDR block. When we first create our VPC, a CIDR block is a compulsory field. Think of a CIDR block as the major subnet where all the other small subnets will be derived from. When we create subnets, we create them as smaller chunks from CIDR block. After we create subnets, there should be just a local route to access "objects" that belong to or are attached to the subnet. Other than that, if we need access to the Internet, we should create and attach an Internet Gateway (IGW) to our VPC and add a default route pointing to the IGW to route table. That should take care of it all. This summarises what we're going to do. It might be helpful to use it as a reference while you follow along: Don't worry if you don't understand everything in the diagram above. As you follow along this hands-on article, you can come back to it and everything should make sense. I'll explain the following VPC components as we go along configuring them: We'll then perform the tests: Note that we only tested our Public instance above as it'd be very repetitive configuring Private instance so I added Private Instance config to Appendix section: The first logical question I get asked by those with little experience with AWS is which basic components do we need to build our core VPC infrastructure? First we pick an AWS Region: This is the region we are going to physically run our virtual infrastructure, i.e. our VPC. Even though your infrastructure is in the Cloud, Amazon has Data Centres (DC) around the world in order to provide first-class availability service to your resources if you need to. With that in mind, Amazon has many DCs located in many different Regions (EU, Asia Pacific, US East, US West, etc). The more specific location of AWS DCs are called Availability Zones (AZ). That's where you'll find one (or more DCs). So, we create a VPC within a Region and specify a CIDR block and optionally request an Amazon assigned /56 IPv6 CIDR block: If you're a Network Engineer, this should sound familiar, right? Except for the fact that we're configuring our virtual DC in the Cloud. Now that we've got our own VPC, we need to create subnets within the CIDR block we defined (192.168.0.0/16). Notice that I also selected the option to retrieve an Amazon's provided IPv6 CIDR block above. That's because we can't choose an IPv6 CIDR block. We've got to stick to what Amazon automatically assigns to us if we want to use IPv6 addresses. For IPv6, Amazon always assigns a fixed /56 CIDR block and we can only create /64 subnets. Also, IPv6 addresses are always Public and there is no NAT by design. Our assigned CIDR block here was 2600:1f18:263e:4e00::/56. 
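If you prefer scripting to clicking through the console, the VPC creation above maps to a couple of API calls. Here is a rough boto3 sketch; it assumes your AWS credentials and region are already configured, and the name tag is just an example.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC with our IPv4 CIDR and ask AWS for an Amazon-provided IPv6 /56 block.
vpc = ec2.create_vpc(
    CidrBlock="192.168.0.0/16",
    AmazonProvidedIpv6CidrBlock=True,
)["Vpc"]

ec2.create_tags(
    Resources=[vpc["VpcId"]],
    Tags=[{"Key": "Name", "Value": "DevCentral"}],
)
print("Created", vpc["VpcId"])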
Let's imagine we're hosting webserver/database tiers in 2 separate subnets but keep in mind this just for lab test purposes only. A real configuration would likely have instances in multiple AZs. For our Public WebServer Subnet, we'll use 192.168.1.0/24 and 2600:1f18:263e:4e00:01:/64. For our Private Database Subnet, we'll use 192.168.2.0/24 and 2600:1f18:263e:4e00:02:/64 Here's how we create our Public WebServer Subnet on Availability Zone us-east-1a: Here's how we configure our Private Database Subnet: Notice that I put Private Database Subnet in a different Availability Zone. In real life, we'd likely create 1 public and 1 private subnet in one Availability Zone and another public and private subnet in a different Availability Zone for redundancy purposes as mentioned before. For this article, I'll stick to our config above for simplicity sake. That's just a learn by doing kind of article! 🙂 If we now look at the Route Table, we'll see that we now have 2 local routes similar to what would appear if we had configured 2 interfaces on a physical router: However, that's the default/main route table that AWS automatically created for our DevCentral VPC. If we want our Private Subnet to be really private, i.e. no Internet access for example, we can create a separate route table for it. Let's create 2 route tables, one named Public RT and the other Private RT: Private RT should be created in the same way as above with a different name. The last step is to associate our Public subnet to our Public RT and Private subnet to our Private RT. The association will bind the subnet to route table making them directly connected routes: Up to know, both tables look similar but as we configure Internet Gateway in next section, they will look different. Yes, we want to make them different because we want Public RT to have direct access to the Internet. In order to accomplish that we need to create an Internet Gateway and attach it to our VPC: And lastly create a default IPv4/IPv6 route in Public RT pointing to Internet Gateway we've just created: So our Public route table will now look like this: EC2 instances created within Public Subnet should now have Internet access both using IPv4 and IPv6. Our database server in the Private subnet will likely need outbound Internet access to install updates or for ssh access, right? So, first let's create a Public Subnet where our NAT gateway should reside: We then create a NAT gateway in above Public Subnet with an Elastic (Public) IPv4 address attached to it: Yes, NAT Gateways need a Public (Elastic) IPv4 address that is routable over the Internet. Next, we associate NAT Public Subnet to our Private Route Table like this: Lastly, we create a default route in our Private RT pointing to NAT gateway for IPv4 Internet traffic: We're pretty much done with IPv4. What about IPv6 Internet access in our Private subnet? As we know, IPv6 doesn't have NAT and all IPv6 addresses are Global so the trick here to make an EC2 instance using IPv6 to behave as if it was using a "private" IPv4 address behind NAT is to create an Egress-only Gateway and point a default IPv6 route to it. As the name implies, an Egress-only Gateway only allows outbound Internet traffic. Here we create one and and then add default IPv6 route (::/0) pointing to it: What we've done so far: Are we ready to finally create an EC2 instance running Linux, for example, to test Internet connectivity from both Private and Public subnets? 
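For readers who would rather drive the API than the console, here is a rough boto3 recap of the subnet, internet gateway, and public route table steps we just performed. It is a simplified sketch, not a drop-in script: the VPC ID is a placeholder, the IPv6 subnet blocks, NAT gateway, and egress-only gateway are omitted for brevity, and there is no error handling.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"  # placeholder for the VPC created earlier

# Public subnet for the web tier in us-east-1a.
public_subnet = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="192.168.1.0/24",
    AvailabilityZone="us-east-1a",
)["Subnet"]

# Internet gateway, attached to the VPC.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc_id)

# Public route table: default IPv4 route to the IGW, associated with the public subnet.
public_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
ec2.create_route(
    RouteTableId=public_rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw["InternetGatewayId"],
)
ec2.associate_route_table(
    RouteTableId=public_rt["RouteTableId"],
    SubnetId=public_subnet["SubnetId"],
)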
Before we get started, let's create a key-pair to access our EC2 instance via SSH: Our EC2 instances are accessed using a key-pair rather than a password. Notice that it automatically downloads the private key for us. Ok, let's create our EC2 instance. We need to click on Launch Instance and Select an image from AWS Marketplace: As seen above, I picked Amazon Linux 2 AMI for testing purposes. I selected the t2.micro type that only has 1 vCPU and 1 GB of memory. For the record, AWS Marketplace is a repository of AWS official images and Community images. Images are known as Amazon Machine Images (AMI). Amazon has many instance types based on the number of vCPUs available, memory, storage, etc. Think of it as how powerful you'd like your EC2 instance to be. We then configure our Instance Details by clicking on Next: Configure Instance Details button: I'll sum up what I've selected above: Network: we selected our VPC (DevCentral) Subnet: Public WebServer Subnet Auto-assign Public IP: Enabled Auto-assign IPv6 IP: Enabled The reason we selected "Enabled" to auto-assignment of IP addresses was because we want Amazon to automatically assign an Internet-routable Public IPv4 address to our instance. IPv6 addresses are always Internet-routable but I want Amazon to auto-assign an IPv6 address for me here so I selected Enabled to Auto-assign IPv6 IP too.. Notice that if we scroll down in the same screen above we could've also specified our private IPv4 address in the range of Public WebServer Subnet (192.168.1.0/24😞 The Public IPv4 address is automatically assigned by Amazon but once instance is rebooted or terminated it goes back to Amazon Public IPv4 pool. There is no guarantee that the same IPv4 address will be re-used. If we need an immutable fixed Public IPv4 address, we would need to add an Elastic IPv4 address to our VPC instead and then attach it to our EC2 instance. IPv6 address is greyed out because we opted for an auto-assigned IPv6 address, remember? We could've gone ahead and selected our storage type by clicking on Next: Add Storage but I'll skip this. I'll add a Name tag of DevCentral-Public-Instance, select default Security Group assigned to our VPC as well as our previously created key-pair and lastly click on Launch to spin our instance up (Animation starts at Step 4😞 After that, if we click on Instances, we should see our instance is now assigned a Private as well as a Public IPv4 address: After a while, Instance State should change to Running: If we click on Connect button above, we will get the instructions on how to SSH to our Public instance: Let's give it a go then: It didn't work! That would make me crack up once I got started with AWS, until I learn about Network ACLs and Security Groups! When we create a VPC, a default NACL and a Security Group are also created. All EC2 instances' interfaces belong to a Security Group and the subnet it belongs to have an associated NACL protecting it. NACL is a stateless Firewall that protects traffic coming in/out to/from Subnet. Security Group is a stateful Firewall that protects traffic coming in/out to/from an EC2 instance, more specifically its vNIC. The following simplified diagram shows that: A Security Group (stateful) rule that allows an outbound HTTP traffic, also allows return traffic corresponding to outbound request to be allowed back in. This is why it's called stateful as it keeps track of session state. 
A NACL (stateless) rule that allows an outbound HTTP traffic does not allow return traffic unless you create an inbound rule to allow it. This is why it's called stateless as it does not keep track of session state. Now let's try to work out why our SSH traffic was blocked. Let's have a look. This is what we see when we click on Subnets → Public WebServer Subnet: As we can see above, the default NACL is NOT blocking our SSH traffic as it's allowing everything IN/OUT. This is what we see when we click on Security Groups → sg-01.db... → Inbound Rules: Yes! SSH traffic from my external client machine is being blocked by above inbound rule. The above rule says that our EC2 instance should allow ANY inbound traffic coming from other instances that also belong to above Security Group. That means that our external client traffic will not be accepted. We don't need to check outbound rules here because we know that stateful firewalls would allow outbound ssh return traffic back out. To fix the above issue, let's do what we should've done while we were creating our EC2 instance. We first create a new Security Group: A newly created non-default SG comes with no inbound rules, i.e. nothing is allowed, not even traffic coming from other instances that belong to security group itself. There's always an explicit deny all rule in a security group, i.e. whatever is not explicitly allowed, is denied. For this reason, we'll explicitly allow SSH access like this: In real world, you can specify a more specific range for security purposes. And lastly we change our EC2 instance's SG to the new one by going to EC2 → Instances → <Name of Instance> → Networking → Change Security Groups: Another Window should appear and here's what we do: Now let's try to connect via SSH again: That's why it's always a good idea to create your own NACL and Security Group rules rather than sticking to the default ones. Let's create our private EC2 instance to test Internet access using our NAT gateway and Egress-Only Gateway here. Our Private RT has a NAT gateway for IPv4 Internet access and an Egress-Only Gateway for IPv6 Internet access as shown below: When we create our private EC2 instance, we won't enable Auto-assign Public IP (for IPv4) as seen below: It's not shown here, but when I got to the Security Group configuration part I selected the previous security group I created that allows SSH access from everyone for testing purposes. We could have created a new SG and added an SSH rule allowing access only from our local instances that belong to our 192.168.0.0/16 range to be more restrictive. Here's my Private Instance config if you'd like to replicate: Here's the SSH info I got when I clicked on Connect button: Here's my SSH test: All Internet tests passed and you should now have a good understanding of how to configure basic VPC components. I'd advise you to have a look at our full diagram again and any feedback about the animated GIFs would be appreciated. Did you like them? I found them better than using static images.
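As a closing aside, the security group fix we applied through the console can also be expressed in a few lines of boto3. This is a hedged sketch with placeholder IDs and an example admin address; in practice you would restrict the source CIDR to the machines that really need SSH access.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"  # placeholder

sg = ec2.create_security_group(
    GroupName="ssh-access",
    Description="Allow inbound SSH for lab testing",
    VpcId=vpc_id,
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "admin workstation"}],
    }],
)

An SSH attempt from the allowed address (and one from anywhere else) is a quick way to confirm the rule behaves as intended.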
<urn:uuid:7c1cd634-d47f-49e7-b180-43d5d9ccefc0>
CC-MAIN-2022-40
https://community.f5.com/t5/technical-articles/an-illustrated-hands-on-intro-to-aws-vpc-networking/ta-p/281114
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00023.warc.gz
en
0.907704
3,209
2.78125
3
Directory listing is a web server function that displays the directory contents when there is no index file in a specific website directory. It is dangerous to leave this function turned on for the web server because it leads to information disclosure. For example, when a user requests www.acunetix.com without specifying a file (such as index.html, index.php, or default.asp), the web server processes this request, returns the index file for that directory, and the browser displays the website. However, if the index file did not exist and if directory listing was turned on, the web server would return the contents of the directory instead. Many webmasters follow security through obscurity. They assume that if there are no links to files in a directory, nobody can access them. This is not true. Many web vulnerability scanners such as Acunetix easily discover such directories and all files if directory listing is turned on. This means that black hat hackers can also find such files easily. This is why directory listing should never be turned on, especially in the case of dynamic websites and web applications, including WordPress sites. Directory Browsing Without Directory Listing Even if directory listing is disabled on a web server, attackers might discover and exploit web server vulnerabilities that let them perform directory browsing. For example, there was an old Apache Tomcat vulnerability, where improper handling of null bytes ( %00) and backslash ( \) made it prone to directory listing attacks. Attackers might also discover directory indexes using cached or historical data contained in online databases. For example, Google’s cache database might contain historical data for a target, which previously had directory listing enabled. Such data allows the attacker to gain the information needed without having to exploit vulnerabilities. Directory Listing Example A user makes a website request to www.vulnweb.com/admin/. The response from the server includes the directory content of the directory admin, as seen in the below screenshot. From the above directory listing, you can see that in the admin directory there is a sub-directory called backup, which might include enough information for an attacker to craft an attack. The attacker can display the whole list of files in the backup directory. This directory includes sensitive files such as password files, database files, FTP logs, and PHP scripts. It is obvious that this information was not intended for public view. Misconfiguration of the web server has led to file list disclosure and the data is publicly available. Moreover, files like these, such as FTP logs, might contain other sensitive information such as usernames, IP addresses, and the complete directory structure of the web hosting operating system. How to Disable Directory Listing To disable directory listing, you must change your web server configuration. Here is how you can do it for the most popular web servers: Apache Web Server You can disable directory listing by setting the Options directive in the Apache httpd.conf file by adding the following line: <Directory /your/website/directory>Options -Indexes</Directory> You can also add this directive in your .htaccess files but make sure to turn off directory listing for your entire site, not just for selected directories. Directory indexing is disabled by default in nginx so you do not need to configure anything. 
However, if it was turned on before, you can turn it off by opening the nginx.conf configuration file and changing autoindex on to autoindex off.
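To complement the server-side fixes above, here is a small, illustrative check you can run against sites you own or are authorized to assess, to spot accidentally exposed directory indexes. It is a rough heuristic that just looks for typical auto-index markers, it assumes the third-party requests library is installed, and the domain and paths are examples; a dedicated scanner performs far more thorough tests.

import requests

def has_directory_listing(url: str) -> bool:
    """Rough heuristic: fetch the URL and look for a typical auto-index page."""
    resp = requests.get(url, timeout=10)
    body = resp.text.lower()
    return resp.status_code == 200 and ("index of /" in body or "<title>index of" in body)

for path in ("admin/", "admin/backup/", "uploads/"):
    full_url = "https://www.example.com/" + path
    status = "listing exposed" if has_directory_listing(full_url) else "ok"
    print(full_url, "->", status)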
<urn:uuid:94e66e15-a3d0-499f-bd44-c5331b6ddc86>
CC-MAIN-2022-40
https://www.acunetix.com/blog/articles/directory-listing-information-disclosure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00023.warc.gz
en
0.906803
736
3.03125
3
Arizona State University, like many colleges across the United States, has a problem with students who enter their freshman year ill prepared in math. Though the school offers remedial classes, one-third of students earn less than a C, a key predictor that they will leave before getting a degree. To improve the dismal situation, ASU turned to adaptive-learning software by Knewton, a prominent edtech company. The result: pass rates zipped up from 64% to 75% between 2009 and 2011, and dropout rates were cut in half. But imagine the underside to this seeming success story. What if the data collected by the software never disappeared and the fact that one had needed to take remedial classes became part of a student’s permanent record, accessible decades later? Consider if the technical system made predictions that tried to improve the school’s success rate not by pushing students to excel, but by pushing them out, in order to inflate the overall grade average of students who remained. These sorts of scenarios are extremely possible. Some educational reformers advocate for “digital backpacks” that would have students carry their electronic transcripts with them throughout their schooling. And adaptive-learning algorithms are a spooky art. Khan Academy’s “dean of analytics,” Jace Kohlmeier, raises a conundrum with “domain learning curves” to identify what students know. “We could raise the average accuracy for the more experienced end of a learning curve just by frustrating weaker learners early on and causing them to quit,” he explains, “but that hardly seems like the thing to do!”
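Kohlmeier's point about learning curves is easy to see with a toy simulation: if weaker learners are frustrated into quitting early, the average accuracy of the students who remain rises even though nobody learned anything extra. The numbers below are invented purely to illustrate this selection effect.

import random

random.seed(1)
# Each simulated student answers questions correctly in proportion to an underlying skill level.
students = [random.uniform(0.3, 0.9) for _ in range(1000)]

def average_accuracy(group):
    return sum(group) / len(group)

print(f"all learners:   {average_accuracy(students):.2f}")

# "Frustrate" weaker learners early so they quit; only the stronger ones remain on the curve.
survivors = [skill for skill in students if skill > 0.6]
print(f"survivors only: {average_accuracy(survivors):.2f}")  # looks better, yet no one improved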
<urn:uuid:86208ac2-1137-4d01-b7ae-6d7e6484c974>
CC-MAIN-2022-40
https://www.crayondata.com/how-big-data-will-haunt-you-forever-your-high-school-transcript/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00023.warc.gz
en
0.972681
337
3.125
3
Do you find it difficult to help your kids achieving a balance between technology and life? If yes, read further to discover a few digital parenting guidelines and you can also use parental control apps to keep an eye on your child’s online activities. We’ve put together these 8 Parenting Tips as part of our digital technology guidelines for parents, so your children can safely use screen time as a tool to promote learning and innovation. There is no escaping from the omnipresent digital media. Wherever you turn, you will find screens—in your pockets, at airports, in stores, taxis, restaurants. Children, while being inquisitive, are naturally drawn to things that are prevalent in their surroundings. We decided to share a few digital technology guidelines for parents. Parents might have experienced traditional bullying as children, but cyberbullying is something that they have encountered at a mature age. So, the chances of them falling short are more when they find their children being cyberbullied or meeting with other online hazards, such as cyberstalking, online predators, sexting. We’ve put together these 8 Parenting Tips so your children can safely use screen time as a tool to promote learning and innovation. Tips for effective parenting in the digital age Be a role model for your child When a parent’s words and actions do not match, a wrong message is forwarded to the kids. What would happen if your child wants to inform you about some important incident from their schools and you just reply with a mindless nod, staring at your screen? In such cases, a child realizes that they are not at the center of their parent’s attention. As they grow up, they imitate the exact behavior, and it soon becomes their way of life. Encourage physical activities One of the biggest drawbacks of being overexposed to the screens is the impact on the child’s fitness level. Playing outdoors allows them to explore their environment, develop muscle strength and coordination. You can ask your kids to play more with the children their age outdoors and to socialize with their school’s sports team as well. Doing so would enhance their social skills and boost their self-confidence too. The more they play outdoors, the less time they have to gaze at mobile. Limit their screen time with the help of a child monitoring app. Involve them in the mundane household chores When children start making their beds, cleaning the room, or setting the table, they feel a sense of pride and accomplishment. Assigning duties help to teach them the value of teamwork, to help build a strong work ethic, and improve time-management skills. You can reward them with a few minutes of screen time after the completion of the tasks. Involving the kids in regular, age-appropriate chores has been associated with social and emotional benefits that help them succeed throughout life. Set a proper routine for your child by using a kid’s safety apps. Have honest conversations regularly Meaningful conversations build trust and also develop your kid’s confidence. Your child realizes that you care about them, and you are interested in their lives. As they grow up, you can talk to them about technology and can understand their perspective about social media. In a relationship where communication channels are weaker, a child hesitates to open up about the challenges they are facing online. Avoid it by ensuring that you are always there having their back in any situation. 
Let your child send you an alert in the case of an emergency by using parental control apps. Occasionally convert ‘screen time’ into ‘family time’ Join you, the child in online activities such as watching, reading or playing to encourage social interactions, bonding, and learning. Playing a video game with your kids is a good way to demonstrate sportsmanship and gaming etiquette. You can watch a show with them, where you will have the opportunity to share your own life experiences and perspectives. Interact with them while they are online, to understand what they are doing. Educate your children about the cons of technology and digital etiquette Help your children understand how overexposure to screens affects physical well-being and can foster unhealthy dependence on technology. Make them aware of online hazards, such as cyberbullying, cyberstalking, online predation, and more. This is another important parenting tip. Tell them to treat others how they wanted to be treated online. Teach about good manners in digital communication such as email, forums, blogs, and social networking sites. Know about the latest technology Generally, parents believe that they cannot catch up with the skills their kids acquire at a young age. Kids might stay one step ahead in their technical knowledge and abilities, but parents need to attempt to match their levels. Only then, they can effectively educate and guide them through the pitfalls. Stay informed: our ‘Digital Technology Guidelines For Parents’ series provides lots of parenting tips regarding online threats and how can these be avoided. Children’s lack of knowledge, their developing brains, the tendency to take risks and the ignorance of the consequences can make them the targets of cyber-criminals. When parents are aware of the prevailing malware, spam, and other threats, they can guide their children safely on the digital path. Invest in parental controls and allow guided access Bit Guardian Parental Control allows parents to set limitations on a child’s phone. In case your child would come to you with the request to unblock some app, you can start a conversation about why you denied access in the first place. By doing so, you will be teaching them to think about their actions online: a significant and invaluable skill for all digital users. Everyone in the family should be asked to follow specific predefined rules regarding device usage. Smartphones can be distracting despite immense productivity. You keep getting notifications from different applications interrupting the momentum which can potentially affect your efficiency and make you addicted. Parental control apps offer many useful features to ensure that your child’s attraction towards the smartphone does not get converted to the addiction. Bit Guardian Parental Control app is a wonderful child monitoring app that allows parents to block inappropriate apps/unwanted calls on the child’s phone. It lets you create a geofence, the virtual boundary around your child, restrict a speed limit on their vehicles, limit the screen-time, set a schedule. Download parental control app for android, to let your child navigate safely across the web. Stay up to date with the digital technology guidelines for parents and parenting tips.
<urn:uuid:de31b9b7-32f8-46c4-b645-043c484df22a>
CC-MAIN-2022-40
https://blog.bit-guardian.com/digital-technology-guidelines-for-parents/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00023.warc.gz
en
0.951333
1,350
3.03125
3
Appropriate Data Types - Choosing appropriate data types is a key factor during data modeling. It provides a good compression and avoids unnecessary type conversions or expression indexes. - Keep formats short, do not use VARCHAR(2000000) or DECIMAL(32,X) if a smaller range is sufficient. Prefer exact data types. - Prefer DECIMAL over DOUBLE if possible. Double has a wider scale but is less efficient. - Prefer CHAR(x) over VARCHAR(x) if the length of the strings is a known constant (CHAR is fixed size). - Use identical types for same content: Avoid joining columns of different data types. For example, do not join an INT column with a VARCHAR column containing digits. - See Exasol GitHub repository to know more about appropriate data types. An Exasol Cluster uses a shared-nothing architecture where data is distributed between the cluster nodes. However, if tables are small, their data is not distributed. Instead, these small tables are replicated on every node. The replication guarantees local joins are possible with these tables. The replication border is used to define what “small” means. The default value is 100000 rows. Consider to increase the replication border so that all your dimension tables are replicated if that doesn’t exhaust your node’s memory You can use the database parameter -soft_replicationborder_in_numrows=<number_of_rows> to configure the replication border. A table is considered as small and will be replicated across all active nodes if none of the two thresholds is reached. Distribution key scenarios Any table whose size exceeds the replication border (large tables) is distributed on a row-by-row basis across all active nodes. Without specifying a distribution key, the distribution is random and uniform across all nodes. Setting a proper distribution key on join columns of the same data type converts global joins to local joins and improves performance. Take the following two tables as an example: The tables are joined on the join_col and filtered on the where_col. Other columns are not displayed to keep the sketch small. Small tables like these are normally replicated, but for this example, we ignore that. A random distribution on a three node cluster may look like this: If the tables are now joined on the join_col with the following statement, the random distribution is not optimal: This is processed as a global join internally. It requires network communication between the nodes on behalf of the join because the highlighted rows don't find local join partners on the same node. You can change the distribution of both tables using the following statements, allowing the system to recognize a match between join column and table distribution. You can only set one distribution key per table. Then the same query is processed internally as a local join: Every row finds a local join partner on the same node, so the network communication between the nodes on behalf of the join is not required. The performance with this local join is better than with the global join although it’s the same SELECT statement. It’s a good idea to distribute on JOIN columns. However, it’s not good if you want to distribute columns that you use for filtering with WHERE conditions. If you distribute both tables on the where_col columns, the result will look like this: This distribution is worse than the initial random distribution. This causes global joins between the two tables and statements like <Any DQL or DML> WHERE t2.where_col='A'; utilize only one node (the first with this WHERE condition). 
This disables Exasol’s Massive Parallel Processing (MPP) functionality. This distribution leads to poor performance because all other nodes in the cluster are on standby while one node does all the work. Exasol's resource management assumes uniform load across all nodes. Therefore, this distribution leads to resources being unused even when multiple simultaneous statements operate on different filter values (for example, nodes).The MPP relies on a near-uniform distribution of data across the nodes, so columns where few values dominate the table are usually bad candidates for distribution. Distributing by a combination of columns (see Distribution on multiple columns) may alleviate this problem. The ALTER TABLE (distribution/partitioning) statement allows distribution over multiple columns in a table. However, to take advantage of this, all those columns must be present in a join. Therefore, for most scenarios, distributing by a single column is appropriate. The requirement for a local join is that the distribution columns for both tables must be a subset of the columns involved in the join. Let's look at the following example. Imagine you have two tables defined with the following DDL: CREATE TABLE T1 ( DISTRIBUTE BY X CREATE TABLE T2 ( DISTRIBUTE BY X Since these tables are both distributed by the column X, join conditions which include (but are not limited to) these columns will be local. This includes the following queries: SELECT * FROM T1 JOIN T2 ON T1.X = T2.X; SELECT * FROM T1 JOIN T2 ON T1.X = T2.X AND T1.Y = T2.Y; SELECT * FROM T1 JOIN T2 ON T1.X = T2.X AND T1.Y = T2.Y AND T1.Z = T2.Z; Since the distribution keys (T1.X and T2.X) are included in each of the joins, the entire join is operated locally. Changing the distribution key to cover multiple columns will have a negative impact in this case. For example, let's distribute the tables now by columns X and Y: Now, only the second and third examples will have local joins. A simple join on T1.X = T2.X will be a global join because the distribution keys are no longer a subset of the join conditions. Check how a new distribution key distributes rows Let's consider a scenario where you plan to change the distribution key to use column aa_00 and you want to see how it is going to distribute rows. This is a type of check that allows you to see the distribution before you implement the distribution key. Run the following statement to check the distribution of the new distribution key. SELECT value2proc(aa_000) as Node_Number, round(count(*) / sum(count(*)) over() * 100 ,2) as Percentage from <table name> group by 1; If the distribution is very uneven or skewed, it is not recommended to implement the new distribution key. - Do: Distribute on JOIN columns to improve performance by converting global joins to local joins. - Do: Pick the smallest viable set of columns that will be part of all your most frequent or expensive joins as distribution key. - Don't: Distribute on WHERE columns, which leads to global joins and disables the MPP functionality, both causing poor performance. - Don't: Add unneeded columns to a distribution key. Avoid ORDER BY Views are usually treated like tables by end users who aren't aware of an ORDER BY clause being part for the view’s query. This means that clause is superfluous and slows down the view performance. Similarly, ORDER BY clauses in sub-queries will slow down query performance and cause materialization which may not be required. 
There should be only one ORDER BY clause (if any) at the end of an SQL statement. In some cases, using an ORDER BY FALSE statement to materialize a sub-select may actually improve performance. For more information, see Enforcing Materializations with ORDER BY FALSE. Manual Index Creation Exasol creates and maintains required indexes automatically. It is not recommended to interfere in this. In rare cases, additional indexes on filter columns may improve query performance. But indexes will also slow down DML operations like INSERT. Existing indexes can be seen in EXA_DBA_INDICES and created with ENFORCE GLOBAL|LOCAL INDEX ON <table>(<columns>); Exasol recommends creating local indexes if possible. Use Surrogate Keys Joins on VARCHAR and DATE/TIMESTAMP columns are expensive compared to joins on DECIMAL columns. Joins on multiple columns generate multi-column indexes which require more space and effort to maintain internally compared to single column indexes. Use instead surrogate keys to avoid above problems. Favor UNION ALL The difference between UNION and UNION ALL is that UNION eliminates duplicates, thereby sorting the two sets. This means UNION is more expensive than UNION ALL, so in cases where it is known that no duplicates can occur or when duplicates are acceptable, UNION ALL should be used instead of UNION. If tables are too large respectively too many to fit completely into the node’s memory, partitioning large tables can help improve performance. Take these two tables as example: Say t2 is too large to fit in memory and may get partitioned therefore. In contrast to distribution, partitioning should be done on columns that are used for filtering: ALTER TABLE t2 PARTITION BY where_col; Now without taking distribution into account (on a one-node cluster), the table t2 looks like this: Partitioning changes the way the table is physically stored on disk. It may take much time to complete such an ALTER TABLE (distribution/partitioning) command with large tables. A statement like SELECT * FROM t2 WHERE where_col=’A’; would have to load only the red part of the table into memory. Should the two tables reside on a three-node cluster with distribution on the join_col columns and the table t2 partitioned on the where_col column, they look like this: Again, this means that each node has to load a smaller portion of the table into memory if statements are executed that filter on the where_col column while joins on the join_col column are still local joins. EXA_(USER|ALL|DBA)_TABLES (see Metadata System Tables) shows both the distribution key and the partition key if any. Exasol will automatically create an appropriate number of partitions. Use Jumbo Frames The maximum transmission unit (MTU) can be set for the private and public networks in EXAoperation. Exasol data blocks have a minimum size of 4kB which does not fit into the default MTU size of 1500 bytes. For best performance we recommend using Jumbo Frames (MTU size of 9000 bytes). All network components must then be enabled to use MTU 9000. MTU 9000 must also be enabled in the license server. Jumbo Frames are not supported with GCP. The tables in the Statistical System Tables schema can be used to monitor the health of the database. They come with detailed values for the last 24 hours and aggregated values on an hourly, daily and monthly basis: EXA_DB_SIZE*, EXA_MONITOR*, EXA_SQL* and EXA_USAGE*. Other important tables to monitor are EXA_DBA_TRANSACTION_CONFLICTS and EXA_SYSTEM_EVENTS. 
The meta view EXA_SYSCAT lists all these tables together with a description. A high CPU utilization is normal for a healthy system. Low CPU utilization (on a busy system) is an indicator that a bottleneck exists somewhere else. CPU_MAX from EXA_MONITOR_DAILY should be normally above 85% therefore. Analytical queries often consume a lot of memory for sort operations. The EXA_MONITOR* tables should be monitored to confirm that the total TEMP memory consumption is below 20% of the total available database RAM, while the EXA_SQL* and EXA_*_SESSIONS tables should be monitored to confirm that a single query doesn’t consume more than 10% of available database memory. Exasol performs well with parallel in-memory processing analytical queries. This requires sufficient memory availability. The EXA_DB_SIZE* tables have RECOMMENDED_DB_RAM_SIZE where the system reports if it would be beneficial to add more RAM to the data nodes. Avoid the USING Syntax The use of the USING syntax in queries containing multiple joins leads to an inefficient execution and high amount of heap memory usage by the SQL processes. In the above example, USING(id) is not just syntactic sugar for ON T1.id = T2.id. The database has to decide the values for the columns that appear in both the tables. For example, when T1.name has a NULL value, and T2.name is different from NULL, then the value of the output column name will be T2.name. This effort increases exponentially on the number of joins. Instead, you can use the following statement for a better performance.
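One straightforward rewrite of that example with an explicit join condition (reconstructed here for illustration from the T1 and T2 columns mentioned above) is: SELECT * FROM T1 JOIN T2 ON T1.id = T2.id; Spelling out the ON clause keeps each table's columns distinct and avoids the column-merging work that USING triggers, which is where the extra heap memory usage comes from.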
<urn:uuid:df39e14a-1988-4701-ab93-11e0ff575dcb>
CC-MAIN-2022-40
https://docs.exasol.com/db/7.0/performance/best_practices.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00223.warc.gz
en
0.846134
2,808
2.53125
3
What about this course? OpenFlow is the technology that started the SDN movement; the two terms were often used interchangeably in the early days. Despite being an important component of any SDN discussion, OpenFlow remains one of the more mysterious topics in the traditional network engineer's tool belt. The course aims to demystify the technology by using free, open-source tools such as Python, Mininet, and the POX controller to build different network applications. In the process, we will gain the knowledge necessary to build an OpenFlow network. The applications will progress from a basic hub, to a learning switch, to spanning tree, to interacting with other network components.
Instructor for this course
This course is composed of the following modules:
Lab and Setup
Quick Start Example
Learning Switch Lab 1
Learning Switch Lab 2
Command Line Options
Command Line Demonstration
Simple Router Example Overview
Simple Firewall Example Overview
Simple Firewall Example Lab
Interacting with Physical Switch Overview
Interacting with Physical Switch Lab 1
Interacting with Physical Switch Lab 2
Common Course Questions
If you have a question you don't see on this list, please visit our Frequently Asked Questions page by clicking the button below. If you'd prefer getting in touch with one of our experts, we encourage you to call one of the numbers above or fill out our contact form.
Do you offer training for all student levels?
Are the training videos downloadable?
I only want to purchase access to one training course, not all of them, is this possible?
Are there any fees or penalties if I want to cancel my subscription?
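To give a feel for what the lab environment looks like, here is a minimal sketch (not part of the course material) of a Mininet topology written in Python that attaches to an external OpenFlow controller such as POX. It assumes Mininet is installed and a POX controller is already listening on 127.0.0.1:6633; the topology, address, and port are assumptions, and the course labs may use different settings.

```python
# Minimal Mininet topology attached to a remote OpenFlow controller (e.g. POX).
# Run with root privileges (Mininet requires it); start the controller separately.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.node import RemoteController
from mininet.cli import CLI

class SingleSwitchTopo(Topo):
    def build(self):
        s1 = self.addSwitch('s1')            # one OpenFlow switch
        for i in range(1, 4):                # three hosts attached to it
            h = self.addHost('h%d' % i)
            self.addLink(h, s1)

if __name__ == '__main__':
    net = Mininet(
        topo=SingleSwitchTopo(),
        controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633),
    )
    net.start()
    net.pingAll()    # basic connectivity test through the OpenFlow switch
    CLI(net)         # drop into the Mininet CLI for further experiments
    net.stop()
```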
<urn:uuid:0c0cdeee-7758-43c2-8ed0-a02043a3f91d>
CC-MAIN-2022-40
https://ine.com/learning/courses/learn-sdn-with-open-flow-and-ryu-controller
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00223.warc.gz
en
0.87304
370
2.546875
3
Do you know what makes a successful entrepreneur? According to Harvard Business School academics it's a combination of skill, luck and good timing. What makes some entrepreneurs successful and others unsuccessful? It’s a combination of skill, luck and good timing, according to Harvard Business School academics. In their paper “Performance Persistence in Entrepreneurship,” Paul Gompers, Anna Kovner, Josh Lerner and David Scharfstein looked into the factors behind an entrepreneur’s success (in this case, starting a company that subsequently goes public). Here are the key points: Serial entrepreneurs really are more successful than first-timers • An entrepreneur who is backed by venture capital and succeeds in one venture has a 30% chance of succeeding in his next venture, according to the study. • Whereas, a rookie entrepreneur has only an 18% chance of succeeding, while those who failed before have a 20% chance of succeeding in a future endeavor. So the lesson is "If at first you don’t succeed, try again." Success breeds more success If you’re a good entrepreneur then you’ll succeed. But the perception of success may be a factor, too. • Entrepreneurs who have had success in the past are more likely to attract capital and critical resources. • In addition, higher quality people and potential customers are more likely to be attracted to that firm, because they think it has a better likelihood of success. That investors choose to back it probably increases the venture’s chances of success. So success breeds further success, even if the entrepreneur was just lucky first time out. Market timing is a skill "A good year" isn't just a term used for fine wine. Choosing the right time to set up a venture is a knack that successful entrepreneurs have. • Of those computer companies that set up in 1983, 52% eventually went public, i.e., they were successful. • Whereas, of those computer companies that were created in 1985, only 18% went public – they missed the tide. If an entrepreneur set up a company in a "good industry year" (in which success rates were high) they are more likely to succeed in their next venture. They have a skill for choosing to start up in the right industry at the right time. Entrepreneurs who start up a new venture in a good industry year are more likely to invest in a good industry year in their next ventures, the study finds. Companies backed by top-tier venture capital firms are more likely to succeed The top VC firms help companies succeed, either because they are better at spotting good companies and entrepreneurs, they help it attract better resources or help it to formulate a better small business plan. Interestingly, top-tier VC firms were only observed as adding to the success of small business start ups by first-time entrepreneurs or by those that failed in a previous venture. Successful entrepreneurs don’t need a top-tier VC If an entrepreneur with a track record of success starts a company it is no more likely to succeed if it is funded by a top-tier VC firm than a lesser known firm. That’s because if successful entrepreneurs are better, then top-tier venture capital firms have no advantage identifying them (because success is public information), and they add little value. And if successful entrepreneurs have an easier time attracting high-quality resources and customers because of the perception that they’re successful, then top-tier venture capital firms add little value. 
In fact, another study cited by the Harvard academics found it was rare for serial entrepreneurs to receive backing from the same VC firm across all their ventures and that relationships with VC firms play little role in enhancing performance. Those entrepreneurs that invest in proper liability insurance to protect their ventures have a better chance of succeeding, because their plans are less likely to be blown out of the water by an unexpected and expensive legal disaster or claim. At Hiscox, we’re here to help small business entrepreneurs realize their dreams of success.
<urn:uuid:0e209e61-8239-4608-8598-76149dae7729>
CC-MAIN-2022-40
https://www.hiscox.com/blog/do-you-know-what-makes-successful-entrepreneur
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00223.warc.gz
en
0.959324
839
2.5625
3
A shared secret is a piece of data that is known to two or more parties. Shared secrets are most commonly recognized in the form of passwords, which are known to both the service provider and the end user. A shared secret can be plaintext or any other piece of data, so long as it is known to the two or more distinct parties. Commonly used in cryptography, a shared secret allows all parties to encrypt and decrypt information with symmetric encryption algorithms. Mishandling of shared secrets is a leading cause of identity theft, financial fraud, account takeover (ATO), and mass data breaches. Once in the hands of a hacker, shared secrets enable these bad actors to impersonate the legitimate user and abuse their rights as a consumer or employee. "Shared secrets come in many forms, but the most popular ones used every day are passwords, PINs, and credit card numbers. Even 2-factor codes are shared secrets."
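As a minimal sketch of the symmetric-encryption use mentioned above, the snippet below shows both parties deriving the same key from a shared secret and using it to encrypt and decrypt a message. It uses the third-party `cryptography` package; the secret, salt, and iteration count are illustrative placeholders, not a recommendation for any specific product or protocol.

```python
# Sketch: both parties derive the same symmetric key from a shared secret and
# use it to encrypt/decrypt messages. Secret, salt, and parameters are placeholders.
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_secret(secret: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(secret))

shared_secret = b"correct horse battery staple"   # known to both parties
salt = b"agreed-upon-salt"                        # may be exchanged openly

key = key_from_secret(shared_secret, salt)
token = Fernet(key).encrypt(b"account balance: 1234")   # sender side
plain = Fernet(key).decrypt(token)                       # receiver side, same secret
print(plain)
```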
<urn:uuid:d053a422-54be-472e-9bb0-26e07bcd707d>
CC-MAIN-2022-40
https://www.hypr.com/security-encyclopedia/shared-secrets
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00223.warc.gz
en
0.947874
191
3.265625
3
What is FIDO Authentication? FIDO, which stands for Fast IDentity Online, is a set of authentication standards aimed at strengthening the user login process to online services. The standards are developed by the FIDO Alliance and promote faster, more secure authentication processes with the overall goal of eliminating password-based logins altogether. The first set of FIDO standards were released in 2014 where members of the FIDO Alliance included Google, PayPal, NXP and Infineon. Latterly the membership has grown to include Apple, Amazon, ARM, Intel, MasterCard, Microsoft, Samsung and Visa to name just a few. In This Post How Does FIDO Authentication Work? FIDO uses public-key cryptography to underpin its security. When a user registers with an online service, such as a website or web application, the user’s FIDO device generates a new keypair. The public key is shared with the online service while the private key is kept secure in the device and is never revealed. At subsequent logins the online service issues a challenge which the FIDO device signs internally using the private key. The signature is then returned to the online service where it can be verified using the stored public key. Crucially, the device will only perform the signing operation after the user has confirmed their presence. This can be accomplished by one of several methods including pressing a button on the device or scanning their fingerprint (in the case of a biometric key). This requirement is to prevent malware from using an attached FIDO key without the user’s knowledge. By requiring the user to perform an action they are asserting that they are present and they want the authentication to proceed. This is referred to as "user presence detection". Discover: View our range of FIDO Security Keys FIDO 1.0 (U2F and UAF) The first generation of FIDO specifications described two protocols, namely Universal 2nd Factor (U2F) and the Universal Authentication Framework (UAF). FIDO U2F describes the "second factor experience". This is where the user possesses a separate FIDO-compliant device such as a USB security key which they must use to login in addition to their password. This supplemental method of authentication mitigates against phishing attacks by requiring the user to present something they have, the FIDO security key, in addition to something they know like their password. This is two-factor authentication (2FA) that relies on a separate dedicated hardware device. U2F defined the client-side protocol used to communicate with the FIDO security key called the Client to Authenticator Protocol (CTAP). This protocol was designed to operate over USB, near-field communication (NFC), or Bluetooth. FIDO UAF on the other hand was designed to provide a passwordless experience. In UAF the user registers their UAF-enabled device (eg, their smartphone) with an online service and selects a local authentication method supported by that device, primarily a biometric method such as fingerprint or facial recognition. The user is then able to access the online service from that device in future, authenticate locally using that device and without the need to enter a password. FIDO2 (WebAuthn and CTAP2) While FIDO 1.0 was basically two separate protocols, FIDO 2.0 (or just simply, FIDO2) is an effort to combine features of U2F and UAF and bring strong authentication into the mainstream. 
It is doing this through global standardization by the World Wide Web Consortium (W3C) and integration into compliant web browsers including Chrome, Firefox, and Microsoft Edge. FIDO2 is made up of two parts:
- WebAuthn - the Web Authentication API standardized by the W3C, which allows online services to request FIDO authentication directly from the browser.
- CTAP2 - an expanded version of the Client to Authenticator Protocol.
By making FIDO authentication easy for users through simple, clean integration into online services, FIDO2 aims to make strong authentication ubiquitous and eventually eliminate the traditional password-based login altogether.
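The challenge/response flow described earlier can be sketched in a few lines of Python using Ed25519 keys from the `cryptography` package. This is a deliberate simplification for illustration only: real FIDO2/WebAuthn additionally involves origin binding, attestation, signature counters, and CBOR encoding, none of which are modeled here.

```python
# Simplified sketch of the FIDO challenge/response idea (not real WebAuthn/CTAP2).
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a keypair; only the public key is shared.
device_private_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a random challenge ...
challenge = os.urandom(32)

# ... the authenticator signs it after the user confirms presence
# (button press or fingerprint scan) ...
signature = device_private_key.sign(challenge)

# ... and the server verifies the signature with the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```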
<urn:uuid:799a2dda-2a68-467e-b729-996c2f52ebd5>
CC-MAIN-2022-40
https://de.microcosm.com/blog/what-is-fido-authentication
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00223.warc.gz
en
0.925933
907
2.859375
3
As multi-factor authentication (MFA) becomes a mandatory security practice, especially for almost all cyber insurance policies, cyber criminals are seeking new ways to break through these walls to steal your data. Multi-factor authentication (MFA) is the authentication method that requires the user to provide two or more verification components to gain entry to an asset, such as an application, online account, or a VPN. Two-factor authentication (2FA) is a type of authentication that requires two factors to verify the user’s identity. According to CSO Online, cyber criminals are finding new ways to exploit multi-factor authentication, such as SMS-based man-in-the-middle attacks, supply chain attacks, compromised MFA authentication workflow bypass, pass-the-cookie attacks, and server-side forgeries. Recent MFA Attacks In recent news, cyber criminals stole $34M in funds from 483 Crypto.com users. It appears they don’t know exactly how these criminals got by their two-factor authentication. According to American Banker, it seems that Crypto.com, a cryptocurrency exchange company, provided one-time passwords — these are usually six-digit codes provided via text message or in a multi-factor authentication app — to affected users after hackers initiated a transaction from their compromised account. With little information released, it appears to have been an MFA attack. However, it very easily could have been a different vulnerability including a zero-day attack. Luckily for Crypto.com users, Crypto.com said they will be reimbursing customers who were affected by this attack. Another recent 2FA attack included the company Coinbase, another cryptocurrency exchange company, where more than 6,000 Coinbase users had funds stolen from their accounts after hackers used a vulnerability in Coinbase’s SMS-based two-factor authentication system to breach accounts, according to The Record. Coinbase said that a third party took advantage of a flaw in Coinbase’s SMS Account Recovery process in order to receive an SMS two-factor authentication token and gain access to these accounts. Cyber Insurance Requirements Every cyber insurance policy is different, but a majority often address costs associated with operational disruption, data loss, incident response and investigation, crisis management, ransomware payment, and legal expenses. In recent years, organizations must implement multi-factor authentication as a requirement of their cyber insurance coverage. Without MFA, clients risk non-renewal or a retention hike of 100% or more. In the past, insurers have identified MFA as being among the most effective risk management tools for preventing ransomware attacks, but with cyber criminals producing new ways to bypass MFA, there need to be additional preventions in place. “Multi-factor authentication is good – but it’s only as good as the way it is implemented or the quality of the third-party provider. I believe we can expect cyber insurance requirements to become stricter in the future, as we continue to see these attacks in the news,” said Joe Jeanjaquet, Eclypses Senior Director of Applied Technologies. Looking Towards the Future From these recent attacks, it shows how difficult it is to set up all the pieces of security and have these systems work together correctly. Whether it’s an MFA attack, zero-day attack, or any other common cyber threats out there, it is important to focus on securing the data at the application level. 
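The six-digit one-time passwords referenced in the incidents above are typically time-based codes in the style of RFC 6238 (TOTP). The sketch below shows how such a code is derived from a shared secret; the secret is a placeholder and the point of the illustration is that the resulting code is itself a short-lived shared secret, so anyone who phishes or intercepts it within its validity window can replay it.

```python
# Sketch of deriving a six-digit, time-based one-time password (TOTP, RFC 6238 style).
# The shared secret is a placeholder for illustration only.
import hmac, hashlib, struct, time

def totp(secret: bytes, period: int = 30, digits: int = 6, now=None) -> str:
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp(b"placeholder-shared-secret"))
```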
Eclypses MTE technology allows you to control your customer data and stop trusting things you do not control, like third-party providers. Eclypses MTE technology is an easy-to-use data security module that enables consistent security to be easily orchestrated across the entirety of an environment. MTE protects the data as soon as it is generated to remove the data exposure risk that is associated with setting securities up incorrectly or zero-day vulnerabilities in the operating system or communication protocol. “MFA and 2FA are not enough, it needs to be used in a more sophisticated way than it is currently. The current method of just echoing data back to the server over the communication protocol leaves it very vulnerable. By incorporating MFA into the initialization of a security protocol like MTE it serves a more significant role that enhances the synchronization between endpoints,” comments Aron Seader, Eclypses Senior Director of Core Engineering.
<urn:uuid:088c6462-a058-41cb-b549-ae26fb3411c2>
CC-MAIN-2022-40
https://eclypses.com/news/hackers-are-bypassing-multi-factor-authentication-mfa-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00223.warc.gz
en
0.946604
893
2.71875
3
In this lesson we'll take a look at different protocols for gateway redundancy. So what is gateway redundancy and why do we need it? Let's start with an example!
Picture a fairly simple network: one computer connected to a switch. In the middle there are two multilayer switches (SW1 and SW2) that both have an IP address that could be used as the default gateway for the computer. Behind SW1 and SW2 there's a router that is connected to the Internet. Which gateway should we configure on the computer, SW1 or SW2? You can only configure one gateway, after all. If we pick SW1 and it crashes, the computer won't be able to get out of its own subnet because it only knows about one default gateway.
To solve this problem we create a virtual gateway: between SW1 and SW2 we'll create a virtual gateway with its own IP address, in this example 192.168.1.3. The computer will use 192.168.1.3 as its default gateway. One of the switches will be the active gateway, and if it fails the other one takes over.
There are three different protocols that can create a virtual gateway:
- HSRP (Hot Standby Router Protocol)
- VRRP (Virtual Router Redundancy Protocol)
- GLBP (Gateway Load Balancing Protocol)
In the next lessons I will explain each of these protocols and show you how to configure them. For now, I hope this lesson has helped you understand why we need a virtual gateway in the network.
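Before getting into the individual protocols, here is a small, purely conceptual Python sketch of priority-based active/standby election and failover. It is an illustration of the general idea only, not actual HSRP, VRRP, or GLBP behavior or configuration (real protocols use hello timers, preemption rules, and virtual MAC addresses, which are not modeled here).

```python
# Toy model of priority-based active/standby gateway election. Illustrative only;
# this is not how HSRP/VRRP/GLBP are actually implemented or configured.
from dataclasses import dataclass

@dataclass
class Gateway:
    name: str
    priority: int
    alive: bool = True

VIRTUAL_GATEWAY_IP = "192.168.1.3"   # the address the computer actually uses

def elect_active(gateways):
    candidates = [g for g in gateways if g.alive]
    return max(candidates, key=lambda g: g.priority) if candidates else None

sw1 = Gateway("SW1", priority=110)
sw2 = Gateway("SW2", priority=100)

print(elect_active([sw1, sw2]).name)   # SW1 answers for 192.168.1.3

sw1.alive = False                      # SW1 crashes ...
print(elect_active([sw1, sw2]).name)   # ... SW2 takes over; the PC keeps the same gateway IP
```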
<urn:uuid:642446b1-162b-49af-bfe9-97fcfc2746d1>
CC-MAIN-2022-40
https://networklessons.com/cisco/ccie-routing-switching-written/introduction-gateway-redundancy
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00223.warc.gz
en
0.904473
350
3.8125
4
Information technology attacks that penetrate or shut down company networks are now part of everyday news. Companies from large enterprises to small businesses fall victim to security breaches due to improper implementation of security controls, which leads to loss of customer trust, damage to market reputation and, of course, loss of revenue. According to a report by IBM, 2021 had the highest average data breach cost in 17 years, at $4.24 million. As cyberattacks become more sophisticated and more common, companies are increasingly adopting security measures to secure their systems with solutions and services such as penetration testing, vulnerability scanning, and vulnerability management. Penetration testing, also known as pen testing, is an integral part of any comprehensive cybersecurity strategy. It is a common misconception that penetration tests are always carried out with zero knowledge of the target; that is not always the case. Gray box penetration testing is a type of penetration testing in which the pentesters have partial knowledge of the network and infrastructure of the system they are testing. The pentesters then use this understanding of the system to do a better job of finding and reporting vulnerabilities in it. In a sense, a gray box test is a combination of a black box test and a white box test: a black box test is done from the outside in, with the tester knowing nothing about the system before testing it, while a white box test is done from the inside out, with the tester having full knowledge of the system before testing it. In this blog, we will only discuss gray box penetration testing and provide you with enough information about it. Why Gray Box Penetration Testing? Gray box penetration testing is a method of pen testing that attempts to combine the best of both the black box and white box methodologies. A successful gray box pentest requires a solid understanding of the target environment before any testing takes place. This partial-knowledge approach is why gray box penetration testing is often used in more controlled environments, such as military and intelligence agencies. That said, the approach can be applied effectively to any environment with the proper planning and experience. Gray box testing not only allows you to test the security of the network but also the security of the physical environment. It is especially useful when a test involves a breach of a perimeter device, such as a firewall. Gray box tests also use a combination of penetration testing techniques, including network scanning, vulnerability scanning, social engineering, and manual source code review. This provides valuable insight into the amount of damage a hacker or attacker could cause. How does Gray Box Penetration Testing differ from black box and white box? Penetration testing is divided into three categories: black box, white box, and gray box.
Let's understand the differences between these three:
| S No. | Black Box Penetration Testing | Gray Box Penetration Testing | White Box Penetration Testing |
| 1 | Little or no knowledge of the network and infrastructure is required. | Partial knowledge of the infrastructure, internal codebase and architecture. | Complete access to the organization's infrastructure, network and codebase. |
| 2 | Black box testing is also known as closed box testing. | Gray box testing is also known as translucent testing. | White box testing is also known as clear box testing. |
| 3 | No syntactic knowledge of the programming language is required. | Requires a partial understanding of the programming language. | Requires a deep understanding of the programming language. |
| 4 | Black box testing techniques are executed by developers, user groups and testers. | Performed by third-party services or by testers and developers. | The organization's internal development team can perform white box testing. |
| 5 | Some standard black box testing techniques are boundary value analysis, equivalence partitioning, and graph-based testing. | Some standard gray box testing techniques are matrix testing, regression testing, orthogonal array testing, and pattern testing. | Some standard white box testing techniques are branch testing, decision coverage, path testing, and statement coverage. |
5 steps to perform Gray Box Penetration Testing
Gray box penetration testing is usually performed in the 5 steps described below:
1. Planning and Requirements Analysis: This phase includes understanding the scope of the application and the tech stack being used. The security team also requests some application-related information, such as dummy credentials, access roles, etc. Preparing a documentation map is also part of this phase.
2. Discovery Phase: This phase is also known as reconnaissance and includes discovering the IP addresses being used, hidden endpoints, and API endpoints. The discovery phase is not limited to networks; it also includes gathering information about employees and their data, which feeds social engineering.
3. Initial Exploitation: Initial exploitation includes planning what kind of attacks will be launched in the later steps. This phase also includes finding misconfigurations in servers and cloud-based infrastructure. The requested information helps the security team create various attack scenarios, such as privilege escalation. Scanning behind the login is also possible at this stage.
4. Advanced Penetration Testing: This phase includes launching all planned attacks on the discovered endpoints, including social engineering attacks based on the collected employee information. Furthermore, the vulnerabilities found are combined to create real-life attack situations.
5. Document & Report Preparation: The last step is preparing a detailed report of every endpoint tested along with a list of the attacks launched.
Top 3 gray box penetration testing techniques
Gray box pentesting uses various techniques to generate test cases. Let's look at some of them in detail:
1. Matrix testing
Matrix testing is a software testing technique that helps to test the software thoroughly. It is the technique of identifying and removing all unnecessary variables. Programmers use variables to store information while writing applications, and the number of variables should be as per requirement.
Otherwise, it will reduce the efficiency of the program. 2. Regression testing Regression testing is retesting the software components to find defects introduced by the changes made previously or in first the testing iteration. Regression testing is also known as retesting. It is performed to ensure that weaknesses are not introduced or reintroduced into a software system by modifications after the initial development. Regression Testing is an essential part of software testing because it helps to ensure that newly introduced software features continue to work as intended. 3. Orthogonal Array Testing Orthogonal array testing is a software testing technique used to reduce test cases without reducing the test coverage. Orthogonal array testing is also known as Orthogonal array method (OAM), Orthogonal array testing method (OATM), and Orthogonal test set. What are the benefits of gray box penetration testing? 1. Insider Information: Gray box testing is a perfect blend of black-box testing with knowledge of specific internal structures (or “inside knowledge”) of the item being tested. This inside knowledge could be available to the tester in the form of design documentation or code. 2. Less time consuming: With insider knowledge, testers can plan and prioritize the testing, which will take less than planning test cases with no understanding of the network or codebase. 3. Non-intrusive and unbiased: Gray box test, which is also called non-intrusive and fair. It is said to be the best way to analyze the system without the source code. The gray box test treats the application as a black box. The tester will know how program components interact with each other but not about the detailed program functions and operations. How does gray box testing help secure your system? Gray box penetration testing combines the best of black box and white box testing where the tester is provided with some knowledge of the application’s inner workings. In a typical black-box test, you don’t need to know anything about the application to find and verify the defects. This is to simulate how the actual user will experience the application. In a gray box test, you already know some information about the application, allowing the tester to act better on how the actual user will experience the application. One of the best ways to test your defense is with an outsider threat. Let’s say you are protecting your environment with “standard” security controls. An outsider is anyway going to get in if they want to. So it doesn’t make sense to invest too much time or money in trying to stop an outsider that is motivated enough. Instead, you need to know how they will behave once they are in. And the best way to do this is with a gray box test. Applying gray box penetration testing will help you secure your system from outside attacks and malicious insiders. In a gray box test, pentesters already know some information about the application, allowing them to simulate better how the actual user will experience the application. This means you will be able to test the application with a more extensive set of test cases, which will help you find errors, exploits, and security flaws before cybercriminals find them. Why Astra’s Pentest Suite is a perfect fit for you? All 3 types of penetration testing techniques have their own pros and cons but which one is perfect for you? 
Astra’s pentest suite is equipped with real-life hacking intelligence gathered from 1000+ vulnerability assessments and penetration tests (VAPT) done by our security experts on varied applications. Say NO to the old boring way to test your organization’s security. Astra’s Vulnerability Scanner is ever learning from new CVEs, bug bounty data & intelligence gathered from pentest we do for companies in varied industries. Your CXOs get a birds-eye view on the security posture of your organization with data-backed insights which help them make the right decisions. In addition, to ensure utmost security We here at Astra believe in ‘proactive security’ measures where we anticipate the infiltration techniques used by hackers and recommend additional security countermeasures keeping your and your customer’s data secure. Features of Astra’s pentest suite: - Self-served, on the cloud continuous scanner that runs 2500+ test cases covering OWASP, SANS, ISO, SOC, etc. - Rich and easy-to-understand dashboard with graphical representation that helps with vulnerability & patch management. - Developer & CXO level reporting. - Team collaboration options for assigning vulnerabilities for fix. - Multiple asset management under the same scan project. - Dedicated ‘Vulnerabilities’ section that offers insights on vulnerability impact, severity, CVSS score, potential loss (in $). - Comprehensive scanner that includes all the mandatory local and global compliance requirement checks. 1. What is gray box penetration testing? Gray box pentesting refers to the approach where the pentester receives partial information about the system before the test. 2. What are the 5 stages of penetration testing? The 5 stages of penetration testing are planning, information gathering and recon, scanning, exploitation, and reporting. Find out more on penetration testing guide 3. Why Gray Box Penetration Testing? Gray box pentesting allows you to understand how much damage a user with limited privilege can cause.
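To make the discovery/reconnaissance phase described earlier a little more concrete, here is a minimal TCP connect-scan sketch in Python. The target host and port range are placeholders, this is not part of any vendor's toolkit, and such scans should only ever be run against systems you are explicitly authorized to test.

```python
# Minimal TCP connect-scan sketch for the discovery/reconnaissance phase.
# Only run this against systems you are explicitly authorized to test.
# Host and port range below are placeholders.
import socket

def scan(host: str, ports) -> list:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:   # 0 means the TCP handshake succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan("127.0.0.1", range(1, 1025)))      # well-known ports on localhost
```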
<urn:uuid:29cb0f9b-8b52-4fb8-9c7c-d986381a5ab9>
CC-MAIN-2022-40
https://www.getastra.com/blog/security-audit/gray-box-penetration-testing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00223.warc.gz
en
0.919793
2,365
2.65625
3
Thin provisioning (TP) is a method of optimizing the efficiency with which the available space is utilized in storage area networks (SAN). TP operates by allocating disk storage space in a flexible manner among multiple users, based on the minimum space required by each user at any given time. In computing, thin provisioning involves using virtualization technology to give the appearance of having more physical resources than are actually available. If a system always has enough resource to simultaneously support all of the virtualized resources, then it is not thin provisioned. The term thin provisioning is applied to disk layer in this article, but could refer to an allocation scheme for any resource. For example, real memory in a computer is typically thin-provisioned to running tasks with some form of address translation technology doing the virtualization. Each task acts as if it has real memory allocated. The sum of the allocated virtual memory assigned to tasks typically exceeds the total of real memory. Thin Provisioning is a storage area network (SAN) management process where the storage capacity for a device is reserved and allocated on demand through a shared storage pool. Thin provisioning is also known as virtual provisioning. However, thin provisioning relates to physical computing environments. The efficiency of thin or thick/fat provisioning is a function of the use case, not of the technology. Thick provisioning is typically more efficient when the amount of resource used very closely approximates to the amount of resource allocated. Thin provisioning offers more efficiency where the amount of resource used is much smaller than allocated, so that the benefit of providing only the resource needed exceeds the cost of the virtualization technology used. Just-in-time allocation differs from thin provisioning. Most file systems back files just-in-time but are not thin provisioned. Overallocation also differs from thin provisioning; resources can be over-allocated / oversubscribed without using virtualization technology, for example overselling seats on a flight without allocating actual seats at time of sale, avoiding having each consumer having a claim on a specific seat number. Thin provisioning is a mechanism that applies to large-scale centralized computer disk-storage systems, SANs, and storage virtualization systems. Thin provisioning allows space to be easily allocated to servers, on a just-enough and just-in-time basis. Thin provisioning is called “sparse volumes” in some contexts. Thin Provisioning vs Thick Provisioning In virtual storage, thick provisioning is a type of storage allocation in which the amount of storage capacity on a disk is pre-allocated on physical storage at the time the disk is created. This means that creating a 100GB virtual disk actually consumes 100GB of physical disk space, which also means that the physical storage is unavailable for anything else, even if no data has been written to the disk. Thick provisioning contrasts with thin provisioning, which provisions storage on an as-needed basis. Thin provisioning helps to avoid wasted physical capacity and can save businesses on up-front storage costs. However, thick provisioning has the benefit of less latency because all storage is allocated at once when virtual machines are created. 
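To illustrate the difference just described, here is a small, purely conceptual Python sketch of a storage pool handing out thinly provisioned volumes: logical capacity is promised (and can be oversubscribed) at creation time, while physical space is consumed only as data is written. This is an illustration of the concept, not a model of how any particular SAN, hypervisor, or the Arxys product implements it.

```python
# Conceptual sketch of thin provisioning: logical capacity can be promised
# (and even oversubscribed) while physical space is consumed only on write.
class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.volumes = {}                 # name -> {"logical": ..., "written": ...}

    def create_volume(self, name: str, logical_gb: int):
        # Thin: nothing is consumed yet; a thick volume would take logical_gb now.
        self.volumes[name] = {"logical": logical_gb, "written": 0}

    def write(self, name: str, gb: int):
        vol = self.volumes[name]
        if vol["written"] + gb > vol["logical"]:
            raise ValueError("write exceeds the volume's logical size")
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: oversubscription caught up with the pool")
        vol["written"] += gb
        self.used_gb += gb

pool = ThinPool(physical_gb=100)
pool.create_volume("vm1", logical_gb=100)
pool.create_volume("vm2", logical_gb=100)   # 200 GB promised on 100 GB of disk
pool.write("vm1", 30)
print(pool.used_gb)                          # 30: only written data uses physical space
```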
Arxys | Sentio key data storage features and benefits - Acceleration of random writes with Write Log (ZIL) - Hybrid Read Cache with first level in RAM (ARC -Adaptive Replacement Cache) and second Read Cache level on an SSD or NVMe (L2ARC) - High cache hit rate by using ARC’s most recently used (MRU) and most frequently used (MFU) Read Cache algorithms - Data resilvering (rebuilding of used space) - Compression and Deduplication for improved performance - iSCSI,Fibre Channel (FC), NFS, SMB (CIFS) - Use of Virtual IPs - High-Availability Dual-Controller Clustering - Active-Active and Active-Passive HA Cluster - Dual Storage (Stretched) Metro HA Cluster and Common Storage Cluster architectures - Data and metadata check-summing - Self-healing to detect and correct silent data errors with Scrub utility - Atomic Transaction Writes to keep data consistent - Transactional copy-on-write I/O operations eliminate silent data corruption and data fragmentation - Storing multiple copies of user data - Online Compression ( LZ4, LZJB, ZLE, GZIP 1..9 ) - Online Deduplication - Thick, Thin and Over-Provisioning - Thin Provisioning leveraging virtualization - Unlimited Snapshots and Clones File System & Storage Virtualization - 128-bit ZFS file system - Pooled storage model - Unlimited scalability with storage expansion during production - Unlimited file and volume size - Nagios, Check_MK - GUI statistics
<urn:uuid:adc4685f-9012-4799-a929-2a3cf89ef1d6>
CC-MAIN-2022-40
https://www.arxys.com/thin-provisioning-using-virtualization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00223.warc.gz
en
0.890469
1,034
3.53125
4
You may have seen an excellent program in many of our schools called Character Counts. It’s a wonderful school program that encourages positive behavior in addition to academics. One of the core paradigms of the program is that teaching a child to be a good person is just as important as getting good grades and getting into good a college. At D&D, we believe in living by the “Six Pillars of Character” as promoted by Character Counts program – that includes our relationship with vendors and suppliers. We also seek out and recruit employees with these same character traits, and we encourage employees to adhere to them in day to day business: - Good Citizens When evaluating new suppliers, we think “character” should be a factor in your decision making process. A vendor who is honest and cares, is a critical factor when you are trying to meet important deadlines and deliverables. TIP: Ask the vendor for customer references to check on these six character traits. Be honest • Don’t deceive, cheat, or steal • Be reliable — do what you say you’ll do • Have the courage to do the right thing • Build a good reputation • Be loyal — stand by your family, friends, and country Treat others with respect; follow the Golden Rule • Be tolerant and accepting of differences • Use good manners, not bad language • Be considerate of the feelings of others • Don’t threaten, hit or hurt anyone • Deal peacefully with anger, insults, and disagreements Do what you are supposed to do • Plan ahead • Persevere: keep on trying! • Always do your best • Use self-control • Be self-disciplined • Think before you act — consider the consequences • Be accountable for your words, actions, and attitudes • Set a good example for others Play by the rules • Take turns and share • Be open-minded; listen to others • Don’t take advantage of others • Don’t blame others carelessly • Treat all people fairly Be kind • Be compassionate and show you care • Express gratitude • Forgive others • Help people in need Do your share to make your school and community better • Cooperate • Get involved in community affairs • Stay informed; vote • Be a good neighbor • Obey laws and rules • Respect authority • Protect the environment • Volunteer
<urn:uuid:541927f6-6edf-4681-a226-89e6f9b30a2a>
CC-MAIN-2022-40
https://ddsecurity.com/2014/08/13/character-counts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00223.warc.gz
en
0.916939
527
2.71875
3
Zero trust is gaining momentum. Understanding what zero trust is, and how to improve it, is imperative to cybersecurity. The zero-trust model was created by John Kindervag in 2010, when he was a principal analyst at Forrester Research Inc. The zero-trust architecture is a powerful, holistic security strategy that is helping to drive businesses faster and more securely. A zero-trust architecture eliminates the idea of a trusted network inside a defined perimeter. In other words, it is a security model that focuses on verifying every user and device, both inside and outside an organization’s perimeters, before granting access. The zero-trust model: The zero-trust approach is primarily focused on protecting data and services, but it should be expanded to include all enterprise assets (devices, infrastructure components, applications, virtual and cloud components) and subjects (end users, applications, and other non-human entities that request information from resources). In the past, perimeter security approaches followed a simple paradigm: “Trust but verify.” While the user experience was better, evolving cybersecurity threats are now pushing organizations to reexamine their postures. In recent years, a typical enterprise infrastructure has grown increasingly complex and is outpacing perimeter security models. Examples of these new cybersecurity complexities include: Along with these complexities, securing the network perimeter is insufficient because apps are now on multiple cloud environments, with 81% of enterprises having apps with at least two cloud providers (IBM Mobile Workforce Report). Also, global remote work trends continue, with 65% of workers citing they would like to continue to work from home or remotely (Gallup Survey). Furthermore, global mobile workforce growth continues, as indicated by Gartner’s Why Organizations Choose a Multicloud Strategy report, which estimated there would be 1.87 billion mobile workers globally by 2022. First, a successful zero trust model should provide visibility for all traffic – across users, devices, locations, and applications. Additionally, it should enable visibility of internal traffic zoning capabilities. You should also consider having the enhanced ability to properly secure the new control points in a zero-trust environment. The right access policy manager secures, simplifies, and centralizes access to apps, APIs, and data, no matter where users and their apps are located. A zero-trust model validation based on granular context-and-identity awareness, and securing every application-access request, is key to this and should continuously monitor each user’s device integrity, location, and other application-access parameters throughout their application-access session. Having a robust application security portfolio in a zero-trust approach is also important. The right solutions can protect against layer 7 DoS attacks through behavioral analytics capability and by continuously monitoring the health of your applications. A credential protection to prevent attackers from gaining unauthorized access to your users’ accounts can strengthen your zero-trust security posture. Plus, with the growing use of APIs, you need a solution that protects them and secures your applications against API attacks. 
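To make the "verify every request" principle concrete, here is a deliberately simplified Python sketch of a per-request access decision that checks identity, device posture, and context before granting access. It is illustrative only and not how F5 or any specific product implements zero trust; the attribute names and thresholds are assumptions.

```python
# Simplified, illustrative policy check: every request is evaluated on identity,
# device posture, and context before access is granted. Attribute names and
# thresholds are assumptions, not any vendor's actual policy model.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool
    mfa_passed: bool
    device_compliant: bool       # e.g. disk encryption on, OS patched
    geo_allowed: bool            # request comes from an expected location
    risk_score: float            # 0.0 (low) .. 1.0 (high), from analytics

def authorize(req: AccessRequest, risk_threshold: float = 0.7) -> bool:
    checks = [
        req.user_authenticated,
        req.mfa_passed,
        req.device_compliant,
        req.geo_allowed,
        req.risk_score < risk_threshold,
    ]
    return all(checks)           # "never trust, always verify": deny unless everything passes

print(authorize(AccessRequest(True, True, True, True, 0.2)))   # True
print(authorize(AccessRequest(True, True, False, True, 0.2)))  # False: non-compliant device
```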
F5 leans heavily on the NIST Special Publication 800-207 Zero Trust Architecture when it comes to our efforts around zero trust, because it provides industry-specific general deployment models and use cases where zero trust might improve an enterprise’s overall information technology security posture. The document describes zero trust for enterprise security architects and aids understanding of zero trust for civilian unclassified systems. In addition, it offers a road map for migrating and deploying zero trust security concepts to an enterprise environment. Collecting info on current assets, network infrastructure, and communications state to improve your security posture is critical to zero trust improvements. We recommend following these steps to guide your organization in this process: F5 can specifically help you deploy an effective zero trust model that leverages our Trusted Application Access, Application Infrastructure Security, and Application Layer Security solutions. Learn more here.
<urn:uuid:9edf1cab-7226-41c4-8170-666dde810a8c>
CC-MAIN-2022-40
https://www.f5.com/fr_fr/services/resources/glossary/zero-trust
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00223.warc.gz
en
0.920918
792
2.578125
3
Cybercrime is not new; everyone is well aware of such an offense. There are several ways through which the attack could be made. The most common among them is social engineering. Do you know that 98% of cyberattacks involve social engineering? Wooh! It’s a huge rate, but Why do cyber attackers commonly use social engineering attacks? The answer to this question will help you in many ways, and you will learn more about cyber security. Continue reading to know the details. What is Social Engineering Attack? It is a “non-technical approach of tricking peoples into violating basic security standards and mainly focuses on social interaction. The success of this strategy depends upon the attacker’s skills to persuade their targets to carry out a particular action, for example, giving sensitive data like a social security number or password, etc. Social engineering is one of the most efficient techniques to gather information and get beyond a defense’s barriers in the digital world. It works so well because technical protections (such as firewalls and general software protection) have significantly improved in defending against external threats. Humans, on the other hand, who are known as the “weak point in your system security,” cannot say the same. Why do Cyber Attackers Commonly Use Social Engineering Attacks? Now that you know about social engineering, move to our question, why is it the most effective and commonly used by attackers? Because people are the weak link, social engineering is the hacker’s primary strategy. The truth is that breaking into computers is typically time-consuming and complicated, becoming more complicated nowadays with advanced encryption and security. Believe it or not, humans love socializing, and most people easily trust friendly behavior and sharing their personal information. Curiosity, urgency (don’t read the entire form and just proceeds to download), to try something new are all human nature, and they enjoy exploring new things on the internet regardless of the risks they get themselves trapped into. People still blindly start downloading from suspicious websites, reuse credentials, and use login details that are very simple to decrypt. Regardless of how far security has come, humans tend to follow the old ways and repeat the same errors. The simplest answer to this query is: Humans have flaws. Machines are constructed with a focus on security. They are revised often to ensure that the defensive system is up to date and the bugs or errors are taken care of. But the same thing doesn’t imply to humans, as most people are far away from this technical coding and the hacking world and are busy in their own life, their minds full of many things and problems that have nothing to do with security. With social engineering, hackers will contact you, try to make friends through their sweet talk, build trust and get what they want. This is much easier than breaking the complex technical code of the mechanical defensive system. So that’s the main reason “why do cyber attackers commonly use social engineering attacks.” What Method Would a Cyber Attacker Use to Infect a System With Malware? Ok, you get it now that the social engineering attack relies on human interaction. The attacker interacts with you, makes friends, builds trust, and asks for personal information, but how did they attack your system? 
Social engineering involves different techniques to attack your system, such as: - Phishing Attack (Attacker shows fake identity as a trusted company or a person to get personal information. It could be through SMS, emails, phone calls, etc.) It is one of the most common methods used by attackers. Spear Phishing Attacks Linked to the Coronavirus Rise by 667% in March 2020 - Baiting Attack (Attacker misuses human’s nature of curiosity to get the information they want by making them greedy for some free prize, etc.) - Scareware Attack (false alarms and fake threats are constantly being thrown at the victims) - Pretexting (An attacker gathers information by telling a series of carefully constructed lies) So, what should you do if you suspect you are experiencing a social engineering attack? After getting enough details about social engineering, you may think about a way to protect yourself from such an attack. Here are some suggestions. - Never open emails or attachments from unknown senders. - Use multifactor verification - Watch out for tempting offers. - Update your antivirus and antimalware software. - Make online friendships with caution. - Don’t mention personal information. - Use Password manager - Create secure passwords All the above information gives you a better idea about social engineering attacks, their tricks, and their effectiveness. You also have some tips to remain secure from such an attack. Now you can prepare yourself more to protect your privacy.
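Following up on the "create secure passwords" and "use a password manager" suggestions above, here is a small Python sketch that generates strong random passwords with the standard-library secrets module. The length and character set are reasonable defaults for illustration, not a formal password policy.

```python
# Sketch: generating strong random passwords with the standard-library `secrets`
# module. Length and character set are reasonable defaults, not a formal policy.
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # store the result in a password manager, not in plain text
```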
<urn:uuid:791955a3-6553-49bd-8506-21fdfcd19c1f>
CC-MAIN-2022-40
https://nextdoorsec.com/why-do-cyber-attackers-commonly-use-social-engineering-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00423.warc.gz
en
0.946644
992
2.890625
3
The financial sector is one of the most essential industries that touch the lives of each and every individual and organization across the globe. As we step into a more connected world, it becomes ever more important to secure the financial ecosystem through robust cyber defenses. The financial sector is one of the prime targets for malicious cyber actors, forcing institutions to invest heavily into upgrading their security operations to improve their readiness for different threats. Major Threats to the Financial Sector The prominent cyberthreats facing the financial sector include: Payment Fraud: Online scams and fraud schemes resulting in fraudulent transfers of money from victims to cybercriminals have become an everyday occurrence. Such payment fraud cases lead to billions of dollars of losses for companies and individuals every year. Social Engineering/Business Email Compromise: Along similar lines, social engineering attacks and spear-phishing attacks can result in the compromise of employee credentials at financial sector organizations. This kind of access can enable the lateral movement of cybercriminals within the organization’s systems and networks to cause more damage. Credential/Identity Theft: Through historical data breaches and credential stuffing attacks, cybercriminals can steal the identity of banking customers and use it to take over their accounts and steal their funds. State-Sponsored Attacks: In recent years, we have witnessed a growing threat from state-backed cyber adversaries targeting critical infrastructure sectors. Such threat groups possess a high level of technical skills and resources. They often aim to conduct espionage or disrupt crucial business operations to disrupt a nation’s economy. Unauthorized Access: Instances of data leaks or breaches due to inappropriate access controls or security misconfigurations in cloud servers and applications have become all too common these days. Financial sector organizations need to be cognizant of the risks posed by publicly exposed assets. Supply chain Attacks/Third-Party Risks: Cybersecurity cannot be treated as an insular concern for an organization. Organizations have to consider the security implications from third-party risks posed due to vulnerabilities in their hardware or software supply chains or due to breaches at their vendors, partners, or other stakeholders. Ransomware Attacks: If there is a single kind of cyberthreat that has grabbed mindshare across various industries, it is the threat of ransomware attacks. With the use of clever double/triple extortion techniques in the last few years, ransomware gangs have caused major disruption. Zero-day Vulnerabilities: Organizations in the financial sector rely on a variety of applications, tools, and technologies to conduct their business operations. Any unreported vulnerabilities in such applications can be exploited by cybercriminals to infiltrate their systems and networks. Significant Cybersecurity Challenges In order to defend against the aforementioned threats, financial sector organizations need robust security operations that can secure the diverse assets and data deployed by organizations. However, it is not all smooth sailing for security teams at financial sector organizations as they face the challenge of countering new and sophisticated threats rising every day in the cyber landscape, while efficiently leveraging the people, processes, and technologies at hand. 
Some of the challenges they face include: Threat response: Modern cybersecurity teams cannot afford to just respond to incidents after they are reported, but they also need to take proactive actions to curb other emerging threats facing their organization. Any delay in threat response can lead to greater operational disruption, data leakage, or recovery time. Financial institutions need to leverage orchestration and automation to upgrade their threat detection and response capabilities to rapidly respond to threats in real-time and even proactively. Visibility: To gain a complete understanding of the threat environment, security teams and the organizational leadership need extensive visibility around different kinds of security risks, threats, security controls, and exceptions, across their cloud-based or on-premise infrastructure. Governance: As security teams need to triage hundreds or thousands of alerts on a daily basis, financial institutions need effective security governance to manage their human and technical resources to smoothly conduct threat investigations, response, hunting, vulnerability management, and other functions. Collaboration: In legacy security operations centers (SOCs), various security teams often end up operating in their own silos with minimal scope for information exchange and collaboration with other functions. This leads to inefficiencies, knowledge gaps, and an incohesive response to threats. Use Cases of Cyber Fusion To boost their cyber resilience, financial institutions need to rethink how all the moving parts in their security operations are organized and how they can make the most out of it. The concept of a Cyber Fusion Center (CFC) allows organizations to integrate their various security functions into a single, connected operational unit. This approach to cybersecurity can help address a number of pertinent security use cases for financial institutions. Threat Intel Operationalization: The use of threat intelligence can help dramatically improve the threat detection and response capabilities of an organization. This includes threat intel from external sources such as ISAC advisories, OSINT sources, commercial intel feeds, research blogs, and insights from internal telemetry from SIEM, firewall, IDS/IPS, and other tools. Security teams in a CFC benefit from the last-mile delivery of this threat intelligence to smartly direct their security processes and proactively counter potential threats before they manifest into an incident. Information Sharing: A CFC enables real-time information sharing among different security teams within an organization as well as allows decision-makers to coordinate and collaborate with other financial sector organizations through information sharing communities (ISACs/ISAOs) or private enterprise sharing networks. Cyber/Physical Incident Reporting: In the financial sector, there are tangible, monetary consequences of delays in responding to security incidents. A CFC enables users to share threat intelligence or report cyber/physical incidents or threats 24x7 using the web or mobile devices from any location. Enriched, anonymized, and actionable threat intelligence can also be shared with members spread across different locations through a centralized CFC. Intel Collaboration: To promote a collaborative approach to security operations, CFCs provide members the ability to create Requests for Information (RFIs) to assemble information on specific threats, operational activities, policies, or other issues. 
Members can also create alerts from RFIs submitted by other members, thereby boosting cooperation among security analysts and other professionals developing and managing the organization’s technology infrastructure. Threat Response Automation: A CFC brings the power of Security Orchestration, Automation, and Response (SOAR) to accelerate the threat response processes using automated, cross-functional workflows that drive security actions across cloud-based and on-premise infrastructures. Vulnerability Management: Whenever a new critical vulnerability is discovered, the clock starts ticking as cybercriminals are in a race to exploit it to breach organizations. In a CFC, security teams can create automated workflows to patch vulnerabilities or implement workarounds to prevent the exploitation of their backend systems, servers, endpoints, applications, and more. Threat Hunting: The legacy systems used in the financial sector that lack vendor support and critical vulnerability patches can create room for attackers to enter their networks. Through a CFC, security teams can proactively hunt for any threats attempting to intrude on their systems and also use known vulnerability exploitation indicators as intelligence inputs to trigger response actions to prevent a crisis. Crisis Communication: In times of a cybersecurity crisis, an organization cannot afford to suffer any delays in providing adequate response and communicating it to their stakeholders. Financial institutions can ensure that even if one of their systems faces an intrusion, it can be prevented from spreading laterally across their networks by sharing threat information and coordinating response actions with all stakeholders through the CFC. Threat Correlation and Analysis: When security events and threat data from multiple internal and external sources are combined in a single interface in a CFC, it unlocks the opportunity to connect the dots between assets, alerts, incidents, Indicators of Compromise (IOCs), and other key elements. This allows security teams to assess the true impact of an incident and conduct in-depth investigations. Financial Fraud Response: The use of security orchestration and automation in a CFC can help trigger the necessary actions for detecting and investigating fraudulent activity, disabling compromised accounts, communicating to the affected parties and stakeholders, and then restoring the affected assets. Furthermore, security teams can leverage fraud intelligence from OSINT sources, ISAC/CERT advisories, intel feeds, dark web forums, as well as internal telemetry to correlate and analyze the malicious activity on their networks with other historical incidents. This enables a more in-depth understanding of the threat and a better response Third-party Risk Monitoring: By connecting security operations with internal and external stakeholders through cyber fusion, organizations can keep a check on their third-party security risks and automate response workflows in case of any intrusions. Alert Aggregation and Centralized Storage: A CFC simplifies alert triage and investigation for security analysts through automated alert aggregation from external sources (TI feeds, ISAC/CERT advisories) and internal sources (SIEM, VM, IT/ITSM tools) in a single window. On top of this, it categorizes alerts based on contextual parameters such as TLP, category, and sources. 
All in all, a CFC provides a central, organized management interface for all historical alerts that helps in sharing real-time alerts with security teams and CISOs, performing threat investigations, prioritizing threat actioning, and more.
Threat Alerting: Based on their role, location, and business unit, employees require different alerts to stay cognizant of the critical threats affecting their operations. A CFC provides the ability to disseminate threat alerts in real time to members across different teams to spread situational awareness and enable rapid actions during a cybersecurity crisis.
Action Management: When there are tons of incidents, alerts, and threats to manage, security teams cannot afford to rely on conventional methods of task and action management. A CFC addresses this by providing SOC managers with an easy-to-use and customizable system for assigning, tracking, and managing threat response and asset management operations. This makes security governance far simpler for decision-makers, as they can create customized incident workflows, map them to different parameters, and define rules for assigning different workflows based on their needs.
Upsides of Adopting Cyber Fusion
As described above, cyber fusion involves the amalgamation of different security functions, such as incident response, vulnerability management, and threat hunting, under a common umbrella. This lays the ground for streamlined security operations that result in several benefits for financial institutions.
Enhanced Threat Visibility: It is not enough for financial institutions to monitor threats only on certain endpoints and servers located on-premise. Cyber fusion tackles this challenge by providing visibility across all assets, regardless of where they are located or the type of technology infrastructure they are hosted on.
Resilient Cyber Strategy: Through the use of cyber fusion, organizations can build security operations workflows that can withstand the demands of an evolving threat landscape. A CFC gives decision-makers the capability to reshape their strategies as security policies, compliance requirements, and technology evolve.
Enhanced Security Maturity: While the digital transformation of financial institutions has been a priority in recent years, cyber fusion helps bring the same level of attention to the maturity of security operations by providing threat intel operationalization, situational awareness, and orchestration among the humans and machines in the loop.
Collective Defense: The significance of information sharing and collaboration in cybersecurity is now more apparent than ever, as financial institutions face shared cyber threats from growing nation-state attacks and organized cybercriminal groups. A CFC is built with this very concept of collective defense at its core: it makes security operations a collaborative affair by including all of an organization’s internal and external stakeholders.
From banks, insurance firms, stock exchanges, payment service providers, and financial asset managers to central banks and regulatory bodies, the financial sector includes many stakeholders that face growing cyber risks to their operations. To overcome the security challenges in today’s cyberspace, financial institutions are turning to cyber fusion as the driver of change for building resilience through security integration and collective defense.
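As a rough illustration of the alert aggregation and automated triage described above, here is a minimal sketch of how a fusion-center-style pipeline might normalize alerts arriving from several feeds, de-duplicate indicators, and route each alert to a workflow. The feed formats, field names, and routing rules are illustrative assumptions, not a description of any particular CFC product.

```python
# Minimal sketch of fusion-center-style alert aggregation and triage.
# Feed formats, field names, and routing rules are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Alert:
    source: str       # e.g. "SIEM", "ISAC advisory", "commercial feed"
    indicator: str    # IOC such as an IP address, domain, or file hash
    category: str     # e.g. "phishing", "fraud", "vulnerability"
    tlp: str          # Traffic Light Protocol marking
    severity: int     # 1 (low) .. 5 (critical)

def aggregate(feeds: List[List[Dict]]) -> List[Alert]:
    """Normalize raw alerts from multiple feeds and drop duplicate indicators."""
    seen, merged = set(), []
    for feed in feeds:
        for raw in feed:
            alert = Alert(
                source=raw.get("source", "unknown"),
                indicator=raw["indicator"],
                category=raw.get("category", "uncategorized"),
                tlp=raw.get("tlp", "AMBER"),
                severity=int(raw.get("severity", 1)),
            )
            if alert.indicator not in seen:      # de-duplicate across feeds
                seen.add(alert.indicator)
                merged.append(alert)
    return sorted(merged, key=lambda a: a.severity, reverse=True)

def route(alert: Alert) -> str:
    """Toy routing rule: where a triage workflow might send each alert."""
    if alert.severity >= 4:
        return "incident-response"          # open a case, page the on-call analyst
    if alert.category == "vulnerability":
        return "vulnerability-management"   # kick off a patching workflow
    return "threat-intel-review"            # enrich and keep watching

if __name__ == "__main__":
    siem = [{"source": "SIEM", "indicator": "198.51.100.7", "category": "intrusion", "tlp": "RED", "severity": 5}]
    isac = [{"source": "ISAC", "indicator": "evil.example.com", "category": "phishing", "severity": 3},
            {"source": "ISAC", "indicator": "198.51.100.7", "category": "intrusion", "severity": 4}]
    for a in aggregate([siem, isac]):
        print(f"{a.indicator:<20} severity={a.severity} -> {route(a)}")
```

In a real deployment, the routing step would open cases in a SOAR or ticketing system and attach enrichment context rather than print to the console.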
<urn:uuid:bb96a593-3b29-44fa-b746-330694abfeca>
CC-MAIN-2022-40
https://cyware.com/educational-guides/cyber-fusion-and-threat-response/why-are-financial-institutions-adopting-cyber-fusion-strategies-57b5/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00423.warc.gz
en
0.933125
2,472
2.625
3
There are few certainties in life: Death, taxes, and turning your computer off and on when there’s a problem. This advice is usually the first tip you get from friends, family, and tech support. Rebooting your computer helps keep it running smoothly. It clears the memory, stopping any tasks that are eating up RAM. Even if you’ve closed an app, it could still tap your memory. A reboot can also fix peripheral and hardware issues. So, how often should you be rebooting your computer? Let’s look at how rebooting can impact your system and when exactly you should be doing it. This tip is brought to you by my sponsor, Dell. If you have a small business, Dell’s smart team offers free advice to help you buy the right gear. Yes, totally free! Call a Dell Technologies Advisor to help you at 877-ASK-DELL or Dell.com. Give your computer a fresh start We recommend that you shut down your computer at least once a week. A reboot process returns everything to its bootup state, from your computer’s CPU to its memory. Many people will shut down their computer by holding in the power button. This way may cause additional problems. Tap or click here to see how to restart your PC or Mac correctly. Rebooting your computer involves two steps — shutting down the computer and then starting it up again. When you reboot/restart your computer, it will lose power during the process and start up again on its own. Your computer itself will occasionally prompt you to restart it, usually after downloading an update. Newer machines need fewer restarts, but a significant software patch usually requires one. Reduce wear and tear Your computer is full of moving parts. Its CPU, essentially the brain, has a fan. High-end graphics cards also need a cooling system. Though solid-state drives are becoming more popular, most PCs still use hard disk drives, consisting of spinning discs. These components wear down over time, and the longer you keep your computer running, the shorter their lifespan will be. It’s easy to fall into the habit of leaving it on to avoid having to go through the bootup process, but it will help you get more life out of your machine. If you are stepping away for a few hours or would rather not wholly shut things down, you can put your PC down for a nap. Sleep it off Sleep mode puts your computer into a low-power state. The fans will stop spinning and the hard drive will stop functioning, so things will get quiet. With sleep mode, your computer’s current state stays in the memory. When you wake up your machine, your open apps, documents, music, etc., will be right where you left them. Tap or click here to see how your iPhone and Apple Watch can help you improve your sleeping habits. To put your PC in sleep mode: - Open power options: - For Windows 10, tap Start > Settings > System > Power & sleep > Additional power settings. - For Windows 8.1 / Windows RT 8.1, swipe in from the edge of the screen, tap Search (or if you’re using a mouse, point to the upper-right corner of the screen, move the mouse pointer down and click Search), enter Power options in the search box and tap Power options. - For Windows 7, tap Start > Control Panel > System and Security > Power Options. - Do one of the following: - If you’re using a desktop, tablet, or laptop, select Choose what the power buttons do. Next to When I press the power button, select Sleep > Save changes. - If you’re using only a laptop, select Choose what closing the lid does. Next to When I close the lid, select Sleep > Save changes. 
- When you’re ready to make your PC sleep, press the power button on your desktop, tablet, or laptop, or close your laptop’s lid.
On most PCs, you can resume working by pressing your PC’s power button. However, not all PCs are the same. You might be able to wake it by pressing any key on the keyboard, clicking a mouse button, or opening the lid on a laptop. Check the manual that came with your computer or go to the manufacturer’s website.
It takes less time to wake up a computer than to turn it on after a shutdown, but sleep mode still consumes power. To clear out bugs, memory leaks, nonfunctioning network connections, and other issues, a reboot is the way to go.
Own a small business? Dell is here to help, for free
When you own your own business, every job under the sun ultimately falls to you. That includes purchasing the tech to keep your business going. That’s a difficult job, even for the most tech-savvy person. Don’t just guess and hope it all works out.
The pros at Dell Small Business can help. Chat, call or email an advisor and get free, helpful advice to help you find the right solutions. That’s right. It really is free. You don’t have to buy a thing. You can get smart, reliable help to choose the right tools for your company.
Need any tech help fixing a printer, slow PC or audio issues? Post your tech questions to get fast, concrete answers from me and other tech pros. Visit my Q&A Forum and get tech help now.
What digital lifestyle questions do you have? Call Kim’s national radio show and tap or click here to find it on your local radio station. You can listen to or watch The Kim Komando Show on your phone, tablet, television or computer. Or tap or click here for Kim’s free podcasts.
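Coming back to the once-a-week guideline from earlier in this article: if you would rather be nagged than keep a calendar entry, a small script can check how long the machine has been running and warn you when a restart is overdue. This is a sketch only; it uses Python with the third-party psutil package (which you would need to install), and the seven-day threshold is simply the rule of thumb above, not a hard requirement.

```python
# Sketch: warn when the machine has been running longer than a chosen threshold.
# Assumes the third-party 'psutil' package is installed (pip install psutil).
import time
import psutil

MAX_UPTIME_DAYS = 7  # rule-of-thumb threshold; adjust to taste

def uptime_days() -> float:
    """Days elapsed since the last boot, based on the OS boot timestamp."""
    return (time.time() - psutil.boot_time()) / 86400.0

if __name__ == "__main__":
    days = uptime_days()
    if days > MAX_UPTIME_DAYS:
        print(f"Up for {days:.1f} days - time to restart and clear things out.")
    else:
        print(f"Up for {days:.1f} days - no restart needed yet.")
```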
<urn:uuid:9ef2d402-cad3-40fe-8360-ce9263a623a4>
CC-MAIN-2022-40
https://www.komando.com/kims-column/when-to-restart-your-computer/777237/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00423.warc.gz
en
0.904787
1,224
2.8125
3
Advanced Analytics uses sophisticated tools for granular data analysis to enable forecasts and predictions from data. Quick Takeaway: Advanced analytics is a very effective form of data analysis because it allows you to dig deeper into data to predict the future of your business. Analytics is a process that involves identification, interpretation, and communication of critical patterns found in raw data. This information is used by organizations to make strategic decisions that impact performance. Quick Takeaway: Analytics puts your data to work. Anomaly Detection (aka Outlier Analysis) is a technique used to identify a pattern in data, called an anomaly, that does not conform to expected behavior. This method has a range of real-world applications such as intrusion detection (strange patterns in network traffic signaling a hack), health monitoring (identifying malignant tumors in MRI scans), fraud detection (credit card transactions), technical glitches (malfunctioning equipment), and changes in consumer behavior. Quick Takeaway: Anomaly Detection helps find unusual activity in data, thereby indicating an area that needs further investigation. Artificial Intelligence (AI) is the ability of a machine to perform actions that otherwise require human intelligence. For instance, tasks like visual perception, speech recognition, translation between languages, and decision-making are supported and automated by AI using intelligent machines. Quick Takeaway: AI enables computer programs to think and act like intelligent humans. Minus the mood swings. Augmented Analytics uses advanced technology to independently examine data, reveal hidden patterns, provide insights, make predictions, and generate recommendations. Artificial Intelligence and Machine Learning tools are used to automate the end-to-end process, right from data preparation, insight generation, and explanation, to augmenting the output with visualization and narratives. Quick Takeaway: Augmented Analytics has revolutionized the way people explore and analyze data on BI and analytics platforms. It is unanimously crowned “The Future of Business Intelligence.” Behavioral Analytics is a part of Business Intelligence that uses data to focus on how and why users behave the way they do, on social media platforms, eCommerce sites, while playing online games, and when using any other web application. Quick Takeaway: Behavioral Analytics follows virtual data trails to gain insights into user behavior online. Big Data includes a variety of structured and unstructured data, sourced from documents, emails, social media, blogs, videos, digital images, satellite imagery, and data generated by machines/sensors. It comprises large and complex datasets, which cannot be processed using traditional systems. Quick Takeaway: Big Data is large volumes of data generated at high speeds, in multiple formats, that can be of value when analyzed. Business Intelligence (BI) is analyzing data and presenting actionable insights to stakeholders to help them make informed business decisions. Quick Takeaway: BI enables the right use of information by the right people for the right reasons. Clustering groups data points from multiple tables, with similar properties, together for statistical analysis. Quick Takeaway: Clustering is an Unsupervised Machine Learning technique. Dashboard is a tool used to create and deploy reports. It helps monitor and analyze key metrics on a single screen and see the correlations between them.
Quick Takeaway: Dashboard provides an overview of the reports and metrics that matter most to you. Data Blending is a fast and easy method to extract data from multiple sources and blend it into one functional dataset. Quick Takeaway: Data Blending combines data and finds patterns without the hassle of deploying a data warehouse architecture, which is why it is preferred. Data Cleaning is also referred to as data cleansing or scrubbing. It improves data quality through the detection and removal of inconsistencies and errors found in data. Quick Takeaway: Data Cleaning transforms data from its original state into a standardized format to maximize the effect of data analysis. Data Cube is the grouping of data into multidimensional hierarchies, based on a measure of interest. Quick Takeaway: Data Cube helps interpret a stack of data. Data Democratization enables all users to access and analyze data freely to answer questions and make decisions. Quick Takeaway: Data Democratization is a ‘free-for-all’ access to data and its use. No holds barred. Data Fabric is a unified environment of data services that provide consistent capabilities namely, data management, integration technology, and architecture design being delivered across on-premises and cloud platforms. A data fabric ensures complete automation of data access and sharing. Quick Takeaway: Data Fabric puts the management and use of data into high gear using technology. Data Wrangling (aka Data Munging, Data Transformation) is the process of unifying acquired datasets with actions like joining, merging, grouping, concatenating, etc. and cleansing it for easy access and further analysis. Quick Takeaway: Data Wrangling is the step between data acquisition and data analysis. Diagnostic Analysis (aka Root Cause Analysis) takes over from Descriptive Analysis to answer the question Why it happened. It drills down to find causes for the outcomes and identify patterns of behavior. Quick Takeaway: Diagnostic Analysis provides reasoning for the outcomes of the past by breaking down the data for closer inspection. Drill-Down Capability helps visualize data at a granular level by providing flexibility to go deep into specific details of the information required for analysis. It is an important feature of Business Intelligence because it makes reporting a lot more useful and effective. Quick Takeaway: Drill-Down Capability offers an interactive method to display multi-level data on request without changing the underlying query. Exception Handling is the process of responding to unexpected events (exceptions) encountered when a predefined set of steps is executed. Quick Takeaway: Exception Handling deals with unexpected instances that may arise when an action is performed. Feature Engineering creates new features from raw data using data mining methods. Quick Takeaway: If data cleaning is a subtractive process, feature engineering can be looked at as an additive process. Key Performance Indicator (KPI) (aka Key Metric) is an important indicator that helps measure a department or organization’s performance and health. Quick Takeaway: KPIs indicate how a business is performing based on certain parameters. Machine Learning is an application of AI that enables computer applications to learn without specific programming using large datasets and improve when exposed to new data. ML is used to automate the building of analytical models. 
Quick Takeaway: ML is the ability of machines to self-learn based on data provided and accurately identify instances of the learned data. Metadata provides information about other data within a database. It provides references to data, which makes finding and working with collected data for the end-user easier in some cases. Quick Takeaway: Metadata is data about data. For instance, username, date created/modified, file size are basic document metadata. Predictive Analysis uses summarized data to answer the question What is likely to happen. It uses past performance to make logical predictions of future outcomes. It uses statistical modeling to forecast estimates. The accuracy of these estimates depends largely on data quality and details used. It is widely used across industries to provide forecasts and risk assessment inputs in various functions namely Sales, Marketing, HR, Supply Chain, Operations. Quick Takeaway: Predictive Analysis attempts at predicting the future using advanced technology and skilled resources to analyze data and not look into a crystal ball. Prescriptive Analysis is the last level of data analysis wherein insights from other analyses (Descriptive, Diagnostic, Predictive) are combined and used to determine the course of action to be taken in a situation. Needless to say, the technology used is a lot more advanced and so are the data practices. Quick Takeaway: Prescriptive Analysis prescribes data-driven next steps for decision making. Slice and Dice refers to the division of data into smaller uniform sections, that present the information in diverse and useful ways. For instance a pivot table in a spreadsheet. Quick Takeaway: Slice and Dice is the breakdown of data into smaller parts to reveal more information. Snapshot refers to the state of a dataset at a given point in time. Quick Takeaway: A snapshot provides an instant copy of data, captured at a certain time. Software as a Service (SaaS) is a delivery model for software, centrally hosted by the vendor and licensed to customers on a pay-for-use or subscription basis. Quick Takeaway: SaaS packages and sells software as a service.
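A few of the terms defined above, in particular clustering and anomaly detection, are easiest to see in action with a few lines of code. The sketch below uses the third-party scikit-learn library on synthetic numbers purely to illustrate the definitions; it is not tied to any specific BI or analytics platform.

```python
# Illustrative sketch of two glossary terms: clustering and anomaly detection.
# Uses numpy and scikit-learn; the data is synthetic and only for demonstration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Two well-separated groups of points, plus one obvious outlier.
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
outlier = np.array([[12.0, -3.0]])
data = np.vstack([group_a, group_b, outlier])

# Clustering: group data points with similar properties together (unsupervised).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print("cluster sizes:", np.bincount(labels))

# Anomaly detection: flag points that do not conform to expected behavior.
flags = IsolationForest(random_state=0).fit_predict(data)  # -1 marks anomalies
print("points flagged as anomalies:", int(np.sum(flags == -1)))
```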
<urn:uuid:48d13c52-0c92-49e1-a6d7-70d60e30cff2>
CC-MAIN-2022-40
https://phrazor.ai/resources/glossary
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00423.warc.gz
en
0.885806
1,760
2.875
3
This week, at the International Electron Devices Meeting (IEDM) and the Conference on Neural Information Processing Systems (NeurIPS), IBM researchers will showcase new hardware that will take AI further than it’s been before: right to the edge. Our novel approaches for digital and analog AI chips boost speed and slash energy demand for deep learning, without sacrificing accuracy. On the digital side, we’re setting the stage for a new industry standard in AI training with an approach that achieves full accuracy with 8-bit precision, accelerating training time by two to four times over today’s systems. On the analog side, we report 8-bit precision—the highest yet—for an analog chip, roughly doubling accuracy compared with previous analog chips while consuming 33x less energy than a digital architecture of similar precision. These achievements herald a new era of computing hardware designed to unleash the full potential of AI. Into the post-GPU era Innovations in software and AI hardware have largely powered a 2.5x per year improvement in computing performance for AI since 2009, when GPUs were first adopted to accelerate deep learning. But we are reaching the limits of what GPUs and software can do. To solve our toughest problems, hardware needs to scale up. The coming generation of AI applications will need faster response times, bigger AI workloads, and multimodal data from numerous streams. To unleash the full potential of AI, we are redesigning hardware with AI in mind: from accelerators to purpose-built hardware for AI workloads, like our new chips, and eventually quantum computing for AI. Scaling AI with new hardware solutions is part of a wider effort at IBM Research to move from narrow AI, often used to solve specific, well-defined tasks, to broad AI, which reaches across disciplines to help humans solve our most pressing problems. Digital AI accelerators with reduced precision IBM Research launched the reduced-precision approach to AI model training and inference with a landmark paper describing a novel dataflow approach for conventional CMOS technologies to rev up hardware platforms by dramatically reducing the bit precision of data and computations. Models trained with 16-bit precision were shown, for the very first time, to exhibit no loss of accuracy in comparison to models trained with 32-bit precision. In the ensuing years, the reduced-precision approach was quickly adopted as the industry standard, with 16-bit training and 8-bit inferencing now commonplace, and spurred an explosion of startups and venture capital for reduced precision-based digital AI chips. The next industry standard for AI training The next major landmark in reduced-precision training will be presented at NeurIPS in a paper titled “Training Deep Neural Networks with 8-bit Floating Point Numbers” (authors: Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, Kailash Gopalakrishnan). In this paper, a number of new ideas have been proposed to overcome previous challenges (and orthodoxies) associated with reducing training precision below 16 bits. Using these newly proposed approaches, we’ve demonstrated, for the first time, the ability to train deep learning models with 8-bit precision while fully preserving model accuracy across all major AI dataset categories: image, speech, and text. The techniques accelerate training time for deep neural networks (DNNs) by two to four times over today’s 16-bit systems. 
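The paper itself concerns custom hardware and a new 8-bit floating-point format, but the basic trade-off behind reduced precision, namely less storage and cheaper arithmetic in exchange for a small rounding error, can be sketched in ordinary software. The snippet below simulates quantizing a weight matrix to plain 8-bit integers and measures the round-trip error; it is a simplified illustration and not the FP8 training scheme or dataflow architecture described in the IBM paper.

```python
# Simplified illustration of reduced precision: map float32 weights onto
# signed 8-bit integers and measure the error introduced by the round trip.
# This is plain symmetric integer quantization, not the paper's FP8 format.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Scale values into [-127, 127] and round to 8-bit integers."""
    scale = float(np.max(np.abs(x))) / 127.0
    if scale == 0.0:
        scale = 1.0  # avoid division by zero for an all-zero tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)
    q, scale = quantize_int8(weights)
    error = np.mean(np.abs(weights - dequantize(q, scale)))
    print(f"mean absolute quantization error: {error:.6f}")
    print(f"storage: {weights.nbytes} bytes as float32 vs {q.nbytes} bytes as int8")
```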
Although it was previously considered impossible to further reduce precision for training, we expect this 8-bit training platform to become a widely adopted industry standard in the coming years. Reducing bit precision is a strategy that’s expected to contribute towards more efficient large-scale machine learning platforms, and these results mark a significant step forward in scaling AI. Combined with a customized dataflow architecture, a single chip architecture can be used to efficiently execute training and inferencing across a range of workloads and networks, large and small. This approach can also accommodate “mini-batches” of data, required for critical broad AI capabilities, without compromising performance. Realizing all of these capabilities with 8-bit precision for training also opens the realm of energy-efficient broad AI at the edge.
Analog chips for in-memory computing
Thanks to its low power requirements, high energy efficiency, and high reliability, analog technology is a natural fit for AI at the edge. Analog accelerators will fuel a roadmap of AI hardware acceleration beyond the limits of conventional digital approaches. However, whereas digital AI hardware is in a race to reduce precision, analog has thus far been limited by its relatively low intrinsic precision, impacting model accuracy. We developed a new technique to compensate for this, achieving the highest precision yet for an analog chip. Our paper at IEDM, “8-bit Precision In-Memory Multiplication with Projected Phase-Change Memory” (authors: Iason Giannopoulos, Abu Sebastian, Manuel Le Gallo, V. P. Jonnalagadda, M. Sousa, M. N. Boon, Evangelos Eleftheriou), shows this technique achieved 8-bit precision in a scalar multiplication operation, roughly doubling the accuracy of previous analog chips while consuming 33x less energy than a digital architecture of similar precision.
The key to reducing energy consumption is changing the architecture of computing. With today’s computing hardware, data must be moved from memory to processors to be used in calculations, which takes a lot of time and energy. An alternative is in-memory computing, in which memory units moonlight as processors, effectively doing double duty of both storage and computation. This avoids the need to shuttle data between memory and processor, saving time and reducing energy demand by 90 percent or more.
A chip comprising several PCM devices; the electrical probes that come into contact with it are used to send signals to individual devices to perform the in-memory multiplication.
Our device uses phase-change memory (PCM) for in-memory computing. PCM records synaptic weights in its physical state along a gradient between amorphous and crystalline. The conductance of the material changes along with its physical state and can be modified using electrical pulses. This is how PCM is able to perform calculations. Because the state can be anywhere along the continuum between 0 and 1, it is considered an analog value, as opposed to a digital value, which is either a 0 or a 1, nothing in between. We have enhanced the precision and stability of the PCM-stored weights with a novel approach, called projected PCM (Proj-PCM), in which we insert a non-insulating projection segment in parallel to the phase-change segment. During the write process, the projection segment has minimal impact on the operation of the device.
However, during read, conductance values of programmed states are mostly determined by the projection segment, which is remarkably immune to conductance variations. This allows Proj-PCM devices to achieve much higher precision than previous PCM devices. The improved precision achieved by our research team indicates that in-memory computing may be able to achieve high-performance deep learning in low-power environments, such as IoT and edge applications. As with our digital accelerators, our analog chips are designed to scale for AI training and inferencing across visual, speech, and text datasets, extending to emerging broad AI. We’ll be demonstrating a previously published PCM chip all week at NeurIPS, using it to classify hand-written digits in real time via the cloud.
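One way to build intuition for why device precision matters in analog in-memory computing is to simulate it: store a weight matrix as "conductances," perturb them with programming noise, and compare the resulting matrix-vector product against the exact digital answer. The noise model below is a deliberately crude stand-in and is not a model of projected PCM; it only shows how output error tracks device precision.

```python
# Toy simulation of an analog in-memory matrix-vector multiply.
# Weights are treated as device conductances with random programming noise;
# the noise model is a crude stand-in, not a model of projected PCM.
import numpy as np

rng = np.random.default_rng(1)

def analog_matvec(weights: np.ndarray, x: np.ndarray, noise_pct: float) -> np.ndarray:
    """Matrix-vector product computed with noisy 'conductance' values."""
    conductances = weights * (1.0 + rng.normal(0.0, noise_pct, size=weights.shape))
    return conductances @ x   # the crossbar's multiply-accumulate, done digitally here

if __name__ == "__main__":
    W = rng.normal(0.0, 1.0, size=(64, 64))
    x = rng.normal(0.0, 1.0, size=64)
    exact = W @ x
    for noise in (0.05, 0.01, 0.002):   # poorer vs. better device precision
        approx = analog_matvec(W, x, noise)
        rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
        print(f"conductance noise {noise:>6.1%} -> relative output error {rel_err:.3%}")
```

Running this shows the relative output error shrinking roughly in step with the device noise, which is the intuition behind why higher-precision analog devices matter.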
<urn:uuid:fbb60c86-c5f2-4fcf-84d3-3a50b019a1c7>
CC-MAIN-2022-40
https://www.ibm.com/blogs/research/2018/12/8-bit-breakthroughs-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00423.warc.gz
en
0.928327
1,770
2.546875
3
Is Cyber Security a Good Career?
A job in cybersecurity can be extremely rewarding and enjoyable, but it can also be quite difficult and stressful. Understanding some of the job tasks, as well as the job characteristics and personality factors, will help you decide if a cybersecurity profession is suited for you.
There are a variety of cybersecurity occupations available, each with its unique role in data and network security. Here are some of the most in-demand cybersecurity jobs based on your degree of expertise and education.
Now that you have a better idea of what cybersecurity involves, all you need to do is hone your skills in this field if you want to be competitive in the market. Land yourself a good job and secure those paychecks. Be sure to ask for pay stubs from your future employer. If they cannot provide you with one, you can always depend on https://www.paystubcreator.net/.
Popular Career Paths
- Entry-level: Entry-level jobs are a good opportunity to get your foot in the door and gain experience. System engineers, system administrators, web developers, IT technicians, network engineers, and security specialists are examples of these vocations.
- Mid-level: Job opportunities include security technician, security analyst, incident responder, IT auditor, cybersecurity consultant, or penetration tester for people with mid-level experience.
- Advanced-level: Cybersecurity managers, cybersecurity architects, cybersecurity engineers, and chief information security officers are all possibilities for those with a lot of job experience.
What Degree is Needed?
The degree required for a job in cybersecurity varies depending on your career choice. With a bachelor’s degree, you might be able to get into cybersecurity in an entry-level position. A bachelor’s degree in an IT subject such as computer science or information technology is required for the majority of cybersecurity positions. A master’s degree in cybersecurity, such as a master’s in computer engineering or a master’s in data science, will be required for some security professions. A higher education degree may qualify you for better job opportunities.
Different Personalities for Cybersecurity Careers
Understanding the various personalities in the cybersecurity sector might assist you in matching your preferences to a certain career path. Knowing where you belong might help you decide if a job in cybersecurity is the best fit for you. Here are some examples of cybersecurity personalities:
The Problem Solver
Problem solvers are analytical and insightful. Their critical thinking abilities enable them to handle and manage security issues as well as cyber threats within a firm or organisation. When it comes to cybersecurity careers, incident responders frequently employ problem-solving skills. They’re cybersecurity’s analytical brains, utilising powerful computer technologies to find security flaws and minimise dangers.
The Quick Learner
If you’re curious and enjoy learning new things, you might have the quick learner personality. When pushed under pressure, quick learners often do well and are skilled at guessing what cybercriminals are planning. If you’re a fast learner, a career as a security architect might be for you. These specialists use their interest in IT technology to create and maintain the security structure required for a company’s computer network.
The Avenger
If you’re empathetic in your profession and have an instinctive urge to fight cybercrime, you’ll recognise yourself as the avenger. Security engineers frequently have avenging personality qualities, relishing the act of anticipating potential security threats before they materialise. The security engineer, who is usually a company’s first line of defence against a security breach, combines natural instincts and technical capabilities to identify risks from outside sources.
The Teacher
When working as part of a security team, cybersecurity specialists with a teacher’s personality are helpful and kind. They use their good listening skills to understand what the rest of the team is working through and then jump in to share their knowledge and expertise when necessary. Consider a job as a cybersecurity consultant if you appreciate assisting others and sharing your knowledge. As a security adviser, you’ll collaborate with the rest of the security team as well as company executives.
The Enthusiast
Cybersecurity is a passion for the enthusiast personality. Enthusiasts have a strong desire to protect data and computer networks against security breaches, and they frequently employ their ingenuity to stay one step ahead of cybercriminals. Ethical hackers, often known as penetration testers, have all of the characteristics of an enthusiast. Companies hire them to hack into their own computer networks, allowing them to identify security weaknesses and the best ways to reduce these risks.
With more cybersecurity specialists in demand than ever before, pursuing a career in cybersecurity may be incredibly rewarding. Knowing which cybersecurity profession best matches your interests and personality attributes will help you narrow down your options and choose the right cybersecurity career for you.
<urn:uuid:b43d5e19-c141-4972-81c0-f6f6f55bc776>
CC-MAIN-2022-40
https://cybersguards.com/is-cyber-security-a-good-career/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00423.warc.gz
en
0.946077
1,044
2.515625
3
In 1971, Intel, then a manufacturer of random access memory, officially released the 4004, its first single-chip central processing unit, thus kickstarting nearly 50 years of CPU dominance in computing. In 1989, while working at CERN, Tim Berners-Lee used a NeXT computer, designed around the Motorola 68030 CPU, to launch the first website, making that machine the world’s first web server. CPUs were the most expensive, the most scientifically advanced, and the most power-hungry parts of a typical server: they became the beating hearts of the digital age, and semiconductors turned into the benchmark for our species’ advancement. Few might know about the Shannon limit or Landauer’s principle, but everyone knows about the existence of Moore’s Law, even if they have never seated a processor in their life. CPUs have entered popular culture and, today, Intel rules this market, with a near-monopoly supported by its massive R&D budgets and extensive fabrication facilities, better known as ‘fabs.’
But in the past two or three years, something strange has been happening: data centers started housing more and more processors that weren’t CPUs. It began with the arrival of GPUs. It turned out that these massively parallel processors weren’t just useful for rendering video games and mining magical coins, but also for training machines to learn - and chipmakers grabbed onto this new revenue stream for dear life. Back in August, Nvidia’s CEO Jen-Hsun ‘Jensen’ Huang called AI technologies the “single most powerful force of our time.” During the earnings call, he noted that there were currently more than 4,000 AI start-ups around the world. He also touted examples of enterprise apps that could take weeks to run on CPUs, but just hours on GPUs.
A handful of silicon designers looked at the success of GPUs as they were flying off the shelves, and thought: we can do better. Like Xilinx, a venerable specialist in programmable logic devices. The granddaddy of custom silicon, it is credited with inventing the first field-programmable gate arrays (FPGAs) back in 1985. Applications for FPGAs range from telecoms to medical imaging, hardware emulation, and of course, machine learning workloads. But Xilinx wasn’t happy with adopting old chips for new use cases, the way Nvidia had done, and in 2018, it announced the adaptive compute acceleration platform (ACAP) - a brand new chip architecture designed specifically for AI.
“Data centers are one of several markets being disrupted,” CEO Victor Peng said in a keynote at the recent Xilinx Developer Forum in Amsterdam. “We all hear about the fact that there’s zettabytes of data being generated every single month, most of them unstructured. And it takes a tremendous amount of compute capability to process all that data. And on the other side of things, you have challenges like the end of Moore’s Law, and power being a problem.
“Because of all these reasons, John Hennessy and Dave Patterson - two icons in the computer science world - both recently stated that we were entering a new golden age of architectural development.”
He continued: “Simply put, the traditional architecture that’s been carrying the industry for the last 40 to 50 years is totally inadequate for the level of data generation and data processing that’s needed today.”
“It is important to remember that it’s really, really early in AI,” Peng later told DCD. “There’s a growing feeling that convolutional and deep neural networks aren’t the right approach.
This whole black box thing - where you don’t know what’s going on and you can get wildly wrong results, is a little disconcerting for folks.” A new approach Salil Raje, head of the Xilinx data center group, warned: “If you’re betting on old hardware and software, you are going to have wasted cycles. You want to use our adaptability and map your requirements to it right now, and then longevity. When you’re doing ASICs, you’re making a big bet.” Another company making waves is British chip designer Graphcore, quickly becoming one of the most exciting hardware start-ups of the moment. Graphcore’s GC2 IPU has the world’s highest transistor count for a device that’s actually shipping to customers - 23,600,000,000 of them. That’s not nearly enough to keep up with the demands of Moore’s Law - but it’s a whole lot more transistor gates than in Nvidia’s V100 GPU, or AMD’s monstrous 32-core Epyc CPU. “The honest truth is, people don’t know what sort of hardware they are going to need for AI in the near future,” Nigel Toon, the CEO of Graphcore, told us in August. “It’s not like building chips for a mature technology challenge. If you know the challenge, you just have to engineer better than other people. “The workload is very different, neural networks and other structures of interest change from year to year. That’s why we have a research group, it’s sort of a long-distance radar. "There are several massive technology shifts. One is AI as a workload - we’re not writing programs to tell a machine what to do anymore, we’re writing programs that tell a machine how to learn, and then the machine learns from data. So your programming has gone kind of ‘meta.’ We’re even having arguments across the industry about the way to represent numbers in computers. That hasn’t happened since 1980. “The second technology shift is the end of traditional scaling of silicon. We need a million times more compute power, but we’re not going to get it from silicon shrinking. So we’ve got to be able to learn how to be more efficient in the silicon, and also how to build lots of chips into bigger systems. “The third technology shift is the fact that the only way of satisfying this compute requirement at the end of silicon scaling - and fortunately, it is possible because the workload exposes lots of parallelism - is to build massively parallel computers.” Toon is nothing if not ambitious: he hopes to grow “a couple of thousand employees” over the next few years, and take the fight to GPUs, and their progenitor. Then there’s Cerebras, the American start-up that surprised everyone in August by announcing a mammoth chip measuring nearly 8.5 by 8.5 inches, and featuring 400,000 cores, all optimized for deep learning, accompanied by a whopping 18GB of on-chip memory. “Deep learning has unique, massive, and growing computational requirements which are not well-matched by legacy machines like GPUs, which were fundamentally designed for other work,” Dr. Andy Hock, Cerebras director, said. Huawei, as always, is going its own way: the embattled Chinese vendor has been churning out proprietary chips for years through its HiSilicon subsidiary, originally for its wide array of networking equipment, more recently for its smartphones. For its next trick, Huawei is disrupting the AI hardware market with the Ascend line - including everything from tiny inference devices to Ascend 910, which it claimed is the most powerful AI processor in the world. 
Add a bunch of these together, and you get the Atlas 900, the world’s fastest AI training cluster, currently used by Chinese astronomy researchers. And of course, the list wouldn’t be complete without Intel’s Nervana, the somewhat late arrival to the AI scene. Just like Xilinx and Graphcore, Nervana believes that AI workloads of the future will require specialized chips, built from the ground up to support machine learning, and not just standard chips adopted for this purpose.
“AI is very new and nascent, and it’s going to keep changing,” Xilinx’s Salil Raje told DCD. “The market is going to change, the technology, the innovation, the research - all it takes is one PhD student to completely revolutionize the field all over again, and then all of these chips become useless. It’s waiting for that one research paper.”
<urn:uuid:171afdaa-40cd-4a04-b94d-ca14877a207d>
CC-MAIN-2022-40
https://www.datacenterdynamics.com/en/analysis/new-chip-bestiary/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00423.warc.gz
en
0.945754
1,807
2.796875
3
It can be argued that industrial facilities have taken to digital transformation much earlier than other enterprises. While it’s only now that some businesses are committing to adopting digital tools, factories have been using robots and programmable logic controllers (PLCs) decades before the dotcom boom of the nineties.
Industrial cybersecurity comes to the forefront as industries increasingly adopt digital technologies.
Contributed by Joshua Blackborne
What’s probably sweeping industries today are technologies that rely on connectivity: the cloud, mobile computing, and the Internet-of-Things (IoT). These technologies offer some very exciting applications. The cloud has allowed organizations to shift part of their IT infrastructure off-premises and easily scale their available computing resources. Mobile computing and connectivity have allowed engineers to monitor and control their machines remotely. Sensors and robots are now even smarter and, through the IoT, are capable of interfacing with external artificial intelligence (AI) or analytics engines that allow these machines to automatically adjust for greater efficiency even without human intervention.
The need for Industrial Cybersecurity
However, this increasing connectivity of industrial facilities is also raising cybersecurity concerns, calling for more attention to industrial cybersecurity. Previously, industrial facilities were largely air-gapped, so hackers had to manipulate staff through social engineering attacks, or infiltrate facilities themselves. But as more industrial IT components connect to the internet, they become more exposed to cyberattacks from advanced persistent threats (APTs).
“Industrial facilities have become more connected. Cloud computing has prompted a growing number of enterprises to shift their workload online. More facilities are also incorporating smart devices into their infrastructure. Unfortunately, this is also expanding the attack surface. Given how tenacious threat groups are these days, increasing connectivity can make these enterprises vulnerable to attack,” Oren Eytan, CEO of enterprise cybersecurity firm odix, shares.
Here are three areas where industries are becoming more connected and how they can expose infrastructure to possible attacks:
Adoption of Cloud Components
One area that should concern industries regarding their cybersecurity is their adoption of cloud computing. For many organizations, the emergence of cloud computing has been a boon. They can now essentially outsource their computing needs to providers, lessening the need for acquiring and maintaining servers and applications on-site. Unfortunately, cloud instances can be compromised, whether through vulnerabilities at the provider’s end or through weak access controls at the user’s end. Hackers can then steal, hijack, and destroy critical data. They can even perform supply chain hacks that introduce malicious code or malware into the company’s cloud storage and repositories. Access to these cloud components is often whitelisted, allowing malware to reach the facility’s infrastructure unhindered.
“What could be more troubling is that hackers have become crafty, disguising their malware within legitimate files. They can even feature polymorphic code that continuously changes, allowing it to evade conventional signature-based detection.
What’s often needed is for enterprises to integrate solutions like content disarm and reconstruction that can sanitize all files coming into the network, whether through email or repositories, to ensure that they are safe,” Eytan adds.
Introduction of Smart Devices and IoT
Another way that industries are becoming more connected is through the adoption of smart IoT devices. Previously, industries relied on PLCs to control their machinery, which had limited connectivity outside facilities. Today, sensors and robots are connecting directly to the Internet, allowing them to readily send and receive data, or be remotely controlled. However, since these devices directly access the Internet, it’s possible for attackers to quickly interface with them. Unless they are equipped with capable security features, they may easily be compromised. One only has to recall how the Mirai malware compromised hundreds of thousands of low-security IP cameras and home routers and made them part of a massive botnet that nearly took down the Internet in 2016.
“It’s reasonable for companies to be concerned about the security of IoT device deployments in industrial environments. Each device has an associated risk to data and operational integrity. A compromised internet-connected device could create a pathway for attacks on connected systems, including critical control systems,” writes Sid Snitkin, VP of industry and infrastructure advisory firm ARC.
It is critical, then, for enterprises to be aware of these concerns and to integrate only those devices that have ample security features, such as the ability to change default administrator credentials, disable unused features, and update device firmware and applications. The industry has been working toward promoting device certification through bodies like ISASecure, but manufacturers have yet to make this practice standard.
Use of 5G for Industrial Applications
5G is set to explode this year as more areas and territories get better coverage. In the U.S., service providers are already gearing up to launch their mobile 5G services in major cities. Manufacturers have already released 5G-capable devices in their flagship and premium models. The feature is expected to trickle down to their more mainstream models as wider coverage becomes available. Aside from being capable of gigabit-level speeds, 5G is supposedly capable of much lower latency. This becomes a definite advantage where faster response times are critical, especially for remotely controlling devices and machinery that requires precision. Self-driving cars can receive traffic and road data coming from external sources sooner, allowing them to make real-time adjustments. In healthcare, this could enable remote robotic surgery to be done in even the most isolated locations.
But the use of wireless connectivity has its weaknesses as well. Hackers can perform man-in-the-middle attacks where they hijack signals or use fake cell towers so that they can steal data in transit or even inject malware into connected devices.
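One of the low-effort hardening steps mentioned above, changing default administrator credentials, is also easy to audit automatically. The sketch below tries a short list of well-known factory logins against the web interfaces of devices on a network you manage; the credential list, URL path, and status-code check are illustrative assumptions, and a check like this should only ever be run against equipment you are authorized to test.

```python
# Sketch: flag devices on your own network that still accept factory-default
# web logins. The credential list and URL are illustrative assumptions;
# only run this against equipment you are authorized to test.
import requests
from requests.auth import HTTPBasicAuth

DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def uses_default_creds(host: str, timeout: float = 3.0) -> bool:
    """Return True if the device's web UI accepts any well-known default login."""
    for user, pwd in DEFAULT_CREDS:
        try:
            r = requests.get(f"http://{host}/", auth=HTTPBasicAuth(user, pwd), timeout=timeout)
        except requests.RequestException:
            return False            # unreachable, or no web interface at all
        if r.status_code == 200:    # crude check: an authenticated page came back
            return True
    return False

if __name__ == "__main__":
    for host in ["192.168.1.20", "192.168.1.21"]:   # devices you own or manage
        status = "STILL USING DEFAULT CREDENTIALS" if uses_default_creds(host) else "ok"
        print(f"{host}: {status}")
```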
Committing to Industrial Cybersecurity
Enterprises now have to weigh the risks and benefits of adopting these new technologies. As businesses, they would definitely want to leverage better connectivity to improve efficiency and enable new use cases. Still, they also have to seriously consider the cybersecurity threats that adopting these technologies can introduce to their infrastructures. Fortunately, security solutions providers are continually developing their tools to accommodate all these changes.
Organizations and facilities must ultimately revisit their security strategies and practices to ensure that they keep their perimeters secure even if they choose to introduce new components and endpoints to their infrastructure.
CISO MAG did not evaluate the advertised/mentioned product, service, or company, nor does it endorse any of the claims made by the advertisement/writer. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.
<urn:uuid:884c610d-485b-4f55-9798-79ab24c6f91d>
CC-MAIN-2022-40
https://cisomag.com/industrial-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00623.warc.gz
en
0.951125
1,356
2.625
3
The 2016 Distributed Denial of Service attack on Dyn came from more than 100,000 infected devices. DDoS attacks leverage massive quantities of unsecured Internet-connected devices to disrupt Internet services worldwide [DYN]. The malicious and sophisticated attack kicked off serious conversations about network security and highlighted the vulnerability of Internet of Things devices. These infected IoT devices connect globally to private and public sector networks, so the question is: how can we harden our networks against malicious attacks?
In this blog, we’ll focus on the multiple layers of product security architecture and implementation. We’ll discuss Industrial IoT network devices, such as routers and switches, and the hardening requirements that mitigate security risks, particularly when actors are intent on data breaches and software intrusion.
Malicious attacks, as noted above, are not limited to consumer IoT devices. They also have significant disruptive and financial consequences for businesses and services. Just a few months ago, a plant in North Carolina lost valuable production time when hackers deployed corrupted software [North-Carolina] designed to disrupt its production and force a payoff. This is one of the main drivers of data breach costs, as estimated in the 2017 Cost of Data Breach Study [Ponemon].
Industrial IoT network designers must integrate device and platform integrity, access control policies, and threat detection and mitigation capabilities on routers, switches, and wireless access points deployed in plants or outdoor locations to protect end-devices against attacks. Failing to address these key considerations may allow an attacker to gain access to industrial IoT network equipment, paving the way for data breaches or attacks against the rest of the infrastructure. We saw this happen in [Ireland].
As discussed in Cisco Trustworthy Systems At-a-Glance [CTS-AAG], the threat landscape has evolved, and it is critical that networks be protected from malevolent attacks and counterfeit and tampered products. While product security and security management technologies spread across all layers of the Open Systems Interconnect model, device hardening is an initial — and mandatory — component of trustworthy systems that helps prevent several types of threats, such as:
- Device hardware changes and rogue devices – Closes an open door to foreign control of devices joining the network; prevents the insertion of counterfeit equipment in the network to cause abnormal behaviors.
- Device software changes – Thwarts data exfiltration from malicious software.
- Unauthorized user access – Prevents unauthorized users from gaining privileged access to devices and compromising network security.
- Unauthorized network access – Blocks a network device from being compromised, covering everything from data sniffing and data exfiltration to scanning and reconnaissance of networked devices, and man-in-the-middle (MITM) attacks that secretly relay or alter the communication between two parties who believe they are communicating directly with each other.
- DDoS through network protocols – Inhibits incoming floods of data meant to delay the processing of valid traffic, as well as attempts to modify control plane protocol behavior, e.g. IPv4 ARP and DHCP attacks to subvert the host initialization process, routing attacks to disrupt or redirect traffic flows, Layer 3–Layer 4 spoofing to mask the intent or origin of the traffic, and header manipulation and fragmentation to evade or overwhelm the network (e.g. Smurf attacks or broadcast amplification).
- Malware infiltrates applications – From viruses to the exploitation of application protocols, malicious data may lead to data exfiltration, DDoS, or data corruption (e.g. ransomware and Spectre). With the emergence of fog and edge computing, network devices hosting applications may face both.
This is what IIoT network and device security looks like
Security in the Cisco Industrial IoT portfolio starts with the initial design of a product, as documented in the Cisco Secure Development Lifecycle [CSDL]. Each hardware platform embeds an ACT2 chipset, a Secure Unique Device Identity compliant with IEEE 802.1AR that contains product identity information and a certificate chain (x.509) provisioned at manufacturing. The installed certificate provides an anchor of trust for the boot chain, enabling detection and recovery of boot code integrity as shown in Figure 1.
Software integrity against any backdoor image modification is achieved through Image Signing and Secure Boot support, with characteristics such as:
- Golden bootloader images are always stored on a permanent read-only boot flash that is encapsulated in epoxy and carries a signed tamper-evident label.
- FPGA bootloader images are signed so they can be validated by Cisco Secure Boot using certificates burnt into ACT2.
This system protects the boot process against changes to the boot sequence, booting from an alternate device, bypassing integrity checks, or adding persistent code. Each step of the software boot, as shown in Figure 2, is authenticated by the previous stage to ensure integrity all the way through the system. Finally, once the device boots, the device integrity may be validated as documented in Cisco IOS Software Integrity Assurance [CSIA], although some commands may vary depending on the platform.
The next step is to harden the software configuration, following the Cisco IOS hardening recommendations to protect user access and the network control plane.
The important takeaway is that we can’t always be perfect. “You protect against what you know, and mitigate the risk against what you don’t know.” Let us know the problems your organization is facing. Feel free to share your thoughts in the comments below.
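The hardware anchors described above are vendor-specific, but the operational habit they support, verifying that a software image matches a known-good value before it runs, can be practiced anywhere. The sketch below compares a downloaded image's SHA-512 digest with a published value; the filename and digest are placeholders, and a hash check like this complements rather than replaces hardware-anchored secure boot and signed images.

```python
# Sketch: verify a downloaded firmware/software image against a published
# SHA-512 digest before installing it. The filename and expected digest are
# placeholders; this complements, not replaces, hardware-anchored secure boot.
import hashlib
import sys

def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large images need not fit in memory."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    image_path = "ios-image.bin"          # placeholder filename
    published_digest = "0123abcd..."      # value published by the vendor
    actual = sha512_of(image_path)
    if actual == published_digest:
        print("OK: image matches the published digest")
    else:
        print("MISMATCH: do not install this image")
        sys.exit(1)
```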
<urn:uuid:b57baac4-366f-41d7-9312-6c4ea8338ccd>
CC-MAIN-2022-40
https://blogs.cisco.com/digital/how-to-harden-your-devices-to-prevent-cyber-attacks
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00623.warc.gz
en
0.896398
1,106
2.734375
3
As a follow-on to our last write-up about an effective digital vaccine, we thought it would be worth pointing out a few facts about a critical topic in the health arena. Soap. Yeah, soap. Before you stop reading, let me break down what we have been getting wrong about something as simple as washing our hands during the Covid-19 fiasco, and detail how that applies to our digital space if we think about it differently.
Health bodies around the world recognize handwashing as one of the most important health care steps to prevent the spread of disease. Seriously, washing your hands effectively is noted as one of the most effective countermeasures for stopping the spread of viruses and diseases. The CDC estimates that about 30% of stomach illnesses and up to 20% of respiratory infections can be prevented through something as simple as handwashing: all you need is soap and water and 20 seconds of scrubbing.
Proper handwashing with soap and clean running water removes germs from the hands; it literally washes them away from your skin, which stops people from catching viruses when they touch their mouth, nose, or eyes, and prevents germs being spread on surfaces like door handles. And it stops you or someone else from spreading the virus or disease to others through touching or casual contact.
Antibacterial soap may seem like a more effective cleaning solution, and it is marketed as the “cure” for virus and disease spread, but the reality is that antibacterial soap is no better than regular soap at killing bacteria or viruses. And what the marketing forgets to remind us is that a virus is not a bacterium, so all the antibacterial chemicals in the universe won’t help kill a virus this way. To put it simply, anti-bacterial does not magically become anti-viral because it is being used to cleanse one’s hands for a viral infection.
Antibacterial soap also contains chemicals that can destroy bacteria, but not necessarily viruses. These chemicals, which are not found in regular soap, react with the surface of bacterial cells (notice: bacterial cells, not viral cells), and that fact alone does not make antibacterial soap more effective. Really, any soap can destroy bacteria and some viruses, but the most important “feature” of soap is that it helps to wash away the infectant. Dead or not, a virus down the drain is a good thing. Some antibacterial soaps can technically kill germs, but that isn’t necessarily better. The fact that the germs have left our hands is enough. What we should pay attention to here is that if we stick to regular soap and water, we reduce the risk of an infection. Overall, both the FDA and CDC have stated that antibacterial soap’s effectiveness at killing germs is unproven, and that it is no more effective than regular soap at removing germs, period. Though it may be tempting, don’t listen to the marketing ploys used by ‘antibacterial’ soaps. Just washing your hands frequently with regular soap and water is still the best way to remove viruses and bacteria.
OK, great, but what does that have to do with cyber security? Well, it’s pretty simple and clear when you think about it. Many security vendors market their solutions as a “cure” for cyber security flaws and the inherent technical problems that enable compromises. That’s marketing, not a fix. That is essentially “antibacterial digital soap.”
It won't magically fix the issues we most commonly face in cybersecurity, and those solutions won't effectively eliminate the technical problems present in today's computer systems. In other words, we can't kill a digital virus with a solution built for digital bacteria; it won't work. If that's the case, then what should we do? We need to digitally wash our hands more effectively. Simple, right? We need solutions that help us "wash away" the risk and flush it down the proverbial drain. By applying solutions that are vectored specifically at the risk created by technical misconfigurations and the inherent weaknesses that hackers exploit in systems, we can do better and reduce our exposure. Just as in the physical world, we must deal with threats to our digital health as effectively and simply as possible to ensure maximum survivability and a return on our efforts and expenses. Using the right solutions, vectored to the specifics of the threat rather than chosen on the strength of flashy "antibacterial" marketing, is what makes the most sense and helps us collectively stay more secure and digitally healthy.
<urn:uuid:168569de-f061-4735-a640-6902f640e1ca>
CC-MAIN-2022-40
https://gytpol.com/cyber-soap/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00623.warc.gz
en
0.95433
966
3.15625
3
Have you ever been away from home and needed to access something on your home computer or router? It sounds convenient, right? It sure would be easier than driving or flying back to your home, or asking a friend or family member to go to your house to retrieve what you're looking for. Well, never fear. In this guide, we'll walk you through the process of setting up your router so you can access it remotely via the Internet. Here's how to access your router from the Internet, in broad strokes: change your router's admin password, enable login from the WAN, optionally enable logging and notifications, determine your public IP address (or set up Dynamic DNS), and test the connection before you need it. If you aren't very tech savvy, be sure to stick around until the end of the article, where I discuss an alternate method that works just as well but is easier to set up.

The need for remote access
There are many reasons someone would want to access their router from the Internet. Perhaps they need to change their Wi-Fi password for a roommate, set up remote access into their home network (called a VPN), or access files on a hard drive connected to their router. Some people may not have a specific need to access their router from the Internet today, but they want the flexibility to do so in the future, since they know they may well have the need at a later date.

Change your router's admin password
Most routers have two different passwords: your WiFi password (which pretty much everyone is familiar with because they need to know it on a regular basis) and your admin password. The admin password is what grants you access to the router's management web interface, which is where you go to make changes to the router, such as changing or setting IP addresses, your WiFi network name (SSID), your WiFi password, and much more. Most routers come out of the box with a default password of something like 'admin' or 'password', which is very insecure. Some routers even have a blank admin password by default! On its own this isn't a huge deal, because by default the management interface is generally only accessible from a computer inside your network. However, we are about to enable access from the Internet, so you better believe it is important to change the password to something secure.
- Find the private IP of your router and enter it into a web browser. This is usually 192.168.1.1, 192.168.0.1, 10.0.1.1, or something similar (it depends on the brand of your router). See here for help in identifying your router's private IP.
- Enter your admin username and password. If you didn't change these when you originally installed the router, they are likely still at the defaults. The username is usually 'admin' or 'administrator' and the password is usually 'admin' or 'password' by default. Again, this totally differs depending on the router manufacturer and model. If you can't find it, I recommend searching Google for "[router model] default password".
- Once you are logged in, you need to find the password setting. Usually there will be a "General", "Admin", or "Administrator" area of the settings, so try looking there. You may be able to change the username in addition to changing the password. This is recommended, as it will greatly increase security. Just make sure you record it somewhere; if you forget it, you will have to reset the router to defaults to get back in.

Enable login from WAN
While you are still logged in to the router's management interface, let's enable remote login capability. Generally the setting will be called something like "Allow login from WAN" or "Allow login from Internet", but it differs widely between routers.
This setting will usually live in the Advanced Settings area of the management interface. Again, if you can't find it, try searching Google for "[router model] login from wan".

Enable logging and notifications
This step is entirely optional, and your router may or may not support this feature. While still logged in, look for "logging", "notifications", or a similar section. It will likely appear under the "Advanced Settings" area or the "Admin" area. Enable any logging you desire. This will cause the router to log events, such as when someone logs in to the management interface. You may even be able to have the router email you a notification when someone logs in; again, this varies wildly by model and manufacturer. This type of information is helpful from a security standpoint, so you will know if someone else manages to log in to your router over the Internet (which would be bad!).

Determine your public IP address
I say "public" IP address because your router actually has two IP addresses, and we are looking for the public IP, not the private IP. Your IP address is exactly that: an address. It's like your mailing address on the Internet. Your IP address is used any time you want to send data to, or receive data from, the Internet. Since you want to access your router from the Internet, you will need to know what your IP address is. There are a couple of ways to figure this out.

Look on your router
One way to find your public IP address is to log in to your router and have it show you your public IP address. Once logged in, look for a screen or tab labeled "Status". Every router is a little bit different, so you may have to look around a bit. The status page will usually show your router's status, including the "WAN" or "Internet" IP address.

Ask a website
The other easy method for determining your public IP is to query a website. There are many "What is my IP address?" type websites out there that will examine the traffic your computer sends to them when the page is loading, determine the IP address your traffic is originating from, and display that address for you.

Write it down
Once you have obtained your public IP address, write it down somewhere or email it to yourself. You will need it later. One caveat: most ISPs hand out public IP addresses dynamically via DHCP, which means the address can change over time. Usually this isn't a big deal, and most people don't even notice that their IP has changed. However, when you are going to be accessing your router from the Internet, it is important to be aware that your address could change. I've seen some ISPs using DHCP where your IP address doesn't change for months or even years. I've also seen some ISPs change your address every day. My current ISP is like this: my IP address changes every 24 hours like clockwork. Dynamic DNS is absolutely critical for me because of this. Thus, if you are leaving for a trip and hoping to access your router from the road, it is best to record your public IP just before you leave the house to maximize the chances that it will still be the IP assigned to your router when you attempt to log in. Just be aware that if your IP address changes between the time you recorded it and the time you attempt to log in remotely, you will not be able to log in. For this reason, it is suggested to set up Dynamic DNS (DDNS), which will update automatically when your IP address changes. Thus, you will connect to your router using a hostname like "andrewshouse.no-ip.com" instead of an IP address such as 18.104.22.168.
The DDNS server will automatically update the IP address that "andrewshouse.no-ip.com" resolves to every time it changes. DDNS is an advanced topic, and it is only recommended if you are a somewhat advanced user or are at least feeling adventurous!

Test it before you leave
If you want to be sure that remote login will work once you are away from home, it is best to test it beforehand. To test, you will need access to a secondary Internet connection other than your regular home broadband connection. This could be a neighbor's house, your Internet connection at work, etc. You could also temporarily enable the hotspot on your smartphone and tether your computer to it. Once you are on a different Internet connection:
- Open a web browser and enter your router's public IP address (or DDNS fully qualified domain name) in the address bar, then press the enter key.
- You should be presented with a login prompt. If you are not, try entering "http://" or "https://" before the address and press enter again. If it still doesn't work, you may also need to append a colon and port number behind the address, such as ":8443".
- Once prompted, enter your management credentials and log in!

Logging in remotely
You are now ready to log in remotely. Regardless of where you travel to, as long as you have an Internet connection, you should be able to log in. Hopefully you already tested your ability to log in as shown above. The procedure for logging in when you are actually away from home will be the same.

An easier way
If you are less techie and are simply looking for a solution that works, you may also want to research setting up remote access to a computer at your home via a service such as TeamViewer. TeamViewer and similar services can be set up for free to access the computer in question over the Internet. The downside is that the computer has to be left on and connected to your home network at all times in order for this to work. The upside is that it requires no special configuration on your router, and Dynamic DNS doesn't need to be set up. Simply connect to the remote computer via the TeamViewer app on your smartphone, PC, or Mac, and TeamViewer takes care of the rest. If you still needed to access your router, you could launch a web browser on the remote PC using TeamViewer and then log in to the router's private IP address normally, as if you were at home. This solution is simpler, but it also relies on TeamViewer working in order to function, so there are pros and cons for sure. Regardless of which method you choose, good luck!

Andrew Namder is an experienced Network Engineer with 20+ years of experience in IT. He loves technology in general, but is truly passionate about computer networking and sharing his knowledge with others. He is a Cisco Certified Network Professional (CCNP) and is working towards achieving the coveted CCIE certification. He can be reached at [email protected].
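As a small companion to the manual steps above, here is a minimal Python sketch of the two checks this guide describes: asking a public "what is my IP" service for your current address, and verifying that your router's management port answers from outside your network. The hostname and port below are placeholders I have made up for illustration, not values from this article; substitute your own DDNS name (or the IP you wrote down) and whatever port your router actually exposes.

```python
import socket
import urllib.request

# Ask a public "what is my IP" service for the address your traffic
# originates from (the "Ask a website" step). api.ipify.org is one such
# service; any similar plain-text service would work the same way.
public_ip = (
    urllib.request.urlopen("https://api.ipify.org", timeout=10)
    .read()
    .decode()
    .strip()
)
print(f"Current public IP: {public_ip}")

# Placeholder values: substitute your own DDNS hostname (or recorded IP)
# and the port your router's web interface listens on.
ROUTER_HOST = "example-home.ddns.example.com"  # hypothetical hostname
ROUTER_PORT = 8443                             # hypothetical admin port

# A simple TCP reachability check: can we open a connection to the
# router's management port from wherever this script is running?
try:
    with socket.create_connection((ROUTER_HOST, ROUTER_PORT), timeout=10):
        print(f"{ROUTER_HOST}:{ROUTER_PORT} is reachable; try logging in from a browser.")
except OSError as exc:
    print(f"Could not reach {ROUTER_HOST}:{ROUTER_PORT}: {exc}")
```

Run it from a connection outside your home network (a phone hotspot, for example) to mirror the testing procedure described above.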
<urn:uuid:1dd7292c-fab9-43eb-970a-b7870c936e2f>
CC-MAIN-2022-40
https://www.infravio.com/how-to-access-my-router-from-the-internet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00623.warc.gz
en
0.948028
2,290
3
3
We could already be swimming in a world of deepfakes and the public wouldn’t even know. After all, how would we? Those being manipulated by microtargeting before the last US presidential election or the Brexit referendum only found out too late that their information had been weaponized against them. Like the frog being boiled, we don’t notice the water getting hotter until it’s too late. The terrifying prospect of a world of so-called deepfakes, where video is falsified so effectively that it is impossible to tell if it is true or not, is already at hand. “Thanks” to advances in machine learning, CGI, and facial mapping technology, deepfakes are not only possible, but probable. Never one to miss an opportunity, Google is helping to develop a system that can detect deepfake videos … by creating deepfakes! In a blogpost on 24 September, Google Research said it was taking the issue seriously. Google created a large dataset of 363 real videos of 28 consenting actors and an additional 3,068 manipulated videos that will be used by researchers from the Technical University of Munich, the University Federico II of Naples and University of Erlangen-Nuremberg FaceForensics project. Deepfakes use Generative Adversarial Networks (GANs) – a set of algorithms that can create new data from existing datasets. Quite obviously the implications are alarming. As well as manipulating politicians’ images, the move also heralds a new and disturbing form of revenge porn. But the threats to society extend beyond the technical. In a world where you can’t believe your eyes, how do you know what’s real? How can you prove that your video is NOT a deepfake. A recent report from cybersecurity company Deeptrace Labs found mixed results: although it didn’t detect any instances in which deepfakes were actually used in disinformation campaigns, it did emerge that the knowledge that they could be used had a powerful effect. “From the perspective of the news business, the ability to trust or indeed identify disinformation in third party content is crucial,” explained Nick Cohen, Reuters head of video products. Reuters therefore set up its own deepfake experiment, using flesh and blood humans! The experiment identified certain red flags to help detect deepfakes: audio to video synchronization issues; unusual mouth shape, particularly with sibilant sounds; a static subject. But as the technology improves, humans will be increasingly unable to trust the evidence of their own eyes. Using AI to detect deepfakes is not yet widely possible, so policymakers are starting to take note – primarily in response to fears that their own campaigns could be derailed. But despite some noise about disinformation, so far no binding rules have been put forward. The European Union’s Code of Practice on disinformation does not even mention the issue. In June, then-MEP, Marietje Schaake, called for a parliamentary inquiry into the role of technology companies on democracy. “Self-regulation has proven insufficient,” she said. “Not a day goes by without new information about malicious actors (ab)using tech platforms, undermining democracy, without technology companies taking sufficient action. The companies’ business models, as well as the use of botnets and deepfakes, further impacts access to information. Oversight and accountability urgently need to improve.” She suggested a committee should hear experts under oath to uncover the many unknown details of the workings of technology platforms and how they impact democracy and elections. 
The assumption is that deep fakes will primarily be deployed to derail democratic institutions. And that’s not an unreasonable assumption for a number of reasons. Firstly, as previously mentioned, the very fact of deepfakes has a destabilizing effect, something that certain foreign actors actively welcome. Secondly for those already in power, videos of wrongdoing can be dismissed as “deepfakes,” providing the cover of deniability. But lastly and most importantly, creating really good deepfakes is expensive. Despite online sites that offer “cheapfakes” for humorous at-home use, only those with significant resources can create long-term convincing videos and the high costs and technical barriers aren’t going away anytime soon. Politicians (and celebrities) are also desirable targets because there is so much footage and imagery of them available, something useful in creating a believable deepfake. And therein lies some of the armour we everyday mortals can deploy to avoid being “deepfaked.” As with so much protection in daily life, it involves NOT giving too much of your personal data away – limit the number of videos and images of you publicly available. Businesses should also be wary. According to reports, criminals have successfully stolen millions of dollars by mimicking CEOs’ voices with an AI program “that had been trained on hours of their speech – culled from earnings calls, YouTube videos, TED talks, etc.” Other risks include leaking a deepfake of a CEO saying something that could send stock prices tumbling. In time, convenient detection technology may emerge, or law-makers might find some way to deter the practice. In the meantime, don’t annoy anyone who might have the time, know-how, resources and thirst for revenge.
<urn:uuid:20d29d86-d899-4caf-9ea7-ce9aaa570db7>
CC-MAIN-2022-40
https://www.cpomagazine.com/cyber-security/deepfakes-could-break-the-internet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00623.warc.gz
en
0.949856
1,102
2.609375
3
Wi-Fi security took a severe blow on Monday as Mathy Vanhoef, a Belgian security researcher, released details of a major weakness in the Wi-Fi Protected Access 2 (WPA2) protocol. The WPA2 security protocol was first made available in 2004 and has since served as the de facto standard for securing wireless networks using data encryption. The attack was named KRACK, for Key Reinstallation Attack, and allows an attacker physically within range of a victim to decrypt and read data off the Wi-Fi network. More worrying is the ability of the attacker, in certain situations, to inject data (e.g., ransomware and other malware) and manipulate traffic. KRACK is also particularly devastating for Android and Linux users due to the way Wi-Fi is implemented on those platforms, making it trivial to intercept and manipulate the traffic sent by these devices. To demonstrate the attack, the researchers posted a video on their website.

Why is this Wi-Fi security flaw significant?
While previous security issues were the result of weaknesses in the way hardware vendors implemented the protocol, Vanhoef's discovery exposes a fundamental flaw in the way the protocol itself was designed. In other words, even a vendor that implemented the standard perfectly would have inadvertently built the flaw into its product. A vulnerability in the protocol means that every device is affected, and vendors will need to roll out Wi-Fi security patches to every connected device that relies on Wi-Fi. This includes wireless networks, computers, and mobile devices. This will take time.

How can organizations protect themselves?
While waiting for vendors to provide security updates for affected equipment, organizations can try to limit their exposure by implementing additional Wi-Fi security measures.

Reduce your Wi-Fi network footprint
Since the attack requires the attacker to be physically within range of your Wi-Fi access point, you can make sure that access to the Wi-Fi signal is controlled. In many deployments, the organization's Wi-Fi signal covers a physical area that is much larger than needed. If you are getting a signal from the parking lot, you may be subjecting your network to unnecessary Wi-Fi security risks, as malicious attackers can tap into these signals. Here are some steps you can take to reduce signal leakage:
- Keep the wireless access points away from windows
- Position the wireless access points on the ceiling
- Use the right wireless antennas to keep signals within your building/office

Add an additional encryption layer
To protect Wi-Fi security, organizations may consider implementing an additional layer of encryption on their network traffic using a Virtual Private Network (VPN). Even if the attacker can access the Wi-Fi network, data cannot be siphoned off, thanks to the VPN encryption. Because the attack must be executed within range of the Wi-Fi signal, the impact will not be as devastating as an online threat that can spread quickly over the Internet. However, the widespread impact in terms of the number of networked devices that must be updated may mean a prolonged period during which vulnerable devices remain at risk.
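As a rough illustration of the footprint-reduction advice, here is a small Python sketch, not part of the original article, that reports how strongly a given SSID is visible from wherever it is run. It assumes a Linux laptop with NetworkManager's nmcli tool installed, and the SSID name is a placeholder; walking it around the parking lot, lobby, and other perimeter spots gives a crude picture of how far the signal leaks.

```python
import subprocess

# Placeholder: the SSID of the network being surveyed.
TARGET_SSID = "CorpWiFi"

# Ask NetworkManager for visible networks and their signal strength (0-100).
# -t gives terse, colon-separated output; -f limits the fields returned.
scan = subprocess.run(
    ["nmcli", "-t", "-f", "SSID,SIGNAL", "device", "wifi", "list"],
    capture_output=True, text=True, check=True,
)

for line in scan.stdout.splitlines():
    # SSIDs may themselves contain colons, so split on the last one only.
    ssid, _, signal = line.rpartition(":")
    if ssid == TARGET_SSID:
        print(f"{ssid} visible at signal strength {signal}/100 from this survey point")
```

Repeating the survey after repositioning access points or changing antennas shows whether the coverage area has actually shrunk.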
<urn:uuid:9ee873ba-e965-4e03-84bc-b91c506739e5>
CC-MAIN-2022-40
https://www.cpomagazine.com/cyber-security/protect-yourself-from-the-latest-wpa2-wi-fi-security-flaw/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00623.warc.gz
en
0.934381
628
2.71875
3
Machine learning (ML) teaches computers to learn from data without being explicitly programmed. Unfortunately, the rapid expansion and application of ML have made it difficult for organizations to keep up, as they struggle with issues such as labeling data, managing infrastructure, deploying models, and monitoring performance. This is where MLOps comes in. MLOps is the practice of optimizing the continuous delivery of ML models, and it brings a host of benefits to organizations. Below we explore the definition of MLOps, its benefits, and how it compares to AIOps. We also look at some of the top MLOps tools and platforms.

What Is MLOps?
MLOps combines machine learning and DevOps to automate, track, pipeline, monitor, and package machine learning models. It began as a set of best practices but slowly morphed into an independent ML lifecycle management approach. As a result, it applies to the entire lifecycle, from integrating data and model building to the deployment of models in a production environment. MLOps is a special type of ModelOps, according to Gartner. However, MLOps is concerned with operationalizing machine learning models, whereas ModelOps focuses on all sorts of AI models.

Benefits of MLOps
The main benefits of MLOps are:
- Faster time to market: By automating the deployment and monitoring of models, MLOps enables organizations to release new models more quickly.
- Improved accuracy and efficiency: MLOps helps improve models' accuracy by tracking and managing the entire model lifecycle. It also enables organizations to identify and fix errors more quickly.
- Greater scalability: MLOps makes it easier to scale up or down the number of machines used for training and inference.
- Enhanced collaboration: MLOps enables different teams (data scientists, engineers, and DevOps) to work together more effectively.

MLOps vs. AIOps: What are the Differences?
AIOps is a newer term coined in response to the growing complexity of IT operations. It refers to the application of artificial intelligence (AI) to IT operations, and it offers several benefits over traditional monitoring tools. So, what are the key differences between MLOps and AIOps?
- Scope: MLOps is focused specifically on machine learning, whereas AIOps is broader and covers all aspects of IT operations.
- Automation: MLOps is largely automated, whereas AIOps relies on human intervention to make decisions.
- Data processing: MLOps uses pre-processed data for training models, whereas AIOps processes data in real time.
- Decision-making: MLOps relies on historical data to make decisions, whereas AIOps can use real-time data.
- Human intervention: MLOps requires less human intervention than AIOps.

Types of MLOps Tools
MLOps tools are divided into four major categories, dealing with:
- Data management
- Modeling
- Operationalization
- End-to-end MLOps platforms
Common tool types across these categories include:
- Data Labeling: Large quantities of data, such as text, images, or sound recordings, are labeled using data labeling tools (also known as data annotation, tagging, or classification software). Labeled information is fed into supervised ML algorithms to generate predictions on new, unclassified data.
- Data Versioning: Data versioning ensures that different versions of data are managed and tracked effectively. This is important for training and testing models as well as for deploying models into production.
- Feature Engineering: Feature engineering is the process of transforming raw data into a form that is more suitable for machine learning algorithms.
This can involve, for example, extracting features from data, creating dummy variables, or transforming categorical data into numerical features. - Experiment Tracking: Experiment tracking enables you to keep track of all the steps involved in a machine learning experiment, from data preparation to model selection to final deployment. This helps to ensure that experiments are reproducible and the same results are obtained every time. - Hyperparameter Optimization: Hyperparameter optimization is the process of finding the best combination of hyperparameters for an ML algorithm. This is done by running multiple experiments with different combinations of hyperparameters and measuring the performance of each model. - Model Deployment/Serving: Model deployment puts an ML model into production. This involves packaging the model and its dependencies into a format that can be run on a production system. - Model Monitoring: Model monitoring is tracking the performance of an ML model in production. This includes measuring accuracy, latency, and throughput and identifying any problems. End-to-end MLOps platforms Some tools go through the machine learning lifecycle from end to end. These tools are known as end-to-end MLOps platforms. They provide a single platform for data management, modeling, and operationalization. In addition, they automate the entire machine learning process, from data preparation to model selection to final deployment. Also read: Top Observability Tools & Platforms Best MLOps Tools & Platforms Below are five of the best MLOps tools and platforms. SuperAnnotate: Best for data labeling & versioning Superannotate is used for creating high-quality training data for computer vision and natural language processing. The tool enables ML teams to generate highly precise datasets and effective ML pipelines three to five times faster with sophisticated tooling, QA (quality assurance), ML, automation, data curation, strong SDK (software development kit), offline access, and integrated annotation services. In essence, it provides ML teams with a unified annotation environment that offers integrated software and service experiences that result in higher-quality data and faster data pipelines. - Pixel-accurate annotations: A smart segmentation tool allows you to separate images into numerous segments in a matter of seconds and create clear-cut annotations. - Semantic and instance segmentation: Superannotate offers an efficient way to annotate Label, Class, and Instance data. - Annotation templates: Annotation templates save time and improve annotation consistency. - Vector Editor: The Vector Editor is an advanced tool that enables you to easily create, edit, and manage image and video annotations. - Team communication: You can communicate with team members directly in the annotation interface to speed up the annotation process. - Easy to learn and user-friendly - Well-organized workflow - Fast compared to its peers - Enterprise-ready platform with advanced security and privacy features - Discounts as your data volume grows - Some advanced features such as advanced hyperparameter tuning and data augmentation are still in development. Superannotate has two pricing tiers, Pro and Enterprise. However, actual pricing is only available by contacting the sales team. Iguazio: Best for feature engineering Iguazio helps you build, deploy, and manage applications at scale. New feature creation based on batch processing necessitates a tremendous amount of effort for ML teams. 
These features must be utilized during both the training and inference phases. Real-time applications are more difficult to build than batch ones. This is because real-time pipelines must execute complex algorithms in real-time. With the growing demand for real-time applications such as recommendation engines, predictive maintenance, and fraud detection, ML teams are under a lot of pressure to develop operational solutions to the problems of real-time feature engineering in a simple and reproducible manner. Iguazio overcomes these issues by providing a single logic for generating real-time and offline features for training and serving. In addition, the tool comes with a rapid event processing mechanism to calculate features in real time. - Simple API to create complex features: Allows your data science staff to construct sophisticated features with a basic API (application programming interface) and minimize effort duplication and engineering resources waste. You can easily produce sliding windows aggregations, enrich streaming events, solve complex equations, and work on live-streaming events with an abstract API. - Feature Store: Iguazio’s Feature Store provides a fast and reliable way to use any feature immediately. All features are stored and managed in the Iguazio integrated feature store. - Ready for production: Remove the need to translate code and break down the silos between data engineers and scientists by automatically converting Python features into scalable, low-latency production-ready functions. - Real-time graph: To easily make sense of multi-step dependencies, the tool comes with a real-time graph with built-in libraries for common operations with only a few lines of code. - Real-time feature engineering for machine learning - It eliminates the need for data scientists to learn how to code for production deployment - Simplifies the data science process - Highly scalable and flexible - Iguazio has poor documentation compared to its peers. Iguazio offers a 14-day free trial but doesn’t publish any other pricing information on its website. Neptune.AI: Best for experiment tracking Neptune.AI is a tool that enables you to keep track of all your experiments and their results in one place. You can use it to monitor the performance of your models and get alerted when something goes wrong. With Neptune, you can log, store, query, display, categorize, and compare all of your model metadata in one place. - Full model building and experimentation control: Neptune.AI offers a single platform to manage all the stages of your machine learning models, from data exploration to final deployment. You can use it to keep track of all the different versions of your models and how they perform over time. - Single dashboard for better ML engineering and research: You can use Neptune.AI’s dashboard to get an overview of all your experiments and their results. This will help you quickly identify which models are working and which ones need more adjustments. You can also use the dashboard to compare different versions of your models. Results, dashboards, and logs can all be shared with a single link. - Metadata bookkeeping: Neptune.AI tracks all the important metadata associated with your models, such as the data they were trained on, the parameters used, and the results they produced. This information is stored in a searchable database, making it easy to find and reuse later. This frees up your time to focus on machine learning. 
- Efficient use of computing resources: Neptune.AI allows you to identify under-performing models and save computing resources quickly. You can also reproduce results, making your models more compliant and easier to debug. In addition, you can see what each team is working on and avoid duplicating expensive training runs. - Reproducible, compliant, and traceable models: Neptune.AI produces machine-readable logs that make it easy to track the lineage of your models. This helps you know who trained a model, on what data, and with what settings. This information is essential for regulatory compliance. - Integrations: Neptune.AI integrates with over 25 different tools, making it easy to get started. You can use the integrations to pipe your data directly into Neptune.AI or to output your results in a variety of formats. In addition, you can use it with popular data science frameworks such as TensorFlow, PyTorch, and scikit-learn. - Keeps track of all the important details about your experiments - Tracks numerous experiments on a single platform - Helps you to identify under-performing models quickly - Saves computing resources - Integrates with numerous data science tools - Fast and reliable - The user interface needs some improvement. Neptune.AI offers four pricing tiers as follows: - Individual: Free for one member and includes a free quota of 200 monitoring hours per month and 100GB of metadata storage. Usage above the free quota is charged. - Team: Costs $49 per month with a 14-day free trial. This plan allows unlimited members and has a free quota of 200 monitoring hours per month and 100GB of metadata storage. Usage above the free quota is charged. This plan also comes with email and chat support. - Scale: With this tier, you have the option of SaaS (software as a service) or hosting on your infrastructure (annual billing). Pricing starts at $499 per month and includes unlimited members, custom metadata storage, custom monitoring hours quota, service accounts for CI workflows, single sign-on (SSO), onboarding support, and a service-level agreement (SLA). - Enterprise: This plan is hosted on your infrastructure. Pricing starts at $1,499 per month (billed annually) and includes unlimited members, Lightweight Directory Access Protocol (LDAP) or SSO, an SLA, installation support, and team onboarding. Kubeflow: Best for model deployment/serving Kubeflow is an open-source platform for deploying and serving ML models. Google created it as the machine learning toolkit for Kubernetes, and it is currently maintained by the Kubeflow community. - Easy model deployment: Kubeflow makes it easy to deploy your models in various formats, including Jupyter notebooks, Docker images, and TensorFlow models. You can deploy them on your local machine, in a cloud provider, or on a Kubernetes cluster. - Seamless integration with Kubernetes: Kubeflow integrates with Kubernetes to provide an end-to-end ML solution. You can use Kubernetes to manage your resources, deploy your models, and track your training jobs. - Flexible architecture: Kubeflow is designed to be flexible and scalable. You can use it with various programming languages, data processing frameworks, and cloud providers such as AWS, Azure, Google Cloud, Canonical, IBM cloud, and many more. 
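To make the pipeline and deployment workflow concrete, here is a minimal, hypothetical Kubeflow Pipelines sketch. It is written assuming the KFP v2 Python SDK; decorator names and signatures differ between SDK versions, so treat it as the shape of a pipeline rather than a drop-in example.

```python
from kfp import dsl, compiler

@dsl.component(base_image="python:3.10")
def train_model(learning_rate: float) -> float:
    """Toy training step standing in for a real training job."""
    # Pretend "accuracy" improves as the learning rate shrinks.
    return 1.0 - learning_rate

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    # Wire the single component into the pipeline graph.
    train_model(learning_rate=learning_rate)

if __name__ == "__main__":
    # Compile to a YAML definition that a Kubeflow Pipelines instance can run.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```

With that sketch in mind, here are the commonly cited pros and cons: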
- Easy to install and use - Supports a variety of programming languages - Integrates well with Kubernetes at the back end - Flexible and scalable architecture - Follows the best practices of MLOps and containerization - Easy to automate a workflow once it is properly defined - Good Python SDK to design pipeline - Displays all logs - An initial steep learning curve - Poor documentation Databricks Lakehouse: Best end-to-end MLOPs platform Databricks is a company that offers a platform for data analytics, machine learning, and artificial intelligence. It was founded in 2013 by the creators of Apache Spark. And over 5,000 businesses in more than 100 countries—including Nationwide, Comcast, Condé Nast, H&M, and more than 40% of the Fortune 500—use Databricks for data engineering, machine learning, and analytics. Databricks Machine Learning, built on an open lake house design, empowers ML teams to prepare and process data while speeding up cross-team collaboration and standardizing the full ML lifecycle from exploration to production. - Collaborative notebooks: Databricks notebooks allow data scientists to share code, results, and insights in a single place. They can be used for data exploration, pre-processing, feature engineering, model building, validation and tuning, and deployment. - Machine learning runtime: The Databricks runtime is a managed environment for running ML jobs. It provides a reproducible, scalable, and secure environment for training and deploying models. - Feature Store: The Feature Store is a repository of features used to build ML models. It contains a wide variety of features, including text data, images, time series, and SQL tables. In addition, you can use the Feature Store to create custom features or use predefined features. - AutoML: AutoML is a feature of the Databricks runtime that automates building ML models. It uses a combination of techniques, including automated feature extraction, model selection, and hyperparameter tuning to build optimized models for performance. - Managed MLflow: MLflow is an open-source platform for managing the ML lifecycle. It provides a common interface for tracking data, models, and runs as well as APIs and toolkits for deploying and monitoring models. - Model Registry: The Model Registry is a repository of machine learning models. You can use it to store and share models, track versions, and compare models. - Repos: Allows engineers to follow Git workflows in Databricks. This enables engineers to take advantage of automated CI/CD (continuous integration and continuous delivery) workflows and code portability. - Explainable AI: Databricks uses Explainable AI to help detect any biases in the model. This ensures your ML models are understandable, trustworthy, and transparent. - A unified approach simplifies the data stack and eliminates the data silos that usually separate and complicate data science, business intelligence, data engineering, analytics, and machine learning. - Databricks is built on open source and open standards, which maximizes flexibility. - The platform integrates well with a variety of services. - Good community support. - Frequent release of new features. - User-friendly user interface. - Some improvements are needed in the documentation, for example, using MLflow within existing codebases. Databricks offers a 14-day full trial if using your own cloud. There is also the option of a lightweight trial hosted by Databricks. 
Pricing is based on compute usage and varies based on your cloud service provider and Geographic region. Getting Started with MLOPS MLOps is the future of machine learning, and it brings a host of benefits to organizations looking to deliver high-quality models continuously. It also offers many other benefits to organizations, including improved collaboration between data scientists and developers, faster time-to-market for new models, and increased model accuracy. If you’re looking to get started with MLOps, the tools above are a good place to start. Also read: Best Machine Learning Software in 2022
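As a closing illustration of the experiment tracking workflow described earlier, here is a minimal sketch using the open-source MLflow tracking API, the same MLflow that Databricks offers in managed form. The parameter and metric values are made up for the example; in a real project they would come from your training run.

```python
import mlflow

# Hypothetical hyperparameters and metrics; in practice these come from
# the training code whose runs you want to compare later.
params = {"learning_rate": 0.01, "n_estimators": 200}
metrics = {"accuracy": 0.93, "f1": 0.91}

mlflow.set_experiment("demo-experiment")

with mlflow.start_run(run_name="baseline"):
    # Log the configuration that produced this run...
    mlflow.log_params(params)
    # ...and the results, so runs can be compared and reproduced later.
    mlflow.log_metrics(metrics)
```

Patterns like this are what make runs comparable, reproducible, and traceable back to the exact configuration that produced them.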
<urn:uuid:91f651c1-4c86-4da0-9a1e-261dc260dad0>
CC-MAIN-2022-40
https://www.itbusinessedge.com/development/mlops-tools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00623.warc.gz
en
0.905292
3,764
3.421875
3
A Historic Look at Silicon Valley In the 90’s, there was the internet. Or rather, the dawn of the internet, which came with it the need for data facilities to house increasingly complex websites, applications and services. Laying to the south of San Francisco, Silicon Valley and more specifically the Santa Clara valley, was home to some of the world’s original technology innovators. The explosion of development that took place in this region eventually led to an increasing number of multi-tenant carrier hotels and internet exchanges. Data Center Frontier detailed the evolution of the Silicon Valley data center market, in a recently published special report. They noted when this era of innovation gave way to the dot-com boom, a new industry emerged of colocation providers and hosting companies, who began building their own data center facilities around Silicon Valley. “Data center requirements in Silicon Valley typically originate from companies already located in the region, often representing the primary West Coast footprint for larger web infrastructures. In many cases, companies see data center space in Silicon Valley as a strategic imperative that outweighs other factors,” Data Center Frontier, Special Report, Silicon Valley Data Center Market. Growth and Capacity in California’s Northern Corridor Silicon Valley is the second largest data center region in the United States, rivaling “Data Center Alley” in Ashburn, Virginia, as the world’s most concentrated data center area. The region has some notable differences from other top data center hubs, with a unique set of value propositions. In spite of high costs of both real estate and power, risk of earthquakes and other environmental factors such as wildfires and water shortages, Silicon Valley, and more specifically Santa Clara, continues to attract an ever-increasing interest from cloud providers and data center users/ operators. This is due to an attractive proximity to key industry players and technology enterprises, the continued cloud boom, and recently an explosion of high-density computing requirements as a result of the adoption of artificial intelligence and machine learning. Silicon Valley has entered a new phase of advancement, with accelerated growth projected over the upcoming years. Santa Clara is home to nearly 40 data centers across 18 square miles, with new colocation providers and increased construction already underway in the market. According to research from datacenterHawk, as of 2019, the Silicon Valley Data Center Market was home to 2.9 million square feet (SF) of commissioned data center space, representing 411 megawatts (MW) of commissioned power. At last report, there was only 42 MW of power still available, in a market that has absorbed approximately 40 MW annually. There is 498 MW of new capacity planned, and this development will continue over the next 2 – 3 years in an effort for suppliers to meet market demands which continue to be driven up largely by the recent explosion of cloud compute deployments. In the absence of available capacity, a robust leasing and subleasing market has emerged, which will continue while development catches up with demand. Potholes on the Digital Highway In a perplexing paradox, the telecommunications networks in and around the Bay Area have not seen material investment or upgrades in the last 20 years. California, home to some of the world’s greatest technological innovators, relies on fiber networks that are much closer to the end of their useful life than the beginning. 
Meanwhile, the world demands more of those connections than ever before – and that demand shows no sign of abating. Many companies now consider data to be their most valuable asset. When that data travels across older or low-quality fiber cables, the risk of it getting distorted becomes higher. Network engineers will attempt to overcome this quality-deficit by investing in more forgiving and consequently more expensive hardware. Yet as bandwidth demand grows this can become a vicious cycle. In order to support the promise of this data-driven future, infrastructure investment is required. This includes rethinking fiber networks from the ground up. “Twenty years ago, fiber networks were built to connect Central Offices (legacy hubs from the former Bell System) and multi-tenant office buildings. Over time, and successive rounds of elective surgery, the carriers operating these networks are finding they lack the integrity required by today’s data-intensive enterprise,” said Jim Nolte, CEO of Bandwidth Infrastructure Group. What does this mean for the Industry? All of the digital use cases being dreamed about today rely on data capture, storage and analysis which will have to take place in some kind of a data center. Or just as likely, spread across multiple data centers. Maintaining the integrity of those digital transactions will require a resilient, state-of-the-art fiber network to ensure data arrives promptly and uncorrupted. “We knew all along that being able to meet and exceed the network specifications of today’s data-intensive businesses would require a brand new, purpose-built fiber network,” Nolte added. “We are investing in support of productivity and a bright digital future made possible by modern infrastructure like that which is provided by Bandwidth IG.” Learn more about how Bandwidth IG is working to upgrade and expand available network infrastructure in Northern California.
<urn:uuid:cf4ef044-f4a1-4eef-b4ef-64325cbfc7cc>
CC-MAIN-2022-40
https://bandwidthig.com/bandwidth-ig-the-future-of-data-in-silicon-valley/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00623.warc.gz
en
0.948156
1,069
2.75
3